|Developers:||Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others|
|Version:||freely/publicly available: UNIX V7 (1979)|
|legally protected: SCO OpenServer 6.0 (June 2005), SCO UnixWare 7.1.4 (June 2004)|
|License:||until 1981: none (free)|
|from 1981: proprietary (AT&T, Novell, SCO, SCO Group)|
UNIX [ˈjuːnɪks] is a multi-user operating system. It was developed at the beginning of the 1970s at Bell Laboratories to support software development. In general usage, Unix designates operating systems that either originate from the AT&T Unix system (originally developed at Bell Laboratories) of the 1970s or implement its concepts.
Since UNIX is a registered trademark of The Open Group, only certified systems may bear the name UNIX. Nevertheless, operating systems such as Linux are also counted as part of the Unix family. In the technical literature, Unix is usually used as the name for Unix-like systems, while UNIX (in capital letters or small caps) marks certified systems.
Among all these systems, which can be divided into Unix derivatives and Unix-like operating systems, are for example the BSD systems, Mac OS X, HP-UX, AIX, IRIX and Solaris. Some other systems such as GNU, Linux or QNX are not Unix derivatives in the historical sense, since they are not based on the original Unix source code but were developed independently. BSD was originally based on Bell Labs source code, but this was completely removed by the mid-1990s.
The Unix kernel alone has access to the hardware via device drivers, and it manages processes. It also provides the file system and, in modern variants, additionally the network protocol stack. System calls from processes serve to start and control further processes (system calls fork, exec) and to communicate with the file system. Accesses to device drivers are mapped onto accesses to special files in the file system, so that files and devices look largely the same from the point of view of processes and thus of application programs (system calls open, read, write, …). A multiplicity of programs, including a print spooling system and a text typesetting program (troff), complete the system.
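A minimal C sketch of the process-control calls named here (fork, exec, and the related wait) is shown below; the program run in the child ("date") is just an illustrative choice, not anything prescribed by Unix.

```c
/* Minimal sketch of Unix process control: fork a child, replace its
   image with another program via exec, and wait for it to finish. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                        /* duplicate the current process */
    if (pid == 0) {
        execlp("date", "date", (char *)NULL);  /* child becomes "date" */
        _exit(127);                            /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);                     /* parent waits for the child */
    return 0;
}
```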
The file system is organized as a hierarchical directory tree with arbitrary subdirectories, at the time a new concept that is taken for granted everywhere today. The root directory of this hierarchy is the directory „/“. One of the outstanding basic concepts of UNIX is to map floppy and CD drives, further hard disks of one's own or of remote computers, terminals, tape units and other devices onto special files in the file system. „Everything is a file“ is a basic principle of Unix. This generalized notion of a file belongs to the nature of UNIX and makes a simple, uniform interface possible for the most diverse applications. In some Unix derivatives even processes and their characteristics are mapped onto files (proc file system).
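To illustrate this uniform file interface, the following C sketch reads from a character device and from the proc file system with exactly the same open/read/write calls used for ordinary files. The paths /dev/urandom and /proc/self/status are Linux-style examples and will differ on other Unix variants.

```c
/* Sketch of "everything is a file": a device node and a proc entry are
   opened and read with the same calls as any regular file.
   The paths are Linux-style examples only. */
#include <fcntl.h>
#include <unistd.h>

static void dump(const char *path, size_t limit) {
    char buf[64];
    int fd = open(path, O_RDONLY);       /* same call as for a regular file */
    if (fd < 0)
        return;
    ssize_t n = read(fd, buf, limit < sizeof buf ? limit : sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fd);
}

int main(void) {
    dump("/dev/urandom", 8);             /* a character device */
    dump("/proc/self/status", 64);       /* process state exposed as a file */
    return 0;
}
```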
The command interpreter, the shell — under Unix a normal process without special privileges — together with numerous standard commands gives the user unequalled simple input/output access to these files, as well as communication between processes over pipes.
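At the system-call level such a pipe looks roughly like the C sketch below, which connects two child processes the way the shell pipeline "ls | wc -l" would; it is an illustration of the mechanism, not code from any particular shell.

```c
/* Sketch of inter-process communication over a pipe: the C-level
   equivalent of the shell pipeline "ls | wc -l". */
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0)
        return 1;

    if (fork() == 0) {                   /* first child: producer */
        dup2(fd[1], STDOUT_FILENO);      /* stdout -> write end of the pipe */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }
    if (fork() == 0) {                   /* second child: consumer */
        dup2(fd[0], STDIN_FILENO);       /* stdin <- read end of the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }
    close(fd[0]); close(fd[1]);          /* parent keeps no pipe ends open */
    while (wait(NULL) > 0)               /* reap both children */
        ;
    return 0;
}
```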
A large collection of simple commands, the UNIX toolbox, can be combined with the help of the programming facilities of the command interpreter in such a way as to take over complicated tasks. The combinability of these largely standardized tools frequently makes it unnecessary to write specialized programs for one-off tasks or simple administration work, as is frequently the case in other operating systems.
Among the important characteristics of a typical Unix system are: high stability, multi-user operation, multitasking (meanwhile also multithreading), memory protection and virtual memory (first implemented in the BSD line), TCP/IP network support (likewise first in the BSD line), outstanding scripting capabilities, a fully developed shell, a multiplicity of tools (see Unix commands) and daemons. Operating systems of Unix workstations as well as Unix derivatives usually contain a graphical user interface based on X11.
Unix is historically closely linked with the programming language C — the two helped each other to their breakthrough, and C is still the preferred language on Unix systems today.
The name Unix
The system was originally called Unics (later shortened to Unix), an allusion to the Multics system. The name Unics was also readily interpreted as UNiplexed Information and Computing Service, but this is a later reinterpretation — neither Unics nor Unix nor UNIX are acronyms.
The discussion about which name is the more correct, UNIX or Unix, flares up again and again. Unix is the older name; UNIX as a name only emerged in 1974, for purely aesthetic reasons.
For more detailed information see History of Unix.
Ken Thompson produced the first version of Unix in 1969 in assembly language on the DEC PDP-7 as an alternative to Multics. As one of the first programs for the new operating system kernel, Thompson and Ritchie wrote the game Space Travel in order to sound out which interfaces the system would need. The system, completely reimplemented in C in 1972–1974, was distributed free of charge together with a C compiler to various universities — out of this grew the BSD line of Unix. AT&T finally tried to market Unix profitably, out of which the System V line of Unix developed. In the 1980s Unix became the dominant operating system at the universities, and there existed an abundance of the most diverse Unix derivatives, all of which descended in some form from the two main lines, so that a need for standardisation slowly developed.
Current distribution of rights
The rights to the source code lie, according to its own statement, with the SCO Group (though Novell partly denies this; see SCO v. Novell). The rights to the registered trademark, by contrast, lie with The Open Group.
In the 1980s each manufacturer changed and extended the system according to its own ideas. Versions with different capabilities, commands, command options and program libraries developed. Around 1985 the IEEE began to standardize, first of all, the interfaces for application programs. From this developed the IEEE 1003 standard, which at the suggestion of Richard Stallman is called POSIX. Today it consists of approximately 15 documents, which cover all aspects of Unix systems such as the command line interpreter (POSIX mandates the Korn shell), the Unix commands and their options, input/output, and more.
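As a small illustration of what these standardized interfaces look like in practice, the C sketch below queries the POSIX revision and one standardized limit at run time; it merely demonstrates the kind of interface POSIX specifies and assumes a POSIX-conformant C library.

```c
/* Sketch: querying the POSIX revision a system claims to implement,
   using only interfaces defined by the POSIX standard itself. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long ver = sysconf(_SC_VERSION);       /* e.g. 200809L for POSIX.1-2008 */
    long max_args = sysconf(_SC_ARG_MAX);  /* a standardized system limit */

    printf("_POSIX_VERSION reported at run time: %ld\n", ver);
    printf("ARG_MAX: %ld bytes\n", max_args);
    return 0;
}
```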
The prices the IEEE charges for the POSIX documentation are very high, and its publication is forbidden by copyright. In more recent times there has therefore been a tendency to fall back on the Single UNIX Specification standard of The Open Group. This standard is open, freely available on the Internet, and accepts suggestions from everyone.
Free Unix derivatives
Up to Unix V7, which appeared in 1979, the source code of Unix was distributed to universities against reimbursement of the copying and data-medium costs. Unix thereby had the character of a free, portable operating system. The code was used in lectures and publications and could be changed and supplemented according to one's own ideas. The University of California at Berkeley developed its own distribution with substantial extensions, the Berkeley Software Distribution (BSD).
In the early 1980s AT&T decided to market Unix commercially; from this time on, AT&T source code could no longer be made publicly accessible. Its use in lectures and the like also became impossible. High royalties were also levied on systems based on BSD, since part of their code came from AT&T.
The unavailability of the source code prompted Richard Stallman in 1983 to bring the GNU project ("GNU is Not Unix") into being. The goal of the project was a free, Unix-compatible system. By 1990, however, the project had developed all substantial parts — including the GNU C compiler — with the exception of the kernel.
In 1987 the instructional system Minix appeared, developed by Andrew S. Tanenbaum at the Vrije Universiteit Amsterdam. Minix was a Unix clone with a microkernel, C compiler, editor and many commands, and it ran on undemanding PC hardware. The source code was part of the scope of delivery. It was commercial, but because of its very low price it came very close to a free system. Like Unix before it, this system served many as a starting point for their own experiments.
In 1991 the student Linus Torvalds was working on a terminal emulator with which he wanted to access a university computer. Over time he added file system access and many other useful features, and soon he noticed that he had programmed more than a terminal emulator. He published the source code in the newsgroup comp.os.minix as an operating system that was to run on Intel 386 PCs. At first the project was to be called Freax, but because the overzealous administrator of the university FTP server simply assigned it „Linux“ as the login for its FTP repository, that name stuck. In the source code of version 0.01 of Linux the name Freax still occurs („Makefile for the FREAX kernel“).
The POSIX standard and the GNU project, which offered all the necessary tools such as compilers and a shell, pointed a suitable way forward. Torvalds used the Minix system and the GNU C compiler as a basis. He wrote a kernel, which he called Linux, and then ported the software tools and libraries of the GNU project to it. In combination with the Linux kernel, these tools offered the basis for a free, POSIX-conformant operating system. See also History of Linux.
In 1992, with 386BSD by Bill Jolitz, a further free system for 80386 processors appeared. It consisted of a patch for the free parts of the BSD distribution that did not come from AT&T and thereby formed a further free, very advanced operating system for Intel processors.
In 1994 Berkeley published with 4.4BSDLite the last version of its distribution, which had been cleared of AT&T source code. Together with 386BSD, this formed the basis for NetBSD, FreeBSD and, shortly thereafter, OpenBSD.
Since 2005, Solaris (version 10) has also been available free of charge in its respective current version. Solaris runs on 32-bit processors (x86) from AMD and Intel as well as on 64-bit systems with Sun's UltraSPARC and so-called x64 systems such as AMD's Opteron. For access to the sources and for cooperation, including extensions, it is available in the version OpenSolaris, which does not differ functionally from the binary version. Sun Microsystems requires a registration, however, and has its own license regulations, which deviate from the GPL.
The following compilation gives only a rough overview; only the most important systems are mentioned. Each has its own versions and its own development history.
|1969||UNICS||first version, Bell Laboratories|
|1970–75||UNIX V1–V5 time-sharing system||Bell Labs|
|1976||UNIX V6 (6th Edition)||Bell Labs|
|1977||first Berkeley Software Distribution (BSD)|
|1978||2BSD||second Berkeley Software Distribution|
|1979||UNIX V7 (7th Edition)||last version from Bell Labs with free source code|
|1980||UNIX 32V||port of UNIX V7 to VAX computers|
|1980||XENIX OS||Unix version of the company Microsoft, later SCO|
|1980||3BSD and 4BSD||Berkeley ports to VAX computers|
|1981||UNIX System III||first commercial version from Bell Labs|
|1982||SunOS 1.0||Unix version of the company Sun Microsystems|
|1983||start of the GNU project||(GNU: GNU is Not Unix)|
|1983||UNIX System V||Bell Labs|
|1983||Ultrix||Unix version of the company Digital Equipment Corporation (DEC)|
|1983||Sinix||Unix version of the company Siemens|
|1983||Coherent||Unix-like system of the Mark Williams Company|
|1984||start of the Mach microkernel project at Carnegie Mellon University (Pittsburgh)|
|1986||AIX 1.0||Unix version of the company IBM|
|1986||A/UX||Unix version of the company Apple|
|1986||HP-UX 1.0||Unix version of the company Hewlett-Packard (HP)|
|1987||Minix 1.0||Unix clone from the Vrije Universiteit Amsterdam|
|1988||IRIX||Unix version of the company Silicon Graphics|
|1989||SCO UNIX||Unix version that found wide distribution in the market|
|1990||OSF/1||Unix system of the Open Software Foundation|
|1991||4.3BSD Net/2||BSD version without AT&T code, but incomplete|
|1991||Linux||inspired by Minix; wide distribution|
|1992||Solaris 2.0||company Sun Microsystems|
|1992||386BSD||patch for 4.3BSD Net/2 for Intel processors|
|1992||UnixWare 1.0||Unix version by Univel (AT&T & Novell)|
|1993||UnixWare 1.1||Unix version of Novell|
|1994||4.4BSDEncumbered and 4.4BSDLite||the latter without Bell Labs code|
|1994||NetBSD 1.0||based on 4.4BSDLite|
|1994||FreeBSD 1.0||based on 4.3BSD Net/2 (shortly thereafter 2.0 on 4.4BSDLite)|
|1994||Tru64||successor to OSF/1|
|1995||SCO OpenServer 5 (V5.0.0)||successor to SCO UNIX and SCO Open Desktop (UNIX SVR3.2)|
|1995||OpenBSD||project forked from NetBSD|
|1995||UnixWare 2||SCO takes over the Unix business from Novell|
|1996||AT&T spins off Bell Labs into the company Lucent Technologies|
|2000||Darwin||company Apple, based on Mach and 4.4BSD|
|2004||SCO UnixWare 7.1.4||UNIX version of the alleged source-code rights holder SCO Group|
|2005||Solaris 10 (SunOS 5.10)||company Sun Microsystems|
|2005||SCO OpenServer 6.0||UNIX version of the alleged source-code rights holder SCO Group|
|Manufacturer:||Digital Equipment Corporation|
|Byte size:||8 bits (octet)|
|Address bus size:||32 bits|
|Peripheral bus:||Unibus, Massbus, Q-Bus, XMI, VAXBI|
|Architecture:||CISC, virtual memory|
|Operating systems:||VAX/VMS, Ultrix, BSD UNIX, VAXELN|
VAX was an instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC) in the mid-1970s. A 32-bit complex instruction set computer (CISC) ISA, it was designed to extend or replace DEC's various Programmed Data Processor (PDP) ISAs. The VAX name was also used by DEC for a family of computer systems based on this processor architecture.
The VAX architecture's primary features were virtual addressing (for example demand paged virtual memory) and its orthogonal instruction set. VAX has been perceived as the quintessential CISC ISA, with its very large number of addressing modes and machine instructions, including instructions for complex operations such as queue insertion or deletion and polynomial evaluation.
"VAX" is originally an acronym for Virtual Address eXtension, both because the VAX was seen as a 32-bit extension of the older 16-bit PDP-11 and because it was (after Prime Computer) an early adopter of virtual memory to manage this larger address space. Early versions of the VAX processor implemented a "compatibility mode" that emulated many of the PDP-11's instructions, and were in fact called VAX-11 to highlight this compatibility and the fact that VAX-11 was an outgrowth of the PDP-11 family. Later versions offloaded the compatibility mode and some of the less used CISC instructions to emulation in the operating system software.
The plural form of VAX is usually VAXes, but VAXen is also heard.
The "native" VAX operating system is DEC's VAX/VMS (renamed to OpenVMS in 1991 or 1992 when it was ported to Alpha, "branded" by the X/Open consortium, and modified to comply with POSIX standards). The VAX architecture and VMS operating system were "engineered concurrently" to take maximum advantage of each other, as was the initial implementation of the VAXcluster facility. Other VAX operating systems have included various releases of BSD UNIX up to 4.3BSD, Ultrix-32, VAXELN and Xinu. More recently, NetBSD and OpenBSD support various VAX models and some work has been done on porting Linux to the VAX architecture.
The first VAX model sold was the VAX-11/780, which was introduced on October 25, 1977 at the Digital Equipment Corporation's Annual Meeting of Shareholders. Bill Strecker, C. Gordon Bell's doctoral student at Carnegie-Mellon University, was responsible for the architecture. Many different models with different prices, performance levels, and capacities were subsequently created. VAX superminis were very popular in the early 1980s.
For a while the VAX-11/780 was used as a baseline in CPU benchmarks because its speed was about one MIPS. Ironically enough, though, the actual number of instructions executed in 1 second was about 500,000. One VAX MIPS was the speed of a VAX-11/780; a computer performing at 27 VAX MIPS would run the same program roughly 27 times as fast as the VAX-11/780. Within the Digital community the term VUP (VAX Unit of Performance) was the more common term, because MIPS do not compare well across different architectures. The related term cluster VUPs was informally used to describe the aggregate performance of a VAXcluster. The performance of the VAX-11/780 still serves as the baseline metric in the BRL-CAD Benchmark, a performance analysis suite included in the BRL-CAD solid modeling software distribution. The VAX-11/780 included a subordinate stand-alone LSI-11 computer that performed booting and diagnostic functions for the parent computer. This was dropped from subsequent VAX models. Enterprising VAX-11/780 users could therefore run programs under three different Digital Equipment Corporation operating systems: VMS, RSX-11M (using compatibility mode), and RT-11 (using this subordinate computer).
The VAX went through many different implementations. The original VAX was implemented in TTL and filled more than one rack for a single CPU. CPU implementations that consisted of multiple ECL gate array or macrocell array chips included the VAX 8600 and 8800 superminis and finally the VAX 9000 mainframe class machines. CPU implementations that consisted of multiple MOSFET custom chips included the 8100 and 8200 class machines.
The MicroVAX I represented a major transition within the VAX family. At the time of its design, it was not yet possible to implement the full VAX architecture as a single VLSI chip (or even a few VLSI chips as was later done with the V-11 CPU of the VAX 8200/8300). Instead, the MicroVAX I was the first VAX implementation to move most of the complexity of the VAX instruction set into emulation software, preserving just the core instructions in hardware. This new partitioning substantially reduced the amount of microcode required and was referred to as the "MicroVAX" architecture. In the MicroVAX I, the ALU and registers were implemented as a single gate-array chip while the rest of the machine control was conventional logic.
A full VLSI (microprocessor) implementation of the MicroVAX architecture then arrived with the MicroVAX II's 78032 (or DC333) CPU and 78132 (DC335) FPU. The 78032 was the first microprocessor with an on-board memory management unit. The MicroVAX II was based on a single, quad-sized processor board which carried the processor chips and ran the MicroVMS or Ultrix-32 operating systems. The machine featured 1 MB of on-board memory and a Q22-bus interface with DMA transfers. The MicroVAX II was succeeded by many further MicroVAX models with much improved performance and memory.
Further VLSI VAX processors followed in the form of the V-11, CVAX, SOC ("System On Chip", a single-chip CVAX), Rigel, Mariah and NVAX implementations. The VAX microprocessors extended the architecture to inexpensive workstations and later also supplanted the high-end VAX models. This wide range of platforms (mainframe to workstation) using one architecture was unique in the computer industry at that time. Sundry graphics were etched onto the CVAX microprocessor die. The phrase CVAX... when you care enough to steal the very best was etched in broken Russian as a play on a Hallmark Cards slogan, intended as a message to Soviet engineers who were known to be both purloining DEC computers for military applications and reverse engineering their chip designs.
The VAX architecture was eventually superseded by RISC technology. In 1989 DEC introduced a range of workstations and servers that ran Ultrix, the DECstation and DECsystem respectively, based on processors that implemented the MIPS architecture. In 1992 DEC introduced their own RISC instruction set architecture, the Alpha AXP (later renamed Alpha), and their own Alpha-based microprocessor, the DECchip 21064, a high performance 64-bit design capable of running OpenVMS.
In August 2000, Compaq announced that the remaining VAX models would be discontinued by the end of the year. By 2005 all manufacturing of VAX computers had ceased, but old systems remain in widespread use.
The SRI CHARON-VAX and SIMH software-based VAX emulators remain available.
|Designer||Digital Equipment Corporation|
|Encoding||Variable (1 to 56 bytes)|
|Branching||Compare and branch|
The VAX virtual memory is divided into four sections, each of which is one gigabyte in size:
|P0||0x00000000 - 0x3fffffff|
|P1||0x40000000 - 0x7fffffff|
|S0||0x80000000 - 0xbfffffff|
|S1||0xc0000000 - 0xffffffff|
For VMS, P0 was used for user process space, P1 for process stack, S0 for the operating system, and S1 was reserved.
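As a small illustration of this layout, the following C sketch classifies a 32-bit virtual address into one of the four regions listed above using the two most significant address bits; the sample addresses are arbitrary.

```c
/* Sketch: classifying a VAX virtual address into the P0/P1/S0/S1
   regions listed above.  The region is selected by the two most
   significant bits of the 32-bit address; sample values are arbitrary. */
#include <stdint.h>
#include <stdio.h>

static const char *vax_region(uint32_t addr) {
    switch (addr >> 30) {                /* top two address bits */
    case 0:  return "P0 (user program region)";
    case 1:  return "P1 (user stack region)";
    case 2:  return "S0 (system region)";
    default: return "S1 (reserved)";
    }
}

int main(void) {
    uint32_t samples[] = { 0x00001000u, 0x7ffff000u, 0x80002000u, 0xfffffff0u };
    for (int i = 0; i < 4; i++)
        printf("0x%08x -> %s\n", (unsigned)samples[i], vax_region(samples[i]));
    return 0;
}
```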
The VAX has four hardware implemented privilege modes (see Processor Status Register):
|0||Kernel||OS Kernel||Highest Privilege Level|
|1||Executive||Record Management Services (file system)||
|2||Supervisor||Command language interpreter (DCL)||
|3||User||Normal Programs||Lowest Privilege Level|
|31||PDP-11 compatibility mode|
|29:28||MBZ (must be zero)|
|27||first part done (interrupted instruction)|
|25:24||current privilege mode|
|23:22||previous privilege mode|
|21||MBZ (must be zero)|
|20:16||IPL (interrupt priority level)|
|15:8||MBZ (must be zero)|
|7||decimal overflow trap enable|
|6||floating-point underflow trap enable|
|5||integer overflow trap enable|
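The rows above describe bit fields of the VAX Processor Status Longword (PSL). The C sketch below shows how such fields can be extracted from a 32-bit value with shifts and masks; it decodes only the fields listed in the table, and the sample PSL value is arbitrary.

```c
/* Sketch: extracting selected Processor Status Longword (PSL) fields
   from a 32-bit value.  Only fields listed in the table above are
   decoded; the sample value is arbitrary. */
#include <stdint.h>
#include <stdio.h>

static unsigned field(uint32_t psl, int hi, int lo) {
    /* shift the field down and mask off its width */
    return (unsigned)((psl >> lo) & ((1u << (hi - lo + 1)) - 1u));
}

int main(void) {
    uint32_t psl = 0x041f0060u;              /* arbitrary example value */

    printf("compatibility mode     : %u\n", field(psl, 31, 31));
    printf("first part done        : %u\n", field(psl, 27, 27));
    printf("current privilege mode : %u\n", field(psl, 25, 24));
    printf("previous privilege mode: %u\n", field(psl, 23, 22));
    printf("IPL                    : %u\n", field(psl, 20, 16));
    printf("decimal overflow enable: %u\n", field(psl, 7, 7));
    printf("fp underflow enable    : %u\n", field(psl, 6, 6));
    printf("integer overflow enable: %u\n", field(psl, 5, 5));
    return 0;
}
```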
The first VAX-based system was the VAX-11/780, a member of the VAX-11 family. The high-end VAX 8600 replaced the VAX-11/780 in October 1984 and was joined by the entry-level MicroVAX minicomputers and the VAXstation workstations in the mid-1980s. The MicroVAX was superseded by the VAX 4000, the VAX 8000 was superseded by the VAX 6000 in the late 1980s and the mainframe-class VAX 9000 was introduced. In the early 1990s, the fault-tolerant VAXft was introduced, as were the Alpha compatible VAX 7000/10000. A variant of various VAX-based systems were sold as the VAXserver.
Cancelled systems include the "BVAX", a high-end ECL-based VAX, and two other ECL-based VAXen: "Argonaut" and "Raven". A VAX known as "Gemini" was also cancelled, which was a fall-back in case the LSI-based Scorpio failed. It never shipped.
A number of VAX clones, both authorized and unauthorized, were produced. Examples include:
The SPEC-1 VAX is the VAX-11/780 that was used as the benchmark for the speed of each of DEC's VAXes. It is now on display at the Department of Computer Science & Engineering, University of Washington, Seattle, Washington.
This information is produced and provided by the National Cancer Institute (NCI). The information in this topic may have changed since it was written. For the most current information, contact the National Cancer Institute via the Internet web site at http://cancer.gov or call 1-800-4-CANCER.
Fortunately, cancer in children and adolescents is rare, although the overall incidence of childhood cancer has been slowly increasing since 1975. Children and adolescents with cancer should be referred to medical centers that have a multidisciplinary team of cancer specialists with experience treating the cancers that occur during childhood and adolescence. This multidisciplinary team approach incorporates the skills of the primary care physician, pediatric surgical subspecialists, radiation oncologists, pediatric medical oncologists/hematologists, rehabilitation specialists, pediatric nurse specialists, social workers, and others to ensure that children receive treatment, supportive care, and rehabilitation that will achieve optimal survival and quality of life. (Refer to the PDQ Supportive and Palliative Care summaries for specific information about supportive care for children and adolescents with cancer.)
Guidelines for pediatric cancer centers and their role in the treatment of pediatric patients with cancer have been outlined by the American Academy of Pediatrics. At these pediatric cancer centers, clinical trials are available for most types of cancer that occur in children and adolescents, and the opportunity to participate in these trials is offered to most patients/families. Clinical trials for children and adolescents diagnosed with cancer are generally designed to compare potentially better therapy with therapy that is currently accepted as standard. Most of the progress made in identifying curative therapy for childhood cancers has been achieved through clinical trials. Information about ongoing clinical trials is available from the NCI Web site.
Dramatic improvements in survival have been achieved for children and adolescents with cancer. Between 1975 and 2002, childhood cancer mortality has decreased by more than 50%. Childhood and adolescent cancer survivors require close follow-up because cancer therapy side effects may persist or develop months or years after treatment. (Refer to the PDQ summary on Late Effects of Treatment for Childhood Cancer for specific information about the incidence, type, and monitoring of late effects in childhood and adolescent cancer survivors.)
Childhood cancer is a rare disease with less than 13,000 cases diagnosed before the age of 20 years each year in the United States. The Rare Disease Act of 2002 defines a rare disease as one that affects populations smaller than 200,000 persons in the United States and thus, by definition, all pediatric cancers would be considered rare. The designation of a pediatric rare tumor is not uniform; for example, the Italian cooperative project on rare pediatric tumors (Tumori Rari in Eta Pediatric [TREP]) defines a pediatric rare tumor as one with an incidence of less than two per 1 million population per year and is not the subject of specific clinical trials. Yet, this definition excludes common histologic subtypes such as melanoma and thyroid carcinoma, both of which have an incidence rate in excess of five per 1 million per year.
Most diagnoses included in this summary of rare cancers are in the subset of malignancies listed in the International Classification of Childhood Cancer (ICCC) subgroup XI, including thyroid cancer, melanoma and nonmelanoma skin cancers, as well as multiple types of carcinomas (e.g., adrenocortical carcinoma, nasopharyngeal carcinoma, and most adult-type carcinomas such as breast cancer, colorectal cancer, etc.). These diagnoses account for about 4% of cancers diagnosed in children aged 0 to 14 years, compared with about 20% of cancers diagnosed for adolescents aged 15 to 19 years (see Figure 1). The majority of cancers within subgroup XI are either melanomas or thyroid cancer, with the remaining subgroup XI cancer types accounting for only 1.3% of cancers in children aged 0 to 14 years and 5.3% of cancers within adolescents aged 15 to 19 years. The very low incidence of patients with any individual diagnosis, as well as their age distribution, makes these rare cancers extremely challenging to study.
Figure 1. Cancer incidence rates for patients aged 0 to 14 years and 15 to 19 years in the Surveillance Epidemiology and End Results (SEER) program from 2005 to 2009. Incidence rates are age-adjusted and age-specific and are shown for leukemia, lymphoma, central nervous system (CNS) tumors, neuroblastoma, retinoblastoma, renal tumors, hepatic tumors, bone tumors, soft tissue tumors, germ cell tumors, carcinomas and melanomas, and other cancers. Retinoblastoma occurs infrequently in adolescents aged 15 to 19 years.
Several initiatives to study rare pediatric cancers have been developed by the Children's Oncology Group (COG) as well as international groups. The Gesellschaft für Pädiatrische Onkologie und Hämatologie (GPOH) rare tumor project was founded in Germany in 2006. The TREP project was launched in Italy in 2000, and the Polish Pediatric Rare Tumor Study Group was launched in 2002. Within the COG, efforts have concentrated on increasing accrual to the COG registry and the rare tumor bank, as well as developing single-arm clinical trials and increasing cooperation with adult cooperative group trials. The accomplishments and challenges of this initiative are described in detail.
The tumors discussed in this summary are very diverse; they are arranged in descending anatomic order, from infrequent tumors of the head and neck to rare tumors of the urogenital tract and skin. All of these cancers are rare enough that most pediatric hospitals might see less than a handful of some histologies in several years. The majority of the histologies described here occur more frequently in adults. Information about these tumors may also be found in sources relevant to adults with cancer.
|1.||Smith MA, Seibel NL, Altekruse SF, et al.: Outcomes for children and adolescents with cancer: challenges for the twenty-first century. J Clin Oncol 28 (15): 2625-34, 2010.|
|2.||Guidelines for the pediatric cancer center and role of such centers in diagnosis and treatment. American Academy of Pediatrics Section Statement Section on Hematology/Oncology. Pediatrics 99 (1): 139-41, 1997.|
|3.||Ries LA, Smith MA, Gurney JG, et al., eds.: Cancer incidence and survival among children and adolescents: United States SEER Program 1975-1995. Bethesda, Md: National Cancer Institute, SEER Program, 1999. NIH Pub.No. 99-4649. Also available online. Last accessed October 23, 2012.|
|4.||Ferrari A, Bisogno G, De Salvo GL, et al.: The challenge of very rare tumours in childhood: the Italian TREP project. Eur J Cancer 43 (4): 654-9, 2007.|
|5.||Howlader N, Noone AM, Krapcho M, et al., eds.: Childhood cancer by the ICCC. In: Howlader N, Noone AM, Krapcho M, et al., eds.: SEER Cancer Statistics Review, 1975-2009 (Vintage 2009 Populations). Bethesda, Md: National Cancer Institute, 2012, Section 29. Also available online. Last accessed October 31, 2012.|
|6.||Brecht IB, Graf N, Schweinitz D, et al.: Networking for children and adolescents with very rare tumors: foundation of the GPOH Pediatric Rare Tumor Group. Klin Padiatr 221 (3): 181-5, 2009 May-Jun.|
|7.||Balcerska A, Godziński J, Bień E, et al.: [Rare tumours--are they really rare in the Polish children population?]. Przegl Lek 61 (Suppl 2): 57-61, 2004.|
|8.||Pappo AS, Krailo M, Chen Z, et al.: Infrequent tumor initiative of the Children's Oncology Group: initial lessons learned and their impact on future plans. J Clin Oncol 28 (33): 5011-6, 2010.|
Childhood sarcomas often occur in the head and neck area and they are described in other sections. Unusual pediatric head and neck cancers include nasopharyngeal carcinoma, esthesioneuroblastoma, thyroid tumors, oral cancer, salivary gland cancer, laryngeal carcinoma, papillomatosis, and respiratory tract carcinoma involving the NUT gene on chromosome 15. The prognosis, diagnosis, classification, and treatment of these head and neck cancers are discussed below. It must be emphasized that these cancers are seen very infrequently in patients younger than 15 years, and most of the evidence is derived from case series.
Nasopharyngeal carcinoma arises in the lining of the nasal cavity and pharynx.[2,3] This tumor accounts for about one-third of all cancers of the upper airways. Nasopharyngeal carcinoma is very uncommon in children younger than 10 years but increases in incidence to 0.8 and 1.3 per 1 million per year in children aged 10 to 14 years and in children aged 15 to 19 years, respectively.[4,5] The incidence of nasopharyngeal carcinoma is characterized by racial and geographic variations, with an endemic distribution among well-defined ethnic groups, such as inhabitants of some areas in North Africa and Southeast Asia. In the United States, nasopharyngeal carcinoma is overrepresented in black children when compared with other malignancies.
Nasopharyngeal carcinoma is strongly associated with Epstein-Barr virus (EBV) infection. In addition to the serological evidence of infection, EBV DNA is present as a monoclonal episome in the nasopharyngeal carcinoma cells, and tumor cells can have EBV antigens on their cell surface. The circulating levels of EBV DNA, as well as serologic documentation of EBV infection, may aid in the diagnosis.
Three histologic subtypes of nasopharyngeal carcinoma are recognized by the World Health Organization (WHO). Type 1 is squamous cell carcinoma; type 2 is nonkeratinizing squamous cell carcinoma; and type 3 is undifferentiated carcinoma. Children with nasopharyngeal carcinoma are more likely to have WHO type 2 or type 3 disease.
Nasopharyngeal carcinoma commonly presents as nosebleeding, nasal congestion and obstruction, or otitis media. Given the rich lymphatic drainage of the nasopharynx, bilateral cervical lymphadenopathies are often the first sign of disease. The tumor spreads locally to adjacent areas of the oropharynx and may invade the skull base, resulting in cranial nerve palsy or difficulty with movements of the jaw (trismus). Distant metastatic sites may include the bones, lungs, and liver.
Diagnostic tests should determine the extent of the primary tumor and whether there are metastases. Visualization of the nasopharynx by an ear-nose-throat specialist using nasal endoscopy, examination by a neurologist, and magnetic resonance imaging of the head and neck can be used to determine the extent of the primary tumor. A diagnosis can be made from a biopsy of the primary tumor or of enlarged lymph nodes of the neck. Nasopharyngeal carcinomas must be distinguished from all other cancers that can present with enlarged lymph nodes and from other types of cancer in the head and neck area. Thus, diseases such as thyroid cancer, rhabdomyosarcoma, non-Hodgkin lymphoma, Hodgkin lymphoma, and Burkitt lymphoma must be considered, as should benign conditions such as nasal angiofibroma, which usually presents with epistaxis in adolescent males, and infectious lymphadenitis. Evaluation of the chest and abdomen by computed tomography and bone scan should also be performed to determine whether there is metastatic disease.
Tumor staging is performed utilizing the tumor-node-metastasis classification system of the American Joint Committee on Cancer (AJCC). The majority (>90%) of children and adolescents with nasopharyngeal carcinoma present with advanced disease (stage III/IV or T3/T4).[6,10,11] Metastatic disease at diagnosis is uncommon (stage IVC). A retrospective analysis of data from the Surveillance Epidemiology and End Results (SEER) program reported that patients younger than 20 years had a higher incidence of advanced-stage disease than did older patients, higher risk of developing a second malignancy, and a superior outcome after controlling for stage.
The overall survival of children and adolescents with nasopharyngeal carcinoma has improved over the last four decades; with state-of-the-art multimodal treatment, 5-year survival rates are in excess of 80%.[5,6,11,12] However, the intensive use of chemotherapy and radiation therapy results in significant acute and long-term morbidities.[6,11]
Treatment of nasopharyngeal carcinoma is multimodal:
|1.||Combined-modality therapy with chemotherapy and radiation: High-dose radiation therapy alone has had a role in the management of low-stage nasopharyngeal carcinoma, but studies in both children and adults show that combined-modality therapy with chemotherapy and radiation is the most effective way to treat nasopharyngeal carcinoma.[6,11,12,13,14,15,16]|
|2.||Surgery: Surgery has a limited role in the management of nasopharyngeal carcinoma because the disease is usually considered unresectable due to extensive local spread.|
|3.||EBV-specific cytotoxic T-lymphocytes: The use of EBV-specific cytotoxic T-lymphocytes has been shown to be a very promising approach, with minimal toxicity and evidence of significant antitumor activity in patients with relapsed or refractory nasopharyngeal carcinoma.|
(Refer to the PDQ summary on Nasopharyngeal Cancer Treatment for more information.)
Esthesioneuroblastoma (olfactory neuroblastoma) is a small round-cell tumor arising from the nasal neuroepithelium that is distinct from primitive neuroectodermal tumors.[23,24,25,26] In children, esthesioneuroblastoma is a very rare malignancy with an estimated incidence of 0.1 per 100,000 children younger than 15 years. Despite its rarity, esthesioneuroblastoma is the most common cancer of the nasal cavity in pediatric patients, accounting for 28% of all cases.[27,28] In a series of 511 patients from the SEER database, there was a slight male predominance, the mean age at presentation was 53 years, and only 8% of cases were younger than 25 years. Most patients were white (81%) and the most common tumor sites were the nasal cavity (72%) and ethmoid sinus (13%).
Most children present in the second decade of life with symptoms that include nasal obstruction, epistaxis, hyposmia, exophthalmos, or a nasopharyngeal mass, which may have local extension into the orbits, sinuses, or frontal lobe. Most patients present with advanced-stage disease (Kadish stages B and C).[27,28]
A meta-analysis of 26 studies with a total of 390 patients, largely adults with esthesioneuroblastoma, indicates that higher histopathologic grade and metastases to the cervical lymph nodes may correlate with adverse prognostic factors.
The mainstay of treatment has been surgery and radiation. Newer techniques such as endoscopic sinus surgery may offer similar short-term outcomes to open craniofacial resection. Other techniques such as stereotactic radiosurgery and proton-beam therapy (charged-particle radiation therapy) may also play a role in the management of this tumor. Nodal metastases are seen in about 5% of patients. Routine neck dissection and nodal exploration are not indicated in the absence of clinical or radiological evidence of disease. Management of cervical lymph node metastases has been addressed in a review article.
Reports indicate the increasing use of neoadjuvant or adjuvant chemotherapy in patients with advanced-stage disease with promising results.[23,24,34,35,36]; [Level of evidence: 3iii] Chemotherapy regimens that have been used with efficacy include etoposide with ifosfamide and cisplatin; vincristine, actinomycin D, and cyclophosphamide with and without doxorubicin; ifosfamide/etoposide; cisplatin plus etoposide or doxorubicin; and irinotecan plus docetaxel.[Level of evidence: 3iiA]
The annual incidence of thyroid cancers is low in children younger than 15 years (2.0 per 1 million people), accounting for approximately 1.5% of all cancers in this age group. Thyroid cancer incidence is higher in children aged 15 to 19 years (17.6 per 1 million people), and it accounts for approximately 8% of cancers arising in this older age group. Most thyroid carcinomas occur in girls.
There is an excessive frequency of thyroid adenoma and carcinoma in patients who previously received radiation to the neck.[41,42] In the decade following the Chernobyl nuclear incident, there was a tenfold increase in the incidence of thyroid cancer compared to the previous and following decades. In this group of patients with exposure to low-dose radiation, tumors commonly show a gain of 7q11. When occurring in patients with the multiple endocrine neoplasia syndromes, thyroid cancer may be associated with the development of other types of malignant tumors. (Refer to the Multiple Endocrine Neoplasia (MEN) Syndromes and Carney Complex section of this summary for more information.)
Tumors of the thyroid are classified as adenomas or carcinomas.[45,46,47,48,49] Adenomas are benign growths that may cause enlargement of all or part of the gland, which extends to both sides of the neck and can be quite large; some tumors may secrete hormones. Transformation to a malignant carcinoma may occur in some cells, which then may grow and spread to lymph nodes in the neck or to the lungs. Approximately 20% of thyroid nodules in children are malignant.[45,50]
Studies have shown subtle differences in the genetic profiling of childhood differentiated thyroid carcinomas compared with adult tumors. A higher prevalence of RET/PTC rearrangements is seen in pediatric papillary carcinoma (45%–65% vs. 3%–34% in adults). Conversely, BRAF V600E mutations, which are seen in more than 50% of adults with papillary thyroid carcinoma, are extremely rare in children.
|Characteristic||Children and Adolescents (%)||Adults (%)|
|Lymph node involvement||30–90||5–55|
|a Adapted from Yamashita et al.|
Patients with thyroid cancer usually present with a thyroid mass with or without cervical adenopathy.[57,58,59,60] Younger age is associated with a more aggressive clinical presentation in differentiated thyroid carcinoma. Compared with adults, children have a higher proportion of nodal involvement (40%–90% vs. 20%–50%) and lung metastases (20%–30% vs. 2%). Likewise, when compared to pubertal adolescents, prepubertal children have a more aggressive presentation with a greater degree of extrathyroid extension, lymph node involvement, and lung metastases. However, outcome is similar in the prepubertal and adolescent groups.
Initial evaluation of a child or adolescent with a thyroid nodule should include the following:
Tests of thyroid function are usually normal, but thyroglobulin can be elevated.
Fine-needle aspiration as an initial diagnostic approach is sensitive and useful. However, in doubtful cases, open biopsy or resection should be considered.[62,63,64,65] Open biopsy or resection may be preferable for young children as well.
Treatment of papillary and follicular thyroid carcinoma
The management of differentiated thyroid cancer in children has been reviewed in detail. Also, the American Thyroid Association Taskforce has developed guidelines for management of thyroid nodules and differentiated thyroid cancer in older adolescents and adults; however, it is not yet known how to apply these guidelines to thyroid nodules in children.
Surgery performed by an experienced thyroid surgeon is the treatment required for all thyroid neoplasms.[52,55] For patients with papillary or follicular carcinoma, total or near-total thyroidectomy plus cervical lymph node dissection is the recommended surgical approach.[52,57,67] This aggressive approach is indicated for several reasons:
The use of radioactive iodine ablation for the treatment of children with differentiated thyroid carcinoma has increased over the years. Despite surgery, most children have a significant radioactive iodine uptake in the thyroid bed, and studies have shown increased local recurrence rates for patients who did not receive radioactive iodine after total thyroidectomy compared with those who did receive radioactive iodine. Thus, it is currently recommended that children receive an ablative dose after initial surgery.[45,50,55] For successful remnant ablation, serum TSH levels must be elevated to allow for maximal radioactive iodine uptake; this can usually be achieved with thyroid hormone withdrawal for 3 to 4 weeks after thyroidectomy. A radioactive iodine (I-131) scan is then performed to search for residual, functionally active neoplasm. If there is no disease outside of the thyroid bed, an ablative dose of I-131 (approximately 30 mCi) is administered for total thyroid destruction. If there is evidence of nodal or disseminated disease, higher doses (100–200 mCi) of I-131 are required.[Level of evidence: 3iDiv] In younger children, the I-131 dose may be adjusted for weight (1–1.5 mCi/kg).[45,71,72] After surgery and radioactive iodine therapy, hormone replacement therapy must be given to compensate for the lost thyroid hormone and to suppress TSH production.
Initial treatment (defined as surgery plus one radioactive iodine ablation plus thyroid replacement) is effective in inducing remission for 70% of patients. Extensive disease at diagnosis and larger tumor size predict failure to remit. With additional treatment, 89% of patients achieve remission.
Periodic evaluations are required to determine whether there is metastatic disease involving the lungs. Lifelong follow-up is necessary. T4 and TSH levels should be evaluated periodically to determine whether replacement hormone is appropriately dosed. If thyroglobulin levels rise above postthyroidectomy baseline levels, recurrence of the disease is possible, and physical examination and imaging studies should be repeated. The use of various tyrosine kinase inhibitors or vascular endothelial growth factor receptor inhibitors has shown promising results in patients with metastatic or recurrent thyroid cancer in adults.[76,77,78,79]
Treatment of recurrent papillary and follicular thyroid carcinoma
Patients with differentiated thyroid cancer generally have an excellent survival with relatively few side effects.[75,80,81] Recurrence is common (35%–45%), however, and is seen more often in children younger than 10 years and in those with palpable cervical lymph nodes at diagnosis.[47,82,83] Even patients with a tumor that has spread to the lungs may expect to have no decrease in life span after appropriate treatment. Of note, the sodium-iodide symporter (a membrane-bound glycoprotein cotransporter), essential for uptake of iodide and thyroid hormone synthesis, is expressed in 35% to 45% of thyroid cancers in children and adolescents. Patients with expression of the sodium-iodide symporter have a lower risk of recurrence.
Recurrent papillary thyroid cancer is usually responsive to treatment with radioactive iodine ablation. Tyrosine kinase inhibitors such as sorafenib have been shown to induce responses in up to 15% of adult patients with metastatic disease. Responses to sorafenib have also been documented in pediatric cases.
Medullary thyroid carcinoma
Medullary thyroid carcinomas are commonly associated with the MEN2 syndrome (refer to the Multiple Endocrine Neoplasia (MEN) Syndromes and Carney Complex section of this summary for more information). They present with a more aggressive clinical course; 50% of the cases have hematogenous metastases at diagnosis. Patients with medullary carcinoma of the thyroid have a guarded prognosis, unless they have very small tumors (microcarcinoma, defined as <1.0 cm in diameter), which carry a good prognosis.
Treatment for children with medullary thyroid carcinoma is mainly surgical. A recent review of 430 patients aged 0 to 21 years with medullary thyroid cancer reported that older age (16–21 years) at diagnosis, tumor diameter greater than 2 cm, positive margins after total thyroidectomy, and lymph node metastases were associated with a worse prognosis. This suggests that central neck node dissection and dissection of nearby positive nodes should improve the 10-year survival for these patients. Most cases of medullary thyroid carcinoma occur in the context of the MEN 2A and MEN 2B syndromes. In those familial cases, early genetic testing and counseling are indicated, and prophylactic surgery is recommended in children with the RET germline mutation. Strong genotype-phenotype correlations have facilitated the development of guidelines for intervention, including screening and the age at which prophylactic thyroidectomy should occur.
(Refer to the Multiple Endocrine Neoplasia (MEN) Syndromes and Carney Complex section of this summary for more information.)
The vast majority (>90%) of tumors and tumor-like lesions in the oral cavity are benign.[92,93,94,95] Cancer of the oral cavity is extremely rare in children and adolescents. According to the SEER Stat Fact Sheets, only 0.6% of all cases are diagnosed in patients younger than 20 years, and in 2008, the age-adjusted incidence for this population was 0.24 per 100,000.[96,97]
The incidence of cancer of the oral cavity has increased in adolescent and young adult females, and this pattern is consistent with the national increase in orogenital sexual intercourse in younger females and human papilloma virus (HPV) infection. It is currently estimated that the prevalence of oral HPV infection in the United States is 6.9% in people aged 14 to 69 years and that HPV causes about 30,000 oropharyngeal cancers. Furthermore, the incidence rates for HPV-related oropharyngeal cancer from 1999 to 2008 have increased by 4.4% per year in white men and 1.9% in white women.[99,100,101] Current practices to increase HPV immunization rates in both boys and girls may reduce the burden of HPV-related noncervical cancers.
Benign odontogenic neoplasms include odontoma and ameloblastoma. The most common nonodontogenic neoplasms are fibromas, hemangiomas, and papillomas. Tumor-like lesions include lymphangiomas, granulomas, and eosinophilic granuloma (Langerhans cell histiocytosis).
Malignant lesions were found in 0.1% to 2% of a series of oral biopsies performed in children [92,93] and 3% to 13% of oral tumor biopsies.[94,95] Malignant tumor types include lymphomas (especially Burkitt) and sarcomas (including rhabdomyosarcoma and fibrosarcoma). Mucoepidermoid carcinomas have rarely been reported in the pediatric and adolescent age group. Most are low grade and have a high cure rate with surgery alone. [Level of evidence: 3iiA]
The most common type of primary oral cancer in adults, squamous cell carcinoma (SCC), is extremely rare in children. Review of the SEER database identified 54 patients younger than 20 years with oral cavity SCC between 1973 and 2006. Pediatric patients with oral cavity SCC were more often female and had better survival than adult patients. When differences in patient, tumor, and treatment-related characteristics are adjusted for, the two groups experienced equivalent survival.[Level of evidence: 3iA] Diseases that can be associated with the development of oral SCC include Fanconi anemia, dyskeratosis congenita, connexin mutations, chronic graft-versus-host disease, epidermolysis bullosae, xeroderma pigmentosum, and HPV infection.[105,106,107,108,109,110,111,112]
Treatment of benign oral tumors is surgical. Management of malignant tumors is dependent on histology and may include surgery, chemotherapy, and radiation. Langerhans cell histiocytosis may require other treatment besides surgery. (Refer to the PDQ summaries on adult Oropharyngeal Cancer Treatment; Lip and Oral Cavity Cancer Treatment; and Langerhans Cell Histiocytosis Treatment for more information.)
Salivary Gland Tumors
Salivary gland tumors are rare and account for 0.5% of all malignancies in children and adolescents. Most salivary gland neoplasms arise in the parotid gland.[116,117,118,119,120] About 15% of these tumors may arise in the submandibular glands or in the minor salivary glands under the tongue and jaw. These tumors are most frequently benign but may be malignant, especially in young children. Overall 5-year survival in the pediatric age group is approximately 95%.
The most common malignant lesion is mucoepidermoid carcinoma.[115,123,124] Less common malignancies include acinic cell carcinoma, rhabdomyosarcoma, adenocarcinoma, adenoid cystic carcinoma, and undifferentiated carcinoma. These tumors may occur after radiation therapy and chemotherapy are given for treatment of primary leukemia or solid tumors.[125,126] Mucoepidermoid carcinoma is the most common type of treatment-related salivary gland tumor, and with standard therapy, the 5-year survival is about 95%.[127,128]
Radical surgical removal is the treatment of choice for salivary gland tumors whenever possible, with additional use of radiation therapy and chemotherapy for high-grade tumors or tumors that have spread from their site of origin.[122,124,129,130]
(Refer to the PDQ summary on adult Salivary Gland Cancer Treatment for more information.)
Sialoblastomas are usually benign tumors presenting in the neonatal period and rarely metastasize. Chemotherapy regimens with carboplatin, epirubicin, vincristine, etoposide, dactinomycin, doxorubicin, and ifosfamide have produced responses in two children with sialoblastoma. [Level of evidence: 3iiiDiv]
Laryngeal Cancer and Papillomatosis
Tumors of the larynx are rare. The most common benign tumor is subglottic hemangioma. Malignant tumors, which are especially rare, may be associated with benign tumors such as polyps and papillomas.[135,136] These tumors may cause hoarseness, difficulty swallowing, and enlargement of the lymph nodes of the neck.
Rhabdomyosarcoma is the most common malignant tumor of the larynx in the pediatric age group and is usually managed with chemotherapy and radiation therapy following biopsy, rather than laryngectomy. SCC of the larynx should be managed in the same manner as in adults with carcinoma at this site, with surgery and radiation. Laser surgery may be the first type of treatment utilized for these lesions.
Papillomatosis of the larynx is a benign overgrowth of tissues lining the larynx and is associated with the HPV, most commonly HPV-6 and HPV-11. The presence of HPV-11 appears to correlate with a more aggressive clinical course than HPV-6. These tumors can cause hoarseness because of their association with wart-like nodules on the vocal cords and may rarely extend into the lung, producing significant morbidity. Malignant degeneration may occur with development of cancer in the larynx and squamous cell lung cancer.
Papillomatosis is not cancerous, and primary treatment is surgical ablation with laser vaporization. Frequent recurrences are common. Lung involvement, though rare, can occur. If a patient requires more than four surgical procedures per year, treatment with interferon may be considered. A pilot study of immunotherapy with HspE7, a recombinant fusion protein that has shown activity in other HPV-related diseases, has suggested a marked increase in the amount of time between surgeries. These results, however, must be confirmed in a larger randomized trial.
(Refer to the PDQ summary on adult Laryngeal Cancer Treatment for more information.)
Midline Tract Carcinoma Involving the NUT Gene (NUT Midline Carcinoma)
NUT midline carcinoma is a very rare and aggressive malignancy genetically defined by rearrangements of the gene NUT. In the majority (75%) of cases, the NUT gene on chromosome 15q14 is fused with BRD4 on chromosome 19p13, creating chimeric genes that encode the BRD-NUT fusion proteins. In the remaining cases, NUT is fused to BRD3 on chromosome 9q34 or an unknown partner gene; these tumors are termed NUT-variant.
The tumors arise in midline epithelial structures, typically the mediastinum and upper aerodigestive tract, and present as very aggressive undifferentiated carcinomas, with or without squamous differentiation. Although the original description of this neoplasm was made in children and young adults, patients of all ages can be affected. The outcome is very poor, with an average survival of less than 1 year. Preliminary data seem to indicate that NUT-variant tumors may have a more protracted course.[145,146]
Preclinical studies have shown that NUT-BRD4 is associated with globally decreased histone acetylation and transcriptional repression; studies have also shown that this acetylation can be restored with histone deacetylase inhibitors, resulting in squamous differentiation and arrested growth in vitro and growth inhibition in xenograft models. Response to vorinostat has been reported in a case of a child with refractory disease, thus suggesting a potential role for this class of agents in the treatment of this malignancy.
|1.||Gil Z, Patel SG, Cantu G, et al.: Outcome of craniofacial surgery in children and adolescents with malignant tumors involving the skull base: an international collaborative study. Head Neck 31 (3): 308-17, 2009.|
|2.||Vasef MA, Ferlito A, Weiss LM: Nasopharyngeal carcinoma, with emphasis on its relationship to Epstein-Barr virus. Ann Otol Rhinol Laryngol 106 (4): 348-56, 1997.|
|3.||Ayan I, Kaytan E, Ayan N: Childhood nasopharyngeal carcinoma: from biology to treatment. Lancet Oncol 4 (1): 13-21, 2003.|
|4.||Horner MJ, Ries LA, Krapcho M, et al.: SEER Cancer Statistics Review, 1975-2006. Bethesda, Md: National Cancer Institute, 2009. Also available online. Last accessed October 31, 2012.|
|5.||Sultan I, Casanova M, Ferrari A, et al.: Differential features of nasopharyngeal carcinoma in children and adults: a SEER study. Pediatr Blood Cancer 55 (2): 279-84, 2010.|
|6.||Cheuk DK, Billups CA, Martin MG, et al.: Prognostic factors and long-term outcomes of childhood nasopharyngeal carcinoma. Cancer 117 (1): 197-206, 2011.|
|7.||Dawson CW, Port RJ, Young LS: The role of the EBV-encoded latent membrane proteins LMP1 and LMP2 in the pathogenesis of nasopharyngeal carcinoma (NPC). Semin Cancer Biol 22 (2): 144-53, 2012.|
|8.||Lo YM, Chan LY, Lo KW, et al.: Quantitative analysis of cell-free Epstein-Barr virus DNA in plasma of patients with nasopharyngeal carcinoma. Cancer Res 59 (6): 1188-91, 1999.|
|9.||Edge SB, Byrd DR, Compton CC, et al., eds.: AJCC Cancer Staging Manual. 7th ed. New York, NY: Springer, 2010.|
|10.||Casanova M, Ferrari A, Gandola L, et al.: Undifferentiated nasopharyngeal carcinoma in children and adolescents: comparison between staging systems. Ann Oncol 12 (8): 1157-62, 2001.|
|11.||Casanova M, Bisogno G, Gandola L, et al.: A prospective protocol for nasopharyngeal carcinoma in children and adolescents: the Italian Rare Tumors in Pediatric Age (TREP) project. Cancer 118 (10): 2718-25, 2012.|
|12.||Buehrlen M, Zwaan CM, Granzen B, et al.: Multimodal treatment, including interferon beta, of nasopharyngeal carcinoma in children and young adults: Preliminary results from the prospective, multicenter study NPC-2003-GPOH/DCOG. Cancer 118 (19): 4892-900, 2012.|
|13.||Al-Sarraf M, LeBlanc M, Giri PG, et al.: Chemoradiotherapy versus radiotherapy in patients with advanced nasopharyngeal cancer: phase III randomized Intergroup study 0099. J Clin Oncol 16 (4): 1310-7, 1998.|
|14.||Wolden SL, Steinherz PG, Kraus DH, et al.: Improved long-term survival with combined modality therapy for pediatric nasopharynx cancer. Int J Radiat Oncol Biol Phys 46 (4): 859-64, 2000.|
|15.||Langendijk JA, Leemans ChR, Buter J, et al.: The additional value of chemotherapy to radiotherapy in locally advanced nasopharyngeal carcinoma: a meta-analysis of the published literature. J Clin Oncol 22 (22): 4604-12, 2004.|
|16.||Venkitaraman R, Ramanan SG, Sagar TG: Nasopharyngeal cancer of childhood and adolescence: a single institution experience. Pediatr Hematol Oncol 24 (7): 493-502, 2007 Oct-Nov.|
|17.||Mertens R, Granzen B, Lassay L, et al.: Treatment of nasopharyngeal carcinoma in children and adolescents: definitive results of a multicenter study (NPC-91-GPOH). Cancer 104 (5): 1083-9, 2005.|
|18.||Rodriguez-Galindo C, Wofford M, Castleberry RP, et al.: Preradiation chemotherapy with methotrexate, cisplatin, 5-fluorouracil, and leucovorin for pediatric nasopharyngeal carcinoma. Cancer 103 (4): 850-7, 2005.|
|19.||Nakamura RA, Novaes PE, Antoneli CB, et al.: High-dose-rate brachytherapy as part of a multidisciplinary treatment of nasopharyngeal lymphoepithelioma in childhood. Cancer 104 (3): 525-31, 2005.|
|20.||Louis CU, Paulino AC, Gottschalk S, et al.: A single institution experience with pediatric nasopharyngeal carcinoma: high incidence of toxicity associated with platinum-based chemotherapy plus IMRT. J Pediatr Hematol Oncol 29 (7): 500-5, 2007.|
|21.||Varan A, Ozyar E, Corapçioğlu F, et al.: Pediatric and young adult nasopharyngeal carcinoma patients treated with preradiation Cisplatin and docetaxel chemotherapy. Int J Radiat Oncol Biol Phys 73 (4): 1116-20, 2009.|
|22.||Straathof KC, Bollard CM, Popat U, et al.: Treatment of nasopharyngeal carcinoma with Epstein-Barr virus--specific T lymphocytes. Blood 105 (5): 1898-904, 2005.|
|23.||Kumar M, Fallon RJ, Hill JS, et al.: Esthesioneuroblastoma in children. J Pediatr Hematol Oncol 24 (6): 482-7, 2002 Aug-Sep.|
|24.||Theilgaard SA, Buchwald C, Ingeholm P, et al.: Esthesioneuroblastoma: a Danish demographic study of 40 patients registered between 1978 and 2000. Acta Otolaryngol 123 (3): 433-9, 2003.|
|25.||Dias FL, Sa GM, Lima RA, et al.: Patterns of failure and outcome in esthesioneuroblastoma. Arch Otolaryngol Head Neck Surg 129 (11): 1186-92, 2003.|
|26.||Nakao K, Watanabe K, Fujishiro Y, et al.: Olfactory neuroblastoma: long-term clinical outcome at a single institute between 1979 and 2003. Acta Otolaryngol Suppl (559): 113-7, 2007.|
|27.||Bisogno G, Soloni P, Conte M, et al.: Esthesioneuroblastoma in pediatric and adolescent age. A report from the TREP project in cooperation with the Italian Neuroblastoma and Soft Tissue Sarcoma Committees. BMC Cancer 12: 117, 2012.|
|28.||Benoit MM, Bhattacharyya N, Faquin W, et al.: Cancer of the nasal cavity in the pediatric population. Pediatrics 121 (1): e141-5, 2008.|
|29.||Soler ZM, Smith TL: Endoscopic versus open craniofacial resection of esthesioneuroblastoma: what is the evidence? Laryngoscope 122 (2): 244-5, 2012.|
|30.||Dulguerov P, Allal AS, Calcaterra TC: Esthesioneuroblastoma: a meta-analysis and review. Lancet Oncol 2 (11): 683-90, 2001.|
|31.||Ozsahin M, Gruber G, Olszyk O, et al.: Outcome and prognostic factors in olfactory neuroblastoma: a rare cancer network study. Int J Radiat Oncol Biol Phys 78 (4): 992-7, 2010.|
|32.||Unger F, Haselsberger K, Walch C, et al.: Combined endoscopic surgery and radiosurgery as treatment modality for olfactory neuroblastoma (esthesioneuroblastoma). Acta Neurochir (Wien) 147 (6): 595-601; discussion 601-2, 2005.|
|33.||Zanation AM, Ferlito A, Rinaldo A, et al.: When, how and why to treat the neck in patients with esthesioneuroblastoma: a review. Eur Arch Otorhinolaryngol 267 (11): 1667-71, 2010.|
|34.||Eich HT, Müller RP, Micke O, et al.: Esthesioneuroblastoma in childhood and adolescence. Better prognosis with multimodal treatment? Strahlenther Onkol 181 (6): 378-84, 2005.|
|35.||Loy AH, Reibel JF, Read PW, et al.: Esthesioneuroblastoma: continued follow-up of a single institution's experience. Arch Otolaryngol Head Neck Surg 132 (2): 134-8, 2006.|
|36.||Porter AB, Bernold DM, Giannini C, et al.: Retrospective review of adjuvant chemotherapy for esthesioneuroblastoma. J Neurooncol 90 (2): 201-4, 2008.|
|37.||Benfari G, Fusconi M, Ciofalo A, et al.: Radiotherapy alone for local tumour control in esthesioneuroblastoma. Acta Otorhinolaryngol Ital 28 (6): 292-7, 2008.|
|38.||Kim DW, Jo YH, Kim JH, et al.: Neoadjuvant etoposide, ifosfamide, and cisplatin for the treatment of olfactory neuroblastoma. Cancer 101 (10): 2257-60, 2004.|
|39.||Kiyota N, Tahara M, Fujii S, et al.: Nonplatinum-based chemotherapy with irinotecan plus docetaxel for advanced or metastatic olfactory neuroblastoma: a retrospective analysis of 12 cases. Cancer 112 (4): 885-91, 2008.|
|40.||Shapiro NL, Bhattacharyya N: Population-based outcomes for pediatric thyroid carcinoma. Laryngoscope 115 (2): 337-40, 2005.|
|41.||Cotterill SJ, Pearce MS, Parker L: Thyroid cancer in children and young adults in the North of England. Is increasing incidence related to the Chernobyl accident? Eur J Cancer 37 (8): 1020-6, 2001.|
|42.||Kaplan MM, Garnick MB, Gelber R, et al.: Risk factors for thyroid abnormalities after neck irradiation for childhood cancer. Am J Med 74 (2): 272-80, 1983.|
|43.||Demidchik YE, Saenko VA, Yamashita S: Childhood thyroid cancer in Belarus, Russia, and Ukraine after Chernobyl and at present. Arq Bras Endocrinol Metabol 51 (5): 748-62, 2007.|
|44.||Hess J, Thomas G, Braselmann H, et al.: Gain of chromosome band 7q11 in papillary thyroid carcinomas of young patients is associated with exposure to low-dose irradiation. Proc Natl Acad Sci U S A 108 (23): 9595-600, 2011.|
|45.||Dinauer C, Francis GL: Thyroid cancer in children. Endocrinol Metab Clin North Am 36 (3): 779-806, vii, 2007.|
|46.||Vasko V, Bauer AJ, Tuttle RM, et al.: Papillary and follicular thyroid cancers in children. Endocr Dev 10: 140-72, 2007.|
|47.||Grigsby PW, Gal-or A, Michalski JM, et al.: Childhood and adolescent thyroid carcinoma. Cancer 95 (4): 724-9, 2002.|
|48.||Skinner MA: Cancer of the thyroid gland in infants and children. Semin Pediatr Surg 10 (3): 119-26, 2001.|
|49.||Halac I, Zimmerman D: Thyroid nodules and cancers in children. Endocrinol Metab Clin North Am 34 (3): 725-44, x, 2005.|
|50.||Waguespack SG, Francis G: Initial management and follow-up of differentiated thyroid cancer in children. J Natl Compr Canc Netw 8 (11): 1289-300, 2010.|
|51.||Feinmesser R, Lubin E, Segal K, et al.: Carcinoma of the thyroid in children--a review. J Pediatr Endocrinol Metab 10 (6): 561-8, 1997 Nov-Dec.|
|52.||Hung W, Sarlis NJ: Current controversies in the management of pediatric patients with well-differentiated nonmedullary thyroid cancer: a review. Thyroid 12 (8): 683-702, 2002.|
|53.||Hay ID, Gonzalez-Losada T, Reinalda MS, et al.: Long-term outcome in 215 children and adolescents with papillary thyroid cancer treated during 1940 through 2008. World J Surg 34 (6): 1192-202, 2010.|
|54.||Skinner MA: Management of hereditary thyroid cancer in children. Surg Oncol 12 (2): 101-4, 2003.|
|55.||Rivkees SA, Mazzaferri EL, Verburg FA, et al.: The treatment of differentiated thyroid cancer in children: emphasis on surgical approach and radioactive iodine therapy. Endocr Rev 32 (6): 798-826, 2011.|
|56.||Yamashita S, Saenko V: Mechanisms of Disease: molecular genetics of childhood thyroid cancers. Nat Clin Pract Endocrinol Metab 3 (5): 422-9, 2007.|
|57.||Thompson GB, Hay ID: Current strategies for surgical management and adjuvant treatment of childhood papillary thyroid carcinoma. World J Surg 28 (12): 1187-98, 2004.|
|58.||Harness JK, Sahar DE, et al.: Childhood thyroid carcinoma. In: Clark O, Duh Q-Y, Kebebew E, eds.: Textbook of Endocrine Surgery. 2nd ed. Philadelphia, PA: Elsevier Saunders Company, 2005., pp 93-101.|
|59.||Rachmiel M, Charron M, Gupta A, et al.: Evidence-based review of treatment and follow up of pediatric patients with differentiated thyroid carcinoma. J Pediatr Endocrinol Metab 19 (12): 1377-93, 2006.|
|60.||Wada N, Sugino K, Mimura T, et al.: Treatment strategy of papillary thyroid carcinoma in children and adolescents: clinical significance of the initial nodal manifestation. Ann Surg Oncol 16 (12): 3442-9, 2009.|
|61.||Lazar L, Lebenthal Y, Steinmetz A, et al.: Differentiated thyroid carcinoma in pediatric patients: comparison of presentation and course between pre-pubertal children and adolescents. J Pediatr 154 (5): 708-14, 2009.|
|62.||Flannery TK, Kirkland JL, Copeland KC, et al.: Papillary thyroid cancer: a pediatric perspective. Pediatrics 98 (3 Pt 1): 464-6, 1996.|
|63.||Willgerodt H, Keller E, Bennek J, et al.: Diagnostic value of fine-needle aspiration biopsy of thyroid nodules in children and adolescents. J Pediatr Endocrinol Metab 19 (4): 507-15, 2006.|
|64.||Stevens C, Lee JK, Sadatsafavi M, et al.: Pediatric thyroid fine-needle aspiration cytology: a meta-analysis. J Pediatr Surg 44 (11): 2184-91, 2009.|
|65.||Bargren AE, Meyer-Rochow GY, Sywak MS, et al.: Diagnostic utility of fine-needle aspiration cytology in pediatric differentiated thyroid cancer. World J Surg 34 (6): 1254-60, 2010.|
|66.||Cooper DS, Doherty GM, Haugen BR, et al.: Revised American Thyroid Association management guidelines for patients with thyroid nodules and differentiated thyroid cancer. Thyroid 19 (11): 1167-214, 2009.|
|67.||Raval MV, Bentrem DJ, Stewart AK, et al.: Utilization of total thyroidectomy for differentiated thyroid cancer in children. Ann Surg Oncol 17 (10): 2545-53, 2010.|
|68.||Newman KD, Black T, Heller G, et al.: Differentiated thyroid cancer: determinants of disease progression in patients <21 years of age at diagnosis: a report from the Surgical Discipline Committee of the Children's Cancer Group. Ann Surg 227 (4): 533-41, 1998.|
|69.||Chow SM, Law SC, Mendenhall WM, et al.: Differentiated thyroid carcinoma in childhood and adolescence-clinical course and role of radioiodine. Pediatr Blood Cancer 42 (2): 176-83, 2004.|
|70.||Verburg FA, Biko J, Diessl S, et al.: I-131 activities as high as safely administrable (AHASA) for the treatment of children and adolescents with advanced differentiated thyroid cancer. J Clin Endocrinol Metab 96 (8): E1268-71, 2011.|
|71.||Luster M, Lassmann M, Freudenberg LS, et al.: Thyroid cancer in childhood: management strategy, including dosimetry and long-term results. Hormones (Athens) 6 (4): 269-78, 2007 Oct-Dec.|
|72.||Parisi MT, Mankoff D: Differentiated pediatric thyroid cancer: correlates with adult disease, controversies in treatment. Semin Nucl Med 37 (5): 340-56, 2007.|
|73.||Yeh SD, La Quaglia MP: 131I therapy for pediatric thyroid cancer. Semin Pediatr Surg 6 (3): 128-33, 1997.|
|74.||Powers PA, Dinauer CA, Tuttle RM, et al.: Tumor size and extent of disease at diagnosis predict the response to initial therapy for papillary thyroid carcinoma in children and adolescents. J Pediatr Endocrinol Metab 16 (5): 693-702, 2003.|
|75.||Vassilopoulou-Sellin R, Goepfert H, Raney B, et al.: Differentiated thyroid cancer in children and adolescents: clinical outcome and mortality after long-term follow-up. Head Neck 20 (6): 549-55, 1998.|
|76.||Kloos RT, Ringel MD, Knopp MV, et al.: Phase II trial of sorafenib in metastatic thyroid cancer. J Clin Oncol 27 (10): 1675-84, 2009.|
|77.||Cohen EE, Rosen LS, Vokes EE, et al.: Axitinib is an active treatment for all histologic subtypes of advanced thyroid cancer: results from a phase II study. J Clin Oncol 26 (29): 4708-13, 2008.|
|78.||Schlumberger MJ, Elisei R, Bastholt L, et al.: Phase II study of safety and efficacy of motesanib in patients with progressive or symptomatic, advanced or metastatic medullary thyroid cancer. J Clin Oncol 27 (23): 3794-801, 2009.|
|79.||Cabanillas ME, Waguespack SG, Bronstein Y, et al.: Treatment with tyrosine kinase inhibitors for patients with differentiated thyroid cancer: the M. D. Anderson experience. J Clin Endocrinol Metab 95 (6): 2588-95, 2010.|
|80.||Wiersinga WM: Thyroid cancer in children and adolescents--consequences in later life. J Pediatr Endocrinol Metab 14 (Suppl 5): 1289-96; discussion 1297-8, 2001.|
|81.||Jarzab B, Handkiewicz-Junak D, Wloch J: Juvenile differentiated thyroid carcinoma and the role of radioiodine in its treatment: a qualitative review. Endocr Relat Cancer 12 (4): 773-803, 2005.|
|82.||Alessandri AJ, Goddard KJ, Blair GK, et al.: Age is the major determinant of recurrence in pediatric differentiated thyroid carcinoma. Med Pediatr Oncol 35 (1): 41-6, 2000.|
|83.||Borson-Chazot F, Causeret S, Lifante JC, et al.: Predictive factors for recurrence from a series of 74 children and adolescents with differentiated thyroid cancer. World J Surg 28 (11): 1088-92, 2004.|
|84.||Biko J, Reiners C, Kreissl MC, et al.: Favourable course of disease after incomplete remission on (131)I therapy in children with pulmonary metastases of papillary thyroid carcinoma: 10 years follow-up. Eur J Nucl Med Mol Imaging 38 (4): 651-5, 2011.|
|85.||Patel A, Jhiang S, Dogra S, et al.: Differentiated thyroid carcinoma that express sodium-iodide symporter have a lower risk of recurrence for children and adolescents. Pediatr Res 52 (5): 737-44, 2002.|
|86.||Powers PA, Dinauer CA, Tuttle RM, et al.: Treatment of recurrent papillary thyroid carcinoma in children and adolescents. J Pediatr Endocrinol Metab 16 (7): 1033-40, 2003.|
|87.||Waguespack SG, Sherman SI, Williams MD, et al.: The successful use of sorafenib to treat pediatric papillary thyroid carcinoma. Thyroid 19 (4): 407-12, 2009.|
|88.||Hill CS Jr, Ibanez ML, Samaan NA, et al.: Medullary (solid) carcinoma of the thyroid gland: an analysis of the M. D. Anderson hospital experience with patients with the tumor, its special features, and its histogenesis. Medicine (Baltimore) 52 (2): 141-71, 1973.|
|89.||Krueger JE, Maitra A, Albores-Saavedra J: Inherited medullary microcarcinoma of the thyroid: a study of 11 cases. Am J Surg Pathol 24 (6): 853-8, 2000.|
|90.||Raval MV, Sturgeon C, Bentrem DJ, et al.: Influence of lymph node metastases on survival in pediatric medullary thyroid cancer. J Pediatr Surg 45 (10): 1947-54, 2010.|
|91.||Waguespack SG, Rich TA, Perrier ND, et al.: Management of medullary thyroid carcinoma and MEN2 syndromes in childhood. Nat Rev Endocrinol 7 (10): 596-607, 2011.|
|92.||Das S, Das AK: A review of pediatric oral biopsies from a surgical pathology service in a dental school. Pediatr Dent 15 (3): 208-11, 1993 May-Jun.|
|93.||Ulmansky M, Lustmann J, Balkin N: Tumors and tumor-like lesions of the oral cavity and related structures in Israeli children. Int J Oral Maxillofac Surg 28 (4): 291-4, 1999.|
|94.||Tröbs RB, Mader E, Friedrich T, et al.: Oral tumors and tumor-like lesions in infants and children. Pediatr Surg Int 19 (9-10): 639-45, 2003.|
|95.||Tanaka N, Murata A, Yamaguchi A, et al.: Clinical features and management of oral and maxillofacial tumors in children. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 88 (1): 11-5, 1999.|
|96.||Young JL Jr, Miller RW: Incidence of malignant tumors in U. S. children. J Pediatr 86 (2): 254-8, 1975.|
|97.||Berstein L, Gurney JG: Carcinomas and other malignant epithelial neoplasms. In: Ries LA, Smith MA, Gurney JG, et al., eds.: Cancer incidence and survival among children and adolescents: United States SEER Program 1975-1995. Bethesda, Md: National Cancer Institute, SEER Program, 1999. NIH Pub.No. 99-4649., Chapter 11, pp 139-148. Also available online. Last accessed October 31, 2012.|
|98.||Bleyer A: Cancer of the oral cavity and pharynx in young females: increasing incidence, role of human papilloma virus, and lack of survival improvement. Semin Oncol 36 (5): 451-9, 2009.|
|99.||D'Souza G, Dempsey A: The role of HPV in head and neck cancer and review of the HPV vaccine. Prev Med 53 (Suppl 1): S5-S11, 2011.|
|100.||Gillison ML, Broutian T, Pickard RK, et al.: Prevalence of oral HPV infection in the United States, 2009-2010. JAMA 307 (7): 693-703, 2012.|
|101.||Simard EP, Ward EM, Siegel R, et al.: Cancers with increasing incidence trends in the United States: 1999 through 2008. CA Cancer J Clin : , 2012.|
|102.||Gillison ML, Chaturvedi AK, Lowy DR: HPV prophylactic vaccines and the potential prevention of noncervical cancers in both men and women. Cancer 113 (10 Suppl): 3036-46, 2008.|
|103.||Morris LG, Ganly I: Outcomes of oral cavity squamous cell carcinoma in pediatric patients. Oral Oncol 46 (4): 292-6, 2010.|
|104.||Perez DE, Pires FR, Alves Fde A, et al.: Juvenile intraoral mucoepidermoid carcinoma. J Oral Maxillofac Surg 66 (2): 308-11, 2008.|
|105.||Oksüzoğlu B, Yalçin S: Squamous cell carcinoma of the tongue in a patient with Fanconi's anemia: a case report and review of the literature. Ann Hematol 81 (5): 294-8, 2002.|
|106.||Reinhard H, Peters I, Gottschling S, et al.: Squamous cell carcinoma of the tongue in a 13-year-old girl with Fanconi anemia. J Pediatr Hematol Oncol 29 (7): 488-91, 2007.|
|107.||Ragin CC, Modugno F, Gollin SM: The epidemiology and risk factors of head and neck cancer: a focus on human papillomavirus. J Dent Res 86 (2): 104-14, 2007.|
|108.||Fine JD, Johnson LB, Weiner M, et al.: Epidermolysis bullosa and the risk of life-threatening cancers: the National EB Registry experience, 1986-2006. J Am Acad Dermatol 60 (2): 203-11, 2009.|
|109.||Kraemer KH, Lee MM, Scotto J: Xeroderma pigmentosum. Cutaneous, ocular, and neurologic abnormalities in 830 published cases. Arch Dermatol 123 (2): 241-50, 1987.|
|110.||Alter BP: Cancer in Fanconi anemia, 1927-2001. Cancer 97 (2): 425-40, 2003.|
|111.||Mazereeuw-Hautier J, Bitoun E, Chevrant-Breton J, et al.: Keratitis-ichthyosis-deafness syndrome: disease expression and spectrum of connexin 26 (GJB2) mutations in 14 patients. Br J Dermatol 156 (5): 1015-9, 2007.|
|112.||Alter BP, Giri N, Savage SA, et al.: Cancer in dyskeratosis congenita. Blood 113 (26): 6549-57, 2009.|
|113.||Sturgis EM, Moore BA, Glisson BS, et al.: Neoadjuvant chemotherapy for squamous cell carcinoma of the oral tongue in young adults: a case series. Head Neck 27 (9): 748-56, 2005.|
|114.||Woo VL, Kelsch RD, Su L, et al.: Gingival squamous cell carcinoma in adolescence. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 107 (1): 92-9, 2009.|
|115.||Sultan I, Rodriguez-Galindo C, Al-Sharabati S, et al.: Salivary gland carcinomas in children and adolescents: a population-based study, with comparison to adult cases. Head Neck 33 (10): 1476-81, 2011.|
|116.||Ethunandan M, Ethunandan A, Macpherson D, et al.: Parotid neoplasms in children: experience of diagnosis and management in a district general hospital. Int J Oral Maxillofac Surg 32 (4): 373-7, 2003.|
|117.||da Cruz Perez DE, Pires FR, Alves FA, et al.: Salivary gland tumors in children and adolescents: a clinicopathologic and immunohistochemical study of fifty-three cases. Int J Pediatr Otorhinolaryngol 68 (7): 895-902, 2004.|
|118.||Shapiro NL, Bhattacharyya N: Clinical characteristics and survival for major salivary gland malignancies in children. Otolaryngol Head Neck Surg 134 (4): 631-4, 2006.|
|119.||Ellies M, Schaffranietz F, Arglebe C, et al.: Tumors of the salivary glands in childhood and adolescence. J Oral Maxillofac Surg 64 (7): 1049-58, 2006.|
|120.||Muenscher A, Diegel T, Jaehne M, et al.: Benign and malignant salivary gland diseases in children A retrospective study of 549 cases from the Salivary Gland Registry, Hamburg. Auris Nasus Larynx 36 (3): 326-31, 2009.|
|121.||Laikui L, Hongwei L, Hongbing J, et al.: Epithelial salivary gland tumors of children and adolescents in west China population: a clinicopathologic study of 79 cases. J Oral Pathol Med 37 (4): 201-5, 2008.|
|122.||Rutt AL, Hawkshaw MJ, Lurie D, et al.: Salivary gland cancer in patients younger than 30 years. Ear Nose Throat J 90 (4): 174-84, 2011.|
|123.||Rahbar R, Grimmer JF, Vargas SO, et al.: Mucoepidermoid carcinoma of the parotid gland in children: A 10-year experience. Arch Otolaryngol Head Neck Surg 132 (4): 375-80, 2006.|
|124.||Kupferman ME, de la Garza GO, Santillan AA, et al.: Outcomes of pediatric patients with malignancies of the major salivary glands. Ann Surg Oncol 17 (12): 3301-7, 2010.|
|125.||Kaste SC, Hedlund G, Pratt CB: Malignant parotid tumors in patients previously treated for childhood cancer: clinical and imaging findings in eight cases. AJR Am J Roentgenol 162 (3): 655-9, 1994.|
|126.||Whatley WS, Thompson JW, Rao B: Salivary gland tumors in survivors of childhood cancer. Otolaryngol Head Neck Surg 134 (3): 385-8, 2006.|
|127.||Verma J, Teh BS, Paulino AC: Characteristics and outcome of radiation and chemotherapy-related mucoepidermoid carcinoma of the salivary glands. Pediatr Blood Cancer 57 (7): 1137-41, 2011.|
|128.||Védrine PO, Coffinet L, Temam S, et al.: Mucoepidermoid carcinoma of salivary glands in the pediatric age group: 18 clinical cases, including 11 second malignant neoplasms. Head Neck 28 (9): 827-33, 2006.|
|129.||Kamal SA, Othman EO: Diagnosis and treatment of parotid tumours. J Laryngol Otol 111 (4): 316-21, 1997.|
|130.||Ryan JT, El-Naggar AK, Huh W, et al.: Primacy of surgery in the management of mucoepidermoid carcinoma in children. Head Neck 33 (12): 1769-73, 2011.|
|131.||Williams SB, Ellis GL, Warnock GR: Sialoblastoma: a clinicopathologic and immunohistochemical study of 7 cases. Ann Diagn Pathol 10 (6): 320-6, 2006.|
|132.||Prigent M, Teissier N, Peuchmaur M, et al.: Sialoblastoma of salivary glands in children: chemotherapy should be discussed as an alternative to mutilating surgery. Int J Pediatr Otorhinolaryngol 74 (8): 942-5, 2010.|
|133.||Scott JX, Krishnan S, Bourne AJ, et al.: Treatment of metastatic sialoblastoma with chemotherapy and surgery. Pediatr Blood Cancer 50 (1): 134-7, 2008.|
|134.||Bitar MA, Moukarbel RV, Zalzal GH: Management of congenital subglottic hemangioma: trends and success over the past 17 years. Otolaryngol Head Neck Surg 132 (2): 226-31, 2005.|
|135.||McGuirt WF Jr, Little JP: Laryngeal cancer in children and adolescents. Otolaryngol Clin North Am 30 (2): 207-14, 1997.|
|136.||Bauman NM, Smith RJ: Recurrent respiratory papillomatosis. Pediatr Clin North Am 43 (6): 1385-401, 1996.|
|137.||Wharam MD Jr, Foulkes MA, Lawrence W Jr, et al.: Soft tissue sarcoma of the head and neck in childhood: nonorbital and nonparameningeal sites. A report of the Intergroup Rhabdomyosarcoma Study (IRS)-I. Cancer 53 (4): 1016-9, 1984.|
|138.||Siddiqui F, Sarin R, Agarwal JP, et al.: Squamous carcinoma of the larynx and hypopharynx in children: a distinct clinical entity? Med Pediatr Oncol 40 (5): 322-4, 2003.|
|139.||Kashima HK, Mounts P, Shah K: Recurrent respiratory papillomatosis. Obstet Gynecol Clin North Am 23 (3): 699-706, 1996.|
|140.||Maloney EM, Unger ER, Tucker RA, et al.: Longitudinal measures of human papillomavirus 6 and 11 viral loads and antibody response in children with recurrent respiratory papillomatosis. Arch Otolaryngol Head Neck Surg 132 (7): 711-5, 2006.|
|141.||Gélinas JF, Manoukian J, Côté A: Lung involvement in juvenile onset recurrent respiratory papillomatosis: a systematic review of the literature. Int J Pediatr Otorhinolaryngol 72 (4): 433-52, 2008.|
|142.||Andrus JG, Shapshay SM: Contemporary management of laryngeal papilloma in adults and children. Otolaryngol Clin North Am 39 (1): 135-58, 2006.|
|143.||Avidano MA, Singleton GT: Adjuvant drug strategies in the treatment of recurrent respiratory papillomatosis. Otolaryngol Head Neck Surg 112 (2): 197-202, 1995.|
|144.||Derkay CS, Smith RJ, McClay J, et al.: HspE7 treatment of pediatric recurrent respiratory papillomatosis: final results of an open-label trial. Ann Otol Rhinol Laryngol 114 (9): 730-7, 2005.|
|145.||French CA: NUT midline carcinoma. Cancer Genet Cytogenet 203 (1): 16-20, 2010.|
|146.||French CA, Kutok JL, Faquin WC, et al.: Midline carcinoma of children and young adults with NUT rearrangement. J Clin Oncol 22 (20): 4135-9, 2004.|
|147.||Schwartz BE, Hofer MD, Lemieux ME, et al.: Differentiation of NUT midline carcinoma by epigenomic reprogramming. Cancer Res 71 (7): 2686-96, 2011.|
Thoracic cancers include breast cancer, bronchial adenomas, bronchial carcinoid tumors, pleuropulmonary blastoma, esophageal tumors, thymomas, thymic carcinomas, cardiac tumors, and mesothelioma. The prognosis, diagnosis, classification, and treatment of these thoracic cancers are discussed below. It must be emphasized that these cancers are seen very infrequently in patients younger than 15 years, and most of the evidence is derived from case series.
The most frequent breast tumor seen in children is a fibroadenoma.[2,3] These tumors can be observed and many will regress without a need for biopsy. However, rare malignant transformation leading to phyllodes tumors has been reported. Sudden rapid enlargement of a suspected fibroadenoma is an indication for needle biopsy or excision. Phyllodes tumors can be managed by wide local excision without mastectomy.
Malignant breast tumors
Breast cancer has been reported in both males and females younger than 21 years.[5,6,7,8,9,10] A review of the Surveillance, Epidemiology, and End Results (SEER) database shows that 75 cases of malignant breast tumors in females 19 years or younger were identified from 1973 to 2004. Fifteen percent of these patients had in situ disease, 85% had invasive disease, 55% of the tumors were carcinomas, and 45% of the tumors were sarcomas—most of which were phyllodes tumors. Only three patients in the carcinoma group presented with metastatic disease, while 11 patients (27%) had regionally advanced disease. All patients with sarcomas presented with localized disease. Of the carcinoma patients, 85% underwent surgical resection, and 10% received adjuvant radiation therapy. Of the sarcoma patients, 97% had surgical resection, and 9% received radiation. The 5- and 10-year survival rates for patients with sarcomatous tumors were both 90%; for patients with carcinomas, the 5-year survival rate was 63% and the 10-year survival rate was 54%.
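For orientation, the approximate case counts implied by the SEER percentages quoted above can be recovered with simple arithmetic. The short Python sketch below only rounds the stated fractions of the 75 reported cases; the rounding to whole patients is an assumption for illustration.

# Approximate counts implied by the SEER percentages quoted above (75 cases total).
total_cases = 75
fractions = {
    "carcinomas": 0.55,
    "sarcomas": 0.45,
    "in situ disease": 0.15,
    "invasive disease": 0.85,
}
for label, fraction in fractions.items():
    print(f"{label}: ~{round(total_cases * fraction)} of {total_cases} cases")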
Breast cancer is the most frequently diagnosed cancer among adolescent and young adult (AYA) women aged 15 to 39 years, accounting for about 14% of all AYA cancer diagnoses. Breast cancer in this age group has a more aggressive course and worse outcome than in older women. Expression of estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2 (HER2) in breast cancers of the AYA group also differs from that in older women and correlates with a worse prognosis. Treatment in the AYA group is similar to that in older women. However, unique aspects of management must include attention to genetic implications (i.e., familial breast cancer syndromes) and fertility.
There is an increased lifetime risk of breast cancer in female survivors of Hodgkin lymphoma who were treated with radiation to the chest area; however, breast cancer is also seen in patients treated with chest irradiation for any type of cancer.[9,15,16,17,18] Carcinomas are more frequent than sarcomas. Mammography with adjunctive breast magnetic resonance imaging (MRI) should start at age 25 years or 10 years after exposure to radiation therapy, whichever comes later. (Refer to the PDQ summary on the Late Effects of Treatment for Childhood Cancer for more information about secondary breast cancers.) Breast tumors may also occur as metastatic deposits from leukemia, rhabdomyosarcoma, other sarcomas, or lymphoma (particularly in patients who are infected with the human immunodeficiency virus).
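The surveillance-start rule quoted above (begin mammography with adjunctive breast MRI at age 25 years or 10 years after chest irradiation, whichever comes later) reduces to a simple maximum. The Python sketch below is a minimal illustration; the function name and the example exposure ages are assumptions, not part of the summary.

def surveillance_start_age(age_at_chest_irradiation: float) -> float:
    # Whichever comes later: age 25 years, or 10 years after radiation exposure.
    return max(25.0, age_at_chest_irradiation + 10.0)

if __name__ == "__main__":
    for exposure_age in (12, 20):
        print(f"chest irradiation at age {exposure_age}: start surveillance at age {surveillance_start_age(exposure_age):.0f}")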
(Refer to the PDQ summary on adult Breast Cancer Treatment for more information.)
Primary lung tumors are rare in children and histologically quite diverse. When epithelial cancers of the lung occur, they tend to be of advanced stage with prognosis dependent on both histology and stage. The majority of pulmonary malignant neoplasms in children are due to metastatic disease, with an approximate ratio of primary malignant tumors to metastatic disease of 1:5. While primary pulmonary tumors are rare in children, the majority of these tumors are malignant. In a review of 383 primary pulmonary neoplasms in children, 76% were malignant and 24% were benign. The most common malignant primary tumors of the lung, bronchial tumors and pleuropulmonary blastoma, are discussed below.
Bronchial tumors are a heterogeneous group of primary endobronchial lesions, and though adenoma implies a benign process, all varieties of bronchial tumors on occasion display malignant behavior. There are three histologic types: carcinoid tumor, mucoepidermoid carcinoma, and adenoid cystic carcinoma (cylindroma).[22,23,24,25,26,27]
The presenting symptoms of cough, recurrent pneumonitis, and hemoptysis are usually due to incomplete bronchial obstruction. Because of difficulties in diagnosis, symptoms are frequently present for months; occasionally, children with wheezing have been treated for asthma, with delays in diagnosis of as long as 4 to 5 years.
Metastatic lesions are reported in approximately 6% of carcinoid tumors, and recurrences are reported in 2% of cases. Atypical carcinoid tumors are rare but more aggressive, with 50% of patients presenting with metastatic disease at diagnosis.[19,31] There is a single report of a child with a carcinoid tumor and metastatic disease who developed the classic carcinoid syndrome. Octreotide nuclear scans may demonstrate uptake of radioactivity by the tumor or lymph nodes, suggesting metastatic spread.
The management of bronchial tumors is somewhat controversial. Although these tumors are usually visible endoscopically, biopsy may be hazardous because of hemorrhage, and endoscopic resection is not recommended. Bronchography or computed tomography scan may be helpful to determine the degree of bronchiectasis distal to the obstruction, since the degree of pulmonary destruction may influence surgical therapy.
Conservative pulmonary resection, including sleeve segmental resection when feasible, with removal of the involved lymphatics, is the treatment of choice.[34,35] Adenoid cystic carcinomas (cylindromas) have a tendency to spread submucosally, and late local recurrence or dissemination has been reported. In addition to en bloc resection with hilar lymphadenectomy, frozen-section examination of the bronchial margins should be carried out in children with this lesion. Neither chemotherapy nor radiation therapy is indicated for bronchial tumors unless evidence of metastasis is documented.
Pleuropulmonary blastoma is a rare and highly aggressive pulmonary malignancy in children. Pleuropulmonary blastoma appears to progress through the following stages: Type I (cystic), Type II (cystic and solid), and Type III (solid).
The tumor is usually located in the lung periphery, but it may be extrapulmonary with involvement of the heart/great vessels, mediastinum, diaphragm, and/or pleura.[41,42] The International Pleuropulmonary Blastoma Registry identified 11 cases of Type II and Type III pleuropulmonary blastoma with tumor extension into the thoracic great vessels or the heart. Radiographic evaluation of the central circulation should be performed in children with suspected or diagnosed pleuropulmonary blastoma to identify potentially fatal embolic complications.
Approximately one-third of families affected by pleuropulmonary blastoma manifest a number of dysplastic and/or neoplastic conditions comprising the Pleuropulmonary blastoma Family Tumor and Dysplasia Syndrome. Germline mutations in the DICER1 gene are considered the major genetic determinant of the complex.[44,45] A family history of cancer in close relatives has been noted for many young patients affected by this tumor.[46,47] In addition, pleuropulmonary blastoma has been reported in siblings. There has been a reported association between pleuropulmonary blastoma and cystic nephroma, ciliary body medulloepithelioma of the eye, and primary ovarian neoplasms, particularly ovarian sex cord–stromal tumors.[45,49,50,51,52] Importantly, while DICER1 mutations cause a wide range of phenotypes, pleuropulmonary blastoma does not occur in all families with DICER1 mutations; therefore, the term DICER1 syndrome is generally used for these families. Also, most mutation carriers are unaffected, indicating that tumor risk is modest.
Achieving total resection of the tumor at any time during treatment is associated with improved prognosis, although the tumors may recur or metastasize in spite of primary resection.[37,40] The cerebral parenchyma is the most common metastatic site. Responses to chemotherapy have been reported with agents similar to those used for the treatment of rhabdomyosarcoma, including vincristine, cyclophosphamide, dactinomycin, doxorubicin, and irinotecan, and data from the International Pleuropulmonary Blastoma Registry suggest that adjuvant chemotherapy may benefit patients with Type I pleuropulmonary blastoma by reducing the risk of recurrence.[39,53] High-dose chemotherapy with stem cell rescue has been used without success. Radiation therapy, either external-beam or P-32, may be used when the tumor cannot be surgically removed.
There are no standard treatment options, and the rare occurrence of these tumors makes treatment recommendations difficult. Current treatment regimens have been informed by consensus conferences, and some general treatment considerations have been proposed by the Pleuropulmonary Blastoma Registry.
An independent group of researchers has established a registry and resource Web site for this rare tumor.
Esophageal cancer is rare in the pediatric age group, although it is relatively common in older adults.[59,60] Most of these tumors are squamous cell carcinomas, although sarcomas can also arise in the esophagus. The most common benign tumor is leiomyoma.
Symptoms are related to difficulty in swallowing and associated weight loss. Diagnosis is made by histologic examination of biopsy tissue.
Treatment options for esophageal carcinoma include external-beam or intracavitary radiation therapy, or chemotherapy agents commonly used to treat carcinomas, such as platinum derivatives, paclitaxel, and etoposide. Prognosis is generally poor for this cancer, which can rarely be completely resected.
(Refer to the PDQ summary on adult Esophageal Cancer Treatment for more information.)
Thymoma and Thymic Carcinoma
A cancer of the thymus is not considered a thymoma or a thymic carcinoma unless there are neoplastic changes of the epithelial cells that cover the organ.[61,62,63] The term thymoma is customarily used to describe neoplasms that show no overt atypia of the epithelial component. A thymic epithelial tumor that exhibits clear-cut cytologic atypia and histologic features no longer specific to the thymus is termed thymic carcinoma (also known as type C thymoma); thymic carcinomas have a higher incidence of capsular invasion and metastases. Other tumors that involve the thymus gland include lymphomas, germ cell tumors, carcinomas, carcinoids, and thymomas. Hodgkin lymphoma and non-Hodgkin lymphoma may also involve the thymus and must be differentiated from true thymomas and thymic carcinomas.
Thymoma and thymic carcinomas are very rare in children.[64,65] In the Tumori Rari in Età Pediatrica (TREP) registry, only eight cases were identified over a 9-year period. Various diseases and syndromes are associated with thymoma, including myasthenia gravis, polymyositis, systemic lupus erythematosus, rheumatoid arthritis, thyroiditis, Isaacs syndrome or neuromyotonia (continuous muscle stiffness resulting from persistent muscle activity as a consequence of antibodies against voltage-gated potassium channels), and pure red-cell aplasia.[67,68] Endocrine (hormonal) disorders including hyperthyroidism, Addison disease, and panhypopituitarism can also be associated with a diagnosis of thymoma.
These neoplasms are usually located in the anterior mediastinum and are usually discovered during a routine chest x-ray. Symptoms can include cough, difficulty with swallowing, tightness of the chest, chest pain, and shortness of breath, although nonspecific symptoms may occur. These tumors generally are slow growing but are potentially invasive, with metastases to distant organs or lymph nodes. Staging is related to invasiveness.
Surgery is performed with the goal of a complete resection and is the mainstay of therapy. Radiation therapy is used in patients with invasive thymoma or thymic carcinoma, and chemotherapy is usually reserved for patients with advanced-stage disease who have not responded to radiation therapy or corticosteroids. Agents that have been effective include doxorubicin, cyclophosphamide, etoposide, cisplatin, ifosfamide, and vincristine.[63,66,69,70,71,72] Responses to regimens containing combinations of some of these agents have ranged from 26% to 100% and survival rates have been as high as 50%.[72,73] Response rates are lower for patients with thymic carcinoma, but 2-year survival rates have been reported to be as high as 50%. Sunitinib has yielded clinical responses in four patients with adult thymic carcinoma.
The most common primary tumors of the heart are benign. In adults, myxoma is the most common tumor; however, these tumors are rare in children. The most common primary heart tumors in children are rhabdomyomas and fibromas.[77,78,79,80] Other benign tumors include myxomas (as noted above), histiocytoid cardiomyopathy tumors, teratomas, hemangiomas, and neurofibromas (i.e., tumors of the nerves that innervate the muscles).[77,79,81,82,83] Myxomas are the most common noncutaneous finding in Carney complex, a rare syndrome characterized by lentigines, cardiac myxomas or other myxoid fibromas, and endocrine abnormalities.[84,85,86] A mutation of the PRKAR1A gene is noted in more than 90% of the cases of Carney complex.[84,87] Primary malignant pediatric heart tumors are rare but may include malignant teratomas, rhabdomyosarcomas, chondrosarcomas, infantile fibrosarcoma, and other sarcomas.[77,88]
Newer cardiac MRI techniques can identify the likely tumor type in the majority of children. However, histologic diagnosis remains the standard for diagnosing cardiac tumors.
The distribution of cardiac tumors differs in the fetal and neonatal period, with benign teratomas occurring more often. Multiple cardiac tumors noted in the fetal or neonatal period are highly associated with a diagnosis of tuberous sclerosis. A retrospective review of 94 patients with cardiac tumors detected by prenatal or neonatal echocardiography showed that 68% of the patients exhibited features of tuberous sclerosis. In another study, 79% (15 of 19) of patients with rhabdomyomas discovered prenatally had tuberous sclerosis, as did 96% of those diagnosed postnatally. Most rhabdomyomas, whether diagnosed prenatally or postnatally, will spontaneously regress.
Secondary tumors of the heart include metastatic spread of rhabdomyosarcoma, melanoma, leukemia, and carcinoma of other sites. Patients may be asymptomatic for long periods. Symptoms may include abnormalities of heart rhythm, enlargement of the heart, fluid in the pericardial sac, and congestive heart failure. Some patients present with sudden death. Successful treatment may require surgery, including transplantation, and chemotherapy appropriate for the type of cancer that is present.[92,93,94]; [Level of evidence: 3iiA]
Mesothelioma

This tumor can involve the membranous coverings of the lung, the heart, or the abdominal organs.[97,98,99] These tumors can spread over the surface of organs, without invading far into the underlying tissue, and may spread to regional or distant lymph nodes. Mesothelioma may develop after successful treatment of an earlier cancer, especially after treatment with radiation.[100,101] In adults, these tumors have been associated with exposure to asbestos, which was used as building insulation. The amount of exposure required to develop cancer is unknown, and there is no information about the risk for children exposed to asbestos.
Benign and malignant mesotheliomas cannot be differentiated using histologic criteria. A poor prognosis is associated with lesions that are diffuse and invasive or for those that recur. In general, the course of the disease is slow, and long-term survival is common. Diagnostic thoracoscopy should be considered in suspicious cases to confirm diagnosis.
Radical surgical resection has been attempted with mixed results. Treatment with various chemotherapeutic agents used for carcinomas or sarcomas may result in partial responses.[99,104] Pain is an infrequent symptom; however, radiation therapy may be used for palliation of pain.
Papillary serous carcinoma of the peritoneum is sometimes mistaken for mesothelioma. This tumor generally involves all surfaces lining the abdominal organs, including the surfaces of the ovary. Treatment includes surgical resection whenever possible and use of chemotherapy with agents such as cisplatin, carboplatin, and paclitaxel.
(Refer to the PDQ summary on adult Malignant Mesothelioma Treatment for more information.)
|1.||Yu DC, Grabowski MJ, Kozakewich HP, et al.: Primary lung tumors in children and adolescents: a 90-year experience. J Pediatr Surg 45 (6): 1090-5, 2010.|
|2.||Chung EM, Cube R, Hall GJ, et al.: From the archives of the AFIP: breast masses in children and adolescents: radiologic-pathologic correlation. Radiographics 29 (3): 907-31, 2009 May-Jun.|
|3.||Jayasinghe Y, Simmons PS: Fibroadenomas in adolescence. Curr Opin Obstet Gynecol 21 (5): 402-6, 2009.|
|4.||Valdes EK, Boolbol SK, Cohen JM, et al.: Malignant transformation of a breast fibroadenoma to cystosarcoma phyllodes: case report and review of the literature. Am Surg 71 (4): 348-53, 2005.|
|5.||Serour F, Gilad A, Kopolovic J, et al.: Secretory breast cancer in childhood and adolescence: report of a case and review of the literature. Med Pediatr Oncol 20 (4): 341-4, 1992.|
|6.||Drukker BH: Breast disease: a primer on diagnosis and management. Int J Fertil Womens Med 42 (5): 278-87, 1997 Sep-Oct.|
|7.||Rogers DA, Lobe TE, Rao BN, et al.: Breast malignancy in children. J Pediatr Surg 29 (1): 48-51, 1994.|
|8.||Rivera-Hueto F, Hevia-Vázquez A, Utrilla-Alcolea JC, et al.: Long-term prognosis of teenagers with breast cancer. Int J Surg Pathol 10 (4): 273-9, 2002.|
|9.||Kaste SC, Hudson MM, Jones DJ, et al.: Breast masses in women treated for childhood cancer: incidence and screening guidelines. Cancer 82 (4): 784-92, 1998.|
|10.||Costa NM, Rodrigues H, Pereira H, et al.: Secretory breast carcinoma--case report and review of the medical literature. Breast 13 (4): 353-5, 2004.|
|11.||Gutierrez JC, Housri N, Koniaris LG, et al.: Malignant breast cancer in children: a review of 75 patients. J Surg Res 147 (2): 182-8, 2008.|
|12.||Keegan TH, Derouen MC, Press DJ, et al.: Occurrence of breast cancer subtypes in adolescent and young adult women. Breast Cancer Res 14 (2): R55, 2012.|
|13.||Anders CK, Hsu DS, Broadwater G, et al.: Young age at diagnosis correlates with worse prognosis and defines a subset of breast cancers with shared patterns of gene expression. J Clin Oncol 26 (20): 3324-30, 2008.|
|14.||Gabriel CA, Domchek SM: Breast cancer in young women. Breast Cancer Res 12 (5): 212, 2010.|
|15.||Metayer C, Lynch CF, Clarke EA, et al.: Second cancers among long-term survivors of Hodgkin's disease diagnosed in childhood and adolescence. J Clin Oncol 18 (12): 2435-43, 2000.|
|16.||Swerdlow AJ, Barber JA, Hudson GV, et al.: Risk of second malignancy after Hodgkin's disease in a collaborative British cohort: the relation to age at treatment. J Clin Oncol 18 (3): 498-509, 2000.|
|17.||van Leeuwen FE, Klokman WJ, Veer MB, et al.: Long-term risk of second malignancy in survivors of Hodgkin's disease treated during adolescence or young adulthood. J Clin Oncol 18 (3): 487-97, 2000.|
|18.||Henderson TO, Amsterdam A, Bhatia S, et al.: Systematic review: surveillance for breast cancer in women treated with chest radiation for childhood, adolescent, or young adult cancer. Ann Intern Med 152 (7): 444-55; W144-54, 2010.|
|19.||Lal DR, Clark I, Shalkow J, et al.: Primary epithelial lung malignancies in the pediatric population. Pediatr Blood Cancer 45 (5): 683-6, 2005.|
|20.||Weldon CB, Shamberger RC: Pediatric pulmonary tumors: primary and metastatic. Semin Pediatr Surg 17 (1): 17-29, 2008.|
|21.||Hancock BJ, Di Lorenzo M, Youssef S, et al.: Childhood primary pulmonary neoplasms. J Pediatr Surg 28 (9): 1133-6, 1993.|
|22.||Vadasz P, Palffy G, Egervary M, et al.: Diagnosis and treatment of bronchial carcinoid tumors: clinical and pathological review of 120 operated patients. Eur J Cardiothorac Surg 7 (1): 8-11, 1993.|
|23.||Kulke MH, Mayer RJ: Carcinoid tumors. N Engl J Med 340 (11): 858-68, 1999.|
|24.||Oliaro A, Filosso PL, Donati G, et al.: Atypical bronchial carcinoids. Review of 46 patients. J Cardiovasc Surg (Torino) 41 (1): 131-5, 2000.|
|25.||Moraes TJ, Langer JC, Forte V, et al.: Pediatric pulmonary carcinoid: a case report and review of the literature. Pediatr Pulmonol 35 (4): 318-22, 2003.|
|26.||Al-Qahtani AR, Di Lorenzo M, Yazbeck S: Endobronchial tumors in children: Institutional experience and literature review. J Pediatr Surg 38 (5): 733-6, 2003.|
|27.||Roby BB, Drehner D, Sidman JD: Pediatric tracheal and endobronchial tumors: an institutional experience. Arch Otolaryngol Head Neck Surg 137 (9): 925-9, 2011.|
|28.||Soga J, Yakuwa Y: Bronchopulmonary carcinoids: An analysis of 1,875 reported cases with special reference to a comparison between typical carcinoids and atypical varieties. Ann Thorac Cardiovasc Surg 5 (4): 211-9, 1999.|
|29.||Fauroux B, Aynie V, Larroquet M, et al.: Carcinoid and mucoepidermoid bronchial tumours in children. Eur J Pediatr 164 (12): 748-52, 2005.|
|30.||Abuzetun JY, Hazin R, Suker M, et al.: Primary squamous cell carcinoma of the lung with bony metastasis in a 13-year-old boy: case report and review of literature. J Pediatr Hematol Oncol 30 (8): 635-7, 2008.|
|31.||Rizzardi G, Marulli G, Calabrese F, et al.: Bronchial carcinoid tumours in children: surgical treatment and outcome in a single institution. Eur J Pediatr Surg 19 (4): 228-31, 2009.|
|32.||Lack EE, Harris GB, Eraklis AJ, et al.: Primary bronchial tumors in childhood. A clinicopathologic study of six cases. Cancer 51 (3): 492-7, 1983.|
|33.||Ahel V, Zubovic I, Rozmanic V: Bronchial adenoid cystic carcinoma with saccular bronchiectasis as a cause of recurrent pneumonia in children. Pediatr Pulmonol 12 (4): 260-2, 1992.|
|34.||Gaissert HA, Mathisen DJ, Grillo HC, et al.: Tracheobronchial sleeve resection in children and adolescents. J Pediatr Surg 29 (2): 192-7; discussion 197-8, 1994.|
|35.||Jalal A, Jeyasingham K: Bronchoplasty for malignant and benign conditions: a retrospective study of 44 cases. Eur J Cardiothorac Surg 17 (4): 370-6, 2000.|
|36.||Shivastava R, Saha A, Mehera B, et al.: Pleuropulmonary blastoma: transition from type I (cystic) to type III (solid). Singapore Med J 48 (7): e190-2, 2007.|
|37.||Hill DA, Jarzembowski JA, Priest JR, et al.: Type I pleuropulmonary blastoma: pathology and biology study of 51 cases from the international pleuropulmonary blastoma registry. Am J Surg Pathol 32 (2): 282-95, 2008.|
|38.||Priest JR, Magnuson J, Williams GM, et al.: Cerebral metastasis and other central nervous system complications of pleuropulmonary blastoma. Pediatr Blood Cancer 49 (3): 266-73, 2007.|
|39.||Priest JR, Hill DA, Williams GM, et al.: Type I pleuropulmonary blastoma: a report from the International Pleuropulmonary Blastoma Registry. J Clin Oncol 24 (27): 4492-8, 2006.|
|40.||Miniati DN, Chintagumpala M, Langston C, et al.: Prenatal presentation and outcome of children with pleuropulmonary blastoma. J Pediatr Surg 41 (1): 66-71, 2006.|
|41.||Indolfi P, Casale F, Carli M, et al.: Pleuropulmonary blastoma: management and prognosis of 11 cases. Cancer 89 (6): 1396-401, 2000.|
|42.||Indolfi P, Bisogno G, Casale F, et al.: Prognostic factors in pleuro-pulmonary blastoma. Pediatr Blood Cancer 48 (3): 318-23, 2007.|
|43.||Priest JR, Andic D, Arbuckle S, et al.: Great vessel/cardiac extension and tumor embolism in pleuropulmonary blastoma: a report from the International Pleuropulmonary Blastoma Registry. Pediatr Blood Cancer 56 (4): 604-9, 2011.|
|44.||Hill DA, Ivanovich J, Priest JR, et al.: DICER1 mutations in familial pleuropulmonary blastoma. Science 325 (5943): 965, 2009.|
|45.||Slade I, Bacchelli C, Davies H, et al.: DICER1 syndrome: clarifying the diagnosis, clinical features and management implications of a pleiotropic tumour predisposition syndrome. J Med Genet 48 (4): 273-8, 2011.|
|46.||Priest JR, McDermott MB, Bhatia S, et al.: Pleuropulmonary blastoma: a clinicopathologic study of 50 cases. Cancer 80 (1): 147-61, 1997.|
|47.||Cross SF, Arbuckle S, Priest JR, et al.: Familial pleuropulmonary blastoma in Australia. Pediatr Blood Cancer 55 (7): 1417-9, 2010.|
|48.||Gutweiler JR, Labelle J, Suh MY, et al.: A familial case of pleuropulmonary blastoma. Eur J Pediatr Surg 18 (3): 192-4, 2008.|
|49.||Bouron-Dal Soglio D, Harvey I, Yazbeck S, et al.: An association of pleuropulmonary blastoma and cystic nephroma: possible genetic association. Pediatr Dev Pathol 9 (1): 61-4, 2006 Jan-Feb.|
|50.||Boman F, Hill DA, Williams GM, et al.: Familial association of pleuropulmonary blastoma with cystic nephroma and other renal tumors: a report from the International Pleuropulmonary Blastoma Registry. J Pediatr 149 (6): 850-854, 2006.|
|51.||Priest JR, Williams GM, Manera R, et al.: Ciliary body medulloepithelioma: four cases associated with pleuropulmonary blastoma--a report from the International Pleuropulmonary Blastoma Registry. Br J Ophthalmol 95 (7): 1001-5, 2011.|
|52.||Schultz KA, Pacheco MC, Yang J, et al.: Ovarian sex cord-stromal tumors, pleuropulmonary blastoma and DICER1 mutations: a report from the International Pleuropulmonary Blastoma Registry. Gynecol Oncol 122 (2): 246-50, 2011.|
|53.||Schmaltz C, Sauter S, Opitz O, et al.: Pleuro-pulmonary blastoma: a case report and review of the literature. Med Pediatr Oncol 25 (6): 479-84, 1995.|
|54.||Ohta Y, Fujishima M, Hasegawa H, et al.: High therapeutic effectiveness of postoperative irinotecan chemotherapy in a typical case of radiographically and pathologically diagnosed pleuropulmonary blastoma. J Pediatr Hematol Oncol 31 (5): 355-8, 2009.|
|55.||de Castro CG Jr, de Almeida SG, Gregianin LJ, et al.: High-dose chemotherapy and autologous peripheral blood stem cell rescue in a patient with pleuropulmonary blastoma. J Pediatr Hematol Oncol 25 (1): 78-81, 2003.|
|56.||Pleuropulmonary Blastoma Registry. St. Paul, Minn: Children's Hospitals and Clinics of St. Paul. Available online. Last accessed October 31, 2012.|
Abdominal cancers include adrenocortical tumors, carcinomas of the stomach, cancer of the pancreas, colorectal carcinomas, carcinoid tumors, and gastrointestinal stromal tumors. The prognosis, diagnosis, classification, and treatment of these abdominal cancers are discussed below. It must be emphasized that these cancers are seen very infrequently in patients younger than 15 years, and most of the evidence is derived from case series. (Refer to the Renal Cell Carcinoma section in the PDQ summary on Wilms Tumor and Other Childhood Kidney Tumors for more information.)
Carcinoma of the Adrenal Cortex
Adrenocortical tumors encompass a spectrum of diseases with an often seamless transition from benign (adenoma) to malignant (carcinoma) behavior. Their incidence in children is extremely low (only 0.2% of pediatric cancers). Adrenocortical tumors appear to follow a bimodal distribution, with peaks during the first and fourth decades.[2,3] In children, 25 new cases are expected to occur annually in the United States, for an estimated annual incidence of 0.2 to 0.3 cases per 1 million. Internationally, however, the incidence of adrenocortical tumors appears to vary substantially; it is particularly high in southern Brazil, where it is approximately 10 to 15 times that observed in the United States.[5,6,7] Childhood adrenocortical tumors typically present during the first 5 years of life (median age, 3–4 years), although there is a second, smaller peak during adolescence.[8,9,10,11] Female gender is consistently predominant in most studies, with a female-to-male ratio of 1.6 to 1.
Predisposing genetic factors have been implicated in more than 50% of the cases in North America and Europe, and in 95% of the Brazilian cases. Germline TP53 mutations are almost always the predisposing factor. In the non-Brazilian cases, relatives of children with adrenocortical tumors often, though not invariably, have a high incidence of other non-adrenal cancers (Li-Fraumeni syndrome), and germline mutations usually occur within the region coding for the TP53 DNA-binding domain (exons 5 to 8, primarily at highly conserved amino acid residues). In the Brazilian cases, in contrast, the patients' families do not exhibit a high incidence of cancer, and a single, unique mutation at codon 337 in exon 10 of the TP53 gene is consistently observed. Patients with Beckwith-Wiedemann and hemihypertrophy syndromes have a predisposition to cancer, and as many as 16% of their neoplasms are adrenocortical tumors. However, less than 1% of children with adrenocortical tumors have these syndromes. The distinctive genetic features of pediatric adrenocortical carcinoma have been reviewed.
Unlike in adult adrenocortical tumors, the histologic differentiation of adenomas from carcinomas is difficult in childhood tumors; approximately 10% to 20% of pediatric cases are adenomas.[2,9] The distinction between benign (adenoma) and malignant (carcinoma) tumors can be problematic. In fact, adenoma and carcinoma appear to share multiple genetic aberrations and may represent points on a continuum of cellular transformation. Macroscopically, adenomas tend to be well defined and spherical, and they never invade surrounding structures. They are typically small (usually <200 cm³), and some studies have used size as a criterion for adenoma. By contrast, carcinomas have macroscopic features suggestive of malignancy; they are larger, and they show marked lobulation with extensive areas of hemorrhage and necrosis. Microscopically, carcinomas comprise larger cells with eosinophilic cytoplasm, arranged in alveolar clusters. Several authors have proposed histologic criteria that may help to distinguish the two types of neoplasm.[18,19] However, morphologic criteria may not allow reliable distinction of benign and malignant adrenocortical tumors. Mitotic rate is consistently reported as the most important determinant of aggressive behavior. IGF2 expression also appears to discriminate between carcinomas and adenomas in adults, but not in children.[21,22] Other histopathologic variables are also important, and risk groups may be identified on the basis of a score derived from characteristics such as venous, capsular, or adjacent organ invasion; tumor necrosis; mitotic rate; and the presence of atypical mitoses.
Because pediatric adrenocortical tumors are almost universally functional, they cause endocrine disturbances, and a diagnosis is usually made 5 to 8 months after the first signs and symptoms emerge.[3,9] Virilization (pubic hair, accelerated growth, enlarged penis, clitoromegaly, hirsutism, and acne) due to excess androgen secretion is seen, alone or in combination with hypercortisolism, in more than 80% of patients. Isolated Cushing syndrome is very rare (5% of patients), and it appears to occur more frequently in older children.[3,9,23] Likewise, nonfunctional tumors are rare (<10%) and tend to occur in older children. Because of the hormone hypersecretion, it is possible to establish an endocrine profile for each particular tumor, which may facilitate the evaluation of response to treatment and monitoring for tumor recurrence.
In patients with localized disease, age between 0 and 3 years, virilization alone, normal blood pressure, disease stage I, absence of spillage during surgery, and tumor weight no greater than 200 grams were associated with a greater probability of survival. In a Cox regression model analysis, only stage I, virilization alone, and age 0 to 3 years were independently associated with a better outcome. Available data suggest that tumor size is especially important in children; patients with small tumors have an excellent outcome with surgery alone, regardless of histologic features. The overall probability of 5-year survival for children with adrenocortical tumors is reported to be 54% to 74%.[3,9,10,23,24]
Treatment of adrenocortical tumors
At the time of diagnosis, two-thirds of pediatric patients have limited disease (tumors can be completely resected), and the remaining patients have either unresectable or metastatic disease.
Treatment of childhood adrenocortical tumors has evolved from data derived from adult studies, and the same guidelines are used; surgery is the most important mode of therapy, and mitotane and cisplatin-based regimens, usually incorporating doxorubicin and etoposide, are recommended for patients with advanced disease.[7,25,26] An aggressive surgical approach to the primary tumor and all metastatic sites is recommended when feasible. Because of tumor friability, rupture of the capsule with resultant tumor spillage is frequent (approximately 20% of initial resections and 43% of resections after recurrence).[3,10] When the diagnosis of adrenocortical tumor is suspected, laparotomy and a curative procedure are recommended rather than fine-needle aspiration, to avoid the risk of tumor rupture. Laparoscopic resection is associated with a high risk of rupture and peritoneal carcinomatosis; thus, open adrenalectomy remains the standard of care.
Little information is available about the use of mitotane in children, although response rates appear to be similar to those seen in adults.[1,25] A retrospective analysis in Italy and Germany identified 177 adult patients with adrenocortical carcinoma. Recurrence-free survival was significantly prolonged by the use of adjuvant mitotane. Benefit was present with 1 to 3 g per day of mitotane and was associated with fewer toxic side effects than doses of 3 to 5 g per day. In a review of 11 children with advanced adrenocortical tumors treated with mitotane and a cisplatin-based chemotherapeutic regimen, measurable responses were seen in seven patients. The mitotane daily dose required for therapeutic levels was around 4 g/m2, and therapeutic levels were achieved after 4 to 6 months of therapy.
The use of radiation therapy in pediatric patients with adrenocortical tumors has not been consistently investigated. Adrenocortical tumors are generally considered to be radioresistant. Furthermore, because many children with adrenocortical tumors carry germline TP53 mutations that predispose to cancer, radiation may increase the incidence of secondary tumors. One study reported that three of five long-term survivors of pediatric adrenocortical tumors died of secondary sarcomas that arose within the radiation field.
(Refer to the PDQ summary on adult Adrenocortical Carcinoma Treatment for more information.)
Treatment options under clinical evaluation
Information about ongoing national and institutional clinical trials is available from the NCI Web site.
Carcinoma of the Stomach
Primary gastric tumors in children are rare, and carcinoma of the stomach is even more unusual. In one series, gastric cancer in children younger than 18 years accounted for 0.11% of all gastric cancer cases seen over an 18-year period. The frequency of and death rate from stomach cancer have declined worldwide over the past 50 years with the introduction of food preservation practices such as refrigeration.
The tumor must be distinguished from other conditions such as non-Hodgkin lymphoma, malignant carcinoid, leiomyosarcoma, and various benign conditions or tumors of the stomach. Symptoms include vague upper abdominal pain, which can be associated with poor appetite and weight loss. Other symptoms may include nausea, vomiting, changes in bowel habits, and weakness; Helicobacter pylori infection may also be present.[33,35] Many individuals become anemic but otherwise show no symptoms before the development of metastatic spread. Fiberoptic endoscopy can be used to visualize the tumor or to take a biopsy sample to confirm the diagnosis. Confirmation can also involve an x-ray examination of the upper gastrointestinal tract.
Treatment should include surgical excision with wide margins. For individuals who cannot have a complete surgical resection, radiation therapy may be used along with chemotherapeutic agents such as fluorouracil (5-FU) and irinotecan. Other agents that may be of value are the nitrosoureas with or without cisplatin, etoposide, doxorubicin, or mitomycin C.
Prognosis depends on the extent of the disease at the time of diagnosis and the success of treatment that is appropriate for the clinical situation. Because of the rarity of stomach cancer in the pediatric age group, little information exists regarding the treatment outcomes of children.
(Refer to the PDQ summary on adult Gastric Cancer Treatment for more information.)
Cancer of the Pancreas
Pancreatic tumors are rare in children and adolescents and carry a variable prognosis.[37,38,39] Tumors included in this general category can arise at any site within the pancreas. Cancers of the pancreas may be classified as adenocarcinomas, squamous cell carcinomas, acinic cell carcinomas, liposarcomas, lymphomas, papillary-cystic carcinomas, pancreatoblastomas, malignant insulinomas, glucagonomas, and gastrinomas.[40,41,42,43] Several cases of primitive neuroectodermal tumor of the pancreas have been reported in children and young adults. Pancreatoblastoma is reported to be associated with Beckwith-Wiedemann syndrome and Cushing syndrome.[45,46]
Most pancreatic tumors do not secrete hormones, although some tumors secrete insulin, which can lead to symptoms of weakness, fatigue, hypoglycemia, and coma.[40,47] If the tumor interferes with the normal function of the islet cells, patients may have watery diarrhea or abnormalities of salt balance. Both carcinoma of the pancreas and pancreatoblastoma can produce active hormones and can be associated with an abdominal mass, wasting, and pain.[48,49,50] At times, tumors arising in the head of the pancreas cause obstruction, which is associated with jaundice and gastrointestinal bleeding. Elevation of alpha-fetoprotein has been seen in pancreatoblastoma and acinar cell carcinoma.[43,51,52,53]
Diagnosis of pancreatic tumors is usually established by biopsy, using laparotomy or a minimally invasive surgery (e.g., laparoscopy). A diagnosis can be achieved only after ruling out various benign and cancerous lesions.
Solid pseudopapillary neoplasm of the pancreas is a rare tumor of borderline malignancy that has been reported in children but more commonly occurs in young women.[54,55,56,57] Treatment consists of complete tumor resection (ideally without biopsy). Metastases may occur, but in general, prognosis is good following surgery alone.[58,59]; [Level of evidence: 3iiA]; [Level of evidence: 3iiDi]
Treatment includes various surgical procedures, ranging from removal of part of the pancreas to removal of the pancreas and duodenum. Complete resection is usually possible and long-term survival is likely, though pancreatoblastoma has a high recurrence rate.[41,51]; [Level of evidence: 3iiA] For pediatric patients, the effectiveness of radiation therapy is not known. Chemotherapy may be useful for treatment of localized or metastatic pancreatic carcinoma. The combination of cisplatin and doxorubicin has produced responses in pancreatoblastoma prior to tumor resection.[63,64] Postoperative treatment with cisplatin, doxorubicin, ifosfamide, and etoposide has also produced responses in patients with pancreatoblastoma, although surgery is the mainstay of therapy. [Level of evidence: 3iiiA] Other agents that may be of value include 5-FU, streptozotocin, mitomycin C, carboplatin, gemcitabine, and irinotecan. Response rates and survival rates generally are not good.
(Refer to the PDQ summary on adult Pancreatic Cancer Treatment for more information.)
Colorectal Carcinoma
Carcinoma of the large bowel is rare in the pediatric age group. It is seen in one per 1 million persons younger than 20 years in the United States annually, and fewer than 100 cases are diagnosed in children each year in the United States. From 1973 to 2006, the SEER database recorded 174 cases of colorectal cancer in patients younger than 19 years.
In children, 40% to 60% of tumors arise on the right side of the colon, in contrast to adults, in whom tumors arise predominantly on the left side. Most reports also suggest that children present with more advanced disease and have a worse outcome.[66,68,69,70,71,72,73,74,75,76,77,78,79,80]
Most tumors in the pediatric age group are poorly differentiated mucin-producing carcinomas and many are of the signet ring cell type,[66,69,73] whereas only about 15% of adult lesions are of this histology. The tumors of younger patients with this histologic variant may be less responsive to chemotherapy. In the adolescent and young adult population, colorectal cancers have a higher incidence of mucinous histology, signet ring cells, microsatellite instability, and mutations in the mismatch repair genes. These tumors arise from the surface of the bowel, usually at the site of an adenomatous polyp. The tumor may extend into the muscle layer surrounding the bowel, or the tumor may perforate the bowel entirely and seed through the spaces around the bowel, including intra-abdominal fat, lymph nodes, liver, ovaries, and the surface of other loops of bowel. A high incidence of metastasis involving the pelvis, ovaries, or both may be present in girls. Colorectal cancers in younger patients have a high incidence of microsatellite instability, and noninherited sporadic tumors in younger patients often lack KRAS mutations and other cytogenetic anomalies seen in older patients.
Genetic syndromes associated with colorectal cancer
About 20% to 30% of adult patients with colorectal cancer have a significant history of familial cancer; of these, about 5% have a well-defined genetic syndrome. The incidence of these syndromes in children has not been well defined. In one review, 16% of patients younger than 40 years had a predisposing factor for the development of colorectal cancer. A later study documented immunohistochemical evidence of mismatch repair deficiency in 31% of colorectal carcinoma samples in patients aged 30 years or younger. The most common genetic syndromes associated with the development of colorectal cancer are shown in Tables 2 and 3.
Table 2.
|Syndrome||Gene||Gene Function||Hereditary Pattern|
|Attenuated familial adenomatous polyposis||APC (5' mutations), AXIN2||Tumor suppressor||Dominant|
|Familial adenomatous polyposis (Gardner syndrome)||APC||Tumor suppressor||Dominant|
|Lynch syndrome (hereditary nonpolyposis colorectal cancer)||MSH2, MLH1, MSH6, PMS2, EPCAM||Repair/stability||Dominant|
|Li-Fraumeni syndrome||TP53 (p53)||Tumor suppressor||Dominant|
|Turcot syndrome||APC||Tumor suppressor||Dominant|
Table 3.
|Syndrome||Gene||Gene Function||Hereditary Pattern|
|Cowden syndrome||PTEN||Tumor suppressor||Dominant|
|Juvenile polyposis syndrome||BMPR1A, SMAD4, ENG||Tumor suppressor||Dominant|
|Peutz-Jeghers syndrome||STK11||Tumor suppressor||Dominant|
Familial polyposis is inherited as a dominant trait, which confers a high degree of risk. Early diagnosis and surgical removal of the colon eliminate the risk of developing carcinomas of the large bowel. Some colorectal carcinomas in young people, however, may be associated with a mutation of the adenomatous polyposis coli (APC) gene, which is also associated with an increased risk of brain tumors and hepatoblastoma. The familial APC syndrome is caused by mutation of a gene on chromosome 5q that normally suppresses proliferation of cells lining the intestine and the later development of polyps. A double-blind, placebo-controlled, randomized phase I trial in children aged 10 to 14 years with familial adenomatous polyposis (FAP) reported that celecoxib at a dose of 16 mg/kg/day is safe for administration for up to 3 months. At this dose, there was a significant decrease in the number of polyps detected on colonoscopy.[Level of evidence: 1iiDiv] The role of celecoxib in the management of FAP is not known.
Another tumor suppressor gene on chromosome 18 is associated with progression of polyps to malignant form. Multiple colon carcinomas have been associated with neurofibromatosis type I and several other rare syndromes.
Presenting symptoms are nonspecific and include abdominal pain, weight loss, change in bowel habits, anemia, and bleeding; the median duration of symptoms was about 3 months in one series.[66,69,92] Changes in bowel habits may be associated with tumors of the rectum or lower colon. Tumors of the right colon may cause more subtle symptoms but are often associated with an abdominal mass, weight loss, decreased appetite, and blood in the stool. Any tumor that causes complete obstruction of the large bowel can cause bowel perforation and spread of the tumor cells within the abdominal cavity.
Diagnostic studies that may be of value include examination of the stool for blood, studies of liver and kidney function, measurement of carcinoembryonic antigen, and various medical imaging studies, including direct examination using colonoscopy to detect polyps in the large bowel. Other conventional radiographic studies include barium enema or video-capsule endoscopy followed by computed tomography of the chest and bone scans.[79,82,93]
Most patients present with evidence of metastatic disease, either as gross tumor or as microscopic deposits in lymph nodes, on the surface of the bowel, or on intra-abdominal organs.[71,73] Complete surgical excision is the most important prognostic factor and should be the primary aim of the surgeon, but in most instances this is impossible; removal of large portions of tumor provides little benefit for the individuals with extensive metastatic disease. Most patients with microscopic metastatic disease generally develop gross metastatic disease, and few individuals with metastatic disease at diagnosis become long-term survivors.
Current therapy includes the use of radiation for rectal and lower colon tumors, in conjunction with chemotherapy using 5-FU with leucovorin. Other agents, including irinotecan, may be of value.[Level of evidence: 3iiiA] No significant benefit has been determined for interferon-alpha given in conjunction with 5-FU/leucovorin. A recent review of nine clinical trials comprising 138 patients younger than 40 years demonstrated that the use of combination chemotherapy improved progression-free and overall survival (OS) in these patients. Furthermore, OS and response rates to chemotherapy were similar to those observed in older patients.
(Refer to the PDQ summaries on adult Colon Cancer Treatment and Rectal Cancer Treatment for more information.)
Carcinoid Tumors
Carcinoid tumors, like bronchial adenomas, may be benign or malignant and can involve the lining of the lung, the large or small bowel, or the liver.[97,98,99,100,101,102] Most lung lesions are benign; however, some metastasize.
Most carcinoid tumors of the appendix are discovered incidentally at the time of appendectomy and are small, localized tumors; simple appendectomy is the therapy of choice.[104,105] For larger (>2 cm) tumors or tumors that have spread to local nodes, cecectomy or, rarely, right hemicolectomy is the usual treatment. It has become accepted practice to remove the entire right colon in patients with large carcinoid tumors of the appendix (>2 cm in diameter) or with tumors that have spread to the nodes; however, this practice remains controversial. A MEDLINE search did not find any documented cases of localized appendiceal carcinoid in children younger than 18 years that relapsed after complete resection. Treatment of metastatic carcinoid tumors of the large bowel or stomach is more complicated and requires treatment similar to that given for colorectal carcinoma. (Refer to the PDQ summary on adult Gastrointestinal Carcinoid Tumors for therapeutic options in patients with malignant carcinoid tumors.)
The carcinoid syndrome, caused by excessive secretion of serotonin and other vasoactive substances, is characterized by flushing and labile blood pressure and is associated with metastatic spread of the tumor to the liver. Symptoms may be lessened by giving somatostatin analogs, which are available in short-acting and long-acting forms. Occasionally, carcinoids may produce ectopic ACTH and cause Cushing syndrome.
Gastrointestinal Stromal Tumors (GIST)
Gastrointestinal stromal tumors (GIST) are the most common mesenchymal neoplasms of the gastrointestinal tract in adults. These tumors are rare in children. Approximately 2% of all GIST occur in children and young adults;[112,113,114] in one series, pediatric GIST accounted for 2.5% of all pediatric nonrhabdomyosarcomatous soft tissue sarcomas. Previously, these tumors were diagnosed as leiomyomas, leiomyosarcomas, and leiomyoblastomas. In pediatric patients, GIST are most commonly located in the stomach and usually occur in adolescent females.[116,117]
Histology and molecular genetics
Histologically, pediatric GIST have a predominance of epithelioid or epithelioid/spindle cell morphology and, unlike adult GIST, their mitotic rate does not appear to accurately predict clinical behavior.[116,124] Most pediatric patients with GIST present during the second decade of life with anemia-related gastrointestinal bleeding. In addition, pediatric GIST have a high propensity for multifocality (23%) and nodal metastases.[116,125] These features may account for the high incidence of local recurrence seen in this patient population.
Pediatric GIST is biologically different from adult GIST. Activating mutations of KIT and PDGFRA, which are seen in 90% of adult GIST, are present in only 11% of pediatric GIST.[116,125,126] In addition, unlike adult KIT-mutant GIST, pediatric GIST have minimal large-scale chromosomal changes; expression of the insulin-like growth factor 1 receptor (IGF1R) is significantly higher, and IGF1R is amplified, in these tumors, suggesting that administration of an IGF1R inhibitor might be therapeutically beneficial in these patients.[126,127]
Recent studies have revealed that about 12% of patients with wild-type GIST and a negative history of paraganglioma have germline mutations in the SDHB or SDHC genes. In addition, by immunohistochemistry, SDHB expression is absent in all pediatric wild-type GIST, implicating cellular respiration defects in the pathogenesis of this disease. These findings support the notion that pediatric patients with wild-type GIST should be offered testing for constitutional mutations of the SDH complex. The routine use of immunohistochemistry has documented lack of SDHB expression in 94% of children younger than 20 years with wild-type GIST, and some investigators now favor the term SDH-deficient GIST. These patients lack KIT, PDGFRA, and BRAF mutations in the primary tumor and lack SDHB immunoreactivity in the tumor. SDH-deficient GIST more commonly affects females, has an indolent clinical course, and occurs in the stomach.
Treatment of GIST
Once the diagnosis of pediatric GIST is established, it is recommended that patients be seen at centers with expertise in the treatment of GIST and that all samples undergo mutational analysis for KIT (exons 9, 11, 13, 17), PDGFRA (exons 12, 14, 18), and BRAF (V600E).[129,130]
Treatment of GIST varies based on whether a mutation is detected.
A randomized clinical trial in adults demonstrated that administration of adjuvant imatinib mesylate improved event-free survival in adult patients with GIST, but this benefit was restricted to those with KIT exon 11 and PDGFRA mutations; thus, the use of this agent in the adjuvant setting in pediatric wild-type GIST cannot be recommended. Responses to imatinib and sunitinib in pediatric patients with wild-type GIST are uncommon and consist mainly of disease stabilization.[116,132,133] In a review of ten patients who were treated with imatinib mesylate, one patient experienced a partial response and three patients had stable disease. In another study of six children with imatinib-resistant GIST treated with sunitinib, one partial response and five cases of stable disease were reported.
|1.||Ribeiro RC, Figueiredo B: Childhood adrenocortical tumours. Eur J Cancer 40 (8): 1117-26, 2004.|
|2.||Wooten MD, King DK: Adrenal cortical carcinoma. Epidemiology and treatment with mitotane and a review of the literature. Cancer 72 (11): 3145-55, 1993.|
|3.||Michalkiewicz E, Sandrini R, Figueiredo B, et al.: Clinical and outcome characteristics of children with adrenocortical tumors: a report from the International Pediatric Adrenocortical Tumor Registry. J Clin Oncol 22 (5): 838-45, 2004.|
|4.||Berstein L, Gurney JG: Carcinomas and other malignant epithelial neoplasms. In: Ries LA, Smith MA, Gurney JG, et al., eds.: Cancer Incidence and Survival Among Children and Adolescents: United States SEER Program 1975-1995. Bethesda, Md: National Cancer Institute, SEER Program, 1999. NIH Pub. No. 99-4649, Chapter 11, pp 139-148. Also available online. Last accessed October 31, 2012.|
|5.||Figueiredo BC, Sandrini R, Zambetti GP, et al.: Penetrance of adrenocortical tumours associated with the germline TP53 R337H mutation. J Med Genet 43 (1): 91-6, 2006.|
|6.||Pianovski MA, Maluf EM, de Carvalho DS, et al.: Mortality rate of adrenocortical tumors in children under 15 years of age in Curitiba, Brazil. Pediatr Blood Cancer 47 (1): 56-60, 2006.|
|7.||Rodriguez-Galindo C, Figueiredo BC, Zambetti GP, et al.: Biology, clinical characteristics, and management of adrenocortical tumors in children. Pediatr Blood Cancer 45 (3): 265-73, 2005.|
|8.||Ribeiro RC, Sandrini Neto RS, Schell MJ, et al.: Adrenocortical carcinoma in children: a study of 40 cases. J Clin Oncol 8 (1): 67-74, 1990.|
|9.||Wieneke JA, Thompson LD, Heffess CS: Adrenal cortical neoplasms in the pediatric population: a clinicopathologic and immunophenotypic analysis of 83 patients. Am J Surg Pathol 27 (7): 867-81, 2003.|
|10.||Sandrini R, Ribeiro RC, DeLacerda L: Childhood adrenocortical tumors. J Clin Endocrinol Metab 82 (7): 2027-31, 1997.|
|11.||Bugg MF, Ribeiro RC, Roberson PK, et al.: Correlation of pathologic features with clinical outcome in pediatric adrenocortical neoplasia. A study of a Brazilian population. Brazilian Group for Treatment of Childhood Adrenocortical Tumors. Am J Clin Pathol 101 (5): 625-9, 1994.|
|12.||Michalkiewicz EL, Sandrini R, Bugg MF, et al.: Clinical characteristics of small functioning adrenocortical tumors in children. Med Pediatr Oncol 28 (3): 175-8, 1997.|
|13.||Ribeiro RC, Sandrini F, Figueiredo B, et al.: An inherited p53 mutation that contributes in a tissue-specific manner to pediatric adrenal cortical carcinoma. Proc Natl Acad Sci U S A 98 (16): 9330-5, 2001.|
|14.||Hoyme HE, Seaver LH, Jones KL, et al.: Isolated hemihyperplasia (hemihypertrophy): report of a prospective multicenter study of the incidence of neoplasia and review. Am J Med Genet 79 (4): 274-8, 1998.|
|15.||Steenman M, Westerveld A, Mannens M: Genetics of Beckwith-Wiedemann syndrome-associated tumors: common genetic pathways. Genes Chromosomes Cancer 28 (1): 1-13, 2000.|
|16.||El Wakil A, Doghman M, Latre De Late P, et al.: Genetics and genomics of childhood adrenocortical tumors. Mol Cell Endocrinol 336 (1-2): 169-73, 2011.|
|17.||Figueiredo BC, Stratakis CA, Sandrini R, et al.: Comparative genomic hybridization analysis of adrenocortical tumors of childhood. J Clin Endocrinol Metab 84 (3): 1116-21, 1999.|
|18.||Weiss LM: Comparative histologic study of 43 metastasizing and nonmetastasizing adrenocortical tumors. Am J Surg Pathol 8 (3): 163-9, 1984.|
|19.||van Slooten H, Schaberg A, Smeenk D, et al.: Morphologic characteristics of benign and malignant adrenocortical tumors. Cancer 55 (4): 766-73, 1985.|
|20.||Stojadinovic A, Ghossein RA, Hoos A, et al.: Adrenocortical carcinoma: clinical, morphologic, and molecular characterization. J Clin Oncol 20 (4): 941-50, 2002.|
|21.||Almeida MQ, Fragoso MC, Lotfi CF, et al.: Expression of insulin-like growth factor-II and its receptor in pediatric and adult adrenocortical tumors. J Clin Endocrinol Metab 93 (9): 3524-31, 2008.|
|22.||West AN, Neale GA, Pounds S, et al.: Gene expression profiling of childhood adrenocortical tumors. Cancer Res 67 (2): 600-8, 2007.|
|23.||Hanna AM, Pham TH, Askegard-Giesmann JR, et al.: Outcome of adrenocortical tumors in children. J Pediatr Surg 43 (5): 843-9, 2008.|
|24.||Klein JD, Turner CG, Gray FL, et al.: Adrenal cortical tumors in children: factors associated with poor outcome. J Pediatr Surg 46 (6): 1201-7, 2011.|
|25.||Zancanella P, Pianovski MA, Oliveira BH, et al.: Mitotane associated with cisplatin, etoposide, and doxorubicin in advanced childhood adrenocortical carcinoma: mitotane monitoring and tumor regression. J Pediatr Hematol Oncol 28 (8): 513-24, 2006.|
|26.||Hovi L, Wikström S, Vettenranta K, et al.: Adrenocortical carcinoma in children: a role for etoposide and cisplatin adjuvant therapy? Preliminary report. Med Pediatr Oncol 40 (5): 324-6, 2003.|
|27.||Stewart JN, Flageole H, Kavan P: A surgical approach to adrenocortical tumors in children: the mainstay of treatment. J Pediatr Surg 39 (5): 759-63, 2004.|
|28.||Kardar AH: Rupture of adrenal carcinoma after biopsy. J Urol 166 (3): 984, 2001.|
|29.||Gonzalez RJ, Shapiro S, Sarlis N, et al.: Laparoscopic resection of adrenal cortical carcinoma: a cautionary note. Surgery 138 (6): 1078-85; discussion 1085-6, 2005.|
|30.||Terzolo M, Angeli A, Fassnacht M, et al.: Adjuvant mitotane treatment for adrenocortical carcinoma. N Engl J Med 356 (23): 2372-80, 2007.|
|31.||Driver CP, Birch J, Gough DC, et al.: Adrenal cortical tumors in childhood. Pediatr Hematol Oncol 15 (6): 527-32, 1998 Nov-Dec.|
|32.||Curtis JL, Burns RC, Wang L, et al.: Primary gastric tumors of infancy and childhood: 54-year experience at a single institution. J Pediatr Surg 43 (8): 1487-93, 2008.|
|33.||Subbiah V, Varadhachary G, Herzog CE, et al.: Gastric adenocarcinoma in children and adolescents. Pediatr Blood Cancer 57 (3): 524-7, 2011.|
|34.||American Cancer Society.: Cancer Facts and Figures-2000. Atlanta, Ga: American Cancer Society, 2000.|
|35.||Rowland M, Drumm B: Helicobacter pylori infection and peptic ulcer disease in children. Curr Opin Pediatr 7 (5): 553-9, 1995.|
|36.||Ajani JA: Current status of therapy for advanced gastric carcinoma. Oncology (Huntingt) 12 (8 Suppl 6): 99-102, 1998.|
|37.||Chung EM, Travis MD, Conran RM: Pancreatic tumors in children: radiologic-pathologic correlation. Radiographics 26 (4): 1211-38, 2006 Jul-Aug.|
|38.||Perez EA, Gutierrez JC, Koniaris LG, et al.: Malignant pancreatic tumors: incidence and outcome in 58 pediatric patients. J Pediatr Surg 44 (1): 197-203, 2009.|
|39.||Dall'igna P, Cecchetto G, Bisogno G, et al.: Pancreatic tumors in children and adolescents: the Italian TREP project experience. Pediatr Blood Cancer 54 (5): 675-80, 2010.|
|40.||Vossen S, Goretzki PE, Goebel U, et al.: Therapeutic management of rare malignant pancreatic tumors in children. World J Surg 22 (8): 879-82, 1998.|
|41.||Shorter NA, Glick RD, Klimstra DS, et al.: Malignant pancreatic tumors in childhood and adolescence: The Memorial Sloan-Kettering experience, 1967 to present. J Pediatr Surg 37 (6): 887-92, 2002.|
|42.||Raffel A, Cupisti K, Krausch M, et al.: Therapeutic strategy of papillary cystic and solid neoplasm (PCSN): a rare non-endocrine tumor of the pancreas in children. Surg Oncol 13 (1): 1-6, 2004.|
|43.||Ellerkamp V, Warmann SW, Vorwerk P, et al.: Exocrine pancreatic tumors in childhood in Germany. Pediatr Blood Cancer 58 (3): 366-71, 2012.|
|44.||Movahedi-Lankarani S, Hruban RH, Westra WH, et al.: Primitive neuroectodermal tumors of the pancreas: a report of seven cases of a rare neoplasm. Am J Surg Pathol 26 (8): 1040-7, 2002.|
|45.||Muguerza R, Rodriguez A, Formigo E, et al.: Pancreatoblastoma associated with incomplete Beckwith-Wiedemann syndrome: case report and review of the literature. J Pediatr Surg 40 (8): 1341-4, 2005.|
|46.||Kletter GB, Sweetser DA, Wallace SF, et al.: Adrenocorticotropin-secreting pancreatoblastoma. J Pediatr Endocrinol Metab 20 (5): 639-42, 2007.|
|47.||Karachaliou F, Vlachopapadopoulou E, Kaldrymidis P, et al.: Malignant insulinoma in childhood. J Pediatr Endocrinol Metab 19 (5): 757-60, 2006.|
|48.||Schwartz MZ: Unusual peptide-secreting tumors in adolescents and children. Semin Pediatr Surg 6 (3): 141-6, 1997.|
|49.||Murakami T, Ueki K, Kawakami H, et al.: Pancreatoblastoma: case report and review of treatment in the literature. Med Pediatr Oncol 27 (3): 193-7, 1996.|
|50.||Imamura A, Nakagawa A, Okuno M, et al.: Pancreatoblastoma in an adolescent girl: case report and review of 26 Japanese cases. Eur J Surg 164 (4): 309-12, 1998.|
|51.||Dhebri AR, Connor S, Campbell F, et al.: Diagnosis, treatment and outcome of pancreatoblastoma. Pancreatology 4 (5): 441-51; discussion 452-3, 2004.|
|52.||Bendell JC, Lauwers GY, Willett C, et al.: Pancreatoblastoma in a teenage patient. Clin Adv Hematol Oncol 4 (2): 150-3; discussion 154, 2006.|
|53.||Bien E, Godzinski J, Dall'igna P, et al.: Pancreatoblastoma: a report from the European cooperative study group for paediatric rare tumours (EXPeRT). Eur J Cancer 47 (15): 2347-52, 2011.|
|54.||Papavramidis T, Papavramidis S: Solid pseudopapillary tumors of the pancreas: review of 718 patients reported in English literature. J Am Coll Surg 200 (6): 965-72, 2005.|
|55.||Choi SH, Kim SM, Oh JT, et al.: Solid pseudopapillary tumor of the pancreas: a multicenter study of 23 pediatric cases. J Pediatr Surg 41 (12): 1992-5, 2006.|
|56.||Nakahara K, Kobayashi G, Fujita N, et al.: Solid-pseudopapillary tumor of the pancreas showing a remarkable reduction in size over the 10-year follow-up period. Intern Med 47 (14): 1335-9, 2008.|
|57.||Soloni P, Cecchetto G, Dall'igna P, et al.: Management of unresectable solid papillary cystic tumor of the pancreas. A case report and literature review. J Pediatr Surg 45 (5): e1-6, 2010.|
|58.||Moholkar S, Sebire NJ, Roebuck DJ: Solid-pseudopapillary neoplasm of the pancreas: radiological-pathological correlation. Pediatr Radiol 35 (8): 819-22, 2005.|
|59.||Peng CH, Chen DF, Zhou GW, et al.: The solid-pseudopapillary tumor of pancreas: the clinical characteristics and surgical treatment. J Surg Res 131 (2): 276-82, 2006.|
|60.||Park M, Koh KN, Kim BE, et al.: Pancreatic neoplasms in childhood and adolescence. J Pediatr Hematol Oncol 33 (4): 295-300, 2011.|
|61.||Lee SE, Jang JY, Hwang DW, et al.: Clinical features and outcome of solid pseudopapillary neoplasm: differences between adults and children. Arch Surg 143 (12): 1218-21, 2008.|
|62.||Yu DC, Kozakewich HP, Perez-Atayde AR, et al.: Childhood pancreatic tumors: a single institution experience. J Pediatr Surg 44 (12): 2267-72, 2009.|
|63.||Défachelles AS, Martin De Lassalle E, Boutard P, et al.: Pancreatoblastoma in childhood: clinical course and therapeutic management of seven patients. Med Pediatr Oncol 37 (1): 47-52, 2001.|
|64.||Yonekura T, Kosumi T, Hokim M, et al.: Aggressive surgical and chemotherapeutic treatment of advanced pancreatoblastoma associated with tumor thrombus in portal vein. J Pediatr Surg 41 (3): 596-8, 2006.|
|65.||Lee YJ, Hah JO: Long-term survival of pancreatoblastoma in children. J Pediatr Hematol Oncol 29 (12): 845-7, 2007.|
|66.||Saab R, Furman WL: Epidemiology and management options for colorectal cancer in children. Paediatr Drugs 10 (3): 177-92, 2008.|
|67.||Ferrari A, Casanova M, Massimino M, et al.: Peculiar features and tailored management of adult cancers occurring in pediatric age. Expert Rev Anticancer Ther 10 (11): 1837-51, 2010.|
|68.||Sharma AK, Gupta CR: Colorectal cancer in children: case report and review of literature. Trop Gastroenterol 22 (1): 36-9, 2001 Jan-Mar.|
|69.||Hill DA, Furman WL, Billups CA, et al.: Colorectal carcinoma in childhood and adolescence: a clinicopathologic review. J Clin Oncol 25 (36): 5808-14, 2007.|
|70.||Andersson A, Bergdahl L: Carcinoma of the colon in children: a report of six new cases and a review of the literature. J Pediatr Surg 11 (6): 967-71, 1976.|
|71.||Chantada GL, Perelli VB, Lombardi MG, et al.: Colorectal carcinoma in children, adolescents, and young adults. J Pediatr Hematol Oncol 27 (1): 39-41, 2005.|
|72.||Durno C, Aronson M, Bapat B, et al.: Family history and molecular features of children, adolescents, and young adults with colorectal carcinoma. Gut 54 (8): 1146-50, 2005.|
|73.||Ferrari A, Rognone A, Casanova M, et al.: Colorectal carcinoma in children and adolescents: the experience of the Istituto Nazionale Tumori of Milan, Italy. Pediatr Blood Cancer 50 (3): 588-93, 2008.|
|74.||Karnak I, Ciftci AO, Senocak ME, et al.: Colorectal carcinoma in children. J Pediatr Surg 34 (10): 1499-504, 1999.|
|75.||LaQuaglia MP, Heller G, Filippa DA, et al.: Prognostic factors and outcome in patients 21 years and under with colorectal carcinoma. J Pediatr Surg 27 (8): 1085-9; discussion 1089-90, 1992.|
|76.||Middelkamp JN, Haffner H: Carcinoma of the colon in children. Pediatrics 32: 558-71, 1963.|
|77.||Radhakrishnan CN, Bruce J: Colorectal cancers in children without any predisposing factors. A report of eight cases and review of the literature. Eur J Pediatr Surg 13 (1): 66-8, 2003.|
|78.||Taguchi T, Suita S, Hirata Y, et al.: Carcinoma of the colon in children: a case report and review of 41 Japanese cases. J Pediatr Gastroenterol Nutr 12 (3): 394-9, 1991.|
|79.||Pratt CB, Rao BN, Merchant TE, et al.: Treatment of colorectal carcinoma in adolescents and young adults with surgery, 5-fluorouracil/leucovorin/interferon-alpha 2a and radiation therapy. Med Pediatr Oncol 32 (6): 459-60, 1999.|
|80.||Sultan I, Rodriguez-Galindo C, El-Taani H, et al.: Distinct features of colorectal cancer in children and adolescents: a population-based study of 159 cases. Cancer 116 (3): 758-65, 2010.|
|81.||Tricoli JV, Seibel NL, Blair DG, et al.: Unique characteristics of adolescent and young adult acute lymphoblastic leukemia, breast cancer, and colon cancer. J Natl Cancer Inst 103 (8): 628-35, 2011.|
|82.||Kauffman WM, Jenkins JJ 3rd, Helton K, et al.: Imaging features of ovarian metastases from colonic adenocarcinoma in adolescents. Pediatr Radiol 25 (4): 286-8, 1995.|
|83.||Bleyer A, Barr R, Hayes-Lattin B, et al.: The distinctive biology of cancer in adolescents and young adults. Nat Rev Cancer 8 (4): 288-98, 2008.|
|84.||Gatalica Z, Torlakovic E: Pathology of the hereditary colorectal carcinoma. Fam Cancer 7 (1): 15-26, 2008.|
|85.||O'Connell JB, Maggard MA, Livingston EH, et al.: Colorectal cancer in the young. Am J Surg 187 (3): 343-8, 2004.|
|86.||Goel A, Nagasaka T, Spiegel J, et al.: Low frequency of Lynch syndrome among young patients with non-familial colorectal cancer. Clin Gastroenterol Hepatol 8 (11): 966-71, 2010.|
|87.||Erdman SH: Pediatric adenomatous polyposis syndromes: an update. Curr Gastroenterol Rep 9 (3): 237-44, 2007.|
|88.||Turcot J, Despres JP, St. Pierre F: Malignant tumors of the central nervous system associated with familial polyposis of the colon: Report of two cases. Dis Colon Rectum 2: 465-468, 1959.|
|89.||Vogelstein B, Fearon ER, Hamilton SR, et al.: Genetic alterations during colorectal-tumor development. N Engl J Med 319 (9): 525-32, 1988.|
|90.||Lynch PM, Ayers GD, Hawk E, et al.: The safety and efficacy of celecoxib in children with familial adenomatous polyposis. Am J Gastroenterol 105 (6): 1437-43, 2010.|
|91.||Pratt CB, Jane JA: Multiple colorectal carcinomas, polyposis coli, and neurofibromatosis, followed by multiple glioblastoma multiforme. J Natl Cancer Inst 83 (12): 880-1, 1991.|
|92.||Pappo A, Rodriguez-Galindo C, Furman W: Management of infrequent cancers of childhood. In: Pizzo PA, Poplack DG: Principles and Practice of Pediatric Oncology. 6th ed. Philadelphia, Pa: Lippincott Williams and Wilkins, 2010, pp 1098-1123.|
|93.||Postgate A, Hyer W, Phillips R, et al.: Feasibility of video capsule endoscopy in the management of children with Peutz-Jeghers syndrome: a blinded comparison with barium enterography for the detection of small bowel polyps. J Pediatr Gastroenterol Nutr 49 (4): 417-23, 2009.|
|94.||Madajewicz S, Petrelli N, Rustum YM, et al.: Phase I-II trial of high-dose calcium leucovorin and 5-fluorouracil in advanced colorectal cancer. Cancer Res 44 (10): 4667-9, 1984.|
|95.||Wolmark N, Bryant J, Smith R, et al.: Adjuvant 5-fluorouracil and leucovorin with or without interferon alfa-2a in colon carcinoma: National Surgical Adjuvant Breast and Bowel Project protocol C-05. J Natl Cancer Inst 90 (23): 1810-6, 1998.|
|96.||Blanke CD, Bot BM, Thomas DM, et al.: Impact of young age on treatment efficacy and safety in advanced colorectal cancer: a pooled analysis of patients from nine first-line phase III chemotherapy trials. J Clin Oncol 29 (20): 2781-6, 2011.|
|97.||Modlin IM, Sandor A: An analysis of 8305 cases of carcinoid tumors. Cancer 79 (4): 813-29, 1997.|
|98.||Deans GT, Spence RA: Neoplastic lesions of the appendix. Br J Surg 82 (3): 299-306, 1995.|
|99.||Doede T, Foss HD, Waldschmidt J: Carcinoid tumors of the appendix in children--epidemiology, clinical aspects and procedure. Eur J Pediatr Surg 10 (6): 372-7, 2000.|
|100.||Quaedvlieg PF, Visser O, Lamers CB, et al.: Epidemiology and survival in patients with carcinoid disease in The Netherlands. An epidemiological study with 2391 patients. Ann Oncol 12 (9): 1295-300, 2001.|
|101.||Broaddus RR, Herzog CE, Hicks MJ: Neuroendocrine tumors (carcinoid and neuroendocrine carcinoma) presenting at extra-appendiceal sites in childhood and adolescence. Arch Pathol Lab Med 127 (9): 1200-3, 2003.|
|102.||Foley DS, Sunil I, Debski R, et al.: Primary hepatic carcinoid tumor in children. J Pediatr Surg 43 (11): e25-8, 2008.|
|103.||Tormey WP, FitzGerald RJ: The clinical and laboratory correlates of an increased urinary 5-hydroxyindoleacetic acid. Postgrad Med J 71 (839): 542-5, 1995.|
|104.||Pelizzo G, La Riccia A, Bouvier R, et al.: Carcinoid tumors of the appendix in children. Pediatr Surg Int 17 (5-6): 399-402, 2001.|
|105.||Hatzipantelis E, Panagopoulou P, Sidi-Fragandrea V, et al.: Carcinoid tumors of the appendix in children: experience from a tertiary center in northern Greece. J Pediatr Gastroenterol Nutr 51 (5): 622-5, 2010.|
|106.||Dall'Igna P, Ferrari A, Luzzatto C, et al.: Carcinoid tumor of the appendix in childhood: the experience of two Italian institutions. J Pediatr Gastroenterol Nutr 40 (2): 216-9, 2005.|
|107.||Cernaianu G, Tannapfel A, Nounla J, et al.: Appendiceal carcinoid tumor with lymph node metastasis in a child: case report and review of the literature. J Pediatr Surg 45 (11): e1-5, 2010.|
|108.||Delaunoit T, Rubin J, Neczyporenko F, et al.: Somatostatin analogues in the treatment of gastroenteropancreatic neuroendocrine tumors. Mayo Clin Proc 80 (4): 502-6, 2005.|
|109.||More J, Young J, Reznik Y, et al.: Ectopic ACTH syndrome in children and adolescents. J Clin Endocrinol Metab 96 (5): 1213-22, 2011.|
|110.||Corless CL, Fletcher JA, Heinrich MC: Biology of gastrointestinal stromal tumors. J Clin Oncol 22 (18): 3813-25, 2004.|
|111.||Pappo AS, Janeway K, Laquaglia M, et al.: Special considerations in pediatric gastrointestinal tumors. J Surg Oncol 104 (8): 928-32, 2011.|
|112.||Prakash S, Sarran L, Socci N, et al.: Gastrointestinal stromal tumors in children and young adults: a clinicopathologic, molecular, and genomic study of 15 cases and review of the literature. J Pediatr Hematol Oncol 27 (4): 179-87, 2005.|
|113.||Miettinen M, Lasota J, Sobin LH: Gastrointestinal stromal tumors of the stomach in children and young adults: a clinicopathologic, immunohistochemical, and molecular genetic study of 44 cases with long-term follow-up and review of the literature. Am J Surg Pathol 29 (10): 1373-81, 2005.|
|114.||Benesch M, Wardelmann E, Ferrari A, et al.: Gastrointestinal stromal tumors (GIST) in children and adolescents: A comprehensive review of the current literature. Pediatr Blood Cancer 53 (7): 1171-9, 2009.|
|115.||Cypriano MS, Jenkins JJ, Pappo AS, et al.: Pediatric gastrointestinal stromal tumors and leiomyosarcoma. Cancer 101 (1): 39-50, 2004.|
|116.||Pappo AS, Janeway KA: Pediatric gastrointestinal stromal tumors. Hematol Oncol Clin North Am 23 (1): 15-34, vii, 2009.|
|117.||Benesch M, Leuschner I, Wardelmann E, et al.: Gastrointestinal stromal tumours in children and young adults: a clinicopathologic series with long-term follow-up from the database of the Cooperative Weichteilsarkom Studiengruppe (CWS). Eur J Cancer 47 (11): 1692-8, 2011.|
|118.||Otto C, Agaimy A, Braun A, et al.: Multifocal gastric gastrointestinal stromal tumors (GISTs) with lymph node metastases in children and young adults: a comparative clinical and histomorphological study of three cases including a new case of Carney triad. Diagn Pathol 6: 52, 2011.|
|119.||Carney JA: Carney triad: a syndrome featuring paraganglionic, adrenocortical, and possibly other endocrine tumors. J Clin Endocrinol Metab 94 (10): 3656-62, 2009.|
|120.||Pasini B, McWhinney SR, Bei T, et al.: Clinical and molecular genetics of patients with the Carney-Stratakis syndrome and germline mutations of the genes coding for the succinate dehydrogenase subunits SDHB, SDHC, and SDHD. Eur J Hum Genet 16 (1): 79-88, 2008.|
|121.||Miettinen M, Wang ZF, Sarlomo-Rikala M, et al.: Succinate dehydrogenase-deficient GISTs: a clinicopathologic, immunohistochemical, and molecular genetic study of 66 gastric GISTs with predilection to young age. Am J Surg Pathol 35 (11): 1712-21, 2011.|
|122.||Miettinen M, Fetsch JF, Sobin LH, et al.: Gastrointestinal stromal tumors in patients with neurofibromatosis 1: a clinicopathologic and molecular genetic study of 45 cases. Am J Surg Pathol 30 (1): 90-6, 2006.|
|123.||Li FP, Fletcher JA, Heinrich MC, et al.: Familial gastrointestinal stromal tumor syndrome: phenotypic and molecular features in a kindred. J Clin Oncol 23 (12): 2735-43, 2005.|
|124.||Miettinen M, Lasota J: Gastrointestinal stromal tumors: review on morphology, molecular pathology, prognosis, and differential diagnosis. Arch Pathol Lab Med 130 (10): 1466-78, 2006.|
|125.||Agaram NP, Laquaglia MP, Ustun B, et al.: Molecular characterization of pediatric gastrointestinal stromal tumors. Clin Cancer Res 14 (10): 3204-15, 2008.|
|126.||Janeway KA, Liegl B, Harlow A, et al.: Pediatric KIT wild-type and platelet-derived growth factor receptor alpha-wild-type gastrointestinal stromal tumors share KIT activation but not mechanisms of genetic progression with adult gastrointestinal stromal tumors. Cancer Res 67 (19): 9084-8, 2007.|
|127.||Tarn C, Rink L, Merkel E, et al.: Insulin-like growth factor 1 receptor is a potential therapeutic target for gastrointestinal stromal tumors. Proc Natl Acad Sci U S A 105 (24): 8387-92, 2008. Also available online. Last accessed October 31, 2012.|
|128.||Janeway KA, Kim SY, Lodish M, et al.: Defects in succinate dehydrogenase in gastrointestinal stromal tumors lacking KIT and PDGFRA mutations. Proc Natl Acad Sci U S A 108 (1): 314-8, 2011.|
|129.||Demetri GD, Benjamin RS, Blanke CD, et al.: NCCN Task Force report: management of patients with gastrointestinal stromal tumor (GIST)--update of the NCCN clinical practice guidelines. J Natl Compr Canc Netw 5 (Suppl 2): S1-29; quiz S30, 2007.|
|130.||Janeway KA, Weldon CB: Pediatric gastrointestinal stromal tumor. Semin Pediatr Surg 21 (1): 31-43, 2012.|
|131.||Dematteo RP, Ballman KV, Antonescu CR, et al.: Adjuvant imatinib mesylate after resection of localised, primary gastrointestinal stromal tumour: a randomised, double-blind, placebo-controlled trial. Lancet 373 (9669): 1097-104, 2009.|
|132.||Demetri GD, van Oosterom AT, Garrett CR, et al.: Efficacy and safety of sunitinib in patients with advanced gastrointestinal stromal tumour after failure of imatinib: a randomised controlled trial. Lancet 368 (9544): 1329-38, 2006.|
|133.||Demetri GD, von Mehren M, Blanke CD, et al.: Efficacy and safety of imatinib mesylate in advanced gastrointestinal stromal tumors. N Engl J Med 347 (7): 472-80, 2002.|
|134.||Janeway KA, Albritton KH, Van Den Abbeele AD, et al.: Sunitinib treatment in pediatric patients with advanced GIST following failure of imatinib. Pediatr Blood Cancer 52 (7): 767-71, 2009.|
Genital/urinary tumors include carcinoma of the bladder, non-germ cell testicular cancer, non-germ cell ovarian cancer, and carcinoma of the cervix and vagina. The prognosis, diagnosis, classification, and treatment of these genital/urinary tumors are discussed below. It must be emphasized that these tumors are seen very infrequently in patients younger than 15 years, and most of the evidence is derived from case series.
Carcinoma of the Bladder
Carcinoma of the bladder is extremely rare in children. The most common carcinoma to involve the bladder is papillary urothelial neoplasm of low malignant potential, which generally presents with hematuria.[1,2] In contrast to adults, most pediatric bladder carcinomas are low grade, superficial, and have a good prognosis following transurethral resection.[2,3,4,5,6] Squamous cell and more aggressive carcinomas, however, have been reported and may require a more aggressive surgical approach.[7,8] Bladder cancer in adolescents may develop as a consequence of alkylating-agent chemotherapy given for other childhood tumors or leukemia.[9,10] The association between cyclophosphamide and bladder cancer is the only established relationship between a specific anticancer drug and a solid tumor.
(Refer to the PDQ summary on adult Bladder Cancer Treatment for more information.)
Testicular Cancer (Non-Germ Cell)
Testicular tumors are very rare in young boys, accounting for 1% to 2% of all childhood tumors.[11,12] The most common testicular tumors are benign teratomas, followed by malignant non-seminomatous germ cell tumors. (Refer to the PDQ summary on Childhood Extracranial Germ Cell Tumors for more information.) Non-germ cell tumors such as sex cord–stromal tumors are exceedingly rare in prepubertal boys. In a small series, gonadal stromal tumors accounted for 8% to 13% of pediatric testicular tumors. In newborns and infants, juvenile granulosa cell tumors are the most common stromal cell tumor. In older males, Leydig cell tumors are more common. Stromal cell tumors have been described as benign in young boys.[16,17,18]
There are conflicting data about the malignant potential of these tumors in older males. Most case reports suggest that in pediatric patients, these tumors can be treated with surgery alone.[Level of evidence: 3iii]; [Level of evidence: 3iiiA]; [Level of evidence: 3iiiDii] However, given the rarity of these tumors, the surgical approach in pediatric patients has not been well studied.
Ovarian Cancer (Non-Germ Cell)
The majority of ovarian masses in children are not malignant. The most common neoplasms are germ cell tumors, followed by epithelial tumors, stromal tumors, and then miscellaneous tumors such as Burkitt lymphoma.[20,21,22,23] The majority of ovarian tumors occur in girls aged 15 to 19 years.
Epithelial ovarian neoplasia
Ovarian tumors derived from malignant epithelial elements include: adenocarcinomas, cystadenocarcinomas, endometrioid tumors, clear cell tumors, and undifferentiated carcinomas. In one series of 19 patients younger than 21 years with epithelial ovarian neoplasms, the average age at diagnosis was 19.7 years. Dysmenorrhea and abdominal pain were the most common presenting symptoms; 79% of the patients had stage I disease with a 100% survival rate, and only those who had small cell anaplastic carcinoma died. Girls with ovarian carcinoma (epithelial ovarian neoplasia) fare better than adults with similar histology, probably because girls usually present with low-stage disease.
Treatment is stage-related and may include surgery, radiation, and chemotherapy with cisplatin, carboplatin, etoposide, topotecan, paclitaxel, and other agents.
Sex cord–stromal tumors
Ovarian sex cord–stromal tumors are a heterogeneous group of rare tumors that derive from the gonadal non-germ cell component. Histologic subtypes display some areas of gonadal differentiation and include juvenile granulosa cell tumors, Sertoli-Leydig cell tumors, and sclerosing stromal tumors. Ovarian sex cord–stromal tumors in children and adolescents are commonly associated with the presence of germline DICER1 mutations and may be a manifestation of the familial pleuropulmonary blastoma syndrome.
The most common histologic subtype in girls younger than 18 years is the juvenile granulosa cell tumor (median age, 7.6 years; range, birth to 17.5 years).[29,30] Juvenile granulosa cell tumors represent about 5% of ovarian tumors in children and adolescents and are distinct from the granulosa cell tumors seen in adults.[27,31,32,33] Most patients with juvenile granulosa cell tumors present with precocious puberty. Other presenting symptoms include abdominal pain, abdominal mass, and ascites. Juvenile granulosa cell tumors have been reported in children with Ollier disease and Maffucci syndrome.
As many as 90% of children with juvenile granulosa cell tumors will have low-stage disease (International Federation of Gynecology and Obstetrics [FIGO] stage I) and are usually curable with unilateral salpingo-oophorectomy alone. Patients with advanced disease (FIGO stage II–IV) and those with high mitotic activity tumors have a poorer prognosis. Use of a cisplatin-based chemotherapy regimen has been reported in both the adjuvant and recurrent disease settings with some success.[29,33,36,37,38]
Small cell carcinomas of the ovary are exceedingly rare and aggressive tumors and may be associated with hypercalcemia. Successful treatment with aggressive therapy has been reported in a few cases.[Level of evidence: 3iiB]; [44,45][Level of evidence: 3iiiA]
Carcinoma of the Cervix and Vagina
Adenocarcinoma of the cervix and vagina is rare in childhood and adolescence, with fewer than 50 reported cases.[23,46] Two-thirds of the cases are related to exposure to diethylstilbestrol in utero. The median age at presentation is 15 years, with a range of 7 months to 18 years, and most patients present with vaginal bleeding.
Adults with adenocarcinoma of the cervix or vagina will present with stage I or stage II disease 90% of the time. In children and adolescents, there is a high incidence of stage III and stage IV disease (24%). This difference may be explained by the practice of routine pelvic examinations in adults and the hesitancy to perform pelvic exams in children.
The treatment of choice is surgical resection, followed by radiation therapy for residual microscopic disease or lymphatic metastases. The role of chemotherapy in management is unknown, although drugs commonly used in the treatment of gynecologic malignancies, carboplatin and paclitaxel, have been used. The 3-year event-free survival (EFS) for all stages is 71% ± 11%; for stage I and stage II, the EFS is 82% ± 11%, and for stage III and stage IV, the EFS is 57% ± 22%.
|1.||Alanee S, Shukla AR: Bladder malignancies in children aged <18 years: results from the Surveillance, Epidemiology and End Results database. BJU Int 106 (4): 557-60, 2010.|
|2.||Paner GP, Zehnder P, Amin AM, et al.: Urothelial neoplasms of the urinary bladder occurring in young adult and pediatric patients: a comprehensive review of literature with implications for patient management. Adv Anat Pathol 18 (1): 79-89, 2011.|
|3.||Hoenig DM, McRae S, Chen SC, et al.: Transitional cell carcinoma of the bladder in the pediatric patient. J Urol 156 (1): 203-5, 1996.|
|4.||Serrano-Durbá A, Domínguez-Hinarejos C, Reig-Ruiz C, et al.: Transitional cell carcinoma of the bladder in children. Scand J Urol Nephrol 33 (1): 73-6, 1999.|
|5.||Fine SW, Humphrey PA, Dehner LP, et al.: Urothelial neoplasms in patients 20 years or younger: a clinicopathological analysis using the world health organization 2004 bladder consensus classification. J Urol 174 (5): 1976-80, 2005.|
|6.||Lerena J, Krauel L, García-Aparicio L, et al.: Transitional cell carcinoma of the bladder in children and adolescents: six-case series and review of the literature. J Pediatr Urol 6 (5): 481-5, 2010.|
|7.||Sung JD, Koyle MA: Squamous cell carcinoma of the bladder in a pediatric patient. J Pediatr Surg 35 (12): 1838-9, 2000.|
|8.||Lezama-del Valle P, Jerkins GR, Rao BN, et al.: Aggressive bladder carcinoma in a child. Pediatr Blood Cancer 43 (3): 285-8, 2004.|
|9.||Johansson SL, Cohen SM: Epidemiology and etiology of bladder cancer. Semin Surg Oncol 13 (5): 291-8, 1997 Sep-Oct.|
|10.||IARC Working Group on the Evaluation of Carcinogenic Risks to Humans. International Agency for Research on Cancer.: Overall evaluations of carcinogenicity: an updating of IARC monographs, volumes 1 to 42. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Supplement 7. Lyon, France: International Agency for Research on Cancer, 1987.|
|11.||Hartke DM, Agarwal PK, Palmer JS: Testicular neoplasms in the prepubertal male. J Mens Health Gend 3 (2): 131-8, 2006.|
|12.||Ahmed HU, Arya M, Muneer A, et al.: Testicular and paratesticular tumours in the prepubertal population. Lancet Oncol 11 (5): 476-83, 2010.|
|13.||Pohl HG, Shukla AR, Metcalf PD, et al.: Prepubertal testis tumors: actual prevalence rate of histological types. J Urol 172 (6 Pt 1): 2370-2, 2004.|
|14.||Schwentner C, Oswald J, Rogatsch H, et al.: Stromal testis tumors in infants. a report of two cases. Urology 62 (6): 1121, 2003.|
|15.||Carmignani L, Colombo R, Gadda F, et al.: Conservative surgical therapy for leydig cell tumor. J Urol 178 (2): 507-11; discussion 511, 2007.|
|16.||Agarwal PK, Palmer JS: Testicular and paratesticular neoplasms in prepubertal males. J Urol 176 (3): 875-81, 2006.|
|17.||Dudani R, Giordano L, Sultania P, et al.: Juvenile granulosa cell tumor of testis: case report and review of literature. Am J Perinatol 25 (4): 229-31, 2008.|
|18.||Cecchetto G, Alaggio R, Bisogno G, et al.: Sex cord-stromal tumors of the testis in children. A clinicopathologic report from the Italian TREP project. J Pediatr Surg 45 (9): 1868-73, 2010.|
|19.||Thomas JC, Ross JH, Kay R: Stromal testis tumors in children: a report from the prepubertal testis tumor registry. J Urol 166 (6): 2338-40, 2001.|
|20.||Morowitz M, Huff D, von Allmen D: Epithelial ovarian tumors in children: a retrospective analysis. J Pediatr Surg 38 (3): 331-5; discussion 331-5, 2003.|
|21.||Schultz KA, Sencer SF, Messinger Y, et al.: Pediatric ovarian tumors: a review of 67 cases. Pediatr Blood Cancer 44 (2): 167-73, 2005.|
|22.||Aggarwal A, Lucco KL, Lacy J, et al.: Ovarian epithelial tumors of low malignant potential: a case series of 5 adolescent patients. J Pediatr Surg 44 (10): 2023-7, 2009.|
|23.||You W, Dainty LA, Rose GS, et al.: Gynecologic malignancies in women aged less than 25 years. Obstet Gynecol 105 (6): 1405-9, 2005.|
|24.||Brookfield KF, Cheung MC, Koniaris LG, et al.: A population-based analysis of 1037 malignant ovarian tumors in the pediatric population. J Surg Res 156 (1): 45-9, 2009.|
|25.||Lovvorn HN 3rd, Tucci LA, Stafford PW: Ovarian masses in the pediatric patient. AORN J 67 (3): 568-76; quiz 577, 580-84, 1998.|
|26.||Tsai JY, Saigo PE, Brown C, et al.: Diagnosis, pathology, staging, treatment, and outcome of epithelial ovarian neoplasia in patients age < 21 years. Cancer 91 (11): 2065-70, 2001.|
|27.||Schneider DT, Jänig U, Calaminus G, et al.: Ovarian sex cord-stromal tumors--a clinicopathological study of 72 cases from the Kiel Pediatric Tumor Registry. Virchows Arch 443 (4): 549-60, 2003.|
|28.||Schultz KA, Pacheco MC, Yang J, et al.: Ovarian sex cord-stromal tumors, pleuropulmonary blastoma and DICER1 mutations: a report from the International Pleuropulmonary Blastoma Registry. Gynecol Oncol 122 (2): 246-50, 2011.|
|29.||Calaminus G, Wessalowski R, Harms D, et al.: Juvenile granulosa cell tumors of the ovary in children and adolescents: results from 33 patients registered in a prospective cooperative study. Gynecol Oncol 65 (3): 447-52, 1997.|
|30.||Capito C, Flechtner I, Thibaud E, et al.: Neonatal bilateral ovarian sex cord stromal tumors. Pediatr Blood Cancer 52 (3): 401-3, 2009.|
|31.||Bouffet E, Basset T, Chetail N, et al.: Juvenile granulosa cell tumor of the ovary in infants: a clinicopathologic study of three cases and review of the literature. J Pediatr Surg 32 (5): 762-5, 1997.|
|32.||Zaloudek C, Norris HJ: Granulosa tumors of the ovary in children: a clinical and pathologic study of 32 cases. Am J Surg Pathol 6 (6): 503-12, 1982.|
|33.||Vassal G, Flamant F, Caillaud JM, et al.: Juvenile granulosa cell tumor of the ovary in children: a clinical study of 15 cases. J Clin Oncol 6 (6): 990-5, 1988.|
|34.||Kalfa N, Patte C, Orbach D, et al.: A nationwide study of granulosa cell tumors in pre- and postpubertal girls: missed diagnosis of endocrine manifestations worsens prognosis. J Pediatr Endocrinol Metab 18 (1): 25-31, 2005.|
|35.||Gell JS, Stannard MW, Ramnani DM, et al.: Juvenile granulosa cell tumor in a 13-year-old girl with enchondromatosis (Ollier's disease): a case report. J Pediatr Adolesc Gynecol 11 (3): 147-50, 1998.|
|36.||Powell JL, Connor GP, Henderson GS: Management of recurrent juvenile granulosa cell tumor of the ovary. Gynecol Oncol 81 (1): 113-6, 2001.|
|37.||Schneider DT, Calaminus G, Wessalowski R, et al.: Therapy of advanced ovarian juvenile granulosa cell tumors. Klin Padiatr 214 (4): 173-8, 2002 Jul-Aug.|
|38.||Schneider DT, Calaminus G, Harms D, et al.: Ovarian sex cord-stromal tumors in children and adolescents. J Reprod Med 50 (6): 439-46, 2005.|
|39.||Arhan E, Cetinkaya E, Aycan Z, et al.: A very rare cause of virilization in childhood: ovarian Leydig cell tumor. J Pediatr Endocrinol Metab 21 (2): 181-3, 2008.|
|40.||Baeyens L, Amat S, Vanden Houte K, et al.: Small cell carcinoma of the ovary successfully treated with radiotherapy only after surgery: case report. Eur J Gynaecol Oncol 29 (5): 535-7, 2008.|
|41.||Choong CS, Fuller PJ, Chu S, et al.: Sertoli-Leydig cell tumor of the ovary, a rare cause of precocious puberty in a 12-month-old infant. J Clin Endocrinol Metab 87 (1): 49-56, 2002.|
|42.||Zung A, Shoham Z, Open M, et al.: Sertoli cell tumor causing precocious puberty in a girl with Peutz-Jeghers syndrome. Gynecol Oncol 70 (3): 421-4, 1998.|
|43.||Distelmaier F, Calaminus G, Harms D, et al.: Ovarian small cell carcinoma of the hypercalcemic type in children and adolescents: a prognostically unfavorable but curable disease. Cancer 107 (9): 2298-306, 2006.|
|44.||Christin A, Lhomme C, Valteau-Couanet D, et al.: Successful treatment for advanced small cell carcinoma of the ovary. Pediatr Blood Cancer 50 (6): 1276-7, 2008.|
|45.||Kanwar VS, Heath J, Krasner CN, et al.: Advanced small cell carcinoma of the ovary in a seventeen-year-old female, successfully treated with surgery and multi-agent chemotherapy. Pediatr Blood Cancer 50 (5): 1060-2, 2008.|
|46.||McNall RY, Nowicki PD, Miller B, et al.: Adenocarcinoma of the cervix and vagina in pediatric patients. Pediatr Blood Cancer 43 (3): 289-94, 2004.|
|47.||Abu-Rustum NR, Su W, Levine DA, et al.: Pediatric radical abdominal trachelectomy for cervical clear cell carcinoma: a novel surgical approach. Gynecol Oncol 97 (1): 296-300, 2005.|
Other rare childhood cancers include multiple endocrine neoplasia syndromes and Carney complex, pheochromocytoma and paraganglioma, skin cancer, chordoma, and cancer of unknown primary site. The prognosis, diagnosis, classification, and treatment of these other rare childhood cancers are discussed below. It must be emphasized that these cancers are seen very infrequently in patients younger than 15 years, and most of the evidence is derived from case series.
Multiple Endocrine Neoplasia (MEN) Syndromes and Carney Complex
MEN syndromes are familial disorders that are characterized by neoplastic changes that affect multiple endocrine organs. Changes may include hyperplasia, benign adenomas, and carcinomas. There are two main types of MEN syndrome: type 1 and type 2. Type 2 can be further subdivided into three subtypes: type 2A, type 2B, and familial medullary thyroid carcinoma.
Clinical features and diagnosis of MEN syndromes
The most salient clinical features and genetic alterations of the MEN syndromes are shown in Table 4.
|Syndrome||Clinical Features/Tumors||Genetic Alterations|
|MEN type 1: Wermer syndrome||Parathyroid||11q13 (MEN1 gene)|
|MEN type 1: Wermer syndrome||Pancreatic islets: gastrinoma||11q13 (MEN1 gene)|
|MEN type 1: Wermer syndrome||Pituitary: prolactinoma||11q13 (MEN1 gene)|
|MEN type 1: Wermer syndrome||Other associated tumors: carcinoid (bronchial and thymic)||11q13 (MEN1 gene)|
|MEN type 2A: Sipple syndrome||Medullary thyroid carcinoma||10q11.2 (RET gene)|
|MEN type 2B||Medullary thyroid carcinoma||10q11.2 (RET gene)|
|Familial medullary thyroid carcinoma||Medullary thyroid carcinoma||10q11.2 (RET gene)|
Germline mutations of the MEN1 gene, located on chromosome 11q13, are found in 70% to 90% of patients; however, this gene has also been shown to be frequently inactivated in sporadic tumors. Mutation testing should be combined with clinical screening for patients with proven MEN 1 syndrome and for their at-risk family members. It is recommended that screening for patients with MEN 1 syndrome begin by the age of 5 years and continue for life. Biochemical screening is age specific and may include yearly measurement of serum calcium, parathyroid hormone, gastrin, glucagon, secretin, proinsulin, chromogranin A, prolactin, and IGF-1. Radiologic screening should include magnetic resonance imaging of the brain and computed tomography (CT) of the abdomen every 1 to 3 years.
A germline activating mutation in the RET oncogene (a receptor tyrosine kinase) on chromosome 10q11.2 is responsible for the uncontrolled growth of cells in medullary thyroid carcinoma associated with MEN 2A and MEN 2B syndromes.[7,8,9]
Guidelines for genetic testing of patients with suspected MEN 2 syndrome, as well as the correlations between the type of mutation and the risk level for aggressive medullary thyroid cancer, have been published.[14,15]
|MEN 2 Subtype||Medullary Thyroid Carcinoma||Pheochromocytoma||Parathyroid Disease|
|MEN 2A||95%||50%||20% to 30%|
|Familial medullary thyroid carcinoma||100%||0%||0%|
Treatment of MEN syndromes
Relatives of patients with MEN 2A should undergo genetic testing in early childhood, before the age of 5 years. Carriers should undergo total thyroidectomy as described above, with autotransplantation of one parathyroid gland, by the age recommended for their mutation-specific risk level.[20,24,25,26]
Complete removal of the thyroid gland is the recommended procedure for surgical management of medullary thyroid cancer in children, since there is a high incidence of bilateral disease.
Hirschsprung disease has been associated in a small percentage of cases with the development of neuroendocrine tumors such as medullary thyroid carcinoma. RET germline inactivating mutations have been detected in up to 50% of patients with familial Hirschsprung disease and less often in the sporadic form.[29,30,31] Cosegregation of Hirschsprung disease and medullary thyroid carcinoma phenotype is infrequently reported, but these individuals usually have a mutation in RET exon 10. It has been recommended that patients with Hirschsprung disease be screened for mutations in RET exon 10 and consideration be given to prophylactic thyroidectomy if such a mutation is discovered.[31,32,33]
(Refer to the PDQ summary on Genetics of Endocrine and Neuroendocrine Neoplasias for more information about MEN 2A and MEN 2B.)
In a randomized phase III trial, adult patients with unresectable locally advanced or metastatic hereditary or sporadic medullary thyroid carcinoma were treated with vandetanib (a selective inhibitor of RET, VEGFR, and EGFR) or placebo; vandetanib administration was associated with significant improvements in progression-free survival, response rate, disease control rate, and biochemical response.
Treatment options under clinical evaluation
Information about ongoing clinical trials is available from the NCI Web site.
The Carney complex is an autosomal dominant syndrome caused by mutations in the PRKAR1A gene, located on chromosome 17. The syndrome is characterized by cardiac and cutaneous myxomas, pale brown to brown lentigines, blue nevi, primary pigmented nodular adrenocortical disease causing Cushing syndrome, and a variety of endocrine and nonendocrine tumors, including pituitary adenomas, thyroid tumors, and large cell calcifying Sertoli cell tumor of the testis.[37,38,39] There are guidelines that may be followed for screening patients with Carney complex.
For patients with the Carney complex, prognosis depends on the frequency of recurrences of cardiac and skin myxomas and other tumors.
Pheochromocytoma and Paraganglioma
Pheochromocytoma and paraganglioma are rare catecholamine-producing tumors with a combined annual incidence of three cases per 1 million individuals. Tumors arising within the adrenal gland are known as pheochromocytomas, whereas morphologically identical tumors arising elsewhere are termed paragangliomas. Paragangliomas are further divided into: (1) sympathetic paragangliomas that predominantly arise from the intra-abdominal sympathetic trunk and usually produce catecholamines, and (2) parasympathetic paragangliomas that are distributed along the parasympathetic nerves of the head, neck, and mediastinum and are rarely functional.[40,41]
It is now estimated that up to 30% of all pheochromocytomas and paragangliomas are familial; several susceptibility genes have been described (see Table 6). The median age at presentation in most familial syndromes is 30 to 35 years, and up to 50% of subjects have disease by age 26 years.[42,43,44]
|Germline Mutation||Syndrome||Proportion of all PGL/PCC (%)||Mean Age at Presentation (y)||Penetrance of PGL/PCC (%)|
|MEN1 = multiple endocrine neoplasia type 1; MEN2 = multiple endocrine neoplasia type 2; NF1 = neurofibromatosis type 1; VHL = von Hippel-Lindau.|
|a Adapted from Welander et al.|
|SDHB, C, D||Carney-Stratakis||<1||33||Unknown|
|No mutation||Sporadic disease||70||48.3||-|
|1.||Von Hippel-Lindau (VHL) syndrome—Pheochromocytoma and paraganglioma occur in 10% to 20% of patients with VHL.|
|2.||Multiple Endocrine Neoplasia (MEN) Syndrome Type 2—Codon-specific mutations of the RET gene are associated with a 50% risk of development of pheochromocytoma in MEN 2A and MEN 2B. Somatic RET mutations are also found in sporadic pheochromocytoma and paraganglioma.|
|3.||Neurofibromatosis type 1 (NF1)—Pheochromocytoma and paraganglioma are a rare occurrence in patients with NF1, and typically have characteristics similar to those of sporadic tumors, with a relatively late mean age of onset and rarity in pediatrics.|
|4.||Familial pheochromocytoma/paraganglioma syndromes, associated with germline mutations of mitochondrial succinate dehydrogenase (SDH) complex genes (see Table 6). They are all inherited in an autosomal dominant manner but with varying penetrance.|
|5.||Other susceptibility genes recently discovered include KIF1B-beta, EGLN1/PHD2, TMEM127, SDHA, and MAX.|
Immunohistochemical SDHB staining may help triage genetic testing; tumors of patients with SDHB, SDHC, and SDHD mutations have absent or very weak staining, while sporadic tumors and those associated with other constitutional syndromes have positive staining.[46,47] Therefore, immunohistochemical SDHB staining can help identify potential carriers of a SDH mutation early, thus obviating the need for extensive and costly testing of other genes.
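As a purely illustrative aside (not drawn from the cited studies), the triage logic described above can be summarized as a simple decision rule. In the following Python sketch, the staining categories and the suggested follow-up strategies are simplifying assumptions chosen for illustration only.

```python
# Illustrative sketch only: encodes the SDHB immunohistochemistry triage
# rule described above. Categories and follow-up suggestions are
# simplified assumptions, not clinical guidance.

def triage_sdh_testing(sdhb_staining: str) -> str:
    """Suggest a genetic-testing strategy from SDHB immunostaining.

    sdhb_staining: 'absent', 'weak', or 'positive' (assumed categories).
    """
    staining = sdhb_staining.lower()
    if staining in ("absent", "weak"):
        # Absent or very weak staining points toward a germline SDHB,
        # SDHC, or SDHD mutation; prioritize sequencing of those genes.
        return "prioritize SDHB/SDHC/SDHD germline testing"
    if staining == "positive":
        # Positive staining suggests sporadic disease or another
        # constitutional syndrome.
        return "SDH mutation unlikely; consider other susceptibility genes"
    raise ValueError(f"unrecognized staining category: {sdhb_staining!r}")


if __name__ == "__main__":
    for result in ("absent", "weak", "positive"):
        print(result, "->", triage_sdh_testing(result))
```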
Patients with pheochromocytoma and sympathetic extra-adrenal paraganglioma usually present with symptoms of excess catecholamine production, including hypertension, headache, perspiration, palpitations, tremor, and facial pallor. These symptoms are often paroxysmal, although sustained hypertension between paroxysmal episodes occurs in more than one-half of patients. Symptoms can also be induced by exertion, trauma, labor and delivery, induction of anesthesia, surgical manipulation of the tumor, foods high in tyramine (e.g., red wine, chocolate, cheese), or urination (in cases of primary tumor of the bladder). Parasympathetic extra-adrenal paragangliomas do not secrete catecholamines and usually present as a neck mass with symptoms related to compression, but they may also be asymptomatic and diagnosed incidentally.
Paraganglioma and pheochromocytoma in children and adolescents
Younger patients have a higher incidence of bilateral adrenal pheochromocytoma and extra-adrenal paraganglioma, and a germline mutation can be identified in close to 60% of patients. Therefore, genetic counseling and testing are always recommended in young patients. Pediatric and adolescent patients appear to present with symptoms similar to those of adult patients, although with a more frequent occurrence of sustained hypertension. The clinical behavior of paraganglioma and pheochromocytoma appears to be more aggressive in children and adolescents, and metastatic rates of up to 50% have been reported.[41,49,50]
In a study of 49 patients younger than 20 years with a paraganglioma or pheochromocytoma, 39 (79%) had an underlying germline mutation that involved the SDHB (n = 27; 55%), SDHD (n = 4; 8%), VHL (n = 6; 12%), or NF1 (n = 2; 4%) genes. The germline mutation rates for patients with nonmetastatic disease were lower than those observed in patients who had evidence of metastases (64% vs. 87.5%). Furthermore, among patients with metastatic disease, the incidence of SDHB mutations was very high (72%) and most presented with disease in the retroperitoneum; five died of their disease. All patients with SDHD mutations had head and neck primary tumors. In another study, the incidence of germline mutations involving RET, VHL, SDHD and SDHB in patients with nonsyndromic paraganglioma was 70% for patients younger than 10 years and 51% among those aged 10 to 20 years. In contrast, only 16% of patients older than 20 years had an identifiable mutation. It is important to remember that these two studies did not include systematic screening for other genes that have been recently described in paraganglioma and pheochromocytoma syndromes such as KIF1B-beta, EGLN1/PHD2, TMEM127, SDHA, and MAX (see Table 6).
These findings suggest that younger patients with extra-adrenal nonsyndromic pheochromocytoma and paraganglioma are at high risk for harboring SDHB mutations and that this phenotype is associated with an earlier age of onset and a high rate of metastatic disease. Early identification of young patients with SDHB mutations using radiographic, serologic, and immunohistochemical markers could potentially decrease mortality and identify other family members who carry a germline SDHB mutation. In addition, approximately 12% of pediatric GIST patients have germline SDHB, SDHC, or SDHD mutations in the context of Carney-Stratakis syndrome.
The diagnosis of paraganglioma and pheochromocytoma relies on the biochemical documentation of excess catecholamine secretion coupled with imaging studies for localization and staging.
Measurement of plasma free fractionated metanephrines (metanephrine and normetanephrine) is usually the diagnostic tool of choice when a secreting paraganglioma or pheochromocytoma is suspected. A 24-hour urine collection for catecholamines (epinephrine, norepinephrine, and dopamine) and fractionated metanephrines can also be performed for confirmation.
Catecholamine metabolic and secretory profiles are influenced by hereditary background; hereditary and sporadic paragangliomas and pheochromocytomas differ markedly in tumor catecholamine content and in the corresponding plasma and urinary hormonal profiles. About 50% of secreting tumors produce and contain a mixture of norepinephrine and epinephrine, while most of the rest produce norepinephrine almost exclusively, with occasional rare tumors producing mainly dopamine. Patients with epinephrine-producing tumors are diagnosed later (median age, 50 years) than those with tumors lacking appreciable epinephrine production (median age, 40 years). Patients with MEN2 and NF1 syndromes, all with epinephrine-producing tumors, are typically diagnosed at a later age (median age, 40 years) than patients whose tumors lack appreciable epinephrine production secondary to mutations of VHL and SDH (median age, 30 years). These variations in age at diagnosis, associated with different tumor catecholamine phenotypes and locations, suggest that paragangliomas and pheochromocytomas arise from different progenitor cells with variable susceptibility to disease-causing mutations.[53,54]
Imaging modalities available for the localization of paraganglioma and pheochromocytoma include CT, magnetic resonance imaging, iodine I-123– or iodine I-131–labeled metaiodobenzylguanidine (123/131I-MIBG) scintigraphy, and fluorine F-18 6-fluorodopamine (6-[18F]FDA) positron emission tomography (PET). For tumor localization, 6-[18F]FDA PET and 123/131I-MIBG scintigraphy perform equally well in patients with nonmetastatic paraganglioma and pheochromocytoma, but metastases are better detected by 6-[18F]FDA PET than by 123/131I-MIBG. Other functional imaging alternatives include indium In-111 octreotide scintigraphy and fluorodeoxyglucose F-18 PET, both of which can be coupled with CT imaging for improved anatomic detail.
Treatment of paraganglioma and pheochromocytoma is surgical. For secreting tumors, alpha- and beta-adrenergic blockade must be optimized before surgery. For patients with metastatic disease, responses have been documented with chemotherapy regimens such as gemcitabine and docetaxel or vincristine, cyclophosphamide, and dacarbazine.[56,57] Chemotherapy may help alleviate symptoms and facilitate surgery, although its impact on overall survival is less clear. Responses have also been obtained with high-dose 131I-MIBG.
Skin Cancer (Melanoma, Basal Cell Carcinoma, and Squamous Cell Carcinoma)
Melanoma, although rare, is the most common skin cancer in children, followed by basal cell carcinomas (BCCs) and squamous cell carcinomas (SCCs).[59,60,61,62,63,64,65,66,67] In a retrospective study of 22,524 skin pathology reports in patients younger than 20 years, investigators identified 38 melanomas, 33 of which occurred in patients aged 15 to 19 years. The investigators reported that 479.8 lesions needed to be excised to identify one melanoma, a number approximately 20 times higher than that reported in the adult population.
In patients younger than 20 years, approximately 425 cases of melanoma are diagnosed each year in the United States, representing about 1% of all new cases of melanoma. The annual incidence of melanoma in the United States (2002–2006) increases with age, from 1 to 2 per 1 million in children younger than 10 years to 4.1 per 1 million in children aged 10 to 14 years and 16.9 per 1 million in children aged 15 to 19 years. Melanoma accounts for about 8% of all cancers in children aged 15 to 19 years. The incidence of pediatric melanoma (in children younger than 20 years) increased by 1.7% per year between 1975 and 2006. Increased exposure to ambient ultraviolet radiation increases the risk of the disease.
Conditions associated with an increased risk of developing melanoma in children and adolescents include giant melanocytic nevi, xeroderma pigmentosum (a rare recessive disorder characterized by extreme sensitivity to sunlight, keratosis, and various neurologic manifestations), immunodeficiency, immunosuppression, history of retinoblastoma, and Werner syndrome.[71,72] Other phenotypic traits that are associated with an increased risk of melanoma in adults have been documented in children and adolescents with melanoma and include exposure to ultraviolet sunlight, red hair, blue eyes,[73,74,75,76,77] poor tanning ability, freckling, dysplastic nevi, increased number of melanocytic nevi, and family history of melanoma.[78,79,80] Neurocutaneous melanosis is an unusual condition associated with large or multiple congenital nevi of the skin in association with meningeal melanosis or melanoma; approximately 2.5% of patients with large congenital nevi develop this condition, and those with increased numbers of satellite nevi are at greatest risk.[81,82]
Pediatric melanoma shares many similarities with adult melanoma, and the prognosis is stage dependent. Overall 5-year survival of children and adolescents with melanoma is approximately 90%.[77,83,84] Approximately three-fourths of all children and adolescents present with localized disease and have an excellent outcome (>90% 5-year survival). The outcome for patients with nodal disease is intermediate, with about 60% expected to survive long term.[77,84] In one study, the outcome for patients with metastatic disease was favorable, but this result was not duplicated in another study from the National Cancer Database.
In pediatric melanoma, the association of thickness with clinical outcome is controversial.[77,84,85,86,87] In addition, pediatric melanoma appears to have a higher incidence of nodal involvement and this feature does not appear to have an impact on survival.[88,89] However, it is unclear how these findings truly affect clinical outcome since some series have included patients with atypical melanocytic lesions.[90,91] In a study of sentinel lymph node biopsies in children and adolescents, 25% were positive (compared with 17% in adults). However, only 0.7% of lymph nodes found on complete lymph node dissection were positive for melanoma. In this study, mortality was infrequent but was confined to sentinel lymph node–positive patients.[Level of evidence: 3iiA] In another study, 53% of patients younger than 10 years had positive sentinel lymph node biopsy compared with 26% of those who were aged 10 years and older.
Children younger than 10 years who have melanoma often present with poor prognostic features, are more often non-white, have head and neck primary tumors, and more often have syndromes that predispose them to melanoma.[77,83,84,93]
Biopsy or excision is necessary to determine the diagnosis of any skin cancer. Diagnosis is necessary for decisions regarding additional treatment. Although BCCs and SCCs are generally curable with surgery alone, the treatment of melanoma requires greater consideration because of its potential for metastasis. The width of surgical margins in melanoma is dictated by the site, size, and thickness of the lesion and ranges from 0.5 cm for in situ lesions to 2 cm or more for thicker lesions. To achieve negative margins in children, wide excision with skin grafting may be necessary in selected cases. Examination of regional lymph nodes using sentinel lymph node biopsy has become routine in many centers[94,95] and is recommended in patients with lesions measuring more than 1 mm in thickness or in those whose lesions are 1 mm or less in thickness and have unfavorable features such as ulceration, Clark level of invasion IV or V, or a mitotic rate of 1 per mm² or higher.[94,96,97]
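Because the biopsy criteria above form a simple threshold rule, a brief illustrative sketch is given below; the field names and exact boundary handling are assumptions made for illustration and do not represent a validated clinical tool.

```python
# Illustrative sketch only: encodes the sentinel lymph node (SLN) biopsy
# criteria summarized above. Field names and thresholds are simplified
# from the text; this is not a clinical decision tool.
from dataclasses import dataclass


@dataclass
class MelanomaLesion:
    thickness_mm: float          # Breslow thickness
    ulceration: bool = False
    clark_level: int = 1         # Clark level of invasion, 1-5
    mitoses_per_mm2: float = 0.0


def sln_biopsy_recommended(lesion: MelanomaLesion) -> bool:
    """Return True if SLN biopsy would be considered per the text above."""
    if lesion.thickness_mm > 1.0:
        return True
    # Thin lesions (<=1 mm) qualify only with unfavorable features.
    return (
        lesion.ulceration
        or lesion.clark_level >= 4
        or lesion.mitoses_per_mm2 >= 1.0
    )


if __name__ == "__main__":
    print(sln_biopsy_recommended(MelanomaLesion(thickness_mm=1.4)))      # True
    print(sln_biopsy_recommended(MelanomaLesion(0.6, ulceration=True)))  # True
    print(sln_biopsy_recommended(MelanomaLesion(0.6)))                   # False
```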
Lymph node dissection is recommended if sentinel nodes are involved with tumor, and adjuvant therapy with high-dose interferon-alpha-2b for a period of 1 year should be considered in these patients.[63,94,98,99,100] Clinically benign melanocytic lesions can sometimes pose a significant diagnostic challenge, especially when they involve regional lymph nodes.[101,102,103]
The diagnosis of pediatric melanoma may be difficult, and many of these lesions may be confused with the so-called melanocytic tumors of unknown metastatic potential. These lesions are biologically different from melanoma and benign nevi.[104,105] The terms Spitz nevus and Spitzoid melanoma are also commonly used, creating additional confusion. Novel diagnostic techniques are actively being used by various centers in an attempt to differentiate melanoma from these challenging melanocytic lesions. For example, the absence of BRAF mutations or the presence of a normal chromosomal complement, with or without 11p gains, strongly argues against the diagnosis of melanoma.[106,107] In contrast, the use of FISH probes that target four specific regions on chromosomes 6 and 11 can help classify melanoma correctly in over 85% of cases; however, 24% of atypical Spitzoid lesions will have chromosomal alterations on FISH analysis and 75% will have BRAF V600E mutations.[108,109] HRAS mutations have been described in some cases of Spitz nevi but have not been described in Spitzoid melanoma; the presence of an HRAS mutation may therefore aid in the differential diagnosis of Spitz nevus and Spitzoid melanoma. Some of the characteristic genetic alterations seen in various melanocytic lesions are summarized in the table below:[111,112]
Surgery is the treatment of choice for patients with localized melanoma. Current guidelines recommend margins of resection as follows (an illustrative sketch of this lookup appears after the list):
|1.||0.5 cm for melanoma in situ.|
|2.||1.0 cm for melanoma thickness under 1 mm.|
|3.||1 cm to 2 cm for melanoma thickness of 1.01 mm to 2 mm.|
|4.||2 cm for tumor thickness greater than 2 mm.|
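Purely as an illustration of the lookup implied by this list (not a clinical tool), the following minimal Python sketch encodes the margin recommendations; the handling of thicknesses exactly at 1.0 mm and 2.0 mm is an assumption, since the list leaves those boundaries implicit.

```python
# Illustrative sketch only: maps Breslow thickness to the margin ranges
# listed above. Boundary handling at exactly 1.0 mm and 2.0 mm is an
# assumption; this is not a clinical decision tool.
from typing import Optional


def recommended_margin(thickness_mm: Optional[float]) -> str:
    """Return the resection-margin guideline for a given Breslow thickness.

    thickness_mm: thickness in millimeters, or None for melanoma in situ.
    """
    if thickness_mm is None:
        return "0.5 cm (melanoma in situ)"
    if thickness_mm < 1.0:
        return "1.0 cm"
    if thickness_mm <= 2.0:
        return "1-2 cm"
    return "2 cm"


if __name__ == "__main__":
    for t in (None, 0.4, 1.5, 3.2):
        print(t, "->", recommended_margin(t))
```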
Sentinel node biopsy should be offered to patients with thin lesions (≤1 mm) that show ulceration or a mitotic rate greater than 1 per mm², to young patients with thin lesions, and to patients with lesions greater than 1 mm in thickness with or without adverse features. If the sentinel node is positive, patients should be offered the option to undergo a complete lymph node dissection. Patients with high-risk primary cutaneous melanoma, such as those with regional lymph node involvement, should be offered the option to receive adjuvant interferon alpha-2b, a therapy that is well tolerated in children.[98,99,113]
For patients with metastatic disease, prognosis is poor and various agents such as interferon, dacarbazine, temozolomide, sorafenib, or interleukin-2, and biochemotherapy can be used.[114,115,116] The results of pediatric trials that incorporate newer therapies such as vemurafenib and ipilimumab are not yet available.[117,118] Vemurafenib is used only in the treatment of patients with a BRAF mutation.
(Refer to the PDQ summary on adult Melanoma Treatment for more information.)
Basal cell and squamous cell carcinomas
Basal cell carcinomas (BCCs) generally appear as raised lumps or ulcerated lesions, usually in areas with previous sun exposure. These tumors may be multiple and exacerbated by radiation therapy. Nevoid BCC syndrome (Gorlin syndrome) is a rare disorder with a predisposition to the development of early-onset neoplasms, including BCC, ovarian fibroma, and desmoplastic medulloblastoma.[121,122,123,124] SCCs are usually reddened lesions with varying degrees of scaling or crusting, and they have an appearance similar to eczema, infections, trauma, or psoriasis.
Diagnostic evaluation and treatment
Biopsy or excision is necessary to determine the diagnosis of any skin cancer. Diagnosis is necessary for decisions regarding additional treatment. BCCs and SCCs are generally curable with surgery alone and further diagnostic workup is not indicated.
Most BCCs have activation of the hedgehog pathway, generally resulting from mutations in PTCH1. Vismodegib (GDC-0449), a hedgehog pathway inhibitor, has been approved for the treatment of adult patients with BCC.[126,127] It was approved by the U.S. Food and Drug Administration for the treatment of adults with metastatic BCC or with locally advanced BCC that has recurred following surgery or who are not candidates for surgery, and who are not candidates for radiation.
(Refer to the PDQ summary on adult Skin Cancer Treatment for more information.)
Chordoma
Chordoma is a very rare tumor of bone that arises from remnants of the notochord within the clivus, spinal vertebrae, or sacrum. The incidence in the United States is approximately one case per one million people per year, and only 5% of all chordomas occur in patients younger than 20 years. Most pediatric patients have the conventional or chondroid variant of chordoma.[128,129]
Patients usually present with pain, with or without neurologic deficits such as cranial or other nerve impairment. Diagnosis is straightforward when the typical physaliferous (soap-bubble-bearing) cells are present. Differential diagnosis is sometimes difficult and includes dedifferentiated chordoma and chondrosarcoma. Childhood chordoma has been associated with tuberous sclerosis complex.
Standard treatment includes surgery and external radiation therapy. Surgery is not commonly curative in children and adolescents because of the difficulty of obtaining clear margins and because these chordomas tend to arise in the skull base rather than in the sacrum, making them relatively inaccessible to complete surgical excision. The best results have been obtained using proton-beam therapy (charged-particle radiation therapy).[134,135]; [Level of evidence: 3iiiDiii] Recurrences are usually local but can include distant metastases to lungs or bone.
There is no known effective cytotoxic agent or combination chemotherapy regimen for this disease; only anecdotal reports have been published. Imatinib mesylate has been studied in adults with chordoma based on the overexpression of PDGFR-alpha, PDGFR-beta, and KIT in this disease.[137,138] Among 50 adults with chordoma treated with imatinib and evaluable by RECIST, there was one partial response, and 28 additional patients had stable disease at 6 months. The low rate of RECIST responses and the potentially slow natural course of the disease complicate the assessment of the efficacy of imatinib for chordoma. Other tyrosine kinase inhibitors and combinations involving kinase inhibitors have been studied.[139,140,141]
Cancer of Unknown Primary Site
Cancers of unknown primary site present as metastatic cancer for which a precise primary tumor site cannot be determined. As an example, lymph nodes at the base of the skull may enlarge in relationship to a tumor that may be on the face or the scalp but is not evident by physical examination or by radiographic imaging. Thus, modern imaging techniques may indicate the extent of the disease but not a primary site. Tumors such as adenocarcinomas, melanomas, and embryonal tumors such as rhabdomyosarcomas and neuroblastomas may have such a presentation. Children account for less than 1% of all solid cancers of unknown primary site, and because of the age-related incidence of tumor types, embryonal histologies are more common in this age group.
For all patients who present with tumors from an unknown primary site, treatment should be directed toward the specific histopathology of the tumor, and age-appropriate therapy for the general type of cancer should be initiated, irrespective of the site or sites of involvement. Studies in adults suggest that positron emission tomography (PET) imaging can be helpful in identifying cancers of unknown primary site, particularly in patients whose tumors arise in the head and neck area. A report in adults using fludeoxyglucose (FDG) PET-CT identified the primary tumor in 42.5% of patients with cancer of unknown primary site. In addition, molecular assignment of tissue of origin using molecular profiling techniques is feasible and can aid in identifying the putative tissue of origin in about 60% of patients with cancers of unknown primary site. It is still unclear, however, whether these techniques can improve the outcomes or response rates of these patients, and no pediatric studies have been conducted.
Chemotherapy and radiation therapy treatments appropriate and relevant for the general category of carcinoma or sarcoma (depending on the histologic findings, symptoms, and extent of tumor) should be initiated as early as possible.
|1.||de Krijger RR: Endocrine tumor syndromes in infancy and childhood. Endocr Pathol 15 (3): 223-6, 2004.|
|2.||Thakker RV: Multiple endocrine neoplasia--syndromes of the twentieth century. J Clin Endocrinol Metab 83 (8): 2617-20, 1998.|
|3.||Starker LF, Carling T: Molecular genetics of gastroenteropancreatic neuroendocrine tumors. Curr Opin Oncol 21 (1): 29-33, 2009.|
|4.||Farnebo F, Teh BT, Kytölä S, et al.: Alterations of the MEN1 gene in sporadic parathyroid tumors. J Clin Endocrinol Metab 83 (8): 2627-30, 1998.|
|5.||Field M, Shanley S, Kirk J: Inherited cancer susceptibility syndromes in paediatric practice. J Paediatr Child Health 43 (4): 219-29, 2007.|
|6.||Thakker RV: Multiple endocrine neoplasia type 1 (MEN1). Best Pract Res Clin Endocrinol Metab 24 (3): 355-70, 2010.|
|7.||Sanso GE, Domene HM, Garcia R, et al.: Very early detection of RET proto-oncogene mutation is crucial for preventive thyroidectomy in multiple endocrine neoplasia type 2 children: presence of C-cell malignant disease in asymptomatic carriers. Cancer 94 (2): 323-30, 2002.|
|8.||Alsanea O, Clark OH: Familial thyroid cancer. Curr Opin Oncol 13 (1): 44-51, 2001.|
|9.||Fitze G: Management of patients with hereditary medullary thyroid carcinoma. Eur J Pediatr Surg 14 (6): 375-83, 2004.|
|10.||Puñales MK, da Rocha AP, Meotti C, et al.: Clinical and oncological features of children and young adults with multiple endocrine neoplasia type 2A. Thyroid 18 (12): 1261-8, 2008.|
|11.||Skinner MA, DeBenedetti MK, Moley JF, et al.: Medullary thyroid carcinoma in children with multiple endocrine neoplasia types 2A and 2B. J Pediatr Surg 31 (1): 177-81; discussion 181-2, 1996.|
|12.||Brauckhoff M, Gimm O, Weiss CL, et al.: Multiple endocrine neoplasia 2B syndrome due to codon 918 mutation: clinical manifestation and course in early and late onset disease. World J Surg 28 (12): 1305-11, 2004.|
|13.||Sakorafas GH, Friess H, Peros G: The genetic basis of hereditary medullary thyroid cancer: clinical implications for the surgeon, with a particular emphasis on the role of prophylactic thyroidectomy. Endocr Relat Cancer 15 (4): 871-84, 2008.|
|14.||Waguespack SG, Rich TA, Perrier ND, et al.: Management of medullary thyroid carcinoma and MEN2 syndromes in childhood. Nat Rev Endocrinol 7 (10): 596-607, 2011.|
|15.||Kloos RT, Eng C, Evans DB, et al.: Medullary thyroid cancer: management guidelines of the American Thyroid Association. Thyroid 19 (6): 565-612, 2009.|
|16.||Skinner MA, Moley JA, Dilley WG, et al.: Prophylactic thyroidectomy in multiple endocrine neoplasia type 2A. N Engl J Med 353 (11): 1105-13, 2005.|
|17.||Skinner MA: Management of hereditary thyroid cancer in children. Surg Oncol 12 (2): 101-4, 2003.|
|18.||Learoyd DL, Gosnell J, Elston MS, et al.: Experience of prophylactic thyroidectomy in multiple endocrine neoplasia type 2A kindreds with RET codon 804 mutations. Clin Endocrinol (Oxf) 63 (6): 636-41, 2005.|
|19.||Guillem JG, Wood WC, Moley JF, et al.: ASCO/SSO review of current role of risk-reducing surgery in common hereditary cancer syndromes. J Clin Oncol 24 (28): 4642-60, 2006.|
|20.||National Comprehensive Cancer Network.: NCCN Clinical Practice Guidelines in Oncology: Thyroid Carcinoma. Version 1.2011. Rockledge, Pa: National Comprehensive Cancer Network, 2011. Available online with free subscription. Last accessed October 31, 2012.|
|21.||Lallier M, St-Vil D, Giroux M, et al.: Prophylactic thyroidectomy for medullary thyroid carcinoma in gene carriers of MEN2 syndrome. J Pediatr Surg 33 (6): 846-8, 1998.|
|22.||Dralle H, Gimm O, Simon D, et al.: Prophylactic thyroidectomy in 75 children and adolescents with hereditary medullary thyroid carcinoma: German and Austrian experience. World J Surg 22 (7): 744-50; discussion 750-1, 1998.|
|23.||Skinner MA, Wells SA Jr: Medullary carcinoma of the thyroid gland and the MEN 2 syndromes. Semin Pediatr Surg 6 (3): 134-40, 1997.|
|24.||Heizmann O, Haecker FM, Zumsteg U, et al.: Presymptomatic thyroidectomy in multiple endocrine neoplasia 2a. Eur J Surg Oncol 32 (1): 98-102, 2006.|
|25.||Frank-Raue K, Buhr H, Dralle H, et al.: Long-term outcome in 46 gene carriers of hereditary medullary thyroid carcinoma after prophylactic thyroidectomy: impact of individual RET genotype. Eur J Endocrinol 155 (2): 229-36, 2006.|
|26.||Piolat C, Dyon JF, Sturm N, et al.: Very early prophylactic thyroid surgery for infants with a mutation of the RET proto-oncogene at codon 634: evaluation of the implementation of international guidelines for MEN type 2 in a single centre. Clin Endocrinol (Oxf) 65 (1): 118-24, 2006.|
|27.||Leboulleux S, Travagli JP, Caillou B, et al.: Medullary thyroid carcinoma as part of a multiple endocrine neoplasia type 2B syndrome: influence of the stage on the clinical course. Cancer 94 (1): 44-50, 2002.|
|28.||Zenaty D, Aigrain Y, Peuchmaur M, et al.: Medullary thyroid carcinoma identified within the first year of life in children with hereditary multiple endocrine neoplasia type 2A (codon 634) and 2B. Eur J Endocrinol 160 (5): 807-13, 2009.|
|29.||Decker RA, Peacock ML, Watson P: Hirschsprung disease in MEN 2A: increased spectrum of RET exon 10 genotypes and strong genotype-phenotype correlation. Hum Mol Genet 7 (1): 129-34, 1998.|
|30.||Eng C, Clayton D, Schuffenecker I, et al.: The relationship between specific RET proto-oncogene mutations and disease phenotype in multiple endocrine neoplasia type 2. International RET mutation consortium analysis. JAMA 276 (19): 1575-9, 1996.|
|31.||Fialkowski EA, DeBenedetti MK, Moley JF, et al.: RET proto-oncogene testing in infants presenting with Hirschsprung disease identifies 2 new multiple endocrine neoplasia 2A kindreds. J Pediatr Surg 43 (1): 188-90, 2008.|
|32.||Skába R, Dvoráková S, Václavíková E, et al.: The risk of medullary thyroid carcinoma in patients with Hirschsprung's disease. Pediatr Surg Int 22 (12): 991-5, 2006.|
|33.||Moore SW, Zaahl MG: Multiple endocrine neoplasia syndromes, children, Hirschsprung's disease and RET. Pediatr Surg Int 24 (5): 521-30, 2008.|
|34.||Wells SA Jr, Robinson BG, Gagel RF, et al.: Vandetanib in patients with locally advanced or metastatic medullary thyroid cancer: a randomized, double-blind phase III trial. J Clin Oncol 30 (2): 134-41, 2012.|
|35.||Herbst RS, Heymach JV, O'Reilly MS, et al.: Vandetanib (ZD6474): an orally available receptor tyrosine kinase inhibitor that selectively targets pathways critical for tumor growth and angiogenesis. Expert Opin Investig Drugs 16 (2): 239-49, 2007.|
|36.||Vidal M, Wells S, Ryan A, et al.: ZD6474 suppresses oncogenic RET isoforms in a Drosophila model for type 2 multiple endocrine neoplasia syndromes and papillary thyroid carcinoma. Cancer Res 65 (9): 3538-41, 2005.|
|37.||Wilkes D, Charitakis K, Basson CT: Inherited disposition to cardiac myxoma development. Nat Rev Cancer 6 (2): 157-65, 2006.|
|38.||Carney JA, Young WF: Primary pigmented nodular adrenocortical disease and its associated conditions. Endocrinologist 2: 6-21, 1992.|
|39.||Ryan MW, Cunningham S, Xiao SY: Maxillary sinus melanoma as the presenting feature of Carney complex. Int J Pediatr Otorhinolaryngol 72 (3): 405-8, 2008.|
|40.||Lenders JW, Eisenhofer G, Mannelli M, et al.: Phaeochromocytoma. Lancet 366 (9486): 665-75, 2005 Aug 20-26.|
|41.||Waguespack SG, Rich T, Grubbs E, et al.: A current review of the etiology, diagnosis, and treatment of pediatric pheochromocytoma and paraganglioma. J Clin Endocrinol Metab 95 (5): 2023-37, 2010.|
|42.||Welander J, Söderkvist P, Gimm O: Genetics and clinical characteristics of hereditary pheochromocytomas and paragangliomas. Endocr Relat Cancer 18 (6): R253-76, 2011.|
|43.||Timmers HJ, Gimenez-Roqueplo AP, Mannelli M, et al.: Clinical aspects of SDHx-related pheochromocytoma and paraganglioma. Endocr Relat Cancer 16 (2): 391-400, 2009.|
|44.||Ricketts CJ, Forman JR, Rattenberry E, et al.: Tumor risks and genotype-phenotype-proteotype analysis in 358 patients with germline mutations in SDHB and SDHD. Hum Mutat 31 (1): 41-51, 2010.|
|45.||Stratakis CA, Carney JA: The triad of paragangliomas, gastric stromal tumours and pulmonary chondromas (Carney triad), and the dyad of paragangliomas and gastric stromal sarcomas (Carney-Stratakis syndrome): molecular genetics and clinical implications. J Intern Med 266 (1): 43-52, 2009.|
|46.||Gill AJ, Benn DE, Chou A, et al.: Immunohistochemistry for SDHB triages genetic testing of SDHB, SDHC, and SDHD in paraganglioma-pheochromocytoma syndromes. Hum Pathol 41 (6): 805-14, 2010.|
|47.||van Nederveen FH, Gaal J, Favier J, et al.: An immunohistochemical procedure to detect patients with paraganglioma and phaeochromocytoma with germline SDHB, SDHC, or SDHD gene mutations: a retrospective and prospective analysis. Lancet Oncol 10 (8): 764-71, 2009.|
|48.||Barontini M, Levin G, Sanso G: Characteristics of pheochromocytoma in a 4- to 20-year-old population. Ann N Y Acad Sci 1073: 30-7, 2006.|
|49.||King KS, Prodanov T, Kantorovich V, et al.: Metastatic pheochromocytoma/paraganglioma related to primary tumor development in childhood or adolescence: significant link to SDHB mutations. J Clin Oncol 29 (31): 4137-42, 2011.|
|50.||Pham TH, Moir C, Thompson GB, et al.: Pheochromocytoma and paraganglioma in children: a review of medical and surgical management at a tertiary care center. Pediatrics 118 (3): 1109-17, 2006.|
|51.||Neumann HP, Bausch B, McWhinney SR, et al.: Germ-line mutations in nonsyndromic pheochromocytoma. N Engl J Med 346 (19): 1459-66, 2002.|
|52.||Lenders JW, Pacak K, Walther MM, et al.: Biochemical diagnosis of pheochromocytoma: which test is best? JAMA 287 (11): 1427-34, 2002.|
|53.||Eisenhofer G, Pacak K, Huynh TT, et al.: Catecholamine metabolomic and secretory phenotypes in phaeochromocytoma. Endocr Relat Cancer 18 (1): 97-111, 2011.|
|54.||Eisenhofer G, Timmers HJ, Lenders JW, et al.: Age at diagnosis of pheochromocytoma differs according to catecholamine phenotype and tumor location. J Clin Endocrinol Metab 96 (2): 375-84, 2011.|
|55.||Timmers HJ, Chen CC, Carrasquillo JA, et al.: Comparison of 18F-fluoro-L-DOPA, 18F-fluoro-deoxyglucose, and 18F-fluorodopamine PET and 123I-MIBG scintigraphy in the localization of pheochromocytoma and paraganglioma. J Clin Endocrinol Metab 94 (12): 4757-67, 2009.|
|56.||Mora J, Cruz O, Parareda A, et al.: Treatment of disseminated paraganglioma with gemcitabine and docetaxel. Pediatr Blood Cancer 53 (4): 663-5, 2009.|
|57.||Huang H, Abraham J, Hung E, et al.: Treatment of malignant pheochromocytoma/paraganglioma with cyclophosphamide, vincristine, and dacarbazine: recommendation from a 22-year follow-up of 18 patients. Cancer 113 (8): 2020-8, 2008.|
|58.||Gonias S, Goldsby R, Matthay KK, et al.: Phase II study of high-dose [131I]metaiodobenzylguanidine therapy for patients with metastatic pheochromocytoma and paraganglioma. J Clin Oncol 27 (25): 4162-8, 2009.|
|59.||Sasson M, Mallory SB: Malignant primary skin tumors in children. Curr Opin Pediatr 8 (4): 372-7, 1996.|
|60.||Barnhill RL: Childhood melanoma. Semin Diagn Pathol 15 (3): 189-94, 1998.|
|61.||Fishman C, Mihm MC Jr, Sober AJ: Diagnosis and management of nevi and cutaneous melanoma in infants and children. Clin Dermatol 20 (1): 44-50, 2002 Jan-Feb.|
|62.||Hamre MR, Chuba P, Bakhshi S, et al.: Cutaneous melanoma in childhood and adolescence. Pediatr Hematol Oncol 19 (5): 309-17, 2002 Jul-Aug.|
|63.||Ceballos PI, Ruiz-Maldonado R, Mihm MC Jr: Melanoma in children. N Engl J Med 332 (10): 656-62, 1995.|
|64.||Schmid-Wendtner MH, Berking C, Baumert J, et al.: Cutaneous melanoma in childhood and adolescence: an analysis of 36 patients. J Am Acad Dermatol 46 (6): 874-9, 2002.|
|65.||Pappo AS: Melanoma in children and adolescents. Eur J Cancer 39 (18): 2651-61, 2003.|
|66.||Huynh PM, Grant-Kels JM, Grin CM: Childhood melanoma: update and treatment. Int J Dermatol 44 (9): 715-23, 2005.|
|67.||Christenson LJ, Borrowman TA, Vachon CM, et al.: Incidence of basal cell and squamous cell carcinomas in a population younger than 40 years. JAMA 294 (6): 681-90, 2005.|
|68.||Moscarella E, Zalaudek I, Cerroni L, et al.: Excised melanocytic lesions in children and adolescents - a 10-year survey. Br J Dermatol 167 (2): 368-73, 2012.|
The PDQ cancer information summaries are reviewed regularly and updated as new information becomes available. This section describes the latest changes made to this summary as of the date above.
Editorial changes were made to this summary.
This summary is written and maintained by the PDQ Pediatric Treatment Editorial Board, which is editorially independent of NCI. The summary reflects an independent review of the literature and does not represent a policy statement of NCI or NIH. More information about summary policies and the role of the PDQ Editorial Boards in maintaining the PDQ summaries can be found on the About This PDQ Summary and PDQ NCI's Comprehensive Cancer Database pages.
Purpose of This Summary
This PDQ cancer information summary for health professionals provides comprehensive, peer-reviewed, evidence-based information about the treatment of unusual cancers of childhood. It is intended as a resource to inform and assist clinicians who care for cancer patients. It does not provide formal guidelines or recommendations for making health care decisions.
Reviewers and Updates
This summary is reviewed regularly and updated as necessary by the PDQ Pediatric Treatment Editorial Board, which is editorially independent of the National Cancer Institute (NCI). The summary reflects an independent review of the literature and does not represent a policy statement of NCI or the National Institutes of Health (NIH).
Board members review recently published articles each month to determine whether an article should be included in the summary.
Changes to the summaries are made through a consensus process in which Board members evaluate the strength of the evidence in the published articles and determine how the article should be included in the summary.
The lead reviewers for Unusual Cancers of Childhood Treatment are:
Any comments or questions about the summary content should be submitted to Cancer.gov through the Web site's Contact Form. Do not contact the individual Board Members with questions or comments about the summaries. Board members will not respond to individual inquiries.
Levels of Evidence
Some of the reference citations in this summary are accompanied by a level-of-evidence designation. These designations are intended to help readers assess the strength of the evidence supporting the use of specific interventions or approaches. The PDQ Pediatric Treatment Editorial Board uses a formal evidence ranking system in developing its level-of-evidence designations.
Permission to Use This Summary
PDQ is a registered trademark. Although the content of PDQ documents can be used freely as text, it cannot be identified as an NCI PDQ cancer information summary unless it is presented in its entirety and is regularly updated. However, an author would be permitted to write a sentence such as "NCI's PDQ cancer information summary about breast cancer prevention states the risks succinctly: [include excerpt from the summary]."
The preferred citation for this PDQ summary is:
National Cancer Institute: PDQ® Unusual Cancers of Childhood Treatment. Bethesda, MD: National Cancer Institute. Date last modified <MM/DD/YYYY>. Available at: http://cancer.gov/cancertopics/pdq/treatment/unusual-cancers-childhood/HealthProfessional. Accessed <MM/DD/YYYY>.
Images in this summary are used with permission of the author(s), artist, and/or publisher for use within the PDQ summaries only. Permission to use images outside the context of PDQ information must be obtained from the owner(s) and cannot be granted by the National Cancer Institute. Information about using the illustrations in this summary, along with many other cancer-related images, is available in Visuals Online, a collection of over 2,000 scientific images.
Based on the strength of the available evidence, treatment options may be described as either "standard" or "under clinical evaluation." These classifications should not be used as a basis for insurance reimbursement determinations. More information on insurance coverage is available on Cancer.gov on the Coping with Cancer: Financial, Insurance, and Legal Information page.
The Northern Isles (Scots: Northren Isles; Scottish Gaelic: Na h-Eileanan a Tuath; Old Norse: Norðreyjar) is a chain (or archipelago) of islands off the north coast of mainland Scotland. The climate is cool and temperate and much influenced by the surrounding seas. There are two main island groups: Shetland and Orkney. There are a total of 36 inhabited islands, with the landscapes of the fertile agricultural islands of Orkney contrasting with the more rugged Shetland islands to the north, where the economy is more dependent on fishing and the oil wealth of the surrounding seas. Both have a developing renewable energy industry. They also share a common Pictish and Norse history and both were absorbed into the Kingdom of Scotland in the 15th century and then became part of the United Kingdom in the modern era. The islands played a significant naval role during the world wars of the 20th century.
Tourism is important to both archipelagos, with their distinctive prehistoric ruins playing a key part in their attraction, and there are regular ferry and air connections with mainland Scotland. The Scandinavian influence remains strong, especially in relation to local folklore, and both island chains have strong, though distinct, local cultures. The names of the islands are dominated by the Norse heritage, although some may retain pre-Celtic elements.
The phrase "Northern Isles" generally refers to the main islands of the Orkney and Shetland archipelagos. Stroma, which lies between mainland Scotland and Orkney is part of Caithness, and so falls under Highland council area for local government purposes, not Orkney. It is however clearly one of the "northern isles" of Scotland. Fair Isle and Foula are outliers of Shetland, but would normally be considered as part of Shetland and thus the Northern Isles. Similarly, Sule Skerry and Sule Stack although distant from the main group are part of Orkney and technically amongst the Northern Isles. However the other small islands that lie off the north coast of Scotland are in Highland and thus not usually considered to be part of the Northern Isles.
Orkney is situated 16 kilometres (10 mi) north of the coast of mainland Scotland, from which it is separated by the waters of the Pentland Firth. The largest island, known as the "Mainland", has an area of 523.25 square kilometres (202.03 sq mi), making it the sixth largest Scottish island. The total population in 2001 was 19,245 and the largest town is Kirkwall. Shetland is around 170 kilometres (110 mi) north of mainland Scotland, covers an area of 1,468 square kilometres (567 sq mi) and has a coastline 2,702 kilometres (1,679 mi) long. Lerwick, the capital and largest settlement, has a population of around 7,500, and about half of the archipelago's total population of 22,000 people live within 16 kilometres (10 mi) of the town. Orkney has 20 inhabited islands and Shetland a total of 16.
The superficial rock of Orkney is almost entirely Old Red Sandstone, mostly of Middle Devonian age. As in the neighbouring mainland county of Caithness, this sandstone rests upon the metamorphic rocks of the Moine series, as may be seen on the Orkney Mainland, where a narrow strip of the older rock is exposed between Stromness and Inganess, and again on the small island of Graemsay.
Middle Devonian basaltic volcanic rocks are found on western Hoy, on Deerness in eastern Mainland and on Shapinsay. Correlation between the Hoy volcanics and the other two exposures has been proposed, but differences in chemistry mean this remains uncertain. Lamprophyre dykes of Late Permian age are found throughout Orkney. Glacial striation and the presence of chalk and flint erratics that originated from the bed of the North Sea demonstrate the influence of ice action on the geomorphology of the islands. Boulder clay is also abundant and moraines cover substantial areas.
The geology of Shetland is quite different. It is extremely complex, with numerous faults and fold axes. These islands are the northern outpost of the Caledonian orogeny, and there are outcrops of Lewisian, Dalradian and Moine metamorphic rocks with similar histories to their equivalents on the Scottish mainland. There are also small Old Red Sandstone deposits and granite intrusions. The most distinctive feature is the ultrabasic ophiolite, peridotite and gabbro on Unst and Fetlar, which are remnants of the Iapetus Ocean floor. Much of Shetland's economy depends on the oil-bearing sediments in the surrounding seas.
Geological evidence shows that at around 6100 BC a tsunami caused by the Storegga Slides hit the Northern Isles (as well as much of the east coast of Scotland), and may have created a wave of up to 25 metres (82 ft) high in the voes of Shetland, where modern populations are highest.
The Northern Isles have a cool, temperate climate that is remarkably mild and steady for such a northerly latitude, due to the influence of the surrounding seas and the Gulf Stream. In Shetland average peak temperatures are 5 °C (41 °F) in February and 15 °C (59 °F) in August and temperatures over 21 °C (70 °F) are rare. The frost-free period may be as little as 3 months.
The average annual rainfall is 982 millimetres (38.7 in) in Orkney and 1,168 millimetres (46.0 in) in Shetland. Winds are a key feature of the climate and even in summer there are almost constant breezes. In winter, there are frequent strong winds, with an average of 52 hours of gales being recorded annually in Orkney. Burradale wind farm on Shetland, which operates with five Vestas V47 660 kW turbines, achieved a world-record capacity factor of 57.9% over the course of 2005 due to the persistent strong winds.
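For readers unfamiliar with the term, capacity factor is the ratio of the energy actually generated over a period to what the installed turbines could have produced running continuously at full rating. The minimal sketch below turns the figures quoted above (five 660 kW turbines, a 57.9% annual capacity factor) into an estimated annual output; the resulting energy total is a derived illustration, not a reported statistic.

```python
# Illustrative only: estimates Burradale's 2005 output from the capacity
# factor and turbine ratings quoted in the text above.

TURBINES = 5
RATING_KW = 660            # Vestas V47 rated output per turbine
CAPACITY_FACTOR = 0.579    # record annual capacity factor reported for 2005
HOURS_PER_YEAR = 8760

installed_kw = TURBINES * RATING_KW                              # 3,300 kW installed
energy_mwh = installed_kw * CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"Installed capacity: {installed_kw / 1000:.1f} MW")
print(f"Estimated 2005 output: {energy_mwh:,.0f} MWh (about 16.7 GWh)")
```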
Snowfall is usually confined to the period November to February and seldom lies on the ground for more than a day. Less rain falls from April to August although no month receives less than an average of 50 mm (2.0 in). Annual bright sunshine averages 1082 hours in Shetland and overcast days are common.
To tourists, one of the fascinations of the islands is their "nightless" summers. On the longest day in Shetland there are over 19 hours of daylight and complete darkness is unknown. This long twilight is known in the Northern Isles as the "simmer dim". Winter nights are correspondingly long with less than six hours of daylight at midwinter. At this time of year the aurora borealis can occasionally be seen on the northern horizon during moderate auroral activity.
There are numerous important prehistoric remains in Orkney, especially from the Neolithic period, four of which form the Heart of Neolithic Orkney UNESCO World Heritage Site, inscribed in 1999: Skara Brae; Maes Howe; the Stones of Stenness; and the Ring of Brodgar. The Knap of Howar Neolithic farmstead, situated on the island of Papa Westray, is probably the oldest preserved house in northern Europe. This structure was inhabited for 900 years from 3700 BC but was evidently built on the site of an even older settlement. Shetland is also extremely rich in physical remains of the prehistoric eras and there are over 5,000 archaeological sites all told. Funzie Girt is a remarkable Neolithic dividing wall that ran for 4 kilometres (2.5 mi) across the island of Fetlar, although it is the Iron Age that has provided the most outstanding archaeology on Shetland. Numerous brochs were erected at that time, of which the Broch of Mousa is the finest preserved example. In 2011 the collective site "The Crucible of Iron Age Shetland", including the Broch of Mousa, Old Scatness and Jarlshof, joined the UK's "Tentative List" of World Heritage Sites.
History, culture and politics
Pictish times
The culture that built the brochs is unknown, but by the late Iron Age the Northern Isles were part of the Pictish kingdom. The main archaeological relics from these times are symbol stones. One of the best examples is located on the Brough of Birsay; it shows three warriors with spears and sword scabbards combined with traditional Pictish symbols. The St Ninian's Isle Treasure was discovered in 1958. The silver bowls, jewellery and other pieces are believed to date from approximately 800 AD. O'Dell (1959) stated that "the treasure is the best survival of Scottish silver metalwork from the period" and that "the brooches show a variety of typical Pictish forms, with both animal-head and lobed geometrical forms of terminal".
Christianity probably arrived in Orkney in the 6th century and organised church authority emerged in the 8th century. The Buckquoy spindle-whorl, found at a Pictish site on Birsay, is an Ogham-inscribed artefact whose interpretation has caused controversy, although it is now generally considered to be of both Irish and Christian origin.
Norse era
The 8th century was also the time the Viking invasions of the Scottish seaboard commenced and with them came the arrival of a new culture and language for the Northern Isles, the fate of the existing indigenous population being uncertain. According to the Orkneyinga Saga, Vikings then made the islands the headquarters of pirate expeditions carried out against Norway and the coasts of mainland Scotland. In response, Norwegian king Harald Hårfagre ("Harald Fair Hair") annexed the Northern Isles in 875 and Rognvald Eysteinsson received Orkney and Shetland from Harald as an earldom as reparation for the death of his son in battle in Scotland. (Some scholars believe that this story is apocryphal and based on the later voyages of Magnus Barelegs.)
The islands were fully Christianised by Olav Tryggvasson in 995 when he stopped at South Walls on his way from Ireland to Norway. The King summoned the jarl Sigurd the Stout and said "I order you and all your subjects to be baptised. If you refuse, I'll have you killed on the spot and I swear I will ravage every island with fire and steel." Unsurprisingly, Sigurd agreed and the islands became Christian at a stroke, receiving their own bishop in the early 11th century.
Scots rule
In the 14th century Orkney and Shetland remained a Norwegian province, but Scottish influence was growing. Jon Haraldsson, who was murdered in Thurso in 1231, was the last of an unbroken line of Norse jarls, and thereafter the earls were Scots noblemen of the houses of Angus and St. Clair. In 1468 Shetland was pledged by Christian I, in his capacity as King of Norway, as security against the payment of the dowry of his daughter Margaret, betrothed to James III of Scotland. As the money was never paid, the connection with the crown of Scotland became perpetual. In 1470 William Sinclair, 1st Earl of Caithness, ceded his title to James III and the following year the Northern Isles were directly annexed to Scotland.
Early British era
From the early 15th century onwards the Shetlanders sold their goods through the Hanseatic League of German merchantmen. This trade with the North German towns lasted until the 1707 Act of Union, when high salt duties prohibited the German merchants from trading with Shetland. Shetland then went into an economic depression, as the Scottish and local traders were not as skilled in trading salted fish. However, some local merchant-lairds took up where the German merchants had left off, and fitted out their own ships to export fish from Shetland to the Continent. For the independent farmer-fishermen of Shetland this had negative consequences, as they now had to fish for these merchant-lairds.
British rule came at a price for many ordinary people as well as traders. The Shetlanders' nautical skills were sought by the Royal Navy: some 3,000 served during the Napoleonic wars from 1800 to 1815 and press gangs were rife. During this period 120 men were taken from Fetlar alone and only 20 of them returned home. By the late 19th century 90% of all Shetland was owned by just 32 people, and between 1861 and 1881 more than 8,000 Shetlanders emigrated. With the passing of the Crofters' Act in 1886, the Liberal prime minister William Gladstone emancipated crofters from the rule of the landlords. The Act enabled those who had effectively been landowners' serfs to become owner-occupiers of their own small farms.
The Orcadian experience was somewhat different. An influx of Scottish entrepreneurs helped to create a diverse and independent community of farmers, fishermen and merchants that called itself comunitatis Orcadie and proved increasingly able to defend its rights against its feudal overlords. In the 17th century, Orcadians formed the overwhelming majority of employees of the Hudson's Bay Company in Canada. The harsh climate of Orkney, the Orcadian reputation for sobriety and their boat handling skills made them ideal candidates for the rigours of the Canadian north. During this period, burning kelp briefly became a mainstay of the islands' economy. For example, on Shapinsay over 3,048 tonnes (3,000 long tons) of burned seaweed were produced per annum to make soda ash, bringing in £20,000 to the local economy. Agricultural improvements beginning in the 17th century resulted in the enclosure of the commons and ultimately, in the Victorian era, the emergence of large and well-managed farms using a five-shift rotation system and producing high-quality beef cattle. There is little evidence of an Orcadian fishing fleet until the 19th century, but it grew rapidly and 700 boats were involved by the 1840s, with Stronsay and then later Stromness becoming leading centres of development.[Note 1] Many Orcadian seamen became involved in whaling in Arctic waters during the 19th century, although the boats were generally based elsewhere in Britain.
World Wars
Orkney was the site of a navy base at Scapa Flow, which played a major role in World War I. After the Armistice in 1918, the German High Seas Fleet was transferred in its entirety to Scapa Flow while a decision was to be made on its future; however, the German sailors opened their sea-cocks and scuttled all the ships. During World War I the 10th Cruiser Squadron was stationed at Swarbacks Minn in Shetland, and during a single year from March 1917 more than 4,500 ships sailed from Lerwick as part of an escorted convoy system. In total, Shetland lost more than 500 men, a higher proportion than any other part of Britain, and there were waves of emigration in the 1920s and 1930s.
One month into World War II, the Royal Navy battleship HMS Royal Oak was sunk by a German U-boat in Scapa Flow. As a result barriers were built to close most of the access channels; these had the additional advantage of creating causeways enabling travellers to go from island to island by road instead of being obliged to rely on ferries. The causeways were constructed by Italian prisoners of war, who also constructed the ornate Italian Chapel. The Scapa Flow base was run down after the war, eventually closing in 1957.
During World War II a Norwegian naval unit nicknamed the "Shetland Bus" was established by the Special Operations Executive in the autumn of 1940 with a base first at Lunna and later in Scalloway to conduct operations around the coast of Norway. About 30 fishing vessels used by Norwegian refugees were gathered and the Shetland Bus conducted covert operations, carrying intelligence agents, refugees, instructors for the resistance, and military supplies. It made over 200 trips across the sea with Leif Larsen, the most highly decorated allied naval officer of the war, making 52 of them.
The problem of a declining population was significant in the post-war years, although in the last decades of the 20th century there was a recovery and life in the islands focused on growing prosperity and the emergence of a relatively classless society.
Modern times
Due to their history, the islands have a Norse, rather than a Gaelic flavour, and have historic links with the Faroes, Iceland, and Norway. The similarities of both geography and history are matched by some elements of the current political process. Both Orkney and Shetland are represented in the House of Commons as constituting the Orkney and Shetland constituency, which elects one Member of Parliament (MP), the current incumbent being Alistair Carmichael. Both are also within the Highlands and Islands electoral region for the Scottish Parliament.
However, there are also two separate constituencies that elect one Member of the Scottish Parliament each for Orkney and Shetland by the first past the post system. Orkney and Shetland also have separate local councils, which are dominated by independents, that is, councillors who are not members of a political party.
The Orkney Movement, a political party that supported devolution for Orkney from the rest of Scotland, contested the 1987 general election as the Orkney and Shetland Movement (a coalition of the Orkney movement and its equivalent for Shetland). Their candidate, John Goodlad, came 4th with 3,095 votes, 14.5% of those cast, but the experiment has not been repeated.
Ferry services link Orkney and Shetland to the rest of Scotland, the main routes being from Scrabster harbour, Thurso, to Stromness and from Aberdeen to Lerwick, both operated by Northlink Ferries. Inter-island ferry services are operated by Orkney Ferries and SIC Ferries, which are run by the respective local authorities, and Northlink also runs a Lerwick to Kirkwall service. The archipelago is exposed to wind and tide, and there are numerous sites of wrecked ships. Lighthouses are sited as an aid to navigation at various locations.
The main airport in Orkney is at Kirkwall, operated by Highland and Islands Airports. Loganair, a franchise of Flybe, provides services to the Scottish mainland (Aberdeen, Edinburgh, Glasgow-International and Inverness), as well as to Sumburgh Airport in Shetland. Similar services fly from Sumburgh to the Scottish mainland.
Inter-island flights are available from Kirkwall to several Orkney islands, and from the Shetland Mainland to most of the inhabited islands, including those from Tingwall Airport. There are frequent charter flights from Aberdeen to Scatsta near Sullom Voe, which are used to transport oilfield workers, and this small terminal has the fifth largest number of international passengers in Scotland. The scheduled air service between Westray and Papa Westray is reputedly the shortest in the world, at two minutes' duration.
The very different geologies of the two archipelagos have resulted in dissimilar local economies. In Shetland, the main revenue producers are agriculture, aquaculture, fishing, renewable energy, the petroleum industry (offshore crude oil and natural gas production), the creative industries and tourism. Oil and gas were first landed at Sullom Voe in 1978, and it has subsequently become one of the largest oil terminals in Europe. Taxes from the oil have increased public sector spending in Shetland on social welfare, art, sport, environmental measures and financial development. Three quarters of the islands' work force is employed in the service sector, and Shetland Islands Council alone accounted for 27.9% of output in 2003. Fishing remains central to the islands' economy today, with the total catch being 75,767 tonnes (74,570 long tons; 83,519 short tons) in 2009, valued at over £73.2 million.
By contrast, fishing has declined in Orkney since the 19th century and the impact of the oil industry has been much less significant. However, the soil of Orkney is generally very fertile and most of the land is taken up by farms, agriculture being by far the most important sector of the economy and providing employment for a quarter of the workforce. More than 90% of agricultural land is used for grazing for sheep and cattle, with cereal production utilising about 4% (4,200 hectares (10,000 acres)), although woodland occupies only 134 hectares (330 acres).
Orkney and Shetland have significant wind and marine energy resources, and renewable energy has recently come into prominence. The European Marine Energy Centre is a Scottish Government-backed research facility that has installed a wave testing system at Billia Croo on the Orkney Mainland and a tidal power testing station on the island of Eday. This has been described as "the first of its kind in the world set up to provide developers of wave and tidal energy devices with a purpose-built performance testing facility."
The Northern Isles have a rich folklore. For example, there are many Orcadian tales concerning trows, a form of troll that draws on the islands' Scandinavian connections. Local customs in the past included marriage ceremonies at the Odin Stone that forms part of the Stones of Stenness. The best known literary figures from modern Orkney are the poet Edwin Muir, the poet and novelist George Mackay Brown and the novelist Eric Linklater.
Shetland has a strong tradition of local music. The Forty Fiddlers was formed in the 1950s to promote the traditional fiddle style, which is a vibrant part of local culture today. Notable exponents of Shetland folk music include Aly Bain and the late Tom Anderson and Peerie Willie Johnson. Thomas Fraser was a country musician who never released a commercial recording during his life, but whose work has become popular more than 20 years after his untimely death in 1978.
Island names
The etymology of the island names is dominated by Norse influence. There follows a listing of the derivation of all the inhabited islands in the Northern Isles.
The oldest version of the modern name Shetland is Hetlandensis, recorded in 1190, becoming Hetland in 1431 after various intermediate transformations. This then became Hjaltland in the 16th century. As Shetlandic Norn was gradually replaced by Scots, Hjaltland became Ȝetland. When use of the letter yogh was discontinued, it was often replaced by the similar-looking letter z, hence Zetland, the mispronounced form used to describe the pre-1975 county council. However, an earlier name is Innse Chat – the island of the cats (or the cat tribe) – as referred to in early Irish literature, and it is just possible that this forms part of the Norse name. The Cat tribe also occupied parts of the northern Scottish mainland – hence the name of Caithness via the Norse Katanes ("headland of the cat"), and the Gaelic name for Sutherland, Cataibh, meaning "among the Cats".
The location of "Thule", first mentioned by Pytheas of Massilia when he visited Britain sometime between 322 and 285 BC, is not known for certain. When Tacitus mentioned it in AD 98, it is clear he was referring to Shetland.
| Island | Older/Norse form | Language | Meaning | Alternative derivations and notes |
| Bruray | | Norse | east isle | Norse: bruarøy – "bridge island" |
| East Burra | | Scots/Norse | east broch island | |
| Fair Isle | Frioarøy | Norse | fair island | Norse: feoerøy – "far-off isle" |
| Fetlar | Unknown | Pre-Celtic? | Unknown | Norse: fetill – "shoulder-straps" or "fat land". See also Funzie Girt. |
| Shetland Mainland | Hetlandensis | Norse/Gaelic | island of the cat people? | Perhaps originally from Gaelic: Innse Chat – see above |
| Muckle Roe | Rauðey Milkla | Scots/Norse | big red island | |
| Papa Stour | Papøy Stóra | Celtic/Norse | big island of the priests | |
| Trondra | | Norse | boar island | Norse: "Þrondr's isle" or "Þraendir's isle". The first is a personal name, the second a tribal name from the Trondheim area. |
| Unst | Unknown | Pre-Celtic? | Unknown | Norse: omstr – "corn-stack" or ørn-vist – "home of the eagle" |
| Vaila | Valøy | Norse | falcon island | Norse: "horse island", "battlefield island" or "round island" |
| West Burra | | Scots/Norse | west broch island | |
| Yell | Unknown | Pre-Celtic? | Unknown | Norse: í Ála – "deep furrow" or Jala – "white island" |
Pytheas described Great Britain as being triangular in shape, with a northern tip called Orcas. This may have referred to Dunnet Head, from which Orkney is visible. Writing in the 1st century AD, the Roman geographer Pomponius Mela called the Orkney islands Orcades, as did Tacitus in AD 98. "Orc" is usually interpreted as a Pictish tribal name meaning "young pig" or "young boar". The old Irish Gaelic name for the islands was Insi Orc ("island of the pigs").[Note 2] The ogham script on the Buckquoy spindle-whorl is also cited as evidence for the pre-Norse existence of Old Irish in Orkney. The Pictish association with Orkney is lent weight by the Norse name for the Pentland Firth – Pettaland-fjörðr, i.e. "Pictland Firth".
The Norse retained the earlier root but changed the meaning, providing the only definite example of an adaption of a pre-Norse place name in the Northern Isles. The islands became Orkneyar meaning "seal islands". An alternative name for Orkney is recorded in 1300—Hrossey, meaning "horse isle" and this may also contain a Pictish element of ros meaning "moor" or "plain".
Unlike most of the larger Orkney islands, the derivation of the name "Shapinsay" is not obvious. The final 'ay' is from the Old Norse for island, but the first two syllables are more difficult to interpret. Haswell-Smith (2004) suggests the root may be hjalpandis-øy (helpful island) due to the presence of a good harbour, although anchorages are plentiful in the archipelago. The first written record dates from 1375 in a reference to Scalpandisay, which may suggest a derivation from "judge's island". Another suggestion is "Hyalpandi's island", although no one of that name is known to have been associated with Shapinsay.
|Auskerry||Østr sker||Norse||east skerry|
|Egilsay||Égillsey||Norse or Gaelic||Egil's island||Possibly from Gaelic eaglais" – church island|
|Flotta||Flottøy||Norse||flat, grassy isle|
|North Ronaldsay||Rinansøy||Norse||Uncertain – possibly "Ringa's isle"|
|Orkney Mainland||Orcades||Various||isle(s) of the young pig||See above|
|Papa Stronsay||Papey Minni||Norse||priest isle of Stronsay||The Norse name is literally "little priest isle"|
|Papa Westray||Papey Meiri||Norse||priest isle of Westray||The Norse name is literally "big priest isle"|
|Shapinsay||Unknown||Possibly "helpful island"||See above|
|South Ronaldsay||Norse||Rognvald's island|
|South Walls||Sooth Was||Scots/Norse||"southern voes"||"Voe" means fjord. Possibly "south bays".|
|Stronsay||Possibly Strjónsøy||Norse||good fishing and farming island|
Uninhabited islands
Stroma, from the Norse Straumøy means "current island" or "island in the tidal stream", a reference to the strong currents in the Pentland Firth. The Norse often gave animal names to islands and these have been transferred into English in for example, the Calf of Flotta and Horse of Copinsay. Brother Isle is an anglicisation of the Norse breiðareøy meaning "broad beach island". The Norse holmr, meaning "a small islet" has become "Holm" in English and there are numerous examples of this use including Corn Holm, Thieves Holm and Little Holm. "Muckle" meaning large or big is one of few Scots words in the island names of the Nordreyar and appears in Muckle Roe and Muckle Flugga in Shetland and Muckle Green Holm and Muckle Skerry in Orkney. Many small islets and skerries have Scots or Insular Scots names such as Da Skerries o da Rokness and Da Buddle Stane in Shetland, and Kirk Rocks in Orkney.
- "Clarence G Sinclair: Mell Head, Stroma, Pentland Firth". Scotland's Places. Retrieved 27 May 2011.
- "Northern Isles". MSN Encarta. Retrieved 31 May 2011.
- Haswell-Smith (2004) pp. 334, 502
- "Orkney Islands" Vision of Britain. Retrieved 21 September 2009.
- Shetland Islands Council (2010) p. 4
- "Visit Shetland". Visit.Shetland.org Retrieved 25 December 2010.
- Haswell-Smith (2004) pp. 336–403
- General Register Office for Scotland (2003)
- Marshall, J.E.A., & Hewett, A.J. "Devonian" in Evans, D., Graham C., Armour, A., & Bathurst, P. (eds) (2003) The Millennium Atlas: petroleum geology of the central and northern North Sea.
- Hall, Adrian and Brown, John (September 2005) "Basement Geology". Retrieved 10 November 2008.
- Odling, N.W.A. (2000) "Point of Ayre". (pdf) "Caledonian Igneous Rocks of Great Britain: Late Silurian and Devonian volcanic rocks of Scotland". Geological Conservation Review 17 : Chapter 9, p. 2731. JNCC. Retrieved 4 October 2009.
- Hall, Adrian and Brown, John (September 2005) "Orkney Landscapes: Permian dykes" Retrieved 4 October 2009.
- Brown, John Flett "Geology and Landscape" in Omand (2003) p. 10.
- Gillen (2003) pp. 90–91
- Keay & Keay (1994) p. 867
- Smith, David "Tsunami hazards". Fettes.com. Retrieved 7 March 2011.
- Chalmers, Jim "Agriculture in Orkney Today" in Omand (2003) p. 129.
- "Shetland, Scotland Climate" climatetemp.info Retrieved 26 November 2010.
- Shetland Islands Council (2005) pp. 5–9
- "Northern Scotland: climate". Met office. Retrieved 18 June 2011.
- "The Climate of Orkney" Orkneyjar. Retrieved 18 June 2011.
- "Burradale Wind Farm Shetland Islands". REUK.co.uk. Retrieved 18 June 2011.
- "About the Orkney Islands". Orkneyjar. Retrieved 19 September 2009.
- "The Weather!". shetlandtourism.com. Retrieved 14 March 2011.
- John Vetterlein (21 December 2006). "Sky Notes: Aurora Borealis Gallery". Retrieved 9 September 2009.
- "Heart of Neolithic Orkney" UNESCO. Retrieved 29 August 2008.
- Wickham-Jones (2007) p. 40
- Armit (2006) pp. 31–33
- "The Knap of Howar" Orkney Archaeological Trust. Retrieved 27 August 2008.
- Turner (1998) p. 18
- Turner (1998) p. 26
- "Feltlar, Funziegirt" ScotlandsPlaces. Retrieved 1 May 2011.
- Fojut, Noel (1981) "Is Mousa a broch?" Proc. Soc. Antiq. Scot. 111 pp. 220–228
- "From Chatham to Chester and Lincoln to the Lake District – 38 UK places put themselves forward for World Heritage status" (7 July 2010) Department for Culture, Media and Sport. Retrieved 7 March 2011.
- "Sites make Unesco world heritage status bid shortlist" (22 March 2011) BBC Scotland. Retrieved 22 March 2011.
- Hunter (2000) pp. 44, 49
- Wickham-Jones (2007) pp. 106–07
- Ritchie, Anna "The Picts" in Omand (2003) p. 39
- O'Dell, A. et al (December 1959) "The St Ninian's Isle Silver Hoard". Antiquity 33 No 132.
- O'Dell, A. St. Ninian's Isle Treasure. A Silver Hoard Discovered on St. Ninian's Isle, Zetland on 4th July, 1958. Aberdeen University Studies. No. 141.
- Wickham-Jones (2007) p. 108
- Ritchie, Anna "The Picts" in Omand (2003) p. 42
- Thomson (2008) p. 69. quoting the Orkneyinga Saga chapter 12.
- Schei (2006) pp. 11–12
- Thomson (2008) p. 24-27
- Watt, D.E.R., (ed.) (1969) Fasti Ecclesia Scoticanae Medii Aevii ad annum 1638. Scottish Records Society. p. 247
- Crawford, Barbara E. "Orkney in the Middle Ages" in Omand (2003) pp. 72–73
- Nicolson (1972) p. 44
- Nicolson (1972) p. 45
- Schei (2006) pp. 14–16
- Nicolson (1972) pp. 56–57
- "History". visit.shetland.org. Retrieved 20 March 2011.
- Ursula Smith" Shetlopedia. Retrieved 12 October 2008.
- Schei (2006) pp. 16–17, 57
- "A History of Shetland" Visit.Shetland.org
- Thompson (2008) p. 183
- Crawford, Barbara E. "Orkney in the Middle Ages" in Omand (2003) pp. 78–79
- Thompson (2008) pp. 371–72
- Haswell-Smith (2004) pp. 364–65
- Thomson, William P. L. "Agricultural Improvement" in Omand (2003) pp. 93, 99
- Coull, James "Fishing" in Omand (2003) pp. 144–55
- Troup, James A. "Stromness" in Omand (2003) p. 238
- Nicolson (1972) pp. 91, 94–95
- Thomson (2008) pp. 434–36.
- Thomson (2008) pp. 439–43.
- "Shetlands-Larsen – Statue/monument". Kulturnett Hordaland. (Norwegian.) Retrieved 26 March 2011.
- "The Shetland Bus" scotsatwar.org.uk. Retrieved 23 March 2011.
- "Alistair Carmichael: MP for Orkney and Shetland" alistaircarmichael.org.uk. Retrieved 8 September 2009.
- "Candidates and Constituency Assessments". alba.org.uk – "The almanac of Scottish elections and politics". Retrieved 9 February 2010.
- "The Untouchable Orkney & Shetland Isles " (1 October 2009) www.snptacticalvoting.com Retrieved 9 February 2010.
- "Liam McArthur MSP" Scottish Parliament. Retrieved 8 September 2009.
- "Tavish Scott MSP" Scottish Parliament. Retrieved 20 March 2011.
- "Social Work Inspection Agency: Performance Inspection Orkney Islands Council 2006. Chapter 2: Context." The Scottish Government. Retrieved 8 September 2009.
- MacMahon, Peter and Walker, Helen (18 May 2007) "Winds of change sweep Scots town halls". Edinburgh. The Scotsman.
- "Political Groups" Shetland Islands Council. Retrieved 23 April 2010.
- "Candidates and Constituency Assessments: Orkney (Highland Region)" alba.org.uk. Retrieved 11 January 2008.
- Shetland Islands Council (2010) pp. 32, 35
- "2011 Timetables" Northlink Ferries. Retrieved 7 April 2011.
- "Getting Here" Visit Orkney. Retrieved 13 September 2009.
- "Ferries". Shetland.gov.uk. Retrieved 23 May 2011.
- "Lighthouse Library" Northern Lighthouse Board. Retrieved 8 July 2010.
- "Sumburgh Airport" Highlands and Islands Airports. Retrieved 16 March 2011.
- "UK Airport Statistics: 2005 – Annual" Table 10: EU and Other International Terminal Passenger Comparison with Previous Year. (pdf) CAA. Retrieved 16 March 2011.
- "Getting Here" Westray and Papa Westray Craft and Tourist Associations. Retrieved 18 June 2011.
- "Economy". move.shetland.org Retrieved 19 March 2011.
- "Asset Portfolio: Sullom Voe Termonal" (pdf) BP. Retrieved 19 March 2011.
- Shetland Islands Council (2010) p. 13
- "Shetland's Economy". Visit.Shetland.org. Retrieved 19 March 2011.
- Shetland Islands Council (2005) p. 13
- "Public Sector". move.shetland.org. Retrieved 19 March 2011.
- Shetland Islands Council (2010) pp. 16–17
- Chalmers, Jim "Agriculture in Orkney Today" in Omand (2003) p. 127, 133 quoting the Scottish Executive Agricultural Census of 2001 and stating that 80% of the land area is farmed if rough grazing is included.
- "Orkney Economic Review No. 23." (2008) Kirkwall. Orkney Islands Council.
- "European Marine Energy Centre". Retrieved 3 February 2007.
- "Pelamis wave energy project Information sheet". (pdf) E.ON Climate and Renewables UK Ltd. Retrieved 18 June 2011.
- "The Trows". Orkneyjar. Retrieved 19 September 2009.
- Muir, Tom "Customs and Traditions" in Omand (2003) p. 270
- Drever, David "Orkney Literature" in Omand (2003) p. 257
- "The Forty Fiddlers" Shetlopedia. Retrieved 8 March 2011.
- Culshaw, Peter (18 June 2006) " The Tale of Thomas Fraser" guardian.co.uk. Retrieved 8 March 2011.
- Gammeltoft (2010) p. 21
- Sandnes (2010) p. 9
- Gammeltoft (2010) p. 22
- Gammeltoft (2010) p. 9
- Watson (1994) p. 30
- Breeze, David J. "The ancient geography of Scotland" in Smith and Banks (2002) pp. 11–13
- Watson (1994) p. 7
- Haswell-Smith (2004) p. 425
- Haswell-Smith (2004) p. 459
- Haswell-Smith (2004) p. 433
- Haswell-Smith (2004) p. 408
- Gammeltoft (2010) pp. 19–20
- Haswell-Smith (2004) p. 471
- Haswell-Smith (2004) p. 419
- Haswell-Smith (2004) p. 449
- Haswell-Smith (2004) p. 434
- Haswell-Smith (2004) p. 481
- Haswell-Smith (2004) p. 430
- Haswell-Smith (2004) p. 452
- Haswell-Smith (2004) p. 467
- "Early Historical References to Orkney" Orkneyjar.com. Retrieved 27 June 2009.
- Tacitus (c. 98) Agricola. Chapter 10. "ac simul incognitas ad id tempus insulas, quas Orcadas vocant, invenit domuitque".
- Waugh, Doreen J. "Orkney Place-names" in Omand (2003) p. 116
- Pokorny, Julius (1959) Indo-European Etymological Dictionary. Retrieved 3 July 2009.
- "The Origin of Orkney" Orkneyjar.com. Retrieved 27 June 2009.
- "Proto-Celtic – English Word List" (pdf) (12 June 2002) University of Wales. p. 101
- Forsyth, Katherine (1995). "The ogham-inscribed spindle-whorl from Buckquoy: evidence for the Irish language in pre-Viking Orkney?" (PDF). The Proceedings of the Society of Antiquaries of Scotland (ARCHway) 125: 677–96. Retrieved 27 July 2007.
- Gammeltoft (2010) pp. 8–9
- Haswell-Smith (2004) p. 364
- "Orkney Placenames" Orkneyjar. Retrieved 10 October 2007.
- Haswell-Smith (2004) p. 363
- Haswell-Smith (2004) p. 354
- Haswell-Smith (2004) p. 386
- Gammeltoft (2010) p. 16
- Haswell-Smith (2004) p. 379
- Haswell-Smith (2004) p. 341
- Haswell-Smith (2004) p. 367
- Haswell-Smith (2004) p. 352
- Haswell-Smith (2004) p. 343
- Haswell-Smith (2004) p. 400
- Haswell-Smith (2004) p. 376
- Haswell-Smith (2004) p. 397
- Haswell-Smith (2004) p. 383
- Haswell-Smith (2004) p. 392
- Haswell-Smith (2004) p. 370
- Haswell-Smith (2004) p. 394
- Gammeltoft (2010) p. 18
- Haswell-Smith (2004) p. 336
- Mac an Tàilleir (2003) p. 109
- Haswell-Smith (2004) p. 465
- "Holm". Wiktionary. Retrieved 27 May 2011.
- General references
- Armit, Ian (2006) Scotland's Hidden History. Stroud. Tempus. ISBN 0-7524-3764-X
- Ballin Smith, B. and Banks, I. (eds) (2002) In the Shadow of the Brochs, the Iron Age in Scotland. Stroud. Tempus. ISBN 0-7524-2517-X
- Clarkson, Tim (2008) The Picts: A History. Stroud. The History Press. ISBN 978-0-7524-4392-8
- Haswell-Smith, Hamish (2004). The Scottish Islands. Edinburgh: Canongate. ISBN 978-1-84195-454-7.
- Gammeltoft, Peder (2010) "Shetland and Orkney Island-Names – A Dynamic Group". Northern Lights, Northern Words. Selected Papers from the FRLSU Conference, Kirkwall 2009, edited by Robert McColl Millar.
- General Register Office for Scotland (28 November 2003) Occasional Paper No 10: Statistics for Inhabited Islands. Retrieved 22 January 2011.
- Gillen, Con (2003) Geology and landscapes of Scotland. Harpenden. Terra Publishing. ISBN 1-903544-09-2
- Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins. ISBN 0-00-255082-2
- Mac an Tàilleir, Iain (2003) Ainmean-àite/Placenames. (pdf) Pàrlamaid na h-Alba. Retrieved 26 August 2012.
- Omand, Donald (ed.) (2003) The Orkney Book. Edinburgh. Birlinn. ISBN 1-84158-254-9
- Nicolson, James R. (1972) Shetland. Newton Abbott. David & Charles.
- Sandnes, Berit (2003) From Starafjall to Starling Hill: An investigation of the formation and development of Old Norse place-names in Orkney. (pdf) Doctoral Dissertation, NTU Trondheim.
- Sandnes, Berit (2010) "Linguistic patterns in the place-names of Norway and the Northern Isles" Northern Lights, Northern Words. Selected Papers from the FRLSU Conference, Kirkwall 2009, edited by Robert McColl Millar.
- Schei, Liv Kjørsvik (2006) The Shetland Isles. Grantown-on-Spey. Colin Baxter Photography. ISBN 978-1-84107-330-9
- Shetland Islands Council (2010) "Shetland in Statistics 2010" (pdf) Economic Development Unit. Lerwick. Retrieved 6 March 2011
- Thomson, William P. L. (2008) The New History of Orkney Edinburgh. Birlinn. ISBN 978-1-84158-696-0
- Turner, Val (1998) Ancient Shetland. London. B. T. Batsford/Historic Scotland. ISBN 0-7134-8000-9
- Wickham-Jones, Caroline (2007) Orkney: A Historical Guide. Edinburgh. Birlinn. ISBN 1-84158-596-3
- Watson, W. J. (1994) The Celtic Place-Names of Scotland. Edinburgh. Birlinn. ISBN 1-84158-323-5. First published 1926. | 1 | 11 |
<urn:uuid:a297d426-b454-4c26-b75f-2aeb4f6996e1> | Free PC / Intel x86 Emulators and Virtual Machines
Emulation / Virtualization of Intel/AMD x86-based Machines
Free PC / Intel i86 Emulators and Virtual Machines
Virtual Machines ("VM") allow you to run another operating system (or even the same one) on top of the current system you're currently running. For example, it's possible to run (say) Windows XP on your Windows 7 machine in a separate window. This requires that your computer currently uses an x86 (32 or 64 bit) processor (be it Intel or AMD). The virtual machine then virtualizes the hardware so that the guest operating system (the one you're running in a separate window) thinks it the only one running on the machine. The programs running in the guest are isolated from your main computer, making such a system very useful for programmers, webmasters using multiple browsers, and even just the ordinary person who wants to test different software without the latter harming their real machine. And it's also useful if you use a Mac, and want to run Windows programs alongside your Mac software.
Emulators are slightly different. They allow you to run an operating system that requires (say) an Intel/AMD x86 processor on a completely different CPU (processor). For example, it may allow you to run Windows XP on a PowerPC processor (something that normally won't work, since Windows XP requires an x86 processor). In other words, unlike the VM which only has to virtualize some of the hardware, emulators have to emulate everything, including the CPU. As such, emulators tend to be much slower than VMs.
This page lists both VMs and emulators for the Intel/AMD x86 (32 or 64 bits) processors (meaning that they either emulate the x86 or are virtual machines that run on the x86). The guest "machines" they create may or may not (depending on which software you choose) be able to provide access to your real computer's USB drives, CD/DVD drives, printers, network, etc.
Requirements: Some of the virtual machine software require your computer to have a processor (CPU) with hardware virtualization support. As far as I know, all modern AMD Athlon 64 bit CPUs have this (note: I said Athlon, not the budget Sempron). Things are more confusing where Intel CPUs are concerned, since the support for hardware virtualization (or "VT" as Intel calls it) seems a bit haphazard across their product range (that is, even if you have a higher-end CPU, it doesn't necessarily mean the chip has VT support). To check if your Intel chip has VT support, look for it in Intel's Virtualization Technology List. Even worse, even if your chip has such support, some computer manufacturers may have disabled it in the BIOS.
Some additional useful terminology that you may find useful: in the world of emulators and virtual machines, the host refers to your real, physical computer that you're currently using to read this page. For example, if you're using a computer running Mac OS X, then that computer is your host computer, Mac OS X is your host operating system, and so on. If you run a virtual machine on that computer, and install (say) Windows 7 into that virtual machine, then that Windows 7 is the guest operating system.
Note that this page does not list commercial PC emulators and virtualization software. If you need a commercial solution for their support, completeness of implementation, stability, speed, etc, you might want to take a look at VMWare Workstation if you use Windows, or Parallels Desktop for Mac and Parallels Workstation, if you use Mac OS X.
Free PC Emulators / IA-32 / x86 / x64 (x86-64) Emulators, Virtualization and Virtual Machines
- VMWare Player and VMWare Server
VMWare provides two free virtualization software. VMWare Player allows you to use virtual machines created by the commercial VMWare Workstation, VMWare Server, Virtual PC or the free third-party command-line VMX-Builder. It allows you to run operating systems like Linux, Windows, FreeBSD, etc, on top of your existing Windows or Linux system. VMware Server allows you to create and use virtual servers. The site also provides a number of prebuilt virtual machines for free operating systems (like Linux), including something they call a "browser appliance" — a complete system running under their virtualisation software (VMWare Player or others) that allows you to surf the Internet safely without jeopardising your main machine even if your browsing leads you to unsafe sites. Instead of cleaning up your machine with an antivirus program or an anti-spyware software, you simply ditch the changes made in the virutal machine and restart it. Update: VMWare Server has reached its "End of Life" (meaning that it will no longer be updated). Use either the Player or the Workstation edition.
- Microsoft Windows XP Mode for Windows 7
This is a special version of the free Microsoft Virtual PC software (see elsewhere on this page) designed specifically for users of Windows 7 Professional, Ultimate and Enterprise. (If you don't have those exact versions of Windows, try one of the other software listed below.) It requires your computer to have hardware virtualization support in the CPU. The raison d'être of this virtual machine is to allow you to run Windows XP programs in a virtual mode alongside your Windows 7 programs in a highly integrated fashion. Unlike the typical virtual machine, your programs act and behave as though they are directly running within your host system itself, and can interact not only with your hardware but also your desktop, documents, music and video folders. In other words, this is meant as a backward compatibility tool for you to run older programs on Windows 7. You should not use this if your intention is to test/debug programs and want to protect your main system. Note: unlike other virtualization solutions, this one does not appear to require you to have a separate Windows XP licence.
- Microsoft Virtual PC for Windows
Virtual PC for Windows, a virtualization software from Microsoft, is now available free of charge. Your host system must be running Windows. It officially supports running virtual machines with Windows loaded (you must have an additional licence for the copy of Windows running in your virtual machine). Unofficially, Linux also runs in the virtualizer, but poorly since Microsoft does not provide the necessary drivers (called "Virtual Machine Additions") for the current Linux distributions. (If you plan to use Linux, you should consider the other virtual machines listed on this page instead.)
VirtualBox is a virtualization solution that runs on Windows and Linux 32-bit hosts, and supporting, as guest OSes, Windows NT 4.0, 2000, XP, Server 2003, Vista, DOS/Windows 3.x, Linux and OpenBSD. It supports shared folders and virtual USB controllers in addition to the usual floppy and CDROM drive support. Note that the downloadable binaries can only be used for personal use or evaluation purpose.
- QEMU on Windows
QEMU on Windows is an emulator for x86, ARM, SPARC and PowerPC (see elsewhere on this page for more information). This site contains a Windows port with downloadable binaries.
Q is a cocoa port of QEMU (see elsewhere on this page) that allows you to run Windows, Linux, etc, on your Mac. You can exchange files between your host and guest operating systems. Q runs on OS/X and requires a G4/G5 processor. It can emulate a PC (x86 or x86_64 processor), a PowerPC (PPC), a G3, a Sun4m (32 bit Sparc processor), Sun4u (64 bit Sparc processor), Malta (32 bit MIPS processor) and a Mac99 PowerMac. It emulates a Soundblaster 16 card, a Cirrus CLGD 5446 PCI VGA card (or a dummy VGA card with Bochs VESA extensions), a PS/2 mouse and keyboard, 2 PCI IDE interfaces with hard disk and CD-ROM support, a floppy disk, NE2000 PCI network adapters and serial ports.
- QEMU CPU Emulator
QEMU supports the emulation of x86 processors, ARM, SPARC and PowerPC. Host CPUs (processors that can run the QEMU emulator) include x86, PowerPC, Alpha, Sparc32, ARM, S390, Sparc64, ia64, and m68k (some of these are still in development). When emulating a PC (x86), supported guest operating systems include MSDOS, FreeDOS, Windows 3.11, Windows 98SE, Windows 2000, Linux, SkyOS, ReactOS, NetBSD, Minix, etc. When emulating a PowerPC, currently tested guest OSes include Debian Linux.
- Xen Virtual Machine Monitor
Xen is an open source virtual machine that allows you to run multiple guest operating systems partitioned in their own virtual machines. It currently runs on Linux (as the host operating system). Supported guest operating systems include Linux, Windows XP (work in progress), NetBSD and FreeBSD. Unlike some of the other virtual machines and emulators, however, Xen requires you to have a modified version of the operating system as the guest OS.
- Bochs IA-32 Emulation Project
Bochs is an open source emulator for IA-32 (Intel x86) machines. It has the ability to emulate a 386, 486, Pentium, Pentium Pro, AMD64, with or without MMX, SSE, SSE2 and 3DNow, with common I/O devices (such as a SoundBlaster sound card, a NE2000 compatible network card, etc) and a custom BIOS. You can run Windows 95/NT, Linux and DOS as guest operating systems in that machine. Your guest OS will be installed in a large file which the emulator will use to mimic a hard disk for the emulated machine. Supported platforms (and here I mean platforms on which Bochs will run) include Win32 (Windows 9x/ME/2k/XP), Macintosh, Mac OS X, BeOS, Amiga MorphOS, OS/2, and Unix/X11 systems (including Linux).
- JPC: The Pure Java x86 PC Emulator
JPC is a PC emulator written using the Java programming language, and thus runs on any computer that has the Java runtime environment installed (eg Windows, Linux, Mac OS X, etc). At the time this mini-review was written, the emulator is able to run all versions of DOS as well as some versions of Linux and OpenBSD. Note: if you need to run serious programs (other than DOS games), you should probably choose one of the other emulators on this page. The emulator is probably intended more for academic interest and amusement than serious emulation tasks. (It is after all an emulator running on a virtual machine.)
OpenVZ is a server virtualization software built on Linux. If you have ever signed up with a web hosting company that provides virtual private servers (VPS), they are probably running some sort of server virtualization software like this. The software allows you to create isolated environments to run individual copies of operating systems and provide a supposedly secure virtual environment (VE) that behaves like standalone servers.
- DOSEMU DOS Emulation on Linux
DOSEMU is a well-known DOS emulator that runs in Linux (host OS). It can even run Windows 3.x in DOS emulation.
- DOSBox, an x86 Emulator with DOS
DOSBox is an x86 emulator with a built-in DOS. It was created primarily to run DOS games. It emulates a 286/386 in real and protected modes, XMS/EMS, a graphics card (VGA/EGA/CGA/VESA/Hercules/Tandy), SoundBlaster/Gravis Ultra sound card, etc. You can apparently even run the old 16-bit Windows 3.1 in the emulator. Host operating systems (ie, platforms on which you can run the DOSBox emulator) include Windows, Linux, Mac OS X, BeOS, FreeBSD, MorphOS and Amiga68k.
WINE, which stands for WINE Is Not an Emulator, allows you to run Windows programs in Linux and other Unix-type systems. It is a layer that implements the Windows API in terms of X and Unix. You do not need to have Windows at all to run your Windows applications in WINE. If you are looking for Windows emulators or clones, you may also want to check out the Free Windows Clones, Emulators and Emulation Layers page.
- Plex86 x86 Virtualization Project
Plex86 is a virtual machine for running Linux on x86 machines. It only runs on a Linux running on an x86 machine.
Minde is an emulator that allows you to run some DOS applications, demos and games under Linux.
- PCEmu 8086 PC Emulator for X
PCEmu emulates a basic 8086 PC with a VGA text-only display, allowing you to run some DOS programs. It runs under Linux. The program is no longer maintained.
History of IBM mainframes 1952–present
IBM mainframes are large computer systems produced by IBM from 1952 to the present. During the 1960s and 1970s, the term mainframe computer was almost synonymous with IBM products due to their market share. Current mainframes in IBM's line of business computers are developments of the basic design of the IBM System/360.
First and second generation
From 1952 into the late 1960s, IBM manufactured and marketed several large computer models, known as the IBM 700/7000 series. The first-generation 700s were based on vacuum tubes, while the later, second-generation 7000s used transistors. These machines established IBM's dominance in electronic data processing. IBM had two model categories: one (701, 704, 709, 7090, 7040) for engineering and scientific use, and one (702, 705, 7080, 7070, 7010) for commercial or data processing use. The two categories, scientific and commercial, generally used common peripherals but had completely different instruction sets, and there were incompatibilities even within each category.
IBM initially sold its computers without any software, expecting customers to write their own; programs were manually initiated, one at a time. Later, IBM provided compilers for the newly developed higher-level programming languages Fortran and COBOL. The first operating systems for IBM computers were written by IBM customers who did not wish to have their very expensive machines ($2M USD in the mid-1950s) sitting idle while operators set up jobs manually. These first operating systems were essentially scheduled work queues. It is generally thought that the first operating system used for real work was GM-NAA I/O, produced by General Motors' Research division in 1956. IBM enhanced one of GM-NAA I/O's successors, the SHARE Operating System, and provided it to customers under the name IBSYS. As software became more complex and important, the cost of supporting it on so many different designs became burdensome, and this was one of the factors which led IBM to develop System/360 and its operating systems.
The second generation (transistor-based) products were a mainstay of IBM's business and IBM continued to make them for several years after the introduction of the System/360. (Some IBM 7094s remained in service into the 1980s.)
Smaller machines
Prior to System/360, IBM also sold computers smaller in scale that were not considered mainframes, though they were still bulky and expensive by modern standards. These included:
- IBM 650 (vacuum tube logic, decimal architecture, business and scientific)
- IBM 305 RAMAC (vacuum tube logic, first computer with disk storage; see: Early IBM disk storage)
- IBM 1400 series (business data processing; very successful and many 1400 peripherals were used with the 360s)
- IBM 1620 (decimal architecture, engineering, scientific, and education)
IBM had difficulty getting customers to upgrade from the smaller machines to the mainframes because so much software had to be rewritten. The 7010 was introduced in 1962 as a mainframe-sized 1410. The later Systems 360 and 370 could emulate the 1400 machines. A desk size machine with a different instruction set, the IBM 1130, was released concurrently with the System/360 to address the niche occupied by the 1620. It used the same EBCDIC character encoding as the 360 and was mostly programmed in Fortran, which was relatively easy to adapt to larger machines when necessary.
Midrange computer is a designation used by IBM for a class of computer systems which fall in between mainframes and microcomputers.
IBM System/360
All that changed with the announcement of the System/360 (S/360) in April, 1964. The System/360 was a single series of compatible models for both commercial and scientific use. The number "360" suggested a "360 degree," or "all-around" computer system. System/360 incorporated features which had previously been present on only either the commercial line (such as decimal arithmetic and byte addressing) or the technical line (such as floating point arithmetic). Some of the arithmetic units and addressing features were optional on some models of the System/360. However, models were upward compatible and most were also downward compatible. The System/360 was also the first computer in wide use to include dedicated hardware provisions for the use of operating systems. Among these were supervisor and application mode programs and instructions, as well as built-in memory protection facilities. Hardware memory protection was provided to protect the operating system from the user programs (tasks) and the user tasks from each other. The new machine also had a larger address space than the older mainframes, 24 bits addressing 8-bit bytes vs. a typical 18 bits addressing 36-bit words.
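A quick back-of-envelope comparison shows how much larger the System/360 address space was; the figures below follow directly from the bit widths quoted above.

```python
# 24-bit byte addressing (System/360) versus an 18-bit address of 36-bit words.
s360_bytes = 2 ** 24                     # 16,777,216 addressable 8-bit bytes (16 MiB)
older_words = 2 ** 18                    # 262,144 addressable 36-bit words
older_byte_equiv = older_words * 36 / 8  # roughly 1.2 million bytes of storage

print(f"System/360:              {s360_bytes:,} bytes")
print(f"18-bit / 36-bit machine: {older_words:,} words "
      f"(about {older_byte_equiv:,.0f} bytes equivalent)")
```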
The smaller models in the System/360 line (e.g. the 360/30) were intended to replace the 1400 series while providing an easier upgrade path to the larger 360s. To smooth the transition from second generation to the new line, IBM used the 360's microprogramming capability to emulate the more popular older models. Thus 360/30s with this added cost feature could run 1401 programs and the larger 360/65s could run 7094 programs. To run old programs, the 360 had to be halted and restarted in emulation mode. Many customers kept using their old software and one of the features of the later System/370 was the ability to switch to emulation mode and back under operating system control.
The System/360 later evolved into the System/370, the System/390, and the 64-bit zSeries, System z, and zEnterprise machines. System/370 introduced virtual memory capabilities in all models other than the very first System/370 models; the OS/VS1 variant of OS/360 MFT, the OS/VS2 (SVS) variant of OS/360 MVT, and the DOS/VS variant of DOS/360 were introduced to use the virtual memory capabilities, followed by MVS, which, unlike the earlier virtual-memory operating systems, ran separate programs in separate address spaces, rather than running all programs in a single virtual address space. The virtual memory capabilities also allowed the system to support virtual machines; the VM/370 hypervisor would run one or more virtual machines running either standard System/360 or System/370 operating systems or the single-user Conversational Monitor System (CMS). A time-sharing VM system could run multiple virtual machines, one per user, with each virtual machine running an instance of CMS.
Today's systems
Processor units
The different processors on current IBM mainframes are:
- CP, Central Processor: general-purpose processor
- IFL, Integrated Facility for Linux: dedicated to Linux OSes (optionally under z/VM)
- ICF, Integrated Coupling Facility: designed to support Parallel Sysplex operations
- SAP, System Assist Processor: designed to handle various system accounting, management, and I/O channel operations
- zAAP, System z Application Assist Processor: currently limited to run only Java and XML processing
- zIIP, System z Integrated Information Processor: dedicated to run specific workloads including DB2, XML, and IPSec
Note that the ICF and ZAAP are essentially identical to CP, but distinguished for software cost control: they are slightly restricted such they cannot be used to run arbitrary operating systems, and thus do not count in software licensing costs (which are typically based on the number of CPs). There are other supporting processors typically installed inside mainframes such as cryptographic accelerators (CryptoExpress), the OSA-Express networking processor, and FICON Express disk I/O processors.
Software to allow users to run "traditional" workloads on zIIPs and zAAPs was briefly marketed by Neon Enterprise Software as "zPrime" but was withdrawn from the market in 2011 after a lawsuit by IBM.
Operating systems
The primary operating systems in use on current IBM mainframes include z/OS (which followed MVS and OS/390), z/VM (previously VM/CMS), z/VSE (which is in the DOS/360 lineage), z/TPF (a successor of Airlines Control Program), and Linux on System z such as SUSE Linux Enterprise Server and others. A few systems run MUSIC/SP and UTS (Mainframe UNIX). In October 2008, Sine Nomine Associates introduced OpenSolaris on System z.
Current IBM mainframes run all the major enterprise transaction processing environments and databases, including CICS, IMS, WebSphere Application Server, DB2, and Oracle. In many cases these software subsystems can run on more than one mainframe operating system.
There are software-based emulators for the System/370, System/390, and System z hardware, including FLEX-ES and the freely available Hercules emulator which runs under Linux, FreeBSD, Solaris, Mac OS X and Microsoft Windows.
See also
- A Brief History of Linux
- IBM 7090/94 IBSYS Operating System
- Gray, G. (1999). "EXEC II". Unisys History Newsletter 1 (3).
- Chuck Boyer, The 360 Revolution
- IBM Archives: System/360 Announcement
- IBM corp. (2005). "Mainframe concepts (page 31)".
- Radding, Alan. "Bye bye zPrime on System z". DancingDinosaur. Retrieved May 5, 2012.
Further reading
- Bashe, Charles J.; et al. (1986). IBM's Early Computers. MIT. ISBN 0-262-02225-7.
- Prasad, Nallur and Savit, Jeffrey (1994). IBM Mainframes: Architecture and Design, 2nd ed. McGraw-Hill Osborne Media. ISBN 0-07-050691-4.
- Pugh, Emerson W.; et al. (1991). IBM's 360 and Early 370 Systems. MIT. ISBN 0-262-16123-0.
Internet in the United Kingdom
Currently Internet access is available to businesses and home users in various forms, including dial-up, cable, DSL, and wireless.
Dial-up Internet access was first introduced in the UK by Pipex in March 1992, having been established during 1991 as the UK's first commercial Internet provider, and by November 1993 provided Internet service to some 150 customer sites.
This narrowband service has been almost entirely replaced by the new broadband technologies, and is generally only used as a backup.
Broadband Internet access in the UK was, initially, provided by a large number of regional cable television and telephone companies which gradually merged into larger groups. The development of digital subscriber line (DSL) technology has allowed broadband to be delivered via traditional copper telephone cables. Also, Wireless Broadband is now available in some areas. These three technologies (cable, DSL and wireless) now compete with each other.
More than half of UK homes had broadband in 2007, with an average connection speed of 4.6 Mbit/s. Bundled communications deals mixing broadband, digital TV, mobile phone and landline phone access were adopted by forty per cent of UK households in the same year, up by a third over the previous year. This high level of service is considered the main driver for the recent growth in online advertising and retail.
As of July 2011 BT's share had grown by six per cent and the company became the broadband market leader.
Cable broadband uses coaxial cables or optical fibre cables. The main cable service provider in the UK is Virgin Media, although Smallworld Cable have a substantial market share in the areas in which they operate (the Isle of Wight, Scotland and the north-west of England). The current maximum speed a cable customer can expect is 120 Mbit/s.
Digital subscriber line (DSL)
Asymmetric digital subscriber line (ADSL) was introduced to the UK in trial stages in 1998 and a commercial product was launched in 2000. In the United Kingdom, most exchanges, local loops and backhauls are owned and managed by BT Wholesale, who then wholesale connectivity via Internet service providers, who generally provide the connectivity to the Internet, support, billing and value added services (such as web hosting and email).
As of October 2012, BT operate 5630 exchanges across the UK with the vast majority being enabled for ADSL. Only a relative handful have not been upgraded to support ADSL products - in fact it is under 100 of the smallest and most rural exchanges. Some exchanges, numbering under 1000, have been upgraded to support SDSL products. However, these exchanges are often the larger exchanges based in major towns and cities so they still cover a large proportion of the population. SDSL products are aimed more at business customers and are priced higher than ADSL services.
Unbundled local loop
Many companies are now operating their own services using local loop unbundling. Initially Bulldog Communications in the London area and Easynet (through their sister consumer provider UK Online) enabled exchanges across the country from London to Central Scotland.
In November 2010, having purchased Easynet in the preceding months, Sky closed the business-centric UK Online with little more than a month's notice. Although Easynet continued to offer business-grade broadband connectivity products, UKO customers could not migrate to an equivalent Easynet service, only being offered either a MAC to migrate provider or the option of becoming a customer of the residential-only Sky Broadband ISP with an introductory discounted period. Also, some previously available service features like fastpath (useful for time-critical protocols like SIP) were not made available on Sky Broadband, leaving business users with a difficult choice particularly where UK Online were the only LLU provider. Since then, Sky Broadband has become a significant player in the quad play telecoms market, offering below-cost DSL, discounted full LLU line rental and call packages to customers (who receive additional discount if they are also Sky television subscribers).
Whilst Virgin Media is the nearest direct competitor, their quad play product is available to fewer homes given the fixed nature of their cable infrastructure. TalkTalk is the next DSL-based ISP with a mature quad play product portfolio (EE's being the merger of the Orange and T-Mobile service providers, and focusing their promotion on forthcoming fibre broadband and 4G LTE products).
Market consolidation and expansion have permitted service providers to offer faster and less expensive services with typical speeds of up to 24 Mbit/s downstream (subject to ISP and line length). They can offer products at sometimes considerably lower prices, due to not necessarily having to conform to the same regulatory requirements as BT Wholesale: for example, 8 unbundled LLU pairs can deliver 10 Mbit/s over 3,775 metres for half the price of a similar fibre connection.
Since 2005, another company, Be, has offered speeds of up to 24 Mbit/s downstream and 2.5 Mbit/s upstream using ADSL2+ with Annex M and is currently available in over 1,250 UK exchanges. Exchanges continue to be upgraded, subject to demand, across the country, although at a somewhat slower pace since BT's commencement of FTTC rollout plans and near-saturation in key geographical areas. Be was taken over by O2's parent company Telefónica in 2007; they have continued to expand the customer base and additionally make wholesale network access available to third party ISPs through the Be Wholesale brand. On 1 March 2013, Telefónica sold Be to Sky, which promised to maintain the unlimited services offered on the Be network.
Up until the launch of "Max" services, the only ADSL packages available via BT Wholesale were known as IPStream Home 250, Home 500, Home 1000 and Home 2000 (contention ratio of 50:1); and Office 500, Office 1000, and Office 2000 (contention ratio of 20:1). The number in the product name indicates the downstream data rate in kilobits per second. The upstream data rate is up to 250 kbit/s for all products.
For BT Wholesale ADSL products, users initially had to live within 3.5 kilometres of the local telephone exchange to receive ADSL, but this limit was increased thanks to rate-adaptive digital subscriber line (RADSL), although users with RADSL may have a reduced upstream rate, depending on the quality of their line. There are still areas that cannot receive ADSL because of technical limitations, not least of which networks in housing areas built with aluminium cable rather than copper in the 1980s and 1990s, and areas served by optical fibre (TPON), though these are slowly being serviced with copper.
In September 2004, BT Wholesale removed the line-length/loss limits for 500 kbit/s ADSL, instead employing a tactic of "suck it and see" — enabling the line, then seeing if ADSL would work on it. This sometimes includes the installation of a filtered faceplate on the customer's master socket, so as to eliminate poor quality telephone extension cables inside the customer's premises which can be a source of high frequency noise.
In the past, the majority of home users used packages with 500 kbit/s (downstream) and 250 kbit/s (upstream) with a 50:1 contention ratio. However, BT Wholesale introduced the option of a new charging structure to ISPs which means that the wholesale service cost was the same regardless of the ADSL data rate, with charges instead being based on the amount of data transferred. Nowadays, most home users use a package whose data rate is only limited by the technical limitations of their telephone line. Initially this was 2 Mbit/s downstream. Nowadays, most home products are ADSL Max based (up to 7.15 Mbit/s).
Max and Max Premium
Following successful trials, BT announced the availability of higher speed services known as BT ADSL Max and BT ADSL Max Premium in March 2006. BT made the "Max" product available to more than 5300 exchanges, serving around 99% of UK households and businesses.
Both Max services offer downstream data rates of up to 7.15 Mbit/s. Upstream data rates are up to 400 kbit/s for the standard product and up to 750 kbit/s for the premium product. (Whilst the maximum downstream data rate for IPStream Max is often touted as 8 Mbit/s, this is in fact misleading because, in a departure from previous practice, it actually refers to the gross ATM data rate. The maximum data rate available at the IP level is 7.15 Mbit/s; the maximum TCP payload rate — the rate you would actually see for file transfer — would be about 7.0 Mbit/s.)
The actual downstream data rate achieved on any given Max line is subject to the capabilities of the line. Depending on the stable ADSL synchronisation rate negotiated, BT's system applies a fixed rate limit from one of the following data rates: 160 kbit/s, 250 kbit/s, 500 kbit/s, then in 500 kbit/s steps up to 7.0 Mbit/s, then a final maximum rate of 7.15 Mbit/s.
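The rate-limit ladder described above can be written out explicitly. The sketch below simply picks the highest step that a given stable downstream sync rate (in kbit/s) can support; the real BT system also tracked the line over several days and applied extra headroom, which is omitted here.

```python
# Simplified model of the IPStream Max fixed rate-limit steps listed above.
RATE_STEPS_KBIT = [160, 250] + list(range(500, 7001, 500)) + [7150]

def max_rate_limit(stable_sync_kbit):
    """Highest fixed rate limit not exceeding the stable sync rate."""
    eligible = [step for step in RATE_STEPS_KBIT if step <= stable_sync_kbit]
    return eligible[-1] if eligible else None

print(max_rate_limit(6816))   # a 6816 kbit/s sync line is capped at 6500 kbit/s
print(max_rate_limit(8128))   # a full-rate line gets the 7150 kbit/s ceiling
```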
Contention ratios are no longer officially stated either, except that the Office products will generally see a reduced level of contention to their Home counterparts. This is the product of amalgamating Home and Office users onto a single consolidated, but larger, virtual path.
On August 13, 2004 the ISP Wanadoo (formerly Freeserve and now Orange SA in the UK) were told by the Advertising Standards Authority to change the way that they advertised their 512 kbit/s broadband service in Britain, removing the words "full speed" which rival companies claimed was misleading people into thinking it was the fastest available service.
In a similar way, on April 9, 2003 the Advertising Standards Authority ruled against ISP NTL, saying that NTL's 128 kbit/s cable modem service must not be marketed as "broadband". Ofcom reported in June 2005 that there were more broadband than dial-up connections for the first time in history.
In the third quarter of 2005, with the merger of NTL and Telewest, a new alliance was formed to create the largest market share of broadband users. This alliance brought about large increases in bandwidth allocations for cable customers (the minimum speed increased from the industry norm of 512 kbit/s to 2 Mbit/s for home lines, with both companies planning to have all domestic customers upgraded to at least 4 Mbit/s downstream, and ranging up to 10 Mbit/s and beyond, by mid-2006), along with the supply of integrated services such as digital TV and phone packages.
March 2006 saw the nationwide launch of BT Wholesale's up to 8 Mbit/s ADSL services, known as Max ADSL. Max based packages are available to end users on any broadband enabled exchange in the UK.
Since 2003 BT has been introducing SDSL to exchanges in many of the major cities. Services are currently offered at upload/download speeds of 256 kbit/s, 512 kbit/s, 1 Mbit/s or 2 Mbit/s. Unlike ADSL, which is typically 256 kbit/s upload, SDSL upload speeds are the same as the download speed. BT usually provide a new copper pair for SDSL installs, which can be used only for the SDSL connection. At a few hundred pounds a quarter, SDSL is significantly more expensive than ADSL, but is significantly cheaper than a leased line. SDSL is marketed to businesses and offers low contention ratios, and in some cases, a Service Level Agreement. At present, the BT Wholesale SDSL enablement programme has stalled, most probably due to a lack of uptake.
Recent developments
In 2006, the UK market has been about convergence and takeovers. TalkTalk threw down the gauntlet by offering so-called ‘free’ broadband along with their telephone package. Rival, Orange responded by offering ‘free’ broadband for some mobile customers. Many other smaller ISPs have responded by offering similar bundled packages. O2 also entered the broadband market by taking over LLU provider Be, while Sky (BSkyB) had already taken over LLU broadband provider Easynet. In July 2006, Sky entered the broadband arena by announcing 2 Mbit/s broadband to be available free to Sky customers and a higher speed connection at a lower price than most rivals.
Also, Virgin Media declared that 13 million UK homes are covered by its optical fibre broadband network, and that by the end of 2012 they should all be covered by the 100 Mbit/s broadband that Virgin Media is rolling out. There are over 100 towns across the UK that have access to this superfast broadband network.
In October 2011, British operator Hyperoptic launched a 1Gbit/sec FTTH service in London.
In October 2012, British operator Gigler UK launched a 1Gbit/sec down and 500Mbit/sec up FTTH service in Bournemouth using the CityFibre network.
Wireless broadband
The term "wireless broadband" generally refers to the provision of a wireless router with a broadband connection.
Mobile broadband
A new mobile broadband technology emerging in the United Kingdom is 4G, which is intended to replace the older 3G technology currently in use and could see download speeds increased to 300 Mbit/s. EE has been the first company to start building a full-scale 4G network throughout the United Kingdom.
School children's access to the Internet
A 2011 survey of British parents, commissioned by the filtering software company Westcoastcloud, reported that:
- nearly a third of UK children have a mobile phone,
- 15% use smartphones regularly,
- 10% have an iPhone,
- 5% have an iPad,
- 16% have access to a laptop computer,
- 8% have a social networking account,
- 25% have an e-mail address,
- most use their smartphone primarily to make phone calls, but 20% send and receive text messages, 10% go online, and 5% draft and send email,
- 50% have no parental controls installed on their internet connected devices,
- 5% use their phone or laptop when their parents are out,
- 50% of parents said they have concerns about the lack of controls installed on their children's Internet devices,
- 68% of parents who bought their children smartphones said they did so to keep better track of their children,
- 17% of surveyed parents bought phones after being pestered by their kids, and
- most pay around 10 British pounds per month on children's phone bills, although 20% pay 20 pounds or more.
The survey gathered answers from 2,000 British parents of children ages 10 and under. The survey was used as a marketing tool to coincide with the release of Westcoastcloud's new iPad Internet content filtering product.
See also
- Digital Britain
- Internet censorship in the United Kingdom
- Illegal file sharing in the United Kingdom
- Media in the United Kingdom
- Telecommunications in the United Kingdom
- "About PIPEX". GTNet. Retrieved 2012-06-30.
- "UUNET PIPEX - Encyclopedia". Encyclo.co.uk. Retrieved 2012-06-30.
- "More than half of UK homes have broadband - 22 Aug 2007 - Computing News". Computing.co.uk. Retrieved 2012-09-20.
- Kitz (2005-12-07). "UK ISP Market Share". Kitz. Retrieved 2012-06-30.
- "UK broadband market share". guardian.co.uk. 2011-07-28. Retrieved 2011-07-28.
- Williams, Christopher (2007-08-23). "Ofcom: the Internet is for coffin dodgers and girls". Theregister.co.uk. Retrieved 2012-09-20.
- "the complete report". Ofcom.org.uk. 2007-08-23. Retrieved 2012-06-30.
- SamKnows (2012-10-16). "SamKnows - Regional Broadband Statistics". SamKnows. Retrieved 2012-10-16.
- Ferguson, Andrew (2006-06-15). "Broadband for all - not!". The Guardian (London). Retrieved 2010-05-05.
- Williams, Christopher (November 12, 2010). "Sky confirms UK Online closure". The Register. Retrieved October 14, 2012.
- Wakeling, Tim (16 November 2010). "Tim Wakeling's PC Inner Circle: UKonline closing". Tim Wakeling. Retrieved October 16, 2012.
- Ferguson, Andrew (11 November 2010). "UK Online to close January 14th 2011 - Official". ThinkBroadband. Retrieved October 16, 2012.
- LLU VS Fibre. Infographic, MLL Telecom 2011
- 1 kbit = 1000 bit
- "UK 'embraces digital technology'". BBC News. 2005-07-13. Retrieved 2010-05-05.
- "BT Wholesale confirms launch of the Max services". thinkbroadband. Retrieved 2012-06-30.
- "Phone firm 'plans free broadband'". BBC. 2006-04-09. Retrieved 2010-05-05.
- "BT select three ISP's for System Trial". Thinkbroadband. 14 September 2007.
- "BT rolls out 100Mbps broadband in Milton Keynes". PC Advisor. 2012-06-18. Retrieved 2012-06-30.
- "Virgin Media offers 100Mb broadband to over 4 million homes". BroadbandIN.co.uk. 10 June 2011.
- "1Gbit/sec broadband lands in London | Broadband | News". PC Pro. Retrieved 2012-06-30.
- "UK Filtering Software Company Releases Survey on Kids' Internet Access", Quichen Zhang, OpenNet Initiative, 26 September 2011
- "10% of UK elementary schoolkids own an iPhone; 5% own an iPad", Brad Reed, Network World, 23 September 2011
- "Westcoastcloud survey reveals 1 in 10 UK primary school children have iPhones", Westcoastcloud, accessed 3 October 2011 | 1 | 2 |
A multimeter or a multitester, also known as a volt/ohm meter or VOM, is an electronic measuring instrument that combines several measurement functions in one unit. A typical multimeter may include features such as the ability to measure voltage, current and resistance. Multimeters may use analog or digital circuits—analog multimeters and digital multimeters (often abbreviated DMM or DVOM). Analog instruments are usually based on a microammeter whose pointer moves over a scale calibrated for all the different measurements that can be made; digital instruments usually display digits, but may display a bar of length proportional to the quantity measured.
A multimeter can be a hand-held device useful for basic fault finding and field service work or a bench instrument which can measure to a very high degree of accuracy. They can be used to troubleshoot electrical problems in a wide array of industrial and household devices such as electronic equipment, motor controls, domestic appliances, power supplies, and wiring systems.
Multimeters are available in a wide range of features and prices. Cheap multimeters can cost less than US$10, while top-of-the-line multimeters can cost more than US$5,000.
The first moving-pointer current-detecting device was the galvanometer. These were used to measure resistance and voltage by using a Wheatstone bridge, and comparing the unknown quantity to a reference voltage or resistance. While usable in a lab, the technique was very slow and impractical in the field. These galvanometers were bulky and delicate.
The D'Arsonval/Weston meter movement used a fine metal spring to give proportional measurement rather than just detection, and built-in permanent field magnets made deflection independent of the position of the meter. These features enabled dispensing with Wheatstone bridges, and made measurement quick and easy. By adding a series or shunt resistor, more than one range of voltage or current could be measured with one movement.
Multimeters were invented in the early 1920s as radio receivers and other vacuum tube electronic devices became more common. The invention of the first multimeter is attributed to Post Office engineer Donald Macadie, who became dissatisfied with having to carry the many separate instruments required for the maintenance of telecommunication circuits. Macadie invented an instrument which could measure amperes, volts and ohms, so the multifunctional meter was then named Avometer. The meter comprised a moving coil meter, voltage and precision resistors, and switches and sockets to select the range.
Macadie took his idea to the Automatic Coil Winder and Electrical Equipment Company (ACWEEC, founded probably in 1923). The first AVO was put on sale in 1923, and although it was initially a DC-only instrument many of its features remained almost unaltered right through to the last Model 8.
Pocket watch style meters were in widespread use in the 1920s, at much lower cost than Avometers. The metal case was normally connected to the negative connection, an arrangement that caused numerous electric shocks. The technical specs of these devices were often crude, for example the one illustrated has a resistance of just 33 ohms per volt, a non-linear scale and no zero adjustment.
The usual analog multimeter when used for voltage measurements loads the circuit under test to some extent (a microammeter with full-scale current of 50μA, the highest sensitivity commonly available, must draw at least 50μA from the circuit under test to deflect fully). This may load a high-impedance circuit so much as to perturb the circuit, and also to give a low reading. Vacuum Tube Voltmeters or valve voltmeters (VTVM, VVM) were used for voltage measurements in electronic circuits where high impedance was necessary. The VTVM had a fixed input impedance of typically 1 megohm or more, usually through use of a cathode follower input circuit, and thus did not significantly load the circuit being tested. Before the introduction of digital electronics high-impedance analog transistor and FET voltmeters were used. Modern digital meters and some modern analog meters use electronic input circuitry to achieve high input impedance—their voltage ranges are functionally equivalent to VTVMs.
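The loading effect described above is easy to quantify. A 50 μA full-scale movement corresponds to a sensitivity of 20,000 ohms per volt, so on a 10 V range the meter presents about 200 kΩ to the circuit; the divider values in the sketch below are illustrative, not taken from the text.

```python
# Worked example: how a 20,000 ohms-per-volt analog meter loads a high-impedance
# divider. A 100 k / 100 k divider across 10 V should read 5 V, but the meter's
# 200 k input resistance pulls the reading down to about 4 V.
full_scale_current = 50e-6                 # 50 microamp movement
sensitivity = 1 / full_scale_current       # 20,000 ohms per volt
meter_r = sensitivity * 10                 # input resistance on the 10 V range

r_top = r_bottom = 100e3
true_v = 10 * r_bottom / (r_top + r_bottom)            # unloaded: 5.00 V
loaded_r = r_bottom * meter_r / (r_bottom + meter_r)   # meter in parallel with lower leg
read_v = 10 * loaded_r / (r_top + loaded_r)            # indicated: about 4.00 V

print(f"{sensitivity:,.0f} ohms/volt; reads {read_v:.2f} V instead of {true_v:.2f} V")
```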
Additional scales such as decibels, and functions such as capacitance, transistor gain, frequency, duty cycle, display hold, and buzzers which sound when the measured resistance is small, have been included on many multimeters. While multimeters may be supplemented by more specialized equipment in a technician's toolkit, some modern multimeters include even more additional functions for specialized applications (e.g., temperature with a thermocouple probe, inductance, connectivity to a computer, speaking measured value, etc.).
Contemporary multimeters can measure many quantities. The common ones are:
- Voltage, alternating and direct, in volts.
- Current, alternating and direct, in amperes.
The frequency range for which AC measurements are accurate must be specified.
Additionally, multimeters may measure:
- Capacitance in farads.
- Conductance in siemens.
- Duty cycle as a percentage.
- Frequency in hertz.
- Inductance in henrys.
- Temperature in degrees Celsius or Fahrenheit, with an appropriate temperature test probe, often a thermocouple.
Digital multimeters may also include circuits for:
- Continuity; beeps when a circuit conducts.
- Diodes (measuring forward drop of diode junctions, i.e., diodes and transistor junctions) and transistors (measuring current gain and other parameters)
- "Battery check" for simple 1.5 and 9V batteries. This is a current loaded voltage scale. Battery checking (ignoring internal resistance, which increases as the battery is used up), can be done less accurately using a DC voltage scale.
Various sensors can be attached to multimeters to extend the kinds of measurements they can take.
The resolution of a multimeter is often specified in "digits" of resolution. For example, the term 5½ digits refers to the number of digits displayed on the readout of a multimeter.
By convention, a half digit can display either a zero or a one, while a three-quarters digit can display a numeral higher than a one but not nine. Commonly, a three-quarters digit refers to a maximum value of 3 or 5. The fractional digit is always the most significant digit in the displayed value. A 5½ digit multimeter would have five full digits that display values from 0 to 9 and one half digit that could only display 0 or 1. Such a meter could show positive or negative values from 0 to 199,999. A 3¾ digit meter can display a quantity from 0 to 3,999 or 5,999, depending on the manufacturer.
While a digital display can easily be extended in precision, the extra digits are of no value if not accompanied by care in the design and calibration of the analog portions of the multimeter. Meaningful high-resolution measurements require a good understanding of the instrument specifications, good control of the measurement conditions, and traceability of the calibration of the instrument.
Specifying "display counts" is another way to specify the resolution. Display counts give the largest number, or the largest number plus one (so the count number looks nicer) the multimeter' display can show, ignoring a decimal separator. For example, a 5½ digit multimeter can also be specified as a 199999 display count or 200000 display count multimeter. Often the display count is just called the count in multimeter specifications.
Resolution of analog multimeters is limited by the width of the scale pointer, vibration of the pointer, the accuracy of scale printing, zero calibration, the number of ranges, and errors due to non-horizontal use of the mechanical display. Accuracy of readings is also often compromised by miscounting division markings, errors in mental arithmetic, parallax errors, and less than perfect eyesight. Mirrored scales and larger meter movements are used to improve resolution; the equivalent of two and a half to three digits is usual, and is adequate for the limited precision needed in most measurements.
Resistance measurements, in particular, are of low precision due to the typical resistance measurement circuit which compresses the scale heavily at the higher resistance values. Inexpensive analog meters may have only a single resistance scale, seriously restricting the range of precise measurements. Typically an analog meter will have a panel adjustment to set the zero-ohms calibration of the meter, to compensate for the varying voltage of the meter battery.
Digital multimeters generally take measurements with accuracy superior to their analog counterparts. Standard analog multimeters measure with typically three percent accuracy, though instruments of higher accuracy are made. Standard portable digital multimeters are specified to have an accuracy of typically 0.5% on the DC voltage ranges. Mainstream bench-top multimeters are available with specified accuracy of better than ±0.01%. Laboratory grade instruments can have accuracies of a few parts per million.
Accuracy figures need to be interpreted with care. The accuracy of an analog instrument usually refers to full-scale deflection; a measurement of 10V on the 100V scale of a 3% meter is subject to an error of 3V, 30% of the reading. Digital meters usually specify accuracy as a percentage of reading plus a percentage of full-scale value, sometimes expressed in counts rather than percentage terms.
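A worked example makes the difference concrete. The snippet below is my own illustration; the "±(0.5% of reading + 2 counts)" figure is an assumed, typical DMM specification, not a value from the text:

```python
# Error of an analog meter specified as a percentage of full scale (example from the text):
analog_error = 0.03 * 100            # 3% of a 100 V scale = +/-3 V, whatever the reading
print(analog_error / 10 * 100)        # 30 -> a 10 V reading may be off by 30%

# Error of a DMM specified as +/-(% of reading + counts); figures assumed for illustration.
def dmm_error(reading, pct_of_reading=0.5, counts=2, resolution=0.001):
    return reading * pct_of_reading / 100 + counts * resolution

print(dmm_error(10.0))                # ~0.052 -> +/-0.052 V, about 0.5% of the same 10 V reading
```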
A multimeter's quoted accuracy is specified as being that of the lower (mV) DC range, and is known as the "basic DC volts accuracy" figure. Higher DC voltage ranges, current, resistance, AC and other ranges will usually have a lower accuracy than the basic DC volts figure. AC measurements only meet specified accuracy within a specified range of frequencies.
Manufacturers can provide calibration services so that new meters may be purchased with a certificate of calibration indicating the meter has been adjusted to standards traceable to, for example, the US National Institute of Standards and Technology or another national standards laboratory.
Test equipment drifts out of calibration over time, and the specified accuracy cannot be relied upon indefinitely. For more expensive equipment, manufacturers and third parties provide calibration services so that older equipment may be recalibrated and recertified. The cost of such services is disproportionate for inexpensive equipment; however extreme accuracy is not required for most routine testing. Multimeters used for critical measurements may be part of a metrology program to assure calibration.
Sensitivity and input impedance
When used for measuring voltage, the input impedance of the multimeter must be very high compared to the impedance of the circuit being measured; otherwise circuit operation may be changed, and the reading will also be inaccurate.
Meters with electronic amplifiers (all digital multimeters and some analog meters) have a fixed input impedance that is high enough not to disturb most circuits that are encountered. This is often either one or ten megohms; the standardisation of the input resistance allows the use of external high-resistance probes which form a voltage divider with the input resistance to extend voltage range up to tens of thousands of volts.
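The divider arithmetic for such an external probe is straightforward. The sketch below is my own illustration, assuming a 10-megohm meter input and a hypothetical 1000:1 high-voltage probe:

```python
# A high-voltage probe is essentially a large series resistor forming a divider
# with the meter's input resistance. All values below are assumptions for illustration.
R_input = 10e6                       # assumed DMM input resistance, ohms
ratio = 1000                         # desired division ratio of the probe
R_series = R_input * (ratio - 1)     # 9.99 gigohm series resistor inside the probe

V_circuit = 25_000.0                 # hypothetical 25 kV point under test
V_at_meter = V_circuit * R_input / (R_input + R_series)
print(R_series, V_at_meter)          # 9.99e9 ohms, 25.0 V presented to the meter input
```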
Most analog multimeters of the moving-pointer type are unbuffered, and draw current from the circuit under test to deflect the meter pointer. The impedance of the meter varies depending on the basic sensitivity of the meter movement and the range which is selected. For example, a meter with a typical 20,000 ohms/volt sensitivity will have an input resistance of two million ohms on the 100 volt range (100 V * 20,000 ohms/volt = 2,000,000 ohms). On every range, at full scale voltage of the range, the full current required to deflect the meter movement is taken from the circuit under test. Lower sensitivity meter movements are acceptable for testing in circuits where source impedances are low compared to the meter impedance, for example, power circuits; these meters are more rugged mechanically. Some measurements in signal circuits require higher sensitivity movements so as not to load the circuit under test with the meter impedance.
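The ohms-per-volt figure translates directly into input resistance and loading error. The following sketch reproduces the 20,000 ohms/volt example from the text and adds a hypothetical high-impedance circuit of my own to show the loading effect:

```python
# Input resistance of an unbuffered analog meter: sensitivity times the selected range.
def meter_resistance(ohms_per_volt, range_volts):
    return ohms_per_volt * range_volts

Rm = meter_resistance(20_000, 100)        # 2,000,000 ohms on the 100 V range (as in the text)

# Assumed circuit: the midpoint of two 1-megohm resistors across a 10 V supply (5 V unloaded).
R1 = R2 = 1e6
R2_loaded = R2 * Rm / (R2 + Rm)           # meter resistance in parallel with the lower resistor
print(Rm, 10 * R2_loaded / (R1 + R2_loaded))   # 2000000.0, ~4.0 -> the meter reads 4 V, not 5 V
```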
Sometimes sensitivity is confused with resolution of a meter, which is defined as the lowest voltage, current or resistance change that can change the observed reading.
For general-purpose digital multimeters, the lowest voltage range is typically several hundred millivolts AC or DC, but the lowest current range may be several hundred milliamperes, although instruments with greater current sensitivity are available. Measurement of low resistance requires lead resistance (measured by touching the test probes together) to be subtracted for best accuracy.
The upper end of multimeter measurement ranges varies considerably; measurements over perhaps 600 volts, 10 amperes, or 100 megohms may require a specialized test instrument.
Any ammeter, including a multimeter in a current range, has a certain resistance. Most multimeters inherently measure voltage, and pass a current to be measured through a shunt resistance, measuring the voltage developed across it. The voltage drop is known as the burden voltage, specified in volts per ampere. The value can change depending on the range the meter selects, since different ranges usually use different shunt resistors.
The burden voltage can be significant in low-voltage circuits. To check for its effect on accuracy and on external circuit operation the meter can be switched to different ranges; the current reading should be the same and circuit operation should not be affected if burden voltage is not a problem. If this voltage is significant it can be reduced (also reducing the inherent accuracy and precision of the measurement) by using a higher current range.
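A short calculation shows why switching to a higher current range reduces the burden voltage. The volts-per-ampere figures below are assumptions of mine, not values from the text:

```python
# Burden voltage = measured current times the meter's volts-per-ampere figure for that range.
def burden_voltage(current_a, volts_per_amp):
    return current_a * volts_per_amp

# Hypothetical ranges: 200 mA range at 2 V/A, 10 A range at 0.01 V/A.
print(burden_voltage(0.150, 2.0))    # 0.3    -> 0.30 V dropped by the meter on the 200 mA range
print(burden_voltage(0.150, 0.01))   # 0.0015 -> far lower burden on the 10 A range, at the cost of resolution
```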
Alternating current sensing
Since the basic indicator system in either an analog or digital meter responds to DC only, a multimeter includes an AC to DC conversion circuit for making alternating current measurements. Basic meters utilize a rectifier circuit to measure the average or peak absolute value of the voltage, but are calibrated to show the calculated root mean square (RMS) value for a sinusoidal waveform; this gives correct readings for alternating current as used in power distribution. User guides for some such meters give correction factors for some simple non-sinusoidal waveforms, to allow the correct root mean square (RMS) equivalent value to be calculated. More expensive multimeters include an AC to DC converter that measures the true RMS value of the waveform within certain limits; the user manual for the meter may indicate the limits of the crest factor and frequency for which the meter calibration is valid. RMS sensing is necessary for measurements on non-sinusoidal periodic waveforms, such as found in audio signals and variable-frequency drives.
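The difference between the two approaches can be simulated numerically. This sketch (my own illustration) rectifies and averages a waveform, applies the sine-wave calibration factor as an average-responding meter does, and compares the result with the true RMS value:

```python
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)
sine = np.sin(2 * np.pi * 50 * t)             # 1 V peak sine wave
square = np.sign(sine)                         # 1 V peak square wave

SINE_FORM_FACTOR = np.pi / (2 * np.sqrt(2))    # ~1.111, baked into an average-responding meter's scale

def avg_responding_reading(v):                 # rectify, average, then scale as if the input were a sine
    return np.mean(np.abs(v)) * SINE_FORM_FACTOR

def true_rms(v):
    return np.sqrt(np.mean(v ** 2))

print(true_rms(sine), avg_responding_reading(sine))      # ~0.707 and ~0.707: both correct for a sine
print(true_rms(square), avg_responding_reading(square))  # ~1.000 vs ~1.111: average-responding reads ~11% high
```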
Digital multimeters (DMM or DVOM)
Modern multimeters are often digital due to their accuracy, durability and extra features. In a digital multimeter the signal under test is converted to a voltage and an amplifier with electronically controlled gain preconditions the signal. A digital multimeter displays the quantity measured as a number, which eliminates parallax errors.
Modern digital multimeters may have an embedded computer, which provides a wealth of convenience features. Measurement enhancements available include:
- Auto-ranging, which selects the correct range for the quantity under test so that the most significant digits are shown. For example, a four-digit multimeter would automatically select an appropriate range to display 1.234 rather than 0.012 or an overload indication. Auto-ranging meters usually include a facility to 'freeze' the meter to a particular range, because a measurement that causes frequent range changes is distracting to the user. Other factors being equal, an auto-ranging meter will have more circuitry than an equivalent non-auto-ranging meter, and so will be more costly, but will be more convenient to use.
- Auto-polarity for direct-current readings, shows if the applied voltage is positive (agrees with meter lead labels) or negative (opposite polarity to meter leads).
- Sample and hold, which will latch the most recent reading for examination after the instrument is removed from the circuit under test.
- Current-limited tests for voltage drop across semiconductor junctions. While not a replacement for a transistor tester, this facilitates testing diodes and a variety of transistor types.
- A graphic representation of the quantity under test, as a bar graph. This makes go/no-go testing easy, and also allows spotting of fast-moving trends.
- A low-bandwidth oscilloscope.
- Automotive circuit testers, including tests for automotive timing and dwell signals.
- Simple data acquisition features to record maximum and minimum readings over a given period, or to take a number of samples at fixed intervals.
- Integration with tweezers for surface-mount technology.
- A combined LCR meter for small-size SMD and through-hole components.
Modern meters may be interfaced with a personal computer by IrDA links, RS-232 connections, USB, or an instrument bus such as IEEE-488. The interface allows the computer to record measurements as they are made. Some DMMs can store measurements and upload them to a computer.
A multimeter may be implemented with a galvanometer meter movement, or with a bar-graph or simulated pointer such as an LCD or vacuum fluorescent display. Analog multimeters are common; a quality analog instrument will cost about the same as a DMM. Analog multimeters have the precision and reading accuracy limitations described above, and so are not built to provide the same accuracy as digital instruments.
Analog meters, with needle able to move rapidly, are sometimes considered better for detecting the rate of change of a reading; some digital multimeters include a fast-responding bar-graph display for this purpose. A typical example is a simple "good/no good" test of an electrolytic capacitor, which is quicker and easier to read on an analog meter. The ARRL handbook also says that analog multimeters, with no electronic circuitry, are less susceptible to radio frequency interference.
The meter movement in a moving pointer analog multimeter is practically always a moving-coil galvanometer of the d'Arsonval type, using either jeweled pivots or taut bands to support the moving coil. In a basic analog multimeter the current to deflect the coil and pointer is drawn from the circuit being measured; it is usually an advantage to minimize the current drawn from the circuit. The sensitivity of an analog multimeter is given in units of ohms per volt. For example, an inexpensive multimeter would have a sensitivity of 1000 ohms per volt and would draw 1 milliampere from a circuit at the full scale measured voltage. More expensive, (and mechanically more delicate) multimeters would have sensitivities of 20,000 ohms per volt or higher, with a 50,000 ohms per volt meter (drawing 20 microamperes at full scale) being about the upper limit for a portable, general purpose, non-amplified analog multimeter.
To avoid loading the measured circuit with the current drawn by the meter movement, some analog multimeters use an amplifier inserted between the measured circuit and the meter movement. While this increases the expense and complexity of the meter and requires a power supply to operate the amplifier, the use of vacuum tubes or field-effect transistors allows the input resistance to be made very high and independent of the current required to operate the meter movement coil. Such amplified multimeters are called VTVMs (vacuum tube voltmeters), TVMs (transistor volt meters), FET-VOMs, and similar names.
A multimeter can utilise a variety of test probes to connect to the circuit or device under test. Crocodile clips, retractable hook clips, and pointed probes are the three most common attachments. Tweezer probes are used for closely spaced test points, as in surface-mount devices. The connectors are attached to flexible, thickly insulated leads that are terminated with connectors appropriate for the meter. Probes are connected to portable meters typically by shrouded or recessed banana jacks, while benchtop meters may use banana jacks or BNC connectors. 2mm plugs and binding posts have also been used at times, but are less common today.
Clamp meters clamp around a conductor carrying a current to measure without the need to connect the meter in series with the circuit, or make metallic contact at all. For all except the most specialised and expensive types they are suitable to measure only large (from several amps up) and alternating currents.
All but the most inexpensive multimeters include a fuse, or two fuses, which will sometimes prevent damage to the multimeter from a current overload on the highest current range. A common error when operating a multimeter is to set the meter to measure resistance or current and then connect it directly to a low-impedance voltage source; meters without protection are quickly destroyed by such errors. Fuses used in meters will carry the maximum measuring current of the instrument, but are intended to clear if operator error exposes the meter to a low-impedance fault.
On meters that allow interfacing with computers, optical isolation may protect attached equipment against high voltage in the measured circuit.
Digital meters are rated into categories based on their intended application, as set forth by the CEN EN61010 standard. There are four categories:
- Category I: used where current levels are low.
- Category II: used on residential branch circuits.
- Category III: used on permanently installed loads such as distribution panels, motors, and appliance outlets.
- Category IV: used on locations where current levels are high, such as service entrances, main panels, and house meters.
Each category also specifies maximum transient voltages for selected measuring ranges in the meter. Category-rated meters also feature protections from over-current faults.
A general-purpose DMM is generally considered adequate for measurements at signal levels greater than one millivolt or one milliampere, or below about 100 megohms—levels far from the theoretical limits of sensitivity. Other instruments—essentially similar, but with higher sensitivity—are used for accurate measurements of very small or very large quantities. These include nanovoltmeters, electrometers (for very low currents, and voltages with very high source resistance, such as one teraohm) and picoammeters. These measurements are limited by available technology, and ultimately by inherent thermal noise.
Hand-held meters use batteries for continuity and resistance readings. This allows the meter to test a device that is not connected to a power source, by supplying its own low voltage for the test. A 1.5 volt AA battery is typical; more sophisticated meters with added capabilities instead or also use a 9 volt battery for some types of readings, or even higher-voltage batteries for very high resistance testing. Meters intended for testing in hazardous locations or for use on blasting circuits may require use of a manufacturer-specified battery to maintain their safety rating. A battery is also required to power the electronics of a digital multimeter or FET-VOM. | 1 | 2 |
11/21/2006 | 03:51 PM
The more brands come to the PSU market and the tougher the competition becomes, the more widely various marketing inventions are employed alongside genuine technical advances and innovations.
Unfortunately, besides experimenting with colorful box designs and accessories (well, it's hard to expand a PSU's accessories set beyond the customary cable, a couple of braces, and a handful of stickers), the marketing department comes up with technical lingo to bewilder the customer with mysterious terms and abbreviations. Every box and manual shows a long list of employed technologies, and the meaning of some of them is distorted almost to the opposite of what the words suggest.
That’s why I’m going to walk you through some of the technologies (or what the PSU manufacturers regard as such) most frequently mentioned on boxes with modern power supplies. And then I’ll put PSUs with such technologies to practical tests.
In good old times PC power supplies used to have one power rail for each of the output voltages (+5V, +12V, +3.3V, and a couple of negative voltages), and the maximum output power on each of the rails was not higher than 150-200W. It’s only in some high-wattage server-oriented power supplies that the load on the +5V rail could be as high as 50A, i.e. 250W. This situation was changing as computers required ever more power and the distribution of power consumption among the different power rails was shifting towards +12V.
The ATX12V 1.3 standard recommends a max current of 18A for the +12V rail and this is where a problem occurred. It was about safety regulations rather than about increasing the current load further. According to the EN-60950 standard, the maximum output power on user-accessible connectors must not exceed 240VA. It is thought that higher output power may with a higher probability lead to various disasters like inflammation in case of a short circuit or hardware failure. Obviously, this output power is achieved on the +12V rail at a current of 20A while the PSU connectors are surely user-accessible.
So, when it became necessary to push the allowable current bar higher on the +12V rail, Intel Corporation, the developer of the ATX12V standard, decided to divide that power rail into multiple ones, with a current of 18A on each, the 2A difference being left as a small reserve. Purely out of safety considerations, there was no other reason for that solution. It means that the power supply does not necessarily have to have more than one +12V power rail. It is only required that an attempt to put a load higher than 18A on any of its 12V connectors would trigger off the overcurrent protection. That’s all. This simplest way to implement this is to install a few shunts into the PSU, each of which is responsible for a group of connectors. If there’s a current of over 18A on a shunt, the protection wakes up. As a result, the output power of none of the 12V connectors can exceed 18A*12V=216VA, but the combined power on the different 12V connectors can be higher than that number.
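A short sketch makes the shunt trick concrete. The shunt resistance and the helper names below are my assumptions; only the 18A limit comes from the text above:

```python
# One physical +12V rail, several shunts, each watching a group of connectors.
# Protection trips only if any single group exceeds the per-"rail" limit.
SHUNT_OHMS = 0.005           # assumed shunt resistance
LIMIT_A = 18.0               # per-line current limit recommended by ATX12V

def group_current(shunt_voltage_v):
    return shunt_voltage_v / SHUNT_OHMS        # Ohm's law on the known shunt

def overcurrent(shunt_voltages):
    return any(group_current(v) > LIMIT_A for v in shunt_voltages)

# Two groups drawing 15 A and 12 A: 27 A total on the single physical rail,
# yet no group exceeds 18 A, so the supply keeps running.
print(overcurrent([15 * SHUNT_OHMS, 12 * SHUNT_OHMS]))   # False
print(overcurrent([20 * SHUNT_OHMS, 5 * SHUNT_OHMS]))    # True - one group is above 18 A
```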
That’s why there are virtually no power supplies existing with two, three or four +12V power rails. Why should the engineer pack additional components into the already overcrowded PSU case when he can do with just a couple of shunts and a simple chip that will be controlling the voltage in them (the resistance of a shunt being a known value, the current passing through the shunt can be known if you know the voltage).
But the marketing folk just couldn’t pass by such an opportunity and now you can read on any PSU box that dual +12V output circuits help increase power and stability, the more so if there are not two but three such lines!
You think they stopped at that? Not at all. The latest trend is power supplies that have and don’t have the splitting of the +12V rail at the same time. How? It’s simple. If the current on any of the +12V output lines exceeds the 18A threshold, the overcurrent protection becomes disabled. As a result, they can still embellish the box with the magical text, “Triple 12V Rails for Unprecedented Power and Stability”, but can also add there some nonsense that the three rails are united into one when necessary. I call this nonsense because, as I have written above, there have never been separate +12V power rails. It’s impossible to comprehend the depth of that “new technology” from a technical standpoint. In fact, they try to present the lack of one technology as another technology.
As far as I know, the “self-disabling protection” is currently being promoted by Topower and Seasonic and, accordingly, by the companies that are selling such PSUs under their own brands.
This means that the speed of the PSU fan is varied depending on temperature or, less often, on load power. This speed management is currently implemented in all PSUs, even the cheapest ones, so the question is about the quality of implementation. This quality can be viewed from three aspects: the quality of the employed fan, the minimum speed of the fan, and the speed adjustment range. For example, the simplest power supplies may have speed management, but the speed only changes from 2500rpm at a 50W load to 2700rpm at a 350W load. It's as if the speed doesn't change at all.
Respectable manufacturers implement the fan speed management system properly, but often play another marketing trick. The fan speed (or the noise level) they write into the power supply specs is measured at a temperature of 18°C as reported by a sensor inside the PSU. This thermal sensor is usually installed somewhere in the hottest part of the PSU, on the heatsink with diode packs, so you can only have that temperature in reality if you put your PSU in a refrigerator. Although no one keeps PSUs in a fridge, the specification still contains an unrealistically pretty number like a noise level of 16dBA (this is quieter than the background noise in a quiet room). In reality, the room temperature is usually within 20-25°C, and the temperature inside the PC case is closer to 30°C. Of course, you can’t get 16dBA under such conditions.
Short circuit protection is obligatory according to the ATX12V Power Supply Design Guide. This means it is implemented in every power supply that claims to comply with that standard, even those that don't explicitly mention such protection.
This protects the power supply from overload on all of its outputs combined. This protection is obligatory.
This protects the separate PSU outputs from overload (but not yet from short circuit). It is available on many, but not all, PSUs, and not for all of the outputs. This protection is not obligatory.
This protects the PSU from overheat. It is not required and is not implemented often.
This protection is obligatory, but is only meant for critical failures. It works only when some output voltage shoots 20-25% above the nominal value. In other words, if your power supply yields 13V instead of 12V, you must replace it as soon as possible, but its protection is not required to react yet because it is designed for even more critical situations.
As opposed to too-high voltage, too-low voltage cannot do much harm to your computer, but may cause failures in operation of the hard drive, for example. This protection works when a voltage bottoms out by 20-25%.
Soft braided nylon tubes on the PSU’s output cables help lay them out neatly inside the system case.
Unfortunately, many manufacturers have switched from the undoubtedly good idea of using nylon sleeves to the use of thick plastic tubes, often screened and covered with a paint that shines in ultraviolet. The shining paint is a matter of personal taste, of course, but the screening does not do anything good to the PSU cables. The thick tubes make the cables stiff and unwilling to bend, which makes it hard to lay them out in the system case properly and is even dangerous for the power connectors that have to bear the pressure of the cables that resist the bending.
This is often advertised as a means to improve the cooling of the system case, but I can assure you that the tubes on the power cables have but a very small effect on the airflows inside your computer.
The AC electric mains can be considered as having two types of power: active and reactive. Reactive power is generated in two cases: when the load current and the mains voltage are out of phase (that is, the load is inductive or capacitive) or when the load is non-linear. The PC power supply is a pronounced example of the second case. It will normally consume the mains current in short high impulses that coincide with the maximums of the mains voltage.
The problem is that while active power is fully transformed into useful work in the load, reactive power is not consumed at all. It is driven back into the mains. It is kind of wandering to and fro between the generator and the load, but it heats up the connecting wires as well as active power does. That’s why reactive power must be got rid of.
The circuit called active PFC is the most efficient way to suppress reactive power. It is in fact a switch-mode converter designed so that its instantaneous input current is directly proportional to the instantaneous mains voltage. In other words, it is deliberately made to behave as a linear (resistive) load and thus consumes active power only. The voltage from the output of the active PFC stage goes straight to the main switching converter of the power supply, which used to be a reactive load due to its non-linearity. But now that this converter receives DC voltage, its non-linearity doesn't matter anymore because it is decoupled from the electric mains and cannot affect it.
The power factor is the measure of reactive power. It is the ratio of active power to apparent power (the combination of active and reactive power). It is about 0.65 with an ordinary PSU, but PSUs with active PFC have a power factor of 0.97-0.99. So, the active PFC device reduces reactive power almost to zero.
Users and even hardware reviewers sometimes make no difference between the power factor and the efficiency factor. Although both these terms describe the effectiveness of a power supply, it is a gross mistake to confuse them. The power factor describes how efficiently the PSU uses the AC electric mains, i.e. what percent of power the PSU consumes from it is actually put to good use. The efficiency factor describes how efficiently this consumed power is transformed into useful work. There is no connection between these two things because, as I said above, reactive power, which determines the value of the power factor, is not transformed in the PSU into anything. You cannot apply the term “conversion efficiency” to it, so it has no effect on the efficiency factor.
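A few numbers illustrate why the two figures are independent. The sketch below is my own example with assumed wattages; only the 0.65 and ~0.98 power factors echo values quoted in this article:

```python
import math

def power_factor(active_w, reactive_var):
    apparent_va = math.hypot(active_w, reactive_var)   # S = sqrt(P^2 + Q^2)
    return active_w / apparent_va

def efficiency(dc_output_w, active_input_w):
    return dc_output_w / active_input_w

# Assumed: a PSU delivering 300 W of DC while drawing 375 W of active power from the mains.
print(efficiency(300, 375))       # 0.8   -> efficiency stays 80% with or without PFC
print(power_factor(375, 440))     # ~0.65 -> typical without PFC
print(power_factor(375, 75))      # ~0.98 -> typical with active PFC
```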
Generally speaking, it is the power supply companies rather than the users that profit from active PFC because it reduces the computer’s load on the electric mains by over one third. And this amounts to big numbers today when there is a PC standing on every office desk. From an ordinary user’s point of view, active PFC makes no difference even when it comes to electricity bills. Home electricity supply meters measure only active power as yet. The manufacturers’ claims that active PFC can in any way help your computer are nothing but marketing noise.
A side effect of active PFC is that it can be easily designed to support a full range of input voltages, from 90 to 260V, thus making it a universal PSU that can work in any power grid without a manual selection of the input voltage. Moreover, PSUs with manual switches can only work in two input voltage ranges, 90-130V and 180-260V, and you cannot start them up at an input voltage of 130-180V. A PSU with active PFC covers all those input voltage ranges without any gaps. So, if you have to work in an environment with unstable energy supply, when the AC voltage may often bottom out to below 180V, a PSU with active PFC will allow you to do without a UPS or will make the UPS’ battery life much longer.
Well, the availability of active PFC does not guarantee that the PSU will support the whole range of input voltages. It can be designed to support a range of 180-260V only. This is sometimes implemented in PSUs to be sold in Europe because the use of such narrow-range active PFC helps reduce the manufacturing cost of the PSU somewhat.
Active PFC is not an obligatory feature right now, but starting next year a power supply will have to have a power factor that can only be achieved with active PFC in order to pass the Energy Star certification (which is voluntary, though).
Passive PFC is the simplest way to correct the power factor. It is a small choke connected in series with the power supply. Its inductance is smoothing out the pulsation of the current consumed by the PSU and is thus reducing the level of non-linearity. There is a very small effect from passive PFC – the power factor grows only from 0.65 to 0.7-0.75. But while implementing active PFC requires a deep redesign of the PSU’s high-voltage circuitry, passive PFC can be easily added into any existing power supply.
Right now it is obligatory for PSUs selling in Europe to have at least passive PFC. Power supplies with passive PFC will eventually be replaced with active-PFC models.
Efficiency is the ratio of output power to input power. The higher the efficiency of a PSU is, the less heat it generates and the quieter its cooling can be made. Your electricity bills will be lower if the efficiency is high, too.
The current version of the ATX12V 2.2 standard limits the PSU efficiency from below: a minimum of 72% at typical load, 70% at full load and 65% at low load. Besides that, there are optional numbers (an efficiency of 80% at nominal load) and the voluntary certification program “80 Plus” which requires that the PSU has an efficiency of 80% and higher at loads from 20% to maximum. The new Energy Star certification program to come to effect in 2007 will have the same requirements as in the 80 Plus.
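The limits above are easy to express as a simple check. The sketch below is my own; the measured efficiencies are hypothetical, and the 80 Plus requirement is reduced to three load points for illustration:

```python
# Minimum efficiencies quoted above for ATX12V 2.2 (low / typical / full load).
ATX12V_22_MIN = {"low": 0.65, "typical": 0.72, "full": 0.70}

def meets_atx12v(measured):
    return all(measured[point] >= limit for point, limit in ATX12V_22_MIN.items())

def meets_80plus(eff_at_20, eff_at_50, eff_at_100):
    # 80 Plus requires at least 80% efficiency from 20% load up to full load.
    return min(eff_at_20, eff_at_50, eff_at_100) >= 0.80

measured = {"low": 0.74, "typical": 0.83, "full": 0.81}   # hypothetical PSU
print(meets_atx12v(measured))            # True  - clears the mandatory ATX12V 2.2 minimums
print(meets_80plus(0.74, 0.83, 0.81))    # False - fails 80 Plus because of the 20% load point
```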
The efficiency of a PSU depends on the input voltage. The higher that voltage is, the better the efficiency. The difference in efficiency between the 110V and 220V power grids is about 2%. Moreover, different samples of the same PSU model may vary in efficiency by 1-2% due to the variations in the parameters of the components employed.
This is nothing but a pretty-looking label. Dual-core processors do not require any special support from the power supply.
Yet another pretty-looking label that means two power connectors for graphics cards and an ability to yield as much power as is considered sufficient for a SLI graphics subsystem. Nothing else stands behind that label.
One more pretty-looking sticker! Industrial class components are components that can work in a very wide range of temperatures. But what's the purpose of installing a chip capable of working down to -45°C into a PSU if this PSU will never be used in such cold conditions?
Sometimes the term industrial class components refers to capacitors rated for operation at temperatures up to 105°C, but that's all clear here, too. The capacitors in the PSU's output circuits heat up by themselves and are also located very close to the hot chokes, so they are always rated for a maximum of 105°C; otherwise their service life would be too short. Of course, the temperature inside the PSU is much lower than that, but the problem is that the service life of a capacitor depends on the ambient temperature. Capacitors rated for higher maximum temperatures are going to last longer under the same thermal conditions.
The input high-voltage capacitors work almost at the temperature of the ambient air, so the use of somewhat cheaper 85°C capacitors there doesn’t affect the PSU’s service life much.
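To put rough numbers on this, a widely used rule of thumb (my addition, not a claim from this article) says that electrolytic capacitor life approximately doubles for every 10°C the operating temperature sits below the rated temperature:

```python
# Rule-of-thumb estimate of electrolytic capacitor life; all figures are assumptions.
def estimated_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# Assumed: 2000-hour-rated capacitors running at 65 degrees C inside the PSU.
print(estimated_life_hours(2000, 105, 65))   # ~32000 hours for a 105 C part
print(estimated_life_hours(2000, 85, 65))    # ~8000 hours for an 85 C part in the same spot
```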
Alluring the potential customer with mysterious terms is a favorite trick of the marketing department.
Here, the term means the topology of the PSU, i.e. the general concept of its circuit design. There are quite a number of different topologies. Besides the double forward converter, PC power supplies may use a forward converter or a half-bridge converter. These terms are only interesting for a specialist and don’t mean much for an ordinary user.
The choice of the particular PSU topology is determined by a number of reasons like the availability and price of transistors with the required characteristics (they differ greatly depending on the topology), transformers, controller chips, etc. For example, the single-transistor forward converter is simple and cheap but requires a high-voltage transistor and high-voltage diodes on the PSU output, so it is only used in inexpensive low-wattage models (high-voltage diodes and transistors of high power are too expensive). The half-bridge converter is somewhat more complex, but has a two times lower voltage on the transistors. So, this is generally a matter of availability and cost of the necessary components. I can predict, for example, that synchronous rectifiers will sooner or later be used in the secondary circuits of PC power supplies. There's nothing new in that technology, but it is too expensive as yet and its advantages don't cover its cost.
This is a European Union directive that has limited the use of certain substances in electronic equipment since July 1, 2006. It restricts the use of lead, mercury, cadmium, hexavalent chromium, and two brominated flame retardants. For power supplies this mainly means a transition to lead-free solders. Yes, we are all for ecology and against heavy metals, but a too hasty transition to new materials may have unpleasant consequences. You may have heard the story about Fujitsu's MPG hard drives, which would die due to a failure of Cirrus Logic controllers whose packaging was made of a new environment-friendly compound from Sumitomo Bakelite. The elements of the compound facilitated the migration of copper and silver that formed bridges between interconnects inside the chip case. As a result, the chip would fail almost certainly after 1 or 2 years of operation. The compound was abandoned eventually, and the involved companies exchanged lawsuits, but nothing could restore the data that were lost with the hard drives.
Neo HE series power supplies feature the classic cooling solution with a single 80mm fan. I don't mean they are the only PSUs designed like that, but models with 120mm fans have been prevailing recently because the larger fan can create the same airflow at a lower rotation speed and, accordingly, at a lower noise level. I can't say why 80mm fans are installed into the Neo HE, but Antec positions this model as a quiet power supply and puts an emphasis on its high efficiency (HE = High Efficiency), small amount of generated heat and, as a result, low fan speed. Let's see if the power supply is up to these claims.
The manufacturer provided us with a revision A3.1 unit whereas the latest revision is A4. The older revisions of this power supply do not work with some mainboards. You can read the revision number from the white paper sticker on the PSU.
There is an UL certificate number on the PSU, E104405. This helps identify its actual manufacturer, Seasonic.
The component layout and circuit design of the power supply are classic. There are no surprises for me here. The PSU is equipped with active PFC (its choke can be seen in the photo peeping out from under the left heatsink).
The heatsinks are large, with well-developed ribbing. A small additional card with output connectors is fastened on the rear panel of the case. Most of the connectors are detachable.
The PSU has six same-type 6-pin connectors. It doesn’t matter which exactly connector you attach a cable to since they are all identical. On one hand, this guarantees that there will be no user mistakes at connection, but on the other hand, it would be logical to make two separate connectors for top-end graphics cards in which all the pins would be +12V and “ground”.
The PSU complies with the ATX12V 2.0 standard, offering three +12V lines with an allowable current of 18A on each, but the combined load current must not be higher than 42A, i.e. 14A on each line. These are “virtual” lines of course. I mean that there is actually only one power rail with a load capacity of 42A inside the PSU, but it is split into three output lines with a current limitation at 18A on each.
Among other matters worth mentioning, there is the ability to work with input voltages from 100 to 240V without manual switching. The full output power of the PSU is declared to be 550W at an ambient air temperature of 50°C. The latter feature is in fact a requirement of the ATX12V Power Supply Design Guide, but many PSU manufacturers neglect it, specifying output power at a lower temperature.
The PSU is equipped with the following cables and connectors:
The following is enclosed with the power supply:
When the power supply was working in pair with an APC SmartUPS SC 620, the UPS overload indicator would turn on at a load of over 350W irrespective of the power source (electric mains or batteries). The switching-over to the batteries was performed without problems.
The first problem with the Neo HE was that it only started up at a second attempt. Our testbed wakes the PSU up as an ordinary PC does, i.e. by sending a low-level signal to the PS_ON contact (it’s the green wire, usually) of the PSU connector. The Neo HE didn’t react at all to my first press on the Power button, but would start up normally on a second press. This didn’t depend on whether the load on the PSU was zero or other at the moment of my trying to turn it on.
The second problem was about the pulsation of the output voltages. It looks normal at first sight:
At a load of 550W the output voltage ripple was 38 millivolts on the +5V rail, 31 millivolts on the +12V rail and 32 millivolts on the +3.3V rail, but the amplitude would occasionally jump up so high that I began to suspect some problem with our testbed. However, there were no problems with the High Power HPC560-A12S and the OCZ OCZGXS700 that were tested right before and after the Neo HE, so I threw my suspicions away. The testbed couldn’t have had something against products from the Antec brand only. Moreover, the surges of voltage didn’t vanish at lower loads, although their amplitude and duration diminished proportionally (the voltage ripple fitted into the requirements of the ATX standard at a load of 250W and lower). The oscillogram with a 1ms/div time base, recorded under a load of 400W, shows one such surge. It lasted for about 6 milliseconds and had an amplitude of about 100 millivolts on the +5V rail (to remind you: the allowable maximum is 50 millivolts).
It would go beyond the scope of this review to analyze the circuit design of the PSU to find the reason for the described phenomenon. So, I only have to say that the stability of the revision A3.1 Antec Neo HE 550 calls for improvement. Perhaps I was testing a defective sample, but recalling the forum discussions concerning incompatibility of some mainboards with the revisions lower than A4, I’m inclined to think that that is a common problem, not a single instance.
The output voltages proved to be very stable irrespective of the load on the power supply. The Neo HE features a superbly implemented independent regulation of all the three voltages.
The PSU is cooled with a single 80mm Adda AD0812HB-A71GL fan whose speed is adjusted linearly depending on the temperature.
The PSU is really quiet. The fan speed is just a little higher than 2500rpm even at full load. In a load range typical of a modern computer (i.e. below 300W), the PSU is altogether silent.
Of course, such a quiet cooling with a single 80mm fan is made possible not only by the large heatsinks but also by the excellent efficiency, which is 86% at the maximum. However, the Neo HE doesn’t meet the requirements of the 80 Plus program that’s becoming popular nowadays because its efficiency sinks down under low loads.
The Neo HE 550 left me doubtful. In most of its parameters it is a superb power supply (which is not a surprise considering the reputation of its actual manufacturer, Seasonic): excellent efficiency, quiet operation, very stable voltages, all the necessary connectors (but you must make sure you connect the graphics card cables correctly). But the surges in the pulsation on the PSU output and the PSU's waking up only at a second attempt are somewhat alarming. Perhaps these defects are corrected in revision A4 (and users file fewer complaints about it), but I can't say that for certain until I test it. Anyway, if you are going to buy this model, make sure you buy at least its fourth revision.
The Phantom 500 comes from a rather rare variety of power supplies. The manufacturers usually call them semi-fanless, meaning that the fan of such a PSU starts to work only at a big load (to be exact, when the temperature inside the PSU exceeds a certain threshold). The Phantom 500 grew out of the fanless Phantom 300. The fan that works only under high loads has helped to raise the wattage of the model.
The Phantom is manufactured by CWT that supplies a lot of PSU models for Antec (for example, the widely known TruePower series).
Many manufacturers of semi-fanless power supplies usually take a standard design of a fan-equipped PSU and enlarge its heatsinks (or even put them outside with the help of heat pipes), but the Phantom 500 shows traces of much deeper developmental work, obviously inherited from its fanless predecessor. You can see a lot of heatsinks here, only two of which belong to the transistors and diode packs (these are the components that have heatsinks in any power supply). The others cool the chokes.
When the PSU is closed, the heatsinks press down through heat-conducting pads to the bottom panel of the PSU which is a big ribbed aluminum heatsink in itself. This means that some air cooling is desirable. An additional low-speed fan on the rear panel of your system case, creating airflow along the PSU’s bottom panel, will do much good to the PSU’s thermal conditions.
The top panel of the Phantom 500 is designed like a heatsink, too, but only for the sake of aesthetics. None of the hot components in the PSU has thermal contact with it.
It is the transistors and diode packs that are traditionally considered the hottest components in a power supply, yet it is not exactly so. The chokes heat up a lot, too, due to the high current passing through them. In an ordinary PSU they are effectively cooled by the airflow, but in a fanless model those chokes have to be made with some reserve or have to be cooled somehow. The photograph above shows the two chokes of the output regulators on magnetic amplifiers (the PSU features independent voltage regulation) that are pressed through a soft heat-conducting pad to the heatsink with diode packs.
By the way, there is a card with two thermo-resistors on the left of the chokes. They measure the temperature of the heatsink.
Other chokes, located far from the heatsinks, are cooled in a more original way. They are equipped with chunks of aluminum that ensure thermal contact between a choke and the cover of the PSU case.
Among other things, I can note the use of only 105°C capacitors in the Phantom 500. In ordinary power supplies such capacitors are only employed in the highly loaded output circuits, and cheaper 85°C capacitors are installed elsewhere, but for a fanless model the use of 105°C capacitors is a must everywhere because they get heated up by the surrounding heatsinks due to the lack of airflow.
There’s a plastic cover above the fan on the rear panel of the PSU case. It must be removed if you want to get inside the PSU.
Having removed the decorative cover that is fastened on latches, I found another, this time metallic, PSU panel and an 80x80x15mm fan underneath. A small portion of the airflow from the fan goes aside, past the PSU's internals, through the ribs in its heat-spreading top panel.
This is the Xinruilian RDM8015B model.
There is a tiny 3-position switch next to the fan. The PSU manual says this switch controls the cooling efficiency, giving you the opportunity to choose between quiet and cool.
The specification of the Phantom 500 is somewhere in between versions 1.2 and 2.0 of the ATX12V standard. On one hand, its load current on the +5V rail can be as high as 30A, but the +12V rail isn’t much worse – its load current is up to 35A.
The PSU is equipped with the following cables and connectors:
When the PSU was working in pair with an APC SmartUPS SC 620, the UPS overload indicator would turn on at a load of over 330W irrespective of the power source (electric mains or batteries). There were no problems with the UPS at lower loads.
The high-frequency pulsation of the output voltages is negligible (less than 20 millivolts on all the rails), but there is a low-frequency (100Hz) ripple on the +12V rail that is as high as 64 millivolts at full load (the permissible maximum is 120 millivolts). The ripple subsides at lower loads, amounting to 28 millivolts at a combined load of 240W (half the maximum output power of this model).
The cross-load diagram of the Phantom 500 doesn’t have as much green as the previous model’s, yet it looks good anyway. The voltages deflect from their nominal values by no more than 4%. The range of loads typical of modern computers (a high load on the +12V and a moderate load on the +5V and +3.3V rails) is all green.
I had to measure the speed of the fan three times, for each of the three positions of the switch. The diagram suggests that only the first and second positions differ much. The third position of the switch doesn’t bring anything dramatically new into the behavior of the fan. In every case, the fan only begins to work when there’s a load of 200-300W on the PSU (I performed my measurements at a room temperature; in a real computer system the fan is going to start up sooner). The speed of the fan is a little over 2400rpm at the maximum.
One drawback is that the fan speed controller has no hysteresis between the turn-on and turn-off thresholds. As a result, at a load of about 250-300W the fan periodically turns on and works at about 1400rpm. The PSU receives a portion of fresh air and gets cooler by a few degrees, and the fan turns off again for a couple of minutes. Ideally, the fan turn-off temperature threshold should be lower than its turn-on threshold; there would be no such cycles then. Well, there's not a high chance that the power consumption of your computer will fall exactly into this range where the fan cycles on and off.
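For illustration, this is what such hysteresis looks like in control logic. The temperature thresholds below are assumptions of mine, not values taken from the Phantom 500:

```python
# Fan control with hysteresis: start above T_ON, stop only below the lower T_OFF threshold.
T_ON = 55.0    # assumed heatsink temperature to start the fan, deg C
T_OFF = 45.0   # assumed lower threshold to stop it

def fan_state(temp_c, currently_on):
    if not currently_on and temp_c >= T_ON:
        return True
    if currently_on and temp_c <= T_OFF:
        return False
    return currently_on          # inside the hysteresis band: keep the previous state

state = False
for t in [50, 56, 52, 48, 44, 50]:
    state = fan_state(t, state)
    print(t, state)              # turns on at 56, stays on through 52 and 48, turns off at 44
```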
The efficiency of the Phantom 500 reaches 86% at a load of 200W and doesn’t change after that. That’s an excellent result, but quite expectable from a fan-less PSU. It would just overheat otherwise.
The Phantom 500 doesn’t have power factor correction (at least its American version has not – a PSU must have at least passive PFC to be sold in Europe), so its power factor is only 0.65. This also means that you have to manually select the input voltage with a switch (110V or 220V). Be careful when you first turn the PSU on – a wrongly set switch has been the death of a great many power supplies.
Generally speaking, the Phantom 500 is a rather expensive, but interesting option for people who care about silence. This PSU is absolutely silent under low loads. At high loads, its fan is very quiet, too. The other parameters are up to the product class. I have no complaints about the quality of assembly. The stability of the output voltages is excellent. The voltage ripple is within normal. I only wish the manufacturer implemented hysteresis between the thresholds of turning the fan on and off so that it didn’t enter the cycle of turning on/off with a period of several minutes in a certain range of loads.
The TruePower 2.0 looks somewhat unassuming in comparison with the two previous models, especially against the Phantom 500. It seems to be just another gray-colored power supply. Well, its specs don’t promise anything exceptional, either. It is a rather ordinary ATX12V 2.0 model that follows the recent fashion trends and has appropriate certificates, but what model doesn’t have them now?
The actual manufacturer of the power supply is CWT.
You can identify the manufacturer even without looking it up in the UL certificate database, so typical the internal design of this model is. CWT supplies this very model to many trademark holders, so if you put, say, an Antec TruePower and a Foxconn WinFast down next to each other and remove their covers, you will hardly tell what brands they are sold under.
The PSU has a standard internal design except for independent voltage regulation which is represented by the three large toroidal-core chokes of magnetic amplifiers located near the output circuitry. The PSU has neither active nor passive PFC. It is an American model because European PSUs must have some kind of PFC.
The PSU has a high load capacity of the +12V rail (a maximum current up to 36A with a division into two “virtual” output lines with a maximum of 19A on each) as well as of the +5V rail (as much as 40A, which is almost two times the load capacity of this rail in a majority of today’s ATX12V 2.0 PSUs). The latter can only be of use to owners of old top-end systems in which the old power supply has failed. There were once even dual-processor mainboards that powered the CPUs from the +5V, but the manufacturers soon realized the lack of prospects of that solution.
The PSU is equipped with the following cables and connectors:
The mainboard cable is sleeved. The other cables are just tied up with nylon braces. As a result, wires from different cables may get entangled. Moreover, the different wires in the same cable vary slightly in length.
When the PSU was working in pair with an APC SmartUPS SC 620, loads below 315W were allowable irrespective of the power source (electric mains or batteries). Judging by that, the TruePower is unlikely to have high efficiency.
At a load of 530W the output voltage ripple was 20 millivolts on the +5V rail, 31 millivolts on the +12V rail, and 17 millivolts on the +3.3V rail.
Like a majority of models with independent voltage regulation, the TruePower 2.0 has excellent cross-load characteristics. In the whole range of allowable loads it’s only the +3.3V voltage that goes out of the “green zone”. And even this voltage doesn’t reach the maximum allowable deflection.
The PSU is cooled with a single Top Motor DF1212BB-3 fan whose speed is nearly constant at loads below 200W, but grows quickly thereafter until it reaches an almost constant level again. The PSU is very quiet at low load, but its fan makes itself heard at speeds higher than 1500rpm.
The efficiency is indeed mediocre at about 79% through the most part of the load range. That’s not bad, but worse than many other modern power supplies offer.
The power factor is 0.65 on average, quite expectably for a PFC-less model.
So, the Antec TruePower 2.0 is just a good midrange model. It doesn’t have any exceptional features that would distinguish it from the crowd, but it has no serious defects, either. If you need just a good power supply, you may want to consider the Antec TruePower as an option.
“Green” means the color of the case with Topower’s Silent Green model, but here it means the ecological purity of the product. To be exact, it means high efficiency and compliance with the American Energy Star and the German Blauer Engel programs.
The actual manufacturer of the iGreen Power (and the manufacturer of other CoolerMaster power supplies we’ve tested in our labs so far) is AcBel Polytech (UL# E131875).
The internal design of the PSU is very ordinary, except for the second transformer in the bottom left corner and the two capacitors with different ratings nearby. This transformer proved to be the choke of an active PFC device wound on an E-type instead of a toroidal-type core while the capacitors are connected in parallel so the difference in their capacitances plays no role at all. It was just easier for the PSU designer to make it like that.
Otherwise, it is quite an ordinary modern power supply. It has group voltage regulation; its main PWM and active PFC controllers are implemented with one CM6800G chip.
Two columns of parameters are specified for this PSU, for continuous and peak (not longer than 1 minute) loads. The latter thing looks funny for the three +12V outputs which are “virtual” in this PSU, just like in a majority of others. There is only one +12V power rail inside the PSU with a max current of 38A, which is divided into three output lines by 19-19.5A current limiters. Of course, the declaration of a sustained current of 8A and a peak current of 19A for one such output line makes no sense. You can pass a current of 19A through it as long as you want if the combined load on all the three 12V output lines is not higher than 38A.
Judging by the declared load capacity, this is a typical ATX12V 2.2 power supply. This means low allowable currents on the +5V and +3.3V lines because modern computers just don’t need more than that.
When the PSU was working with an APC SmartUPS SC 620, the UPS’s indicator reported overload if there was a load of over 380W on the PSU irrespective of the power source (electric mains or batteries). This is a record-breaking result that is indicative of high efficiency of the PSU as well as of well-designed PFC (not because it corrects the power factor well, but because it works normally with the non-sinusoidal voltage the SC 620 yields from its batteries).
There are narrow surges in the oscillogram of the +5V voltage ripple that occur at the moments when the inverter's transistors are switched over. Apart from that (and the mainboard is going to filter out most of those surges), the pulsation is low, only 14 millivolts in amplitude. At full load the voltage ripple was 85 millivolts on the +12V rail (the permissible maximum is 120 millivolts) and 29 millivolts on the +3.3V rail (the permissible maximum is 50 millivolts).
The cross-load diagram doesn’t look that well after we’ve seen diagrams of models with independent voltage regulation. But considering that the power consumption of a typical modern computer falls into the bottom third of the diagram (a low load on the +5V and +3.3V and a high load on the +12V rail), it’s all right here. The PSU’s voltages won’t violate the acceptable limits in a real computer system.
The PSU uses a Protechnic Electric MGA12012HB-O25 fan whose speed is regulated linearly through almost the entire load (temperature) range. The PSU is of average quietness. Most users will be quite satisfied with it, but the RS-600-ASAA isn’t exactly silent: its Protechnic Electric fan is intrinsically rather loud and also rotates at a rather high speed.
The efficiency of this PSU is an impressive 87% at the max point, but 85-86% at higher loads (well, that’s excellent, too). The power factor is 0.97-0.98 on average.
So, the iGreen Power RS-600-ASAA is a high-quality modern power supply, but it only stands out among similar models with its high efficiency. Although high efficiency means low heat dissipation inside the PSU case and thus makes a quieter cooling system possible, this PSU is not quiet. Its fan speed is average.
I can’t definitely tell you what connection exists between memory modules and power supplies, but it is a fact that the module manufacturers have begun to roll out PSUs under their brands, too. I’ve already tested PSUs from OCZ (and will return to them later on in this review), and here is a model from Corsair Memory.
Of course, Corsair is not the actual manufacturer of the PSU. The product is made by Seasonic. The CMPSU-620HX is an intermediary variant between Seasonic’s S-12 and M-12 models. It doesn’t have a second fan (like the M-12 does), but has detachable cables (unlike the S-12).
The innards of this power supply look quite normal to me. There is no reason for the engineers to change the well-established component layout. The only exception is that the high-voltage capacitor, usually located near the left edge of the case, has moved to the center of the PSU whereas the left part is all occupied by an active PFC device, the rectifier’s diode bridge and a line filter.
There is a card with connectors for detachable cables on the rear panel. The soldering is very neat and tidy there.
The PSU uses rather simple heatsinks punched out of an aluminum bar. This may not sound good to some users because many hardware reviewers like to measure the dimensions of heatsinks, transformers and other components. There’s nothing wrong in that, however. The simpler heatsink provides less resistance to the stream of air, so the airflow will be stronger with a same-static-pressure fan (as you know, each fan has not one but two basic parameters: performance describes its ability to work in an open environment whereas static pressure describes its ability to drive a stream of air through an obstacle). As a result, the cooling may prove to be better with smaller heatsinks than with larger ones.
The same goes for the dimensions of the transformers and chokes. They depend on the PSU’s operating frequency (the higher the frequency, the smaller the components, the PSU wattage being the same) and topology. Having a PWM frequency of about 130 kHz and a double-forward topology, the ferrites in the CMPSU-620HX are just the necessary size for normal operation at full power, although they seem small in comparison with older same-wattage PSUs that used to have a half-bridge topology and an operating frequency of 70 kHz.
The PSU is equipped with the following cables and connectors:
The following is enclosed with the PSU:
The PSU is declared to have three +12V lines with a max combined current of 50A and a limitation of 18A on each line. However, the PSU manual informs that the lines are united into one when the limitation is exceeded. In other words, there is no actual division of the lines inside the PSU, but the user has already got used to the idea that there must be several 12V outputs, so the manufacturer couldn’t but specify several such lines on the PSU label. Of course, the lack of a “virtual” splitting of the +12V power rail into several output lines has no effect at all on the PSU’s voltage stability or output power.
I checked the PSU out with an APC SmartUPS SC 620, and the UPS’ indicator would report overload at a load of over 365W and 330W when working from the mains and batteries, respectively. Switching to the batteries was performed without problems.
There’s almost no pulsation on the +5V rail at a load of 600W – you can only see small narrow spikes at the moments of switching the inverter’s transistors. At the same load, the voltage ripple was 26 millivolts on the +12V rail and 10 millivolts on the +3.3V rail. There’s only high-frequency pulsation here.
The cross-load characteristics of this PSU look splendid. I’d want to draw your attention to this power supply’s ability to yield nearly full power across the +12V rail only. The +5V voltage is the only one to get near the maximum permissible deflection (red color), but only at those loads that have little to do with modern computers.
The PSU employs an Adda AD1212HB-A71GL fan whose speed is regulated linearly in most of the load range (from 150 to 450W). The PSU is almost silent at low loads (a fan speed of less than 800rpm) and is quiet at high loads. Although the fan speed is set up with some reserve (the air coming out of the PSU at full load of 600W feels just barely warm), it is lower than that of many other products (e.g. of the above-described CoolerMaster iGreen Power). The use of high-quality fans from Adda contributes to the PSU’s quietness, too. But running a little ahead, I should confess that the Corsair is not the quietest PSU on the market. S-12 series models selling under Seasonic’s own brand are quieter still. On the other hand, the CMPSU-620HX is going to satisfy a vast majority of users with its noise characteristics, too.
The PSU efficiency is 85% at the maximum and 84% at full load. That’s a very good result, quite accurately coinciding with the specification. The power factor is 0.99 on average.
Corsair Memory has made a successful debut on the PSU market. The CMPSU-620HX doesn’t have any apparent drawbacks. It is a high-wattage model with excellent parameters, particularly very quiet at work. If its 620W wattage is excessive for your system (I guess it is excessive for a majority of today’s PCs), you may want to consider its 520W analog, the CMPSU-520HX model.
Sirtec, the producer of the High Power series of power supplies, is well known to our readers because its products can be met under the brands of Thermaltake, Chieftec and many others. The HPC-560-A12S differs from the other PSUs in this review with its power consumption indicator called Power Watcher.
It’s all quite ordinary inside, with group voltage regulation and active PFC. As opposed to the Corsair model, the heatsinks are large, with numerous small ribs and cross cuts. Let’s see if this helps cool the PSU better.
The component layout is similar to that of the Thermaltake W0083 (PurePower 600AP) that I described in one of my previous reviews. Some of the PSU electronics are installed on a card that is placed vertically along the rear panel.
The power consumption indicator is located on the rear panel of the case. It is a three-digit, seven-segment red LED indicator. I found out that at loads over 100W the indicator’s reading was about 10% lower than the real value. Well, I guess its accuracy varies from sample to sample because the manufacturer can hardly regard it as a serious measuring instrument.
The PSU complies with the ATX12V 2.2 standard. The maximum combined load current on the +12V rail is a little lower than 37A. The lines are separated virtually, as usual. That is, there is only one +12V rail inside it with an allowable current of 37A. That’s why the purpose of specifying (in the footnote) the max combined output power for two of the three output lines is unclear. Besides that, the PSU doesn’t formally comply with the EN-60950 standard because the max output power on the 12V2 line is 276VA. This is more than 240VA which is the maximum permitted by that standard. But as I have repeatedly written in my reviews, this does not affect the stability of a computer’s operation in any way.
The PSU is equipped with the following cables and connectors:
So, this power supply allows powering two additional system fans from the PSU’s own speed controller. That is, the speed of the fans will depend on the temperature inside the PSU. This is a nice feature, but all modern mainboards, including microATX ones, can control the speed of system fans anyway.
When the power supply was working in pair with an APC SmartUPS SC 620, the UPS would report overload at loads of over 365W and 315W (electric mains and batteries, respectively). Switching to the batteries was performed without problems, though. This difference is due to the PSU’s active PFC. It’s clear that it doesn’t cope well with the non-sinusoidal voltage supplied by the UPS’ batteries.
At a load of 550W, the output voltage ripple amounted to 27 millivolts on the +5V rail, to 40 millivolts on the +12V rail, and to 9 millivolts on the +3.3V rail. There’s both low- and high-frequency pulsation here.
The PSU doesn’t have additional independent voltage regulation, yet its voltages go out of the acceptable limits only in a very small part of the cross-load diagram. This occurs when there’s a high load on the +5V rail, which is unimportant for today’s computer systems. So, this regulation of voltages should be considered good.
The PSU is equipped with a Hong Sheng A1225S12D fan and is rather loud at work. At min load the fan speed is about 1200rpm, which is already not very low. At a load of 270W it is 2000rpm. For comparison, the fan of the above-described PSU from Corsair only reached that speed at full output power (600W). The fans of the Seasonic S-12 and the Zalman PSUs to be described below do not reach that speed at all! So, pretty-looking heatsinks do not guarantee quiet cooling.
The efficiency of this PSU is about 83%. That’s an excellent, even though not record-breaking, result. The PFC device is not that good, the power factor barely reaching 0.97. However, a power factor difference of a few percent doesn’t matter much in practice. To comply with the rather strict requirements of the Energy Star 2007 standard it is only necessary to have a power factor of higher than 0.90 at full load.
So, the HPC-560-A12S is a good power supply with one obvious drawback: it is rather noisy even at minimum load. If this doesn’t scare you, the PSU is going to be a good choice. Otherwise, you should consider alternative products or replace the PSU’s native fan with a slower and quieter one (judging by the low temperature of the exhausted air, there won’t be overheating problems after that).
To be continued!
by Mari Elspeth nic Bryan (Kathleen M. O'Brien)
© 2000-2007 by Kathleen M. O'Brien. All rights reserved.
Version 2.0, updated 01 March 2007
What we know as a set of Irish Annals are manuscripts that were each compiled during a particular time period, usually using older material as sources. For example, when the Annals of the Four Masters were written from 1632 to 1636, they covered events that occurred centuries and millennia before (including legendary history). So, when an entry in this set of annals refers to a person who lived in the year 738, the spelling used for that person's name is very likely not the spelling that would have been used in 738.
Standard forms of this name (based on spelling systems of different periods) would be:
|Old Irish Gaelic (c700-c900) nominative form:||Taithlech|
|Old Irish Gaelic (c700-c900) genitive form:|
|Middle Irish Gaelic (c900-c1200) nominative form:||Taithlech|
|Middle Irish Gaelic (c900-c1200) genitive form:|
|Early Modern Irish Gaelic (c1200-c1700) nominative form:||Taichleach|
|Early Modern Irish Gaelic (c1200-c1700) genitive form:|
|Number of men found in the annals with this name:||14|
|Found in Years:||728, 734, 766, 771, 788, 793, 808, 809, 964, 966, 1095, 1134, 1182, 1188, 1192, 1201, 1225, 1235, 1252, 1259, 1278, 1279, 1281, 1282, 1293, 1297, 1316, 1404, 1411, 1417, 1439|
Further information about the name Taithlech / Taichleach may be found in:
The Sources page lists the Annals referenced below. Information about secondary sources is included on that page as well.
In the table below, I have separated individuals with a blank line. That is, when there are multiple entries in the annals that refer to a single person, those entries are grouped together.
Within the list of entries refering to a single person, I have sorted the entries primarily by orthography when it is obvious that what I am seeing is the same entry showing up in multiple annals. The entries that tend to use older spellings are listed first.
Special factors which may affect name usage are marked in the context column.
|AN||indicates a member of an Anglo-Norman family|
|AS||indicates an Anglo-Saxon|
|N||indicates a Norseman|
|P||indicates a Pict|
|R||indicates a person holding a religious office|
|S||indicates a person from Scotland|
NOTE: The Annals referenced below under the code letters A, B, C, E, & F tend to use later spellings than the other Annals. In some cases, the spellings listed in these Annals may not be appropriate for the year referenced in the Annal entry.
In some Gaelic scripts, there is a character that looks approximately like a lowercase f,
but without the crossbar. This character (represented by an underscored
Annals | Entry | Context | Citation (formatting preserved)

(d. 728-734)
U | U734.9 | Taichleach m. Cinn Faeladh, rex Luighne
T | T734.6 | Taithleach mac Cind Faeladh rí Luigne
A | M728.5 | Taichleach, mac Cinn Faolaidh, toiseach Luighne
U | U771.7 | Dungalach m. Taichlich dux Luigne
A | M766.12 | Dungholach mac Taichligh toiseach Luighne

(d. ?)
U | U793.1 | Flaithgel m. Taichlich abbas Droma Rathae
A | M788.6 | Flaithgheal, mac Taichlich, abb Droma Rátha

(d. 809)
U | U809.1 | R | Toictich alias Taichligh a Tir Imchlair, abbatis Ard Machae [Note: name is in genitive case due to sentence structure.]
A | M808.8 | R | Toictheach ua Tighernaigh .i. ó Thir Iomchlair, abb Arda Macha

(d. 964-966)
CS | CS966 | Taithlech h. n-Gadhra .i. righ Luigne
B | M964.7 | Toichleach ua n-Gadhra, tighearna Luighne Deisceirt

(d. 1095)
T | T1095.6 | Taichleach h-Úa h-Eagra, rí Luigne
B | M1095.12 | Taichleach Ua h-Eaghra, tigherna Luighne

(d. ?)
CS | CS1134 | Taithlech h. Eghra
B | M1134.19 | Taichleach Ua n-Eaghra

(d. 1188)
LC | LC1188.14 | Taithlech mac Conchobair, mic Diarmada, mic Taidhc .H. Mael Ruanaid

(d. 1192)
C | M1182.6 | Murchadh mac Taichligh Uí Dubhda
U | U1192.2 | Taichleach h-Ua Dubhda, ri h-Uan-Amhalghaidh & h-Ua Fhiacrach Muaidhi
C | M1192.2 | Taichleach Ua Dubda ticcherna Ua n-Amhalgadha & Ua f-Fiachrach Muaidhi
LC | LC1201.7 | Oedh mac Taichligh I Dubhda, ri .H. n-Amalgaid
LC | LC1202.5 | Tomaltach mac Taichligh .H. Dubh Da

(d. 1235)
Co | 1225.23 | Taichlech mac Aeda h. Dubta
LC | LC1225.31 | Taichlech mac Aodha I Dubhda
Co | 1225.24 | Taichlech h. Dubda
LC | LC1225.32 | Taichlech .H. Dubda
Co | 1225.24 | Taichlech
LC | LC1225.32 | Taichlech
Co | 1235.6 | Taichlech mac Aedo h. Dubda ri h. nAmalgaid & h. Fiachrach
LC | LC1235.4 | Taichlech mac Aodha h-I Dubhda, ri .H. n-Amalgaid
C | M1235.5 | Taichleach mac Aodha Uí Dubhda tighearna Ua n-Amhalgadha & Ua f-Fiachrach

(d. 1259)
Co | 1252.12 | Arlaith ingen Taichlig Meic Diarmata
LC | LC1252.12 | Orlaith, inghen Taichlig Mic Diarmada
Co | 1259.9 | Taichlech Mac Diarmata
LC | LC1259.4 | Taichlech Mac Diarmada
C | M1259.9 | Taichleach Mac Diarmada
U2 | U1293.1 | Concobur, mac Taichligh mic Diarmata, mic Conchobuir (mic Taidhg) Mic Diarmata, ri Muighi Luirg & Airtigh
Co | 1293.13 | Conchobair meic Taichlig
LC | LC1293.10 | Conchobair mic Taichligh
Co | 1297.2 | Conchobar mac Taichlig meic Diarmata meic Conchobair meic Diarmada meic Taidc ri Muigi Luirc & Artig
LC | LC1297.1 | Conchabar, mac Taichlich, mic Diarmada, mic Conchobair, mic Dhiarmada, mic Taidhg, .i. rí Mhuighe Luirg & Airtigh
C | M1297.4 | Concobhar mac Taichligh Meic Diarmata tigerna Moighe Luircc & Airtigh
Co | 1316.5 | Murcertach mac Taichlig Meic Diarmata
LC | LC1316.3 | Muircheartach mac Taichligh mic Diarmada
C | M1316.2 | Muirceartach mac Taichligh Meic Diarmata

(d. ?)
U2 | U1278.3 | Taichlech O Baighill
Co | 1281.4 | Taithlech h. Baigill
LC | LC1281.3 | Taichlech .H. Baighill
C | M1281.4 | Taichleach Ó Baoighill

(d. 1282)
U2 | U1278.3 | Taichlech O Dubhda
U2 | U1279.1 | Taichlech mac Maelruanaigh h-Ui Dhubhda, rí h-Ua Fiachrach
Co | 1281.4 | Taithlech h. Dubda
LC | LC1281.3 | Taichlech .H. Dubhda
C | M1281.4 | Taichleach Ó Dúbhda
Co | 1282.3 | Taithlech mac Maelruanaig h. Dubda ri h. Fiachrach Muade
LC | LC1282.2 | Taichlech mac Mhaolruanaidh h-I Dhubhda, ri .H. bFhiacrach Muaidhe
C | M1282.2 | Taichleach mac Maol Ruanaidh Uí Dhúbhda ticcherna Ua f-Fiachrach
D | M1417.4 | Ruaidhri (.i., Ó Dubhda) mac Domhnaill mic Briain mic Taichligh Uí Dubhda
D | M1439.18 | Domhnall mac Ruaidhri mic Taichligh Uí Dhubhda

(d. 1404)
D | M1404.17 | Taichlech mac Donnchaidh Uí Dubhda

(d. 1411)
Co | 1411.14 | Taichlech Bude mac Sean h. Egra
LC2 | LC1411.13 | Taichleach Bude mac Seain .H. Eghra
D | M1411.17 | Taichleach Buidhe Ó h-eghra
All round were chellovecks well away on milk plus vellocet and synthemesc and drencrom and other veshches which take you far far far away from this wicked and real world into the land to viddy Bog And All His Holy Angels And Saints in your left sabog with lights bursting and spurting all over your mozg.
— Anthony Burgess, A Clockwork Orange
The word ‘video’ was first used in the 1930s to describe the visual channel, as opposed to the auditory channel, in early television experiments (Barbash). A ‘video’ track was first recorded in 1927 by John Logie Baird. He created a system called Phonovision that used discs to hold images. This was accomplished in a way similar to recording audio on a phonograph. By tracing a path in a disc with a rapidly moving needle a low quality image was reproduced by a cathode ray tube. Thus the medium ‘video’ has two connotations. It can be used to describe a visual channel of information or to describe a recording medium that stores electromagnetic information.
Video comes from the latin verb videre ‘to see’ (OED). Burgess undoubtedly uses this etymology to coin the word ‘viddy’ in the vocabulary of ultra-violent London teens in A Clockwork Orange. ‘Seeing’ is often used interchangeably with ‘knowing’ in highly visual Western society. Yet seeing and knowing are completely different acts. Burgess’s dystopia arises from the confused notion that the two are synonyms. This is encapsulated by the word ‘viddy.’
The phenomenon of ‘medium nesting’ can be used to separate video from film and television, which previously nested older media, most notably photography and radio (“Video Killed the Radio Star”). Video's unique qualities can be discerned from an examination of its origin, comparison, and divergence from these older media. The myth of the ‘real’ in video and its predecessors reaches back to the phonograph’s tendency to capture ‘noise’ (Kittler). The video camera captures visual noise. When coupled with technological mysticism and complacent trust in science there is a danger of grafting a false ‘realness’ onto the medium. This ‘realistic effect’ occurs because “the ‘real’ is never more than an unformulated signified, sheltering behind the apparently all-powerful referent” (Barthes). Surveillance videos, pornographic videos, and documentary videos are all exploitations of this false objectivity. None would need much rhetorically induced credibility if brought in front of a jury. This is hubristic for two reasons. First, video is often of very low quality. The resolutions of photography and film have always surpassed video. Video is constructed of three colors (red, green, blue/magenta, yellow, cyan) which are displayed at different brightnesses. Video currently is most often recorded onto magnetic tape, which can degrade over time or be erased (due to magnetization) altogether. It can stretch and induce ‘vertical rolling’ during playback. All of these can influence what we ‘see’ and then ‘know’ upon a later viewing. Second, video can be edited quite easily. If the time/date appears in the corner of a video it is assumed to be accurate. Other temporal ellipses are as easy (or easier) to create in video as in film. The time qualities of video, television, and film separate them as mediums. Television consists of previously recorded video and film and live broadcasting. A live television broadcast (the nightly news) is ‘immediate.’ But television is always ‘being broadcast’ so even recordings become ‘immediate’ in that there is little control of the receiver in what is mediated. Commercials capitalize on this quality in order to incite desires for food, clothing, information, sex, etc. The television medium prescribes immediate capitalistic pharmakon in order to visually manifest what the viewer should be.
Film is the opposite of television’s immediateness. It is not accessible in the normal home. Much like a play, the film creates a spectacle meant (usually) to entertain an audience in a theater. The fictionality of film and its ability to trick through montage is well known and accepted by its audience that uses it to construct cohesive visual narratives. Commercial films are created from screenplays demonstrating nesting of theater and writing media. The film then becomes a timeless entity crystalized within the confines of celluloid like carving writing into a clay tablet. The theatricality of film is regulated by time. One knows that a film will be projected whether one is there or not, and that the film will be projected at some later time. Aside from showing up in the first place film is about lack of control to the normal spectator. One goes to a theater, watches a film (and in so doing forgets reality) but at its conclusion is left back in the theater/reality. Film as media allows a timeless spectacle to exist while simultaneously refuting this as a future experience- it shows what one can never be.
Video often nests within television immediacy (previously recorded broadcast) and film timelessness (film transfers to video). Video that is broadcast is simply television as previously discussed. Video and film are much more difficult to separate (see bibliography). Video can be projected with an LCD projector in a cinema causing it to become theatrical. However, video is more often viewed alone in an intimate home setting. One has much control over when and how video is played. Furthermore there is an obvious physical difference between film and video- one is celluloid and the other is not. Video can be trapped on tape or digitized. Both can be readily copied (even with anti-copy protections) and are much more freely accessible than film. Video can be played on many different types of monitors. Lastly video is much more ‘ethereal’ than film. It is stored digitally as binary code or directly through magnetism. Video is self-reflexive (through both recording and playback characteristics) and has the ability to show one as they are.
Moving images were able to be recorded for quite some time by either filming them or using novel devices like Phonovision. The first use of tape as a recording medium was in 1956. The Ampex VRX-1000 was the first commercial videotape recorder. Its quality was very poor. The first consumer video tape recorder was 9 feet long and weighed 900 pounds. It was not portable (obviously), but was offered for sale at $30,000 in 1963. The consumer had to wait until 1975, when Sony released the Sony Betamax Combination TV/VCR. The next year the stand-alone Betamax VCR was released and sold fairly well. In 1977 RCA introduced the VHS VCR. This was much cheaper and allowed twice as much recording time compared to the Betamax (4 vs. 2 hours). Essentially it was due to economics and good timing that VHS is now synonymous with ‘video’ tape (Betamax tapes had much better image quality). DVD has been the only recent contender to VHS (ignoring all computer video formats), having sold readily since its release in 1997. It would be difficult to create a comprehensive list of ‘dead’ video technologies (such as pixelvision). Regardless of the specific storage mechanism (so long as it is autonomous from film) video is important because of its mass-consumer appeal. Video has changed television and film as each strives to become separate from the others and fill a niche market. Video is cheap and can be left recording for much longer times than film. Surveillance cameras and amateur videos tend to do just that.
Historically it was much easier to edit film than video; instead of video’s anonymous magnetic strip (although one form of early video actually consisted of small, film-like images) film has little ‘photographs’ that run through the projector at 24 frames per second. To change the film one physically ‘cut’ it and ‘spliced’ this series of photographs in somewhere else. Video could be recorded over easily. Recently there has been an explosion of new digital editing techniques that allow editing of video to be done much like that of film (frame by frame at 29.97 frames per second) on computers. Now it is actually easier and much less expensive to manipulate video than film. Film is even ‘going digital.’ It is altogether likely that video tape (VHS) will become a ‘dead medium’ in the near future and give way to digital video. Digital video, which is stored as binary code, will not lose quality if stored on a computer. Furthermore, video is increasingly striving towards the picture quality and speed of film. Commercial video cameras are now available that record at 24 frames per second. As film and video tape are digitized they become digital video and, subsequently, fit video’s first definition again: the ‘image track’ as opposed to the ‘audio track.’
The best recent example of video’s accessibility is the spectacle of the 9/11 footage. The footage was stored on video and re-broadcast over and over again until the entire country felt as though they had actually experienced the tragedy. They were in truth “far far far away from this wicked and real world” and “into the land to viddy”. Video cannot perfectly replicate experience. Rather, it constitutes a different experience of fantasy and pseudo-reality [see reality/hyperreality, (2)].
McLuhan and Krauss compare video as a medium to narcissism. In a sense the obsessive rewatching of 9/11 videos makes the violence pornographic and serves as a type of auto-eroticism upon reviewing. McLuhan would say that the American public has through a real but, as a whole, distant amputation (of buildings and human life), amputated itself still further through the numbness and closure that the video medium as ‘extension’ grants. America looks into its ‘pool’ of video footage in order to counter the irritant of emotional amputation, until through increasing numbness it is unable to recognize itself anymore. Almost all Americans actually think that they were in New York/Washington DC on 9/11 when in truth they were staring into their narcissistic pools. Soon they could not even recognize themselves. That is, they thought that they were actually witnesses to the events when in truth they were just sitting and watching what they were to become through false experience. America was viddy-ing. Krauss claims narcissism as the main distinguishing factor of video art. She observed that through the ‘feedback coil of video’ “consciousness of temporality and of separation between subject and object are simultaneously submerged. The result of this submergence is, for the maker and the viewer of most video art, a kind of weightless fall through the suspended space of narcissism.” The music video is an excellent example of video art creating a fantasy pool. A music video beseeches autoerotic participation. One becomes sexually aroused, violent, and wants to buy something. The music video unrealistically presents desires that can be fulfilled in a realistic manner. One gets up and dances with Dionysian bliss, turning up the volume, while being overcome by the rapid succession of visual stimuli. The music video has the potential to replicate a similar experience every time it is played (either by the spectator on tape or by MTV).
Video art has had to overcome both this narcissism and the film/television/video medium identity crisis. What distinguishes a video artist from a film artist or a photographer? Is their work any different from what you see on television? The first video artists began working in 1965, alongside the availability of commercial camcorders. Nam June Paik is widely cited as being the first. He experimented with the physical medium of video tape by manipulating it with magnets. This work was an attempt to demonstrate how to ‘see’ video. Many claim that video art was just another outlet for artists resisting the materiality of painting and merely served a theatrical role. This is exemplified by Dan Graham’s more conceptual/idea-oriented work that contained non-manipulated timescales. Bill Viola in the 1970s stood for less self-referential work and dealt primarily with content (his work also tended to be theatrical). Video art began to get less exclusive in the 1980s and leave the museum context altogether. Documentary videographers and video artists constitute the current tendency to create concept-driven work that explores editing technique or allows grass-roots political activism. Ryan describes this sort of work as a sort of moebius strip: a formula for self video taping. When you watch it back again you will declare, “wow, it’s like making it with yourself.” Perhaps this is the sort of narcissism that Krauss and McLuhan had in mind for video. Because it is so self-reflexive, video art has the potential/tendency to be both ‘boring’ (Ryan) and narcissistic. (see mirror)
Video as a visual, electromagnetically recorded channel is an incredibly broad definition of the medium. Perhaps this is why it has been going through a continual identity crisis with film and television. The work of video artists frequently takes advantage of video’s characteristics as a medium. Some of these are realism, cheapness, accessibility, fantasy, and dreamlike temporal disruption. It must be remembered that every different type of video storage medium allows more specific characteristics. Perhaps the most dramatic effect of video is its ability to distort what is ‘seen’ in a visual-centric society into what is ‘known.’ Through their blending they allow ‘viddying’ to occur – a massively replicated sociological pseudo-experience intrinsically tied to narcissism.
Antin, D. “Video: The Distinctive Features of the Medium.” Video Culture: A Critical Investigation. Ed. Hanhardt. New York, 1986.
Barbash, I and Taylor, L. Cross Cultural Filmmaking. University of California Press, 1997.
Barthes, R. “The Discourse of History.” Comparative Criticism, 3 (1981). p. 7-20.
Berger, R. “L’art video.” Art Actuel, 75 (Skira, Geneva, 1975). p. 131-137.
Fagone, V. “Video in Contemporary Art.” Artistic Creation and Video Art. (Cultural Development Documentary Dossier). 1982. p. 25-26.
Kittler, F. Grammophon Film Typewriter. Brinkmann & Bose, Berlin, 1986.
Krauss, R. “Video: The Aesthetics of Narcissism.” October, 1, no. 1 (Spring 1976).
McLuhan, M. Understanding Media. McGraw-Hill, 1964.
Roscoe, J and Hight, C. Faking It: Mock-Documentary and the Subversion of Factuality. Manchester University Press, 2000.
Ryan, P. Birth and Death and Cybernation: Cybernetics of the Sacred. Gordon and Breach, 1973.
Wood, P. “Television as Dream.” Television as a Cultural Force. Ed. Adler and Cater. New York: Praeger, 1976.
Here you will learn how to create an HTML document with all the basic elements of a standard template. Once the template is ready you can enter the text. After that you will learn how to save the finished documents, test them and check them for compliance with the standards.
All you need in order to create HTML documents is an ordinary text editor. Pages written in HTML are plain text files. Any program that can edit ASCII files can serve as an editor for creating web pages. You can even use word processors such as WordPerfect or Microsoft Word, provided you save the files correctly.
So, once you have a text editor, you can start creating the basic template of an HTML document.
The first thing you will learn about XHTML is the elements that describe the document. These are the tags you will meet on any HTML page. They define the different parts of the document.
An HTML document has two parts: the head and the page body. In the head you place the name of the page and some special information about it. To create the head section, enter the following in any text editor:
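At this point it is nothing more than an empty pair of opening and closing tags:
<head>
</head>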
In this section you give the browser information specific to this page, including the title that should appear in the title bar of the browser window.
So, the head is ready. Now you need to create the body of the page. In the body of the page you do most of the work: enter text, headings and subheadings, insert hyperlinks, images and so on. To mark the beginning and the end of the page's body, enter the following after the head tags:
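Again, for now this is just an empty pair of tags:
<body>
</body>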
You will place text and graphics between these two tags. Now the <head> and <body> sections must be enclosed in one more element, so that anyone (and any browser) can see that this is an HTML document. That element is <html>. Above the first tag, <head>, enter:
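That is, the opening tag of the root element:
<html>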
After the closing tag </body> enter:
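The matching closing tag:
</html>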
Even though your document is saved in plain text format, the browser will understand that this is an HTML document. We also want to adhere to the XHTML standard; that's why the <html> tag must look a little more complicated:
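The opening tag now carries the XML namespace declaration explained below:
<html xmlns="http://www.w3.org/1999/xhtml">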
The attribute of the <html> tag is the xmlns parameter, which sets the XML namespace. It is needed to create documents that comply with the XHTML requirements, specifically the XHTML 1.0 standard.
So, here are the elements which are already in our template:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
</head>
<body>
</body>
</html>
As you can see, you almost have a ready web document.
At the beginning of any web document, especially if we want to meet the requirements of the XHTML standard, one should include a DTD (document type definition). The DTD is a declaration placed at the beginning of the page, from which you can learn which languages and standards are used in it. The DTD is gradually becoming a necessary part of any site, because XML is taking an ever stronger position on the Net, and access to web services requires an ever wider range of applications and tools.
So you have to put a DTD at the beginning of every page you create. It is not difficult to do.
If you are working with a page which already exists, or which was created with a standard WYSIWYG HTML editor, you can start it with this definition:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
This is the DTD of the transitional version of the XHTML standard. The browser is informed that you are using the XHTML 1.0 Transitional specification. After the PUBLIC attribute comes the name of the DTD, which also indicates the language used; even if you use another language on your site, the language of XHTML itself is English (EN).
The next part of DOCTYPE is the URL of the DTD file itself, which is maintained by the W3C. It is not strictly necessary to specify it.
As you'll see later, the DTD of the XHTML Transitional specification allows you to use elements which directly influence the appearance (fonts, colours and so on) of the page's text.
The DTD of the XHTML Transitional specification provides backward compatibility with older browsers. In other words, non-standard elements created by third parties will be displayed without errors. But you may also force the browser to work in a "strict" mode. For that you'll need the DTD of the XHTML Strict specification:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
Apply this definition if you are quite sure that you won't use any non-standard markup.
When do you need one variant or the other? The DTD for XHTML Strict is applied when you use style sheets to change the visual design of the page, and the XHTML elements serve only to organize the information. The DTD for XHTML Transitional is applied when you change the appearance of the page with the help of HTML elements themselves. This specification is tolerant of the use of older elements.
Splitting the DOCTYPE declaration across several lines is not compulsory. That is, all the attributes can be placed on one line:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
The comment element differs from the rest. It contains text but doesn't have separate opening and closing tags. Instead, the text of the comment is enclosed in a single construction which starts with <!-- and finishes with -->.
Sometimes it is necessary to hide text from the browser in order to add comments to the HTML source, written as reminders. When editing the source code, they help you see more quickly which part of the document you are in and what still has to be done with it.
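For example, a comment marking a section of the page might look like this (the wording is, of course, up to you):
<!-- navigation menu starts here -->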
Creation of HTML template
Once you have saved the template as a text file, you will be able to create new HTML files very easily. Load the template into the editor, then save it under the name of the web page you are creating.
Start by entering the following code in a new file:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title> enter your favourite title here </title>
</head>
<body>
<!-- any design -->
<!-- last renewal 10/12 -->
</body>
</html>
Now save the composed code as a text file with the name template.html. Whenever you need to create an HTML document, load the template into the text editor and save it under a new name. After that you can start to edit it.
Title of document
Here we'll talk about the head section of an HTML document. It precedes the main part of the page and is defined by the container element <head>. Text between the <head> and </head> tags gives the browser information about the file, but is not displayed as part of the document. Inside <head> various elements can be placed, among which are the following:
<title> - name of document;
<base> - initial URL of document;
<meta> - additional information about page
<title> is compulsory. The rest are often absent from web pages, but it is important to know what they do, because they make it possible to create more sophisticated sites.
Name of document
The <title> element lets the author give the work a name. Many graphical browsers display this name in the title bar of their window. The name doesn't have to coincide with the file name; it is a short description of the page.
For example, "the official site of John Brown".
Element <title> must be placed after <head>:
<head><title>site map of John Brown</title> </head>
The name of the document must be informative and short. A long name can look strange in the title bar of the browser window and may not fit into a bookmarks list or the "Favorites" panel. Remember that internet search engines often use it. Here are some rules for choosing a name for your page:
Avoid general words. Try to state as precisely as possible what your page is about. Remember that the name can be used as an entry in search systems, for example Google or AlltheWeb.com. If possible, avoid hackneyed slogans. Your aim is to convey the essence of the page in two or three words. It is not always enough simply to write the company's name, especially if you use the same name on many pages of the site. Instead, try to describe your company's activities briefly and reflect them in the name. Try to use at most 60 characters in the name. The XHTML specification itself does not limit the length of the text in the <title> element.
Paths to files and URLs may be rather long and can be an obstacle for a beginning web designer. The <base> element can be used to make this work easier. <base> is intended for setting the "base" path, against which all relative URLs are resolved.
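For example (the address here is only an illustration), the following line in the head section makes every relative link on the page resolve against that address:
<base href="http://www.example.com/catalog/" />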
This element is used for adding metadata. The information about the site contained in <meta> can be used by people or by other computers. One of the most widespread uses of this element is entering keywords for internet search systems, such as Yahoo or Excite. With the help of keywords your page will be easier for other users to find.
When search robots look through a page, they pay attention to certain common <meta> elements.
Search robots are small programs created to collect information about sites and catalogue them. Some of them read and store the descriptions and keywords of your page.
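The two <meta> elements discussed below could look something like this (the description and keyword text is only an example):
<meta name="description" content="FakeCorp sells widgets and widget accessories at web prices." />
<meta name="keywords" content="widgets, widget accessories, FakeCorp" />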
The first of these <meta> elements is used by robots for the description of your page in the listings of web catalogues. The content attribute may contain 50-200 words, depending on the particular search system.
The second <meta> element in our example is a list of keywords which search robots will associate with your site. If a user enters some of them in the search field, a link to your site will appear among the results.
Certainly, these are not exhaustive examples of using <meta> elements. There is one rule: the content attribute, together with either name or http-equiv, must be present in the element. name and http-equiv cannot be used together.
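As an illustration of the http-equiv form, the following widely used line tells the browser the document type and character encoding:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />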
Body of page
The body of the page is its main part; it is delimited by two tags: the opening <body> and the closing </body>. Inside this section you place everything that the user will actually see in the browser window: text, hyperlinks, headings, images, form elements, tables and so on.
The <body> section must be placed between <html> and </html>; this means that the page body is a substructure of <html>.
Almost everything that is on a web page is enclosed in this section and must be placed between the opening and closing tags <body> and </body>.
Example of body section:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title> FakeCorp Web Deals </title>
</head>
<body>
<h1> FakeCorp's Web Deals </h1>
...real contents of page
</body>
</html>
Entering text paragraph
Most of the text that you enter between the two tags <body> and </body> is enclosed in the container elements <p> and </p>. They are used to divide the text into paragraphs.
Note that pressing Enter while entering text does not influence how the page is displayed in the browser window: extra whitespace will be ignored.
You can already create whole paragraphs. But what do you do if you want to mark the end of a line inside one? This is useful, for example, for entering an address in a web document:
<p>Donald Johnson
12345 Main Street
Yourtown NY 10001</p>
This layout will be visible only in your text editor. Browsers will ignore the extra whitespace, and even putting every line into an individual paragraph would not be correct.
The solution to this problem is the void element <br />, which represents a line break. Written correctly, the address looks like this in HTML:
<p>Donald Johnson<br />
12345 Main Street<br />
Yourtown NY 10001<br /></p>
Saving, testing and checking
Work on creating a page involves three important steps: first you have to save it properly, then make sure that you see in the browser what you planned, and finally check the code for compliance with the HTML standards.
Saving of page
If you worked from the HTML template, then you have already saved your page under its own name. If you haven't done so, save the file by choosing Save As in the File menu.
When saving the file, remember that it must be placed in the same way as it will be located on the web server.
Testing of page
To check how your page actually looks, you can use a browser. Find the File, Open File commands in the browser menu. The dialog window that appears will let you open the necessary file.
To test a document in the browser:
Choose File, Save in order to save all the changes made in the editor. Switch to the browser window and choose File, Open File to open the saved file.
The document should appear in the browser. Check it for mistakes and misprints.
Switch back to the text editor window and make all the necessary corrections. When you finish this work, don't forget to save the file.
If the page is already open in the browser, then in order to see the changes you must click Reload (Refresh).
Checking of page
Finally, the saved and tested page should be checked for compliance with the HTML standards. This is not obligatory, but it is desirable.
There are several possibilities. The easiest way is to use the HTML Tidy program. There are many variants of it, written for Windows, Mac and Unix. Look at http://www.w3.org/People/Raggett/tidy, where you can also find a large collection of checking programs.
CELL CYCLE AND ITS REGULATION
Readings: Karp chap 14; esp. pp 570-579, 590-594 (Exptl. History: 609-612)

I. Intro (dealing with euk cells only; will emphasize simple expts, "big picture", and "main themes". Thus, lecture will present a more simple version of a very complex and fast-moving field)
All cancer cells – regulation not working; continue to go through cell cycle; don’t respond to normal signals like untransformed cells

A. Stages: defined by activity of nucleus (Fig 14-1)
[Fig 14-1 diagram: Mitosis -> G1 ("sibling cell") -> "S" (DNA synthesis) -> G2 -> Mitosis; "G0" = non-dividing, non-cycling; Interphase = G1 + S + G2 of dividing, cycling cells]
Arbitrary how you define stages of cell = most common: what happens w/ regard to nucleus and DNA content
- 50 yrs ago = advent of precursors
- DNA incorporation didn’t occur in all cells; when analyzed time frame
- S phase = DNA synthesis; doesn’t occur immediately after mitosis
- G1 phase = gap 1
- G2 phase = gap 2
- G0 phase = cells withdrawn from cell cycle; highly differentiated; not permanent; cells can be induced to reenter the cell cycle

B. How determined?
1. mitosis: see in microscope
2. S: = DNA synthesis, so incorporate DNA precursors (e.g. 3H-thymidine; use autorad, or count)
3. Amt of DNA: G1 = 2N, G2 = 4N, S = between 2N and 4N (Assay single cells using fluorescent “cell sorter/analyzer”)
Mitosis can be seen in microscope
S = DNA synthesis, so look at incorporation of DNA precursors (3H-thymidine)
Look at amt of DNA in cell; amt of staining proportional to DNA (quantitate amt of DNA in cell)
In S phase, exp. measure amt of DNA in b/w 2N and 4N
How long cell cycle: briefly label population of cells and follow those as they progress thru cell cycle
Label cells w/ short pulse – only those that are in S and remove label precursor and allow cell to grow and go thru cell cycle; assay cells that are in mitosis and how many are labeled
Start at 0 b/c all in S not M => more cells in mitosis that are labeled (50% = min time for G2); S is till 12 hrs
Entire cycle done w/ cell cycle at ~ 27

C. Properties:
1. Timing: S, G2, and M usually fairly constant (e.g. S = 6 hrs, human)
G1 variable: very short/none to very long
G0 = no division
2. Synthesis: all molecules, except DNA, synthesized in each stage except M. DNA only in S.
Stage very variable in length = G1 = decision to go thru cell cycle or not
Normal DNA replication: all stages except mitosis same
Some that depend on cell cycle = regulatory things

II. TRANSITIONS and their REGULATION
What regulates and controls moving from one stage to next
Mitosis can start but not finish until all checkpoints completed

A. G1 (or G0) ---> S; probably most impt "decision" pt for cells
"start site", "restriction pt", "to divide or not", etc
Cancer cells = uncontrolled growth; do not stop here as do normal cells when "told to stop". (also, some cancer due to cells not “dying” when supposed to; apoptosis)
Occurs in various stages of G1; cell deciding if going to stay in G1 or go thru S
If moving single nuclei, looking at what going on in DNA replication
If fusing cells together (mixing nuclei and cytoplasm) = have inhibitors; drive nuclear functions
G1 nucleus transplanted into S cytoplasm: capable of making DNA synthesis but something present/absent that keeps it from doing it
S nucleus transplanted into G1 cytoplasm: DNA synthesis in S nucleus stops
Fuse G1 cell w/ S cell = both nuclei synthesize DNA
S nucleus transplanted into G2 cytoplasm = DNA synthesis continues in S nucleus
Block protein synthesis in G1 cells = block transition of G1 to S

B. G2 ---> M Transition ("Why and when to begin mitosis?")
1. Expt suggesting a "checkpoint" before M:
a. Fuse G2 cell with S cell (e.g. tissue culture cells)
b. results: S nuclei finishes DNA synthesis; but G2 nucleus does NOT start DNA syn.
also: this cell with both nuclei waits until S nucleus has finished its DNA syn, BEFORE starting M.
c. Interp: There is a checkpoint before M can start, monitoring whether DNA syn is complete
Gives info to whether or not cell monitors everything before going on (checkpoint)
Fuse G2 cells w/ S cell nuclei = S nuclei makes DNA replication but G2 nucleus doesn’t start DNA replication; cell doesn’t go to mitosis at normal time of G2 but cell waits until S phase finishes DNA replication
So, some type of checkpoint that tells it that it is done finishing all DNA replication
2. Expt suggesting a "mitosis promoting factor" (MPF) which is dominant and positive
a. fuse M cells with cells in other stages (G1, S, G2)
b. result: cells driven into mitosis; "premature chromosome condensation", nucl envel breakdown, etc. EVEN THO DNA replication had not begun (G1), or not complete (S). Figure 14-3
Expt confirmed that there is a + super strong factor that promotes cell to go thru mitosis
Took cells in mitosis and fused w/ cells in other stages; (expt figure out which cells in what stage)
Results = G1 + mitosis cell = see metaphase human CH but also cell premature CH condensation
M + S phase = some replicated and duplicated and some not and when hit w/ condensation signal, in a mess
+ promoting factor that drives cells into mitosis

III. Properties of Factors regulating transition: MPF, as example
A. Factors have been cloned, purified; are active when injected into cells -- e.g. inject MPF, mitosis induced
1. MPF composed of 2 subunits:
a. protein kinase (= "Cdk", cyclin-dependent kinase) (= "cdc2 protein", and other names)
its concentration does not change during cell cycle, BUT its activity is influenced by phosphorylation, AND the presence of:
b. "cyclin" protein
its conc varies during cell cycle; mitosis cyclin is quickly destroyed in mitosis (metaphase); and resynthesized during next interphase (Fig 14-4,5)
destruction via ubiquitination/proteasome pathway and specific SCF, APC complexes
Activity capable of driving cells into mitosis (premature condensation) made up of two diff. subunits: protein kinase and cyclin protein
- CDK; CDC2 protein = cyclin dependent kinase
o When analyzed concentration of this protein, it was constant; didn’t vary in concentration thru cell cycle (contrast to the other component of complex = cyclin)
- Cyclin = concentration does change thru cell cycle
o Concentration gradually increases
- Activity of MPF = 0 at G1, S and starts increasing at G2 = what expect to drive cell to mitosis
2. Situation even more complex: each transition can have its own "promoting factor" composed of the same or diff Cdk and a specific "cyclin-type" subunit. For example: (Fig 14-8) mammals:
a. "Start point" has: D-cyclin + Cdk4,6
b. G1 --> S: E-cyclin + Cdk2
c. S phase: A-cyclin + Cdk2
d. G2 --> M (MPF): B-cyclin + Cdk1 (= cdc2 of yeast)
so diff Cdk's (1 in yeast, >3 in mammals), diff cyclins (1 in yeast, >8 in mammals), diff kinase (promoting factor) complexes
(Some partial redundancy possible for Cdk’s and cyclins)
(e.g., too much D-cyclin reported to act as an oncogene, "car gas pedal depressed")
In humans, many CDK and cyclin
Variations = subtle
Too much of D cyclin = drive cells thru start cycle; doesn’t stop (act as oncogene = tumor more active)

B. Activity of these cell cycle kinases (factors) regulated by other proteins:
1. "CDI" -- cyclin-dependent kinase inhibitor: (Fig 14-10)
--binds to complex, blocking its kinase activity
--so can act as a tumor suppressor (p21, p27) = "car brake"
Protein binds to CDK-cyclin complex and prevents it from acting
Not get fully active MPF if this is bound
Acts as tumor suppressor (prevents CDK from active = prevents cells from going thru cycle) = eq. to brake on car
2. protein kinases that phosphorylate specific amino acids in CDK: (Fig 14-6)
yeast gene "Mo15" phosphorylates threonine #161 which activates, if no tyr-14,15-P (called “CAK” for Cdk-activating kinase)
But: yeast gene "Wee1" phosphorylates tyrosines #14,15 (close to ATP binding site), so causes dominant inhibition of MPF activity
so if this Wee1 kinase is mutant/defective (but CAK is OK), CDK more active or active earlier, so cells divide earlier and so are smaller when divide.
Activity also influenced by state of phosphorylation: 2 sites on CDK that can get phosphorylated
CDK + cyclin still not active w/ 2 sites: thr and tyr
Thr 161 phosphorylated = active
Phosphorylation at 14, 15 tyr messes up activity of Wee1
3. phosphatases -- remove specific PO4 groups (e.g. yeast gene “cdc25", removes P from tyr-14,15-P, so activates MPF if threo-161P present). So if this defective/mutant, MPF never activated, and cell never enters mitosis.
If both sites phosphorylated, still inactive
Phosphatase activity = only Thr161 phosphorylated = active
if no inhibitory activity inhibits kinase wee1 but stimulates cdc25
4. MPF acts autocatalytically to stimulate its own activation
analogy: slow burning fuse -------> explosion as [cyclin] increase when cyclin > threshold
result: strong, definite effect at precise time, "switch turned on"
Net result = activity of MPF turned on in sharp way
More available to stimulate = more active
So, have strong definite switch = integrating activity of kinase, phosphatase
5. Destruction of cyclin crucial as well, for cells to move thru each stage:
a. e.g., if MPF cyclin is not degraded, cell never leaves mitosis (stuck in metaphase)
e.g., by APC complex (Anaphase promoting complex; ubiquitination and proteasomes).
b. MPF stimulates the destruction of its own cyclin, as well
Equally important = how to turn it off (in G1 and G2 don’t want MPF activity)
If MPF cyclin not degraded, cell stuck in mitosis
Make mutations of MPF cyclin where ubiquitin activity = stuck in metaphase
APC complex

C. Lots of inhibitory feedback controls to ensure that each stage is completed before starting the next stage. = “Checkpoints” ("washing machine control" analogy)
Enter M: G2 checkpoint = is cell big enough, is environment favorable, is all DNA replicated?
Exit M = Metaphase = are all CH aligned at metaphase
1. e.g.
--DNA replic completed before G2-->M
--no damaged DNA for G2-->M
--cells big enough, and enough nutrients for G1-->S1, G2-->M
--mitotic spindle "OK", chromosomes at metaphase plate for cells to finish and exit M
2. Nature of "signalling", "checkpoint monitoring" still unclear; probably complicated, redundant.
e.g. steps in checkpoints induced in response to ionizing radiation involve many proteins, kinases (phosphorylation), inhibitions, stimulations, degradation, etc. at several transitions and cell-cycle stages. (G1->S, thru S, G2->M) (e.g. Fig 14-9 for 2 DNA damage checkpoints)

D. Substrates of the various cyclin-CDK complexes not fully known:
For MPF, thought to include:
nuclear lamins (for dissociation necessary for nucl. envel. breakdown)
histone H1, etc (for chromo condensation?)
tubulin (for mitotic spindle assembly)
MPF = kinase which phosphorylates: nuclear lamina breaks down (so, nuclear lamins); histone H1, etc (for CH condensation); changes in tubulin population (mitotic spindle)

IV. Other Cell Cycle Topics/Questions:
A. More information about how “protein destruction” is an important part of the mechanism of cell cycle control:
APC (Anaphase promoting complex): is a multimeric protein complex with at least 2 different “adapter” proteins that determine what it does.
What tells it to start separating CH at anaphase: This complex required for cells to begin Anaphase and finish Mitosis. HOW? ANS: it acts as an ubiquitin ligase, and so tags specific proteins for destruction by proteasomes. Two known systems:
1. When the adapter protein is Cdc20, it allows the “sister” chromatids to separate at the beginning of Anaphase. HOW? (Fig 14.26)
Cdc20 made late in cell cycle, binds to APC, and causes it to tag “securin”, a protein that inhibits the activity of a protease (“separase”) which is specific for the cleavage of the protein (“cohesin”) that holds the sister chromatids together. So when securin is destroyed, separase can work cutting cohesin and thus allowing the sister chromatids to “separate” / move apart.
This form of APC is also involved in the spindle attachment checkpoint: Unattached chromosomes have a Mad2 protein present that inhibits APC-Cdc20 from working (allowing Anaphase to begin), so until all chromosomes are attached and all Mad2 is gone, APC-Cdc20 cannot work.
APC complex still links ubiquitin to tag for destruction
Which substrate it tags determined by adaptor protein (need Cdh1)
Cdc20 tags securin for destruction; securin = protease that destroys protein that links sister proteins together (cohesion)
Monitoring check point if spindle fibers attached
Mad2 inhibits APC from active, when Mad2 all destroyed = go through whole
2. When adapter protein is Cdh1 (after Cdc20 gets destroyed), the APC-Cdh1 tags for destruction the mitotic cyclins, and so the activity of the mitotic Cdk’s is gone ---- which is necessary for the cells to finish Mitosis, and to “reset” themselves for G1, etc. Ultimately, Cdh1 is itself destroyed, eliminating the activity of APC until the next Mitosis and new Cdc20 and Cdh1.
Despite schizophrenia's low prevalence in the general population (1), its overall impact on society is staggering when one considers its economic burden (2–4) and the human suffering it inflicts on patients and their families. There is no cure, at present, for this brain disorder, but a number of treatment options are available, aimed at managing symptoms, improving quality of life, and preventing relapse and rehospitalization (5).
Pharmacotherapy using antipsychotic medications is an important part of today's treatment programs for individuals with schizophrenia. Treatment guidelines clearly identify these drugs, particularly the newer second-generation antipsychotics, as first-line therapy for schizophrenic disorders and bipolar disorders (5–11). Antipsychotics are also used to treat other conditions, including the behavioral and psychological symptoms associated with dementia and depression (12). Overall use of these drugs in many countries has markedly increased, fueled largely by the use of second-generation antipsychotics (13,14).
Evidence of ethnic disparities in the utilization of antipsychotic medications was first reported in the 1990s, a few years after the introduction of second-generation antipsychotics on the market. A study from that period showed that although no significant differences in treatment rates were observed, African Americans were found to be more likely than their non-Latino white counterparts to be given higher doses of first-generation antipsychotics (15). More recent studies show that further disparities have emerged with the introduction of more expensive but widely used second-generation antipsychotics. These later studies showed that African Americans were less likely to receive second-generation antipsychotics (16–24) but more likely to receive first-generation antipsychotics (16,17) compared with non-Latino whites. Similarly, some studies reported that compared with non-Latino whites, Latinos have a lower likelihood of using second-generation antipsychotics (18,25).
Although evidence of ethnic disparities in the use of antipsychotic medication has grown over the past few years, there remains a paucity of information regarding differences in use across other ethnic groups, including East Asian, South Asian, and Southeast Asian populations. This lack of understanding could pose challenges to mental health systems that plan and provide services for these fast-growing ethnocultural groups, which now account for 5% and 11% of the U.S. and Canadian populations, respectively (26,27).
To help address this gap we conducted this study of ethnic disparities in the use of antipsychotic drugs in British Columbia, Canada. Using linked survey and administrative data, we examined whether the likelihood of using antipsychotic drugs differed significantly across ethnic groups and whether these differences persisted after analyses controlled for some observable patient characteristics, including possible indications for treatment. Because of the composition of its population, British Columbia is an ideal location to study the use of antipsychotic medications by persons of Asian heritage in a North American setting.
In many instances, disparities in the use of antipsychotic medications are indicative of poor-quality care because the mismatch between need and appropriate care often leads to important differences in mental health outcomes. Findings from this study should prompt planners and providers of mental health care to closely examine current practices and structures that may contribute to disparities in use.
We performed a cross-sectional retrospective study using both survey and administrative data. We created our study sample by pooling samples from the 2001, 2003, and 2005 cycles of the Canadian Community Health Survey (CCHS). These interviewer-administered surveys collect health-related data from individuals ages 12 years and older from community-dwelling households selected by Statistics Canada based on complex multistage sampling designs (28). All linkable CCHS respondents from British Columbia (N=30,062) were included in the study sample.
We linked individual ethnicity data from CCHS to administrative health databases that include all residents of British Columbia except those whose health care is under federal jurisdiction (approximately 4% of the total population): registered status Indians (aboriginals), veterans, federal penitentiary inmates, and members of the Royal Canadian Mounted Police. To minimize bias arising from the use of incomplete administrative data, we excluded individuals who self-identified as aboriginal and those who identified as white and aboriginal as well as those who did not reside in the province for at least 275 days (29). We also excluded individuals who were less than 12 years old at the beginning of 2005 and those with missing data on ethnicity, sex, place of residence, or income. [A figure showing the sample selection process is available as an online supplement to this article at ps.psychiatryonline.org.] Analysis of missing data showed no disproportionate concentration of cases with missing data on sex, place of residence, or income across ethnicities.
All health and sociodemographic data except income were obtained from administrative data sets from 2005; for income, we used 2004 data sets. Data were provided from Population Data BC (30) with the permission of the British Columbia Ministry of Health Services (BC-MoHS) and the British Columbia College of Pharmacists. Ethics approval was obtained from the Behavioral Research Ethics Board at the University of British Columbia.
Measures and data sources
Our outcome measure was a dichotomous variable indicating whether or not an individual filled at least one prescription for any antipsychotic drug in calendar year 2005. Prescription data were obtained from PharmaNet, a centralized database maintained by the BC-MoHS that records every prescription filled in community pharmacies throughout the province regardless of patient age or insurance status. We used the World Health Organization's Anatomical Therapeutic Chemical (ATC) classification system (31) to identify the first- and second-generation antipsychotics filled by individuals included in the study.
We derived our ethnicity variable from responses to the CCHS question “People living in Canada come from many different cultural and racial backgrounds. Are you?” We coded respondents who self-identified with more than one of the 13 ethnic categories as persons of mixed ethnicity. Respondents who identified with just one ethnic group were coded into the following categories according to their response: white, Chinese, other Asian, and nonwhite non-Asian. We originally intended to derive a number of Asian ethnic groups, but our sample yielded only two that were statistically viable: Chinese and other Asians. We included in the category of other Asians those who self-identified as Filipino, Japanese, Korean, East Indian, Pakistani, Sri Lankan, Cambodian, Indonesian, Laotian, and Vietnamese. The nonwhite non-Asian category included Arabs, blacks, Latinos, Afghanis, and Iranians. We also used CCHS data to flag as recent immigrants those who immigrated to Canada in 1996 or later. Ethnicity and recent immigration status were the only two variables we derived from pooled CCHS data.
Using the 2005 diagnostic codes available in the administrative records of physician and hospital visits (obtained from the British Columbia Medical Services Plan database, where one physician visit equals one diagnosis, and from the British Columbia Discharge Abstract Database, where a recorded hospital stay can have up to 25 diagnoses), we constructed aggregated diagnostic groups (ADGs) according to the Johns Hopkins Adjusted Clinical Groups Case-Mix system (32). We used a count of ADGs as our general health status covariate. A higher count of ADGs is associated with a greater degree of overall clinical complexity and increased likelihood of prescription drug use (33).
Similarly, we used physician and hospital records to construct indicators of schizophrenia, bipolar disorders, depression, and dementia diagnoses. These variables were binary measures that indicated whether individuals received at least one diagnosis of a mental disorder or dementia (34) in 2005. The specific ICD-9 diagnostic codes we looked for in the records of physician visits were 295 for schizophrenic disorders; 296 (excluding 296.2, 296.3 and 296.9) for bipolar disorders; 311, 296.2, 296.3, 296.9, and 50B (a British Columbia-specific diagnostic code used for “anxiety/depression”) for depressive disorders; and, 290, 294, 298, 331, and 348 for dementia. In hospital visit records, we looked for the following ICD-10 diagnostic codes: F20 for schizophrenia, F30 and F31 for bipolar disorders, F32–F34 and F38–F39 for depressive disorders, and F00–F03 for dementia.
Using 2005 administrative data, we adjusted for sociodemographic characteristics, including age (in ten-year bands) and place of residence (urban versus nonurban). We controlled for income using 2004 household income quintiles built with a combination of household-specific and neighborhood-level income data (35). We used British Columbia's geographic regions (referred to as local health areas) to create variables indicating urban and nonurban residence.
To test for statistically significant differences across ethnic groups with respect to the covariates we examined, we performed chi square tests on categorical variables (such as sex and diagnosis) and analysis of variance (ANOVA) with Bonferroni post hoc comparisons on continuous variables (including age, income, and number of ADGs).
Using logistic regression, we modeled the association between antipsychotic drug use and ethnicity, controlling for the effects of sex, age, urban residence, recent immigrant status, income, overall health status, and clinical indications for antipsychotic medication for our entire cohort. We ran two regression models to determine whether ethnic variations in antipsychotic use differed by type of diagnosis. Our first model examined whether antipsychotic drug use differed by ethnicity and controlled for schizophrenia or bipolar disorder diagnosis and health and sociodemographic variables. The second model, built on the first, added depression and dementia diagnoses. To test the robustness of our findings, we ran these models again using a subgroup of individuals who had at least one of the mental disorder or dementia diagnoses we described earlier.
We also attempted to run two separate analyses: one for the subgroup of individuals with one or more diagnoses of schizophrenia or bipolar disorder and another for those without these diagnoses; however, the small sample of persons in the schizophrenia or bipolar disorder stratum prevented these stratified models from running or obtaining statistically reliable results.
All statistical analyses were completed with version 10.1 of Stata for Linux64.
Description of the sample and antipsychotic drug use
A total of 27,658 individuals met our inclusion criteria for this study. [A figure showing the study sample selection is available as a supplement to this article at ps.psychiatryonline.org.] Table 1 describes our final sample according to ethnic groups. Chi square test and ANOVA results indicate statistically significant differences across ethnic groups for all the covariates we examined, except for the diagnoses of schizophrenia or bipolar disorders and the proportion of individuals that filled a first-generation antipsychotic.
In comparison with other single-ethnicity groups, those self-identifying as white were approximately ten years older, had a slightly higher mean number of ADGs, were more likely to live in nonurban areas, were less likely to be recent immigrants, and were more likely to be in the top income quintile. In comparison with others from minority groups, those identifying as Chinese had slightly fewer ADGs and were more likely to live in urban areas, to have recently immigrated, and to be from households with lower income.
The distribution and concentration of diagnoses also differed by ethnicity. The highest prevalence of depression (11.6%) and dementia (1.4%) diagnoses were found among those who self-identified as white. The highest prevalence of diagnoses for schizophrenia or bipolar disorder (2.7%) was found among respondents in the mixed ethnic group. Those who self-identified as Chinese had the lowest prevalence of diagnoses for depression (5.0%), dementia (.5%), and schizophrenia or bipolar disorders (.7%).
Ethnic differences in antipsychotic prescription fills
In 2005, 2.2% of the individuals in our sample filled at least one antipsychotic prescription. Without adjustment for other factors, individuals in our sample who self-identified as Chinese (1.0%) were least likely to fill a prescription for antipsychotics, whereas individuals of mixed ethnicity (4.3%) were most likely to do so (see Table 1).
Table 2 presents the results of the two adjusted logistic regression models examining differences in antipsychotic drug use by ethnicity. In the model that controlled for schizophrenia and bipolar disorders, significant ethnic differences in use of antipsychotic medication remained after adjustment for individuals' sex, age, place of residence, immigrant status, income, health status, and primary diagnoses. Persons identifying as Chinese had much lower odds than those identifying as white of filling a prescription for an antipsychotic (odds ratio [OR]=.47, p<.05). In contrast, those identifying as being of mixed ethnicity were more likely than whites to have filled antipsychotic prescriptions (OR=3.19, p<.05).
When we ran the second model, which also controlled for depression and dementia, all of the ORs moved slightly toward 1.00, except for the nonwhite non-Asian category. Statistically significant differences were found among Chinese (OR=.49, p<.05) and mixed ethnic groups (OR=2.97, p<.05), indicating that disparities persisted in this more fully adjusted model. Although they were not significant, the point estimates for the other Asian and nonwhite non-Asian groups suggested lower odds of purchasing antipsychotics for these groups compared with whites. [Odds ratios for the full model are available as a supplement to this article at ps.psychiatryonline.org.]
Results from the subgroup analyses based on a sample of individuals who had a diagnosis of either a mental disorder or dementia (N=3,445) produced findings mirroring full population results (results available by request).
Using linked survey and administrative data, our study investigated ethnic differences in the use of antipsychotic drugs across ethnic groups that had not been studied. Results suggest that ethnic disparities in use persisted even after we accounted for important sources of variation, such as sex, age, recent immigration, income, health status, and diagnoses of schizophrenia or bipolar disorder. Chinese and other Asians were less likely to fill antipsychotic prescriptions compared with whites, and these disparities decreased slightly after analyses further controlled for dementia and depression diagnoses. Conversely, people of mixed ethnicity were significantly more likely than whites to use antipsychotic medication, and this difference remained, although it decreased slightly, after the model was adjusted to fully account for all the diagnoses examined.
Our finding of lower likelihood of antipsychotic drug use among Chinese and other Asian people compared with whites is consistent with the existing literature on the use of mental health services in general and medication use among patients with serious mental illness. It has been reported, for instance, that even after adjustment for differences in the prevalence of major depressive disorders, Chinese, South Asian, and Southeast Asians living in Canada were less likely than white Canadians to have sought care for their condition (36). Similarly, Chinese immigrants in Canada diagnosed as having a serious mental illness were found to have received fewer psychiatric drugs than an equivalent comparison group drawn from the general population (37). Our findings also complement the existing literature on ethnic disparities in antipsychotic drug utilization by including Asians in the list of ethnic minorities that were found to have lower levels of use (16–24).
One potential reason for the lower likelihood of filling antipsychotic prescriptions among Chinese is cultural differences in views on Western medicine (38). Although we accounted for some measure of acculturation by controlling for recent immigration status, it remains possible that many Chinese patients maintained strong negative views toward the use of antipsychotic medications to treat mental disorders. It is also possible that the lower odds of filling reflect cases of nonadherence in 2005 resulting from unpleasant experience with medication in previous years. Compared with whites, Chinese patients have been reported to respond to significantly lower doses of antipsychotics (39). If clinicians inadvertently did not take into account this information when determining dosage levels, their Chinese patients may experience more adverse events, consequently affecting their adherence to the prescription regimen.
We were unable to adequately explain the higher likelihood of antipsychotic use observed among people of mixed ethnicity. In our data, these individuals were predominantly young women from lower income groups with unusually high rates of diagnoses of schizophrenia or bipolar disorder and depression. Even though they had particularly high rates of diagnoses of conditions for which antipsychotics would be prescribed, it may also be the case that a greater share of this population uses antipsychotics for conditions we did not control for, such as attention deficit disorder, autism, or substance use disorders (40). However, to our knowledge there is no literature suggesting that Canadians of mixed ethnicity are more prone to other psychological disorders or clinical conditions that would be treated with these medications.
This study was not without limitations. Our data captured only filled prescriptions, and filled prescriptions do not equal written prescriptions or antipsychotic drug consumption. Individuals may have been prescribed these medications but never filled them or filled these prescriptions but never actually took the medications. However, given the seriousness of the main diagnosis for which these medications are used, misclassification of prescribing practices is likely to have been small. Furthermore, because individuals with untreated schizophrenia often end up in hospitals, which leads to the resumption of pharmacotherapy, filled but not consumed prescriptions are also likely to have been minimal in this drug class. Also, although our linkage of three cycles of CCHS data to administrative records produced a sample with a higher percentage of persons from ethnic minorities than CCHS samples used in other analyses (41), our final sample still underrepresented ethnic minority populations in British Columbia compared with census data (29). Our income data were also from 2004, whereas all the other health administrative data we used were from 2005. It is possible that the ORs we calculated for income were biased, but we believe that the effect was negligible because incomes at the population level are fairly stable within short periods of time.
Our study provides evidence of significant disparities in the use of antipsychotic medication in a population that has a large representation of Asian ethnicities. We found that Asians, specifically Chinese, were less likely than whites to use antipsychotic drugs, whereas people of mixed ethnicities were more likely to use them. These differences persisted even when sociodemographic characteristics, health status, and clinical indications for the drugs' use were accounted for. In addition, disparities were greater when antipsychotic drugs appeared to have been used in treating conditions other than schizophrenia and bipolar disorder. Future studies may be directed toward examining whether these differences are provider or patient driven and in determining how these variations result in meaningful differences in mental health outcomes. Planners and providers of mental health care may need to take into account differences in cultural beliefs and practices as well as group differences in pharmacological response to antipsychotic medications to ensure that patients from ethnic minority groups are receiving care appropriate to their level of need.
This study was funded by an operating grant (“Equity in Pharmacare: The Effects of Ethnicity and Policy in British Columbia”) from the Canadian Institutes of Health Research. The construction of the research database was supported, in part, by contributions of the BC-MoHS to the University of British Columbia Centre for Health Services and Policy Research. Mr. Puyat was supported in part by a Western Regional Training Centre studentship funded by Canadian Health Services Research Foundation, Alberta Heritage Foundation for Medical Research, and Canadian Institutes of Health Research (CIHR). Dr. Hanley was supported by CIHR and the Michael Smith Foundation for Health Research (MSFHR). Dr. Law receives salary support through a New Investigator Award from CIHR. Dr. Wong was supported by a scholar award from MSFHR and a New Investigator award from CIHR. Sponsors had no role in the project or in decisions to publish results.
The authors report no competing interests.
Changing the DNS IP addresses for your computers can be done easily nowadays, as many operating systems and routers' graphical user interfaces have made it easy to do so. Changing DNS servers sometimes provides greater Internet performance and security. Let's say, for some reason, your ISP's DNS servers are quite slow; you can use other DNS servers that you know are fast. In short, to keep it simple, DNS servers are the middlemen responsible for making sure the domain names you type into your web browser get translated correctly, so your browser requests go to the right websites and you are able to communicate with web resources such as the websites you want to visit. Even shorter: DNS servers correlate and map domain names to the correct IP addresses.
Understanding how DNS servers really work, though, might require one to understand more about networking and the technology that actually does the record keeping and the mapping of IP addresses to domain names. Nonetheless, the best-known DNS software installed on most DNS servers nowadays is BIND. Truly speaking, BIND and other DNS software are the brains behind the correlation, mapping, and translation of domain names into IP addresses.
DNS works the way it does today because we humans need readable and memorable addresses. After all, a numerical IP address is harder to remember than its text counterpart (i.e., a domain name), so a domain name made of memorable text is easier to work with. Without knowing the correct IP addresses, machines cannot communicate with each other. Imagine each IP address as a home or business address… knowing it will get you where you want to go, right? This is why meshes of healthy DNS servers are so essential to the health of the Internet. A secure DNS system is just as essential to the safety of users, because it can prevent hackers from redirecting your web requests and resources to bogus web destinations where private information can be siphoned away illegally.
Lately, IPv6 has been deployed by many large Internet companies and is slowly replacing IPv4. With IPv6 in play, communication among machines has to be reconsidered. Machines that use IPv6 must use the IPv6 address format or a technology which translates between the IPv6 and IPv4 address formats, so DNS servers — which are responsible for translating, mapping, and correlating domain names to IPv6 addresses — also have to support IPv6. With this in mind, you can now replace your ISP's IPv6 DNS servers with third-party IPv6 DNS servers to speed up communication between machines. The lower the latency between machines making contact, the faster you can make new requests to any machine. Simply having faster DNS servers, whether IPv4 or IPv6, doesn't mean your data transfer will definitely improve. There are too many factors in play that can affect how fast your data get transferred. One good example would be a very busy network hosting the web server you're trying to connect to, which will not be able to respond to you fast enough. So even when DNS servers are fast, other networking factors may be at play that affect Internet performance for better or worse.
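To make the "middleman" role concrete, here is a minimal sketch — not from the original post — of how a program on a Unix-like system asks whatever DNS servers the machine is configured with to translate a hostname into both IPv4 and IPv6 addresses. The hostname example.com is just a placeholder.

/* dns_lookup.c -- resolve a hostname to its IPv4 and IPv6 addresses.
 * Build: cc dns_lookup.c -o dns_lookup    Run: ./dns_lookup example.com */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    const char *host = (argc > 1) ? argv[1] : "example.com"; /* placeholder name */
    struct addrinfo hints, *res, *p;
    char buf[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* ask for both IPv4 (A) and IPv6 (AAAA) answers */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(host, NULL, &hints, &res);  /* this triggers the DNS query */
    if (err != 0) {
        fprintf(stderr, "lookup failed: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        void *addr;
        if (p->ai_family == AF_INET)           /* IPv4 result */
            addr = &((struct sockaddr_in *)p->ai_addr)->sin_addr;
        else                                    /* IPv6 result */
            addr = &((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("%s -> %s\n", host, buf);
    }
    freeaddrinfo(res);
    return 0;
}

The addresses it prints come from whichever resolvers the machine is configured to use — which is exactly what changes when you swap your ISP's DNS servers for a third-party service.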
Companies that host DNS servers (e.g., ISPs, third-party DNS hosting services) can be the first line of defense for your Internet safety. OpenDNS has shown us how this is possible, as the service automatically blocks domain names that are known to be malicious. This way, users can avoid such domain names and don't have to run the risk of being infected with malicious programs. Furthermore, OpenDNS allows users to manually block specific websites or websites belonging to specific categories (e.g., gambling, porn, etc…). Google Public DNS is not as transparent as OpenDNS about letting users specify and change their Internet safety settings, but Google Public DNS is known to be fast and secure. Google works behind the scenes to make sure its DNS servers are fast and secure.
Lately, Google has also provided IPv6 Public DNS servers, so users can now add Google IPv6 Public DNS servers to their routers and computers. Personally, I use both OpenDNS and Google Public DNS servers. So, in my routers and computers, I add up to 6 DNS server IP addresses. Why 6 and not 4? The first 4 DNS IP addresses belong to the IPv4 DNS servers of OpenDNS and Google, and the last 2 belong to Google's IPv6 DNS servers. Basically, whenever OpenDNS slows down on a DNS matter, Google DNS might kick in to save the day for me. I think DNS resolution works in a way that allows it to find the fastest path over which it can communicate with another DNS system, consequently allowing lower-latency network communication between the networks.
I think it's essential for you to know whether you can trust a third-party DNS service or not, because in the end it's you who is changing your machines' DNS settings. Using the wrong DNS services can slow down your Internet performance. Worse, using the wrong DNS services can also impair your Internet security, leaving you exposed to hackers' exploits. So I suggest you do very thorough research on any specific DNS service you want to use. If you can't trust any third-party DNS services, then you should stick with your ISP's DNS service. I think all ISPs provide DNS service by default for free, so you should have no problem using your very own ISP's DNS service. Nonetheless, always make sure you have the right IP addresses of your ISP's DNS servers in case you want to move away from a third-party DNS service and use your ISP's DNS service again.
- OpenDNS - http://www.opendns.com/
- Google Public DNS (IPv4 and IPv6) - https://developers.google.com/speed/public-dns/docs/using
<urn:uuid:7e1e8532-3dd1-4897-a118-77dab222a9a5> | |Why women live longer___Women live longer than men partly because their immune systems age more slowly, a study suggests. As the body's defences weaken over time, men's increased susceptibility to disease shortens their lifespans, say Japanese scientists. Tests of immune function could give an indication of true biological age, they report in Immunity & Ageing journal. The immune system protects the body from infection and cancer, but causes disease when not properly regulated. The Japanese study set out to investigate the controversial question of whether age-related changes in the immune system could be responsible for the difference in average life expectancy between men and women. Prof Katsuiku Hirokawa of the Tokyo Medical and Dental University and colleagues analysed blood samples from 356 healthy men and women aged between 20 and 90. They measured levels of white blood cells and molecules called cytokines which interact with cells of the immune system to regulate the body's response to disease. In both sexes, the number of white blood cells per person declined with age as expected from previous studies. However, closer examination revealed differences between men and women in two key components of the immune system - T-cells, which protect the body from infection, and B-cells, which secrete antibodies.|
In the News
|Chandigarh, May 18 (babushahi.com bureau): Congress trashes, debunks and terms the statement of Brar SEC, a cock and bull story, issued to please his political masters and to seek yet another govt. office, once he retires from here.
If this election process was peaceful, then what in Sh. Brar's dictionary is unfair, partisan and violent? Four political murders have taken place in Punjab unknown to Bihar type of politics. To begin with Youth Congress leader Sh. Sukhraj Singh was kill....
|Faridkot, May 18 (babushahi.com bureau): Various atrocities on the teachers including females lodged at the Central Jail, Faridkot at the hands of SAD-BJP government had surpassed the ‘British mayhem' before independence.
Stating the above, leader of Opposition Sunil Jakhar who was denied meeting with the Teachers in Central Jail, Faridkot today by the jail authorities, lamented that its' height of dictatorial attitude of Akali-BJP that those who raise their voice for their rights....
|Chandigarh, May 18 (babushahi.com bureau): Change is the essence of life! And pursuing this belief, Desh Bhagat Community Radio 90.4 FM (Aap Ki Awaaz) has announced that it will be available on a new frequency 107.8-FM from midnight on Saturday.
About the frequency change, Dr Zora Singh, Chairman, Desh Bhagat Community Radio, said the changeover is in response to listeners seeking better connectivity and signal. "Lot of listeners were sending in their feedback that they loved our s....
|Chandigarh, May 18 (babushahi.com bureau): The Shiromani Akali Dal today asked the Congress leaders in Punjab to come out of "the trauma ward into which they have been pushed by their shocking defeat in the last assembly elections in the state, and to behave like a healthy and responsible democratic party. The party also asked the Congress leaders not to allow their serial defeats to so completely frustrate them as to lose faith in democratic system itself, as was evident from their demand for ....|
|Amritsar, May 18 (babushahi.com bureau): The Congress Party today ahead of Block Samitit and Zila Parishad elections tomorrow gave a major jolt to ruling Shiromani Akali Dal (SAD) when Bibi Kashmir Kaur the Chairperson of Block Samiti Majitha, the home constituency of Punjab Revenue Minister Bikran Singh Majithia joined Congress here in the presence of Punjab Pradesh Congress Committee (PPCC) President Mr.Partap Singh Bajwa.
Bibi Kashmir Kaur wife of Baba Ram Singh a Taksali Akali....
|Chandigarh, May 18 (babushahi.com bureau): The United States district court of Eastern district of Wisconsin has dismissed the case filed by a New York based rights body ‘Sikhs for Justice', Shiromani Akali Dal (Amritsar) led by Simranjeet Singh Mann and others against the Punjab Chief Minister Parkash Singh Badal. It may be recalled the case was filed on August 8, 2012 when the Chief Minister was on visit to Wisconsin to mourn the death of seven innocent Sikhs in a shootout tragedy.
|By Harish Monga Dido|
Ferozepur, May 18 (babushahi.com bureau): The Ferozepur police got success with the arrest of main accused Lakhwinder Pal in the tragic fire incident at village Jattan Wala on May 16, in which six members of family were burnt alive while sleeping by sprinkling petrol.
Three members of family had died on the spot while one who was serious with 70 per cent burns was admitted in a private hospital for treatment. A team was constituted to investigate into the matte....
|SAD-BJP govt committed to provide conducive atmosphere for free and fair elections|
Bajwa daily crafting new excuses to run away from battle
Chandigarh, May 19 (babushahi.com bureau): Sukhbir Singh Badal, President, Shiromani Akali Dal and Deputy Chief Minister Punjab today said that Zila Parishad and Panchayat Samiti polls would mark its total decimation of from the electoral scene of Punjab asked newly appointed PPCC Chief Partap Singh Bajwa to get ready for first strong electora....
Form Committees to check vulgarity in rendering songs on the pattern of Board of Film Censors
By Harish Monga Dido
It is quite pathetic to see these creatively challenged writers use foul language and raunchy lyrics to give their weak songs the much needed punch. It is just a cheap tactic to grab attention. It seems that no one cares or more precisely the people who need to, don't bother to come forward.
There are number of social issues for which the Punjab and Hi....
|Lahore, May 18 (babushahi.com bureau): A Pakistani judge investigating the murder of Indian death row convict Sarabjit Singh has appealed to Indian nationals having information about the matter to file written submissions with relevant documents within seven days.|
Justice Syed Mazahar Ali Akbar Naqvi of the Lahore High Court is investigating the death of Sarabjit following a brutal assault by prisoners within Kot Lakhpat Jail.
The Indians are required to get themselves registered....
|Chandigarh, May 18 (babushahi.com bureau): Punjab School Education Board has declared the result of Class-12 exams held in March-April this year. Girls have yet again outshine boys. All first three positions are shared by girls, while there was a tie for the first position. Mehak from Ludhiana and Maninder Kaur from Malout shared the first position, while Charanjit Kaur from Jagraon and Jyoti from Ludhiana bagged the second and third position respectively. The detailed result, merit list an....|
|Chandigarh, May 17 (babushahi.com bureau): The election for Zila Prishad and Panchayat Samiti in Punjab, though has a noble objective of promoting democracy at the grass root level, but the violent incidents during these polls has brought a bad name to the democratic institutional system in the state. After Adampur incident, the poll scene today witnessed another ugly happening in Chak Mishiri village in Amritsar claiming two lives in the fight between supporters of Shiromani Akali Dal and Congr....|
|Perth, Australia, May 17 (babushahi.com bureau): Bauer Media Ltd, a large media company, has this week apologised and expressed regret to UNITED SIKHS for any offence felt by any member of the Sikh community by the publication of an article and photograph of a Nihang Sikh in their porn magazine in January this year. This apology was expressed in an agreement mediated by the Australian Human Rights Commission (AHRC), following a complaint filed by UNITED SIKHS.|
|New Delhi, May 17 (babushahi.com bureau): Govt. of India extended central deputation tenure of Aloke Prasad, IPS(UP-84) as Chief Vigilance Officer (CVO) in the National Highways Authority of India (NHAI), New Delhi, for a period of two years beyond 20.06.2013 i.e.up to 20.06.2015 or untill further orders, whichever is earlier.|
|Chandigarh, May 17 (babushahi.com bureau): Taking a strong exception over the demand of the Congress leaders for imposing the President rule in the state, Advisor to Punjab Chief Minister and General Secretary of Shiromani Akali Dal (SAD) S. Maheshinder Singh Grewal today said that these baseless statements were a reflection of the mounting frustration of Congress leaders in the wake of impending defeat in the Zila Parishad and Block samiti elections slated to be held on May 19.|
|Bathinda/Mansa, May 17 (babushahi.com bureau): The Punjab Revenue and Public Relations Minister Bikram Singh Majithia today urged the President of India to form a caretaker government at centre as the present congress led UPA-2 is hell bent upon to loot the national resources presuming its impending humiliating defeat in coming general elections.|
Addressing a series of election rallies in favour of SAD-BJP candidates for Zila Parishad and Block Samiti elections at....
|Chandigarh, May 17 (babushahi.com bureau): In a bid to provide the best treatment and diagnostic facilities to the cancer patients across the state, the Punjab Chief Minister Mr. Parkash Singh Badal today approved a comprehensive plan of nearly Rs.300 crore to equip the government medical colleges Patiala, Amritsar, Faridkot besides cancer hospital at Bathinda with the super specialty facilities. A decision to this effect was taken by Mr. Badal during a meeting with the top brass of the departm....|
|Chandigarh, May 17 (babushahi.com bureau): The Fifth Session of the Fourteenth Punjab Vidhan Sabha, which was adjourned sine-die at the conclusion of its sitting held on the 3rd May, 2013, has been prorogued by an Order of the Governor of Punjab, dated the 17th May, 2013.|
|Emerging Agro Farm Project will register new heights in agriculture : Gurpreet Sidhu|
By Gagandeep Sohal
Chandigarh, May 17 : Sans property tax with lower stamp duties than Punjab, appreciable water table (water available at 200 feet) and abundant solar and wind energy have turned Rajasthan into the most sought after destination for agriculture investment, said Gurpreet Singh Sidhu, the managing director of Emerging India Group.
Sidhu was speaking on the occa....
|By Gagandeep Sohal|
Chandigarh, May 17 : Mr. Partap Singh Bajwa President of Punjab Pradesh Congress Committee (PPCC) today demanded resignation of Deputy Chief Minister Mr. Sukhbir Singh Badal for his failure to control the law and order in the state and described him main architect behind ‘political murders' and growth of mafias in Punjab.
Mr. Bajwa said that Mr.Sukhbir Bdal was a total failure as home minister and he has no right to continue in power. Pun....
Chandigarh, May 17 (babushahi.com bureau): The State Election Commission is fully prepared for the election as all arrangements have been made. The Canvassing came to an end today for the PS and ZP Elections, that are scheduled to held on May 19.
Disclosing this here today official spokesperson of State Election Commission said that polling for 146 Panchayat Samitis and 22 Zila Parishads would be held on May 19. State Election Commission has issued ....
|By Gagandeep Sohal |
Chandigarh, May 17 : The State Election Commission is fully prepared for the election as all arrangements have been made. The Canvassing came to an end today for the PS and ZP Elections that are scheduled to be held on May 19.
Disclosing this here today Mr. S.S.Brar, State Election Commissioner, said that polling for 146 Panchayat Samitis and 22 Zila Parishads would be held on May 19. State Election Commission has issued strict instructions t....
Chandigarh May 17 (babushahi.com bureau): The Punjab government has launched an ambitious project to reorganize the rural water supply and sanitation schemes aiming at covering entire rural population of 166 million people under drinking water and sanitation programmes.
Disclosing this here today, a spokesman of the government said that in order to improve environmental sanitation, a provision has been made in the project to initiate works on pilot basis ....
Chandigarh, May 16 (babushahi.com bureau): A public interest litigation (PIL) has been filed in the Punjab and Haryana high court against close relatives of Bhai Balwant Singh Rajoana Kalan, for seeking directions to Punjab Government to retrieve 8 acre land of Gurudwara from them-dismissed as not maintainable.
A Division Bench of the High Court comprising ACJ Jasbir Singh and Justice Rakesh Kumar Jain today dismissed a PIL filed by "Smajik Jagriti Front (Regd....
|By Harish Monga Dido|
Ferozepur,May 17(BB): Sandeep Kaur, student of BCA-III of Dev Samaj College for Women, Ferozepur City – a premier and NAAC "A" Graded institution for women – topped in BCA-III results declared by the Punjab University, Chandigarh. She has added another feather in the crown of College which has already a countless distinctions in education, sports and cultural activities.
Speaking to the media, Sandeep Kaur gave credit for her success at the top i....
Why Tirchhi Nazar turns Digital ?
Our basic purpose is facilitating interaction with information. People must have...
|British Columbia's Elections -Fascinating and Unpredictable|
This election has also been a watershed for the South Asian community in general and Punjabis in particular. Before the dissolution of the legislature there were six Punjabi MLAs in Victoria-two BC Liberals and four New Democrats.
|In Panchayat elections, politics is seen more than feeling of service|
New Sarpanch must create atmosphere to stop migration of villagers to cities
| Academicians, not the politicians should decide about education reforms|
Delhi University 4-year-degree-course from 2014 appreciable – other universities should follow it
|Congress has to start from 'ABC' as it has failed to remove dented and tainted leaders|
Congress perplexed on every front, got an opportunity to celebrate its victory in Karnataka
|Mothers' Day need to be celebrated as National Festival|
On Mothers' Day
|Love is Like a Butterfly: It goes where it pleases and it pleases wherever it goes...|
A newly emerged butterfly can't fly immediately. Because inside the chrysalis, a developing butterfly waits to emerge with its wings collapsed around its body. When it finally breaks free of the pupa case, it greets the world with tiny, shrivelled wings
|Recent comments by our online visitors|
|Sher Singh wrote :|
|Dear Mr. Baljit - your news dated 1st Apr'12 on the S.Pratap Singh Bajwa touring Gurdaspur "Bajwa lashes at SAD-BJP govt for ignoring border area development ", you have wrongly mentioned Smt. Aruna Chaudhary as former MLA. She is current incumbent MLA from Dinanagar Constituency. Kindly edit the news page and update your records for future news. Regards, Cpt Sher Singh PA to Aruna Chaudhary 98143-12678|
|narinder chhabra wrote :|
|Respected Balli ji Congratulations for 6 lakhs visitors of Babushai.Com My wishes are with u for 6 crores earliest|
|H.S.HUNDAL Adv. wrote :|
|Respected Balli ji, Congratulations for touching the mark of Six Lakh Visitors.Keep up the spirit of courageous journalism and reporting.Congrats!!!!|
A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely.
The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description.
The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard), while other languages, such as Perl 5 and earlier, have a dominant implementation that is used as a reference.
A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. Traits often considered important for what constitutes a programming language include the language's function and target, the abstractions it provides for describing data and computation, and its expressive power.
Markup languages like XML, HTML or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.
The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming. Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.
All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.
A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.
The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.
Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur Form (for grammatical structure). Below is a simple grammar, based on Lisp:
expression ::= atom | list
atom       ::= number | symbol
number     ::= [+-]?['0'-'9']+
symbol     ::= ['A'-'Z''a'-'z'].*
list       ::= '(' expression* ')'
This grammar specifies the following: an expression is either an atom or a list; an atom is either a number or a symbol; a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign; a symbol is a letter followed by zero or more of any characters; and a list is a matched pair of parentheses, with zero or more expressions inside it.
The following are examples of well-formed token sequences in this grammar: '12345', '()', and '(a b c232 (1))'.
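As a short illustration added here (it is not part of the original text), the lexical half of such a definition maps directly onto regular expressions in practice. The sketch below uses the POSIX regex API to test whether a candidate token matches the grammar's number rule; the token list is made up for the example.

/* token_check.c -- check candidate tokens against the grammar's "number" rule,
 * i.e. the regular expression [+-]?[0-9]+ anchored to the whole string. */
#include <regex.h>
#include <stdio.h>

int is_number(const char *token)
{
    regex_t re;
    /* ^ and $ anchor the pattern so the whole token must match */
    if (regcomp(&re, "^[+-]?[0-9]+$", REG_EXTENDED | REG_NOSUB) != 0)
        return 0;
    int ok = (regexec(&re, token, 0, NULL, 0) == 0);
    regfree(&re);
    return ok;
}

int main(void)
{
    const char *tokens[] = { "12345", "-7", "c232", "+", "(1)" };
    for (int i = 0; i < 5; i++)
        printf("%-6s %s\n", tokens[i], is_number(tokens[i]) ? "number" : "not a number");
    return 0;   /* 12345 and -7 are numbers; the rest would be symbols or punctuation */
}

A real lexer would combine such patterns for every token class and hand the resulting token stream to a parser built from the grammatical (BNF) rules.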
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false: "Colorless green ideas sleep furiously" is grammatically well formed but has no generally accepted meaning, and "John is a married bachelor" is grammatically well formed but expresses a meaning that cannot be true.
The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):
complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);
If the type declaration on the first line were omitted, the program would trigger an error on compilation, as the variable "p" would not be defined. But the program would still be syntactically correct, since type declarations provide only semantic information.
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars. Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution. In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements, and do not require code execution.
The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms. For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct. Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.
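As a small illustration added here (not from the original article), the following C fragment obeys the language's grammar, yet a conforming compiler rejects it on static-semantic grounds: one identifier is used without ever being declared, and one switch statement repeats a case label.

/* static_semantics.c -- every line below follows C's grammar, but the
 * translation unit is rejected at compile time by static-semantic rules. */
int classify(int n)
{
    total = total + n;      /* error: 'total' is used but never declared   */

    switch (n) {
    case 1:  return 10;
    case 1:  return 20;     /* error: duplicate case label in one switch   */
    default: return 0;
    }
}

Neither problem is visible to a context-free grammar alone; both are caught by the compiler's static checks before the program ever runs.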
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allow execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.
A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal design and study of type systems is known as type theory.
A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types. For example, the data represented by "this text between the quotes" is a string. In most programming languages, dividing a number by a string has no meaning; most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages the meaningless operation will be detected when the program is compiled ("static" type checking), and rejected by the compiler; while in others, it will be detected when the program is run ("dynamic" type checking), resulting in a run-time exception.
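To make the static case concrete, here is a tiny hedged illustration (added for this edition, not part of the original article) in C, a statically checked language:

/* type_error.c -- rejected at compile time: invalid operands to binary '/'
 * (an int and a string), so the program never gets a chance to run.       */
int main(void)
{
    int nonsense = 5 / "this text between the quotes";
    return nonsense;
}

A dynamically checked language would accept the equivalent program and only raise an error (or yield a surprising value) when the offending line is actually executed.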
A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type-most commonly character strings which are used for both symbolic and numeric data.
In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths. High-level languages which are untyped include BCPL and some varieties of Forth.
In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing. Many production languages provide means to bypass or subvert the type system, trading type-safety for finer control over the program's execution (see casting).
In static typing, all expressions have their types determined prior to when the program is executed, typically at compile-time. For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.
Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.
Weak typing allows a value of one type to be treated as another, for example treating a string as a number. This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds without error even if x is null, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors. Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.
It may seem odd to some professional programmers that C could be "weakly, statically typed". However, the generic void* pointer does allow pointers to be converted to other pointer types without an explicit cast, which is much like reinterpreting an array of bytes as almost any data type in C without the type system objecting.
Most programming languages have an associated core library (sometimes known as the 'standard library', especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.
A language's core library is often treated as part of the language by its users, although the designers may have treated it as a separate entity. Many language specifications define a core that must be made available in all implementations, and in the case of standardized languages this core library may be required. The line between a language and its core library therefore differs from language to language. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.
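To make the Java example concrete (a small illustrative sketch of ours, not part of the original text), a string literal already behaves as a full instance of the core library's java.lang.String class:

```java
public class CoreLibraryDemo {
    public static void main(String[] args) {
        // The literal below is an instance of java.lang.String, a class
        // defined in the core library rather than by the language grammar.
        String greeting = "hello, world";

        // Because the literal is an object, library methods can be called
        // on it directly, without naming a variable first.
        System.out.println("hello, world".length());     // 12
        System.out.println(greeting instanceof String);  // true
        System.out.println(greeting.toUpperCase());      // HELLO, WORLD
    }
}
```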
Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from its semantics, and showing language families of related languages branching one from another. But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety, since it has a precise and finite definition. By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has.
Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. Although there have been attempts to design one "universal" programming language that serves all purposes, all of them have failed to be generally accepted as filling this role. The need for diverse programming languages arises from the diversity of contexts in which languages are used:
One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per time unit.
Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish". Alan Perlis was similarly dismissive of the idea. Hybrid approaches have been taken in Structured English and SQL.
A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.
The specification of a programming language is intended to provide a definition that the language users and the implementors can use to determine whether the behavior of a program is correct, given its source code.
A programming language specification can take several forms, including the following:
An implementation of a programming language provides a way to write programs in that language and to execute them on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.
The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source a line at a time.
Programs that are executed directly on the hardware usually run several orders of magnitude faster than those that are interpreted in software.
One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode which are going to be used to machine code, for direct execution on the hardware.
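As an illustrative sketch only (the instruction set and its encoding are invented for this example and do not correspond to any real virtual machine), the core of an interpreter can be written as a loop that dispatches on each bytecode and manipulates an operand stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyInterpreter {
    // Invented opcodes for a toy stack machine.
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

    static void run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;                               // program counter
        while (true) {
            int op = code[pc++];
            if (op == PUSH) {
                stack.push(code[pc++]);           // the operand follows the opcode
            } else if (op == ADD) {
                stack.push(stack.pop() + stack.pop());
            } else if (op == MUL) {
                stack.push(stack.pop() * stack.pop());
            } else if (op == PRINT) {
                System.out.println(stack.peek());
            } else {                              // HALT or unknown opcode
                return;
            }
        }
    }

    public static void main(String[] args) {
        // A compiler for this toy machine might translate (2 + 3) * 4 into:
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT };
        run(program);                             // prints 20
    }
}
```

A just-in-time compiler would take the same bytecode and, rather than dispatching on it repeatedly, translate frequently executed blocks into native machine code immediately before running them.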
Thousands of different programming languages have been created, mainly in the computing field. Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness.
When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language.
A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives). Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.
Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language.
It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, a different one have more lines of code, and a third utilize the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; Fortran in scientific and engineering applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.
Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:
There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.
The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.
In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use. Traditionally, programming languages have been regarded as describing computation in terms of imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great deal of research in programming languages has been aimed at blurring the distinction between a program as a set of instructions and a program as an assertion about the desired answer, which is the main feature of declarative programming. More refined paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these). Some general purpose languages were designed largely with educational goals.
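As a hedged illustration of the imperative/declarative contrast described above (a Java 8+ example of ours, not the article's):

```java
import java.util.Arrays;
import java.util.List;

public class ParadigmDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);

        // Imperative style: spell out *how* to compute the result,
        // step by step, with explicit mutable state.
        int imperativeSum = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                imperativeSum += n;
            }
        }

        // Declarative/functional style: describe *what* result is wanted
        // and leave the traversal strategy to the library.
        int declarativeSum = numbers.stream()
                                    .filter(n -> n % 2 == 0)
                                    .mapToInt(Integer::intValue)
                                    .sum();

        System.out.println(imperativeSum + " == " + declarativeSum); // 12 == 12
    }
}
```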
A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English language keywords, while a minority do not. Other languages may be classified as being deliberately esoteric or not.
The first programming languages predate the modern computer. The 19th century saw the invention of "programmable" looms and player piano scrolls, both of which implemented examples of domain-specific languages. By the beginning of the twentieth century, punch cards encoded data and directed mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church's lambda calculus and Alan Turing's Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.
In the 1940s, the first electrically powered digital computers were created. Grace Hopper, one of the first programmers of the Harvard Mark I computer and a pioneer in the field, developed the first compiler for a computer programming language around 1952. Nevertheless, the idea of a programming language existed earlier; the first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998 and 2000.
Programmers of early 1950s computers, notably UNIVAC I and IBM 701, used machine language programs, that is, the first generation language (1GL). 1GL programming was quickly superseded by similarly machine-specific, but mnemonic, second generation languages (2GL) known as assembly languages or "assembler". Later in the 1950s, assembly language programming, which had evolved to include the use of macro instructions, was followed by the development of "third generation" programming languages (3GL), such as FORTRAN, LISP, and COBOL. 3GLs are more abstract and are "portable", or at least implemented similarly on computers that do not support the same native machine code. Updated versions of all of these 3GLs are still in general use, and each has strongly influenced the development of later languages. At the end of the 1950s, the language formalized as ALGOL 60 was introduced, and most later programming languages are, in many respects, descendants of Algol. The format and use of the early programming languages was heavily influenced by the constraints of the interface.
The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use, though many aspects were refinements of ideas in the very first Third-generation programming languages:
Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.
The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it. Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level" programming languages.
The 1960s and 1970s also saw expansion of techniques that reduced the footprint of a program as well as improved productivity of the programmer and user. The card deck for an early 4GL was a lot smaller for the same functionality expressed in a 3GL deck.
The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language derived from Pascal and intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs. The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.
One important trend in language design for programming large-scale systems during the 1980s was an increased focus on the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for modular programming. Module systems were often wedded to generic programming constructs.
The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side programming, and bytecode virtual machines became popular again in commercial settings with their promise of "Write once, run anywhere" (UCSD Pascal had been popular for a time in the early 1980s). These developments were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based on the C family of programming languages.
Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as Microsoft's LINQ.
CHAPTER 11: MINORITY GROUPS IN RELATION TO MENTAL HEALTH LEGISLATION
11.1 This chapter looks at the literature relating to a range of groups where there are special considerations in relation to mental health legislation, either because of detention rates or because they are alluded to in the literature as having particular needs. These include children and adolescents, women, minority ethnic groups, older people, people with learning disabilities and people who are deaf. There is also a short section on the capacity of people suffering from anorexia nervosa to refuse treatment. The literature searches carried out did not reveal any relevant research or literature relating to some other groups, for example lesbian, gay, bisexual or transsexual (LGBT) groups, in relation to aspects of mental health legislation. Mention is made of general service provision for some of these groups, but to cover this comprehensively is outwith the scope of this review.
CHILDREN AND ADOLESCENTS
11.2 Children and adolescents relate to mental health legislation as people experiencing mental disorders themselves or as the dependants of parents or carers who are experiencing mental disorders.
11.3 Section 23 of the Mental Health (Care and Treatment) (Scotland) Act 2003 places a duty on Health Boards when admitting a person under 18 to hospital to provide ' such services and accommodation as are sufficient for the particular needs of that child or young person' for both detained and non detained patients (Scottish Executive 2003a).
11.4 A Needs Assessment Report on Child and Adolescent Mental Health (Public Health Institute of Scotland 2003) identified an urgent need for investment in the provision of specialised units in Scotland, able to offer developmentally appropriate settings for children and young people requiring residential or inpatient units. Current shortages mean that children and young people are being treated on adult wards that have not been adapted to meet the young person's needs.
Numbers of young people detained under the Mental Health (Scotland) Act 1984
11.5 Detention rates for young people have continued to rise as has concern about the inappropriateness of young people being admitted to adult wards. The Mental Welfare Commission annually reports on the number of detentions of young people in Scotland. For example, in 1990-1991 there were 51 episodes of detention of young people under 18 years old, by 1995-1996 this had risen to 82 and by 2002-2003 to 127 (Mental Welfare Commission 1996, 2003). Of the 127 young people detained in 2002-2003, only 29 (23%) were detained in child and adolescent units. The remainder were placed on adult wards or general medical wards. This excluded five admissions under the Criminal Procedures (Scotland) Act 1995 and two detentions under section 25(2) of the Mental Health (Scotland) Act 1984 (Mental Welfare Commission 2003). Table 11.1 shows the figures for 2003-2004.
Table 11.1: Episodes of detention by inpatient facility in Scotland 2003-2004 (columns: detentions under 16 years; detentions aged 16 and 17 years; all detentions under 18 years). Adapted from Mental Welfare Commission annual report 2003-2004.
11.6 As can be seen there were 135 detentions in that year, 27 for young people under the age of 16 and 108 for young people aged between 16-17. Of these only 20 were placed in adolescent units, 106 on adult wards and 9 on medical wards (Mental Welfare Commission 2004). The 2003-2004 figures include admissions to adult wards for 3 boys and 2 girls aged 16-17 years under the Criminal Procedures (Scotland) Act 1995.
11.7 The Mental Welfare Commission points out that duties placed on health boards under section 23 of the Mental Health (Care and Treatment) (Scotland) Act 2003:
'makes it imperative that the provision of services for young people is given greater attention than has been the case in the past; this includes their inpatient care'. (Mental Welfare Commission 2004)
11.8 The MWC have supported the view of the Royal College of Psychiatrists ( RCP) that young people under 16 should not be admitted to adult wards, and that 16 and 17 year olds should only be admitted to such wards under special circumstances. The RCP considered that inappropriate admissions of young people should be seen as ' an untoward critical incident' (Royal College of Psychiatrists 2002).
11.9 Since October 2002 the MWC have received information on all under-16 year olds admitted either formally or informally to adult wards. It is probable that their figures are an underestimate as they report that as yet reliable ways of collecting and notifying such data have not been established. Between October 2002 and April 2003, 7 informal admissions of under-16 year olds (4 girls and 3 boys) were notified.
11.10 The MWC report of 2000-2001 (Mental Welfare Commission 2001) said that the Commission had not hitherto been as involved in services for young people as it had for adults. The intention was to become more involved. It reported that the Scottish Health Advisory Service, the Mental Health and Well-being Support Group, the Clinical Standards Board for Scotland, the Social Work Inspectorate and the MWC wrote collectively to the Scottish Executive arguing for a national strategy for child and adolescent mental health services. The response from the Executive was deemed encouraging (Mental Welfare Commission 2001). The 2002-2003 MWC report noted that at the time of writing two of the four child and adolescent units in Scotland had closed temporarily due to staffing problems (Mental Welfare Commission 2003).
Mental Health Act Commission in England & Wales
11.11 The Tenth Biennial Report of the Mental Health Act Commission (2003) expressed concern that a monitoring system which concentrates on the needs of those detained under the Mental Health Act 1983 alone does not meet the needs of the whole population of minors subject to compulsion in hospital. In addition to noting the high proportion of young people admitted to inappropriate facilities, it highlighted survey results showing that only 32 of the 72 hospitals that responded claimed to have adequately implemented policies in relation to the admission of minors.
11.12 In its response to the draft Mental Health Bill of 2002, the Commission, among other recommendations, urged the Government to consider giving the Commission for Healthcare Audit, the Commission for Social Care Inspection and the new Children's Rights Director specific and complementary duties in respect of children with serious mental disorders.
Mental health legislation and the Children Act 1989 in England & Wales
11.13 Children and young people can be admitted to NHS facilities either under the provisions of the Mental Health Act 1983 or the Children Act 1989. In addition they may be admitted against their will, but with the consent of their parents or parent, without the use of either piece of legislation. McNamara (2002) gave an overview of both these Acts in England & Wales and their interface with common law, family law and the Human Rights Act 1998.
11.14 Potter and Evans (2004) argued that neither the current complex legislative framework in England & Wales nor the proposed changes in the Mental Health Bill can provide appropriate services whilst safeguarding children's rights. Apparent inconsistencies in the legal framework are noted. For example young people with 'Gillick competency' can consent to treatment but cannot refuse it in the face of proxy consent by someone with parental responsibility. The authors wished to see reforms that take into account more fully both the developmental needs of young people and the complex multi-agency nature of children's services.
11.15 Mears and Worrall (2001) surveyed consultant psychiatrists in child and adolescent psychiatry about their concerns in relation to the use of legislation in their specialism. The four main concerns identified in descending order of frequency mentioned were: which Act to use, the Mental Health Act 1983 or Children Act 1989; general issues with consent to treatment; issues with social services departments; and, stigma associated with use of the Mental Health Act 1983.
11.16 Subsequent research (Mears et al 2003) identified that psychiatrists' knowledge of the Mental Health Act 1983 was better than that of the Children Act 1989. Since decisions about admission were probably taken through discussion in multidisciplinary teams and with access to legal advice, it was anticipated that the knowledge of other professionals might complement that of the psychiatrists in real clinical situations, as opposed to when completing a questionnaire.
Survey evidence from England & Wales on children and adolescents in different service settings
11.17 Surveys in England & Wales have described the types of problem that patients are referred to health services with, as well as the proportions of young patients admitted under the provisions of the two Acts. However, these surveys offer an incomplete picture because of the difficulty of gathering reliable information from the wide variety of treatment settings in which young people may be placed. The more accurate descriptions of narrowly defined services for children and adolescents are however unlikely to be representative of the country as a whole.
11.18 One survey of 71 out of 80 child and adolescent in-patient services in England & Wales described the 663 young people resident on Census day, 19 October 1999 (Mears et al 2003). Some 127 (19%) had been admitted formally. Of these, 99 (78%) were under sections 2 and 3 of the Mental Health Act 1983 and 8 (6%) under section 25 of the Children Act 1989. Other sections of the Children Act applied to 55 children but these were not deemed to constitute formal admission. The four most common diagnoses for the informal patients were eating disorders (24%), mood disorders (18%), schizophrenia (10%) and conduct disorders (7%). For those patients in the 'detained' category the four most common diagnoses were schizophrenia (45%), personality disorders (16%), mood disorders (13%) and eating disorders (5%).
11.19 Calton and Arcelus (2003) analysed referrals to a general adolescent unit that accepted 12-18 year olds over a 14 month period. Of the 56 admissions 15 (26%) were admitted formally under the Mental Health Act 1983 and none under the Children Act 1989. There was a small preponderance of males to females in the 56 admissions with 30 male (54%) and 26 female (46%). The most frequent ICD diagnosis was adjustment disorder 20 (37%), which usually followed an episode of self-harm. A variety of diagnoses of psychotic disorders were given to 12 patients (21%) and anorexia nervosa to 8 (14%).
11.20 A survey of young people aged 13-17 with a home address in Greater London being treated in NHS and private psychiatric facilities both in and around London addressed issues of ethnic variation (Tolmac and Hodes 2004). From a total of 113 patients, 95 (84%) were in child and adolescent facilities while the other 18 (16%) were on adult wards. 'Black' (Black British, Black Caribbean, Black African, Black Other) patients were over represented in the population of those admitted with a diagnosis of psychosis as compared to 'Asians' (Indian, Pakistani, Bangladeshi, Asian Other) and 'Whites' (White British, White Irish, White Other). People in the 'Black' category were more likely to have been born outside the UK, have a refugee background and be detained on admission.
11.21 Although there is a strong current of opinion that it is inappropriate to admit young people to adult wards, there is also concern that young people with psychotic illnesses are not necessarily well served in a general child and adolescent unit (Calton and Arcelus 2003, Mears et al 2003). The authors urged that consideration be given to providing dedicated services for young people between 15-22 years old with psychotic illness. Mears et al (2003) pointed out that forensic and secure units often tend to admit up to the age of 21. This may create better outcomes for both the patients with psychosis and also the other in-patients in the child and adolescent units (Calton and Arcelus 2003). In Scotland, however, with a significant number of remote and rural populations, the more specialised a unit is the greater the distance is likely to be from the young person's home, thus making maintenance or re-establishment of family relationships more difficult.
11.22 Of those children and young people admitted formally most were admitted under the provisions of the Mental Health Act 1983. There is debate as to which law should be used and also if psychiatrists have sufficient knowledge of the Children Act 1989 to use it when appropriate. While there were concerns that use of the Mental Health Act 1983 may be stigmatising, official guidance from the Welsh Office stated that it provides more safeguards than the Children Act 1989 (Mears et al 2003).
11.23 Kurtz et al (1998) surveyed professionals' attitudes to young people in secure accommodation, including forensic adolescent units. They reported that there were many young people with mental health needs in secure accommodation who were not getting the treatment they needed. Only 46% of children considered by departments of child and adolescent psychiatry to be in need of a secure placement were so placed. There were also reports of young people in a variety of secure placements who were not getting their mental health needs met.
Rates of self harm amongst children and adolescents
11.24 An analysis from the national survey of the mental health of children and adolescents in Britain in 1999 (Meltzer et al 2001) presented prevalence rates of self-harm. Information was collected on 83% of a total of 12,529 children eligible for interview, resulting in data for 10,438 children and adolescents aged 5-15 in Britain. The findings suggested that, according to parents, approximately 1% of 5-10 year olds had tried to harm, hurt or kill themselves, with the rate of self harm among the sample with no mental health problems at 0.8%. The rate of self harm increased to approximately 6% in children diagnosed as having an anxiety disorder and around 7% in those who had a conduct disorder, hyperkinetic disorder or a less common mental disorder. In this age group, the prevalence of self harm was greater: in children in lone parent as opposed to two parent families; single child families; social class V families; and families living in terraced houses or maisonettes as opposed to detached or semi-detached houses; and the rates were higher in England than in Scotland or Wales.
11.25 In the 11-15 age group, approximately 2% had tried to harm, hurt or kill themselves with the highest rate of 3% amongst 13-15 year old girls. For those with no mental disorder the rate was approximately 1% and this increased for those with anxiety disorders (9%), depression (19%), a conduct disorder (12%) or a hyperkinetic disorder (8.5%). In this age group, the prevalence of self-harm was greater for children in: lone parent families; families with stepchildren; families with 5 children or more; and, families who were social sector or private renters as opposed to being owner-occupiers. The rates in Wales were higher than those in England or Scotland.
The rights of children as dependants of adults with mental illness
11.26 A review of cases taken to the European Court of Human Rights ( ECHR) (Prior 2003) by parents with a mental illness who either wish to regain custody or establish access to their children highlights an important trend. Although the ECHR upheld the decision to take children into care in the cases reviewed, authorities were criticised for not giving sufficient consideration to the changeable nature of mental illness. The 'once and for all' decision did not recognise that parental mental health could improve and children benefit from either being returned to the care of their families or of having contact maintained with the birth parent(s) whilst in foster care or adopted. A case was also cited where children were awarded compensation for damages suffered as a consequence of being left too long with their family and suffering 'inhuman and degrading treatment' (Prior 2003).
- Section 23 of the Mental Health (Care and Treatment) Act 2003 places a duty on Health Boards when admitting a person under 18 to hospital to provide ' such services and accommodation as are sufficient for the particular needs of that child or young person' for both detained and non detained patients.
- The Mental Welfare Commission reported that detention rates for young people under 18 have continued to rise from 51 in 1990-91 to 135 in 2003-4.
- A Needs Assessment Report on Child and Adolescent Mental Health identified an urgent need for investment in the provision of specialised units in Scotland. Neither the current complex legislative framework in England & Wales nor the proposed changes in the Mental Health Bill can provide appropriate services whilst safeguarding children's rights.
- The main concerns in relation to detention amongst consultant psychiatrists in child and adolescent psychiatry are: which Act to use, the Mental Health Act 1983 or Children Act 1989; general issues with consent to treatment; issues with social services departments; and stigma associated with use of the Mental Health Act 1983.
- One survey of adolescent in-patient services found that the most common diagnoses for informal patients were eating disorders (24%), mood disorders (18%), schizophrenia (10%) and conduct disorders (7%). For those patients in the 'detained' category the most common diagnoses were schizophrenia (45%), personality disorders (16%), mood disorders (13%) and eating disorders (5%).
- Although there is a strong current of opinion that it is inappropriate to admit young people to adult wards, there is also concern that young people with psychotic illnesses are not necessarily well served in a general child and adolescent unit.
11.27 In recent years there have been moves to develop strategic mental health care plans specifically for women in Britain. On the other hand, one study commissioned by the Department of Health highlighted that while some services were improving there was still little suggestion of any sustainable improvements in mental health services for women (Barnes et al 2002). In their Ninth Biennial Report, the Mental Health Act Commission (2001) noted that progress on the implementation of NHS directives on safety, dignity and privacy in mixed environments was slow. They suggested that 95% of the objectives relating to women's safety, privacy and dignity (including separate washing and toilet facilities, safe sleeping arrangements as well as general organisational arrangements) were not being met. The Commission suggested that while certain services had complied with some of the basics of the Government objectives, there was still a long way to go before a quality service could be obtained (Mental Health Act Commission 2001, 2003).
Government action in England & Wales
11.28 In the last number of years UK governments have begun to give women's mental health services priority at a policy level. In England & Wales in March 2001, the then Minister for Mental Health, John Hutton, announced the development of a women's mental health strategy. The report, Women's Mental Health: Into the Mainstream was published in October 2002 (Department of Health 2002b). Following a consultation process, implementation guidance for the strategy called Mainstreaming Gender and Women's Mental Health was published in September 2003 (as part of a broader Government initiative Delivering on Gender Equality launched by Patricia Hewitt in June of the same year) (Department of Health 2003).
Action in Scotland
11.29 In Scotland the National Mental Health Services Assessment (set up to advise on the implementation of the Mental Health (Care and Treatment) (Scotland) Act 2003) reported that forensic services for women had suffered in the past from a lack of co-ordination and planning. The report stressed that these deficiencies were being addressed and called for the new Act to put in place a full range of forensic services as a matter of urgency (Grant 2002).
11.30 The Mental Health (Care and Treatment) (Scotland) Act 2003 also includes provisions to allow mothers with post-natal depression to be admitted to hospital with their child. A Short Life Working Group was set up in May 2003 to prepare appropriate guidance to inform planning processes towards implementation of the provisions of the new Act. While the legislation specifically mentioned post-natal depression, the Working Group used the term perinatal mental illness in order to highlight a wider scope for potential new services. The Scottish Executive published guidance for those charged with developing care for new mothers experiencing mental ill health in March 2004 (Scottish Executive 2004d).
11.31 In response to the publication, the then Health Minister, Malcolm Chisholm stated that:
' the care and treatment of women who experience mental ill health after they have given birth is of the utmost importance not least for the continuity of the essential bonding between mother and child…Planning perinatal services facilities and service for mothers and their babies will form an important part of the improved services and support set out in the new legislation in addition to those services already being developed' (Scottish Executive News Release, 04/03/2004).
Women in secure settings
11.32 There has been recognition that within secure settings women have been exposed to unnecessary levels of security because services designed to meet their specific needs have not been available (Department of Health 2002b). It has been suggested that this may in part be due to a lack of secure provision for women outside of high security services (Beasley 2000). Despite the fact that many women in high security services do not warrant that level of security, a security-driven rather than clinically focussed service has been allowed to develop (Funnell 2004).
11.33 Lart et al (1999) carried out a review of available literature around women and secure psychiatric services. The findings from the review suggested key differences in the contexts and needs of female patients. It was found that women were more likely to have experienced previous psychiatric admissions and less likely to have committed serious criminal offences than men. It was also found that women were more likely to have been diagnosed with a personality disorder or borderline personality disorder than men.
Service provision for women
11.34 Although not looking specifically at all the literature on service provision, from the literature reviewed Lart et al (1999) noted that very few papers discussed the implications of specific service models in relation to their impact on women. The papers were generally supportive of mixed therapy groups and regimes and did not address women's history of physical or sexual abuse or issues relating to women as parents.
11.35 What also became clear was that a higher proportion of women in high security care in England & Wales have come from NHS psychiatric services rather than the criminal justice system. Their arrival in higher level security was because they were seen as being difficult to handle at lower level security facilities (Lart et al 1999).
11.36 It has been argued that women have been fitted into services that had been developed for men and, where services have developed, they have struggled to respond in ways that were meaningful to women. Certainly the Government's Mainstreaming Gender and Women's Mental Health (Department of Health 2003) has taken into consideration a number of problems with the mental health care system in relation to gender. A small number of studies have looked at the problems that have appeared in relation to provision of care and the specific needs of women. While there has been more acknowledgement of overt forms of abuse, it is clear that more subtle forms need to be addressed.
11.37 Aitken and Noble (2001) assessed service provisions for women who were involuntarily referred into medium and high secure care in England & Wales. They argue that many of the women in secure services are wrongly placed (see also Gorsuch 1999) and receive inappropriate forms of treatment and care. The claim here is that there is little consensus or standard approaches on how to calculate the degree of dangerousness and that there needs to be much more understanding of how risk assessments may be influenced by sexism, racism, ageism or anti-gay/anti-lesbian prejudice.
Characteristics of women in secure settings
11.38 On the basis that new therapeutic treatments for women are needed in the future, Coid et al (2000a,b) attempted to find out if identifiable sub-groups of women patients existed in secure forensic psychiatry services. Using cluster analysis they assessed the case notes of 471 women admitted to three special hospitals and seven regional secure services over a seven year period (1988-1994). Analysis resulted in a 7-cluster solution. Women with personality disorders were clustered into three groups, each with different problems and in need of different levels of security.
11.39 Cluster I (11%) was characterised by anti-social personality disorder (with more than half showing co-morbid personality disorder), Cluster II (21%) (Borderline Personality Disorders), Cluster III (10%) (Mania/Hypomania), Cluster IV (34%) (Schizophrenia/Paranoid Psychosis), Cluster V (8%) (Other Personality Disorder), Cluster VI (11%) (Depression) and Cluster VII (5%) (Organic Brain Syndrome). The categories showed up major differences in histories, criminal activity, additional diagnoses of mental disorder over lifetimes and different pathways into secure care.
11.40 The authors concluded that it was unclear whether specialist services could be developed to manage all women in any single category without high-security facilities. They expected that those diagnosed with anti-social personality disorders would continue to need high security. Personality disorders rate highly for women prisoners yet few receive treatment. It is argued that research needs to be done to assess which women with personality disorders might benefit from treatment and therefore be transferred to hospital. It was also clear that a high proportion of women with major mental illness (Clusters III, V and VII) were being managed in medium security facilities. As such an assessment to evaluate the need for high security for these groups of women may be needed.
Mentally disordered offenders
11.41 Gorsuch (1999) has argued that there are groups within the mental health system who are being poorly served compared to others. One such group, it was suggested, is disturbed offenders. Bland et al (1999) also stated that there have been few academic or clinical studies of mentally disordered females, which might be because the relatively small numbers involved make the group invisible to some extent.
11.42 Bland et al (1999) carried out a case note based study of the 87 women patients detained in Broadmoor hospital during the first 6 months of 1994. Their findings suggested that women in special hospitals share many of the same characteristics as women in prisons. Their histories showed a high level of social deprivation, high rates of physical and sexual abuse, self-harm and behavioural disturbances. Few had functioned effectively in relation to employment, family or intimate relationships before being detained in Broadmoor.
11.43 The authors claim that women in special hospitals represent a 'minority underclass'. They point out that while between 35-50% of women patients in special hospitals do not need high security, medium secure units may not be safe for the women themselves, because of the high proportion of men there and lack of women only space available. It is also suggested that the diagnosis of a personality disorder as opposed to one of mental illness leaves women less well catered for (see also Gorsuch 1999) and more likely to spend longer periods in high security.
11.44 Gorsuch (1999) undertook a study of ten highly disturbed women in the psychiatric ward of Holloway prison. The aim of the study was to find out why these women had found it difficult to obtain beds in NHS secure units. The women, who had been diagnosed with a personality disorder and committed serious crimes were interviewed and asked to complete the Millon Clinical Multiaxial Inventory II ( MCMI-II).
11.45 These women did not fit easily into the medical model, which the author suggested dominates forensic psychiatry. They can be rejected by services on the grounds of treatability because there have been very few controlled studies which provide evidence of what treatments are effective. In interview, the women themselves consistently highlighted the need for social contact, a confiding relationship, autonomy and control as well as the need for a more supportive environment than the prison unit was offering them.
Women with children
11.46 Another important issue rests upon the lack of research into the numbers of women with psychotic disorders who have children and are involved in childcare. This is viewed as an important area, as bringing up a family may bring with it specific problems and needs for women that are not addressed by mental health services.
11.47 One study carried out by Howard et al (2001) took an epidemiologically representative population of women with psychotic disorders, using a descriptive analysis and two case-control studies, to examine the impact of having children on them (as well as the impact of having children cared for by social services).
11.48 The sample consisted of 246 women diagnosed as having a psychotic disorder from two types of community mental health services in South London. Of these 155 (63%) had children. The median age of the women was 43 years (range 16-89). The results suggest there are no clinical differences between women with or without children, as mothers appear to be as disabled, have similar diagnoses and severe illnesses as women without children. Indeed, the study shows that most of the women lived in difficult circumstances with low incomes and had small social or support networks (although mothers did have more contact with relatives and 'acquaintances' such as health visitors or social workers than other women).
11.49 In this group, 10% of the women with children had a history of having their child cared for by social services. Having had a child in the care of Social Services was associated particularly with detention under the Mental Health Act 1983, younger age, a forensic history and being Black African. While the authors make clear that their data was not detailed enough to assess whether children being in care was a consequence of women being detained or of parenting difficulties, they suggest that childcare may become a problem if women relapse, which might explain the link between having been detained and having had children taken into care (see paragraph 11.26 on the rights of children).
11.50 Although the numbers were small it was the case that Black African women (or first generation immigrants) were more likely to have children who had been taken into care. While detention under the Mental Health Act 1983 or marital status and living circumstances did not fully explain this (although detention was a contributing factor), the authors refer to research which found that Black mothers were more likely to be referred by the police or health services for reasons of mental health than white mothers. They conclude that future research may confirm their findings that Black families with a parent with psychosis were more likely to have a child placed in care.
Findings from the USA
11.51 Mowbray et al (2000) argued that mental health services have paid little attention to the parenting problems of mothers with a mental illness. They interviewed 375 mothers with serious mental illness, recruited from community mental health centres and psychiatric inpatients in Detroit, USA. The aim of the study was to highlight the women's particular needs and improve service provisions. The women involved in the study all had a psychiatric disorder (including mainly diagnoses of major depression, bipolar disorder, schizophrenia, or schizophrenia-related disorder). The participants were 60% African American, 29% Caucasian, 8% Hispanic, and 3% other race/ethnic category. All were aged between 18-55 years and all had at least one child between 4-16 years for whom they had childcare duties. The study found that these women were poorer than people living in the same census tract. The mothers were of the opinion that mental health services helped them less with their problems as parents than they did with their other problems.
Training staff to work with women
11.52 One of the main conclusions from a literature review (Lart et al 1999) was a call for more training that recognised the very specific problems relating to women within the mental health system.
11.53 The lack of training in gender-specific issues prompted the Department of Health and the organisation Women in Secure Hospitals (WISH) to develop the 'Gender Training Initiative'. The initiative was based on a study by Scott and Parry-Crooke (2001) that developed and piloted a training course specifically aimed at the needs of staff working with women in secure psychiatric settings.
11.54 Underlying the study was ' the recognition that women in secure psychiatric services are not a 'special' population in some distinct way. Rather they overlap with and often combine experiences of a number of different 'groups' of women: adult survivors of child sexual abuse, survivors of domestic violence and other trauma, women who self-harm, women in prison, women living in poverty and women users of mental health services' (Scott and Parry-Crooke 2001).
11.55 The investigation involved conducting 60 semi-structured interviews with a cross-section of staff in six secure settings (including nurses, doctors, psychologists, occupational therapists and social workers) and a training needs assessment survey given to members of one team in each of the six sites. A confidential questionnaire was also distributed to 154 staff members in the teams. This asked for views on how gender, poverty, race and sexuality were addressed in training and how issues of power and inequality were addressed in the workplace. Respondents were also asked if they felt there were gaps in their understanding in relation to the needs of women and to identify their training needs for working with women.
11.56 Only 10 (17%) of the respondents had received information on inequality in their initial training, while 18 (30%) said that further training (optional and generally comprised of conferences and seminars) had included inequality. Only three of the respondents (clinical psychologists) had received training in working with survivors of child abuse.
11.57 When asked what areas needed to be developed, the respondents highlighted a need for more knowledge of sexual abuse, dealing with the impact of (early) trauma and self-harm, as well as more understanding of gender issues. It was interesting that discussions of gender and inequality in this study seemed to be more common in informal peer discussions and less so in line management or supervision situations. It was also made clear that issues such as a lack of time for debriefing, the 'ad-hoc' nature of supervision, and a lack of input by specialists all impacted on how well staff were carrying out their jobs.
11.58 From the findings of the study a pilot course was designed, tested and developed with successful results. It is hoped that the setting up of this training programme has provided a clearer focus on how to deal with specific gender issues with women in secure settings. The authors point out that staff training is a crucial tool in ensuring good practice and suggest that the scheme may also be developed for staff in acute services and community contexts.
- In Scotland a National Mental Health Services Assessment report allowed that forensic services for women had suffered in the past from a lack of co-ordination and planning.
- The Mental Health (Care and Treatment) (Scotland) Act 2003 includes provisions to allow mothers with post-natal depression to be admitted to hospital with their child.
- It has been recognised that within secure settings women have been exposed to unnecessary levels of security because services designed to meet their specific needs have not been available.
- A higher proportion of women in high security care in England & Wales have come from NHS psychiatric services rather than the criminal justice system.
- There have been few academic or clinical studies of mentally disordered females, which may be because the relatively small numbers involved make the group somewhat invisible.
- Women in special hospitals share many of the same characteristics as women in prisons. Their histories showed a high level of social deprivation, high rates of physical and sexual abuse, self-harm and behavioural disturbances.
- There is a lack of research into the numbers of women with psychotic disorders who have children and are involved in childcare.
- Having had a child in the care of Social Services was associated particularly with detention under the Mental Health Act 1983, younger age, a forensic history and being Black African.
- Mental health professionals highlighted a need for more knowledge of sexual abuse, dealing with the impact of (early) trauma and self-harm, as well as more understanding of gender issues, and pointed to staff training as a crucial tool in ensuring good practice.
MINORITY ETHNIC GROUPS
11.59 The 2001 Census found that approximately 2% of the population of Scotland reported themselves as being of Asian or Black ethnicity. The majority lived in large cities and are reported as making up 5.5% of the population in Glasgow, with the highest percentage being of Asian origin (General Register Office for Scotland 2001, Grant 2004). Irish-born people and those of Irish parentage represent 4.6% of the population of mainland Britain, making them the largest migrant population in Western Europe (Bracken et al 1998). The web-site for the most recent Census in Scotland http://www.gro-Scotland.gov.uk/grosweb/grosweb.nsf/pages/censushm does not provide public access to data on the ethnicity of the population, whereas this is available in the case of the 2001 census for England & Wales http://www.statistics.gov.uk/census2001.
11.60 The report from which this census information for Scotland is derived also provided a breakdown of the age structure of minority ethnic groups, noting that 7% of people in minority ethnic communities in Scotland are over 60 and 56% are under the age of 30, compared with 21% over 60 and 36% under 30 for white communities (Grant 2004).
Practice in collecting data on ethnicity
11.61 A number of studies designed to inform good practice in collecting mental health data by minority ethnic status, and to inform the 2001 Census ethnicity questions, illustrate key considerations in the collection of data for analysis by ethnicity (Halpern and Nazroo 1999, Aspinall 2000, 2003, Bracken and O'Sullivan 2001). These studies highlight the importance of capturing detailed information encompassing religion, distinct area of origin, identification with the nation of residence (as in the bi-cultural term Asian-Scottish), first language, English literacy and age at migration, as well as recent and current postcode information.
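To make these considerations concrete, the sketch below shows how the recommended descriptors might be held in a single monitoring record. It is purely illustrative: the field names are our own assumptions rather than a schema used by any of the cited studies, and Python is used simply as convenient shorthand.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EthnicityMonitoringRecord:
    # Illustrative record; every field name here is hypothetical.
    census_category: str                       # category from the relevant national Census
    religion: Optional[str] = None
    area_of_origin: Optional[str] = None       # distinct area, not a pan-ethnic grouping
    bicultural_identity: Optional[str] = None  # e.g. "Asian-Scottish"
    first_language: Optional[str] = None
    english_literacy: Optional[str] = None
    age_at_migration: Optional[int] = None     # None if born in the country of residence
    current_postcode: Optional[str] = None
    recent_postcode: Optional[str] = None

Recording each descriptor separately, rather than collapsing them into a single pan-ethnic label, is what permits the kind of detailed, area-specific analysis these studies call for.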
11.62 A study that used a large dataset assembled from 26 Local Authority and NHS trust areas in England, with details on detentions of patients under the Mental Health Act 1983, reported that information about ethnicity was collected and categorised in a variety of ways across datasets. Because personal identifiers were not available, repeated admissions of specific individuals could not be identified, so groups more at risk would be over-represented in any analysis (Audini and Lelliott 2002). All of the datasets were found to have missing data. The authors recommended that monitoring of the Mental Health Act 1983 should be revised to allow for data collection at the individual level. This study also reported a wide variation in the classification of ethnicity between datasets and, implicitly, over time within them; the authors recoded ethnicity to one of four categories from the 4-16 categories provided in the original datasets.
11.63 Comment on this article recommended an overhaul of the ways in which such routine data are collected, centralised and analysed, through a programme spanning health and social care (Harrison 2002). The author noted that the introduction of a new Mental Health Act in England & Wales might offer an opportunity to make these important changes for future monitoring of trends and activity.
11.64 Elsewhere, observed inconsistencies in the understanding and use of the term 'Asian' in studies and surveys (for example, whether it encompasses or excludes Chinese or Indian sub-continent subjects) highlight the importance of more detailed, area-specific enquiry and the inadvisability of grouping under pan-Asian terms (Aspinall 2003).
11.65 In the section on ethnicity, the 2001 Census surveys for England & Wales and for Scotland both included an option for Irish within the broader White category, whereas the previous 1991 Censuses had "Irish traveller" as the only option specific to those of Irish origin or parentage (General Register Office for Scotland 1991). Regarding respondents to both of the British mainland censuses, it has been reported that the majority of those of Irish parentage did not identify themselves as such, and in interviews with 72 people of Irish descent, all those born in Scotland self-identified as Scottish (Walls and Williams 2004). Recently, studies and reports dedicated to the mental health of minority ethnic groups have included collection of data on this background when discussing and reporting on their experience of mental health and psychiatric services (Erens et al 2001, National Centre for Social Research 2002, Walls and Williams 2004).
Policies on recording ethnicity
11.66 Journal correspondence questioning the use of 'outdated' hospital admission data (in an article focussing on those of Irish origin) commented that mandatory routine collection of ethnicity data was not introduced for NHS in-patients until April 1995 (Aspinall 2000). This requirement obliges trusts to use the categories used within the national Census survey. In visits to 104 in-patient units in 1999 the Mental Health Act Commission found that only half of them had written policies on the recording of ethnicity (Warner et al 2000).
Prevalence of psychosis and detention under the Mental Health Act 1983
11.67 A recent study of ethnic minority psychiatric illness rates in the community (EMPIRIC) conducted a large quantitative survey of ethnic minorities, drawing its sample from the Health Survey for England (Erens et al 2001, National Centre for Social Research 2002). The study measured psychotic symptoms using the Psychosis Screening Questionnaire (PSQ). Among the White group, 6% scored positively on at least one of the PSQ questions. This compared with a lower rate for the Bangladeshi group (5%) and higher rates for the Irish (8%), Indian (9%), Pakistani (10%) and African-Caribbean (12%) groups. It is noticeable that the prevalence rates of psychosis found in this general population study of ethnic groups are lower than rates of psychiatric contact and treatment.
11.68 Using datasets to examine detentions under Part II of the Mental Health Act 1983 in England, another study reported that Black people were over six times more likely to be detained than White people. In the case of Black men this rose to an eight-fold increase, while Asian people were 65% more frequently detained under Part II (Audini and Lelliott 2002). The major limitation of the study was that, because of the way the datasets were presented, the researchers were unable to identify repeat detentions attributable to individuals. The study analysed 31,702 incidences of Part II detention over the period 1988-99 from areas with a combined population of 9.2 million.
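The limitation described above, counting detention episodes rather than people, can be illustrated with a small worked example. The figures below are invented for illustration only and are not taken from Audini and Lelliott (2002); the point is simply that when one group experiences more repeat detentions per person, episode-based rate ratios overstate the person-based ratio.

# Toy illustration with hypothetical numbers: episode counts versus person counts.
def rate_per_100k(count, population):
    return 100_000 * count / population

pop = {"A": 9_000_000, "B": 200_000}        # hypothetical group populations
episodes = {"A": 27_000, "B": 3_000}        # detention episodes recorded in the dataset
repeats = {"A": 1.2, "B": 1.8}              # assumed average episodes per detained person

persons = {g: episodes[g] / repeats[g] for g in pop}

episode_ratio = rate_per_100k(episodes["B"], pop["B"]) / rate_per_100k(episodes["A"], pop["A"])
person_ratio = rate_per_100k(persons["B"], pop["B"]) / rate_per_100k(persons["A"], pop["A"])

print(f"episode-based rate ratio: {episode_ratio:.1f}")  # 5.0
print(f"person-based rate ratio: {person_ratio:.1f}")    # 3.3

Without personal identifiers the person-level denominator cannot be recovered from the data, which is why the authors recommend collection at the individual level.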
11.69 Previous studies have placed the figure for detentions of Black people at closer to three times that for White people, although one article that reported this statistic included Asian people in its definition of Black (Keating et al 2003).
11.70 An update on current literature relating to Chinese mental health reported that returns to the Mental Health Act Commission showed Chinese people accounting for 0.3% of detentions in the period 1996-98 (Cowan 2001). This percentage is exactly the same as the proportion of Chinese people in the England & Wales national population, 40% of whom reside in Greater London. The author concluded that evidence about the prevalence of mental disorder amongst Chinese people was extremely limited.
11.71 A study that conducted a retrospective case-note analysis of hospital records and clinical notes of restricted hospital order patients conditionally discharged from a large medium secure unit in England between 1987 and 2000 compared data on those of Black African-Caribbean race and origin with all other ethnic categories (Riordan et al 2004). Most of the subjects in both groups had a diagnosis of schizophrenia and there was an over-representation of Black people (36%) as compared to the general population.
11.72 Although the study's findings were inconclusive, the authors suggested that the imposition of the hospital order may have been based on the number of previous offences in the Black group rather than on the severity of the index offence, as was more often the case in the non-Black group, indicating that the threshold for imposing the order may differ between the groups. The article cited evidence from previous studies of greater police attention and a disproportionate likelihood of Black people going to Crown Court trial, and postulated that the study may support the view that Black people are seen as being more dangerous.
Rates of contact with psychiatric services
11.73 The consensus from extensive examinations of the literature on psychotic illness is that rates of contact with psychiatric services are higher for some minority ethnic groups, most notably African-Caribbean groups (3-6 times higher than for White groups). However, it would still be unwise to conclude from this that the prevalence of the illness itself is higher in these groups (Sharpley et al 2001, Chakraborty and MacKenzie 2002).
Findings from Mental Welfare Commission for Scotland & Mental Health Act Commission visits
11.74 In the course of their visiting programme of 2001-2002, the Mental Welfare Commission identified a deficiency of care with regard to a Punjabi speaker with very limited English. Amongst the many inadequacies in care highlighted in the subsequent investigation were failures to recognise the need for and provide appropriate interpretation services, and to consider access to advocacy and befriending. The need for specific communication of ongoing care (including discharge planning), for informed consent for treatment and for general day-to-day consideration of specific cultural and language needs was highlighted, and recommendations to the Trust and Local Authority were made accordingly (Mental Welfare Commission 2002).
11.75 The MWC continued consultation processes in this area and adopted racial and cultural issues as a special focus for their visiting programme during 2003-2004 (Mental Welfare Commission 2003). A dedicated report, Race and Culture themed Visiting Programme: gaps in the service provision and ways forward, summarises some of the main issues facing people from minority ethnic communities with mental health problems, gaps in service provision and ways forward (Mental Welfare Commission 2005).
11.76 An article detailing visits made to 104 inpatient units on one day in 1999 on behalf of the Mental Health Act Commission reported similar concerns about the use of interpreting services (Warner et al 2000). Although three-quarters of the units used trained interpreters, only 31 of the 56 patients who were not fluent in English had ever had access to one, while two-thirds of the wards had used patients' friends or relatives for interpreting, raising serious concerns about patient confidentiality and objective decision-making by staff.
Professional perceptions and practice towards service provision
11.77 An audit of research on minority ethnic issues identified common issues particular to mental health, which included GPs' inability to identify need, a lack of awareness of available services, and a consequent failure to make appropriate referrals, against a background of relatively low uptake of services (such as counselling and befriending) by minority ethnic groups (Netto et al 2001).
11.78 A recent assessment of mental health services postulated that professionals' assumptions about the existence of considerable community support, combined with evidence of low demand, lead to underestimates of the need for, and provision of, services for people of minority ethnic backgrounds (Grant 2004). It has been suggested elsewhere that this stereotypical attitude about caring communities significantly disadvantages carers within minority ethnic groups in gaining equitable support (Keating et al 2003).
11.79 Visits made by the Mental Health Act Commission to 104 inpatient units found that 11% of patients had reported racial harassment and that three-quarters of the units visited had no policy for dealing with this; half of the units did not have policies on treatment and detention issues specific to minority ethnic groups, and two-thirds had no training policy on anti-discriminatory practice (Warner et al 2000).
11.80 Refugees and asylum seekers have been experiencing particular difficulties in accessing services. It has been reported that professionals (and in particular GPs) have often incorrectly asserted that they are not entitled to services, or have simply refused them (Keating et al 2003). This report concluded that NHS staff lacked knowledge of refugees' rights to health care and that refugees' access to translation services was especially limited.
Possible influencing factors
11.81 A study of data from a national community survey of England & Wales compared the responses from 5,196 minority ethnic and 2,867 White subjects to test the hypothesis that ethnic group concentration is associated with lower levels of reported psychiatric symptoms (Halpern and Nazroo 1999). Whilst this was broadly supported for both psychotic and neurotic symptoms, the effect was found to be modest in size and, in the case of the Pakistani sample, was reversed. Both reduced exposure to local prejudice and increased social support were identified from responses as possible causes of the protective effects of same ethnic group concentration.
11.82 Using the Psychosis Screening Questionnaire to screen people in the community, the EMPIRIC study found a socio-economic gradient in the risk of psychosis in African-Caribbean, White and Irish interviewees, but this was not found in Bangladeshi or Indian participants and was reversed in those of Pakistani origin (National Centre for Social Research 2002).
11.83 In a study of case records of people with schizophrenia in electoral wards of Camberwell in London over a ten-year period (1988-1997, n=222), 57% were non-White as defined by self-assigned categories at Census, predominantly Caribbean (40%) and African (30%). The study found that the lower the proportion of non-White to White population in an area, the higher the incidence of schizophrenia in those ethnic minorities (Boydell et al 2001). The possibility of selection bias, whereby non-Whites might choose to isolate themselves from their ethnic community, was considered unlikely because the area under study was predominantly local authority housing, which limited the population's spontaneous mobility.
11.84 In addressing the argument that western psychiatry under-identifies mental illness in subjects of lower acculturation, one study found that poor English and older age at migration were strongly associated with lower reported psychotic symptom levels (Halpern and Nazroo 1999).
The impact of migration
11.85 The literature reviewed is divided as to whether migrational factors have any bearing on increased treatment rates and the over-representation of psychotic illness for some minority ethnic groups, and some earlier studies are reported as finding positive effects on mental health outcomes (Sharpley et al 2001; National Centre for Social Research 2002). The EMPIRIC community study reported its findings on this matter to be consistent with a similar and comparably extensive community study, and found that variations in the prevalence of psychosis in relation to migrational status differed between ethnic groups. The author commented that it was difficult to understand why this should happen if it were a true effect (National Centre for Social Research 2002).
Issues around rates of psychosis amongst African-Caribbean groups
11.86 A review of the current hypotheses for understanding the (apparent) excess of psychoses among the African-Caribbean population in England discussed a range of possible contributory factors (Mental Welfare Commission 2003). This review concluded that the threshold for a diagnosis of schizophrenia may be set lower for this group, as hallucinations and paranoid ideas may be more common among those of African-Caribbean origin.
11.87 Several studies and items of correspondence discussed the possible existence and influence of institutional racism (Minnis et al 2001, Spector 2001, Sharpley et al 2001, Chakraborty and MacKenzie 2002, Eagles 2002, Freeman 2002). Although there would seem to be a strong inclination to support this as an influence on high rates of psychosis and treatment, it was not proven, reportedly due to a lack of rigour in research that failed to overcome the many possible confounding factors.
Suicide and self-harm
11.88 A national clinical survey of suicide reported that minority ethnic patients were more likely than those of White ethnic background to be unemployed, to have a history of recent non-compliance and violence, and to be detained under the Mental Health Act 1983 at the time of their suicide, although this did not translate into relatively closer supervisory care prior to the incident (Hunt et al 2003). Of those deaths viewed by respondents as preventable, contact with the patient's family was identified as a measure that might have reduced risk in 33% of Asian cases, compared with 16% of White cases.
11.89 It has been indicated that South Asian women in Britain have high rates of suicide. Burman et al (2002) carried out an investigation using interviews with service managers, discussion groups with service providers, community groups and survivors of attempted suicide or self-harm, to assess the available services and outline potential policy changes to ensure adequate provision for this population. The study highlighted the need for statutory and voluntary sector organisations to undertake gender-sensitive anti-racist work in relation to suicide and self-harm services for South Asian women. The study claims that there is a lack of consideration of the needs of this group and a 'privileging' of race over gender in commissioning and service provision.
11.90 An article concerning the lack of focus on Irish migrants in British health research reported that their suicide rate in the period 1988-1992 was 53% higher than the native-born rate, making it the highest recorded rate for any ethnic group (Bracken and O'Sullivan 2001). The authors commented that this trend persisted into second-generation descendants and that this was unusual in the context of research into migrant mortality. It was noted in an article about suicide in Irish migrants that Scottish migrants also had high rates in migrant studies. The author concluded that the similarity in findings between these groups with regard to social class and marital status indicated that further research exploring relative risks within cultural context was needed (Aspinall 2002).
Irish mental health
11.91 A study into Irish Catholic ill health on the west coast of Scotland investigated links between ethnicity, religion and health among people of Irish Catholic, Scottish Catholic, Scottish Protestant and Irish Protestant backgrounds (Walls and Williams 2004). The investigators identified serious health problems, in particular unmanageable and unacceptable levels of stress, relating particularly to Catholics of Irish origin who were mainly middle-class men. They concluded that theories of institutional sectarianism provided the hypothesis that best explained their data.
11.92 An article dedicated to the Irish dimension reported that Irish rates of hospitalisation in England for all mental health problems were far in excess of those for the native-born population. Irish rates for a diagnosis of schizophrenia were second only to those of African-Caribbean ethnicity, although Irish overall rates of psychiatric hospitalisation were far higher. This finding was echoed in another study, which looked at the population of Haringey in London, as detailed in a recent report on minority ethnic groups and mental health in London (Bracken et al 1998, Keating et al 2002). The rate of admission to hospital in England with a diagnosis of schizophrenia for those born in the Irish Republic was reported as nearly three times that for those born in England, although the incidence of schizophrenia in Ireland has been found to be similar to that found elsewhere (Bracken et al 1998). This article attracted considerable correspondence; one item criticised the use of data collected in 1971 and 1981 (Aspinall 2000), while another suggested that socio-economic and migrational factors, rather than ethnic factors, might account for the article's findings (Sandford 1998).
- The literature reviewed is divided as to whether migrational factors have any bearing on the increased treatment rates and over-representation of psychotic illness for some minority ethnic groups.
OLDER PEOPLE
11.93 It is estimated that 18% of the general population in the UK are of pensionable age, a figure that may grow to 20% by 2025. There are a number of concerns about the mental health status of this part of the population. Reports suggest that between 4% and 23% will have depression (Seymour and Gale 2004), while 1 in 20 people over the age of 65 are affected by dementia, rising to 1 in 4 over the age of 85 (Audit Commission 2000). Ten to fifteen percent of older people who abuse alcohol can become depressed and are at a higher risk of suicide. It was also found that many older people were using prescribed medications, some of which were being taken inappropriately (Mental Health Foundation Statistics at http://mentalhealth.org.uk).
Service provision for older people
11.94 It has been documented that services for older people in Britain need to be improved (Audit Commission 2000). In January 2000 the Audit Commission published a report on mental health services for older people called Forget Me Not, which was developed and updated in 2002 (Audit Commission 2002). The report highlighted a number of areas where services needed to be developed in England. Two-fifths of GPs were reluctant to diagnose dementia early and did not use protocols to help in the diagnosis of dementia or depression in older people. Over one third of GPs felt that they did not have easy access to specialist advice or that they had not received enough training on dementia.
11.95 The Commission also pointed out that specialist teams for older people with mental health problems were fully available in less than half of all areas and often did not have all of the recommended core team members. Only one third of areas had agreed assessment and care management procedures, while a small number had compatible IT systems. The audit also found that assessment and short-term treatment services in day hospitals were available in less than half of the areas, while over one third of carers reported having difficulties getting respite care (Audit Commission 2002).
11.96 The need to promote mental health services for older people has been established in the National Service Framework for Older People in England. Standard Seven in particular (Mental Health in Older People) establishes the need to ensure that older people who have mental health problems are able to access integrated mental health services for effective diagnosis, treatment and care (Department of Health 2001a).
The Mental Health (Care and Treatment) (Scotland) Act 2003
11.97 In Scotland, the Mental Health (Care and Treatment) (Scotland) Act 2003 sets out a clear framework for the care and treatment of people with mental illness that should benefit older people. The Scottish Executive set up an expert group to look specifically at the health needs of older people, including their mental health needs. The report of the expert group, Adding Life to Years (Scottish Executive 2002) made recommendations to improve the detection and treatment of older people with depression and improve services for older people with dementia. The first progress report Adding Years to Life Annual Report 2002-2003 (Scottish Executive 2003b) highlighted extra spending and new initiatives from the Executive, including the modernisation of primary care, and the role being given to Community Health Partnerships in the development of co-ordinated mental health services.
11.98 The literature in this area was very limited, centring mainly on the detention rates for older patients and offenders.
Detention rates for older patients
11.99 Salib et al (2000) suggested that the emergency detention of older patients receives little attention in the research literature. They carried out a review to examine the use of emergency detention for elderly psychiatric patients (compared to younger patients) and to assess what determined outcome. Of 1,027 applications of section 5(2) of the Mental Health Act 1983 (under which voluntary patients can be detained by a doctor pending further assessment) implemented in Winwick Hospital (North Cheshire) between 1985 and 1997, a total of 61 were applied to elderly patients. The mean age of these patients was 73 (range 65-90 years); 37% were male and 63% female.
11.100 The main characteristics of older patients detained under section 5(2) appeared similar to those of inpatients aged less than 65 detained during the same period. The reasons given for detention of older patients included aggressive behaviour towards others (16%), suicidal threats (33%) and acute psychosis (44%), while the remainder of the cases were poorly documented.
11.101 Of the 61 older inpatients, 46% regained informal status while 54% were moved onto longer-term treatment orders. The most common reasons for conversion to section 2 (admission for assessment for up to 28 days) or section 3 (admission for treatment for up to 6 months initially) were: a clinical diagnosis of functional mental illness; duration in hospital of more than 48 hours; or section 5(2) having been preceded by section 5(4), under which a voluntary patient can be detained using the nurses' holding power. Although the difference was not statistically significant, female patients appeared to have had their emergency detention order converted to other sections more frequently than males.
11.102 The authors suggest that the high rate of non-conversion of emergency detention in older people (43%) may be because section 5(2) was being used to control isolated incidents of disturbed behaviour in normally co-operative patients. They also point out that the positive association between the prior use of section 5(4) and the conversion rate of section 5(2) is interesting and might warrant more investigation.
11.103 Research on older people in secure forensic psychiatry services suggests that some admissions reflect an absence of more suitable provision for older patients at a lower level of security. Coid et al (2002) studied admissions to medium and high security from 7 of 14 health regions in England over a seven-year period between 1988 and 1994. Less than 2% of admissions were people over 60 years (with less than 1% being over 65). Of 61 admissions of older patients to secure forensic psychiatry services, 54 followed criminal charges or convictions while 7 were transfers from another hospital or the community (following non-criminalised behavioural disorder). Only 3% of the older group were admissions to a high security hospital.
11.104 Offences which led to admission for older people included acquisitive offences, homicide, less serious violent offences and criminal damage. The most prevalent diagnoses for this group were depressive illness, delusional disorders and dementia. Patients over 60 years old had fewer convictions than younger patients and were generally older when first admitted to psychiatric hospitals (which usually coincided with their first appearance in court). The offender patients in this study were highly selected, and it is made clear that they do not reflect documented patterns of offending for older people in England & Wales (it is suggested that such offenders may have been precluded from consideration for psychiatric hospital admission or may have been deemed 'untreatable').
11.105 Convictions for homicide were a prominent element in the older group in this study, and Coid et al (2002) suggest that these homicides and their associations are heterogeneous. A quarter had been transferred from a special hospital to medium security for rehabilitation, having committed their offences many years earlier; the majority, though, had been admitted directly from secure services. The authors highlight that they were unable to explain why older patients were admitted to medium or high security, questioning whether this was an accurate assessment of their potential danger to the public or was due to a lack of available places at lower security and inadequate provision for longer-term care in psychiatric hospitals.
11.106 In order to assess changes in the management of incapacitated older people, Kearney and Treloar (2000) carried out a postal audit of practices in the South East Thames Region before and after the Bournewood judgements (a case in 1998 in which the House of Lords overruled a Court of Appeal decision that hospitals and nursing homes should not be allowed to detain vulnerable patients without their consent). The results of the study show a trebling in the rate at which older incapacitated patients were admitted under section prior to the House of Lords ruling. Following the ruling, the detention rate appeared to have returned to the pattern set prior to the Court of Appeal ruling. The authors highlight a continued need for proper protections to be put into place for this vulnerable group.
- It has been documented that the services for older people need to be improved.
- Two-fifths of GPs were reluctant to diagnose dementia early and did not use protocols to help in the diagnosis of dementia or depression in older people. Over one third of GPs felt that they did not have easy access to specialist advice or that they had not received enough training on dementia.
- In Scotland, the Mental Health (Care and Treatment) (Scotland) Act 2003 sets out a clear framework for the care and treatment of people with mental illness that should benefit older people.
- Emergency detention of older patients receives little attention.
- Research on older people in secure forensic psychiatry services suggests that some admissions reflect an absence of more suitable provision for elderly patients at a lower level of security.
PEOPLE WITH LEARNING DISABILITIES
11.107 While offering some general information, this section looks specifically at literature relating to people with a learning disability who have a mental illness. The social and health care needs of people with learning disabilities have started to be given a higher priority at policy level than in the past. The publication of the White Paper Valuing People: A New Strategy for Learning Disability in the 21st Century (Department of Health 2001b) was the first White Paper on learning disability in England for thirty years. In Scotland, a review of learning disability services, The Same as You? A review of services for people with learning disabilities (Scottish Executive 2000b), was the first policy initiative for over twenty years.
Statistics on the number of people with learning disabilities
11.108 Statistics on the total number of people with learning disabilities in the UK are not clear. Estimates suggest that there are 580,000-1,750,000 people with mild learning disabilities and 230,000-350,000 with severe learning disabilities. The incidence of Down's syndrome is 1 in 6000. Males are more likely than females to have both mild and severe learning disabilities. In general, people with learning disabilities have a high level of unrecognised illness as well as reduced rates of access to health care (http://www.learningdisabilities.org.uk).
11.109 In Scotland, while no accurate figures are available, it is estimated that there are about 120,000 people who have a learning disability. About 20 people in every 1,000 have a mild to moderate learning disability and 3-4 in every 1,000 have a severe or profound disability (NHS Health Scotland 2004). A National Mental Health Services Assessment: Towards Implementation of the Mental Health (Care and Treatment) (Scotland) Act 2003 (Grant 2004) pointed to research which suggested there was an increasing number of older people with learning disabilities, that people were moving away from home to receive treatment (because of a lack of appropriate services), and that hospital closure programmes had led to some parts of the country having higher numbers of people with learning disabilities than would be expected.
Learning disability and mental health
11.110 It is estimated that people with a learning disability have a high incidence of mental illness, with a lifetime prevalence of 50% (NHS Health Scotland 2004). The risk of children or young people with a learning disability experiencing mental illness is 30-40% higher, and for adults 40-50% higher, than in the general population. Older people with learning disabilities have more mental health problems, particularly people with Down's syndrome, who may develop early-onset dementia. There has also been an annual increase of about 1% in the prevalence of people with learning disabilities over the last 35 years, which is set to continue over the next ten years (Scottish Executive 2000b).
11.111 The NHS Health Scotland Health Needs Assessment Report: People with Learning Disabilities in Scotland (NHS Health Scotland 2004) pointed to some of the mental health and behavioural problems of people with learning disabilities. In a review of research, the report cites studies which show that genetic causes of some learning disabilities are associated with some forms of mental illness. For example, Down's syndrome is associated with depression and dementia, Prader-Willi syndrome is associated with an affective psychosis, and velo-cardio-facial syndrome is associated with psychosis.
11.112 In addition, some conditions such as schizophrenia, depression, severe anxiety disorders, delirium and dementia are more common in people with learning disabilities than in the general population. People with learning disabilities can also commonly suffer from eating disorders (including eating behaviours and feeding disorders not prevalent in the general population), and Attention Deficit Hyperactivity Disorder (ADHD) is also said to be common amongst children and adults, although there is concern about the reliability of diagnosing ADHD in people with learning disabilities (NHS Health Scotland 2004).
11.113 A lack of awareness by professionals and carers has meant that mental illness can remain undiagnosed and unmanaged in people with learning disabilities for long periods of time. The report suggests that:
'diagnostic overshadowing contributes to this. This refers to the phenomenon whereby debilitating emotional/psychological problems are assumed to be less important than they actually are, because of the context of the person's learning disability' (NHS Health Scotland 2004).
11.114 The authors argue that given its high prevalence, mental illness should always be a consideration as a possible cause of changes in the behaviour of people with learning disabilities.
People with learning disabilities in secure settings
11.115 Under the Mental Health (Scotland) Act 1984, a small number of people with learning disabilities were detained. Of these, most were on a section 18 order (detention for up to 6 months initially), and few requested reviews of their detention or appealed to the sheriff. It was also clear that the use of detention procedures varied across Scotland (Scottish Executive 2000b).
11.116 The Same as You? report (Scottish Executive 2000b) also pointed to the lack of information on the numbers and needs of people with learning disabilities in prison or secure accommodation for children. Following one of the recommendations of this report, a study was commissioned to look at the numbers of people with learning disabilities and/or Autistic Spectrum Disorders (ASD) within these settings in Scotland (Myers 2004). The study included a scoping exercise of 57 secure settings and case studies of seven of these settings. Secure settings included the State Hospital, 16 prisons, 6 secure accommodation units for children and 24 specialist in-patient units for people with learning disabilities and/or ASD and for people with mental health problems.
11.117 The problems of identifying the number of people with learning disabilities or ASD in secure settings were confirmed by this study. It was also pointed out that while there were only a small number of people with learning disabilities and/or ASD in prisons or secure settings, it was likely that this number represented a proportion of a larger population which had not been identified, assessed or diagnosed.
11.118 Amongst those with learning disabilities across different secure settings (from a sample of 49 people) there were a significant number with histories of statutory or institutional care, and a high proportion with current mental health and communication problems. A number of the sample were also seen as being high risk (in relation to offences committed or behaviour). People with learning disabilities were also seen as being at risk of exploitation, bullying or abuse by other residents in secure settings.
11.119 Across the secure settings there were different approaches to assessment in general, and few (apart from the learning disability units) had access to specialists in learning disability or ASD. Staff in the secure settings also highlighted a number of problems with what they saw as inadequacies and gaps in provision for this group. Some staff suggested that people with learning disabilities presented needs that neither non-healthcare secure settings (where the focus was on custody or offending behaviour) nor healthcare secure settings (where the focus was on mental illness) could appropriately meet.
11.120 It was also pointed out that resources for secure settings (apart from the specialist units) were not designed around meeting the needs of people with learning disabilities. In addition, the transition of people to less secure or community environments was being affected by a lack of resources outside the secure system, which meant that people were either being left in higher-security settings than they needed or were moved on without proper support (Myers 2004). The author concluded that
'the combination of complex individual needs and the lack of clear service responsibility or policy focus may further increase the risk of social exclusion for this vulnerable group of people'
and that this has very clear implications for policy-making, service planning and practice.
Discharge from secure settings
11.121 There are very few studies describing long-term outcomes following discharge from secure settings. Halstead et al (2001) followed up, for a maximum of five years, 35 patients who had received at least one year's treatment in a learning disability medium secure unit in England. Of the sample, 29 were male (83%); 28 (80%) were Caucasian, 5 (14%) African-Caribbean, 1 (3%) Asian, and 1 (3%) mixed race. Almost half of the sample (46%) had been admitted from prison, while 20% came from special hospital, 17% from learning disability hospitals, 6% from regional secure units and 9% from the community. Thirty-one patients were in the borderline or mild retardation group, while one patient with an autistic spectrum disorder was of normal intelligence and 3 had a moderate mental retardation. In relation to primary diagnosis, 16 patients suffered from a psychotic illness, 4 were depressed and the remainder were classified under behavioural disorder. As a secondary diagnosis, 14 patients had a personality disorder, 12 had problems with alcohol and 3 with drugs.
11.122 By the time of discharge, 16 patients had made 'good' progress, 11 had made 'some' progress, there was no change in 4 of the patients, while a further 4 were thought to have deteriorated. At the end of the treatment, 3 people went home, 17 went to a hostel, 13 were placed in another hospital while 2 were transferred to special hospital. Ten patients who went to hospital remained there throughout the follow-up period (including the 2 patients placed in special hospital). By the end of the follow-up 3 of the hospitalised patients had moved to a community placement and a further 2 had died.
11.123 The study found that a good treatment outcome was more common in those patients with a significant learning disability. It was also clear that the early period following discharge was a peak time for relapse. Within this group there was a low reconviction rate (only one person), and reconviction was less likely for older patients following discharge. At the end of the study period, 21 people were living in the community with support. A good response to inpatient treatment was associated with successful community placement, although the authors argue that secure services should be available to support local services with community management at any point during follow-up.
- In general people with learning disabilities have a high level of unrecognised illness as well as reduced rates of access to health care.
- It is estimated that people with a learning disability have a high incidence of mental illness, with a lifetime prevalence of 50%.
- The risk of children or young people with a learning disability experiencing mental illness is 30-40% higher and with adults it is 40-50% higher than the general population.
- Older people with learning disabilities have more mental health problems, particularly people with Down's syndrome who may get early onset dementia.
- A lack of awareness by professionals and carers has meant that mental illness can remain undiagnosed and unmanaged in people with learning disabilities for long periods of time.
- The use of detention procedures for people with learning disabilities varied across Scotland.
- There is a lack of information on the numbers and needs of people with learning disabilities in prison or secure accommodation for children.
DEAF PEOPLE
11.124 There are very particular challenges to be faced when dealing with the mental health needs of deaf people. The term deaf people is used here to encompass all groups, including those who define themselves as Deaf, a linguistic minority rather than a disability group.
Prevalence of mental health problems for deaf people
11.125 It has been reported that deaf adults share the overall prevalence rate for psychotic disorders but are more likely to be diagnosed as having a personality disorder or behavioural or adjustment problems (although this may be a consequence of being deaf). Deaf people with mental health problems are more likely to have learning difficulties or organic syndromes and co-morbidity is also higher in this group (Department of Health 2002c).
11.126 Deaf children are more vulnerable to mental health problems and are at increased risk of physical, emotional and sexual abuse. Deaf children also have a higher risk of impairments such as learning difficulties, multi-sensory impairment and central nervous system disorders, with an estimated prevalence of 40% in deaf children compared to 25% in hearing children (Department of Health 2002c).
Service provision for deaf people
11.127 One of the problems for services is how to communicate essential information effectively, about, for example, service provision or care plans, to people who are deaf. However, unlike services for the hearing community, adult mental health services for the deaf have developed in a generic way (with only 3 specialised deaf services in existence).
11.128 Provision has relied on individual and organisational effort and has developed at a national level in an 'ad hoc' way. The specialised nature of services specifically for the deaf has meant that there has not always been easy access to appropriate facilities for a range of mental health problems. Effective service planning has also been impacted by a lack of knowledge concerning the demographics of the deaf community and the lack of a solid evidence base for specialised clinical interventions (Department of Health 2002c).
Strategy in England & Wales for deaf people with mental health problems
11.129 The Department of Health have responded to calls for the development of a national strategy for deaf people with mental health problems. The Government's strategy for adults of working age in England is set out in the National Service Framework for Mental Health (Department of Health 1999c), which sets standards of care for people with a mental illness. This included a Specialised Services National Definition Set, published in December 2002, which identifies 10 services which should be regarded as specialised mental health services, including specialised services for deaf people (defined as a service for all ages).
11.130 A Department of Health consultation document, A Sign of the Times (Department of Health 2002c), highlights a number of problems and challenges faced in delivering services to people who are deaf. Some of the main issues are: that deaf people of all ages are disadvantaged when trying to access health services; that communication support and respect for cultural diversity are fundamental to improving mental health in the deaf community; that providing effective mental health services to the deaf community is more costly than mainstream provision; and that the current capacity for service development is limited (Department of Health 2002c).
Research on psychiatric services for deaf people
11.131 As has been highlighted, service planning has been impacted by a lack of information on the deaf community nationally, including numbers in prison, secure or specialised psychiatric units. Young et al (2000) state that little is known of the number of deaf prisoners in the UK with psychiatric needs. The same authors carried out a study of forensic referrals to the three specialist psychiatric services for deaf people in England (located in Manchester, London and Birmingham) (Young et al 2001). All patient case records from the three units from 1968-1999 were examined.
11.132 Of 5,034 files checked, 431 met the criteria for inclusion as a forensic referral (this number went down to 389 as 42 of the patients had been referred to more than one unit). A 'forensic referral' was classified as: 1) a patient with a history of conviction; 2) a patient having received a caution; 3) a patient found unfit to plead in relation to an indictable offence; 4) a patient seen under a forensic section of the Mental Health Act 1983; or 5) a patient with no previous criminal history who was in contact with the unit because of a pending charge or court case.
11.133 The majority of the sample were male (91%), with only 33 females (from a total of 389 forensic patients). The sample had been convicted of, or were currently charged with, a number of types of offence, a high proportion of which were violent and sexual offences.
11.134 Of the original 431 case files, a diagnosis was recorded for 89% (385). Of the 389 cases (not including those referred to more than one unit), 47% were recorded as having a mental disorder, including psychotic illness (25%), mental impairment/learning disability (19%), personality disorder (36%) and other diagnoses (including conduct disorders and depression). Fifty-three percent of the sample were diagnosed as having no mental disorder (a category which did not include diagnoses of personality disorder). However, within the group diagnosed as having no mental disorder, 41% were classified as having communication problems and, in 79% of cases, a psychiatric opinion had been sought prior to a court appearance.
11.135 Young et al (2001) argued that these data raised a number of issues which need to be investigated. Although the number of forensic deaf patients may seem small, there is evidence that it is growing. A surprising finding was the peak age of 17-18 for men at first conviction/caution/court disposal. The authors highlighted the fact that previous research suggested it was not common for deaf people to come before a court or be convicted at a young age (because the police would not pursue the cautioning or conviction of young deaf people, whether due to the language complications or a sense of compassion). This research suggested that this may no longer be the case and that the peak age has become consistent with that of offending in the general population.
11.136 A number of the referrals were subsequently found to have no mental disorder. This is consistent with findings suggesting that deaf people who use sign language or have communication problems have been mistakenly assessed as being mentally ill or impaired by professionals jumping to conclusions. If this is the case, the authors ask, what is happening to patients who never reach these specialist services to be assessed? If the needs of this group are to be met, the development of forensic services as well as medium security facilities for deaf people will be called for.
- It has been reported that deaf adults share the overall prevalence rate for psychotic disorders but are more likely to be diagnosed as having a personality disorder or behavioural or adjustment problems.
- Deaf people with mental health problems are more likely to have learning difficulties or organic syndromes, and co-morbidity is also higher in this group.
- Adult mental health services for the deaf have developed in a generic way, with provision relying on individual and organisational effort and developing at a national level only in an 'ad hoc' fashion.
- Service planning has been impacted by a lack of information on the deaf community nationally, including numbers in prison, secure or specialised psychiatric units.
PEOPLE WITH ANOREXIA NERVOSA
11.137 Webster, Schmidt and Treasure (2003) note that anorexia nervosa has the highest mortality of any psychiatric disorder, while a significant minority of sufferers need detention and involuntary treatment. At the same time, clarification that feeding a person against their will is lawful has only been in place since 1997 (Webster et al 2003). Key questions arise in relation to treatment in severe cases of anorexia nervosa. Carney et al (2003) point out that clinicians differ in their use of force (as well as in the form of legal coercion used in therapy): some programmes are against it, while others believe that involuntary nasogastric feeding is an integral element of treatment. Distinctions are also drawn between adults and children, as well as between the forms coercion takes (Carney et al 2003).
11.138 It is argued that while patients with anorexia nervosa may be able to make valid judgements in relation to certain aspects of their lives, they are often unable to make rational decisions about body weight, diet and medical treatment. On this view, they should be involuntarily hospitalised and treated because they are suffering from a single delusional disturbance which may lead to a life-threatening situation (Melamed et al 2003). Questions centre on the legal and ethical dilemmas around involuntary hospitalisation, as well as on the concepts of 'capacity' and 'competency'.
11.139 As Melamed et al (2003) asked:
'where is the boundary between the individual's autonomy and the need for social intervention? Can the courts or the doctors deal with an illness that is also a social phenomenon outside the legal or medical perspective?'
11.140 The debate on the artificial feeding of people with anorexia nervosa relates to whether it is ethical, as well as to how competency is determined. If competency has been established, should an individual's right to refuse food be respected in all circumstances? Giordano (2003) argues that the peculiarities of the illness make this a difficult question to answer. She has suggested that anorexia nervosa is not strictly a lethal illness, as the effects of abnormal eating are reversible. Refusal to accept treatment does, however, have a major effect on those caring for someone with anorexia nervosa, which may weaken the normative strength of the principle of respect for people's competent decisions. This does not mean that a person with anorexia's refusal to accept therapy should always be disregarded, but rather that even if that person is making a competent decision, this may not be a sufficient reason to bind carers to respect that refusal. It is argued that, at some level, in the case of anorexia nervosa competence does not seem to produce the same moral obligation that it may produce in other cases (Giordano 2003). (For a fuller discussion on capacity see Chapter 5)
- Questions centre on the legal and ethical dilemmas around involuntary hospitalisation as well as concepts of 'capacity' and 'competency'.
Technical Editors: William J. Zielinski, Thomas E. Kucera
Photographic Bait Stations
Thomas E. Kucera, Art M. Soukkala, and William J. Zielinski
Description of Devices
Single-Sensor Camera System
Dual-Sensor Camera System
Line-Triggered Camera System
Baits and Lures
Single and Dual Sensor
Preparations for the Field
Defining the Survey Area
Station Number and Distribution
In the Field
Checking the Stations
Comparisons of Camera Systems
C--Examples of Photographs
There are a variety of systems in use that employ a camera at a bait station to detect wildlife. We will describe three that are widely used and with which we are most familiar. They can be divided into two major categories according to the type of camera used. The first employs automatic, 35-mm cameras and can be further divided into two types that differ by the mechanism that triggers them. We will refer to these types as "single sensor" (Kucera and Barrett 1993, 1995) and "dual sensor" (Mace and others 1994). The second major category is a line-triggered system that uses a manual, 110-size camera (e.g., Jones and Raphael 1993). We provide data on equipment costs and discuss the relative merits of the various systems in a later section of this chapter.
Remote-camera systems are currently available from several manufacturers (e.g., Cam-Trakker, 1050 Industrial Drive, Watkinsville, GA 30677; Compu-Tech Systems, P.O. Box 6615, Bend, OR 97708-6615; Deerfinder, 1706 Western Ave., Green Bay, WI 54303; also see Bull and others 1992, Laurance and Grant 1994, Major and Gowing 1994, Danielson and others 1995).4 All employ somewhat different configurations and have different advantages and disadvantages. The cameras used in these systems also change as camera models are discontinued by manufacturers and new ones are introduced. Thus, the systems we describe in this document may differ from what is available in the future, and the reader who wishes to use remote photography to detect wildlife may need to modify specific procedures as appropriate for the equipment in hand. As remote-camera technology advances, it is likely that additional designs will continue to be developed.
4 The use of trade or firm names in this publication is for reader information and does not imply endorsement of any product or service by the U.S. Department of Agriculture.
Description of Devices
Single-Sensor Camera System
The single-sensor system that we will describe here is the Trailmaster TM1500 (Goodson and Associates, Inc., 10614 Widmer, Lenexa, KS, 66215, 1-800-544-5415), which consists of an infrared transmitter and receiver, used with the TM35-1, an automatic, 35-mm camera (fig. 1). The camera is triggered when an infrared beam is broken; such an occurrence is termed an "event." The transmitter emits a cone of infrared pulses. Because the receiver has an area of sensitivity of about 1 cm in diameter, the effective beam diameter is about 1 cm, thus requiring precise placement to intercept the target animal. The transmitter and receiver may be placed as far as 30 m apart. Their alignment is facilitated by a sighting groove on the receiver and a red light that flashes during the setup procedure to indicate that the beam is being received; this light stops flashing when the system is in data-collection mode.
The receiver also is an event recorder that stores the date, time, event number, and whether a picture is taken each time the beam is broken. A maximum of 1000 events can be stored. The sensitivity of the trigger--that is, the length of time the beam must be broken or, more accurately, the number of infrared pulses that must be blocked to register as an event--can be adjusted by the user from 0.05 to 1.5 seconds. The time after a photograph is taken until the next can be taken (the "camera delay") also is set by the user, from 0.1 to 98 minutes. If the beam is broken during the camera delay, events are still recorded and stored. The transmitter and receiver are each powered by four alkaline C-cells, which last approximately 30 days of continuous field operation. Both units come with nylon straps about 70 cm long for attachment to trees.
The most recent (November 1995) Trailmaster configuration employs an Olympus Infinity Mini DLX camera; earlier models used a Yashica AW Mini or an Olympus Infinity Twin. These camera changes were dictated by the availability of the models from the manufacturer; users of the equipment must become familiar with the operations of the particular camera they have. The components of the different systems, such as receivers and cables, are not interchangeable and should not be mixed up. The camera is modified to be triggered by an electrical pulse from the Trailmaster receiver. A quartz clock in the camera allows display of date and time on the photograph. The camera connects to the receiver with an 8-m wire, providing flexibility in the placement of the camera. Several cameras can be triggered simultaneously with the use of an optional multi-camera trigger. The flash can be operated automatically as required by available light, in fill-in flash mode so that the flash operates with every frame, or the flash can be turned off. With 100-ASA film, the flash illuminates to about 3.5-6 m, depending on the camera model; with 400-ASA film, this distance is doubled. Infrared film also may be used with an infrared filter over the flash. Slave flashes, triggered by the flash of the camera, can be used to extend the area illuminated.
The Olympus Infinity Mini DLX in the newest Trailmaster configuration can use either one 3-v lithium or two AA alkaline batteries. In normal use, the lithium battery will operate through about 14 rolls of 36-exposure film, and the alkaline batteries about 10, assuming flash on half the exposures. At a bait station, because the camera is constantly on and the flash is charged, the battery may last only 30 days. The quartz clock is operated by the camera battery. The capacitor that charges the flash in the Olympus Infinity Twin camera used in earlier models drains after 2-4 days if no photograph is taken. Thus, if the camera is not triggered, or is not reset by closing and opening the lens hood during this time, the flash may fail to operate the first time the camera is triggered. This does not happen with the Yashica, which keeps the flash charged at all times. However, the batteries in the Yashica must be changed more frequently. The Olympus Infinity Twin uses two 3-v lithium batteries, which will last through approximately 20 rolls of 36-exposure film, assuming the flash operates on half the frames. The Yashica camera uses 2 AA batteries, which last approximately 2 weeks. The quartz clock is operated by a separate 3-v lithium battery that will last 3 years.
The system comes with a 10-cm, collapsible, plastic tripod with a threaded ball-and-socket head that screws into the bottom of the camera. A metal bracket shields the top and back of the camera and prevents birds from pecking the controls while allowing access to the viewfinder; the metal bracket also provides some protection for the lens from rain or snow if the camera is operated in landscape format. The tripod is designed to be placed on a flat surface, or when collapsed, attached to a small tree or branch by a Velcro strap. The attachment of the camera to a tree or other support can be greatly improved by using a more substantial ball-and-socket head purchased at a photographic supply store (the Bogen model 3009 works well), attaching this to a metal "L"-bracket with a bolt, and fixing the bracket to a tree with lag bolts (fig. 1). This is a much more secure and convenient alternative.
The entire system weighs about 2 kg with batteries, and can be transported in a 25- × 20- × 10-cm box. It is weatherproof and operates in rain and snow. We tested low-temperature operation of an early model using the Olympus Infinity Twin in a freezer, and it performed consistently at -17 °C for 2 weeks and at -7 °C for 2 more weeks.
Also available from the manufacturer (Goodson and Associates) is a device that allows electronic collection of data (date and time of all events, and which events triggered the camera) in the field for later transfer to a personal computer; the data can also be transferred directly from the receiver to a personal computer. The collector is particularly useful when you check several stations in a day by reducing the time you spend recording data at each station. The software package required for downloading from either the receiver or collector provides output in the form of text (event number, date, time, and frame number) and a graph showing events by day and time in a 3-dimensional bar chart. Trailmaster also makes a battery-operated printer that produces a hard copy of the event data in the field.
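Where many stations are checked, it can be convenient to tabulate the downloaded event data with a short script rather than by hand. The sketch below, in Python, counts events and photographs per day from a text export; the whitespace-separated column layout it assumes (event number, date, time, frame number) is hypothetical and will need to be adapted to the actual format produced by the Trailmaster software.

from collections import Counter

def summarize_events(path):
    # Count beam interruptions ("events") and photographs per day from a
    # downloaded event log. Assumed layout per line: event date time [frame].
    events_per_day = Counter()
    photos_per_day = Counter()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip blank or malformed lines
            date = parts[1]
            frame = parts[3] if len(parts) > 3 else ""
            events_per_day[date] += 1
            if frame:  # a frame number means the event triggered the camera
                photos_per_day[date] += 1
    for date in sorted(events_per_day):
        print(f"{date}: {events_per_day[date]} events, "
              f"{photos_per_day[date]} photographs")

summarize_events("tm1500_station07.txt")  # hypothetical file name

Such a summary complements, but does not replace, the Camera Results form.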
Dual-Sensor Camera System
The dual-sensor remote camera system consists of an automatic 35-mm camera modified to be triggered by a microwave motion sensor and a passive infrared heat sensor (Mace and others 1994; figs. 2A, 2B). Dual-sensor systems are made by Compu-Tech, Trailmaster, and Tim Manley (524 Eckleberry, Columbia Falls, MT, 59912, 406-892-0802). Although the Trailmaster TM500 dual sensor (fig. 3) has recently been field-tested and proved reliable and lightweight (K. Foresman, pers. comm.), we will describe the use of the equipment from the last source, sometimes referred to as the "Manley" camera. These three systems share many similarities. If you are using a dual-sensor system from another manufacturer, the procedures described below will need to be altered as required by the particular system employed. Again, because of the availability of particular camera models from the manufacturers, specific designs of the system are likely to change.
In normal operations, both the microwave sensor that detects motion and the passive infrared (PIR) sensor that detects changes in ambient temperature are triggered simultaneously and operate the camera. If either sensor malfunctions (e.g., the microwave sensor loses its signal, or if ambient temperature approaches the body temperature of a target animal), the other sensor will take priority and will work like a single-sensor system. Both sensors send out a field to approximately 11 m. The camera is triggered when an animal enters the field, which can be restricted to several meters wide by obstructing the PIR sensor window. The sensors draw 35 mA from the 12-v gel cell (golf-cart type), deep-cycle battery used to power the system. This rechargeable battery should last for 20 days between charges.
Early versions of this system used an Olympus Infinity Jr. camera, modified to be triggered by an electrical pulse from the sensor. The camera focuses from 0.7 m to infinity; the flash illuminates to 4.5 m with 100-ASA film and 9 m with 400-ASA film. The flash can be operated automatically as required by available light, continuously on every picture in fill-in mode, or the flash can be turned off. The capacitor that charges the flash drains after 3-4 days if no picture is taken. Thus, if the camera is not triggered or is not reset by closing and opening the lens hood, the flash may fail to operate the first time the camera is triggered. The camera is powered by a 3-v lithium battery that will last through 20 rolls of 36-exposure film, assuming the flash operates on approximately half the pictures. However, because the light meter is on continuously while the remote camera is operating, the camera battery may last only 1-2 weeks depending on how many rolls of film are exposed, how many flash pictures were taken, and the ambient temperature. The camera is equipped with a quartz clock that allows displays of date and time on each photograph; the clock is powered by a 3-v lithium battery that will last several years.
The entire system is housed in a weatherproof 15- × 30- × 19-cm metal ammunition box that will withstand moderate abuse (e.g., from a bear) without being damaged. An external switch allows the system to be turned on and off without opening the box. The box can be modified to allow it to be locked shut and cabled to a tree to discourage theft and vandalism. The system comes with a mounting bracket and lag bolts for attachment to a tree. Total weight is approximately 13.6 kg including the 12-v battery.
Line-Triggered Camera System
This is an inexpensive, remotely triggered system, assembled by the user, that employs a 110-size camera (fig. 4). We have the most experience with the Concord 110 EF and CEF with internal, electronic flash (a distributor can be contacted by calling 908-499-8280), but similar models may be satisfactory. It is essential that the camera have an internal flash; "flash bars" and "flash cubes" have a high failure rate in the field. Each camera should be identified with a unique number engraved or written on the body with permanent marker.
The system is composed of the camera, a wooden mounting stake, a cover from a plastic gallon milk jug, an external battery pack, and the trigger mechanism. The mounting stake is a 1- × 3- × 36-inch post topped with a 0.5- × 2.75- × 5.0-inch wooden platform (figs. 5, 6). The platform should be firmly screwed to the top of the post because this is the surface on which the camera is attached. Avoid using plywood for the platform.
The camera can be adequately weather-sealed for most conditions by putting a strip of electrical tape over the trigger release and a second strip over the flash switch area (be sure the switch is ON). However, in rainy conditions, the camera should be covered with half of a 1-gallon milk jug (fig. 5). Staple Velcro to the milk jug and to the vertical surface of the platform board to hold the jug in place. Position the Velcro pads to avoid obstructing the nylon leader that comprises the trigger mechanism (see below) as it exits the camera. Camouflage the jug with dark green or brown spray paint to reduce the chance of its discovery by passers-by.
Unlike previous versions in which a coat-hanger-wire mechanism triggered the shutter (Fowler and Golightly 1993, Jones and Raphael 1993), the design presented here employs a line from the bait that connects directly with the shutter mechanism inside the camera (L. Chow, pers. comm.). Familiarize yourself with how the 110 camera works by opening the rear of the camera and watching inside while tripping the shutter and operating the film-advance mechanism several times. Look for a flat, triangular lever that snaps backwards when you trip the shutter. This is the internal shutter release. Trip the shutter to disengage the internal shutter release from the toothed gear. Drill a small hole (using a #68 or #70 gauge drill bit) in the underside of the camera, approximately 2 mm from the rear edge of the camera. Position and angle the hole so it is just behind the internal shutter release. Make a loop in a 12- to 15-inch length of a 2-lb test nylon fishing leader. Fold and pass the loop through the hole and, using forceps, hook it over the internal shutter release. Secure the loop by knotting it outside the camera an inch or two from the hole; a knot inside the camera may prevent the shutter release from operating properly.
Because the factory-suggested batteries for the camera are insufficient to provide energy for more than a few days, additional power must be provided. Build an auxiliary battery unit that will house two size D batteries (fig. 6). House the batteries in a standard, open, plastic battery pack, available at electronics stores. The D-cell unit should be connected to the battery terminals in the camera by stereo wire that is soldered from the battery pack to the contacts in the camera battery compartment; if wires are provided with the battery pack, use them. The Concord 110 requires very little modification to solder the wires to the battery terminals in the camera's battery compartment. After soldering the wires, cut a small hole in the camera's battery compartment door to allow entry of the wire from the auxiliary battery unit. Seal this hole with silicone. The battery compartments of other camera brands (e.g., Vivitar and Focal) require that some of the plastic body be cut away to access the internal battery terminals. Attach the battery pack to the bottom of the platform board with short screws or rubber bands; Velcro is inadequate to support the weight of the batteries.
Baits and Lures
With the 35-mm systems, we recommend using road-killed deer, fish, or a combination of the two. The amount used should be as large as possible, up to a whole deer carcass, but at least 5 kg. With the line-triggered system, chicken wings are the recommended bait. Also use a commercial lure and, especially for surveys for lynx, a visual attractant (e.g., hanging bird wing, large feather, or piece of aluminum).
Wolverines, fishers, and martens are opportunistic hunters, and the great diversity in their diets reflects this (Banci 1989, Hash 1987, Martin 1994). In addition to taking live prey, they frequently scavenge in winter and can be attracted to carcasses of ungulates (Hornocker and Hash 1981; Pittaway 1978, 1983). Thus, road-killed deer (Odocoileus sp.) are probably one of the most readily available baits to attract these species to 35mm camera stations. However, because it is illegal to handle or transport road-killed deer without appropriate permission, coordination with the state game agency is necessary before handling and transporting them.
In many areas, road-killed deer are available seasonally; this may require planning in order to have bait for the field season. Storing deer can be a challenge; a large freezer, such as those at fish hatcheries, or a cold box at some National Forest System ranger districts often is necessary. The bigger the bait the better, but handling whole deer carcasses can be difficult. An important requirement is that the bait be large enough to remain attractive until it is scheduled to be replaced. We recommend a piece of road-killed deer weighing at least 5 kg. One approach to increase the convenience of storage and transport of bait is to quarter deer when fresh and freeze the pieces in individual plastic bags. The frozen packages can be transported when needed, eliminating the need to cut up frozen carcasses. Another attractant being tried is slaughterhouse cow blood frozen in gallon milk jugs. Putting an anticoagulant in the blood will keep it in a liquid state. At the camera station, perforate the jug to allow the scent to escape and suspend the jug from a cable, approximately 3.5 m above the ground.
Commercially available trapper lures such as skunk scent may be valuable to attract the mustelids, and we recommend that they be tried and evaluated in conjunction with the bait. Two sources of such lures are the M & M Fur Company, P.O. Box 15, Bridgewater, SD 57319 (605-729-2535) and Minnesota Trapline Products, 6699 156th Ave. NW, Pennock, MN 56279 (612-599-4176). Standard predator-survey disks containing fatty acids can be obtained from the Pocatello Supply Depot, 238 East Dillon St., Pocatello, ID 83201. In several areas of California, fish emulsion sold as fertilizer in garden-supply stores and used in conjunction with deer carrion has been used to attract fishers and martens. Brands vary in the strength of their odor. Mixing vegetable oil or glycerin with the fish emulsion may retard evaporation and thus extend the attractiveness of the scent.
Lynx rely heavily on a single prey species, the snowshoe hare (Lepus americanus), although they do take other small mammals, birds, and carrion, particularly when hares are rare (Hatler 1989). This requires somewhat different strategies in attempts to detect them. The typical set used to trap lynx employs a scented lure (e.g., commercially available skunk scent and some catnip) in addition to a visual attractant or "flasher" such as a grouse wing, a turkey primary feather, or an aluminum pie plate on a string above the trap (Baker and Dwyer 1987, Geary 1984, Young 1958). Once attracted to the general area by the scent, the animal sees the object moving in the wind and comes to investigate it. A similar arrangement could be used to attract lynx into the beam of the single-sensor, or within the range of the dual-sensor camera. Scents are probably best purchased from a commercial supplier. A set employing carrion, a scent, and a bird wing conceivably could attract any of the four target species.
35-mm systems: Conduct surveys in winter. Bears are least active during winter, and the dual-sensor cameras operate best in cool temperatures.
Line-triggered system: Conduct one survey in the spring, shortly after snowmelt, and if the target species is not detected, conduct another in the fall. The line-triggered camera system works best in snow-free conditions.
Single and Dual Sensor
There is evidence that wolverines are more attracted by carrion in the winter than at other seasons (Hornocker and Hash 1981), and this is likely true of the other mustelids. They also may be less likely to come to an attractant when natural foods are more common. In addition, bears are usually much more numerous than wolverines, fishers, and possibly martens, and are readily attracted to bait. Bears can exhaust the film, remove bait, and damage equipment. For these reasons, the best season to try to detect mustelids is winter. However, data on wolverines in Idaho suggest that females restrict their movements from near the time of parturition through weaning of offspring and thus may be effectively removed from the population in late February and March (J. Copeland, pers. comm.). Similar seasonal considerations may apply to fishers (Arthur and Krohn 1991, York and others 1995) and American martens (Strickland and others 1982).
Both 35-mm systems operate well in the snow; the dual-sensor system operates best in winter because warm temperatures during the summer can send erroneous signals to the sensor. If working in winter is not possible, or if bears are active year-round in a particular area, you may need to check and move the equipment more frequently. If a bear finds a station, it is likely to return, so the station may need to be moved or reconfigured to prevent the bear from taking the bait (see below, Checking the Stations).
Seasonal differences in vulnerability of lynx to trapping are unknown, so recommendations for seasonal guidelines will have to await additional data. Again, however, if bears are a problem in a study area, or if there is an ongoing program of snow tracking (see Chapter 5) to detect lynx that can incorporate the photographic bait stations, winter would be the most appropriate season.
The line-triggered camera system recommended here is difficult to use in snow, especially if snow falls during the survey period (C. Fowler, pers. comm.). Snow can interfere with the trigger wire that runs along the ground, and cold temperatures can affect the mechanical trigger. Therefore, surveys using line-triggered cameras should be conducted when most snow is melted and the risk of new accumulation is low. However, the line-triggered camera has successfully been used during winter by attaching the camera and bait to the top of a downed log that is above the snow (T. Holden, pers. comm.).
Martens and fishers have been detected on numerous occasions at line-triggered camera and track-plate stations baited with chicken during the spring, summer, and fall (Fowler and Golightly 1993; Seglund and Golightly 1993; Zielinski and others, 1995), when alternative foods are assumed to be more abundant than in winter. Bull and others (1992) detected marten at more stations in winter than summer, but only 16 stations were used. There is no compelling evidence that spring and fall surveys that target marten and fisher are less effective than winter surveys, and surveys certainly are easier to conduct in spring and fall. Neither wolverines nor lynx have been detected at line-triggered cameras, so conclusions about seasonal effects on their detectability must await additional data. There is little evidence that bears will return as frequently to a line-triggered camera station as they do to 35-mm camera stations. There is no reason to believe that moving the station will result in less damage than replacing the unit at the same location. Because of its low cost, a line-triggered camera set damaged by bears does not result in significant expense.
35-mm systems: Operate each station until either the target species is detected or a minimum of 28 days have elapsed.
Line-triggered system: Stations should be set for a minimum of 12 nights and checked every other day for at least six visits (excluding setup) or until the target species is detected. If the target species is not detected during the first 12-day session, run a second session during the alternate season (either spring or fall) for at least 12 days or until the target species is detected.
Allow extra days to achieve the recommended duration if the camera becomes inoperative.
Because the objective of the survey is to determine whether the target species is present in a sample unit, effort need not be expended beyond the detection of the target species. The minimum duration that a 35-mm camera station should operate without detecting a target species is 28 days. We based this minimum effort on data on "latency to first detection" of wolverines and American martens. Using dual-sensor systems, J. Copeland (pers. comm.) detected wolverines at six stations with a mean latency of 38 days; the median latency was 17 days. Mean latency to first detection at dual sensor cameras in Montana was 13.5, 9.0, and 13.0 days for martens, fishers, and wolverines, respectively (Foresman and Pearson 1995). Kucera5 detected American martens at 25 single-sensor stations after a mean of 7.9 days and a median of 5 days.
5 Unpublished data on file at the Department of Environmental Science, Policy, and Management, University of California, Berkeley, CA.
We set the minimum effort when using line-triggered cameras at 12 nights in response to several sources of information on the latency to first detection for marten and fishers. In reviewing the results of 207 surveys that used either track plates or line-triggered cameras, Zielinski and others (1995) found that the mean (SD) latency to first detection for surveys that had from 6 to 12 stations was 4.2 (2.4) and 3.7 (2.6) days for fisher and marten, respectively. This estimate is biased downward, however, because it included only those surveys that detected a target species before the survey was concluded. Raphael and Barrett (1984) suggested that 8 days were sufficient to achieve high detection probabilities when measuring carnivore diversity at a site. Jones and Raphael (1991), however, discovered that 60 percent (3 of 5) of first detections during marten surveys occurred after day 8 but before day 11. They concluded that surveys should run more than 11 days. Fowler and Golightly (1993) suggested a 22-day survey duration, but this was with the intention of using track-plate visits to monitor population change. Because the objectives of detection surveys are different, and because the statistical merits of their approach have not been adequately addressed, 22 days is probably excessive for detection.
Because visits by lynx and wolverines to line-triggered camera stations have not yet been recorded, there are no data on which to base recommendations for survey duration. Until appropriate data are collected to suggest otherwise, we believe that the 12-day duration, twice per year if necessary, is sufficient effort.
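The latency figures quoted above are simply the mean and median number of days from station setup to the first photograph (or other confirmed detection) of the target species, computed over stations or surveys at which a detection eventually occurred. A minimal sketch, using invented values, is given below; note that excluding stations that never detected the species biases these estimates downward, as discussed for the published figures.

from statistics import mean, median

# Hypothetical latencies (days from setup to first detection) at six stations
# that eventually detected the target species; stations with no detection
# contribute nothing, which is the source of the downward bias noted above.
latency_days = [3, 5, 5, 7, 9, 17]

print(f"n = {len(latency_days)}, mean = {mean(latency_days):.1f} days, "
      f"median = {median(latency_days)} days")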
Preparations for the Field
Defining the Survey Area
Conduct surveys in 4-mi2 sample units, as described in Chapter 2.
Chapter 2 discusses the two types of survey, Regional Distribution and Project Level. The investigator should decide which type is appropriate for the planned work and outline the survey area on a map. In both types of survey, we recommend the use of separate, 4-mi2 sample units as the basis of the survey. For a Regional Distribution survey, the region of interest should be defined on a map, and the 4-mi2 sample units located as suggested in Chapter 2. A Project Level survey will include a 36-mi2 area, with nine sample units, centered on the project.
Station Number and Distribution
35-mm systems: Use a minimum of two cameras in each sample unit, no closer than 1 mile apart, at the sites of the most appropriate habitat or where unconfirmed sightings have occurred.
Line-triggered system: Use a minimum of six camera stations in each sample unit. Arrange stations in a grid, distributed at intervals of about 0.5 mile, at the site in the sample unit with the most appropriate habitat or where unconfirmed sightings have occurred (see Chapter 2, fig. 2).
Within each sample unit, place the detection devices (minimum of two 35-mm or six line-triggered cameras) where a detection is most likely. This could be in an area thought to have the most suitable habitat or near an area of previous reports of occurrence or likely travel routes, as discussed in Chapter 2. However, in doing so, try to maintain the inter-station spacings recommended above.
Two 35-mm cameras are an adequate minimum density per sample unit because they can operate longer for the same personnel costs than the line-triggered cameras, and the larger baits used should attract target individuals from a greater distance. The number of line-triggered cameras in a survey can influence its success (Zielinski and others, 1995). Although the data are too few to estimate the optimum station number, it seems reasonable to have detection stations that sample at least 10 percent of the area in the sample unit for the survey duration. Six stations provide at least 12.5 percent coverage of the sample unit if they are arrayed as a rectangle and one assumes that a target individual will be detected if it travels within the area created by joining the perimeter stations. Of course more stations will provide a greater assurance in detecting occupants, but more than 12 stations (covering 1.5-mi2; 37.5 percent of the area) would probably be excessive.
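The coverage figures above follow from simple grid geometry: the area enclosed by the perimeter stations of an r × c grid spaced 0.5 mile apart is (r - 1)(c - 1)(0.5)2 mi2, expressed as a fraction of the 4-mi2 sample unit. The short sketch below reproduces the 12.5 percent (six stations, 2 × 3) and 37.5 percent (twelve stations, 3 × 4) values; the rectangular arrangement is the assumption stated in the text.

def grid_coverage(rows, cols, spacing_mi=0.5, unit_mi2=4.0):
    # Area enclosed by the perimeter of the station grid, as a fraction of
    # the sample unit.
    enclosed_mi2 = (rows - 1) * spacing_mi * (cols - 1) * spacing_mi
    return enclosed_mi2 / unit_mi2

for rows, cols in [(2, 3), (3, 4)]:
    stations = rows * cols
    print(f"{stations} stations: {grid_coverage(rows, cols):.1%} of the sample unit")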
If there is no reason to place the line-triggered camera stations either at the most suitable habitat or where previous sightings occurred, array the stations as a grid in the center of the sample unit. Wherever the grid is placed, adjust its shape to accommodate road access in the vicinity. If the sample unit is roadless, pack the materials into the area.
In the Field
Before you go out, become familiar with the operation of the device you are using. Practice with it so that you are comfortable with its operation. When using the single-sensor system we describe, understand its commands, know how to program it, read out the event data, clear it, change batteries, and know where in the manual to look for instructions for a particular topic you need help on. This is much more easily done in the warmth of home or office than in the field.
In the field, do not go alone, especially during winter. Tell someone where you are going and when you will return, and what to do if you do not return by a certain time. Be aware of the weather forecast, have appropriate gear, and expect the worst. Remember that ease of access can change drastically as snow conditions change. Be sure you have all the necessary equipment; a list is provided below (Equipment List).
The major considerations for establishing stations in the field are maximizing the probability that they will be found by the target animal species and minimizing the likelihood that the station will be found by people. Mark the station permanently with a metal tag or stake, and precisely describe its location. If possible, use a Global Positioning System to determine the location. This will allow future study efforts to replicate your work.
Surveys using 35-mm cameras will be conducted primarily during winter when potentially hazardous conditions frequently exist. It is the responsibility of the supervisor to evaluate potential hazards in the survey area and to obtain proper training for all personnel before they go into the field. Field biologists often assume they know how to get along in the outdoors. Surveying for rare species during winter may test those assumptions; being a field biologist does not guarantee competence to conduct fieldwork in winter.
Job descriptions and training for field technicians should stress winter field skills including skiing, snowshoeing, snowmobiling, camping, and avalanche training. Proper winter equipment must be provided to each field person. Employees should be trained by in-house experts or at one of several established winter training schools. Lists of winter camping and avalanche training schools are provided in Chapter 5 under Safety Concerns. Two excellent references on avalanches are by Armstrong and Williams (1986) and Daffern (1992). Selected references on winter outdoor skills include Forgey (1991), Gorman (1991), Halfpenny and Ozanne (1989), Pozos and Born (1982), Schimelpfenig and Lindsey (1991), Weiss (1988), Wilkerson and others (1986), Wilkerson (1992), and Wilkinson (1992).
Uncooked meat baits are a potential source of Salmonella bacteria, so meat should be wrapped in plastic and frozen until the day it is used. Contact with either fresh or old bait should be minimized. Plastic bags can be used as gloves to reduce contact, and for smaller pieces of bait, kitchen tongs can be used. Carry soap, water, and disposable wipes so that you can wash your hands thoroughly after handling bait. Careful attention to cleanliness will make the risk of contamination from rotting meat, including chicken, negligible (J. Sheneman, pers. comm.). The risk of poisoning the target species with rotting meat baits is very low, as most target species regularly consume carrion.
A soft-sided cooler bag is convenient for carrying the Trailmaster and provides some protection. Be sure that the receiver is programmed for the correct date and time, for pulses = 10 (-P 10), and for camera delay = 2.0 (cd 2.0). These are initial recommendations; change them if you have reason. For example, make the trigger more sensitive (fewer pulses) if bait is being taken but no events recorded, or increase the camera delay if a non-target animal such as a squirrel is shooting up a lot of film. Make sure that the receiver is programmed to activate the camera (see the Trailmaster manual, p. 12). A short summary of Trailmaster commands is presented in appendix B.
Load film into the camera. Print film of 100 ASA works well, is relatively inexpensive, and can produce enlargements of acceptable quality. Using a small, blunt tool, synchronize the date and time on the camera display with the receiver, and set the display to show the date (day number) and time, not month or year or other configuration. With the Olympus Infinity Twin, be sure that the horizontal bar over the minutes digits is showing, which indicates that the information will appear on the film.
For mustelids, an ideal site has three trees, 15-30 cm in diameter and 3-10 m apart, lined up in a north-south direction with the middle tree slightly (15 cm) offset, and a fourth tree or a branch 2-3 m from the middle tree with a good view of it (figs. 7, 8). The transmitter will be in the middle of the trunk of the northernmost tree facing south, and the receiver will be on the east side of the trunk of the southernmost tree with the receiving window pointing north. This orientation is important to prevent solar infrared radiation from reaching the receiver and causing false events to be recorded. The bait will be on the middle tree, and the camera will be on the fourth tree. As an alternative, the camera can be above the receiver on the same tree. The beam should pass within 5 cm of the middle tree about 1.5-2 m above the ground. With some practice, you can easily identify the appropriate configuration of trees. Do not use trees that will move in the wind, and trim any branches that could blow into the beam or block the camera.
It is best to have one person handle the bait and another the equipment, so that no odors from the bait get on the equipment. Hang the bait along the trunk of the middle tree so that it is at least 2 m above the ground to prevent canids from reaching it. In areas of heavy snowfall, you may need to adjust the height of the bait to accommodate changing levels of snow. Attaching the bait to the tree with wire will prevent loss of the bait if the string or rope is chewed. Trim lower branches to guide animals to the bait through the beam and to eliminate perches for birds and squirrels in the beam. Add any scent as appropriate to attract animals to break the beam.
Position the transmitter on the northern tree and receiver on the southern tree so that the infrared beam passes 10-15 cm below the bait on the middle tree and about 5 cm from the tree, so that any animal climbing the tree to get the bait must pass through the beam. Look down the sighting groove on the receiver, and aim it precisely at the transmitter window; this is important for getting the best performance. When the approximate positions of the transmitter and receiver are established (using the receiver in setup mode with its flashing red light), tighten the receiver strap and check the alignment again.
Loosen the transmitter strap and tilt the transmitter up and down and side to side, watching when the red light on the receiver stops flashing. This is to determine where the central portion of the infrared beam is; fasten the transmitter so that this central portion of the beam hits the receiver. Check the position of the beam relative to the tree and bait by passing your hand through the beam to simulate an animal coming to the bait and watching when the red light on the receiver goes out, showing that the beam is broken. Remember, after 4 minutes the receiver automatically leaves the setup mode and the red light stops flashing. Again, sight down the groove in the receiver; adjust it so that it points directly at the transmitting window and tighten the strap, pushing the points on the back of the receiver into the tree so that the unit is firmly positioned. Visually check the transmitter to determine that the central portion of the beam is directed at the receiver, and adjust it if necessary.
If you are using the collapsible tripod supplied with the Trailmaster, attach the camera to it with the metal bracket shielding the top of the camera. Set the flash mode for FILL-IN, so that the flash operates on every exposure, and make sure that the self timer and continuous mode are off. Attach the camera and tree-pod to a tree or large branch about 2-3 m from the bait, with an unobstructed view centered on where you expect the animal to be. Position the camera so that the automatic focus frame in the viewfinder is on the target and not a distant background. The tree-pod should be collapsed; use duct tape to attach it to the tree. Tighten the attachment of the tree-pod to the camera, make a final alignment of the camera to the target, and tighten the ball and socket; this should be done with pliers to achieve a secure connection, but be careful not to strip the threads. A length of duct tape from the camera shield up to the tree helps prevent the camera from tipping down when weighted with snow. As a more secure alternative, attach an L-shaped metal bracket to the tree with lag bolts to provide an attachment for a more substantial ball-and-socket head such as the Bogen 3009 ( figs. 1, 7 ).
Run the camera cable from the receiver to the camera, winding it several times around the trees on which the camera and receiver are placed, so that any tugging on the cable (from snow, animals, you falling down) pulls on the tree and not the equipment. Be aware that the cables are specific for the model camera used and are not interchangeable. Be sure you are using the correct one. Run the cable at least 2 m off the ground so that animals and most people pass below it. Do not plug the cable into the camera yet. Trim any branches that could be in the field of view or interrupt the beam when weighted with snow or that could lift into the field of view as snow melts. Attach a blue, 3 × 5 card with the station's identification number written in large letters with a waterproof, wide-tipped marking pen to the tree in the field of view. The card provides a scale for measurement of animals in photos and a record of location. Avoid white cards, which often are overexposed and difficult to read on the photo. Attach a laminated card with the following message to a nearby tree, positioning it out of view except when close to the set:
This is part of an important wildlife study being conducted by: ________________________. Please do not touch. It is an automatic camera that will take a picture of an animal as it comes to the bait, and will not harm the animal. If you have any questions, please contact ____________________________________.
Record in your field notebook the number of photographs taken during set-up, the final event number on the receiver, and the date and time of your test photo departure. This will be important information when you return to check the camera. A sketch of the set on the Survey Record form (appendix A and in pocket inside back cover) will help identify what configuration works and what does not. Be generous in taking field notes; these will be used in the future to reconstruct what happened, and to analyze what went wrong and right. Use flagging tape to mark the way to the site if necessary, but do not flag the site itself, to lessen the chance of its being found by people.
We will describe a station configuration that we have used with the Manley system. If you are using the Trailmaster TM500 or another dual-sensor system, modify the station as the equipment and reason dictate. Before going out, familiarize yourself with the camera and the other components of the system and how they work. The camera will operate without film so the system can be assembled in the office to make sure all components are working properly. Set the camera so that the day, number, and time are displayed and will be printed on each picture. Make sure you have all the equipment on the list provided at the end of the chapter.
An ideal site for the dual-sensor station is the intersection of several game trails. However, if deer densities are high, setting over game trails may produce too many pictures of non-target animals. Choose a site in a sheltered area, if possible, that will be shaded for most of the day. The camera unit produces the best pictures if it faces north. An area along the trail with three trees in a triangle will work best (figs. 9, 10, 11). The tree at a southern point serves to support the camera and should be 3.5-5.5 m from the target point. The two other trees support the cable holding the bait and should allow the bait to be at least 3 m from any tree trunk and hang over the trail or target point. Because the Manley dual-sensor camera operates as long as a warm, moving object is in its sensor field, the bait must be inaccessible. An animal should be attracted to the station but leave shortly because it cannot reach and feed on the bait. The Trailmaster TM500 requires setting a camera delay, which avoids exposing all the film in a short time.
Suspend the bait on 1/8-inch cable between the bait trees at least 3.5 m off the ground. Use 10-m cable pieces with looped ends that will allow the cables to be hooked together to reach the appropriate length. Using a climbing belt and either removable tree steps or climbing spurs, attach one end of the cable to one tree. Then climb the other tree, wrap the cable around it as many times as needed, and anchor the cable with a nail through the looped end. Remember to place the cable high enough so the bottom of the bait will be at least 3.5 m off the ground. The bait can be suspended by attaching a rigid wire hook to the bait, roping it up to the cable, and using a pole to push it out along the cable until it hangs over the appropriate target point. If you are using heavy baits, they can be suspended using a pulley system. Attach a pulley to the cable so that when it is strung, the pulley will hang over the target point. Before suspending the cable, tie a rope to the bait (using burlap sacks to contain the bait will help) and put the rope through the pulley. Suspend the cable, keeping in mind that the pulley plus a short length of rope will cause the bait to hang lower. The bait can then be pulled up and the rope tied off to a tree. Attach a laminated card with the following message to a nearby tree, positioning it out of view except when close to the set:
This is part of an important wildlife study being conducted by: ________________________. Please do not touch. It is an automatic camera that will take a picture of an animal as it comes to the bait, and will not harm the animal. If you have any questions, please contact ____________________________________.
Climb the camera tree and mount the camera no more than 3-4 m from the target point and high enough in the tree (3-4 m above the ground) to reduce its accessibility to people and animals. Pointing the camera slightly downward toward the target point shortens the sensor field so that an animal will not trigger the camera before it is close enough to be illuminated by the flash. Secure the camera to the tree using the mounting bracket and lag bolts. Mount the bracket at the approximate angle and direction needed to have the camera point directly at the target point. The camera angle can be adjusted slightly after it is mounted in the tree.
To test that the sensor field is appropriate for the site, position the unit and turn it on without film in the camera. With one person in the camera tree, the other person should walk into the target area from different directions to determine where the sensors first trigger the camera. Adjust the sensor field by blocking part of the sensor with the magnetic strips provided so that the camera is triggered only when the person is near the target point and toward the center of the picture.
When the test is complete, load film in the camera and climb down the tree. With a black marker, write the station number on the back of a data sheet. Walk into the sensor field and trigger a single picture so that the station number will be identified in the photograph. Record in your field notebook the number of pictures taken during set-up, and the date and time of your departure from the site. A sketch of the site on the Survey Record form (Appendix A and in pocket inside back cover) including directions and approximate distances will help in evaluating the effectiveness of different configurations. Leave the site without walking through the sensor field. Write a short description of how to get to the site (a dot on an orthophoto-quad, topographic map, or aerial photo is extremely helpful), and flag the way to the site if necessary, but do not flag the site itself to lessen the chance of its being found by people.
These stations are most easily established with two people, one setting up the mounting stake and camera and the other preparing the bait. If only one person is available, the camera portion should be assembled and in place before bait is handled to avoid transferring scent to the camera unit (Jones and Raphael 1993). Avoid putting stations in direct sunlight; light can penetrate these cameras. Remove vegetation so that the camera has an unobstructed view of the bait and the monofilament line is not obstructed (figs. 5, 6). Dig a hole about 6 inches deep for the mounting stake, put the bottom of the stake in it, and tap the soil around its base firmly to secure it. Rocks can be used for additional support or to help adjust the angle of the stake.
Load the camera with 12-exposure, 100-ASA, 110 print film, and advance it to exposure 1. Twenty- or 24-exposure film is also satisfactory but will leave more unexposed film. The date and station number should be identified on each film cartridge before it is loaded into the camera to avoid confusing the rolls when they are removed. This is important because there will probably be at least six cameras per sample unit. Attach the unit to the camera platform with Velcro, and if necessary, place the cut milk jug over it to protect it from rain.
Tie the monofilament line (> 20 lb test) to the 2-lb test trigger line, feed the former through the eye screws and ground wire to the washer on the "bait side" of the ground wire (figs. 5, 12). After attaching the line to the washer, move the ground wire away from the camera until the line is taut. The washer should be between 4 and 8 feet from the mounting stake. The second person should tie a strand of thread around the chicken and then tie the thread to the washer, leaving no more than 1 inch between the bait and the washer. Time can be saved by tying thread to all the chicken pieces you will use during the day before going into the field.
Do not rely only on the viewfinder to aim the camera. The aim will differ with the position of the observer's eye. Like all other aspects of setting up a camera, aiming should be practiced before the cameras are set up in the field. Some technicians find that the camera is properly aimed when, viewing from the bait, the operator can see neither the top of the camera nor the bottom of the platform. Others sight the bait so that it is in the lower third of the viewfinder. Still others use a length of line stretched from stake to bait to determine horizontal alignment, and straight up from the bait for vertical alignment. Placing the bait slightly uphill from the camera or angling the mounting stake slightly toward the bait will usually help center the bait in the photograph. Attach a laminated information slip with the following information to each camera stake:
This is part of an important wildlife study being conducted by: ________________________. Please do not touch. It is an automatic camera that will take a picture of an animal as it comes to the bait, and will not harm the animal. If you have any questions, please contact: ________________________________________________.
When you consider the camera "set" in the field, take one or two test shots, holding a label card (a piece of 8 × 8-inch paper with the camera number, date, and station number indicated in large print) in view of the camera. Record in your field notes the number of test shots and the exposure number on which the camera is set when you leave, and then transfer this and other general information onto the Line-Triggered Camera Results form (appendix A and in pocket inside back cover).
Checking the Stations
35-mm systems: Check the station four times at 7-day intervals so that it is operating 28 days or until the target species is detected. Allow extra days to achieve the minimum survey period if the station becomes inoperative. Pay particular attention to tracks in the snow near the station every time you check it.
Line-triggered system: Stations should be set for a minimum of 12 nights and checked every other day for at least six visits (excluding setup) or until the target species is detected. If the target species is not detected during the first 12-day session, run a second session during the alternate season (either spring or fall) for 12 days or until the target species is detected. Allow extra days to achieve the minimum survey period if the station becomes inoperative.
The station should be checked at weekly intervals to ensure that it is working and that a non-target animal such as a squirrel has not immediately found it and used all the film. Weekly visits are also necessary to check the camera batteries, which can discharge rapidly in cold winter conditions (Foresman and Pearson 1995). The station should be checked at least four times at weekly intervals, so that it is operating for 28 days.
Before you leave to check a station, be sure you have new bait and replacement film and batteries, Camera Results form (see appendix A, and in pocket inside back cover), contact cleaner and brush, and equipment for recording tracks in snow (see Chapter 5). Be familiar with the tracking material in Chapter 5. This is important. J. Copeland (pers. comm.) detected wolverine visits to photographic bait stations more frequently by tracks in snow than by photographs. Do not go alone, do check the weather, and bring appropriate gear. A list of equipment is provided below.
When you approach the set, look for and identify, describe, measure, photograph, and collect, as appropriate, tracks, scat, or any other sign of what may have been there. Note whether the bait is still present, whether it has been consumed, etc. Has the tree been scratched up, or have any string or wires been chewed or broken? Record these observations on the 35-mm Camera Results form (appendix A, and in pocket inside back cover).
Press R/O ADV to cycle through the "events" (i.e., interruptions of the beam). Record on the Camera Results form the date, event number, and time of only those events that caused a photograph to be taken (i.e., those that show a period between the first and second digit locations on the receiver's display; see "Displays" section of the Trailmaster manual). If you miss something, cycle through the data again.
After recording the event data you will know how many frames were exposed. Replace the film if half or more of the frames were shot, or if you suspect from tracks or other sign that a target species has been at the set. To rewind a roll of film before its end, press the rewind button on the bottom of the camera gently with a ball-point pen. Immediately upon removing the film, write the station code and date on it with a marking pen, and put it into a film canister to keep it dry. Check the three electrodes on the camera cable for corrosion, and clean them if necessary.
With the Yashica camera, replace the two AA batteries after 1-2 weeks in the field. Avoid getting moisture or any other contamination in the battery or film compartments, or on the rubber seals; remove any moisture with a cotton-tipped swab. The Olympus cameras have a battery display on the LCD panel when the lens cover is opened. A solid battery figure indicates that the batteries are good; an outline of a battery, either flashing or on continuously, means that the batteries must be changed. Replace them with one (Infinity Mini DLX) or two (Infinity Twin) "DL123A" or "CR123A" lithium batteries. With the Infinity Mini DLX, check the day and time display to be sure it is still correct after changing the battery.
The batteries in the Trailmaster transmitter and receiver will last for 30 days in the field. When the batteries in the transmitter are low, the red indicator light on its base will immediately come on and quickly turn off when the unit is turned off; the light will stay on, or will not flash, when the unit is turned on. The receiver has a "Lob" ("low on batteries") display and will not record events if the batteries are low. If the batteries have been in use more than 20 days, or if either the transmitter or receiver indicates low batteries, replace the batteries in both units with four new alkaline C-cells. Do this over a jacket or cloth to avoid losing the tiny hex screws or wrench when you drop them into the snow or forest litter. Always replace batteries in both units at the same time. Before replacing the backs of the transmitter and receiver, make sure the rubber-gasket seals are seated in the groove, and that there is no moisture or other contamination on them.
If you are going to keep the station in place, replace and align the transmitter, receiver, and camera as necessary. Clean the camera lens with lens tissue and fluid if it is dirty. Clear the events from the receiver. Take a test photo to determine that all is operating correctly, and record the frame and event numbers left on the units when you leave.
If you find that a bear, coyote, or gray fox (Urocyon cinereoargenteus) has found the station and has been frequently returning, move the station at least 0.5 mile from the first location. If smaller animals such as birds or squirrels are triggering the camera, move the beam farther below the bait or out from the tree so that smaller-bodied animals do not break it. Check to see that no branches that may serve as perches remain near the beam.
Stations should be checked 4 times at weekly intervals. When checking a station, have all the gear necessary to establish one, including extra film and batteries. A spare camera unit or two will allow you to replace faulty ones if necessary. Bring equipment for recording tracks (see Chapter 5). Be familiar with the tracking material in Chapter 5. This is important. J. Copeland (pers. comm.) detected wolverine visits to photographic bait stations more frequently by tracks in snow than by photographs.
When you approach the set, look for and identify, describe, measure, photograph and collect, as appropriate, any tracks, scat, or other sign of which animals may have been to the station. Has the bait or scent been disturbed? Has the bait tree or camera tree been climbed? Record these observations on the Camera Results form (appendix A and in pocket inside back cover).
Enter the sensor field with the station sign, and trigger a single picture. Climb the camera tree, turn the unit off, and open the box. Record the frame that the camera is on. If the roll is more than half exposed, or if you suspect that a target species has visited the station, remove the film. Using a digital pocket battery tester, test both the 12-v battery and camera battery, and change them if they are low (this will depend on how long the unit has been out and when you plan to visit the site again). Remember, new, fully charged batteries will probably need recharging after 20 days, so you will probably need to replace the batteries after 1-2 weeks. Put new film in the camera if needed, check the batteries, hook the unit up, and turn it on just before you climb down the tree. Enter the sensor field with a sign indicating the station number and date, and expose a single picture. Leave the site without again entering the sensor field.
When checking the camera, first determine whether the film can be advanced. If so, a photograph has been taken since the last visit. Record this and other information on a copy of the Line-Triggered Camera Results form (appendix A and in pocket inside back cover). Examine the camera unit, and note whether the camera is functional. Reasons for non-functional cameras include the thread being chewed through, the monofilament line obstructed or broken, and misattachment of the trigger line. To verify that the unit is functional, take a test photograph at every visit. To save processing costs, take this test shot with your hand blocking the lens so that no print will be developed from this exposure. Replace the bait at every visit. Initially, replace the film after one or two exposures (excluding test shots). Once the crew is familiar with the operation of the camera and the area appears safe from vandalism and persistent bear damage, the film can be left in the camera longer. If the film is to be removed, make certain to advance it to the end of the roll before removing the cartridge. Failure to do so will result in the overexposure of the last few photographs and loss of data. Before leaving the station, make sure to advance film to the next exposure. If necessary, take additional test shots with the lens blocked to test the camera operation. Other general suggestions for checking line-triggered cameras are outlined in Jones and Raphael (1993).
When you remove exposed film from a camera, label it with the station number and date so that it will not be confused with other rolls. Fine-tipped, indelible markers work best. Often the least expensive developing is provided by large discount or drug stores, which typically make two prints of each exposure. Record the camera number, station number, and time period over which the film was exposed on the processing envelope and on the receipt. When using 110 film, if a custom-processing laboratory is available, have a contact sheet printed first. Review each frame on the sheet, and if possible, request that only those photographs that contain animal subjects be printed at full size. If custom processing is not available, and the budget is especially tight, have the negatives developed first and then select for printing only those frames that, when examined under a lens, contain an animal subject. However, there is a danger of missing something important if just the negatives are examined.
Label the back of each photograph with the species, date, and station. This same information should be entered on the Camera Results form. Archive all photographs in protective plastic covers. Examples of prints from 35-mm and 110 camera systems are presented in appendix C.
We recommend three forms for data: Survey Record, Camera Results (different for 35-mm and line-triggered systems), and Species Detection form (appendix A and in pocket inside back cover). In wet areas or during snowy seasons, we strongly recommend using indelible ink and photocopies of the data sheets made on waterproof paper. All forms should be stored with photographs in a 3-ring binder as a permanent, complete record of what was done, where, when, by whom, and what the results were. Record all species detected. Your survey efforts can contribute to understanding the distributions of a variety of species in addition to MFLW.
Survey Record Form
This form contains information on each survey's location and details on its configuration. It is important to identify the legal description and the Universal Transverse Mercator (UTM) coordinates at each station. Collectively, these forms become a record of all the surveys conducted in the administrative area, regardless of their outcome.
Camera Results Form
Single and Dual Sensor
When checking stations using either the single-sensor system or the Trailmaster dual sensor, fill in the Date, Event Number, and Time columns in the field as you cycle through the Readout/Advance mode. Record data only for those events associated with a picture, which is indicated by the decimal point between the first and second digits on the receiver's display. Fill in the Contents section after the film is developed, noting any species present.
When checking the stations using the dual-sensor system made by Manley, record in the comments section the number of frames exposed. When the film is developed, record the Date, Time, and Contents of each exposure by examining the prints. Ignore the Event column.
In a 3-ring binder, store the data sheets, negatives, and prints by sample unit and station. Put the negatives and prints in plastic sleeves made for storing film.
Line-Triggered
Use this form when establishing and checking the line-triggered camera stations. Use a separate sheet for each day, and record information for each camera visit, whether an exposure was taken or not. Record the station number, the camera number, and the exposure number (at both your arrival and your departure from the station) at each visit. Record the visit number (0 for setup, and 1-6 for station visits) and the number of nights since the last visit (should be two in most cases). Note also whether a photo was taken since the last visit and the number of test shots taken at each check. The species recorded will be determined after the film is processed, so that space will remain blank until later. Remember, do not terminate effort on the sample unit until the film is developed and you are certain the target species was photographed.
Species Detection Form
When a survey is successful at detecting marten, fisher, lynx, or wolverine, complete the Species Detection form, which characterizes successful surveys and is used for all methods (camera, track-plate, snow-track). Complete one form for each species detected. Submit one copy to the state Natural Heritage office (addresses provided in Chapter 1), and archive a copy at the office of the agency that manages the land where the survey was conducted. Most Natural Heritage databases record only positive results from detection surveys.
Comparisons of Camera Systems
The perfect remote camera system is yet to be developed. In this section we discuss some of the strengths and weaknesses of each of the camera systems described to allow investigators to decide which may be most appropriate for their circumstances.
The first major difference between 35-mm and line-triggered systems is in the cost of the equipment. The 35-mm systems cost $500-$600, and the line-triggered systems less than $25. This substantial difference in initial price, however, may be mitigated by differences in labor involved in the construction of the equipment and the frequency of checking the stations. The 35-mm systems require virtually no assembly upon receipt from the manufacturer. The line-triggered system must be built by the user. Because the 35-mm systems can shoot an entire 36-exposure roll of film, they may be left in the field longer without being checked than the line-triggered systems, which can take only one picture and then must be rebaited and reset. However, damage or loss from vandalism, theft, or bears is more serious with the 35-mm systems than with the line-triggered system. Both of the 35-mm systems can be more readily used in severe weather, especially winter, than the line-triggered cameras.
Another difference between the two types of camera system is the triggers. The 35-mm systems use infrared (single sensor) or infrared and microwave (dual sensor) triggers, which require only that an animal be near the bait to be photographed. In contrast, animals must physically pull the bait to be photographed by the line-triggered system. In addition, the sensitivity of the triggers on several of the 35-mm systems is adjustable, and the film displays the date and time. The line-triggered camera lacks these features. Jones and Raphael (1991) found that half of all photos taken by line-triggered cameras did not record a subject and that 65 percent of these problems were due to failure of the disposable ("flip") flash. However, the 110 camera recommended here has an internal flash that rarely fails.
Of the 35-mm systems we discussed, the Trailmaster TM1500 allows the user to specify the minimum length of time between photographs to lessen the probability that one animal will expose most of the film. Although this is not possible with the Manley dual-sensor model, the dual sensor made by Trailmaster (TM500) does have this feature. With the single-sensor camera system the animal must break a narrow infrared beam. The dual-sensor system requires only that an animal come into the field, up to 11 m from the camera. However, dual-sensor systems may be triggered when the sun heats up the background, so it is best to use them in cold conditions. The TM1500 uses eight alkaline "C" cells; the Manley dual sensor uses a heavier 12-volt battery, which is more difficult to transport. Some 12-volt batteries may leak; gel-cell batteries that do not leak can be used but at greater expense. The difference in batteries accounts for the approximately 10-kg difference in the weight of the two systems. Both Trailmaster models store the date and time of all "events." The Manley dual-sensor system is housed in a metal box, which affords some protection from weather and bear damage and can be modified to be locked shut and cabled to a tree to help prevent theft and vandalism.
Other commercially available products may resolve some of the problems with dual-sensor systems. The Trailmaster TM500 uses four alkaline C-cells, and the Deerfinder uses six D-cells and two AAA batteries, which results in much more portable systems. The TM500's batteries last several months in the field. These dual-sensor systems also allow the programming of a camera delay and store the date and time of up to 1000 (Trailmaster) or 495 (Deerfinder) events. We do not yet have extensive field experience with these systems, but preliminary results from simultaneous use of the Manley and TM500 dual-sensor systems indicate great advantages of the lighter weight, ability to program a camera delay, and storage of event data provided by the TM500 (K. R. Foresman, pers. comm.). The TM500 also allows adjustment of the sensitivity of its dual-sensor trigger, which may prevent small, non-target species from triggering the camera.
Remote video technology also is advancing, and video has several obvious advantages over still photography. Video tape does not require developing, and it may be used repeatedly. Video systems allow continuous photographic monitoring rather than a "snapshot," and can record several hundred "events," rather than the 36 events possible on a standard roll of film. Trailmaster offers a modified Sony Handycam camcorder to be used with the Trailmaster TM700v. A dual-sensor monitor turns the video camera on when it detects motion and heat, and turns the camera off when the animal moves out of range of the sensors. The tape lasts 2 hours, and the system stores the date and time of up to 1000 events. Other remote video systems are available from Compu-Tech Systems. Remote videography has been used to detect fishers in Oregon (S. Armentrout, pers. comm.; F. Wahl, pers. comm.). We have had no experience with these systems, however, and their cost (several thousand dollars) will probably prevent their common use in detection surveys.
In summary, the line-triggered system is inexpensive but requires more labor and is less versatile and rugged than the 35-mm systems. Once the bait is taken, the camera must be reset for another picture; date and time are not displayed on the film. The 35-mm systems are initially expensive, but require no assembly and because they can shoot an entire roll of film, they require less labor. The single-sensor's trigger requires precise placement of the system and can be adjusted for sensitivity. The Trailmaster allows the minimum interval between pictures to be set by the user and electronically stores the date and time of each event. Dual-sensor systems can detect animals over a broader field, the size of which is somewhat adjustable. The Manley dual sensor uses a heavy, 12-v battery, does not allow a minimum interval between photographs to be set, does not store the date and time of events, and is housed in a metal box that provides mechanical protection and may be locked. All Trailmasters operate with alkaline C-cells. The TM500 dual sensor allows specification of a minimum interval between photographs of 1 to 98 minutes, stores the date and time for up to 1000 events, and allows adjustment of the sensitivity of the trigger.
Assumptions for 35-mm systems:
Armentrout, S., Wildlife Biologist, Rogue River National Forest. Prospect, OR. [Personal communication]. 1994.
Armstrong, B.; Williams, K. 1986. The avalanche book. Golden, CO: Fulcrum, Inc.
Arthur, S.M.; Krohn, W.B. 1991. Activity patterns, movement and reproductive ecology of fishers in south central Maine. Journal of Mammalogy 72: 379-385.
Baker, J. A.; Dwyer, P. M. 1987. Techniques for commercially harvesting furbearers. In: Novak, M.; Baker, J. A.; Obbard, M. E.; Malloch, B., eds. Wild furbearer management and conservation in North America. North Bay, ON: Ontario Trappers Association; 970-995.
Banci, V. 1989. A fisher management strategy for British Columbia. Wildlife Bulletin No. B-63. Victoria, BC: Ministry of Environment; 117 p.
Bull, E. L.; Holthausen, R. S.; Bright, L. R. 1992. Comparison of three techniques to monitor marten. Wildlife Society Bulletin 20: 406-410.
Chow, L., Wildlife Biologist. National Biological Service, Yosemite Research Center. El Portal, CA. [Personal communication]. 1993.
Copeland, J. P., Wildlife Biologist. Department of Fish and Game. Stanley, ID. [Personal communication]. 1993.
Daffern, T. 1992. Avalanche safety for skiers and climbers. Seattle, WA: Cloudcap.
Danielson, W.R.; Fuller, T.K.; DeGraaf, R.M. 1995. An inexpensive, reliable, and compact camera system for wildlife research. Unpublished draft supplied by authors.
Forgey, W. W. 1991. The basic essentials of hypothermia. Merrillville, IN: ICS Books, Inc.
Foresman, K. R., Professor of Biology. University of Montana. Missoula, MT. [Personal communication]. 1995.
Foresman, K.R.; Pearson, D.E. 1995. Testing of proposed survey methods for the detection of wolverine, lynx, fisher and American marten in Bitterroot National Forest. Final Report. Unpublished manuscript supplied by authors.
Fowler, C., Wildlife Biologist. Tahoe National Forest, Foresthill, CA. [Personal communication]. 1992.
Fowler, C. H.; Golightly, R. T. 1993. Fisher and marten survey techniques on the Tahoe National Forest. Final Report. Agreement No. PSW-90-0034CA. Arcata, CA: Humboldt State University Foundation and Forest Service, U.S. Department of Agriculture; 119 p.
Geary, S. M. 1984. Fur trapping in North America. Piscataway, NJ: Winchester Press; 154 p.
Gorman, S. 1991. AMC guide to winter camping: wilderness travel and adventure in the cold-weather months. Boston, MA: Appalachian Mountain Club Books.
Halfpenny, J. C.; Ozanne, R. 1989. Winter, an ecological handbook. Boulder, CO: Johnson Publishing Co.
Hash, H. S. 1987. Wolverine. In: Novak, M.; Baker, J. A.; Obbard, M. E.; Malloch, B., eds. Wild furbearer management and conservation in North America. North Bay, ON: Ontario Trappers Association; 575-585.
Hatler, D. F. 1989. A wolverine management strategy for British Columbia. Wildlife Bulletin No. B-60. Victoria, BC: Ministry of Environment; 135 p.
Holden, T., Wildlife Biologist, Malheur National Forest. Prairie City, OR. [Personal communication]. 1994.
Hornocker, M. G.; Hash, H. S. 1981. Ecology of the wolverine in northwestern Montana. Canadian Journal of Zoology 59: 1286-1301.
Jones, L. C.; Raphael, M. G. 1991. Ecology and management of marten in fragmented habitats of the Pacific Northwest. Progress Report FY91. Portland, OR: Pacific Northwest Research Station, Forest Service, U.S. Department of Agriculture; 36 p.
Jones, L. L. C.; Raphael, M. G. 1993. Inexpensive camera systems for detecting martens, fishers, and other animals: guidelines for use and standardization. Gen. Tech. Rep. PNW-GTR-306. Portland, OR: Pacific Northwest Research Station, Forest Service, U.S. Department of Agriculture; 22 p.
Kucera, T. E.; Barrett, R. H. 1993. The Trailmaster camera system for detecting wildlife. Wildlife Society Bulletin 21: 505-508.
Kucera, T. E.; Barrett, R. H. 1995. Trailmaster camera system: response. Wildlife Society Bulletin 23: 110-113.
Laurance, W. F.; Grant, J. D. 1994. Photographic identification of ground-nest predators in Australian tropical rain forests. Wildlife Research 21: 241-248.
Mace, R. D.; Minta, S. C.; Manley, T.; Aune, K. E. 1994. Estimating grizzly bear population size using camera sightings. Wildlife Society Bulletin 22: 74-83.
Major, R. E.; Gowing, G. 1994. An inexpensive photographic technique for identifying nest predators at active nests of birds. Wildlife Research 21: 657-666.
Martin, S. K. 1994. Feeding ecology of American martens and fishers. In: Buskirk, S. W.; Harestad, A. S.; Raphael, M. G.; Powell, R. A., eds. Martens, sables, and fishers: biology and conservation. Ithaca, NY: Cornell University Press; 297-315.
Pittaway, R. J. 1978. Observations on the behavior of the fisher (Martes pennanti) in Algonquin Park, Ontario. Le Naturaliste canadien 105: 487-489.
Pittaway, R. J. 1983. Fisher and red fox interactions over food. Ontario Field Biologist 37: 88-90.
Pozos, R. S.; Born, D. O. 1982. Hypothermia: causes, effects, prevention. Piscataway, NJ: New Century Publishers, Inc.
Raphael, M. G.; Barrett, R. 1984. Diversity and abundances of wildlife in late successional Douglas-fir forests. In: Proceedings, New Forests for a Changing World, 1983 SAF National Convention, Portland, OR. Washington, DC: Society of American Foresters; 34-42.
Schimelpfenig, T.; Lindsey, L. 1991. NOLS wilderness first aid. Lander, WY: National Outdoor Leadership School Publications.
Seglund, A. E.; Golightly, R. T. 1993. Fisher survey techniques on the Shasta-Trinity National Forest. Progress Report. Unpublished draft supplied by authors.
Sheneman, J., Medical Doctor, California Department of Health Sciences. Berkeley, CA. [Personal communication]. 1992.
Strickland, M.A.; Douglas, C.W.; Novak, M.; Hunzinger, N.P. 1982. Marten. In: Chapman, J.A.; Feldhamer, G.A., eds. Wild mammals of North America: biology, management, and economics. Baltimore, MD: Johns Hopkins University Press; 599-612.
Wahl, F., Wildlife Biologist, Rogue River National Forest. Butte Falls, OR. [Personal communication]. 1995.
Weiss, H. 1988. Secrets of warmth: warmth for comfort or survival. Brooklyn, NY: Vibe Publications.
Wilkerson, J. A., ed. 1992. Medicine for mountaineering and other wilderness activities. 4th ed. Seattle, WA: The Mountaineers.
Wilkerson, J. A.; Bangs, C. C.; Hayward, J. S., eds. 1986. Hypothermia, frostbite, and other cold injuries: prevention, recognition, and prehospital treatment. Seattle, WA: The Mountaineers.
Wilkinson, E. 1992. Snow caves for fun and survival. Boulder, CO: Johnson Publishing Co.
York, E.C.; Fuller, T.K.; Powell, S.M.; DeGraaf, R.M. 1995. A description and comparison of techniques to estimate and index fisher density. Unpublished draft supplied by authors.
Young, S. P. 1958. The bobcat of North America. Washington, DC: Wildlife Management Institute.
Zielinski, W. J.; Truex, R.; Ogan, C.; Busse, K. 1995. Detection survey for fishers and martens in California 1989-1994: summary and interpretations. Edmonton, AB: Second International Martes Symposium. Unpublished draft supplied by authors.
1. When first programming the unit, or after changing batteries, when all memory is erased: Press TIME SET then R/O ADV to advance to correct hour. Repeat this command to correct the following: minute, year (tens), year (ones), month, day of month, pulses, and camera delay.
2. To read out event data:
3. To clear event data (note: this does not change pulses or camera delay):
4. To put receiver into Event Gathering Mode:
A. Fisher; Klamath National Forest, California. Single-sensor camera.
B. Fisher; Six Rivers National Forest, California. Single-sensor camera.
C. Marten; Sierra Nevada, California. Single-sensor camera.
D. Marten; Sierra Nevada, California. Single-sensor camera.
E. Lynx; Montana. Dual-sensor (Manley) camera.
F. Wolverine; Sawtooth National Forest, Idaho. Dual-sensor (Manley) camera.
G. Wolverine; Sawtooth National Forest, Idaho. Dual-sensor (Manley) camera.
H. Marten; Sequoia National Forest, California. Line-triggered camera.
I. Marten; Sequoia National Forest, California. Line-triggered camera (note enclosed track plate box in background).
J. Fisher; Six Rivers National Forest, California. Line-triggered camera.
K. Fisher; Sequoia National Forest, California. Line-triggered camera.
L. Juvenile fisher; Six Rivers National Forest, California. Line-triggered camera.
Section Four: Unix and RISC, a New Hope
The TRON (The Real-time Operating system Nucleus) project defined a standard Japanese CPU architecture to accompany its family of operating systems. The basic design is scalable, from 32 to 48 and 64 bit designs, with 16 general purpose registers. It is a memory-data instruction set, but an elegant one. One early design was the Mitsubishi M32 (mid 1987), which optimised the simple and often used TRON instructions, much like the 80486 and 68040 did. It featured a 5 stage pipeline, dynamic branch prediction with a target branch buffer similar to that in the AMD 29K. It also featured an instruction prefetch queue, but being a prototype, had no MMU support or FPU.
Commercial versions such as the Gmicro/200 (1988) and other Gmicro models from Fujitsu/Hitachi/Mitsubishi, and the Toshiba Tx1, were also introduced, and a 64 bit version (CHIP64) began development, but they didn't catch on in the non-Japanese market (definitive specifications or descriptions of the OS's actual operation were hard to come by, while research systems like Mach or BSD Unix were widely available for experimentation). In addition, newer techniques (such as load-store designs) overshadowed the TRON standard. Companies such as Hitachi switched to load-store designs, and many American companies (Sun, MIPS) licensed their (faster) designs openly to Japanese companies. TRON's promise of a unified architecture (when complete) was less important to companies than raw performance and immediate compatibility (Unix, MS-DOS/MS Windows, Macintosh), and it has not become significant in the industry, though TRON operating system development continued as an embedded and distributed operating system (such as the Intelligent House project, or more recently the TiPO handheld digital assistant from Seiko (February 1997)) implemented on non-TRON CPUs.
NEC produced a similar memory-data design around the same time, the V60/V70 series, using thirty two registers, a seven stage pipeline, and preprocessed branches. NEC later developed the 32-bit load-store V800 series, and became a source of 64-bit MIPS load-store processors.
SPARC, the Scalable Processor ARChitecture from Sun Microsystems, grew out of Sun's success as a maker of workstations built around 68000-based CPUs and a standard operating system, Unix. Research versions of load-store processors had promised a major step forward in speed [See Appendix A], but existing manufacturers were slow to introduce a RISC processor, so Sun went ahead and developed its own (based on Berkeley's design). In keeping with their open philosophy, they licensed it to other companies, rather than manufacture it themselves.
SPARC was not the first RISC processor. The AMD 29000 (see below) came before it, as did the MIPS R2000 (based on Stanford's experimental design) and Hewlett-Packard PA-RISC CPU, among others. The SPARC design was radical at the time, even omitting multiple cycle multiply and divide instructions (added in later versions), using repeated single-cycle "step" instructions instead (similar in idea to the square root step instruction in the Transputer T-800), while most RISC CPUs were more conventional.
SPARC usually contains about 128 or 144 integer registers (memory-data designs typically had 16 or fewer). At any one time 32 registers are visible - 8 are global, the rest are allocated in a 'window' from a stack of registers. The window is moved 16 registers down the stack during a function call, so that the upper and lower 8 registers are shared between functions, to pass and return values, and 8 are local. The window is moved up on return, so registers are loaded or saved only at the top or bottom of the register stack. This allows functions to be called in as little as 1 cycle. Later versions added an FPU with thirty-two (non-windowed) registers. Like most RISC processors, global register zero is wired to zero to simplify instructions, and SPARC is pipelined for performance (a new instruction can start execution before a previous one has finished), but not as deeply as others - like the MIPS CPUs, it has branch delay slots. Also like previous processors, a dedicated condition code register (CCR) holds comparison results.
SPARC is 'scalable' mainly because the register stack can be expanded (up to 512, or 32 windows), to reduce loads and saves between functions, or scaled down to reduce interrupt or context switch time, when the entire register set has to be saved. Function calls are usually much more frequent than interrupts, so the large register set is usually a plus, but compilers now can usually produce code which uses a fixed register set as efficiently as a windowed register set across function calls.
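As a rough illustration of the windowing idea (a simplified C model, not Sun's actual hardware - the register file size, the modulo arithmetic and the spill handling are all idealized here), a call can be treated as sliding the window pointer down by 16 registers, so the caller's 8 'out' registers become the callee's 8 'in' registers without any copying:

    #include <stdio.h>

    #define NWINDOWS 8                      /* implementation dependent (2..32)   */
    #define NREGS    (16 * NWINDOWS)        /* the stack of windowed registers    */

    static unsigned regstack[NREGS];
    static int cwp = 0;                     /* current window pointer             */

    /* Visible window: outs = regstack[cwp..cwp+7], locals = +8..15,
       ins = +16..23.  A call (SAVE) moves cwp down by 16, so the caller's
       outs and the callee's ins are the same storage - that 8-register
       overlap is how arguments and return values are passed.             */
    static unsigned *out_reg(int n) { return &regstack[(cwp + n) % NREGS]; }
    static unsigned *in_reg(int n)  { return &regstack[(cwp + 16 + n) % NREGS]; }

    static void save(void)    { cwp = (cwp - 16 + NREGS) % NREGS; }  /* call    */
    static void restore(void) { cwp = (cwp + 16) % NREGS; }          /* return  */

    int main(void)
    {
        *out_reg(0) = 41;                   /* caller places an argument in %o0   */
        save();                             /* "function call"                    */
        *in_reg(0) += 1;                    /* callee sees it as %i0, no copying  */
        restore();                          /* "return"                           */
        printf("%%o0 after return = %u\n", *out_reg(0));   /* prints 42           */
        return 0;
    }

Real hardware traps to the operating system to spill or fill a window only when the call depth exceeds the number of implemented windows, which is why making that number scalable matters.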
SPARC is not a chip, but a specification, and so there are various designs of it. It has undergone revisions, and now has multiply and divide instructions. Original versions were 32 bits; 64 bit and superscalar versions were later designed and implemented (beginning with the Texas Instruments SuperSparc in late 1992), but performance lagged behind other load-store and even Intel 80x86 processors until the UltraSPARC (late 1995) from Texas Instruments and Sun, and the superscalar HAL/Fujitsu SPARC64 multichip CPU. Most emphasis by licensees other than Sun and HAL/Fujitsu has been on low cost, embedded versions.
The UltraSPARC is a 64-bit superscalar processor series which can issue up to four instructions at once (but not out of order) to any of nine units: two integer units, two of the five floating point/graphics units (add, add and multiply, divide and square root), and the branch and load/store units. The UltraSPARC also added a block move instruction which bypasses the caches (2-way 16K instruction, 16K direct mapped data) to avoid disrupting them, and specialized pixel operations (VIS - the Visual Instruction Set) which can operate in parallel on 8, 16, or 32-bit integer values packed in a 64-bit floating point register (for example, four 8 x 16 -> 16 bit multiplications in a 64 bit word, a sort of simple SIMD/vector operation). More extensive than the Intel MMX instructions, or the earlier HP PA-RISC MAX and Motorola 88110 graphics extensions, VIS also includes some 3D to 2D conversion, edge processing and pixel distance operations (for MPEG and pattern-matching support).
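A sketch of the kind of partitioned arithmetic VIS performs (a behavioural model in plain C only - the real VIS multiply instructions also round and keep only part of each 24-bit product, and the function name here is invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Four 8-bit pixels packed in one word are multiplied by four 16-bit
       coefficients packed in a 64-bit word, giving four 16-bit results. */
    static uint64_t pmul8x16(uint32_t pixels, uint64_t coeffs)
    {
        uint64_t result = 0;
        for (int i = 0; i < 4; i++) {
            uint32_t p = (pixels >> (8 * i)) & 0xFF;              /* unsigned pixel   */
            int16_t  c = (int16_t)((coeffs >> (16 * i)) & 0xFFFF);
            int32_t  prod = (int32_t)p * c;                       /* 8 x 16 = 24 bits */
            result |= (uint64_t)((uint16_t)prod) << (16 * i);     /* keep 16 per lane */
        }
        return result;
    }

    int main(void)
    {
        uint32_t pixels = 0x04030201;                 /* pixels 1, 2, 3, 4        */
        uint64_t coeffs = 0x0004000300020001ULL;      /* coefficients 1, 2, 3, 4  */
        printf("%016llx\n",                           /* lanes hold 1, 4, 9, 16   */
               (unsigned long long)pmul8x16(pixels, coeffs));
        return 0;
    }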
The UltraSPARC I/II were architecturally the same. The UltraSPARC III (mid-2000) did not add out-of-order execution, on the grounds that memory latency eliminates any out-of-order benefit, and did not increase instruction parallelism after measuring the instructions in various applications (although it could dispatch six, rather than four, to the functional units, in a fourteen-stage pipeline). It concentrated on improved data and instruction bandwidth.
The HAL/Fujitsu SPARC64 series (used in Fujitsu servers running Sun Solaris software) concentrates more on execution performance than on bandwidth, the focus of the Sun versions. The initial version can issue up to four in-order instructions simultaneously to four buffers, which issue to four integer, two floating point, two load/store, and the branch unit, and instructions may complete out of order, unlike UltraSPARC (an instruction completes when it finishes without error, is committed when all instructions ahead of it have completed, and is retired when its resources are freed - these are 'invisible' stages in the SPARC64 pipeline). A combination of register renaming, a branch history table, and processor state storage (like in the Motorola 88K) allows for speculative execution while maintaining precise exceptions/interrupts (renamed integer, floating, and CC registers - trap levels are also renamed and can be entered speculatively). VIS extensions are not implemented, but are emulated by trapping to a handler routine.
The SPARC64 V (late 2002) is aggressively out-of-order, concentrating on branch prediction more than load latency, although it does include data speculation (loaded data are used before they are known to be valid - if the data turns out to be invalid, the load operation is repeated, but this is still a win if data is usually valid (in the L1 cache)). It can dispatch six to eight instructions to: four integer units, two FPU (one with VIS support), two load units and two store units (store takes one cycle, load takes at least two, so the units are separate, unlike other designs). It has a nine stage pipeline for single-cycle instructions (up to twelve for more complex operations), with the integer and floating point registers forming part of the integer/floating point reorder buffers, allowing operands to be fetched before dispatching instructions to execution pipe segments.
Instructions are predecoded in cache, incorporating some ideas from dataflow designs - source operands are replaced with references to the instructions which produce the data, rather than matching up an instruction's source registers with destination registers of earlier instructions during result forwarding in the execution stage. The cache also performs basic block trace scheduling to form issue packets, something normally reserved for compilers.
Fujitsu uses these CPUs in its PRIMEPOWER servers, which compete with mainframes, so they are designed with mainframe reliability features. Parity or error checking and correction bits are used on internal busses, and the CPU will actually restart the instruction stream after certain errors (logged to registers which can be checked to identify a failing CPU which should be replaced).
While the UltraSPARC III has mediocre performance on benchmarks (emphasizing data throughput, with mixed success), the SPARC64 V is among the top 64-bit CPUs in SPEC benchmarks.
The AMD 29000 (1987) was inspired by the Berkeley RISC design (and the IBM 801 project), as a modern successor to AMD's earlier 2900 bitslice series (which began around 1981). Like the SPARC design that was introduced shortly later, the 29000 has a large set of registers split into local and global sets. But though it was introduced before the SPARC, it has a more elegant method of register management.
The 29000 has 64 global registers, in comparison to the SPARC's eight. In addition, the 29000 allows variable sized windows allocated from the 128 register stack cache. The current window or stack frame is indicated by a stack pointer (a modern version of the ISAR register in the Fairchild F8 CPU), and a pointer to the caller's frame is stored in the current frame, like in an ordinary stack (directly supporting stack languages like C, a CISC-like philosophy). Spills and fills occur only at the ends of the cache, and registers are saved/loaded from the memory stack (normally implemented as a register cache separate from the execution stack, similar to the way FORTH uses stacks). This allows variable window sizes, from 1 to 128 registers. This flexibility, plus the large set of global registers, makes register allocation easier than in SPARC (optimised stack operations also make it ideal for stack-oriented interpreted languages such as PostScript, making it popular as a laser printer controller).
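A conceptual sketch of the difference from SPARC-style windows (the numbers, names and the spill/fill policy here are invented for illustration, and real hardware fills as soon as a non-resident register is needed rather than waiting for the stack pointer itself to cross the boundary):

    #include <stdio.h>

    #define CACHE_REGS 128              /* on-chip register stack cache          */

    /* The register stack is conceptually a downward-growing stack in memory,
       with the 128 entries nearest the stack pointer held on chip.  Calls and
       returns normally just move 'sp'; memory traffic happens only when more
       than 128 entries would be live at once.                                 */
    static int sp  = 1024;              /* current frame starts here             */
    static int top = 1024;              /* entries at addresses >= top are in memory */

    static void call(int frame_size)    /* frame size is chosen per function     */
    {
        sp -= frame_size;
        if (top - sp > CACHE_REGS) {                   /* overflow: spill oldest */
            printf("spill %d registers\n", (top - sp) - CACHE_REGS);
            top = sp + CACHE_REGS;
        }
    }

    static void ret(int frame_size)
    {
        sp += frame_size;
        if (sp > top) {                                /* underflow: fill        */
            printf("fill %d registers\n", sp - top);
            top = sp;
        }
    }

    int main(void)
    {
        call(20); call(100); call(30);   /* 150 live registers force a spill of 22 */
        ret(30);  ret(100);  ret(20);    /* unwinding forces fills                 */
        return 0;
    }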
There is no special condition code register - any general register is used instead, allowing several condition codes to be retained, though this sometimes makes code more complex. An instruction prefetch buffer (using burst mode) ensures a steady instruction stream. Branches to another stream can cause a delay, so the first four new instructions are cached - next time a cached branch (up to sixteen) is taken, the cache supplies instructions during the initial memory access delay.
Registers aren't saved during interrupts, allowing the interrupt routine to determine whether the overhead is worthwhile. In addition, a form of register access control is provided. All registers can be protected, in blocks of 4, from access. These features make the 29000 useful for embedded applications, which is where most of these processors are used, allowing it at one point to claim the title of 'the most popular RISC processor'. The 29000 also includes an MMU and support for the 29027 FPU. The 29030 added an on-chip instruction cache.
The Siemens 80C166 microcontroller has sixteen 16 bit registers, with the lower eight usable as sixteen 8 bit registers, which are stored in overlapping windows (like in the SPARC) in the on-chip RAM (or register bank), pointed to by the Context Pointer (CP) (similar to the SP in the AMD 29K). Unlike the SPARC, register windows can overlap by a variable amount (controlled by the CP), and there are no spills or fills because the registers are considered part of the RAM address space (like in the TMS 9900), and could even extend to off chip RAM. This eliminates the wasted registers of SPARC style windows.
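Because the registers are simply words of internal RAM selected by CP, switching register banks is just a pointer change. A tiny model (the RAM size is arbitrary, and the real CP is a byte address with register n at CP + 2n rather than the word index used here):

    #include <stdio.h>

    static unsigned short ram[512];     /* on-chip RAM holding every register bank  */
    static unsigned cp = 0;             /* Context Pointer (word index, simplified) */

    #define R(n) ram[cp + (n)]          /* general register n of the current bank   */

    int main(void)
    {
        R(0) = 111;                     /* R0 of the bank at cp = 0                 */
        cp = 16;                        /* switch context: just move the pointer    */
        R(0) = 222;                     /* a different RAM word - nothing spilled   */
        cp = 0;
        printf("old bank's R0 is still %d\n", R(0));    /* prints 111               */
        return 0;
    }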
Address space (18 to 24 bits) is segmented (64K code segments with a separate code segment register, 16K data segments with upper two bits of 16 bit address selecting one of four data segment registers).
The 80C166 has 32 bit instructions, while it's a 16 bit processor (compared to the Hitachi SH, which is a 32 bit CPU with 16 bit instructions). It uses a four stage pipeline, with a limited (one instruction) branch cache.
The MIPS R2000 grew out of the Stanford MIPS project, whose name stood for Microprocessor without Interlocked Pipeline Stages [See Appendix A], and was arguably the first commercial RISC processor (other candidates are the ARM and the IBM ROMP used in the IBM PC/RT workstation, which was designed around 1981 but delayed until 1986). It was intended to simplify processor design by eliminating hardware interlocks between the five pipeline stages. This means that only single execution cycle instructions can access the thirty two 32 bit general registers, so that the compiler can schedule them to avoid conflicts. This also means that LOAD/STORE and branch instructions have a 1 cycle delay to account for. However, because of the importance of multiply and divide instructions, a special HI/LO pair of multiply/divide registers exists which does have hardware interlocks, since these operations take several cycles to execute and would produce scheduling difficulties.
Like the AMD 29000 and DEC Alpha, the R2000 has no condition code register, the designers considering it a potential bottleneck. The PC is user readable. The CPU includes an MMU unit that can also control a cache, and the CPU was one of the first which could operate as either a big or little endian processor. An FPU, the R2010, is also specified for the processor.
Newer versions included the R3000 (1988), with improved cache control, and the R4000 (1991), expanded to 64 bits and superpipelined - twice as many pipeline stages do less work at each stage, allowing a higher clock rate and twice as many instructions in the pipeline at once, at the expense of increased latency when the pipeline can't be filled, such as during a branch (and requiring interlocks between stages for compatibility, making the original "I" in the "MIPS" acronym meaningless). The R4400 and above integrated the FPU with on-chip caches. The R4600 and later versions abandoned superpipelining.
The superscalar R8000 (1994) was optimised for floating point operation, issuing two integer or load/store operations (from four integer and two load/store units) and two floating point operations simultaneously (FP instructions sent to the independent R8010 floating point coprocessor (with its own set of thirty-two 64-bit registers and load/store queues)).
The R10000 and R12000 versions (early 1996 and May 1997) added multiple FPU units, as well as almost every advanced modern CPU feature, including separate 2-way I/D caches (32K each) plus an on-chip secondary cache controller (and a high speed 8-way split transaction bus (up to 8 transactions can be issued before the first completes)), superscalar execution (load four, dispatch five instructions (may be out of order) to any of two integer, two floating point, and one load/store units), dynamic register renaming (integer and floating point rename registers (thirty-two in the R10K, forty-eight in the R12K)), and an instruction cache where instructions are partially decoded when loaded into the cache, simplifying the processor decode (and register rename/issue) stage. This technique was first implemented in the AT&T CRISP/Hobbit CPU, described later. Branch prediction and target caches are also included.
The six stage 2-way (int/float) superscalar R5000 (January 1996) was added to fill the gap between the R4600 and R10000, without any fancy features (out of order execution or branch prediction buffers). For embedded applications, MIPS and LSI Logic added a compact 16 bit instruction set which can be mixed with the 32 bit set (the same idea as the ARM Thumb 16 bit extension), implemented in a CPU called TinyRISC (October 1996), as well as MIPS V and MDMX (MIPS Digital Multimedia Extensions, both announced October 1996). MIPS V added parallel floating point operations (two 32 bit fields in 64 bit registers) (comparable to the HP MAX integer or Sun VIS and Intel MMX floating point unit extensions), while MDMX added integer 8 or 16 bit subwords in 64 bit FPU registers and 24 or 48 bit subwords in a 192 bit accumulator for multimedia instructions (a MAC instruction on an 8-bit value can produce a 24-bit result, hence the large accumulator). Vector-scalar operations (ex: multiply all subwords in a register by subword 3 from another register) are also supported. These instructions are partly derived from Cray vector instructions (Cray is owned by SGI, the parent company of MIPS), and are much more extensive than the earlier multimedia extensions of other CPUs. Future versions are expected to add Java virtual machine support.
MDMX instructions were never implemented in a CPU, because the MDMX and MIPS V extensions were superseded by the MIPS64 instruction set, and MIPS-3D extensions for 3D operations.
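Although MDMX never shipped, the wide-accumulator idea behind it is easy to sketch (a behavioural model only: the lane widths, saturation and rounding rules of the real definition are ignored, and the function name is invented). Each 8-bit subword of one register is multiplied by a chosen subword of another and summed into a wide per-lane accumulator:

    #include <stdint.h>
    #include <stdio.h>

    static int32_t acc[8];                       /* stand-in for the 192-bit accumulator */

    /* Vector-scalar MAC: multiply every 8-bit subword of v by subword 'sel'
       of w, accumulating each product into a lane much wider than 8 bits.  */
    static void vmac_scalar(uint64_t v, uint64_t w, int sel)
    {
        uint32_t s = (w >> (8 * sel)) & 0xFF;    /* the broadcast scalar     */
        for (int i = 0; i < 8; i++) {
            uint32_t e = (v >> (8 * i)) & 0xFF;
            acc[i] += (int32_t)(e * s);          /* 8x8 product, wide sum    */
        }
    }

    int main(void)
    {
        uint64_t v = 0x0807060504030201ULL;      /* subwords 1..8            */
        uint64_t w = 0x0000000003000000ULL;      /* subword 3 holds 3        */
        vmac_scalar(v, w, 3);                    /* multiply all lanes by 3  */
        vmac_scalar(v, w, 3);                    /* and accumulate again     */
        for (int i = 0; i < 8; i++)
            printf("%d ", (int)acc[i]);          /* 6 12 18 24 30 36 42 48   */
        printf("\n");
        return 0;
    }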
Rumour has it that delays and performance limits, but more probably SGI's financial problems, meant that the R10000 and derivatives (R12K and R14K) were the end of the high performance line for the MIPS architecture. SGI scaled back high end development in favour of the promised IA-64 architecture announced by HP and Intel. MIPS was sold off by SGI, and the MIPS processor was retargeted to embedded designs where it's more successful. The R20K (early 2001) implemented the MIPS-3D extensions, and increased the number of integer units to six with a seven stage pipeline.
SiByte introduced a less parallel, high clock rate 64-bit MIPS CPU (SB-1, mid 2000) exceeding what marketing people enthusiastically call the "1GHz barrier" (not an actual barrier of any sort).
As part of an attempt to create a domestic computer industry in China, BLX IC Design Corp of China implemented a version of the MIPS architecture (without unaligned 32-bit load/store support, to avoid patent issues) called the Godson (known as Dragon in English) series (March 2003).
Nintendo used a version of the MIPS CPU in the N64 (along with SGI-designed 3-D hardware), accounting for around 3/4 of MIPS embedded business in 1999 until switching to a custom IBM PowerPC, and a graphics processor from ArtX (founded by ex-SGI engineers) for its successor named GameCube (codenamed "Dolphin"). Sony also uses it in its Playstation series.
Hewlett-Packard's first 32-bit processor was essentially a microcode ROM with a simple 32 bit data path bolted to its side. Performance wasn't spectacular, but it was used in a pre-Unix workstation from HP. It led to the Vision, a fairly complex capability-based architecture. At the same time as Vision, the Spectrum project was started at HP labs based on the IBM 801, and further developed with implementation groups.
A new processor was needed to replace older 16-bit stack-based processors in HP-3000 MPE minicomputers. Initially a more complex replacement called Omega was started, but cancelled, and both Vision and Spectrum were proposed for Omega's replacement (code-named Alpha, not to be confused with the DEC Alpha). Spectrum was eventually selected, and became Precision Architecture, or PA-RISC. It also replaced Motorola 680x0 processors in the HP-9000 HP/UX Unix minicomputers and workstations.
A design typical of many load-store processors, it has an unusually large instruction set for a RISC processor (including a conditional (predicated) skip instruction similar to those in the ARM processor), partly because initial design took place before RISC philosophy was popular, and partly because careful analysis showed that performance benefited from the instructions chosen - in fact, version 1.1 added new multiple operation instructions combined from frequent instruction sequences, and HP was among the first to add multimedia instructions (the MAX-1 and MAX-2 instructions, similar to Sun VIS or Intel MMX). Despite this, it's a simple design - the entire original CPU had only 115,000 transistors, less than twice the much older 68000.
It's almost the canonical load-store design, similar except in details to most other mainstream load-store processors like the Fairchild/Intergraph Clipper (1986), and the Motorola 88K in particular. It has a 5 stage pipeline, which (unlike early MIPS (R2000) processors) had hardware interlocks from the beginning for instructions which take more than one cycle, as well as result forwarding (a result can be used by a following instruction without waiting for it to be stored in a register first).
Originally with a single instruction/data bus, it was later expanded to a Harvard architecture (separate instruction and data buses). It has thirty-two 32-bit integer registers (GR0 wired to constant 0, GR31 used as a link register for procedure calls), with seven 'shadow registers' which preserve the contents of a subset of the GR set during fast interrupts (also like ARM). Version 1.0 had sixteen 64-bit floating point registers, version 1.1 added features from the Apollo PRISM FPU after Hewlett-Packard acquired the company in 1988, resulting in thirty-two 64-bit floating point registers (also as sixty-four 32-bit and sixteen 128-bit), in an FPU (which could execute a floating point instruction simultaneously). Later versions (the PA-RISC 7200 in 1994) added a second integer unit (still dispatching only two instructions at a time to any of the three units). Addressing originally was 48 bits, and expanded to 64 bits, using a segmented addressing scheme.
The PA-RISC 7200 also included a tightly integrated cache and MMU, a high speed 64-bit 'Runway' bus, and a fast but complex fully associative 2KB on-chip assist cache, between the simpler direct-mapped data cache and main memory, which reduces thrashing (repeatedly loading the same cache line) when two memory addresses are aliased (mapped to the same cache line). Instructions are predecoded into a separate instruction cache (like the AT&T CRISP/Hobbit).
The PA-RISC 8000 (April 1996, intended to compete with the R10000, UltraSparc, and others) expands the registers and architecture to 64 bits (eliminating the need for segments), and adds an aggressive superscalar design - up to 5 instructions out of order, using fifty six rename registers, dispatched to ten units (five pairs of: ALU, shift/merge, FPU mult/add, divide/sqrt, load/store). The CPU is split in two, with load/store (high latency) instructions dispatched from a separate queue from operations (except for branch or read/modify/write instructions, which are copied to both queues). It also has a deep pipeline and speculative execution of branches (many of the same features as the R10000, in a very elegant implementation).
The PA-RISC 8500 (mid 1998) broke with HP tradition (in a big way) and added on-chip cache - 1.5 MB of L1 cache.
HP pioneered the addition of multimedia instructions with the MAX-1 (Multimedia Acceleration eXtension) extensions in the PA-7100LC (pre-1994) and the 64-bit (version 2.0) MAX-2 extensions in the PA-8000, which allowed vector operations on two or four 16-bit subwords in 32-bit or 64-bit integer registers. This only required circuitry to slice the integer ALU (similar to bit-slice processors, such as the AMD 2901), adding only 0.1 percent to the PA-8000 CPU area - using the FPU registers, as Sun's VIS and Intel's MMX do, would have required duplicating ALU functions. 8 and 32-bit support, multiplication, and complex instructions were also left out in favour of powerful 'mix' and 'permute' packing/unpacking operations.
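Slicing the ALU is cheap because a partitioned add only has to stop carries from crossing the subword boundaries; the same trick can be written in portable C (a sketch of plain wraparound halfword adds under that assumption - the real MAX instructions also offer saturating variants, which this omits):

    #include <stdint.h>
    #include <stdio.h>

    /* Four 16-bit additions on one 64-bit adder: add the low 15 bits of each
       lane normally, then fold in the top bit of each lane with XOR so that
       no carry can ripple into a neighbouring lane.                          */
    static uint64_t padd16(uint64_t a, uint64_t b)
    {
        const uint64_t H = 0x8000800080008000ULL;     /* top bit of every lane */
        return ((a & ~H) + (b & ~H)) ^ ((a ^ b) & H);
    }

    int main(void)
    {
        uint64_t a = 0x0001FFFF12340007ULL;   /* lanes: 0x0001 0xFFFF 0x1234 0x0007 */
        uint64_t b = 0x00010001000100F0ULL;   /* lanes: 0x0001 0x0001 0x0001 0x00F0 */
        printf("%016llx\n", (unsigned long long)padd16(a, b));
        /* prints 00020000123500f7: the 0xFFFF lane wraps to 0 without
           carrying into the lane above it                                     */
        return 0;
    }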
A replacement VLIW version known as PA-RISC Wide-Word was used as a basis for the IA-64 CPU with Intel. Development on PA-RISC continued with the 8700, which used the same CPU bus as the HP-designed McKinley version of IA-64, allowing the processors to be interchangeable during the introductory period of IA-64 (and possibly as a hedge against its failure by skeptical HP designers). A two-8700 chip was introduced in 2002, featuring an unusual off-chip DRAM shared level 2 cache (rather than faster SRAM) which allows a larger cache for lower cost, lower power, and smaller space. Although typically sporting fewer of the advanced (and promised) features of competing CPU designs, a simple elegant design and effective instruction set has kept PA-RISC performance among the best in its class (of those actually available at the same time) since its introduction.
The Motorola 88000 (1988) was one of the first microprocessors with a Harvard architecture (though the Fairchild/Intergraph Clipper C100 (1986) beat it by 2 years). Each bus has a separate cache, so simultaneous data and instruction access doesn't conflict. Except for this, it is similar to the Hewlett Packard Precision Architecture (HP/PA) in design (including many control/status registers only visible in supervisor mode), though the 88000 is more modular, has a small and elegant instruction set, no special status register (a compare instruction stores 16 condition code bits (equal, not equal, less-or-equal, any byte equal, etc.) in any general register, and a branch checks whether one bit is set or clear), and lacks segmented addressing (limiting addressing to 32 bits, vs. 64 bits). The 88200 MMU unit also provides dual caches (including multiprocessor support) and MMU functions for the 88100 CPU (like the Clipper). The 88110 includes caches and MMU on-chip.
The 88000 has thirty-two 32 bit user registers, with up to 8 distinct internal function units - an ALU and a floating point unit (sharing the single register set) in the 88100 version; multiple ALU and FPU units (with thirty-two 80-bit FPU registers) and two-issue instruction dispatch were added to the 88110 to produce one of the first superscalar designs (following the National Semiconductor Swordfish). Other units could be designed and added to produce custom designs for customers, and the 88110 added a graphics/bit unit which can pack or unpack 4, 8 or 16-bit integers (pixels) within 32-bit words, and multiply packed bytes by an 8-bit value. But it was introduced late and never became as popular in major systems as the MIPS or HP processors. Development (and performance) lagged as Motorola favoured the PowerPC CPU, coproduced with IBM.
Like most modern processors, the 88000 is pipelined (with interlocks), and has result forwarding (in the 88110 one ALU can feed a result directly into another for the next cycle). Loads and saves in the 88110 are buffered so the processor doesn't have to wait, except when loading from a memory location still waiting for a save to complete. The 88110 also has a history buffer for speculatively executing branches and for making interrupts 'precise' (they're imprecise in the 88100). The history buffer is used to 'undo' the results of speculative execution, or to restore the processor to its state when the interrupt occurred - a 1 cycle penalty, as opposed to 'register renaming', which buffers results in another register and either discards or saves them as needed, without penalty.
Intergraph originally produced LSI-11 based and 80186-based graphics terminals, then NS32032-based workstations, before moving to an early RISC CPU, the Fairchild Clipper. It continued development (C300 in 1988) and produced very advanced systems, but decided it couldn't compete alone in processor technology. After a brief joint development with Sun on the next generation SPARC, the company switched to Intel 80x86-based processors, and when a patent dispute between them erupted (Fairchild itself was bought by National Semiconductor which had a patent agreement with Intel, and Intel claimed rights to Clipper-related patents developed after the Clipper was sold to Intergraph), Intel restricted technical information to Intergraph, and Intergraph gave up on hardware, returning to software.
The C100 was a three-chip set like the Motorola 88000 (but predating it by two years), with a Harvard architecture CPU and separate MMU/cache chips for instruction and data. It differed from the 88K and HP PA-RISC in having sixteen 32-bit user registers and sixteen 64-bit FPU registers, rather than the more common thirty-two, and 16 and 32 bit instruction lengths. ROM macros implemented complex instructions. The C300 had improved floating point units, and increased clock speeds by increasing number of pipeline stages. The C400 (1990) was a two-issue superscalar version with separate address adder (eliminating ALU contention between address generation and execution in C100/C300). The following generation (C5, expected 1994) was dropped in 1993 to switch to SPARC.
The only other distinguishing features of the Clipper are a bank of sixteen supervisor registers which completely replace the user registers, (the ARM replaces half the user registers on an FIRQ interrupt) and the addition of some microcode instructions like in the Intel i960.
The ARM (originally the Acorn RISC Machine) was inspired by the Berkeley experimental load-store design. It is simple, with a short 3-stage pipeline, and it can operate in big- or little-endian mode. A seven-member team created the first version in a year and a half, including four support chips.
The original ARM (ARM1, 2 and 3) was a 32 bit CPU, but used 26 bit addressing. The newer ARM6xx spec is completely 32 bits. It has user, supervisor, and various interrupt modes (including 26 bit modes for ARM2 compatibility). The ARM architecture has sixteen registers (including a user visible PC as R15) with a multiple load/save instruction, though many registers are shadowed in interrupt modes (2 in supervisor and IRQ, 7 in FIRQ) so they need not be saved, for fast response. The instruction set is reminiscent of the 6502, used in Acorn's earlier computers.
A feature introduced in microprocessors by the ARM is that every instruction is predicated, using a 4 bit condition code (including 'never execute', not officially recommended), an idea later used in some HP PA-RISC instructions and the TI 320C6x DSP. Another bit indicates whether the instruction should set condition codes, so intervening instructions don't change them. This easily eliminates many branches and can speed execution. Another unique and useful feature is a barrel shifter which operates on the second operand of most ALU operations, allowing shifts to be combined with most operations (and index registers for addressing), effectively combining two or more instructions into one (similar to the earlier design of the funky Signetics 8x300).
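In outline, every data-processing instruction amounts to something like the following (a behavioural sketch in C, not how the silicon is organized; only a few of the sixteen ARM condition codes are shown, and the shift handling is reduced to a plain left shift):

    #include <stdint.h>
    #include <stdio.h>

    struct flags { int n, z, c, v; };          /* the N, Z, C, V condition flags */

    static int cond_passed(unsigned cond, struct flags f)
    {
        switch (cond) {
        case 0x0: return  f.z;                 /* EQ                             */
        case 0x1: return !f.z;                 /* NE                             */
        case 0xA: return  f.n == f.v;          /* GE                             */
        case 0xB: return  f.n != f.v;          /* LT                             */
        case 0xE: return 1;                    /* AL: always execute             */
        default:  return 1;                    /* remaining cases omitted        */
        }
    }

    /* ADD<cond> rd, rn, rm, LSL #shift: the second operand goes through the
       barrel shifter, and the whole instruction is skipped if the condition
       attached to it fails.                                                  */
    static void addcc_lsl(unsigned cond, struct flags f,
                          uint32_t *rd, uint32_t rn, uint32_t rm, unsigned shift)
    {
        uint32_t op2 = rm << shift;            /* barrel shifter, at no extra cost */
        if (cond_passed(cond, f))
            *rd = rn + op2;                    /* otherwise: no effect at all      */
    }

    int main(void)
    {
        struct flags f = { .n = 0, .z = 1, .c = 0, .v = 0 };   /* Z flag set       */
        uint32_t r0 = 100, r1 = 3;
        addcc_lsl(0x0, f, &r0, r0, r1, 2);     /* ADDEQ r0, r0, r1, LSL #2: taken  */
        addcc_lsl(0x1, f, &r0, r0, r1, 2);     /* ADDNE: skipped because Z is set  */
        printf("r0 = %u\n", (unsigned)r0);     /* 112 = 100 + (3 << 2)             */
        return 0;
    }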
These features make ARM code both dense (unlike most load-store processors) and efficient, despite the relatively low clock rate and short pipeline - it is roughly equivalent to a much more complex 80486 in speed.
The ARM6 series consisted of the ARM6 CPU core (35,000 transistors, which can be used as the basis for a custom CPU), the ARM60 base CPU, and the ARM600, which also includes a 4K 64-way set-associative cache, MMU, write buffer, and coprocessor interface (for an FPU with eight 80-bit registers). The ARM7 series (Dec 1994) increased performance by optimising the multiplier and adding DSP-like extensions, including 32 bit and 64 bit multiply and multiply/accumulate instructions (operand data paths lead from the registers through the multiplier, then the shifter (one operand), and then to the integer ALU, for up to three independent operations). It also doubles the cache size to 8K, includes embedded In Circuit Emulator (ICE) support, and raises the clock rate significantly.
A full DSP coprocessor (codenamed Piccolo, expected second half 1997) was to add an independent set of sixteen 32-bit registers (also accessible as thirty two 16 bit registers), four of which can be used as 48 bit registers, and a complete DSP instruction set (including four level zero-overhead loop operations), using a load-store model similar to the ARM itself. The coprocessor had its own program counter, interacting with the CPU which performed data load/store through input/output buffers connected to the coprocessor bus (similar to but more intelligent than the address unit in a typical DSP (such as the Motorola 56K) supporting the data unit). The coprocessor shared the main ARM bus, but used a separate instruction buffer to reduce conflict. Two 16 bit values packed in 32 bit registers could be computed in parallel, similar to the HP PA-RISC MAX-1 multimedia instructions. Unfortunately, this interesting concept didn't produce enough commercial interest to complete development and was difficult to produce a compiler for (essentially, it was two CPUs executing two programs) - instead, DSP support instructions (more flexible MAC, saturation arithmetic, simple SIMD) were later added to the ARM9E CPU.
The ARM10 (1998) added a vector floating point (VFP) coprocessor, with thirty two 32-bit floating point registers (usable as sixteen 64-bit registers) which can be loaded, stored, and operated on as two sixteen element vectors (vector-vector and vector-scalar operations) simultaneously. Vectors are computed one operation per cycle (compared to the Hitachi SH-4, which computes four per cycle).
DEC licensed the architecture, and developed the SA-110 (StrongARM) (February 1996), running a 5-stage pipeline at 100 to 233MHz (using only 1 watt of power), with 5-port register file, faster multiplier, single cycle shift-add, eight entry write buffer, and Harvard architecture (16K each 32-way I/D caches).
As part of a patent settlement with DEC, Intel took over the StrongARM, replacing the Intel i960 for embedded systems. The next version, named XScale (2000), added low power enhancements and power management allowing the clock speed to be varied, added another stage to the memory pipeline (for eight stages, vs. seven for normal ALU instructions), and added a 128 entry branch target buffer. A multiply-accumulate (MAC) unit added two 32-bit source registers and one 40-bit accumulator; it can add single 16x16 or 16x32-bit products (16 bits taken from the high or low half of either register), or two 16x16-bit products (high/high and low/low halves of each source register), to the accumulator.
To fill the gap between ARM7 and DEC/Intel StrongARM, ARM also developed the ARM8/800 which includes many StrongARM features, and the ARM9 with Harvard busses, write buffers, and flexible memory protection mapping.
Other companies such as Motorola, IBM and Texas Instruments have also licensed the basic ARM design, making it one of the most widely licensed embedded designs.
Like the Motorola Coldfire, ARM developed a low cost 16-bit version called Thumb, which recodes a subset of ARM CPU instructions into 16 bits (decoded to native 32-bit ARM instructions without penalty - similar to the CISC decoders in the newest 80x86 compatible and 68060 processors, except that those decode the older native instruction set into newer internal operations, while Thumb does the reverse). Thumb programs can be 30-40% smaller than already dense ARM programs. Native ARM code can be mixed with Thumb code when the full instruction set is needed.
Jazelle (announced October 2000) is a decoder similar to Thumb, but decodes simple Java Virtual Machine bytecode instructions to ARM (complex bytecodes are trapped and emulated by native ARM code, as is the JVM itself - different JVM software can be used).
The ARM CPU was chosen for the Apple Newton handheld system because of its speed, combined with the low power consumption, low cost and customizable design (the ARM610 version used by Apple includes a custom MMU supporting object oriented protection and access to memory for the Newton's NewtOS). The Newton was somewhat over ambitious, and was discontinued, but a large number of similar devices, as well as mobile phones, have been based on ARM CPUs for the same reasons.
An experimental asynchronous version of the ARM6 (operates without an external or internal clock signal) called AMULET has been produced by Steve Furber's research group at Manchester university. The first version (AMULET1, early 1993) is about 70% the speed of a 20MHz ARM6 on average (using the same fabrication process), but simple operations (multiplication is a big win at up to 3 times the speed) are faster (since they don't need to wait for a clock signal to complete). AMULET2e (October 1996, 93K transistor AMULET2 core plus four 1K fully associative cache blocks) is 30% faster (40 MIPS, 1/2 the performance of a 75MHz ARM810 using same fabrication), uses less power, and includes features such as branch prediction. AMULET 3i (September 2000), has been delayed, but simulations show it to be roughly equivalent to ARM9.
The Texas Instruments TMS320C30 is a 32 bit floating point DSP, based on the earlier 320C20/10 16 bit fixed point DSPs (1982). It has eight 40 bit extended precision registers R0 to R7 (32 bits plus 8 guard bits for floating, 32 bits for fixed), eight 32 bit auxiliary registers AR0 to AR7 (used for pointers) with two separate arithmetic units for address calculation, and twelve 32 bit control registers (including status, an index register, stack, interrupt mask, and repeat block loop registers).
It includes on chip memory in the form of one 4K ROM block and two 1K RAM blocks - each block has its own bus, for a total of three (compared to one instruction and one data bus in a Harvard architecture), and they essentially function as programmer controlled caches. Two arguments to the ALU can come from memory or registers, and the result is written to a register, through a 4 stage pipeline.
The ALU, address controller and control logic are separate - a separation that is even clearer in the AT&T DSP32, ADSP 2100 and Motorola 56000 designs, and one that is also reflected in the MIPS R8000 processor FPU and the IBM POWER architecture with its Branch Unit loop counter. The idea is to allow the separate parts to operate as independently as possible (for example, a memory access, pointer increment, and ALU operation), for the highest throughput, so instructions accessing loop and condition registers don't take the same path as data processing instructions.
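As a rough C sketch (not TI assembly), the inner loop of an FIR filter shows the pattern such DSPs are built around - one multiply-accumulate into an extended-precision accumulator per tap, with both data pointers post-incremented, which the hardware overlaps into a single cycle; the Q15 scaling at the end is an assumption for illustration:

#include <stdint.h>

/* FIR filter: one multiply-accumulate plus two pointer post-increments per
 * iteration - the operation a DSP like the TMS320C30 performs in one cycle,
 * with guard bits in the accumulator absorbing intermediate overflow. */
static int32_t fir(const int16_t *sample, const int16_t *coeff, int taps)
{
    int64_t acc = 0;                                      /* stands in for a wide accumulator */
    for (int i = 0; i < taps; i++)
        acc += (int32_t)(*sample++) * (int32_t)(*coeff++); /* MAC with auto-increment */
    return (int32_t)(acc >> 15);                          /* assumes Q15 coefficients */
}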
Like the TMS320C30, the 96002 has a separate program memory (RAM in this case, with a bootstrap ROM used to load the initial external program) and two blocks of data RAM, each with separate data and address busses. The data blocks can also be switched to ROM blocks (such as sine and cosine tables). There's also a data bus for access to external memory. Separate units work independently, with their own registers (generally organised as three 32 bit parts of a single 96 bit register in the 96002, which is where the '96' comes from).
The program control unit has a register containing 32 bit PC, status, and operating mode registers, plus 32 bit loop address and 32 bit loop counter registers (branches are 2 cycles, conditional branches are 3 cycles - with conditional execution support), and a fifteen element 64 bit stack (with separate 6 bit stack pointer).
The address generation unit has seven 96 bit registers, divided into three 32 bit (24 in the 56000/1) registers - R0-R7 address, N0-N7 offset, and M0-M7 modify (containing increment values) registers.
The Data Unit includes ten 96-bit floating point/integer registers, grouped as two 96 bit accumulators (A and B = three 32 bit registers each: A2, A1, A0 and B2, B1, B0) and two 64 bit input registers (X and Y = two 32 bit registers each: X1, X0 and Y1, Y0). Input registers are general purpose, but allow new operands to be loaded for the next instruction while the current contents are being used (accumulators are 8+24+24 = 56 bit in the 56000/1, where the '56' comes from). The DSP96000 was one of the first to perform fully IEEE floating point compliant operations.
The processor is not pipelined, but designed for single cycle independent execution within each unit (actually this could be considered a three stage pipeline). With multiple units and the large number of registers, it can perform a floating point multiply, add and subtract while loading two registers, performing a DMA transfer, and four address calculations within a two clock tick processor cycle, at peak speeds.
It's very similar to the Analog Devices ADSP2100 series - the latter has two address units, but replaces the separate data unit with three execution units (ALU, a multiplier, and a barrel shifter).
The DSP56K and 680xx CPUs have been combined in one package (similar idea as the TMS320C8x) in the Motorola 68456.
The DSP56K was part of the ill-fated NeXT system, as well as the lesser known Atari Falcon (still made in low volumes for music buffs).
Although the TRON project produced processors competitive in performance (Fujitsu's(?) Gmicro/500 memory-data CPU (1993) was faster and used less power than a Pentium), the idea of a single standard processor never caught on, and newer concepts (such as RISC features) overtook the TRON design. Hitachi itself has supplied a wide variety of microprocessors, from Motorola and Zilog compatible designs to IBM System/360/370/390 compatible mainframes, but has also designed several of its own series of processors.
The Hitachi SH series was meant to replace the 8-bit and 16-bit H8 microcontrollers, a series of PDP-11-like (or National Semiconductor 32032/32016-like) memory-data CPUs with sixteen 16-bit registers (eight in the H8/300), usable as sixteen 8-bit or combined as eight 32-bit registers (for addressing, except H8/300), with many memory-oriented addressing modes. The SH is also designed for the embedded market, and is similar to the ARM architecture in many ways. It's a 32 bit processor, but with a 16 bit instruction format (different from Thumb, which is a 16 bit encoding of a subset of ARM 32 bit instructions, or the NEC V800 load-store series, which mixes 16 and 32 bit instruction formats), and has sixteen general purpose registers and a load/store architecture (again, like ARM). This results in a very high code density, program sizes similar to the 680x0 and 80x86 CPUs, and about half that of the PowerPC. Because of the small instruction size, there is no load immediate instruction, but a PC-relative addressing mode is supported to load 32 bit values (unlike ARM or PDP-11, the PC is not otherwise visible). The SH also has a Multiply ACcumulate (MAC) instruction, and MACH/L (high/low word) result registers - 42 bit results (32 low, 10 high) in the SH1, 64 bit results (both 32 bit) in the SH2 and later. The SH3 includes an MMU and 2K to 8K of unified cache.
The SH4 (mid-1998) is a superscalar version with extensions for 3-D graphics support. It can issue two instructions at a time to any of four units: integer, floating point, load/store, branch (except for certain non-superscalar instructions, such as those modifying control registers). Certain instructions, such as register-register move, can be executed by either the integer or load/store unit, so two can be issued at the same time. Each unit has a separate pipeline, five stages for integer and load/store, five or six for floating point, and three for branch.
Hitachi designers chose to add 3-D support to the SH4 instead of parallel integer subword operations like the HP MAX, SPARC VIS, or Intel MMX extensions, which mainly enhance rendering performance, because they felt rendering can be handled more efficiently by a graphics coprocessor. 3-D graphics support is added by supporting the vector and matrix operations used for manipulating 3-D points (see Appendix D). This involved adding an extra set of floating point registers, for a total of two sets of sixteen - one set used as a 4x4 matrix, the other as a set of four 4-element vectors. A mode bit selects which to use as the foreground (register/vector) and background (matrix) banks. Register pair operations can load/store/move two registers (64 bits) at once. An inner product operation computes the inner product of two vectors (four simultaneous multiplies and one 4-input add), while a transformation instruction computes a matrix-vector product (issued as four consecutive inner product instructions, but using four internal work registers so intermediate results don't need to use data registers).
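In plain C, the two geometry operations described above amount to the following sketch (the hardware performs the four multiplies of each inner product in parallel; the function names are illustrative, not Hitachi intrinsics):

/* 4-element inner product: four multiplies and a 4-input add. */
static float inner4(const float a[4], const float b[4])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

/* 4x4 matrix times 4-vector: what the transformation instruction computes,
 * issued internally as four consecutive inner products. */
static void transform4(const float m[4][4], const float v[4], float out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = inner4(m[i], v);
}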
The SH4 allows operations to complete out of order under compiler control. For example, while a transformation is being executed (4 cycles) another can be stored (2 cycles using double-store instructions), then a third loaded (2 cycles) in preparation for the next transformation, allowing execution to be sustained at 1.4 gigaflops for a 200MHz CPU.
The SH5 is expected to be a 64-bit version. Other enhancements also planned include support for MPEG operations, which are supported in the SPARC VIS instructions. The SH5 adds a set of eight branch registers (like the Intel/HP IA-64), and a status bit which enables pre-loading of the target instructions when an address is placed in a branch register.
The SH is used in many of Hitachi's own products, and was one of the first Japanese CPUs to gain wide popularity outside Japan. It's most prominently featured in the Sega Saturn video game system (which uses two SH2 CPUs), the Dreamcast (SH4), and many Windows CE handheld/pocket computers (SH3 chip set).
Part XIII: Motorola MCore, RISC brother to ColdFire (Early 1998)
To fill a gap in Motorola's product line - the low cost/power consumption field, where the PowerPC's complexity makes it impractical - the company designed a load/store CPU and core which contains features similar to the ARM, PowerPC, and Hitachi SH, beginning with the M200 (1997). Based on a four stage pipeline, the MCore contains sixteen 32-bit data registers, plus an alternate set for fast interrupts (like the ARM, which only has seven in the second set), and a separate carry bit (like the TMS 1000). It also has an ARM-like (and 8x300-like before it) execution unit with a shifter for one operand, a shifter/multiply/divide unit, and an integer ALU in series. It defines a 16-bit instruction set like the Hitachi SH and ARM Thumb, and separates the branch/program control unit from the execution unit, as the PowerPC does. The PC unit contains a branch adder which allows branches to be computed in parallel with the branch instruction decode and execute, so branches only take two cycles (skipped branches take one). The M300 (late 1998) added floating point support (sharing the integer registers) and dual instruction prefetch.
The MCore is meant for embedded applications where custom hardware may be needed, so like the ARM it has coprocessor support in the form of the Hardware Accelerator Interface (HAI) unit, which can contain custom circuitry, and the HAI bus for external components.
Part XIV: TI MSP430 series, PDP-11 rediscovered (late 1998?)
Texas Instruments has been involved with microcontrollers almost as long as Intel, having introduced the TMS1000 microcontroller shortly after the Intel 4004/4040. TI concentrated mostly on embedded digital signal processors (DSPs) such as the TMS320Cx0 series, and was involved in microprocessors mainly as the manufacturer of 32-bit and 64-bit Sun SPARC designs. The MSP430 series Mixed Signal Microcontrollers are 16-bit CPUs for low cost/power designs.
Called "RISC like" (and consequently obliterating all remaining meaning from that term), the MSP430 is essentially a simplified version of the PDP-11 architecture. It has sixteen 16-bit registers, with R0 used as the program counter (PC), and R1 as the stack pointer (SP) (the PDP-11 had eight, with PC and SP in the two highest registers instead of two lowest). R2 is used for the status register (a separate register in the PDP-11) Addressing modes are a small subset of the PDP-11, lacking auto-decrement and pre-increment modes, but including register indirect, making this a memory-data processor (little-endian). Constants are loaded using post-increment PC relative addresses like the PDP-11 (ie. "@R0+"), but commonly used constants can be generated by reading from R2 or R3 (indirect addressing modes can generate 0, 1, 2, -1, 4, or 8 - different values for each register).
The MSP430 has fewer instructions than the PDP-11 (51 total, 27 core). Specifically multiplication is implemented as a memory-mapped peripheral - two operands (8 or 16 bits) are written to the input ports, and the multiplication result can be read from the output (this is a form of Transport Triggered Architecture, or TTA). As a low cost microcontroller, multiple on-chip peripherals (in addition to the multiplier) are standard in many available versions.
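A minimal C sketch of that memory-mapped multiply: operands are written to peripheral registers and the 32-bit product is read back. The register addresses below are placeholders for illustration - the actual map must be taken from the MSP430 device datasheet:

#include <stdint.h>

/* Hypothetical register addresses - check the device datasheet. */
#define MPY_OP1   (*(volatile uint16_t *)0x0130)  /* first operand (unsigned multiply) */
#define MPY_OP2   (*(volatile uint16_t *)0x0138)  /* second operand */
#define MPY_RESLO (*(volatile uint16_t *)0x013A)  /* low 16 bits of the product */
#define MPY_RESHI (*(volatile uint16_t *)0x013C)  /* high 16 bits of the product */

static uint32_t mul16(uint16_t a, uint16_t b)
{
    MPY_OP1 = a;                                    /* load operand 1 */
    MPY_OP2 = b;                                    /* load operand 2, triggering the multiply */
    return ((uint32_t)MPY_RESHI << 16) | MPY_RESLO; /* read back the 32-bit result */
}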
Future versions are expected to be available with two 4-bit segment registers (Code Segment Pointer for instructions, Data Page Pointer for data) to allow 20-bit memory addressing. Long branch and call instructions will be added as well.
1 filled to capacity; "a suitcase jammed with dirty clothes"; "stands jam-packed with fans"; "a packed theater" [syn: jammed, jam-packed]
2 pressed together or compressed; "packed snow"
- past of pack
Adjective
packed (comparative more packed, superlative most packed)
- Put into a package.
- packed lunch
- Filled with a large number or large quantity of something.
- packed with goodness
- Filled to capacity with people.
- The bus was packed and I couldn't get on.
Data Structure Alignment is the way data is arranged and accessed in computer memory. It consists of two separate but related issues: Data Alignment and Data Structure Padding. Data Alignment is the offset of a particular datum in computer memory from boundaries that depend on the datum type and processor characteristics. Aligning data usually refers to allocating memory addresses for data such that each primitive datum is assigned a memory address that is a multiple of its size. Data Structure Padding is the insertion of unnamed members in a data structure to preserve the relative alignment of the structure members.
Although Data Structure Alignment is a fundamental issue for all modern computers, many computer languages and computer language implementations handle data alignment automatically. Certain C and C++ implementations and assembly language allow at least partial control of data structure padding, which may be useful in certain special circumstances.
Definitions
A memory address a is said to be n-byte aligned when n is a power of two and a is a multiple of n bytes. In this context a byte is the smallest unit of memory access, i.e. each memory address specifies a different byte. An n-byte aligned address would have log2 n least-significant zeros when expressed in binary.
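Because n is a power of two, the alignment test reduces to a mask check; a minimal C sketch:

#include <stdint.h>
#include <stdbool.h>

/* True if addr is n-byte aligned; n must be a power of two. An n-byte aligned
 * address has log2(n) least-significant zero bits, so testing the low bits
 * against the mask n-1 is sufficient. */
static bool is_aligned(uintptr_t addr, uintptr_t n)
{
    return (addr & (n - 1)) == 0;
}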
A memory access is said to be aligned when the datum being accessed is n bytes long and the datum address is n-byte aligned. When a memory access is not aligned, it is said to be misaligned. Note that by definition byte memory accesses are always aligned.
A memory pointer that refers to primitive data that is n bytes long is said to be aligned if it is only allowed to contain addresses that are n-byte aligned, otherwise it is said to be unaligned. A memory pointer that refers to a data aggregate (a data structure or array) is aligned if (and only if) each primitive datum in the aggregate is aligned.
Note that the definitions above assume that each primitive datum is a power of two bytes long. When this is not the case (as with 80-bit floating-point on x86) the context influences the conditions where the datum is considered aligned or not.
Problems
A computer accesses memory a single memory word at a time. As long as the memory word size is at least as large as the largest primitive data type supported by the computer, aligned accesses will always access a single memory word. This may not be true for misaligned data accesses.
If the highest and lowest bytes in a datum are not within the same memory word the computer must split the datum access into multiple memory accesses. This requires a lot of complex circuitry to generate the memory accesses and coordinate them. To handle the case where the memory words are in different memory pages the processor must either verify that both pages are present before executing the instruction or be able to handle a TLB miss or a page fault on any memory access during the instruction execution.
When a single memory word is accessed the operation is atomic, i.e. the whole memory word is read or written at once and other devices must wait until the read or write operation completes before they can access it. This may not be true for unaligned accesses to multiple memory words, e.g. the first word might be read by one device, both words written by another device and then the second word read by the first device so that the value read is neither the original value nor the updated value. Although such failures are rare, they can be very difficult to identify.
RISC
Most RISC processors will generate an alignment fault when a load or store instruction accesses a misaligned address. This allows the operating system to emulate the misaligned access using other instructions. For example, the alignment fault handler might use byte loads or stores (which are always aligned) to emulate a larger load or store instruction.
Some architectures like MIPS have special unaligned load and store instructions. One unaligned load instruction gets the bytes from the memory word with the lowest byte address and another gets the bytes from the memory word with the highest byte address. Similarly, store-high and store-low instructions store the appropriate bytes in the higher and lower memory words respectively.
The DEC Alpha architecture has a two-step approach to unaligned loads and stores. The first step is to load the upper and lower memory words into separate registers. The second step is to extract or modify the memory words using special low/high instructions similar to the MIPS instructions. An unaligned store is completed by storing the modified memory words back to memory. The reason for this complexity is that the original Alpha architecture could only read or write 32-bit or 64-bit values. This proved to be a severe limitation that often led to code bloat and poor performance. Later Alpha processors added byte and double-byte load and store instructions.
Because these instructions are larger and slower than the normal memory load and store instructions they should only be used when necessary. Most C and C++ compilers have an “unaligned” attribute that can be applied to pointers that need the unaligned instructions.
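Where no such attribute is available, a common portable alternative is to copy the bytes through memcpy into an aligned temporary, letting the compiler emit whatever byte loads or unaligned instructions the target needs - a sketch:

#include <stdint.h>
#include <string.h>

/* Read a 32-bit value from a possibly misaligned address without performing
 * a misaligned load directly. */
static uint32_t load_u32_unaligned(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}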
x86 and x64
While the x86 architecture originally did not require aligned memory access and still works without it, SSE2 instructions on x86 and x64 CPUs do require the data to be 128-bit (16-byte) aligned, and there can be substantial performance advantages from using aligned data on these architectures.
Compatibility
The advantage of supporting unaligned access is that it is easier to write compilers that do not need to align memory, at the expense of slower access. One way to increase performance in RISC processors, which are designed to maximize raw performance, is to require data to be loaded or stored on a word boundary. So though memory is commonly addressed by 8 bit bytes, loading a 32 bit integer or 64 bit floating point number would be required to start on a 64 bit boundary on a 64 bit machine. The processor could flag a fault if it were asked to load a number which was not on such a boundary, but this would result in a slower call to a routine which would need to figure out which word or words contained the data and extract the equivalent value.
Data Structure Padding
Although the compiler (or interpreter) normally allocates individual data items on aligned boundaries, data structures often have members with different alignment requirements. To maintain proper alignment the translator normally inserts additional unnamed data members so that each member is properly aligned. In addition the data structure as a whole may be padded with a final unnamed member. This allows each member of an array of structures to be properly aligned.
Padding is only inserted when a structure member is followed by a member with a larger alignment requirement or at the end of the structure. By changing the ordering of members in a structure, it is possible to change the amount of padding required to maintain alignment. For example, if members are sorted by ascending or descending alignment requirements a minimal amount of padding is required. The minimal amount of padding required is always less than the largest alignment in the structure. Computing the maximum amount of padding required is more complicated, but is always less than the sum of the alignment requirements for all members minus twice the sum of the alignment requirements for the least aligned half of the structure members.
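For example, the following sketch shows how reordering members by descending alignment requirement removes the interior padding (assuming 1-byte chars and 4-byte, 4-byte-aligned ints):

struct Unordered {   /* char, int, char: 1 + 3 padding + 4 + 1 + 3 padding = 12 bytes */
    char a;
    int  b;
    char c;
};

struct Ordered {     /* int first, then the chars: 4 + 1 + 1 + 2 padding = 8 bytes */
    int  b;
    char a;
    char c;
};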
Although C and C++ do not allow the compiler to reorder structure members to save space, other languages might. It is also possible to tell most C and C++ compilers to "pack" the members of a structure to a certain level of alignment, e.g. "pack(2)" means align data members larger than a byte to a two-byte boundary so that any padding members are at most one byte long.
One use for such "packed" structures is to conserve memory. For example, a structure containing a single byte and a four-byte integer would require three additional bytes of padding. A large array of such structures would use 37.5% less memory if they are packed, although accessing each structure might take longer. This compromise may be considered a form of space-time tradeoff.
Although use of "packed" structures is most frequently used to conserve memory space, it may also be used to format a data structure for transmission using a standard protocol. Since this depends upon the native byte ordering (endianness) for the processor matching the byte ordering of the protocol, this usage is not recommended.
Computing padding
The following formula provides the number of padding bytes required to align the start of a data structure:

padding = (align - (offset mod align)) mod align

For example, the padding to add to offset 0x59d for a structure aligned to every 4 bytes is 3. The structure will then start at 0x5a0, which is a multiple of 4.
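A small C sketch that reproduces the 0x59d example (for power-of-two align the same value can also be computed as (-offset) & (align - 1)):

#include <stdio.h>
#include <stddef.h>

/* padding = (align - (offset mod align)) mod align */
static size_t padding_for(size_t offset, size_t align)
{
    return (align - (offset % align)) % align;
}

int main(void)
{
    size_t offset = 0x59d;
    size_t pad = padding_for(offset, 4);
    printf("padding %zu, aligned start 0x%zx\n", pad, offset + pad); /* padding 3, start 0x5a0 */
    return 0;
}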
Typical alignment of C structs on x86
Data structure members are stored sequentially in a memory so that in the structure below the member Data1 will always precede Data2 and Data2 will always precede Data3:
struct MyData
{
    short Data1;
    short Data2;
    short Data3;
};
If the type "short" is stored in two bytes of memory then each member of the data structure depicted above would be 2-byte aligned. Data1 would be at offset 0, Data2 at offset 2 and Data3 at offset 4. The size of this structure would be 6 bytes.
The type of each member of the structure usually has a default alignment, meaning that it will, unless otherwise requested by the programmer, be aligned on a pre-determined boundary. The following typical alignments are valid for compilers from Microsoft, Borland, and GNU when compiling for x86:
- A char (one byte) will be 1-byte aligned.
- A short (two bytes) will be 2-byte aligned.
- An int (four bytes) will be 4-byte aligned.
- A float (four bytes) will be 4-byte aligned.
- A double (eight bytes) will be 8-byte aligned on Windows and 4-byte aligned on Linux.
Here is a structure with members of various types, totaling 8 bytes before compilation:
struct MixedData  /* 1 + 2 + 4 + 1 = 8 bytes before compilation */
{
    char Data1;
    short Data2;
    int Data3;
    char Data4;
};
After compilation the data structure will be supplemented with padding bytes to ensure a proper alignment for each of its members:
struct MixedData  /* after compilation */
{
    char Data1;
    char Padding1[1];  /* padding so Data2 starts on a 2-byte boundary */
    short Data2;
    int Data3;
    char Data4;
    char Padding2[3];  /* padding so the structure size is a multiple of 4 */
};
The compiled size of the structure is now 12 bytes. It is important to note that the last member is padded with the number of bytes required to conform to the largest type of the structure. In this case 3 bytes are added to the last member to pad the structure to the size of a long word.
It is possible to change the alignment of structures to reduce the memory they require (or to conform to an existing format) by changing the compiler’s alignment (or “packing”) of structure members.
Requesting that the MixedData structure above be aligned to a one byte boundary will have the compiler discard the pre-determined alignment of the members and no padding bytes would be inserted.
While there is no standard way of defining the alignment of structure members, some compilers use #pragma directives to specify packing inside source files. Here is an example:
#pragma pack(push)  /* push current alignment to stack */
#pragma pack(1)     /* set alignment to 1 byte boundary */

struct MyPackedData  /* example layout; 1 + 4 + 1 = 6 bytes when packed */
{
    char Data1;
    int Data2;
    char Data3;
};

#pragma pack(pop)   /* restore original alignment from stack */
This structure would have a compiled size of 6 bytes. The above directives are available in compilers from Microsoft, Borland, GNU and many others.
SRO, aground, alive with, anchored, awash, bloated, blocked, bound, brimful, brimming, bristling, bulging, bursting, capacity, caught, chained, chock-full, choked, choked up, chuck-full, clogged, clogged up, close, close-knit, close-textured, close-woven, compact, compacted, compressed, concentrated, concrete, condensed, congested, consolidated, constipated, cooked, cooked-up, costive, cram-full, crammed, crammed full, crawling, crowded, crowding, cut out, cut-and-dried, cut-and-dry, dense, distended, doctored, drenched, engineered, farci, fast, fastened, filled, filled to overflowing, firm, fixed, flush, foul, fouled, full, full to bursting, gluey, glutted, gorged, groaning, grounded, hard, heavy, held, high and dry, hyperemic, impacted, impenetrable, impermeable, in profusion, in spate, in the bag, inextricable, infarcted, jam-packed, jammed, juggled, lavish, loaded, manipulated, massive, moored, nonporous, obstipated, obstructed, on ice, overblown, overburdened, overcharged, overfed, overflowing, overfraught, overfreighted, overfull, overladen, overloaded, overstocked, overstuffed, oversupplied, overweighted, packed like sardines, planned, plenary, plethoric, plotted, plugged, plugged up, populous, prearranged, preconcerted, precontrived, premeditated, preordered, prodigal, profuse, proliferating, prolific, put-up, ready to burst, replete, rife, rigged, round, running over, satiated, saturated, schemed, serried, set-up, soaked, solid, stacked, standing room only, stopped, stopped up, stranded, stuck, stuck fast, studded, stuffed, stuffed up, substantial, superabundant, supercharged, supersaturated, surcharged, surfeited, swarming, swollen, teeming, tethered, thick, thick as hail, thick with, thick-coming, thick-growing, thickset, thronged, thronging, tied, topful, transfixed, viscid, viscose, viscous, wedged, with
|United States Army Air Corps
|Before becoming United States military pilots, fledgling Army aviators underwent the arduous task of attending and completing Air Corps cadet training. Aviation training
consisted of several blocks of training from basic flight instruction to advanced flight school. The task of making it successfully through flight school was no easy task and many
would-be recruits found themselves washed out of the training programs and on their way to carrying a rifle as part of an infantry unit, or off to learn another trade as part of an
aircrew. Below are a number of Army Air Corps cadet items in my collection. As indicated above, all of these items are original WWII or, in some cases, pre-WWII items.
|Above, left to right: A wartime cadet visor cap and a light blue, pre-war Air Corps cadet cap with a royal blue band and Air Corps
|Above: Two original wartime photographs of the same
Army Air Corps cadet. The cadet is unidentified. The
PT-19A in the photos is identified as aircraft #
42-33691, which is known to have operated out of
Thompson-Robbins Field, Arkansas during the war.
|Two wartime photographs of the same Army Air Corps cadet, identified on one of the
photographs as Del Morrison, with the photos both dated 1942.
|Above: A rare, wartime color photograph
of an Army Air Corps cadet.
|A squadron dance for the 327th School Squadron, Basic Flying School was held on January 16, 1942. This is most
likely related to Minter Field, one of the largest Army Air Corps training fields ever established, which was located in
Bakersfield, California. The invitation came in its original envelope.
For more information about Minter Field, see their website at: www.minterairfieldmuseum.com.
|Sustineo Alas: Sustain the Wings
The distinctive insignia (DI) of the United States Army Air Force Technical Training Command, the Sustineo Alas pin
displays a golden urn on a background of blue, holding three feather plumes and the words "Sustineo Alas" across the
bottom on a golden background. Each of the three plumes represents each of the components of the United States Army
Air Corps: the plane, the aircrew and the ground crew. Approved for wear on July 24, 1942, these DI were worn to
denote assignment to the Army Air Corps Training Command, and would later be replaced upon reassignment.
The modern interpretation is "Keep them flying."
|Four different examples of the wartime Sustineo Alas pin. Seen is the difference between the
various makers of the same basic insignia. The pin on the far right is a plastic version of the
more commonly found all metal examples.
|Some hot-shot wing over wing flying in an AT-6!
|CPTP and the War Training Service
Civilian Flight Instructors in WWII
|Prior to 1940, the United States Army had approximately 4,500 pilots, including just over 2,000 who were active-duty officers, just over 2,100 reserve officers and a
little over 300 who were national guard officers. As war seemed more likely, the number of needed pilots grew rapidly from 982 in 1939, to approximately 8,000 in
1940, to over 27,000 in 1941. Still, with these record numbers, more pilots were still needed. At the time, the United States Army could not sufficiently handle the
training of the large number of flying cadets required. The U.S. Army Air Forces relied on additional pilots from the CPTP (Civilian Pilot Training Program) and a
large network of civilian flight schools under contract to the US Air Corps, as well as conducting training in its own schools.
The CPTP (often shortened to CPT) would eventually operate at more than 1,100 colleges and universities, with over 1,400 individual flight schools. As a result of
the high level of training provided by the CPTP, CPTP-trained pilots did well while receiving additional training at US Air Corps flight schools. Between 1939 and
1945, the CPTP would go on to train more than 435,000 pilots, logging over 12 million flight hours!
Following the attack on Pearl Harbor, the CPTP became known as the War Training Service or WTS. From 1942 until the summer of 1944, WTS trainees attended
college courses and took private flight training, signing agreements to enter into military service after their graduation.
Graduates of the CPTP/WTS program entered into the US Army Air Corps Enlisted Reserve. Most graduates continued their flight training and commissioned as
combat pilots. Still others became service pilots, liaison pilots, ferry pilots and glider pilots, instructors, or commercial pilots in the Air Transport Command. As the
defeat of the Axis powers seemed imminent, and as it became clear that fewer pilots would be required in the future, the US military services ended their
agreement with the CPTP/WTS in early 1944. The program was concluded in 1946.
The CPTP and WTS provided a much needed service to the US Air Corps both before and during WWII. The CPTP/WTS provided the US Air Corps with civilian pilots
who could easily transition to military pilot training, thus speeding up the process of getting qualified military pilots into aircraft of all kinds and off to the front.
|Three views of a CAA/War Training Service visor cap, with original cap badge insignia. The visor cap is shown with a CAATC headset.
Both the visor cap and the headset came together from the estate of a man who served as a civilian flight instructor during WWII.
|A pink, Army Air Corps overseas cap with its original CAA/War
Training Service insignia. This particular cap was once worn and
owned by Hermann Kropp of Stroudsberg, Pennsylvania. If
anyone has any additional information concerning Mr. Kropp, I
would enjoy hearing from you.
|Above: Three Department of Commerce, CPT Pilot Rating Books issued to James Berton Rudolph, and Mr.
Rudolph's original pair of Enlisted Reserve CPT wings. As was occasionally practiced among CPT graduates,
the "CPT" on the shield of the wings has been ground down to denote graduation from the program.
Mr. Rudolph attended his CPT/WTS training at Muscatine Jr. College, with the log books showing his
participation in "Elementary Army", "Army Secondary" and "Elementary Cross Country." The log books cover
a time period from November of 1942 to January of 1944. The log books are well filled out and contain terrific
details about the training Mr. Rudolph received.
The log books indicate Mr. Rudolph did the majority of his training in Taylorcraft and WACO UPF bi-planes.
Both were staples in the CPT/WTS program.
|Civilian Flight Instructor cap device.
|At an unknown airfield, civilian flight instructors stand-by awaiting assignments to
trainees who are themselves hoping to become military aviators.
|A wartime, 8x10 photograph showing a civilian instructor with five aviation cadets.
The cadet on the lower left signed his name as "Douglas E. Caldwell, Seffner,
Florida." The cadet in the upper left signed his name as "Bill Chandler, Tulsa, Okla."
The instructor also signed the photograph as "John L. Fisher, Salisbury, Conn."
|A wartime, 8x10 photograph showing three graduating classes from the Spartan
Technical Training School in Tulsa, Oklahoma in 1943. If anyone from this
graduating class comes across this page or if you have any information about this
photograph, I would enjoy hearing from you.
|A very common sight to many Army Air Corps cadets, a PT-17 instrument
panel. When I first obtained this panel it was void of any instruments. I
used only original, wartime era instruments to bring this instrument panel
back to its original configuration.
|An original wartime pair of Sustineo
Alas DIs, still in their original
|The wartime tunic of a civilian flight instructor in the employ
of the United States Army Air Corps. Displayed appropriately
on the right cuff is the gold embroidered wings of a civilian
|Two photographs showing civilian flight instructors serving at Victory Field, Vernon,
Texas during the war. The instructor on the right has been identified as E. T. Belton,
while the pilot on the left was identified as L. M. Rushinf.
|To see a grouping in my collection related to another civilian flight instructor, click on the photo above and
visit the page for Clarence S. Page Jr.
|Left: An "elementary course" CPT Pilot Rating Book
issued to Francis Lee McLean. The log book is dated
February 19th, 1943, and shows McLean attending
flight training at Nebraska State Teachers College, with
Clinch Flying Service as the contractor providing the
instruction. Little is known about McLean's military
service, which I am still researching. I do know that he
entered into the United States Navy Reserve as a pilot,
losing his life on April 19th, 1945 having earned the Air
Medal. McLean is buried in the Manila American
Cemetary in the Philippines, Plot H, Row 6, Grave 112.
As additional information about his military aviation
career comes to light, I will update this section of the
|Left: Six CPT flight log books and a Pilot Information file from the military service of Carlton Alfred
Smith of Mansfield, Ohio. Born on January 15, 1921, in Mansfield, Smith would become a pilot in
the late 1930's and later found himself in the role of flight instructor for the United States Army Air
Corps during WWII. Smith would maintain his love of flying throughout his life, later serving a long
career with the Mansfield Fire Department. Mr. Smith passed away in March of 2011, at the age of
Smith's log books show that he attended the majority of his training at Harrington Air Service Inc. in
Mansfield, Ohio. The log books cover a period from December of 1941 into late 1943, and show
that Smith was part of both the Civilian Pilot Training Program and the War Training Service. The
Harrington Air Service trained 1,500 pilots for military service during WWII.
|Above/Left: An original era United States Army Air Corps cadet jacket, complete with Air
Corps cadet insignia/patch, appropriately applied to the right sleeve. The close-up of the cadet
insignia above shows the method of attachment to the jacket.
|An overseas cap from the Glenn Shoop grouping,
showing the appropriately applied early CPT patch.
Shoop attended CPT training at the University of
Oklahoma. To see additional items related to Mr.
Shoop, please click HERE.
|As evident by the insignia on his cap, a civilian flight instructor and four of his
students pose in front of their PT-19.
|Above: The wing at the left is the example that once belonged to James Berton Rudolph. The wing at the right is another
original CPT wing, uncleaned and just as I obtained it. The difference can clearly be seen between the example with the
"CPT" rubbed off and the bottom example of an issue wing.
|Above: A wartime era private photograph of four Army Air Corps aviation cadets.
The man standing to the far left is Robert M. Barkey. Mr. Barkey would eventually
go into combat with the 325th Checkertail Clan. He would fly a total of 53 combat
missions in both the P-47 (45 missions) and the P-51 (8 missions), logging over 200
hours of combat flying time. For his efforts, he was awarded the Distinguished Flying
Cross, the Air Medal, with 12 Oak Leaf clusters, three battle stars and a Presidential
Unit Citation. Mr. Barkey was credited with downing 5 Me-109s and was credited
with a probable, a Macchi 202. I had the pleasure of meeting Mr. Barkey on many
occasions as he was a terrific gentleman, always smiling, happy to talk airplanes and
generous with his time. He was a true gentleman and a hero.
|Above and right: Photographs of Air Corps Cadets having fun in Hot
Springs, Arkansas. In the photo to the right, the cadets are identified as
(L to R): Dick (no last name), Blaine Madden and Jim (unknown last name).
In the photograph above the man on the mule is identified as Melvin
Meyers, Blaine Madden is sitting on the left in the cart, and the male on
the right is identified only as "me."
The photograph to the right is dated December 24, 1943. The
photograph above is dated January 1944, and was taken in Hot Springs,
Arkansas, in front of the John C. Bohl Jewelry store.
Both Dick and Blaine can be seen wearing the same type of cadet
jacket as shown above, with the Air Corps cadet patch on the right
lower arm, just visible in the photos.
|Left: An Army Air Corps/Force cadet sidecap with correct cadet
insignia, displayed along with a pair of WWII era headphones.
|This particular propeller, model C 707, Serial Number 32843, was primarily used on the Cessna
T-50/UC-78, commonly called the "Bamboo bomber." The Bamboo bomber was used for
training at Yuma Army Air Field for several years during WWII.
The original wartime Yuma Army Air Field decals still cover the center section of the propeller.
ELECTIONS TO the world's largest democracy - as per data released by the Election Commission of India (ECI) on February 14, 2014, there are a total of 81.45 crore registered electors in the country - don't come cheap. If a recent report is anything to go by, a whopping R30,000 crore is likely to be spent during the ongoing Lok Sabha polls (the figure includes the total poll spending by the government, political parties and candidates), making it by far the most expensive electoral exercise in Indian history.
Of the R30,000 crore, the exchequer is likely to spend about R7,000 crore to hold the electoral exercise, a recent study carried out by the Centre for Media Studies (CMS), found out. While the ECI is likely to spend around R3,500 crore, the Union home ministry, Indian Railways, various other government agencies and state governments are expected to spend a similar amount to put in place the means to ensure free and fair polls.
The expenditure, projected by the New Delhi-based not-for-profit think tank, is set to rival the $7 billion (approximately R42,000 crore) spent by candidates and parties in the 2012 US presidential elections.
In fact, a similar study conducted by the Associated Chambers of Commerce and Industry of India (Assocham) said the estimated election spending amount will create a huge multiplier GDP effect of at least R60,000 crore, giving a shot in the arm to the Indian economy.
Data collected from the ECI and law ministry websites and compiled by the poll panel show that expenditure on conducting Lok Sabha polls has increased manifold - from R10.45 crore spent by the Centre in 1952 to R846.67 crore for the 2009 polls.
Cost-wise, the 2004 Lok Sabha election was the heaviest on the government exchequer with about R1,114 crore spent in the exercise. In that election, the per voter cost, too, was the highest, as the government had spent about R17 per elector.
There was an increase in the election cost by 17.53% vis-a-vis the 1999 general elections despite the fact that there was a reduction in the number of polling stations by 11.26%, the ECI data revealed.
As per ECI guidelines, the entire expenditure for conducting elections to the Lok Sabha is borne by the central government while states bear the expenses for conducting elections to state legislatures, when such elections are held independently.
If a concurrent election to the Lok Sabha and a state legislative assembly is held, then such expenditure is shared between the two governments. Expenditure incurred on items of common concern to the Centre and state governments like expenditure on regular election establishments, preparation and revision of electoral rolls, etc, is shared on a 50:50 basis, irrespective of whether such expenditure is incurred in connection with elections to the Lok Sabha or state legislatures. Even if the election is for Lok Sabha, expenditure towards law and order maintenance is borne by the respective state governments only, the ECI rules say.
This year, the limit on election expenditure incurred by a candidate for Parliamentary constituencies was raised to R70 lakh from R40 lakh in bigger states, while for smaller states and union territories, such as Arunachal Pradesh, Goa, Sikkim, Andaman and Nicobar Islands, Chandigarh, Dadra and Nagar Haveli, Daman and Diu, Lakshadweep and Puducherry, the expenditure limit would be R54 lakh against R27-35 lakh earlier.
The revision was done due to an increase in the number of electors, polling stations as well as an increase in the cost inflation index.
The enhanced expense limit comes in the wake of political parties making a strong pitch in this regard at recent meetings with top officials of the ECI. The parties had argued that the current limits were too meagre compared with the rise in prices on account of inflation. As per experts, the other reason for revising the poll limit is under-reporting by candidates. It is believed that most of the candidates declare barely half the expenditure they are allowed to incur by the ECI.
The CMS study says the decision to hike expenditure limits is one of the reasons why poll spending is likely to touch the R30,000-crore mark this year. Till recently, political parties used to spend more during elections. Now, the trend has changed with candidates in most cases spending more than the parties. Now where is this money coming from? It is coming from crorepati candidates, corporates and contractors, CMS chairman N Bhaskara Rao told the media recently.
As per a rough, unofficial estimate, after the hike in poll expenditure cap, candidates in fray for 543 seats alone could spend nearly R4,000 crore in the Lok Sabha polls. Rao claimed that different industries in different states contribute to election funding. Be it the tendu leaf business, mining business or the cement industry, they all contribute, he added.
While the official limit for each of the Parliamentary candidates in 543 constituencies has been fixed at R70 lakh, the previous experience of different agencies at the ground level shows that this time around, each of the contestants, in a majority of cases, may end up spending up to R7 crore, the scanner of the Election Commission notwithstanding, the Assocham study noted.
It is extremely difficult for the official machinery to minutely monitor the expenditure details of the candidates. The past experience shows that the number of crorepatis fighting the elections far exceeded the commoner, who would find even R70 lakh difficult to raise unless he or she is from a cash-rich big party, the study added.
As per the self-sworn affidavits of candidates who contested in the first four phases of the ongoing elections, there were 16, 23, 397 and 20 crorepatis, respectively, in the fray. The analysis was conducted by the Association for Democratic Reforms, a civil society group vying for transparency in Indian politics, and uploaded on its website recently. Also, the average assets (per candidate) of the candidates in the first four phases of the ongoing elections stood at R5.75 crore, R9.12 crore, R3.05 crore and R2.12 crore, respectively, the report noted.
Boost to economy
The businesses in the media such as television channels, newspapers, city hoardings, printers, social media, transport and hospitality such as bus/taxi operators, tents/ scaffoldings, caterers and airlines will see a direct positive impact of the election budgets of the political parties, as also the state machinery.
However, the greater economic impact would be seen in the form of the GDP multiplier effect since those earning from the elections would be spending at least 80-90% of such earnings. The propensity to save is small among the workers, employees and even the owners of the unorganised businesses, which will generally be more useful for electioneering except TV channels and newspapers, Assocham president Rana Kapoor was quoted as saying in a press release.
However, he said the study has sought to capture the ground situation and in no way reflects an endorsement by Assocham on the use of money power in elections.
We stand for elections, which are free from money and muscle power and do not support a huge budget, even though the economy may get a consumption boost, he clarified. | 1 | 14 |
|Initial release||December 1982|
|Stable release||2017 / March 21, 2016|
|Operating system||Windows, macOS, iOS, Android|
|Available in||English, German, French, Italian, Spanish, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian|
AutoCAD is a commercial computer-aided design (CAD) and drafting software application. Developed and marketed by Autodesk, AutoCAD was first released in December 1982 as a desktop app running on microcomputers with internal graphics controllers. Prior to the introduction of AutoCAD, most commercial CAD programs ran on mainframe computers or minicomputers, with each CAD operator (user) working at a separate graphics terminal. Since 2010, AutoCAD has also been released as a mobile and web app, marketed as AutoCAD 360.
AutoCAD is used across a wide range of industries, by architects, project managers, engineers, graphic designers, and many other professionals. It is supported by 750 training centers worldwide as of 1994.
AutoCAD was derived from a program begun in 1977 and released in 1979 called Interact CAD, also referred to in early Autodesk documents as MicroCAD, which was written prior to Autodesk's (then Marinchip Software Partners) formation by Autodesk cofounder Mike Riddle.
The first version by Autodesk was demonstrated at the 1982 Comdex and released that December. As Autodesk's flagship product, by March 1986 AutoCAD had become the most ubiquitous CAD program worldwide. The 2016 release marked the 30th major release of AutoCAD for Windows. The 2014 release marked the fourth consecutive year of AutoCAD for Mac.
The native file format of AutoCAD is .dwg. This and, to a lesser extent, its interchange file format DXF, have become de facto, if proprietary, standards for CAD data interoperability, particularly for 2D drawing exchange. AutoCAD has included support for .dwf, a format developed and promoted by Autodesk, for publishing CAD data.
Autodesk's logo and, respectively, AutoCAD icons have changed for several versions through the years.
|Official Name||Version||Release||Date of release||Comments|
|AutoCAD Version 1.0||1.0||1||1982, December||DWG R1.0 file format|
|AutoCAD Version 1.2||1.2||2||1983, April||DWG R1.2 file format|
|AutoCAD Version 1.3||1.3||3||1983, August||DWG R1.3 file format|
|AutoCAD Version 1.4||1.4||4||1983, October||DWG R1.4 file format|
|AutoCAD Version 2.0||2.0||5||1984, October||DWG R2.05 file format|
|AutoCAD Version 2.1||2.1||6||1985, May||DWG R2.1 file format|
|AutoCAD Version 2.5||2.5||7||1986, June||DWG R2.5 file format|
|AutoCAD Version 2.6||2.6||8||1987, April||DWG R2.6 file format. Last version to run without a math co-processor.|
|AutoCAD Release 9||9.0||9||1987, September||DWG R9 file format|
|AutoCAD Release 10||10.0||10||1988, October||DWG R10 file format|
|AutoCAD Release 11||11.0||11||1990, October||DWG R11 file format|
|AutoCAD Release 12||12.0||12||1992, June||DWG R11/R12 file format. Last release for Apple Macintosh till 2010.|
|AutoCAD Release 13||13.0||13||1994, November||DWG R13 file format. Last release for Unix, MS-DOS and Windows 3.11.|
|AutoCAD Release 14||14.0||14||1997, February||DWG R14 file format|
|AutoCAD 2000||15.0||15||1999, March||DWG 2000 file format|
|AutoCAD 2000i||15.1||16||2000, July|
|AutoCAD 2002||15.2||17||2001, June|
|AutoCAD 2004||16.0||18||2003, March||DWG 2004 file format|
|AutoCAD 2005||16.1||19||2004, March|
|AutoCAD 2006||16.2||20||2005, March||Dynamic Block|
|AutoCAD 2007||17.0||21||2006, March||DWG 2007 file format|
|AutoCAD 2008||17.1||22||2007, March||Annotative Objects introduced. AutoCAD 2008 and higher (including AutoCAD LT) can directly import and underlay DGN V8 files.|
|AutoCAD 2009||17.2||23||2008, March||Revisions to the user interface including the option of a tabbed ribbon|
|AutoCAD 2010||18.0||24||2009, March 24||DWG 2010 file format introduced. Parametrics introduced. Mesh 3D solid modeling introduced. PDF underlays. Both 32-bit and 64-bit versions of AutoCAD 2010 and AutoCAD LT 2010 are compatible with and supported under Microsoft Windows 7.|
|AutoCAD 2011||18.1||25||2010, March 25||Surface Modeling, Surface Analysis and Object Transparency introduced. On October 15, 2010, AutoCAD 2011 for Mac was released. Compatible with and supported under Microsoft Windows 7|
|AutoCAD 2012||18.2||26||2011, March 22||Associative Array, Model Documentation, DGN editing. Support for complex line types in DGN files is improved in AutoCAD 2012.|
|AutoCAD 2013||19.0||27||2012, March 27||DWG 2013 file format|
|AutoCAD 2014||19.1||28||2013, March 26||File Tabs, Design Feed, Reality Capture, Autodesk Live Maps|
|AutoCAD 2015||20.0||29||2014, March 27||Line smoothing (anti-aliasing), Windows 8.1 support added, dropped Windows XP support (incl. compatibility mode)|
|AutoCAD 2016||20.1||30||2015, March 23||More comprehensive canvas, richer design context, and intelligent new tools such as Smart Dimensioning, Coordination Model, and Enhanced PDFs|
|AutoCAD 2017||21.0||31||2016, March 21||PDF import, Associative Center Marks and Centerlines, DirectX 11 graphics|
Compatibility with other software
ESRI ArcMap 10 permits export as AutoCAD drawing files. Civil 3D permits export as AutoCAD objects and as LandXML. Third-party file converters exist for specific formats such as Bentley MX GENIO Extension, PISTE Extension (France), ISYBAU (Germany), OKSTRA and Microdrainage (UK); also, conversion of .pdf files is feasible, however, the accuracy of the results may be unpredictable or distorted. For example, jagged edges may appear.
AutoCAD and AutoCAD LT are available for English, German, French, Italian, Spanish, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian (also through additional Language Packs). The extent of localization varies from full translation of the product to documentation only. The AutoCAD command set is localized as a part of the software localization.
AutoCAD supports a number of APIs for customization and automation, including AutoLISP, VBA, .NET and ObjectARX. ObjectARX is a C++ class library, which was also the base for:
- a) products extending AutoCAD functionality to specific fields;
- b) creating products such as AutoCAD Architecture, AutoCAD Electrical, AutoCAD Civil 3D; or
- c) third-party AutoCAD-based applications.
A large number of AutoCAD plugins (add-on applications) are available on the Autodesk Exchange Apps application store. AutoCAD's DXF, drawing exchange format, allows importing and exporting drawing information.
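As an illustration of the group-code/value structure DXF uses, the C sketch below writes a minimal ASCII DXF file containing a single LINE entity; a bare-bones file like this is readable by many CAD tools, though current AutoCAD releases expect additional header sections:

#include <stdio.h>

/* Write a minimal ASCII DXF file with one LINE entity; DXF is a sequence of
 * group-code/value pairs, one per line. */
int main(void)
{
    FILE *f = fopen("line.dxf", "w");
    if (!f)
        return 1;
    fputs("0\nSECTION\n2\nENTITIES\n", f);  /* open the ENTITIES section */
    fputs("0\nLINE\n8\n0\n", f);            /* LINE entity on layer "0" */
    fputs("10\n0.0\n20\n0.0\n", f);         /* start point X, Y */
    fputs("11\n100.0\n21\n50.0\n", f);      /* end point X, Y */
    fputs("0\nENDSEC\n0\nEOF\n", f);        /* close the section and the file */
    fclose(f);
    return 0;
}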
Autodesk has also developed a few vertical programs (AutoCAD Architecture, AutoCAD Civil 3D, AutoCAD Electrical, AutoCAD ecscad, AutoCAD Map 3D, AutoCAD Mechanical, AutoCAD MEP, AutoCAD Structural Detailing, AutoCAD Utility Design, AutoCAD P&ID and AutoCAD Plant 3D) for discipline-specific enhancements. For example, AutoCAD Architecture (formerly Architectural Desktop) permits architectural designers to draw 3D objects, such as walls, doors and windows, with more intelligent data associated with them rather than simple objects, such as lines and circles. The data can be programmed to represent specific architectural products sold in the construction industry, or extracted into a data file for pricing, materials estimation, and other values related to the objects represented. Additional tools generate standard 2D drawings, such as elevations and sections, from a 3D architectural model. Similarly, Civil Design, Civil Design 3D, and Civil Design Professional support data-specific objects, facilitating easy standard civil engineering calculations and representations. Civil 3D was originally developed as an AutoCAD add-on by a company in New Hampshire called Softdesk (originally DCA). Softdesk was acquired by Autodesk, and Civil 3D was further evolved.
AutoCAD LT is the lower cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT to have an entry-level CAD package to compete in the lower price level. AutoCAD LT, priced at $495, became the first AutoCAD product priced below $1000. It is sold directly by Autodesk and can also be purchased at computer stores (unlike the full version of AutoCAD, which must be purchased from official Autodesk dealers).
As of the 2011 release the AutoCAD LT MSRP has risen to $1200. While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features:
- 3D Capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models as well as 3D printing.
- Network Licensing: AutoCAD LT cannot be used on multiple machines over a network.
- Customization: AutoCAD LT does not support customization with LISP, ARX, .NET and VBA.
- Management and automation capabilities with Sheet Set Manager and Action Recorder.
- CAD standards management tools.
AutoCAD LT 2015 introduced Desktop Subscription (rental) from $360 per year.
Formerly marketed as AutoCAD WS, AutoCAD 360 is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile device and web using a limited AutoCAD feature set — and using cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels — marketed as Pro ($4.99 monthly or $49.99 yearly) and Pro Plus ($99.99 yearly) — including various amounts of storage, tools, and online access to drawings. 360 includes new features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD 360 uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome.
AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android).
In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layer, and measurements — in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support of unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files on-line — or off-line in the absence of an Internet connection.
According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS Product Manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space.
AutoCAD is licensed, for free, to students, educators, and educational institutions, with a 36-month renewable license available. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) older than AutoCAD 2014 SP1, the output includes a plot stamp / banner on all four sides. Objects created in the Student Version cannot be used for commercial purposes. Student Version objects "infect" a commercial version DWG file if they are imported into versions older than AutoCAD 2015.
The Autodesk Education Community provides registered students and faculty with free access to different Autodesk applications. The 36-month student license can be downloaded from http://www.autodesk.com/education/free-software/featured after creating an account and signing in.
AutoCAD is a software package created for Windows, and usually any new AutoCAD version supports the current Windows version and some older ones. AutoCAD 2016 and 2017 support Windows 7 up to Windows 10.
Autodesk stopped supporting Apple's Macintosh computers in 1994. Over the next several years, no compatible versions for the Mac were released. In 2010 Autodesk announced that it would once again support Apple's Mac OS X software in the future. Most of the features found in the 2012 Windows version can be found in the 2012 Mac version. The main difference is the user interface and layout of the program. The interface is designed so that users who are already familiar with Apple's macOS software will find it similar to other Mac applications. Autodesk has also built in various features in order to take full advantage of Apple's Trackpad capabilities as well as the full-screen mode in Apple's OS X Lion. AutoCAD 2012 for Mac supports both the editing and saving of files in DWG formatting that will allow the file to be compatible with other platforms besides the OS X. AutoCAD 2014 for Mac supports Apple OS X v10.9.0 or later (Mavericks), OS X v10.8.0 or later (Mountain Lion) with 64-bit Intel processor.
AutoCAD LT 2013 is now available through the Mac App Store for $899.99. The full featured version of AutoCAD 2013 for Mac, however, is not available through the Mac App Store due to the price limit of $999 set by Apple. AutoCAD 2014 for Mac is available for purchase from Autodesk's Web site for $4,195 and AutoCAD LT 2014 for Mac for $1,200, or from an Autodesk Authorized Reseller. The latest version available for Mac is AutoCAD 2016 as of October 2016.
Android and iOS
Autodesk AutoCAD 360 is the official AutoCAD mobile app for Android, iOS, and Windows tablets (UWP). It can view, markup, measure and edit (2D editing only) any DWG file from a mobile phone or tablet. The actual file editing operations are performed in the cloud, in genuine DWG file format.
- ↑ AutoCAD Civil 3D 2011 Drawing Compatibility. AutoCAD Civil 3D 2011 User's Guide pp. 141–142. Autodesk (April 2010). Retrieved on January 29, 2013.
- ↑ AutoCAD 2016 Language Packs | AutoCAD | Autodesk Knowledge Network.
- ↑ AutoCAD Exchange Apps. Autodesk. Retrieved on 11 August 2013.
- ↑ Questions and Answers (PDF). Retrieved on 2016-03-30.
- ↑ 6.0 6.1 Autodesk. AutoCAD WS. iTunes Preview. Apple. Retrieved on 30 September 2011.
- ↑ 7.0 7.1 7.2 Ozler, Levent. AutoCAD for Mac and AutoCAD WS application for iPad and iPhone. Dexigner. Dexigner. Retrieved on 30 September 2011.
- ↑ 8.0 8.1 8.2 Ozler, Levent. AutoCAD for Mac 2012: Built for Mac OS X Lion. Dexigner. Dexigner. Retrieved on 30 September 2011.
- ↑ 9.0 9.1 9.2 Ozler, Levent. AutoCAD WS for Android. Dexigner. Dexigner. Retrieved on 30 September 2011.
- ↑ Thomson, Iain. Autodesk Shifts Design Apps to the Cloud. The Register. Retrieved on 30 September 2011.
- ↑ AutoCAD WS: Moving Forward. Augi Autodesk Users Group International, January 29th, 2013. Retrieved on 26 April 2013.
- ↑ Overview of Plotting. Retrieved on 19 March 2016.
- ↑ System requirements for AutoCAD 2016 | AutoCAD | Autodesk Knowledge Network. Knowledge.autodesk.com (2015-12-16). Retrieved on 2016-03-19.
- ↑ 14.0 14.1 Clark, Don (16 August 2011). Autodesk Adopts Apple App Store for Mac Software. The Wall Street Journal. http://blogs.wsj.com/digits/2011/08/16/autodesk-adopts-apple-app-store-for-mac-software/?KEYWORDS=AutoCAD. Retrieved 30 September 2011.
- ↑ AutoCAD 360 - Android Apps on Google Play. Play.google.com. Retrieved on 2016-03-19.
- ↑ AutoCAD 360.
- Hurley, Shaan. AutoCAD Release History. Between the lines.
- Mike Riddle & the Story of Interact, AutoCAD, EasyCAD, FastCAD & more. DigiBarn Computer Museum. Retrieved on 12 November 2016.
- About. Michael Riddle's Thoughts. Retrieved on 12 November 2016.
- Plantec, Peter (7 January 2012). The Fascinating Story of How Autodesk Came to Be (Part 1). Studio Daily. Access Intelligence.
- Grahame, James (17 May 2007). Mike Riddle's Prehistoric AutoCAD. Retro Thing.
This article uses material from the Wikipedia article AutoCAD, which is released under the Creative Commons Attribution-ShareAlike 3.0 Unported License (view authors).
Unlike the evidence for Là na Caillich, we're on firmer ground as far as evidence for ritualised celebrations is concerned for Midsummer's Eve (or St John's Eve, Bonfire Night,1 Féill Sheathain,2 Teine Féil' Eóin,3 or Feaill Eoin4) on June 24th in Scotland,5 or the eve of St John's
Day in Ireland, on June 23rd.6 The provenance of these celebrations are debatable, however, and once again, as with the other festivals focused around the solstices and equinoxes, the evidence seems to point to there being a strong outside influence in the celebrations.7
Some of the earliest mentions of Midsummer celebrations in Scotland date to the sixteenth century, usually in the context of their being condemned (and then banned) by the Kirk for their perceived pagan associations.8 While the Kirk officially frowned on such traditions, in reality the celebrations persisted well into the eighteenth and nineteenth centuries, but notably they were only ever really prevalent in the most heavily Scandinavian or English-influenced areas such as the Lowlands and the north-east parts of Scotland.9
This distribution, along with a similar pattern in Wales, suggests an outside influence is responsible, but looking to Ireland and the Isle of Man complicates the issue, since on the face of it, a strong tradition of Midsummer celebrations is prevalent across both of these countries.10 Evidence for Midsummer celebrations in Ireland can be found as far back as the early fourteenth century,11 although in this case references are specifically to St Peter’s Eve (on June 28th. St Peter’s Day was often celebrated in the same manner as St John’s Eve, possibly providing a ‘second chance’ at outdoor celebrations in cases of bad weather dampening celebrations on the earlier date)12 in less condemnatory tones than those found in Scotland, but notably, the reference from New Ross in 1305 comes from a town of English settlers.13
The fires themselves were a common feature of Midsummer celebrations anywhere, and a fourteenth century description of the fires in Shropshire notes the building of three separate fires:
“’In the worship of St John, men waken at even, and maken three manner of fires: one is clean bones; another is of clean wood and no bones, and is called a wakefire, for men sitteth and wake by it; the third is made of bones and wood, and is called St John’s fire.’ The stench of the burning bones…was thought to drive away dragons.”14
Bones are sometimes mentioned in Scottish contexts, and there is also an element of some sort of mourning or funerary rites involved in the surviving descriptions, which suggest a common origin. In Scandinavian Midsummer rites, the bonfires were supposed to represent the funeral pyre of Baldr and mistletoe was gathered at this time.15 Certain ritual elements may echo this custom in Gaelic contexts, as shall be seen; otherwise, the strong overlap in the customs found at Bealltainn and Midsummer celebrations suggests a shift in focus from Bealltainn to Midsummer festivities over time. According to Joyce, some parts of Ireland celebrated the bonfires on May 1st, while others celebrated on June 24th.16 This was perhaps due to the Anglicisation of the ritual year (under whose influence the old Gaelic festivals would presumably have been less favoured compared to the big festivals in the English calendar), but also ecclesiastical influence, with the increasing popularity of St John, to whom the day was dedicated, and the shifting of focus away from perceived 'pagan rites' that were inherent at Bealltainn. By the nineteenth century, Midsummer celebrations could be found across most of Europe and even parts of north-west Africa.17 With the Gàidhealtachd's much more insular attitudes, both socially and religiously in many parts, the resistance to adopting such non-native festivities may be explained.
The fires in Scotland
As has been noted, the customs and traditions associated with Midsummer in Scotland were largely confined to the Lowlands and the north and east of Scotland – the most heavily Scandinavian and English-influenced areas – but most especially the Northern Isles can be seen as the stronghold of the tradition.18 Given the shifting of traditions away from pagan connotations, it’s no surprise to find a huge overlap between these customs and traditions with those found at Bealltainn.19
Celebrations began on the evening of the Feast of St John the Baptist (June 24th), and the main focal point of the festivities was the bonfire, although the accompanying rites were more solemnly observed in the north than in the Lowlands, where the emphasis was on fun and festiveness.20 These were generally lit after sunset – which at this time of year would have been very late indeed,21 and around the fire there was food (such as gudebread)22 and drink to be had, along with dancing and the leaping of flames, and the subsequent taking of the fire back to the homestead.23
On Orkney, according to a minister writing in the eighteenth century, the peats for the fire were provided by those whose horses had suffered disease, or been gelded, during the year, with the livestock then being led sunwise around the flames.24 The bonfires were lit “on the most conspicuous place of the parish, commonly facing the south,”25 suggesting a natural communal focus, since presumably this position would allow the fire to be seen from the most homesteads in the area (and so they would get the benefit of the flames), while the southerly situation probably provided the best opportunity for the smoke to waft over the maximum amount of fields in the area.26
In some bonfires a bone was thrown or placed into it, and this was invariably explained as being symbolic of the animal that would previously have been sacrificed to the fire, or else the bone was representative of a man who was made a martyr (although no one appears to have remembered much more in the way of detail).27 This could perhaps be seen as evidence of an echo of Scandinavian influence, with Baldr’s associations with the day.
Branches of birch were collected and hung over the doorways for protection,28 and torches of heather or furze were lit from the main fire and taken back to the homestead by the head of the house, where he would then go round the field sunwise three times to bless the crops, cabbage and kail and ensure a good harvest.29 The same was done around the byre to bless the cattle and safeguard them against disease or casting calves. Meanwhile, the young men and boys remained at the bonfire, where they waited for the flames to die down before leaping them and then heading home at sunrise.30
The fishermen of Shetland would gather on Midsummer's Eve for the Fisherman's Foy, and give a toast to the sea and the crops to ensure a good catch and a good harvest. Each man would take a turn to toast, and say, "Lord! Open the mouth of the grey fish, and haud thy hand about the corn."31
Midsummer's Eve was also a time when witches and fairies were supposed to be at their most potent and active. Care was taken not to give out any dairy produce, to ensure the profit did not leave the house with it.32 But while there were inherent dangers of the season, this power also had an upside: particular herbs, especially those for healing or protection, were considered to be at their most potent when gathered at this time. St John's Wort was especially looked for, some of which might be hung in the house and outhouses for protection against thunder and evil influences, while some more could be burned in the bonfire or put in the fields.33
Carmichael gives several charms that were used for the picking of St John’s Wort – or St Columba’s Plant, as it was also known – and notes that it was at its most potent when the plant was discovered accidentally rather than purposely looked for:
“Plantlet of Columba,
Without seeking, without searching,
Plantlet of Columba,
Under my arm forever!…”34
With its protective properties, the plant was often sewn into the bodices and vests so that it would sit beneath the left armpit of the wearer, thus ensuring no harm should come to them from the likes of witchcraft or the Good Folk, and neither should they be afflicted by the evil eye or the second sight.
Fern seeds were also sought after, since it was considered to have similarly potent and protective properties:
“Only on Midsummer Eve,’ it is said, ‘can it be gathered from the wondrous night-seeding fern. On that one night it ripens from twelve to one, and then it falls and disappears instantly…It has the wonderful property of making people invisible.”35
Elderberries gathered at Midsummer were said to offer protection from witchcraft, but also bestowed magical powers on those who gathered it.36
The fires in Ireland
The evidence for Ireland is much more abundant than in Scotland, and as one might expect, it follows the same lines; bonfires and blessings of livestock and crops with blazing bushes, along with more specific customs such as the gathering of St John’s Wort.
In addition to all of this were the patterns – religious gatherings that were often focused on the hillsides, loughs and holy wells. These were often as notorious for their faction fighting as they were renowned for the votive rounds that were made by the pilgrims in attendance, or the accompanying dancing, drinking, eating, games and other kinds of amusements. The pattern of Glendalough is said to have been "…an unsafe locality unless a stipendiary magistrate and about 100 police could keep the combatants, the Byrnes, Tools and Farrells, etc, separate."37 Such gatherings were eventually banned by the church in the nineteenth century, as much for the violence and drunken debauchery that came to be associated with them, as for the perceived pagan vestiges that clung to them, although stripped-down versions of them did manage to survive in certain parts.38
The bonfires were usually communal affairs for the whole village, except in extremely remote areas where farms would tend to their own fires.39 Exceptions were also made if there had been a recent death in the family; in this case, no fires were lit, no rites were carried out, at home or at the communal bonfire40 – presumably, with the death being so recent, the family were still tainted by it themselves, and risked spreading such ill fortune to the community if they took part. Otherwise, failure to observe and take part in the rites was sure to invite disease on the crops, and disaster for the harvest.41
In the lead up to the festivities, fuel for the main bonfire was often collected from door to door in the village. Since it was considered unlucky (and just plain mean) to refuse to contribute anything, the village bonfire was usually well-fed in this respect, with peats and firewood, and even old bits of furniture and other kinds of inflammable rubbish (even tyres, more recently), and ended up so large that a ladder was required to finish off piling the fuel up on top. Children would also go round gathering sticks and brambles and anything else they could find to contribute to the pile.42
The main bonfire was lit as the sun set, and this task often fell to a wise old man of the village, who would light it with a traditional prayer for the occasion:
“In onóir do Dhia agus do Naomh Eoin, agus chun toraidh agus chun taibhe ar ár gcur agus ar ár saothar in ainm an Athar agus an Mhic agus an Spirid Naoimh, Amen. ’In the honour of God and of St John, to the fruitfulness and profit of our planting and our work, in the name of the Father and of the Son and of the Holy Spirit, Amen.’ ”43
After the bonfire was lit, holy water was then sprinkled about the fire and into the flames for blessing. In some parts this was done by the man who lit the fire, but in County Cork it was often a child who was given this job.44 With the formalities over, the festivities would begin, with music, singing, dancing, food and drink, games, story-telling, competitions, and amusements. In some parts of Ireland, it was common to set up a craebh, a large wooden pole that formed a centre point for the gathering, and a place where dancing competitions were held, with gingerbread being given as prizes for the men, and garters for the women.45 It is tempting to see such a practice as being a sort of artificial bile.
In Connaught, ‘goody’ bread (white bread, bought specially from the baker) was often the only food on offer, soaked in sweetened and spiced hot milk (the milk having been stolen from a neighbour’s cow, a lot of the time). This treat was cooked up in a large pot, heated on the main bonfire, or a smaller one made specifically for cooking.46
This main bonfire was usually situated on a spot where the wind would carry the smoke over the main crops in the area, so they’d get the benefit of the protective qualities of the bonfire.47 Lady Wilde gives a good description of the festivities as the flames died down:
“When the fire has burned down to a red glow the young men strip to the waist and leap over or through the flames; this is done backwards and forwards several times, and he who braves the greatest blaze is considered the victor over the powers of evil, and is greeted with tremendous applause. When the fire burns still lower, the young girls leap the flame, and those who leap clean over three times back and forward will be certain of a speedy marriage and good luck in after life, with many children. The married women then walk through the lines of the burning embers; and when the fire is nearly burnt and trampled down, the yearling cattle are driven through the hot ashes, and their back is singed with a lighted hazel twig. These hazel rods are kept safely afterwards, being considered of immense power to drive the cattle to and from the watering places. As the fire diminishes the shouting grows fainter, and the song and the dance commence; while professional story-tellers narrate tales of fairy-land, or of the good old times long ago, when the kings and princes of Ireland dwelt amongst their own people, and there was food to eat and wine to drink for all corners to the feast at the king’s house.”48
The jumping of the bonfire in particular was of great importance, not just for luck and marriage, but also for health.49 The ashes in particular were seen to have strong healing properties, and when collected from the bonfire and stored for use, a little of the ash could be mixed with water and drunk to cure general ailments, or else used as a wash for cuts, wounds, sores, and the like.50 In this vein, Hedderman (writing with characteristic exasperation at such ridiculous superstitions),describes a case where a young man suffers with a badly septic finger. While Hedderman remonstrates the young man about keeping such wounds clean to prevent infection, the young man’s father has a different opinion:
“His father, who was sitting in a chair in the corner, got up, and shaking a closed fist, with a dozen loud imprecations, exclaimed, ‘I knew it would be like this; he did not take in a red coal from the fire on St John’s night.’”51
The dying coals or embers were put to good use in the field, byre, barn and house as well. In addition to healing, keeping some of the ashes in the house was supposed to be good for luck, or ensuring a gentle crossing over for the elderly. Some of the ashes or embers might also be put into the hearth as well,52 and in newly built houses, a shovel was taken to the fire and part of it was taken to the new house so that the first fire in it would be lit from the St John’s bonfire, to ensure luck and prosperity to the inhabitants.53
In the field, the ashes or dying embers were thrown into each corner to bless the crops and ensure a bountiful harvest, or else a bush or bundle of reeds might be lit and taken round the boundaries instead, carried through the fields, or thrown into the crops.54 The torch could be taken around the house, byre and dairy as well, and in some counties (such as County Clare), the burning torch was touched to the cattle to ensure healthy calves. The ashes or embers from the fire placed in the dairy to ensure luck and plenty of milk and butter, as well as to protect against witchcraft. A charred stick might also be brought from the fire, and used to mark a cross at the door and on the churning equipment.55
In Munster, the torches were made of bunches of hay and straw, tied to poles, lit, and then processed about the hill – Cnoc Áine – where the main bonfire was lit. The procession was populated entirely by the men, and led by a member of the Quinlan family, according to Fitzgerald.56 The torches – cliars – were taken around the summit of the hill and then down to the fields, and through the cattle. Several sources speak of this as being a sort of funeral procession for Áine, the tutelary goddess of the region, carried out in remembrance of her, and in the same spirit as the Good Folk, who are said to process in the same manner.57 Once again, such associations are reminiscent of the funerary rites to Baldr, but also, perhaps, evoke half-remembered associations of the funerary rites held in honour of tutelary deities like Tailtiu at Lùnastal, which were not far off at this point.
Before heading to the main bonfire, households would often light a fire in the farmyard as a focus for protective rites and blessings to be carried out. These fires were usually on a much smaller scale than the grand affairs of the ‘big bonfires’, “usually no more than one or two furze bushes, or a little heap of twigs, or a sod or two of blazing turf from the kitchen hearth.”58 Being small, the fire was not expected to last long, and it wasn’t encouraged to once the rites had been finished and everyone was free to head to the main event.59 The rites were similar to those found at the big bonfire – a blessing was made in God’s name, and holy water was sprinkled about the house, the farm building, livestock, crops, and those in attendance.60
While the ashes were supposed to have healing properties, several plants were considered to be particularly potent, as in Scotland. St John's Wort was looked for, as was mugwort in County Cork and County Waterford, although there was a certain leeway in the timing of its picking that probably accommodated bad weather and the time difference between New Style and Old Style dates (anytime between Midsummer and July 4th).61 The juice of the St John's Wort was then boiled and drunk as a preventative against illness.62
The fires in Man
The Isle of Man gives us some very important examples of festival rites at Midsummer, showing both the influences of Norse and Gaelic culture. While the fires were lit and the cattle and fields were blessed with furze (or gorse) torches as elsewhere, and mugwort was collected as a preventative against witchcraft,63 some distinctive differences can be found in the form of the rents, or offerings, that were made to Manannán at this time.
Manannán has long been associated with the Isle of Man, and tradition makes him the first king of the island – a benevolent and peaceful leader, if a pagan. A sixteenth century description of the island has this to say of him:
“Manann MacLer, the first Man that had Mann, or ever was ruler of Mann, and the Land was named after him, and he reigned many years and was a Paynim, and kept by necromancy the Land of Mann under mists, and if he dreaded any enemies, he would make one man to seem an hundred by his art magick. And he never had any farm of the Comons, but each one to bring a certain quantity of green Rushes on Midsummer Eve, some to a place called Warfield,64 and some to a place called Man, and yet is so called.”65
A sixteenth century poem has almost the exact same thing to say on the matter, and a seventeenth century writer, James Chaloner, repeats similar sentiments in his A Short Treatise on the Isle of Man.66 Moore, writing in the nineteenth century, notes that a farm near to one of the sites where the rents were made still grew an abundance of rushes in his day,67 so presumably this was a well-established custom, although I have found no first hand accounts describing it as yet. At Barrule, Rhys noted that the site was visited at other times of the year, notably the first Sunday of the harvest, and evidence of offerings being made at a nearby well were in abundance, including bent pins and buttons.68
Although there appears to be some confusion, in some sources, over whether it was meadow grass or rushes that were given, the consensus appears to err in favour of rushes;69 not only are they appropriate in terms of where they grow – near water, fresh or salt – but rushes are also linked to another custom associated with the Midsummer rites held in his name. This is the Tynwald Court, held on a hill near St John's Chapel on the island, where everyone would gather to hear the laws and ordinances that were to be enacted. On the approach to the hill, the path was strewn with green rushes.70
Such are the strong associations with Manannán on this day – on the eve of the festival of St John the Baptist, no less – it only reinforces the associations of the god and the saint as noted elsewhere, in the offerings to Shony.71
While there are many similarities with Bealltainn, and probable outside influences along the way, Midsummer is certainly a festival in its own right as well. In amongst all of the rites – the fires, the blessings, the processions, and so on – was a strong sense of looking towards safeguarding the harvest, and the continuing health of the livestock (but most especially, the cattle). By this point in the agricultural calendar, the crops would be growing strong, and in the next month or so, would start to ripen. The coming weeks would be crucial to the success of the harvest, and so it was only natural to look towards that point, when the potential for the crop was now plainly apparent, and take steps to try and prevent disaster. Likewise, with the pastoral calendar looking towards getting the cows into calf, if they weren't already, the successful fertilisation of the cows, and then birth of the calves, was also paramount in the farmer's mind.
At such a turning point in the solar calendar, people were facing one of the busiest periods in their year, as well as the gradual decline of daylight in which much of that work could be done. The bonfires may have been seen to recognise this turning of the sun, as MacCulloch suggests, but more than anything, the protective qualities of the fires themselves were most emphasised. First and foremost, Midsummer's Eve was about protecting the crops; without much to celebrate in terms of immediate abundance – as the later harvest celebrations would offer, and as Bealltainn did in marking the summer dairying season – the Midsummer's Eve celebrations were, on the one hand, decidedly forward-looking, whilst on the other, marked a brief pause in the labour before the real struggle began.
The coming weeks would more than likely have been a struggle for many households, since the previous year’s supplies of potatoes, wheat, oats, barley, or rye, would have diminished significantly – run out, even if the previous year had been particularly bad. July was known as ‘Hungry July’ or Iúl an Ghorta in Ireland,72 in reference to this struggle, and sometimes desperate measures would have to be taken. In the face of such a prospect, the Midsummer’s Eve celebrations surely gave a little hope and optimism.
1 Danaher, The Year in Ireland, 1972, p134.
2 In Gaelic (Scots Gaelic), Black, The Gaelic Otherworld, 2005, p556.
3 In Irish, Danaher, The Year in Ireland, 1972, p134.
4 In Manx, Moore, Folklore of the Isle of Man, 1891, p119.
5 Black, The Gaelic Otherworld, 2005, p556.
6 Danaher, The Year in Ireland, 1972, p137.
7 MacCulloch, The Religion of the Ancient Celts, 1911, p257-258.
8 Hutton, Stations of the Sun, 1996, 317-318.
9 Hutton, Stations of the Sun, 1996, 319; McNeill, The Silver Bough Volume II, 1959, p86.
10 Hutton, Stations of the Sun, 1996, 319.
11 In an ecclesiastical sense, the holy day was well established as early as the ninth century, however. See Stokes, The Martyrology of Oengus the Culdee, Félire Oengusso Céli dé, 1905, p142: “[June 24] John the Baptist’s royal nativity, if thou hast attended diligently, at the removal without disgrace of John the son (of Zebedee) to Ephesus.”
12 Hutton, Stations of the Sun, 1996, p312.
13 Hutton, Stations of the Sun, 1996, p312; p319-320.
14 Hutton, Stations of the Sun, 1996, p312.
15 Frazer, The Golden Bough, p676.
16 Joyce, A smaller social history of Ancient Ireland, 1906, p123.
17 Hutton, Stations of the Sun, 1996, 312.
18 McNeill, The Silver Bough Volume II, 1959, p90.
19 McNeill, The Silver Bough Volume II, 1959, p92.
20 Hutton, Stations of the Sun, 1996, p318.
21 In fact, in the most northerly parts of Scotland it never really gets properly dark around the time of the summer solstice. McNeill describes sunset in these parts as “hardly more than a gloaming.” McNeill, The Silver Bough Volume II, 1959, p89.
22 Gudebread refers to any sort of festival baked goods such as shortbread, sweetie-scones and festival bannocks, or simply a quality loaf of white bread bought from the bakers (which would have been considered a rare treat for many, at the time). McNeill, The Silver Bough Volume II, 1959, p89.
23 McNeill, The Silver Bough Volume II, 1959, p91.
24 McNeill, The Silver Bough Volume II, 1959, p90.
25 McNeill, The Silver Bough Volume II, 1959, p91.
26 Thistelton-Dyer notes that this was the case in the choice of siting the bonfires on Man, Thistelton-Dyer, British Popular Customs, 1911, p316.
27 Thistelton-Dyer notes that this was the case in the choice of siting the bonfires on Man, Thistelton-Dyer, British Popular Customs, 1911, p316.
28 Napier, Folk Lore, or, Superstitious Beliefs in the West of Scotland Within this Century, 1879, p117; McNeill, The Silver Bough Volume II, 1959, p89.
29 Hutton, Stations of the Sun, 1996, p318.
30 Spence, Shetland Folklore, 1899, p90; Napier, Folk Lore, or, Superstitious Beliefs in the West of Scotland Within this Century, 1879, p117; McNeill, The Silver Bough Volume II, 1959, p91.
31 County Folklore – Orkney and Shetland, p195.
32 Spence, Shetland Folklore, 1899, p139.
33 McNeill, The Silver Bough Volume II, 1959, p88.
34 Carmichael, Ortha nan Gàidheal: Carmina Gadelica Volume II, 1900, p101. Charm 167.
35 Carmichael, Ortha nan Gàidheal: Carmina Gadelica Volume II, 1900, p101. Charm 167.
36 Carmichael, Ortha nan Gàidheal: Carmina Gadelica Volume II, 1900, p101. Charm 167.
37 Evans, Irish Folk Ways, 1957, p263-264.
38 Evans, Irish Folk Ways, 1957, p264.
39 Ó Súillebháin, Irish Folk Custom and Belief, 1967, p71.
40 Danaher, The Year in Ireland, 1972, p145.
41 Danaher, The Year in Ireland, 1972, p134-136.
42 Danaher, The Year in Ireland, 1972, p138.
43 Danaher, The Year in Ireland, 1972, p139.
44 Danaher, The Year in Ireland, 1972, p142.
45 Danaher, The Year in Ireland, 1972, p151-152.
46 Danaher, The Year in Ireland, 1972, p151-152.
47 Danaher, The Year in Ireland, 1972, p145.
48 Wilde, Ancient Legends, Mystic Charms, and Superstitions of Ireland, 1887, p214-215.
49 Ó hÓgáin, Irish Superstitions, 1995, …
50 Danaher, The Year in Ireland, 1972, p147.
51 Hedderman, Glimpses of my Life in Arran, 1917, p95.
52 Danaher, The Year in Ireland, 1972, p147.
53 Danaher, The Year in Ireland, 1972, p135-136.
54 Danaher, The Year in Ireland, 1972, p145.
55 Danaher, The Year in Ireland, 1972, p146.
56 Dames, Mythic Ireland, 1992, p63-64.
57 See ‘Mannanaan Mac Lir’ in Journal of the Cork Historical and Archaeological Society, ii, 1896, p366-367. Danaher, The Year in Ireland, 1972, p153; Dames, Mythic Ireland, 1992, p63-64.
58 Danaher, The Year in Ireland, 1972, p144.
59 Danaher, The Year in Ireland, 1972, p145.
60 Danaher, The Year in Ireland, 1972, p142.
61 Danaher, The Year in Ireland, 1972, p147-148.
62 Ó Súillebháin, Irish Folk Custom and Belief, 1967, p71.
63 Train, History of the Isle of Man Vol II, 1845, p120. See also Thistelton-Dyer, British Popular Customs, 1911, p316.
64 This is now known as south Barrule. Moore, The Folklore of the Isle of Man, 1891, p5-6.
65 The Supposed True Chronicle, originally 16th century, found in Parr, An Abstract of the Laws, Customs and Ordnances of the Isle of Man, 1867, p6.
66 (1656) Reprinted in 1863, see the online edition.
67 Moore, The Folklore of the Isle of Man, 1891, p5-6.
68 Rhys, Celtic Folklore: Welsh and Manx, 1901, Chapter 4.
69 MacQuarrie, The Waves of Manannán, 1997, p294.
70 MacQuarrie, The Waves of Manannán, 1997, p294.
71 Black, The Gaelic Otherworld, 2005, p590-591.
72 Danaher, The Year in Ireland, 1972, p165.
Bowel Cancer UK and clinical experts are urging all hospitals across the UK to implement Lynch syndrome testing at diagnosis for everyone with bowel cancer under the age of 50. Lynch syndrome is an inherited condition which causes over 1,000 cases of bowel cancer in the UK every year, many of them in people under the age of 50. However, fewer than 5% of people with Lynch syndrome in the UK have been diagnosed.
Testing everyone with bowel cancer under the age of 50 at diagnosis for Lynch syndrome will help identify family members who may carry Lynch syndrome and be at risk of bowel cancer. It has been shown to be cost effective for the NHS, and is recommended by the Royal College of Pathologists and British Society of Gastroenterologists. It is also a key recommendation in our Never Too Young campaign.
People with Lynch syndrome should then access regular surveillance screening, which can detect bowel cancer in the early stages and has been shown to reduce mortality from bowel cancer by 72%.
Despite this, testing and surveillance screening are patchy across the UK. A letter in the Daily Telegraph (13 November 2014) from eight leading clinical experts supports our call for all hospitals to implement Lynch syndrome testing at diagnosis for people with bowel cancer under the age of 50.
The letter and signatories are as follows:
There are more than 1,000 cases of bowel cancer a year that are attributable to Lynch syndrome (LS), many under the age of 50. LS is an inherited condition that predisposes individuals to bowel and other cancers, with a lifetime risk of around 70 per cent. Yet in the UK we have identified fewer than 5 per cent of families with LS. The family of Stephen Sutton, who was diagnosed with bowel cancer and whose father has LS, was one of them. It is a consistently under-recognised, under-diagnosed and inadequately treated condition.
Both the Royal College of Pathologists and the British Society of Gastroenterology recommend testing everyone with bowel cancer under the age of 50 at diagnosis to help us to identify family members who may carry LS and be at risk of bowel cancer. Yet testing is patchy. We urge all hospitals across the UK to implement this guidance.
This testing would mean people at risk could access surveillance programmes for regular colonoscopies, helping not only to detect bowel cancer early but also to prevent it.
Patient groups such as Bowel Cancer UK are in support. A recent NHS study found that LS testing at diagnosis for everyone under 50 with bowel cancer would be cost effective enough to have been approved by NICE. The evidence is overwhelming. We must end this postcode lottery.
Dr Suzy Lishman, President, The Royal College of Pathologists
Professor Malcolm Dunlop MD FRCS FMedSci FRSE, Colon Cancer Genetics Group and Academic Coloproctology, Head of Colon Cancer Genetics, Institute of Genetics & Molecular Medicine
Professor D Gareth Evans MD FRCP, Professor of Clinical Genetics and Cancer Epidemiology and Consultant Geneticist, University of Manchester
Commenting on the letter from clinical experts, Deborah Alsina, CEO of Bowel Cancer UK, said:
“The Royal College of Pathologists recently produced best practice guidelines recommending everyone with bowel cancer under the age of 50 should be tested for Lynch syndrome at diagnosis. Speedy implementation is vital as testing is currently patchy at best and if people are tested at all, it is often after treatment ends. Yet a diagnosis of Lynch syndrome can affect treatment decisions. We are therefore calling for all UK hospitals to implement this guidance swiftly.”
“This will also help to identify the risk to other family members who may also carry Lynch syndrome and who may be at higher risk of developing bowel cancer. Once identified, people at risk, including those diagnosed who have a greater chance of recurring or developing another linked cancer, should have access to surveillance programmes including regular colonoscopies. This will help to ensure bowel cancer is either prevented or detected early.”
Bowel Cancer UK will be writing to all Clinical Commissioning Groups and Health Trusts in the UK asking them if they have implemented systematic Lynch syndrome testing, and we will report back on the responses. In the meantime, please share our infographic on the subject on social media to help raise awareness of the issue.
Venue: St Mark’s Hospital, London
Target Audience: All members of the Colorectal Cancer MDT (nurse specialists, oncologists, gastroenterologists, colorectal surgeons, pathologists), Geneticists, genetics counsellors
Learning Style: Lectures and case discussions
Learning Outcomes: On completion of this course, attendees will:
£150.00 – Consultants
£75.00 – Nurses, Trainees and other Healthcare Professionals
Hayley Hovey was 23 weeks’ pregnant with her first baby when she suddenly woke in the middle of the night with a sharp, shooting pain in her side.
She visited her GP’s out-of-hours service but was reassured to hear her baby’s heartbeat and be told all was well. The pain was probably ‘ligament strain’ caused by the weight of the growing baby. ‘I was ecstatic to be having a baby – I’ve always wanted to be a mum,’ says Hayley, 34. ‘All my scans showed my baby was healthy, so I didn’t think anything more about that pain.’
She now knows it was the first sign there was a grave threat to her baby’s life, and her own. Four weeks later her daughter, Autumn, was born prematurely and later died. Then Hayley was found to have bowel cancer.
Doctors now think Autumn’s death was linked to her mother’s cancer, with a blood clot breaking away from the tumour, damaging Hayley’s placenta and cutting off the food supply to her unborn baby.
However, it took four months after Autumn’s death for Hayley to be diagnosed. The problem was her age – she was ‘too young’ for bowel cancer to be considered.
Hayley, who lives in Fareham, Hants, with her husband Paul, a 35-year-old IT consultant, says: ‘Looking back, I had textbook symptoms – exhaustion, intermittent stomach pains, increasingly bad diarrhoea, blood in my stools and bleeding.
The disease is Britain’s second-biggest cancer killer, claiming 16,000 lives a year. The number of under-50s diagnosed has been gradually rising – to around 2,100 a year.
But a recent survey by the charity Bowel Cancer UK of patients under 50 found that 42 per cent of the women had visited their GP at least five times before being referred for tests.
Indeed, Hayley, a supply planner for an IT firm, was examined five times by different doctors and midwives, who all missed her symptoms, despite a golf ball-sized lump appearing on her stomach after her pregnancy. By the time she was diagnosed, Hayley had stage three to four cancer, meaning the tumour had broken through her bowel wall.
She had to undergo a seven-hour operation to remove the 6cm growth, followed by six months of chemo and radiotherapy.
But her experience is not uncommon, says Deborah Alsina, chief executive of Bowel Cancer UK: ‘We hear from many younger people who express frustration at not getting a diagnosis and support.’
‘Bowel cancer is often associated with older patients over 50 – but younger people can, and do, regularly get it, as the tragic story of Stephen Sutton recently highlighted,’ adds Kevin Monahan, consultant gastroenterologist at West Middlesex University Hospital, London.
Stephen Sutton, 19, raised more than £3million during his three-year battle against multiple tumours
Stephen Sutton, the 19-year-old fundraiser who died last week from the disease, told the Mail earlier this month of his anger that he was not diagnosed for six months after his symptoms started. This was despite his family history of Lynch syndrome, a genetic condition that raises the risk of bowel cancer.
‘If it had been caught earlier, it could have led to a better prognosis,’ he said. Hayley, too, eventually discovered she had Lynch syndrome.
Bowel cancer is very treatable if detected early – 93 per cent of patients who are found to have a small tumour on the bowel wall live for five years or more. Yet only 9 per cent of cases are diagnosed at this stage – most are diagnosed at stage three. So, the overall five-year survival rate for bowel-cancer patients is just 54 per cent.
Because patients and many doctors assume that young people won’t get bowel cancer, they are particularly likely to have advanced-stage tumours at the time of diagnosis.
Bleeding or blood in faeces
A change in bowel habits lasting more than three weeks
Unexplained weight loss
See bowelcanceruk.org.uk; beatingbowelcancer.org (phone 08450 719 301); and familyhistorybowelcancer.wordpress.com/
Cancer charities are campaigning to improve diagnosis for all ages – they want new diagnostic guidelines for GPs and earlier screening procedures.
Sean Duffy, NHS England’s national clinical director for cancer, says: ‘The UK lags behind much of Europe in terms of survival from bowel cancer. We need to change this, and this includes identifying it better in patients under 50.’
National GP guidelines state only patients aged 60 and over should be automatically referred to hospital for tests if they have one symptom. Patients aged 40 to 60 must exhibit two or more symptoms.
For under 40s, there is often an assumption the symptoms must be something else, says Mark Flannagan, chief executive of the charity Beating Bowel Cancer. ‘We’ve had patients with red-flag symptoms – such as blood in their stools – being told “you’ve got IBS” or “you’re too young to have cancer” by their GPs.’
Four weeks after Hayley’s initial scare, she was unable to feel her baby moving. Tests revealed Autumn had stopped growing, and she had to be delivered by emergency caesarean. After her birth, in July 2011, she was taken to a specialist neo-natal unit at Southampton General Hospital but died in hospital a few weeks later.
Two weeks afterwards, Hayley experienced more shooting pains. With her pregnancy bump gone, there was also a noticeable lump on the side of her waist. Her midwife said it was probably an infection, and Hayley was given antibiotics.
But her health deteriorated rapidly and she had to take six weeks off work with exhaustion, which her GP put down to depression.
Within three months of Autumn’s death, Hayley was suffering from nausea and abdominal pain.
Unable to get a GP’s appointment, she went to A&E but was told the lump was possibly an infection related to her caesarean. Doctors performed a cervical smear test (which was subsequently lost) and sent her home with paracetamol.
Stephen Sutton with his mother Jane whilst Prime Minister David Cameron visited him
‘I got the impression they didn’t take me very seriously,’ she recalls.
Soon after, she was vomiting up to ten times a day, feeling dizzy and weak, passing blood and experiencing chronic diarrhoea. At an emergency GP appointment, she was examined by a different doctor who immediately referred her to hospital; after several days of tests, she was diagnosed with cancer.
Four days before Christmas, Hayley underwent surgery. ‘We thought we’d be enjoying our first Christmas as a family, but instead I was in hospital, grieving for the loss of our little girl and terrified about the future,’ she recalls. ‘My treatment might have been less of an ordeal if my cancer had been picked up sooner. It makes me quite angry to think if I’d been 60, it would have been picked up more quickly.’
But even obvious symptoms are often missed by doctors, says Mr Flannagan. ‘I am not blaming GPs, but we need to not be shy of pointing out where things are going wrong. The default position should be for a GP to rule out cancer, just to be safe.’
‘It can also be problematic if patients don’t have obvious symptoms such as bleeding’, says Dr Monahan. ‘They may instead have vaguer symptoms such as tiredness, unexplained weight loss or abdominal pain, which could be attributed to being symptoms of other conditions such as irritable bowel syndrome or Crohn’s disease.’
Public awareness is also an issue. A survey in March by health insurer AXA PPP found nearly half of men couldn’t name one symptom of bowel cancer.
Indeed, Martin Vickers, 49, had never heard of it before his diagnosis in 2008. ‘I was totally shocked,’ says the father of four, who lives in Burton-on-Trent with wife Andrea, 48. ‘I didn’t know bowel cancer existed. It was hugely traumatic.’
Martin visited his GP five times in nine months with extreme tiredness and loose stools. His symptoms were attributed to stress – his mother had recently died and he has a high-pressure job as head of capital investment for Cambridge and South Staffordshire Water – and then IBS.
‘But I knew something wasn’t right,’ says Martin. ‘It was instinctive.’ He was finally diagnosed with stage three bowel cancer in November 2008, after his GP did an internal examination and felt a lump.
Martin underwent three months of chemotherapy and radiotherapy, followed by surgery, another six months of chemotherapy and a second operation. He now has to use a colostomy bag but has been in remission for five years.
Currently, screening is only available to people aged 60-plus. They are sent home tests, which involve sending a stool sample to a lab. But the Department of Health is now looking at a new procedure, bowel scope screening, which involves a partial colonoscopy – examining only the lower bowel.
A major UK trial of 55 to 64 year olds showed that people screened this way were 43 per cent less likely to die from bowel cancer, and 33 per cent less likely to develop it.
This is because the procedure is usually successful at detecting small growths known as polyps, which can become cancerous.
The screening – which would be offered to everyone aged 55 and over – is now being piloted. Campaigners hope it will be made available nationally by 2016.
‘This is a really important development and should make a big difference to bowel cancer outcomes,’ says Dr Monahan, who runs the Family History of Bowel Cancer clinic at West Middlesex University Hospital, specialising in hereditary components of the disease.
It won’t, however, help younger patients such as Hayley. Before her chemotherapy, she and Paul had nine embryos frozen via IVF. However she is worried she may pass on Lynch syndrome, so the couple are considering what to do.
But she says: ‘I am still here, I have a life ahead of me – and I hope my story will help others to be diagnosed in time.’
Our briefing highlights the lack of surveillance screening for younger people at higher risk of bowel cancer.
Genetic factors contribute up to 30% of bowel cancer cases, an estimated 8,000-12,000 cases each year.
Genetic factors mean a strong family history of bowel cancer, or genetic conditions such as familial adenomatous polyposis (FAP) or Lynch syndrome. People with long-term inflammatory bowel disease are also at higher risk.
People in higher risk groups are likely to develop bowel cancer much younger than the general population. Clinical guidance recommends that people in high-risk groups should be in a surveillance screening programme, which is proven to reduce deaths in these groups.
Recent evidence shows that:
Our briefing, “Never too young: Supporting people at higher risk of bowel cancer”, has five recommendations to improve services for people in high risk groups:
Full details of our findings and recommendations are in our full report available here.
Does your family have a history of early onset colon cancer? If so, your family may have Lynch syndrome. Lynch syndrome may also increase one's chances of developing cancers of the stomach, small intestine, liver, gallbladder ducts, upper urinary tract, kidneys, bladder, pancreas, brain, skin, and if you are a male, the prostate. Women with this syndrome also are at higher risk for developing cancer of the endometrium, ovaries, and breasts. Up to 1,000,000 people in the U.S. have Lynch syndrome, yet only 5% know it. Genetic testing, along with preventative measures and annual medical screening, may help one take steps to minimize the risk of illness and death.
Lynch syndrome, familial adenomatous polyposis, and Mut Y homolog (MYH)-associated polyposis are three major known types of inherited colorectal cancer, which accounts for up to 5% of all colon cancer cases. Lynch syndrome is most frequently caused by mutations in the mismatch repair genes MLH1, MSH2, MSH6, and PMS2 and is inherited in an autosomal dominant manner. Familial adenomatous polyposis is manifested as colonic polyposis caused by mutations in the APC gene and is also inherited in an autosomal dominant manner. Finally, MYH-associated polyposis is caused by mutations in the MUTYH gene and is inherited in an autosomal recessive manner but may or may not be associated with polyps. There are variants of both familial adenomatous polyposis (Gardner syndrome—with extracolonic features—and Turcot syndrome, which features medulloblastoma) and Lynch syndrome (Muir–Torre syndrome features sebaceous skin carcinomas, and Turcot syndrome features glioblastomas). Although a clinical diagnosis of familial adenomatous polyposis can be made using colonoscopy, genetic testing is needed to inform at-risk relatives. Because of the overlapping phenotypes between attenuated familial adenomatous polyposis, MYH-associated polyposis, and Lynch syndrome, genetic testing is needed to distinguish among these conditions. This distinction is important, especially for women with Lynch syndrome, who are at increased risk for gynecological cancers. Clinical testing for these genes has progressed rapidly in the past few years with advances in technologies and the lower cost of reagents, especially for sequencing. To assist clinical laboratories in developing and validating testing for this group of inherited colorectal cancers, the American College of Medical Genetics and Genomics has developed the following technical standards and guidelines. An algorithm for testing is also proposed.
A history of polyposis and familial colorectal cancer
(Link to full article can be found here)
On the 25 September 2012 a meeting was held in Central London, convened by the History of Modern Biomedicine Research Group of Queen Mary, University of London, and funded by the Wellcome Trust. Assembled were many of the men and women whose research was at the forefront of the breakthroughs that led to the identification of genes for familial adenomatous polyposis (FAP) and hereditary non-polyposis colorectal cancer (HNPCC) (Lynch Syndrome) in the 1990s.
One of the most significant locations for early research into hereditary bowel cancer was St Mark’s Hospital in London, where surgeon John Percy Lockhart-Mummery (1875–1957) and pathologist Dr Cuthbert Dukes (1890–1977) were based. As Ms Kay Neale explained: ‘St Mark’s Polyposis Registry started in 1924 as a result of John Percy Lockhart-Mummery having an interest in family diseases and Dr Dukes having an interest in polyps turning into cancer.’ The Registry’s success was helped enormously by the work of Dick (later Dr) Bussey, who, aged just 17, started a meticulous system for recording patients with FAP, a condition that had first been noted in the medical literature as early as 1882. Neale elaborated on the spread of the Registry’s impact beyond the UK: ‘Dukes, of course, would lecture and publish in the journals of the day and so people would send pathological slides or descriptions of cases of polyposis from all over the world, and Dr Bussey would record them all and catalogue them.’ Fast forward to the 1980s when Sir Walter Bodmer became Director of Research at the Imperial Cancer Research Fund (ICRF) and, during the meeting, he recalled how in 1984 he established a St Mark’s Unit at the ICRF for all aspects of colorectal cancer, as research in familial cancer began to take more shape. The context for this growth in familial cancer research during the 1980s is discussed by Professor Tim Bishop in his introduction to the publication, along with several seminar participants who reflect on the work of the UK’s Cancer Family Study Group.
Representing a transatlantic viewpoint, Professor Jane Green from Canada moved the story into the 1990s and to HNPCC. A world away from the research lab, she tried to find familial links amongst cancer patients: ‘I spent many hours on roads in Newfoundland going to different small communities and talking to people in their homes. Every time somebody said, I’ll speak to my grandmother because she knows more of the history,’ or ‘You need to know about that other part of the family’ and they would contact them … As I put the pedigrees together they were very, very interesting.’ Her informal conversations revealed linkages, the understanding of which would be critical to the international effort that identified the MSH2 and MLH1 HNPCC-related genes in 1993. Like Jane Green’s families, patients from St Mark’s Polyposis Register were critical in providing DNA samples that helped identify APC, the gene for polyposis in 1991.
These and many other stories from the scientists, clinicians and others involved in this significant research can be read in more depth in the published, annotated transcript of this Witness Seminar. This volume is free to download from the Group’s website as a PDF document.
Emma M Jones, Alan Yabsley
History of Modern Biomedicine Research Group, Queen Mary, University of London, Mile End Road, London E1 4NS, United Kingdom
Analysis from a recent study has found that loading up on snack foods may increase cancer risk in individuals with an inborn susceptibility to colorectal and other cancers. Published early online in Cancer, a peer-reviewed journal of the American Cancer Society, the study suggests that an eating pattern low in snack foods could help these individuals — who have a condition called Lynch syndrome — lower their risk.
Lynch syndrome is an inherited condition characterized by a high risk of developing colorectal cancer, endometrial cancer, and other cancers at an early age. The syndrome is caused by mutations in genes involved with repairing DNA within cells.
Numerous studies have investigated associations between certain foods and colorectal cancer, and now there is general agreement that red and processed meats and alcohol consumption can increase individuals’ risk. Only a few studies have evaluated lifestyle factors and colorectal cancer in patients with Lynch syndrome, though. To investigate, Akke Botma, PhD, MSc, of the Wageningen University in the Netherlands, and her colleagues collected dietary information from 486 individuals with Lynch syndrome. During an average follow-up of 20 months, colorectal polyps (precancerous lesions) were detected in 58 people in the study.
“We saw that Lynch syndrome patients who had an eating pattern with higher intakes of snack foods — like fast food snacks, chips, or fried snacks — were twice as likely to develop these polyps as Lynch syndrome patients having a pattern with lower intakes of snack foods,” said Dr. Botma.
The findings suggest that certain dietary patterns have an influence on the development of polyps in individuals with Lynch syndrome. “Unfortunately, this does not mean that eating a diet low in snack foods will prevent any polyps from developing, but it might mean that those Lynch syndrome patients who eat a lot of snack foods might have more polyps than if they ate less snack foods,” said Dr. Botma. Because the study is observational, other studies are needed to confirm the results.
Previous work from the group revealed that smoking and obesity may also increase the risk of developing colorectal polyps among individuals with Lynch Syndrome. Thus, even though they may have inherited a very high risk of developing cancer, it may be possible to affect this risk by adopting a healthy lifestyle, including a healthy diet.
Akke Botma, Hans F. A. Vasen, Fränzel J. B. van Duijnhoven, Jan H. Kleibeuker, Fokko M. Nagengast and Ellen Kampman. Dietary patterns and colorectal adenomas in Lynch syndrome : The GEOLynch Cohort Study. Cancer, 2012; DOI: 10.1002/cncr.27726
Lynch Syndrome is an inherited cancer syndrome that causes up to 1 in 20 cases of bowel cancer in Ireland, equivalent to over 100 cases annually in the Republic of Ireland alone. It is also an important cause of multiple cancers outside the bowel, including endometrial, ovarian, and urinary tract cancers.
Prevention of cancer in people at high risk depends on the accurate identification of families with this condition. However it is estimated that over 90% of families remain unidentified. Currently there are two clinical genetics centres in Ireland, in Dublin and Belfast. Unfortunately there is only limited access to genetic testing particularly in the Republic of Ireland where testing for Lynch Syndrome may only be requested from within the genetics department in Dublin. Thus it may be argued that much more could be done to improve the management of this condition in Ireland.
Published data indicate that Lynch Syndrome may account for up to 5% of colorectal cancer in Ireland, which represents a highly clinically significant burden.
A series of published abstracts from international medical conferences have been reproduced below which summarise the available academic work on Lynch Syndrome in Ireland.
Screening an Irish cohort with colorectal cancer for Lynch Syndrome using immunohistochemistry for mismatch repair proteins
Journal of Clinical Oncology, 2007 ASCO Annual Meeting Proceedings (Post-Meeting Edition). Vol 25, No 18S (June 20 Supplement), 2007: 10547 © 2007 American Society of Clinical Oncology. D. G. Power, M. P. Farrell, C. B. Muldoon, E. Fitzpatrick, C. Stuart, D. Flannery, M. J. Kennedy, R. B. Stephens and P. A. Daly St James’s Hospital, Dublin, Ireland Background: Large-scale screening for germ-line mutations that lead to the onset of disease in adulthood is possible owing to recent technical advances. The care of those with inherited predisposition to breast and ovarian cancer is now becoming a mainstream component of medical care. It is more difficult to identify those with Lynch Syndrome (LS) as various criteria (Amsterdam and Bethesda) have not proved definitive. An important development is the examination of tumor tissue to detect mismatch repair (MMR) protein loss using immunohistochemical (IHC) techniques. When coupled with family history those at risk of harbouring a mutation for LS can be identified. Once a mutation is identified predictive testing can be offered to family members, risk-reduction measures applied and mortality from colorectal cancer reduced. Methods: Screening for MMR protein expression (MLH1, MSH2, MSH6, PMS2) was planned on all colorectal cancer (CRC) cases using IHC on formalin-fixed tumor tissue from January 1st 2002. Local ethics committee approval was obtained and then written informed-consent from patients. Family history data was gathered from the index case or an appropriate relative. An aliquot of blood was stored from index cases for subsequent genetic screening if indicated by IHC analysis and genetic counseling. Results: 108 cases with CRC (62 male, 46 female, median age 59 years) from a potential total of 612 have been screened for MMR protein expression by a gastrointestinal pathologist and independently validated. Turn-around time for IHC analysis was 9 weeks. 5 patients (4.6%) had loss of MMR proteins, MSH2/MSH6- 2 cases, MSH6 alone- 1 case and MLH1/PMS2- 2 cases. All 5 have opted for genetic counselling and sequencing of relevant genes. Conclusion: These early results in an Irish cohort with CRC showing MMR loss in 4–5% of cases is consistent with other population findings. Microsatellite instability analysis is difficult, expensive and relatively unavailable. IHC, however, is an established technique in pathology departments and can be the cheapest and most reproducible approach to identify LS cases. IHC results along with robust family data can guide the genetic counseling process towards preventing deaths from CRC and other LS-associated cancers. Published on Meeting Library (http://meetinglibrary.asco.org)
Investigating parent of origin effects (POE) and anticipation in Irish Lynch syndrome kindreds.
J Clin Oncol 30: 2012 (suppl 34; abstr 431) Author(s): Michael P. Farrell, David J. Hughes, Jasmin Schmid, Philip S. Boonstra, Bhramar Mukherjee, Margaret B. Walshe, Padraic M. Mac Mathuna, David J. Gallagher; Mater Private and Mater Misericordiae University Hospital, Dublin, Ireland; Centre for Systems Medicine, Royal College of Surgeons in Ireland, Dublin, Ireland; Department of Biostatistics, University of Michigan, Ann Arbor, MI; High Risk Colorectal Family Clinic, Mater Misericordiae University Hospital, Dublin, Ireland; Mater Private Hospital and Mater Misericordiae University Hospital, Dublin, Ireland
Background: Genetic diseases associated with dynamic mutations often display parent-of-origin effects (POEs) in which the risk of disease depends on the sex of the parent from whom the disease allele was inherited. Genetic anticipation describes the progressively earlier onset and increased severity of disease in successive generations of a family. Previous studies have provided limited evidence for and against both POE effect and anticipation in Lynch syndrome. We sought evidence for a specific POE effect and anticipation in Irish Lynch syndrome families. Methods: Affected parent-child pairs (APCPs) (N = 53) were evaluated from kindreds (N = 20) from two hospital-based registries of MMR mutation carriers. POEs were investigated by studying the ages at diagnosis in the offspring of affected parent-child pairs. Anticipation was assessed using the bivariate Huang and Vieland model. Results: Paired t-test revealed anticipation, with children developing cancer a mean of 11.8 years earlier than parents, and 12.7 years using the Vieland and Huang bivariate model (p < 0.001). Conclusions: These data demonstrate a similar age at diagnosis among all offspring of affected mothers that was indistinguishable from affected fathers. Affected sons of affected mothers were diagnosed with cancer almost 3 years younger than female offspring; however, this finding failed to reach statistical significance. Genetic anticipation was present in this cohort of LS families, emphasizing the importance of early-onset screening. An additional 60 LS kindreds are under review and updated data will be presented at the meeting.

POE effect: comparison in age at diagnosis in 53 affected parent-child pairs with Lynch syndrome associated malignancies.

| | Affected mothers | Affected fathers | P value |
| --- | --- | --- | --- |
| Unique parent | N = 14, Mean = 48.8, Range = 27-73 | N = 13, Mean = 53.6, Range = 36-85 | 0.28 |
| All offspring | N = 24, Mean = 40.4, Range = 23-72 | N = 30, Mean = 41.6, Range = 23-60 | 0.67 |
| Female offspring | N = 6, Mean = 42.5, Range = 31-64 | N = 15, Mean = 41.06, Range = 27-58 | 0.75 |
| Male offspring | N = 18, Mean = 39.77, Range = 23-72 | N = 15, Mean = 40.07, Range = 20-60 | 0.94 |
| P value female vs male offspring | 0.604 | 0.95 | |

Source URL: http://meetinglibrary.asco.org/content/106059-133
Breast cancer in Irish families with Lynch syndrome.
J Clin Oncol 30, 2012 (suppl 4; abstr 413) Author(s): E. J. Jordan, M. P. Farrell, R. M. Clarke, M. R. Kell, J. A. McCaffrey, E. M. Connolly, T. Boyle, M. J. Kennedy, P. J. Morrison, D. J. Gallagher; Mater University Hospital, Dublin, Ireland; St. James Hospital, Dublin, Ireland; Mater University Hospital, Dublin, Ireland; Belfast City Hospital HSC Trust, Belfast, Northern Ireland Background: Breast cancer is not a recognised malignant manifestation of Lynch Syndrome, which includes colorectal, endometrial, gastric, ovarian and upper urinary tract tumours. In this study we report the prevalence of breast cancer in Irish Lynch Syndrome families and determine immunohistochemical expression of mismatch repair proteins (MMR) in available breast cancer tissue. Methods: Breast cancer prevalence was determined among Lynch Syndrome kindreds from two institutions in Ireland, and a genotype-phenotype correlation was investigated. One kindred was omitted due to the presence of a biallelic MMR and BRCA1 mutation. The clinicopathological data that was collected on breast cancer cases included age of onset, morphology, and hormone receptor status. Immunohistochemical staining was performed for MLH1, MSH2, MSH6, and PMS2 on all available breast cancer tissue from affected individuals. Results: The distribution of MMR mutations seen in 16 pedigrees was as follows: MLH1 (n=5), MSH2 (7), MSH6 (3), PMS2 (1). Sixty cases of colorectal cancer and 14 cases of endometrial cancer were seen. Seven breast cancers (5 invasive ductal and 2 invasive lobular cancers) and 1 case of ductal carcinoma in situ were reported in 7 pedigrees. This compared with 4 cases of prostate cancer. Six MSH2 mutations and 1 MSH6 mutation were identified in the 7 Lynch syndrome kindreds. Median age of breast cancer diagnosis was 49 years (range 38-57). Hormone receptor status is available on 3 breast cancer cases at time of abstract submission; all were ER positive and HER2 negative. All cases had grade 2 or 3 tumours. Final results of immunohistochemistry for mismatch repair protein expression on breast cancer samples are pending and will be reported at the meeting. One breast cancer has been tested to date and demonstrated loss of MSH2 protein expression in an individual carrying an MSH2 mutation. Conclusions: Breast cancer occurred at an early age and was more common than prostate cancer in Irish Lynch Syndrome pedigrees. All reported breast cancer cases were in kindreds with MSH2 or MSH6 mutations. Enhanced breast cancer screening may be warranted in certain Lynch Syndrome kindreds.

Source URL: http://meetinglibrary.asco.org/content/88749-115
Clinical correlation and molecular evaluation confirm that the MLH1 p.Arg182Gly (c.544A>G) mutation is pathogenic and causes Lynch syndrome.
Fam Cancer. 2012 Sep;11(3):509-18. doi: 10.1007/s10689-012-9544-4. Farrell MP, Hughes DJ, Berry IR, Gallagher DJ, Glogowski EA, Payne SJ, Kennedy MJ, Clarke RM, White SA, Muldoon CB, Macdonald F, Rehal P, Crompton D, Roring S, Duke ST, McDevitt T, Barton DE, Hodgson SV, Green AJ, Daly PA. Source: Department of Cancer Genetics, Mater Private Hospital, Dublin 7, Ireland. [email protected]
Approximately 25 % of mismatch repair (MMR) variants are exonic nucleotide substitutions. Some result in the substitution of one amino acid for another in the protein sequence, so-called missense variants, while others are silent. The interpretation of the effect of missense and silent variants as deleterious or neutral is challenging. Pre-symptomatic testing for clinical use is not recommended for relatives of individuals with variants classified as ‘of uncertain significance’. These relatives, including non-carriers, are considered at high-risk as long as the contribution of the variant to disease causation cannot be determined. This results in continuing anxiety, and the application of potentially unnecessary screening and prophylactic interventions. We encountered a large Irish Lynch syndrome kindred that carries the c.544A>G (p.Arg182Gly) alteration in the MLH1 gene and we undertook to study the variant. The clinical significance of the variant remains unresolved in the literature. Data are presented on cancer incidence within five kindreds with the same germline missense variant in the MLH1 MMR gene. Extensive testing of relevant family members in one kindred, a review of the literature, review of online MMR mutation databases and use of in silico phenotype prediction tools were undertaken to study the significance of this variant. Clinical, histological, immunohistochemical and molecular evidence from these families and other independent clinical and scientific evidence indicates that the MLH1 p.Arg182Gly (c.544A>G) change causes Lynch syndrome and supports reclassification of the variant as pathogenic. PMID: 22773173 [PubMed – indexed for MEDLINE]
Germline MSH6 mutations are more prevalent in endometrial cancer patient cohorts than Hereditary Non Polyposis Colorectal Cancer cohorts
Ulster Med J. 2008 January; 77(1): 25–30. PMCID: PMC2397009 Lisa A Devlin,1 Colin A Graham,1 John H Price,2 and Patrick J Morrison1
Objective To determine and compare the prevalence of MSH6 (a mismatch repair gene) mutations in a cohort of families with Hereditary Non-Polyposis Colorectal Cancer (HNPCC), and in an unselected cohort of endometrial cancer patients (EC). Design Two patient cohorts participated in the study. A cohort of HNPCC families who were known to the Regional Medical Genetics department, and an unselected cohort of patients with a history of EC. All participants received genetic counselling on the implications of molecular testing, and blood was taken for DNA extraction with consent. All samples underwent sequencing and Multiple Ligation probe analysis (MLPA) for mutations in MSH6. Populations DNA from one hundred and forty-three probands from HNPCC families and 125 patients with EC were included in the study. Methods Molecular analysis of DNA in all participants from both cohorts for mutations in MSH6. Outcome measures Prevalence of pathogenic mutations in MSH6. Results A truncating mutation in MSH6 was identified in 3.8% (95% CI 1.0–9.5%) of patients in the endometrial cancer cohort, and 2.6% (95% CI 0.5–7.4%) of patients in the HNPCC cohort. A missense mutation was identified in 2.9% and 4.4% of the same cohorts respectively. No genomic rearrangements in MSH6 were identified. Conclusion MSH6 mutations are more common in EC patients than HNPCC families. Genomic rearrangements do not contribute to a significant proportion of mutations in MSH6, but missense variants are relatively common and their pathogenicity can be uncertain. HNPCC families may be ascertained through an individual presenting with EC, and recognition of these families is important so that appropriate cancer surveillance can be put in place. Keywords: Endometrial, Cancer, MSH6, HNPCC
A guest blog from Georgia Hurst, as she worries about the impact of her diagnosis of Lynch Syndrome on her son. Read more at ihavelynchsyndrome.com
“I will not get sick, but if I do…
…I will have the strength to endure it.”
This is my new mantra. The above photo is the view from my zafu when I go to Buddhist Temple for meditation; the solace this gives me is immeasurable. I have been finding myself at Temple a lot lately, meditating and reaching for my internal strength to deal with the unbearable anxiety and stress which currently confront me. I’m trying not to discuss it with people; it’s too much for me to process, let alone them. Besides, I feel as though I put them into a precarious position if I do bring it up because there are no words available to them which can possibly comfort me at this time. I am going to Mayo Clinic in 11 days and the anxiety is increasing by the minute. I am expecting the worst, whilst hoping for the best. I’m sure I am not the only one with Lynch syndrome that feels this way when it’s close to testing time. The plethora of emotions are running rampant in me little head.

I feel guilt. My oldest brother did not have a chance, my second brother does not have a colon – and then I think of all of the people I’ve met through my blog and Facebook and other forms of social media who are fighting for their lives because they, too, have been blindsided by this genetic curse. Damn you, Lynch syndrome. I feel anger because I may have given this to my beautiful little boy. If I knew for sure that this monster ended with me with 100% certainty, I would at least not have to fret about my child. I also feel anger for all the children who are watching their parents suffer and die, leaving them with a life of endless uncertainties and insecurities. I feel sad, not for myself, but for my family, my dog, and my friends; because I know that part of you dies when someone you love dies. I feel lucky; I’m fortunate to know I have this genetic mutation, have insurance, and have the ability to exhibit some control of it; I get to go to Mayo and get to see the Rock Star Doctors of Lynch syndrome. I feel confident; I keep reminding myself that I eat well, exercise, surround myself with loving, nourishing people, animals, books, etc., and have eliminated every imaginable toxin in my life. I feel fearless and empowered in many ways; yet, helpless in so many others. I vacillate between optimism and negativity; perhaps I should simply stop it and end up somewhere in the middle. I am exhausted, whilst I exhibit every possible emotion known to humanity.

I long for the days when I didn’t know of my charming genetic nemesis and wasn’t emotionally imprisoned by Lynch syndrome. I would give everything I have to simply appreciate a few minutes of life sans Lynch. Two weeks from today, I will know if everything I talk about truly matters or if my genetics will trump everything I think and do. I just want to spend the next several days being fearless. Fearless. Fearless. Fearless.

In the eloquent words of Tagore: Let me not pray to be sheltered from dangers but to be fearless in facing them. Let me not beg for the stilling of my pain but for the heart to conquer it. Let me not look for allies in life’s battlefield but to my own strength. Let me not cave in…

Yours, Georgia Hurst, MA
ihavelynchsyndrome.com

This post was written in late April before I went to the Mayo Clinic in early May for my annual testing; I received a clean bill of health and not even one little polyp was found in my colon.
In part one of choosing a programmable power supply we discussed voltage, current, cooling and power requirements for your application. Here, we explore some of the more subtle aspects of the specification. We discuss parameters such as accuracy and repeatability, the difference between ripple and noise, and show how power supplies can be connected in series or parallel to increase the maximum voltage or current whilst delivering the same technical performance.
Accuracy, stability & repeatability
Accuracy of display is how closely the voltage and current readouts reflect the actual voltage and current being supplied. Typically, these range from 0.001% through to 1% of full scale. Note that the analogue read-back accuracy will not be the same as that of the display or the RS-232 interface, because different circuitry is used to present the value; an LCD readout, for example, is limited by the number of digits it can display.

Accuracy of set point is the difference between the demanded value and what is actually delivered. Again, these range from 0.001% to 1%. Power supplies that offer multiple programming interfaces will specify different accuracy figures for each interface.

Stability is often quoted as the short-term drift of output voltage and current. A stable output resists changes in ambient or internal temperature and other ageing effects over time, for example a stability of <0.5% over 8 hours.

Repeatability is the degree to which a user can leave one set point, perhaps due to a power-cycling event, and achieve the same output values when returning to it later. The built-in monitoring of programmable power supplies makes this relatively easy to check.
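To see what these percent-of-full-scale figures mean in practice, here is a minimal Python sketch that converts an accuracy specification into an absolute error band and checks a read-back value against it. The 600 V full scale, 0.05% spec and the readings are invented for illustration and are not tied to any particular supply.

```python
def error_band(full_scale: float, accuracy_pct: float) -> float:
    """Convert a percent-of-full-scale accuracy spec into an absolute band.

    A 600 V full-scale output with a 0.05 % spec, for example, may deviate
    by up to +/- 0.3 V from the demanded value.
    """
    return full_scale * accuracy_pct / 100.0


def within_spec(demanded: float, measured: float,
                full_scale: float, accuracy_pct: float) -> bool:
    """True if a read-back value lies inside the specified error band."""
    return abs(measured - demanded) <= error_band(full_scale, accuracy_pct)


if __name__ == "__main__":
    # Hypothetical 600 V supply with a 0.05 % set-point spec,
    # programmed to 250 V and reading back 250.2 V.
    print(error_band(600.0, 0.05))                  # 0.3 (volts)
    print(within_spec(250.0, 250.2, 600.0, 0.05))   # True
```

The same check, applied repeatedly after set-point changes or power cycles, is also a simple way to quantify repeatability.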
PLECS – specialists in simulation software for power electronics – have released version 3.7 of their Blockset and Standalone products.
The updated products now feature:
- Improved Thermal Modelling
Semiconductor losses can now be described using functional expressions in addition to lookup tables. It is also possible to define custom parameters (such as gate resistance) and describe their influence on the device losses.
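Outside of PLECS itself, the difference between a lookup-table loss description and a functional one can be illustrated with a short Python sketch. The table values, the fitted expression and the gate-resistance sensitivity below are made up for illustration only; they are not taken from PLECS or from any device datasheet.

```python
import bisect

# Lookup-table description: switching energy (mJ) tabulated against current (A).
CURRENTS = [0.0, 50.0, 100.0, 200.0, 400.0]
E_SW_MJ = [0.0, 2.1, 4.5, 10.2, 24.0]


def loss_from_table(i_amps: float) -> float:
    """Linear interpolation in the tabulated switching-energy data."""
    i = min(max(i_amps, CURRENTS[0]), CURRENTS[-1])
    k = min(bisect.bisect_right(CURRENTS, i) - 1, len(CURRENTS) - 2)
    x0, x1 = CURRENTS[k], CURRENTS[k + 1]
    y0, y1 = E_SW_MJ[k], E_SW_MJ[k + 1]
    return y0 + (y1 - y0) * (i - x0) / (x1 - x0)


def loss_from_expression(i_amps: float, v_dc: float, r_gate: float) -> float:
    """Functional description: a fitted expression that also exposes a
    custom parameter (gate resistance) influencing the losses."""
    base = 1.2e-4 * i_amps * v_dc                   # made-up fit, in mJ
    return base * (1.0 + 0.05 * (r_gate - 10.0))    # made-up Rg sensitivity


print(loss_from_table(150.0))                           # 7.35 mJ (interpolated)
print(loss_from_expression(150.0, 600.0, r_gate=15.0))  # 13.5 mJ
```

The functional form is attractive precisely because a parameter such as gate resistance can be swept without re-tabulating the data.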
Manufactured by Dean Technology, the new HVM40B high voltage voltmeter can be relied upon for highly accurate measurement of positive or negative voltages up to 40,000 volts.
PPM Power has added two new Dean Technology discrete diodes – the UX-F30B and CL03-30 – to its comprehensive range. The new axial lead diodes have a peak inverse voltage rating of 30kV and are higher voltage additions to the existing UX and CL03 series.
ABB’s high power IGBT ‘HiPak module’ semiconductors have recently been revised to incorporate a number of benefits. These include:
- Improved reliability
- Enhanced processes
- Better package design.
The TDK Lambda Genesys series of 10kW and 15kW programmable DC power supplies is now available with output voltages of 800V, 1000V, 1250V and 1500V. These new models have the same dimensions as the existing 7.5V to 600V models – 19” wide and 3U high. The units can operate in either constant current or constant voltage modes and accept either three-phase 400V AC or 440V AC inputs.
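As a rough illustration of how such a supply settles into constant-voltage or constant-current operation, the sketch below works out which limit engages first for a simple resistive load. The programmed values are arbitrary examples and the logic is a simplification of what the real control loops do.

```python
def operating_point(v_set: float, i_limit: float, r_load_ohms: float):
    """Return (mode, voltage, current) for a purely resistive load.

    If the load would draw less than the current limit at the programmed
    voltage, the supply regulates voltage (CV); otherwise the current limit
    takes over and the output voltage settles at I_limit * R (CC).
    """
    i_at_v_set = v_set / r_load_ohms
    if i_at_v_set <= i_limit:
        return "CV", v_set, i_at_v_set
    return "CC", i_limit * r_load_ohms, i_limit


# Hypothetical 1000 V / 15 A programmed point into two different loads.
print(operating_point(1000.0, 15.0, 100.0))  # ('CV', 1000.0, 10.0)
print(operating_point(1000.0, 15.0, 50.0))   # ('CC', 750.0, 15.0)
```

In the hardware this crossover is handled continuously by the analogue control loops rather than by a one-off calculation, but the arithmetic above is the usual way to predict which mode a given load will land in.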
The correct capacitor will reduce system failures therefore minimising costs and downtime. Here we review film capacitors versus electrolytics and ceramics and outline the benefits of film capacitors from different vendors.
PPM partner Dean Technology has released a new product catalogue for power electronics which complements the high-current and suppression product lines from its CKE division.
Our new web tool automatically identifies matching IGBTs or diodes for your particular application, based on the conditions you enter. This will save you time and money doing long calculations or researching several different datasheets because you can find your IGBT or diode much quicker.
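The matching logic behind a tool like this can be sketched as a simple filter over a parts list. The device records, ratings and safety margins below are purely hypothetical and are not PPM's actual data or algorithm.

```python
from dataclasses import dataclass


@dataclass
class Device:
    part: str
    v_ces: float   # collector-emitter blocking voltage, V
    i_c: float     # continuous collector current, A


# Purely hypothetical catalogue entries.
CATALOGUE = [
    Device("IGBT-A", 1200.0, 150.0),
    Device("IGBT-B", 1700.0, 300.0),
    Device("IGBT-C", 3300.0, 450.0),
]


def matches(dev: Device, v_bus: float, i_peak: float,
            v_margin: float = 1.5, i_margin: float = 1.2) -> bool:
    """Keep devices whose ratings exceed the application conditions
    by the chosen safety margins."""
    return dev.v_ces >= v_bus * v_margin and dev.i_c >= i_peak * i_margin


def find_candidates(v_bus: float, i_peak: float) -> list:
    return [d.part for d in CATALOGUE if matches(d, v_bus, i_peak)]


print(find_candidates(v_bus=800.0, i_peak=200.0))  # ['IGBT-B', 'IGBT-C']
```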
Advanced modelling and simulation of power electronic systems
Register now for our next PLECS workshop: Wednesday, 27th January 2016 – 9am-4pm
The PLECS product family has been specifically developed to assist engineers with the design and implementation of complex power electronics systems.
We will guide you through exercises such as: modelling a switched-mode power supply, solver accuracy and settings, thermal modelling of a buck converter and creating a custom PV string component.
Home cinema, also called home theater or home theatre, refers to home entertainment audio-visual systems that seek to reproduce a movie theater experience and mood using consumer electronics-grade video and audio equipment that is set up in a room or backyard of a private home. In the 1980s, home cinemas typically consisted of a movie pre-recorded on a LaserDisc or VHS tape; a LaserDisc or VHS player; and a heavy, bulky large-screen cathode ray tube TV set. In the 2000s, technological innovations in sound systems, video player equipment and TV screens and video projectors have changed the equipment used in home theatre set-ups and enabled home users to experience a higher-resolution screen image, improved sound quality and components that offer users more options (e.g., many of the more expensive Blu-ray players in 2016 can also "stream" movies and TV shows over the Internet using subscription services such as Netflix). The development of Internet-based subscription services means that 2016-era home theatre users do not have to commute to a video rental store as was common in the 1980s and 1990s (nevertheless, some movie enthusiasts buy DVD or Blu-ray discs of their favourite content).
Today, a home cinema system typically uses a large projected image from a video projector or a large flat-screen high-resolution HDTV system, a movie or other video content on a DVD or high-resolution Blu-ray disc, which is played on a DVD player or Blu-ray player, with the audio augmented with a multi-channel power amplifier and anywhere from two speakers and a stereo power amp (for stereo sound) to a 5.1 channel amplifier and five or more surround sound speaker cabinets (with a surround sound system). Whether home cinema enthusiasts have a stereo set-up or a 5.1 channel surround system, they typically use at least one low-frequency subwoofer speaker cabinet to amplify low-frequency effects from movie soundtracks and reproduce the deep pitches from the musical soundtrack.
In the 1950s, playing home movies became popular in the United States with middle class and upper-class families as Kodak 8 mm film projector equipment became more affordable. The development of multi-channel audio systems and later LaserDisc in the 1980s created a new paradigm for home video, as it enabled movie enthusiasts to add better sound and images to their setup. In the mid-1980s to the mid-1990s, a typical home cinema in the United States would have a LaserDisc or VHS player playing a movie, with the signal fed to a large rear-projection television set. Some people used expensive front projectors in a darkened viewing room. During the 1990s, watching movies on VHS at home became a popular leisure activity. Beginning in the late 1990s, and continuing throughout much of the 2000s, home-theater technology progressed with the development of the DVD-Video format (higher resolution than VHS), Dolby Digital 5.1-channel audio ("surround sound") speaker systems, and high-definition television (HDTV), which initially included bulky, heavy Cathode Ray Tube HDTVs and flat screen TVs. In the 2010s, affordable large HDTV flatscreen TVs, high resolution video projectors (e.g., DLP), 3D television technology and the high resolution Blu-ray Disc (1080p) have ushered in a new era of home theater.
In the 2000s, the term "home cinema" encompasses a range of systems meant for movie playback at home. The most basic and economical system could be a DVD player, a standard definition (SD) large-screen television with at least a 27-inch (69 cm) diagonal screen size, and an inexpensive "home theater in a box" surround sound amplifier/speaker system with a subwoofer. A more expensive home cinema set-up might include a Blu-ray disc player, home theater PC (HTPC) computer or digital media receiver streaming devices with a 10-foot user interface, a high-definition video projector and projection screen with over 100-inch (8.3 ft; 2.5 m) diagonal screen size (or a large flatscreen HDTV), and a several-hundred-watt home theater receiver with five to eleven surround-sound speakers plus one or two powerful subwoofer(s). 3D-TV-enabled home theaters make use of 3D TV sets/projectors and Blu-ray 3D players in which the viewers wear 3D-glasses, enabling them to see 3D content.
Home theater designs and layouts are a personal choice, and the type of home cinema a user can set up depends on her/his budget and the space available within the home. The minimum requirements for a home theater are: a display of at least 27 inches (69 cm) measured diagonally, whether a television set (CRT, for which no new models are sold in the U.S., LCD, plasma display, organic light-emitting diode (OLED), Silicon X-tal Reflective Display (SXRD), Laser TV or rear-projection TV, in standard-definition (SDTV), HDTV or 3D-TV form) or a good-quality video projector such as a Digital Light Processing (DLP) unit; an AV receiver, or a pre-amplifier (surround processor) and power amplifier combination, capable of at least stereo sound but preferably 5.1-channel Dolby Digital and DTS audio; and a source that plays or broadcasts movies in at least stereo sound, such as a VHS Hi-Fi VCR or LaserDisc player (no new stand-alone models of either are available; VHS VCRs are usually bundled in combo decks with DVD players), a DVD player, a Blu-ray disc player, a cable or satellite receiver, or a video game console. Finally, at least two speakers are needed, though six to eight speakers plus a subwoofer for bass or low-frequency effects are more common.
The most-expensive home-theater set-ups, which can cost over $100,000 (US), and which are in the homes of executives, celebrities and high-earning professionals, have expensive, large, high-resolution digital projectors and projection screens, and maybe even custom-built screening rooms which include cinema-style chairs and audiophile-grade sound equipment designed to mimic (or sometimes even exceed) commercial theater performance.
In the 2010s, many home cinema enthusiasts aim to replicate, to the degree that is possible, the "movie theatre experience". To do so, many home cinema buffs purchase higher quality components than used for everyday television viewing on a relatively small TV with only built-in speakers. A typical home theater includes the following components:
- Movie or other viewing content: As the name implies, one of the key reasons for setting up a home cinema is to watch movies on a large screen, which does a more effective job at reproducing filmed images of vast landscapes or epic battle sequences. As of 2016, home cinema enthusiasts using "Smart" Blu-ray players may also watch DVDs of TV shows, and recorded or live sports events or music concerts. As well, with a "Smart" player, a user may be able to "stream" movies, TV shows and other content over the Internet. Many 2016-era DVD players and Blu-ray players also have inputs which allow users to view digital photos and other content on the big screen.
- Video and audio input devices: One or more video/audio sources. High resolution movie media formats such as Blu-ray discs are normally preferred, though DVD or video game console systems are also used. Some home theaters include a HTPC (Home Theater PC) with a media center software application to act as the main library for video and music content using a 10-foot user interface and remote control. In 2016, some of the more-expensive Blu-ray players can "stream" movies and TV shows over the Internet.
- Audio and video processing devices: Input signals are processed by either a standalone AV receiver or a preamplifier and sound processor for complex surround sound formats such as Dolby Pro Logic and/or Pro Logic II, IIx, and IIz, Dolby Digital, DTS, Dolby Digital EX, DTS-ES, Dolby Digital Plus, Dolby TrueHD and DTS-HD Master Audio. The user selects the input (e.g., DVD, Blu-ray player, streaming video, etc.) at this point before the signal is forwarded to the output stage. Some AV receivers enable the viewer to use a remote control to select which input device or source to use.
- Audio output: Systems consist of preamplifiers, power amplifiers (both of which may be integrated into a single AV receiver) and two or more loudspeakers mounted in speaker enclosures. The audio system requires at least a stereo power amplifier and two speakers, for stereo sound; most systems have multi-channel surround sound power amplifier and six or more speakers (a 5.1 surround sound system has left and right front speakers, a centre speaker, left and right rear speakers and a low-frequency subwoofer speaker enclosure). Some users have 7.1 Surround Sound. It is possible to have up to 11 speakers with additional subwoofers.
- Video output: A large-screen display, typically an HDTV. Some users may have a 3D TV. As of 2015, flatscreen HDTV is the norm. Options include Liquid crystal display television (LCD), plasma TV, OLED. Home cinema users may also use a video projector and a movie screen. If a projector is used, a portable, temporary screen may be used or a screen may be permanently mounted on a wall.
- Seating and atmosphere: Comfortable seating is often provided to improve the cinema feel. Higher-end home theaters commonly also have sound insulation to prevent noise from escaping the room and specialized wall treatment to balance the sound within the room. Some luxury home cinemas have movie theatre-style padded chairs for guests.
Component systems vs. theater-in-a-box
Home cinemas can either be set up by purchasing individual components one by one (e.g., buying a multichannel amp from one manufacturer, a Blu-ray player from another manufacturer, speakers from another company, etc.) or a by purchasing a HTIB (Home Theater in a Box) package which includes all of components from a single manufacturer, with the exception of a TV or projector. HTIB systems typically include a DVD or Blu-ray player, a surround sound amplifier, five surround speakers, a subwoofer cabinet, cables and a remote. The benefit of purchasing separate components one by one is that consumers can attain improved quality in video or audio and better matching between the components and the needs of a specific room, or the consumer's needs.
However, to buy individual components, a consumer must have knowledge of sound-system and video-system design and electronics, and she or he must research the specifications of each component. For instance, some speakers perform better in smaller rooms while others perform better in larger rooms, and seating location must also be considered. One of the challenges with buying all the components separately is that the purchaser must understand speaker impedance, power handling, and HDMI compatibility and cabling. Given these challenges, HTIB systems are a simpler and more cost-effective solution for many families and consumers; they are also better suited to smaller living spaces in semi-detached homes or apartments/condos where noise could be an issue. As well, buying an HTIB package is often less expensive than buying separate components.
Some home cinema enthusiasts build a dedicated room in their home for the theater. These more advanced installations often include sophisticated acoustic design elements, including "room-in-a-room" construction that isolates sound and provides an improved listening environment and a large screen, often using a high definition projector. These installations are often designated as "screening rooms" to differentiate them from simpler, less-expensive installations. In some movie enthusiast's home cinemas, this idea can go as far as completely recreating an actual small-scale cinema, with a projector enclosed in its own projection booth, specialized furniture, curtains in front of the projection screen, movie posters, or a popcorn or vending machine with snack food and candy. More commonly, real dedicated home theaters pursue this to a lesser degree.
As of 2016, the days of the $100,000-and-over home theater system are passing, as rapid advances in digital audio and video technologies have spurred a sharp drop in prices, making a home cinema set-up more affordable than ever before. This in turn has brought the true digital home theater experience to the doorsteps of do-it-yourselfers, often for much less than the price of a low-budget economy car. Consumer-grade A/V equipment can now meet some of the standards of a small modern commercial theater (e.g., THX sound).
Home theater seating consists of chairs or sofas specifically engineered and designed for viewing movies in a home theater. Some home theater seats have a cup holder built into the chairs' armrests and a shared armrest between each seat. Some seating has movie-theater-style chairs like those seen in a movie cinema, which feature a flip-up seat cushion. Other seating systems have plush leather reclining lounger types, with flip-out footrests. Available features include storage compartments, snack trays, tactile transducers for low-frequency effects that can be felt through a chair (without creating high volume levels which could disturb other family members), and electric motors to adjust the chair. Home theater seating tends to be more comfortable than seats in a public cinema.
In homes that have an adequately sized backyard, it is possible for people to set up a home theater in an outdoor area. Depending on the space available, it may simply be a temporary version with foldable screen, a video projector and couple of speakers, or a permanent fixture with a huge screen and dedicated audio set-up mounted in a weather-proof cabinet. Outdoor home cinemas are popular with BBQ parties and pool parties. Some specialist outdoor home-cinema companies are now marketing packages with inflatable movie screens and purpose-built AV systems. Some people have expanded the idea and constructed mobile drive-in theaters that can play movies in public open spaces. Usually, these require a powerful projector, a laptop or DVD player, outdoor speakers or an FM transmitter to broadcast the audio to other car radios.
In the 1950s, home movies became popular in the United States and elsewhere as Kodak 8 mm film (Pathé 9.5 mm in France) and camera and projector equipment became affordable. Projected with a small, portable movie projector onto a portable screen, often without sound, this system became the first practical home theater. They were generally used to show home movies of family travels and celebrations, but they also doubled as a means of showing some commercial films, or even private stag films. Dedicated home cinemas were called screening rooms at the time and were outfitted with 16 mm or even 35 mm projectors for showing commercial films. These were found almost exclusively in the homes of the very wealthy, especially those in the movie industry.
Portable home cinemas improved over time with color film, Kodak Super 8 mm film cartridges, and monaural sound but remained awkward and somewhat expensive. The rise of home video in the late 1970s almost completely killed the consumer market for 8 mm film cameras and projectors, as VCRs connected to ordinary televisions provided a simpler and more flexible substitute.
The development of multi-channel audio systems and LaserDisc in the 1980s added new dimensions for home cinema. The first-known home cinema system was designed, built and installed by Steve J. LaFontaine as a sales tool at Kirshmans furniture store in Metairie, Louisiana in 1974. He built a special sound room which incorporated the earliest quadraphonic audio systems, and he modified Sony Trinitron televisions for projecting the image. Many systems were sold in the New Orleans area in the ensuing years before the first public demonstration of this integration occurred in 1982 at the Summer Consumer Electronics Show in Chicago, Illinois. Peter Tribeman of NAD (U.S.) organized and presented a demonstration made possible by the collaborative effort of NAD, Proton, ADS, Lucasfilm and Dolby Labs, who contributed their technologies to demonstrate what a home cinema would "look and sound" like.
Over the course of three days, retailers, manufacturers, and members of the consumer electronics press were exposed to the first "home-like" experience of combining a high-quality video source with multi-channel surround sound. That one demonstration is credited with being the impetus for developing what is now a multibillion-dollar business.
In the early to mid-1990s, a typical home cinema would have a LaserDisc or VHS player fed to a large screen: rear projection for the more-affordable setups, and LCD or CRT front-projection in the more-elaborate systems. In the late 1990s, a new wave of home-cinema interest was sparked by the development of DVD-Video, Dolby Digital and DTS 5.1-channel audio, and high-quality front video projectors that provide a cinema experience at a price that rivals a big-screen HDTV.
In the 2000s, developments such as high-definition video, Blu-ray disc (as well as the now-obsolete HD DVD format, which lost the format war to Blu-ray) and newer high-definition 3D display technologies enabled people to enjoy a cinematic feeling in their own home at a more-affordable price. Newer lossless audio from Dolby Digital Plus, Dolby TrueHD, DTS-HD High Resolution Audio and DTS-HD Master Audio and speaker systems with more audio channels (such as 6.1, 7.1, 9.1, 9.2, 10.2, and 22.2) were also introduced for a more cinematic feeling.
By the mid-2010s, the Blu-ray Disc medium had become a common home media standard, and online video streaming sources such as Netflix and YouTube were offering a range of high definition content, including some 4K content (although various compression technologies are applied to make this streamed content feasible). The first 4K Blu-ray discs were released in 2016. By this point, 4K TVs and computer monitors were rapidly declining in price and increasing in prevalence, despite a lack of native 4K content. While many DSP systems existed, DTS-HD Master Audio remained the studio standard for lossless surround sound encoding on Blu-ray, with five or seven native discrete channels. High definition video projectors also continued to improve and decrease in price, relative to performance.
Entertainment equipment standards
Noise Criteria (NC) are noise-level guidelines applicable to cinema and home cinema. For this application, it is a measure of a room's ambient noise level at various frequencies. For example, in order for a theater to be THX certified, it must have an ambient sound level of NC-30 or less. This helps to retain the dynamic range of the system. Some NC levels are:
- NC 40: A significant but still tolerable level of ambient noise; generally regarded as the highest "acceptable" ambient noise level. For reference, normal talking spans roughly 40 decibels of sound pressure level at the quiet end to about 60 decibels at the loud end.
- NC 30: A good NC level; necessary for THX certification in cinemas.
- NC 20: An excellent NC level; difficult to attain in large rooms and sought after for dedicated home cinema systems. For example, for a home cinema to be THX certified, it has to have a rating of NC 22.
- NC 10: Virtually impossible noise criteria to attain; 10 decibels is associated with the sound level of calm breathing.
Projectors used for home cinemas have a set of recommended criteria, listed below and followed by a simple spec-check sketch:
- Brightness, usually at least 1800 lumens.
- Resolution (the number of pixels making up the image), usually at least 1920×1080, one of the HDTV standards.
- Contrast (how well white, black and greyscales are displayed), usually a minimum of 5000:1.
- HDMI connection sockets (although some people still use three-cable component video connections, with separate cables for the different colour signals)
- Good quality manufacturers, although this is a subjective element which depends upon user tastes and budget. For one user with a modest budget, "good quality" may mean a mainstream consumer electronics brand; for a well-to-do user, a Christie projector may be their interpretation of "good quality" (Christie units are widely used in professional, commercial theatres)
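Those recommendations can be expressed as a simple spec check, sketched below in Python. The example projector values are invented, and the "good quality manufacturer" point is left out of the check because it is subjective.

```python
MIN_LUMENS = 1800
MIN_RESOLUTION = (1920, 1080)
MIN_CONTRAST = 5000  # i.e. a 5000:1 contrast ratio


def failed_criteria(spec: dict) -> list:
    """Return the recommended criteria that a projector spec fails to meet."""
    problems = []
    if spec.get("lumens", 0) < MIN_LUMENS:
        problems.append("brightness below 1800 lumens")
    width, height = spec.get("resolution", (0, 0))
    if width < MIN_RESOLUTION[0] or height < MIN_RESOLUTION[1]:
        problems.append("resolution below 1920x1080")
    if spec.get("contrast", 0) < MIN_CONTRAST:
        problems.append("contrast below 5000:1")
    if not spec.get("hdmi", False):
        problems.append("no HDMI input")
    return problems


# A hypothetical budget projector.
print(failed_criteria({"lumens": 2200, "resolution": (1280, 720),
                       "contrast": 3000, "hdmi": True}))
# ['resolution below 1920x1080', 'contrast below 5000:1']
```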
Challenges in Healthcare
Preventable Harms, Patient Experience, and Associated Costs
Incredible advances in medicine and technology are available to prevent, diagnose, and treat diseases, but our increasingly complex healthcare system still fails many patients. Yet patient satisfaction, preventable harms during medical care, and penalties that medical institutions and physicians receive due to preventable errors are significant issues that plague the medical system.
Preventable harm causes up to 440,000 deaths per year in hospitals, making it the third leading cause of death in the United States today.1 Approximately 45-66% of these adverse events are related to surgery.2-4 It is almost ironic that while patients undergo surgery to get better, many of them will be subjected to further harm. According to the CDC, about 51.4 million inpatient procedures are performed each year in the US.5 Based on calculations using published incidence rates for surgery-related adverse events, between 976,600 to 1,850,000 people will potentially suffer from preventable harm during the perioperative period.2-4
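The arithmetic behind that range is straightforward: dividing the projected harm counts by the roughly 51.4 million annual inpatient procedures gives the implied risk per procedure. The short Python sketch below simply reproduces that calculation using the figures already cited above.

```python
procedures_per_year = 51_400_000          # CDC estimate cited above
harmed_low, harmed_high = 976_600, 1_850_000

rate_low = harmed_low / procedures_per_year
rate_high = harmed_high / procedures_per_year

print(f"Implied risk per procedure: {rate_low:.1%} to {rate_high:.1%}")
# Implied risk per procedure: 1.9% to 3.6%
```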
It is difficult to know for sure how many people die needlessly, but the table below presents data on preventable harms. Not every single incident of these events is preventable, yet focused efforts directed at reducing them individually can prevent as many as two-thirds of them.6,7
Table. Preventable Adverse Events – Annual Averages
| Preventable Adverse Events | Patients Per Year |
| --- | --- |
| Health-care acquired infections (HCAIs) | 100,000 deaths8 |
| Deep Vein Thrombosis | 300,000-600,000 incidences10 |
| Diagnostic Errors | 80,000 incidences11 |
For many years, medicine has been based on a paternalistic patient-physician relationship, which has resulted in patients who do not fully understand their surgery, its risks, benefits, or alternative options, nor what to expect afterwards. Such passivity and lack of knowledge often means that the people with the biggest stake in their health care outcomes – patients and their families – are ill-equipped to advocate for the best and safest care. Due to the poor patient-physician relationship and its lack of appropriate communication, patient satisfaction rates in medical care tend to be fairly low. One study found only about half (54%) of patients reported being satisfied with their care,12 while another found that lower rates of patient satisfaction were associated with higher 30-day risk-standardized hospital readmission rates.13
In recent years, the Centers for Medicare and Medicaid Services (CMS) has begun to focus on preventable harms and patient satisfaction rates and has developed metrics that seek to penalize those medical institutions and physicians with high rates of preventable adverse events and low patient satisfaction rates. CMS has developed various measures designed to address poor patient-physician communication, adverse events, hospital readmissions, and various other areas of patient safety and care quality. In 2014, nearly 1,500 hospitals will have their payments reduced by up to 1.25% due to penalties based on quality,14 while 2,225 hospitals will pay $280 million in penalties for readmissions.15 Starting in 2015, an additional penalty will be imposed on close to 750 hospitals for hospital-acquired conditions, and hospitals stand to lose more than an estimated $330 million in penalties.16
These increases in costs are not limited to the healthcare system. Patients are often affected, with costs frequently being passed on to them, either directly in the form of increased co-pays, or indirectly in the form of excess and unnecessary care. As healthcare costs continue to increase, more employers are moving to high-deductible plans, making patients responsible for more routine medical costs and a larger proportion of expensive treatments and hospitalizations. Around two-thirds of companies offer these high-deductible plans, with almost 20% offering nothing but high-deductible plans and another two-fifths considering doing so.17 This is potentially detrimental to patient care, with higher costs shown to make patients delay or skip needed care, leading to further costs.18 Employee contributions to their health care premiums have also increased nearly 150% in the past decade.19
The Solution: Patient Engagement
Despite CMS’s efforts, progress in reducing preventable harm and improving patient satisfaction remains slow across the U.S. Reports of transformative efforts for hospital-wide preventable harms are few and far in between. One avenue for decreasing preventable medical errors and improving patient satisfaction is through increased patient participation.20-23
Research into the causes of medical errors suggests that many of them could have been mitigated by patient involvement at various points, either as individuals or as a group.24 The World Health Organization World Alliance for Patient Safety campaign focuses on patients and their families as the core of their worldwide safety movement, to ensure legitimate and sustainable improvements in patient safety.25
Studies have shown that patients are ready to play a role in error prevention. Among nearly 2100 surveyed patients, the vast majority (91%) reported feeling that they could prevent medical errors occurring in hospitals, and almost everyone (98%) felt that hospitals should educate patients about how to help prevent errors.26 When patients were educated at the start of their hospital stay to ask their medical staff to wash their hands, the use of soap increased significantly (34% to 94%).27,28 In patient surveys taken after discharge, 90% to 100%27,29 of patients reported having asked a nurse and 31% to 35% asked a physician to wash their hands.28,29 Greater patient and family engagement in medical care also played a role in decreasing the incidence of medication errors by half at the Dana-Farber Cancer Institute.30
Better communication and shared decision-making between patients and doctors are also key components to help ease the complex journey of surgery, prevent medical errors, and improve patient satisfaction.31-33 Improving communication can also strengthen the relationship between doctors and patients. Research shows that patients who have a good relationship with their healthcare providers receive better care and are happier with their care.34 Even in the unfortunate event of a complication or error, good communication and a strong patient-physician relationship can decrease malpractices cases by as much as half.35,36 However, despite the existing successes and known benefits, more needs to be done.
Checklists for Patient Engagement
Checklists for doctors have long been shown to reduce patient complications and even death.37 Checklists have been particularly successful in improving outcomes after surgery. Training surgeons in communication and using a procedure checklist before, during, and after surgery has been shown to significantly decrease patient complications up to 30 days after surgery.38 One study found that a surgical safety checklist used at hospitals around the world reduced major complications after surgery by 36% and lowered the death rate by nearly half.39
Patient checklists can also help to reduce harm by helping patients to manage the complex preparation tasks that they need to accomplish, such as stopping certain medications before surgery or knowing when to stop eating or drinking prior to their procedure. They can also help alert patients to key safety points, for instance, encouraging patients to check with their doctors about DVT prophylaxis and perioperative antibiotic use to prevent infection.
A recent study of patients undergoing hip or knee replacement surgeries found that, when patients asked their surgeons a structured checklist of questions about their procedure, both patients and surgeons reported improved satisfaction with their communication, and patients reported being able to make a more informed decision regarding their medical care.40 Another study found that cancer patients who were given a list of common questions to ask their doctors about their care, reported feeling better and more informed about the care they received during their appointments.41
Electronic Tools for Health
Cell phones, electronic tablets, and other such devices are vital tools used ever-increasingly by patients for health purposes. For example, 31% of cell phone owners reported using their phones to look for health information in 2012, compared to only 17% in 2010.42 mHealth programs have successfully tackled aspects of various acute and chronic conditions like pneumonia, diabetes, HIV, tuberculosis and mental health.43,44 Significant benefits have also been demonstrated in care related to chronic disease conditions in the form of health information systems, appointment reminders, medication compliance, patient monitoring and education, mental health support, and supply chain management.43,44,45-47
Tapping into the large proportion of patients with mobile devices, text messaging and other mobile applications are being used to deliver health information and services in the palm of their hands. Services for patients include medical appointment and medication reminders, self-tracking tools, educational resources, lab and clinical results delivery, and many more through timely and often personalized applications.48 For example, a recent study showed a 2% decrease in Hemoglobin A1C in diabetic patients who used a mobile phone–based monitoring and insulin-dosing coaching system.49 However, the realm of patient engagement has been largely neglected by the growing use of mobile technologies in healthcare.50,51
Electronic Tools + Patient Engagement = Doctella
Doctella has created a simple solution that aligns required actions and incentives across the vast and complex healthcare system. Doctella democratizes critical information that has thus far been only accessible to highly trained medical professionals. This is particularly important given that patients and families are the only stakeholders in the healthcare system that span the entire continuum of care starting with the discussion of symptoms with family and friends, progressing onwards to specialized surgical care, and finally post-operative recovery and hopefully a return to health and wellness.
Checklists are a proven and tested way to ensure that complex tasks are completed in a high-quality manner. Doctors and nurses use checklists everyday to help them to effectively perform their difficult jobs. The complexity of the healthcare system also makes the job of being a patient a hard one. Using checklists can activate and engage patients and families in ways that have been shown to provide benefits of higher patient satisfaction, adherence, and ultimately better health, which often results in lower costs to the entire healthcare system.
Doctella helps patients to benefit from checklists. Our new mobile, web, and print platform hosts checklists created for patients. The platform is designed with easy-to-use search technology, reminders, and step-by-step questions patients can ask their healthcare providers.
Doctella seeks to harness the power of an electronic platform to engage patients and their loved ones during their medical care. Patient outcomes and satisfaction have been shown to improve as patients get more involved and engaged in their care, whether by being able to fully participate in decisions, carefully select when to receive care, or have the knowledge to choose between therapies that may or may not work for them.
Doctella’s goal is to help every patient ask the right questions and best prepare for their surgeries or other procedures by creating partnerships for better, safer health care. We believe that Doctella checklists, powered by the Doctella mobile app and website, will empower patients to watch out for preventable harm, better engage with their doctors, and ultimately lead to improve satisfaction and better long-term health outcomes.
Step-by-step tutorial by an expert to understand IP addressing and subnetting (CCNA context), Part I
Written, Designed and Edited by: Aftab-tekdad
First of all, let us understand IP. IP stands for Internet Protocol and is part of the TCP/IP stack. According to the OSI reference model, IP is a layer 3 protocol, and it is mainly responsible for routing. The IP protocol uses a specific address called the IP address, or logical address. Whenever you send data to another system using any network-based application, such as Internet Explorer, an FTP client or Outlook Express, the data travels from your network application to TCP. TCP adds the relevant information to the data and hands it over to IP.
The IP protocol then adds the source and destination IP addresses. Now let us discuss the IP address in detail.
As you can see in the illustration, an IP address is a 32-bit binary number, but for ease of human reference it is represented in dotted decimal notation. In practice it is the dotted decimal form that the network administrator assigns to systems. Then why do we need to understand the binary form? This question should naturally arise in your mind. As a Cisco Certified Network Associate, you may be required to perform complex networking tasks, and one of those tasks may be to subnet an IP address. For now, all you need to know is that subnetting is a process of dividing one network into multiple smaller networks. Let us get back to our core topic: what is an IP address? As we discussed earlier, it is a layer three, or logical, address used by the IP protocol to determine both the network on which the destination system lives and the exact system itself. Let us delve into further detail.
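To make the binary-versus-dotted-decimal relationship concrete, here is a small Python sketch (not part of the original tutorial) that converts an address between the two forms.

```python
# Illustrative only: converting a dotted-decimal IP address to its
# 32-bit binary form and back.

def to_binary(ip: str) -> str:
    # Each of the four octets becomes 8 bits, giving 32 bits in total.
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

def to_decimal(binary_ip: str) -> str:
    # Reverse step: read each 8-bit group back as a decimal octet.
    return ".".join(str(int(bits, 2)) for bits in binary_ip.split("."))

print(to_binary("192.168.0.1"))   # 11000000.10101000.00000000.00000001
print(to_decimal("11000000.10101000.00000000.00000001"))  # 192.168.0.1
```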
As I told you, an IP address is a 32-bit binary number which identifies both the logical group to which the computer belongs and the exact host. In other words, an IP address consists of two parts: one part identifies the logical group, or network ID, of the computer, and the other part represents the host itself. Here in the illustration, "192.168.0" is the network portion and "1" is the host portion of the address. In other words, the computer assigned the IP address 192.168.0.1 belongs to the 192.168.0.0 network and its unique identification within that network is "1". Now the important question is: how is it decided how much of the IP address is the network address and how much is the host ID? If you look at the binary form of the example IP address together with its subnet mask, you will find the answer. The continuous "1"s in the subnet mask decide the network portion of the IP address: the bits of the IP address that line up with the continuous ones of the subnet mask represent the network address, and the bits that line up with the remaining zeroes represent the host ID.
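As a rough illustration of this masking idea, the following Python sketch (my addition, assuming the example address 192.168.0.1 with a 255.255.255.0 mask) splits an address into its network and host portions with a bitwise AND.

```python
# Sketch: how the subnet mask splits 192.168.0.1 into its network portion
# and host portion using a bitwise AND.

def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ip(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

ip   = ip_to_int("192.168.0.1")
mask = ip_to_int("255.255.255.0")

network_id = ip & mask                  # bits under the continuous 1's of the mask
host_id    = ip & ~mask & 0xFFFFFFFF    # bits under the remaining 0's

print(int_to_ip(network_id))            # 192.168.0.0
print(host_id)                          # 1
```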
Here you can see that PC-1 and PC-2 are in the 192.168.0.0 network. Since both computers share the common network ID, i.e., 192.168.0, we can say they belong to the same logical group. Computers belonging to the same logical group can communicate with each other directly, without the help of any intermediate device such as a router. On the other side, PC-3 and PC-4 are in another logical group, 192.168.1.0. PC-3 and PC-4 both share the same network portion, i.e., 192.168.1, and their host IDs are unique within their network. Remember, in the world of networking it is not the physical layout which makes different networks, but the layer three, or logical, addresses which divide computers into different networks. In the illustration, if we assigned all four computers the same network ID, i.e., 192.168.0.0, then all of them would become a single network and we would not require a device like a router in between. Computers in the same network can communicate with each other without any third device, but computers with different network IDs must have some type of router in between to act as a gateway between the two networks.
Suppose, in the diagram, that PC-1 wants to send a data packet to PC-2. What will happen? At PC-1 the IP protocol puts its own IP address into the packet as the source address and PC-2's IP address as the destination address. While the source and destination layer three addresses are being added to the data packet, the IP protocol decides whether the packet is destined for its own network or for another network. If the destination network is the same, it knows that no gateway address is required. It will simply send an ARP broadcast to its own network, asking for the MAC address of the destination machine, i.e., PC-2. ARP stands for Address Resolution Protocol and is part of the TCP/IP protocol stack; it is used to resolve a MAC address from a known IP address. Here the layer three components know what the destination IP address is, but they do not know the MAC address of the computer whose IP address is 192.168.0.2. So, in order to learn the destination machine's MAC address, without which data cannot leave the machine, PC-1 sends an ARP broadcast to its own network asking for PC-2's MAC address. A broadcast is a request destined for all computers on the network.
In our case the ARP broadcast will look something like this. You can see that PC-1 is sending an ARP broadcast to the entire network. The entire network is specified here by 255, which is the maximum value for the host portion. Look at the destination IP address 192.168.0.255: the packet is destined for every computer whose network ID is 192.168.0. This broadcast packet will reach every computer on the router's E0 side, but the router will not allow the broadcast to propagate to its other side. The ARP broadcast contains a request asking for PC-2's MAC address, and in the last line you can see that PC-2 responds with its MAC address. This is how computers learn a destination computer's MAC address. Layer two protocols such as Ethernet then put this address into the frame as the destination MAC address before the data is finally sent out of the computer. This whole process applies to communication between computers in the same network. What will happen if PC-1 wants to communicate with PC-3?
If PC-1 wants to communicate with PC-3, the layer three protocols will discover that the destination computer is in another network, and PC-1 cannot obtain the MAC address of the destination computer directly, because a computer cannot broadcast beyond its own network; when the router receives a broadcast destined for the 192.168.0.0 network, it simply drops it. So once PC-1 has decided that the destination is in another network, it knows there must be some kind of router in between, and instead of asking for the MAC address of the destination machine in the ARP broadcast, it asks for the MAC address of the router. In this case PC-1 sends a broadcast to its own network asking, "Who has 192.168.0.3?", which is the address of the router interface to which PC-1's network is connected. The router replies with its own MAC address. Once PC-1 obtains the router's MAC address, it sends the packet onto the network. This packet is received by the router, since the packet's destination MAC address matches the router's MAC address. The router then checks the packet's destination layer three network ID and finds that the destination network is directly connected to it on port E1. So the router sends an ARP broadcast out E1, destined for 192.168.1.255, asking for the MAC address of 192.168.1.1. PC-3 sends its MAC address to the router, and the router puts that MAC address into the packet's destination layer two address field and forwards it out the E1 interface. So now you should understand how layer three protocols like IP are used to divide computers into different logical groups, and also that routers do not forward broadcasts to the other side. We will discuss routing in later chapters; let us get back to the current topic, which is IP addressing.
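The decision PC-1 makes, ARP for the destination itself when it is local or for the gateway when it is not, can be sketched in a few lines of Python. The gateway address 192.168.0.3 is taken from the example above; the helper names are mine.

```python
# Sketch of the "local or remote?" decision a host makes before sending.

def same_network(ip1: str, ip2: str, mask: str) -> bool:
    to_int = lambda a: int.from_bytes(bytes(int(o) for o in a.split(".")), "big")
    return (to_int(ip1) & to_int(mask)) == (to_int(ip2) & to_int(mask))

def arp_target(src: str, dst: str, mask: str, gateway: str) -> str:
    # ARP for the destination when it is on the same network, otherwise for the gateway.
    return dst if same_network(src, dst, mask) else gateway

print(arp_target("192.168.0.1", "192.168.0.2", "255.255.255.0", "192.168.0.3"))  # 192.168.0.2
print(arp_target("192.168.0.1", "192.168.1.1", "255.255.255.0", "192.168.0.3"))  # 192.168.0.3
```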
First of all, let us get familiar with a little background. The TCP/IP protocols were initially developed for the research network of the United States Defense Advanced Research Projects Agency (DARPA, or ARPA) in the 1970s by pioneering network engineers Vinton Cerf and Bob Kahn. The designers of the IP protocols created five classes of IP addresses, namely class A, class B, class C, class D and class E.
Class A has the first 8 bits reserved for network addressing and the remaining 24 bits for host addressing, hence its default subnet mask is 255.0.0.0. Its decimal address range is 1-126. Network 127.0.0.0 in class A is reserved for the loopback address. A loopback address such as 127.0.0.1 is used by every operating system to identify itself; if you can successfully ping this address, it means the TCP/IP protocol is installed properly and is functional. Class B has the first 16 bits reserved for network addressing and the remaining 16 bits for host addressing, hence its default subnet mask is 255.255.0.0. Its decimal address range is 128-191.
Class C has the first 24 bits reserved for network addressing and the remaining 8 bits for host addressing, hence its default subnet mask is 255.255.255.0 and its decimal address range is 192-223. Class D is reserved for multicasting and covers the range 224-239. Class E is reserved for research purposes and covers the range 240-255. Out of these five classes, only classes A, B, and C may be used for commercial purposes; we cannot assign class D and E addresses to computers.
According to the IP addressing documentation, the highest order bit in class A must remain "0", hence we actually get a network range of 2^7-2, equal to 126 networks, i.e., from 1.0.0.0 to 126.0.0.0, and 2^24-2, equal to 16,777,214, hosts per network. For class B, the first highest order bit must remain "on" and the second "off", that is binary "10", which gives a total of 2^14-2, equal to 16,382, networks, i.e., from 128.0.0.0 to 191.255.0.0, and 2^16-2, equal to 65,534, hosts per network. For class C, the first and second highest order bits must remain "on" and the third highest order bit "off", that is binary "110", which gives a total of 2^21-2, equal to 2,097,150, networks, i.e., from 192.0.0.0 to 223.255.255.0, and 2^8-2, equal to 254, hosts per network. Here you might be wondering why I am subtracting 2 from the number of networks and the number of hosts. That is a genuine doubt. According to the IP documentation, all the network bits cannot be either on or off at the same time, so we have to subtract the two combinations of all zeros and all ones. The same rule applies to host addressing: all the host bits cannot be turned on or off at the same time, hence minus two. In the case of hosts, all host bits turned "on" represents the broadcast address and all host bits turned "off" represents the network address. We cannot assign the broadcast address or the network address to hosts, so we have to subtract them.
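A small Python sketch (an illustration added here, not from the tutorial) classifies an address by its first octet, following the ranges just listed.

```python
# Determine the class of an IPv4 address from its first octet.

def ip_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first == 127:
        return "Loopback (reserved in class A)"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    if 240 <= first <= 255:
        return "E (reserved)"
    return "Invalid"

for address in ("10.1.1.1", "172.16.5.9", "192.168.0.1", "224.0.0.5"):
    print(address, "->", ip_class(address))   # A, B, C, D (multicast)
```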
Since all the class A addresses were assigned to universities and military organizations in the early days, class A is no longer available to the public. Almost all class B addresses are also exhausted, and only some of the class C addresses are still available. In the beginning, 2^32 IP addresses were considered quite a large number; at least, that is what the developers of the IP protocol at the DoD thought. When the TCP/IP protocol was developed at the DoD, there were only a few computers in universities and other organizations, and not all of them intended to connect with each other, so 2^32 really seemed like a big number in those days. But with time, the popularity of the TCP/IP protocol started to soar. Almost everyone wanted to be connected, every computer required one IP address in order to connect to others, and no two computers connected together can use the same IP address. Soon it was realized that in the near future IP addresses were going to be scarce, so a solution was devised to deal with the shortage of IP addresses.
Some of the IP addresses from each of the three classes A, B, and C were set aside to be used as private IP addresses: 10.0.0.0-10.255.255.255 in class A, 172.16.0.0-172.31.255.255 in class B, and 192.168.0.0-192.168.255.255 in class C. The remaining IP addresses in these three classes were reserved for use on the internet and termed public IP addresses. The important thing to remember is that computers having an IP address within the private ranges cannot connect to the internet directly without some sort of network address translation, which we will discuss later. They cannot connect directly because internet routers are configured not to forward data packets destined for these private IP addresses; in other words, if an internet router receives a packet destined for a private IP address, it simply drops the packet. This arrangement was implemented to save IP addresses. You may be wondering how keeping aside some of the IP addresses as private addresses saves public IP addresses; in fact, it seems to decrease the number of public IP addresses.
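Python's standard ipaddress module already knows these reserved ranges, so a quick sketch (an added illustration, not part of the original text) can show which addresses count as private.

```python
# Check whether an address falls inside the reserved private ranges.
import ipaddress

for address in ("10.0.0.5", "172.20.1.1", "192.168.0.100", "8.8.8.8"):
    ip = ipaddress.ip_address(address)
    kind = "private - dropped by internet routers" if ip.is_private else "public - routable on the internet"
    print(f"{address}: {kind}")
```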
Just look at the network and you can figure out for yourself how keeping a few IP addresses as private saves a lot of IP addresses. Here you see a private network comprising many desktop computers connected to the internet through a server which is doing Network Address Translation. The server has two network adapter cards with two IP addresses: the server's internal network card has an IP address within the same private network as the desktop computers, while the server's external network interface card has a public IP address, through which the server is connected to the internet. All the desktop computers are configured with 192.168.0.100 as their gateway. Whenever a desktop computer wants to connect to the internet, it simply sends the data to the NAT server; the server removes the source IP address from the client's packet, replaces it with its own public IP address, and forwards it to the internet. So although all the internal hosts with private IP addresses can connect to the internet, the source address that actually goes out is that of the NAT server's public interface. When replies come back from the internet for the internal hosts, the NAT server hands the data packets over to the appropriate hosts. This is how thousands of computers in a company, all with private IP addresses, can connect to the internet using only a single public IP address. Since packets destined for private IP addresses get discarded by internet routers, any number of organizations can use the same private IP addresses internally, and each will require only a single public IP address. So the division of IP addresses into public and private definitely saves a lot of IP addresses. Private IP addresses also provide a kind of security to companies, since all the internal hosts with private IP addresses are represented by a single public IP address, and only that single public address remains visible to the internet. Although a lot of public IP addresses are saved thanks to private IP addresses, large numbers of IP addresses are still wasted for certain reasons, which I will now explain.
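The source-address rewriting described above can be sketched as a toy translation table. This is a deliberately simplified illustration: real NAT also tracks ports and protocol state, and the public address 203.0.113.10 used below is an assumed documentation-range example, not the address from the original figure.

```python
# Toy NAT: map each internal host to the single public address and back.

PUBLIC_IP = "203.0.113.10"   # assumed example public address of the NAT server

nat_table = {}               # flow id -> internal source IP, so replies can return

def outbound(packet: dict, flow_id: int) -> dict:
    nat_table[flow_id] = packet["src"]           # remember which host sent it
    return {**packet, "src": PUBLIC_IP}          # rewrite the source to the public IP

def inbound(packet: dict, flow_id: int) -> dict:
    return {**packet, "dst": nat_table[flow_id]}  # hand the reply back to the right host

pkt = {"src": "192.168.0.12", "dst": "198.51.100.7"}
print(outbound(pkt, flow_id=1))                                        # leaves with src 203.0.113.10
print(inbound({"src": "198.51.100.7", "dst": PUBLIC_IP}, flow_id=1))   # returns to 192.168.0.12
```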
Just have a close look at the network. You can see several internet routers with two end networks attached. The very important thing you will notice in this exhibit is that every router-to-router link consumes a complete network. Look at the connection between Router-B and Router-C: each router interface uses one address from the same Class-C network. Out of the 254 IP addresses available in that Class-C network, only two are being used between Router-B and Router-C. The remaining 252 IP addresses cannot be used anywhere else, since the same network cannot be assigned on multiple sides of a router. Because of this rule a lot of IP addresses are wasted on router-to-router connections. A lot of IP addresses are also lost at network-1 and network-2: network-1 requires only 20 IP addresses and network-2 requires only 30, and the remaining IP addresses at network-1 and network-2 cannot be used anywhere else, again causing the loss of many addresses. To prevent IP addresses from being lost this way, a new workaround was devised, known as subnetting. Subnetting not only saves IP addresses, it also provides better management of the network. In simple words, subnetting is a process of dividing one large network into multiple smaller sub-networks. Remember that in the network in front of you, many IP addresses were wasted because far fewer addresses were required on the router interfaces than the number available in each Class-C network. We can instead divide one network into multiple smaller networks and assign those smaller networks to the different interfaces of the routers. In the present scenario we are using almost five (5) Class-C networks, which provides almost 254*5 = 1270 IP addresses. And how many addresses are we using? Only 58. So we are using only 58 IP addresses out of a total of 1270. How many IP addresses are being wasted? 1270-58 = 1212 IP addresses, which cannot be assigned anywhere else. The solution? Just divide the networks into smaller ones and save a lot of IP addresses. How those networks can be divided, I will teach you in a moment. For subnetting, please see IP addressing and subnetting Part-II.
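As a preview of what Part-II covers, the following sketch (an added illustration using an assumed 192.168.5.0/24 block, not an address from the original figure) shows how one Class-C-sized network can be carved into /30 subnets so that each router-to-router link consumes only two usable addresses instead of 254.

```python
# Carve one /24 into /30 point-to-point subnets with the standard library.
import ipaddress

block = ipaddress.ip_network("192.168.5.0/24")
links = list(block.subnets(new_prefix=30))      # 64 subnets, 2 usable hosts each

print(len(links))                               # 64 point-to-point subnets
first = links[0]
print(first, [str(h) for h in first.hosts()])   # 192.168.5.0/30 ['192.168.5.1', '192.168.5.2']

# Five full Class-C networks offer 254 * 5 = 1270 addresses; the example
# topology needs only 58, so roughly 1212 would otherwise go unused.
```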
CROSS-REFERENCE TO RELATED APPLICATIONS
This is a continuation-in-part of U.S. patent application Ser. No. 09/304,881, filed 4 May 1999, which is a continuation-in-part of U.S. patent application Ser. No. 08/833,387, filed 4 Apr. 1997, U.S. Pat. No. 5,923,001, which is a continuation-in-part of International Application Number PCT/US95/09094, filed 19 Jul. 1995, which is a continuation of U.S. patent application Ser. No. 08/286,413, filed 5 Aug. 1994, U.S. Pat. No. 5,650,596, all hereby incorporated by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
REFERENCE TO A “MICROFICHE APPENDIX”
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to devices which detect, collect, weigh and count surgical sponges. The present invention also relates to surgical sponges which can be detected non-optically.
2. General Background of the Invention
During surgery, absorbent sponges are used to soak up blood and other body fluids in and around the incision site. Because the risk of a sponge being retained inside a patient is so great, surgical personnel go to great lengths to account for each and every sponge which is used in surgery, and hospitals have developed strict sponge count policies to deal with this issue. Moreover, surgeons and anesthesiologists determine blood loss by visual inspection or by manually weighing soiled sponges, so soiled sponges are usually kept in one area of the operating room. Another area of concern regarding soiled surgical sponges is the risk of transmission of bloodborne diseases such as hepatitis B virus (HBV) and human immunodeficiency virus (HIV); every precaution necessary should be taken to reduce exposure, contamination, and the risk of infection.
Sponge counts are an essential part of operating room procedure. They help ensure patient safety by reducing the chance that a sponge will be retained inside of the patient. Typical sponge count policies include: an initial count at the beginning of a procedure and subsequent counts throughout the procedure when additional sponges are added to the sterile field, before the closure of a deep incision, after the closure of a body cavity, when scrub or circulating personnel are relieved, and before the procedure is completed.
In addition, it is necessary for the anesthesiologist and surgeon to have an accurate measurement of blood loss contained in sponges, so that if excessive blood loss is occurring, blood components can be ordered and administered immediately. This information is provided by weighing soiled sponges and then subtracting the dry weight of the number of sponges weighed from the total.
Moreover, soiled sponges are a source of contamination, thus handling and exposure should be kept to a minimum. Procedures which reduce the transmission of bloodborne pathogens include making sure that soiled sponges are handled with gloves and instruments only and that used-soiled sponges are appropriately contained and confined.
In 1992, the Occupational Safety and Health Administration (OSHA) issued new regulations regarding bloodborne pathogens in U.S. hospitals. Nearly 6 million healthcare workers in the United States who could be “reasonably anticipated” to come in contact with blood and other body fluids are subject to the new regulations. These regulations are intended to reduce worker exposure to hepatitis B virus (HBV), human immunodeficiency virus (HIV), or other bloodborne pathogens. Under the section on Engineering and Work Practice Controls, hospitals are required to eliminate or minimize employee exposure. This includes the implementation of new designs for devices which count sutures and sponges.
For more information about surgical sponge handling and counting, please see U.S. Pat. No. 4,422,548, incorporated herein by reference.
U.S. Pat. No. 3,367,431 discloses a device for automatically counting and weighing surgical sponges. However, the device cannot distinguish between different sponges. Also, the amount of blood contained in soiled sponges must be manually calculated. Further, it does not use removable disposable bags.
U.S. Pat. No. 4,295,537 discloses a sponge-collecting device that keeps count and determines the weight of blood-soaked sponges. However, the device cannot automatically distinguish between different sponges. Also, the device does not automatically count the sponges (the number and dry weight of the sponges must be manually input).
U.S. Pat. No. 4,422,548 discloses a sponge-collecting device that determines the weight of blood-soaked sponges. However, the device cannot automatically distinguish between different types of sponges. It also cannot determine the amount of blood in the sponges.
U.S. Pat. No. 5,009,275 discloses a sponge-collecting device that determines the weight of blood-soaked sponges. However, the device cannot automatically distinguish between different types of sponges, and so it cannot automatically determine the amount of blood loss when sponges of different dry weights are collected in the container.
Radio frequency identification systems are based on two principal components: a passive tag or transponder and a hand-held or stationary reader. In operation, the hand-held or stationary reader emits a low frequency magnetic field, which activates any passive tag or transponder within its range. The passive tag has no power source of its own; it derives the energy needed for operation from the magnetic field generated by the reader. Because the tags have no power source of their own, the only limitation on the operational lifespan of a tag is the durability of its protective encapsulation, usually, but not limited to, plastic or glass. Tags are available in many shapes and sizes, each designed for the unique rigors and requirements of specific applications. RF tags operate by proximity, as opposed to optically like a bar code. As a result they can be read in harsh environments, submerged in liquids, and read spherically from any direction through most materials, including tissue, bone, etc.
Also of potential interest are the following U.S. Pat. Nos. 3,367,431; 4,193,405; 4,498,076; 4,510,489; 4,658,818; 4,922,922; 5,031,642; 5,057,095; 5,103,210; 5,188,126; 5,190,059; 5,300,120; 5,329,944; 5,353,011; 5,357,240; 5,381,137; all patents cited in the file of U.S. patent application Ser. No. 08/286,413.
SUMMARY OF THE INVENTION
The present invention involves the use of radio frequency identification (RF ID) tags on surgical sponges and two related medical devices which will be used to identify and track those sponges during surgery. RF technology was chosen by the present inventors because no other technology available offers the reliability, accuracy and performance demanded by the operating room environment. The first device, a hand-held reader, will be passed over the surgical wound prior to the closing of the wound by the surgeon. The hand-held reader will then identify any sponges which may have been inadvertently left in the wound, thus preventing the retention of sponges inside of the patient. This hand-held reader can be used during all surgical procedures and will eliminate the dangerous and time-consuming task of manually counting and bagging soiled sponges. The second device, a sponge management system including a counting, weighing, and calculating device for automatically counting and weighing surgical sponges and determining the amount of blood contained therein, will be utilized during procedures in which determination of blood contained in sponges is important. These procedures include any procedure involving small children or infants, and heavy blood loss procedures such as cardiovascular, transplant, and obstetrical procedures. During surgery all soiled sponges, regardless of size, will be deposited into the counting, weighing, and calculating device where the device will then determine the amount of blood contained in those sponges and display this amount on a liquid crystal display panel. In addition, the counting, weighing, and calculating device will automatically bag those sponges and give a visible running count of each type of sponge deposited. The hand-held reader will be an attachment to the counting, weighing, and calculating device, used at the time of closure to assure that a sponge is not retained in the patient. The use of RF tagged sponges and the accompanying identification systems discussed will have a tremendous impact on operating rooms worldwide.
RF tags can also be attached to surgical instruments that might accidentally get left in the human body during surgery to allow these surgical instruments to be detected non-optically.
As used herein, “non-optical detection” means detection of an object without visible light or X-rays. The preferred non-optical detection means comprises radio frequency (RF) scanners.
The apparatus of the present invention solves the problems confronted in the art in a simple and straightforward manner. What is provided is a device which automatically counts surgical sponges and automatically determines the amount of blood contained in the sponges, without any input or calculations during the surgery by any person. The apparatus includes means for automatically determining the weight of the sponges when dry, and for deducting that weight from the total weight of the sponges and blood in the apparatus. The soiled sponges will be held inside of the device in a removable disposable bag. Means are also provided to keep a running total of the number of sponges which have entered the apparatus from a predetermined time, and the total amount of blood which has entered the device from a predetermined time, even when a full bag is removed and replaced with an empty bag in order to make room for additional sponges to enter the container.
The means for automatically determining the weight of the sponges when dry includes a non-optical scanner means which can read an indicating means on the sponges even when the indicating means is covered with blood or other body fluids.
The present invention comprises a system for facilitating counting of surgical sponges and determining the approximate amount of body fluids contained therein. It includes a plurality of sponges of varying weights, each sponge having a dry weight before being used to absorb fluids and an indicating means thereon for preferably indicating the type of sponge, the dry weight of the sponge, the dry weight of the sponge including the weight of the indicating means; and a device for counting the surgical sponges and determining the approximate amount of body fluids contained therein. The device comprises a container means for containing the surgical sponges, the container means having an opening above a receptacle means for receiving the surgical sponges, scanner means for detecting when one of the surgical sponges has been deposited into the device, and detecting means for automatically determining the dry weight of the surgical sponges which have been deposited into the device since a predetermined time by detecting the indicating means on the sponges. The device also includes calculating means for automatically determining the approximate amount of body fluid contained in the surgical sponges which have entered the container since a predetermined time by subtracting the dry weight of the sponges from the weight of the sponges including the body fluids. The device further comprises first display means for displaying an indication of the approximate amount of body fluid contained in the surgical sponges which have entered the container since a predetermined time, determining means for automatically determining the number of surgical sponges which have entered the container since a predetermined time, and second display means for displaying the number of surgical sponges which have entered the container since a predetermined time.
The detecting means is capable of distinguishing between multiple types of surgical sponges (and preferably detecting multiple sponges simultaneously and identifying them) even those sponges of different types but similar weights, and the second display means displays the number of each type of sponge which is received.
The first display means indicates, with an accuracy of +/−0.1%, the exact amount of body fluids contained in the sponges which have entered the container since a predetermined time.
The detecting means comprises a non-optical scanner means which can read an indicating means on the sponges even when the indicating means is covered with blood or other body fluids.
The present invention includes apparatus for helping to prevent surgical sponges from being inadvertently left in a patient after surgery comprising a non-optical scanner means, a plurality of surgical sponges, and a plurality of identification tags, wherein each surgical sponge has one of the identification tags securely attached thereto for allowing the sponge to be detected by the non-optical scanner means, and either the non-optical scanner means has means for detecting and identifying multiple identification tags simultaneously, or the tags can be encoded with identifying means to identify the type of sponge to which it is attached, or both. Preferably, the identification tags do not exceed one inch in diameter and 0.20 inches in thickness. The identification tags preferably do not exceed four grams in weight, and more preferably do not exceed three grams in weight. Preferably, the identification tag is a radio frequency identification tag and the non-optical scanner means is a radio frequency reader, the radio frequency reader preferably has a read range of at least 6 inches, more preferably at least 10 inches, and most preferably at least 15 inches, when used with the identification tags attached to the surgical sponges.
The present invention also includes a method of monitoring surgical sponges during and after surgery for helping to prevent surgical sponges from being inadvertently left in a patient after surgery, comprising the following steps:
using in a surgical wound only surgical sponges which each have an identification tag securely attached thereto for allowing the sponge to be detected by a non-optical scanner means;
using a non-optical scanner means to scan the surgical wound before closing the surgical wound, wherein either the non-optical scanner means has means for detecting and identifying multiple identification tags simultaneously, or the tags can be encoded with identifying means to identify the type of sponge to which it is attached, or both. Preferably, the identification tags do not exceed one inch in diameter and 0.20 inches in thickness. The identification tags preferably do not exceed four grams in weight, and more preferably do not exceed three grams in weight. Preferably, the identification tag is a radio frequency identification tag and the non-optical scanner means is a radio frequency reader; the radio frequency reader preferably has a read range of at least 6 inches, more preferably at least 10 inches, and most preferably at least 15 inches, when used with the identification tags attached to the surgical sponges.
The present invention also includes a system for facilitating detection of surgical sponges, counting of surgical sponges and determining the approximate amount of body fluids contained therein, comprising:
a plurality of sponges of varying weights, each sponge having a dry weight before being used to absorb fluids and an indicating means thereon for indicating preferably the type of sponge, the dry weight of the sponge, the dry weight of the sponge including the weight of the indicating means;
a device for counting the surgical sponges and determining the approximate amount of body fluids contained therein, comprising:
- a container means for containing the surgical sponges,
- an opening in the container means above a receptacle means for receiving the surgical sponges;
- scanner means for detecting when one of the surgical sponges has been entered into the device;
- detecting means for automatically determining the dry weight and preferably the type of the surgical sponges which have entered into the device since a predetermined time by detecting the indicating means on the sponges;
- calculating means for automatically determining the approximate amount of body fluid contained in the surgical sponges which have entered the container since a predetermined time by subtracting the dry weight of the sponges from the weight of the sponges including the body fluids;
- first display means for displaying an indication of the approximate amount of body fluid contained in the surgical sponges which have entered the container since a predetermined time;
- determining means for automatically determining the number of surgical sponges which have entered the container since a predetermined time; and
- second display means for displaying the number of surgical sponges which have entered the container since a predetermined time; and
a non-optical scanning means for detecting surgical sponges inadvertently left in a patient during surgery. Preferably, different types of surgical sponges are received by the container, the detecting means is capable of distinguishing between multiple types of surgical sponges, even those sponges of different types but similar weights, and the second display means displays the number of each type of sponge which is received. Preferably, the first display means indicates, with an accuracy of +/−0.1%, the exact amount of body fluids contained in the sponges which have entered the container since a predetermined time. Preferably, the detecting means comprises a non-optical scanner means. Preferably, the non-optical scanner means can read an indicating means on the sponges even when the indicating means is covered with blood or other body fluids.
Preferably, the non-optical scanner means can simultaneously read indicating means on all sponges within its read range and properly identify each sponge, and display the total number of sponges of each type.
It is an object of the present invention to provide a system including surgical sponges which can be detected non-optically and a device which will detect these surgical sponges, regardless of size and location in a patient's body, during surgery with a high degree of accuracy.
It is another object of the present invention to provide a method of detecting surgical sponges non-optically, regardless of size and location in a patient's body, during surgery with a high degree of accuracy.
It is also an object of the present invention to provide a device which will detect surgical sponges, regardless of size and location in a patient's body, during surgery with a high degree of accuracy.
It is an object of the present invention to provide a device which will automatically count surgical sponges, regardless of size, during surgery with a high degree of accuracy.
It is a further object of the present invention to provide, in a device of this type, in addition to means for giving a running count of sponges, means for simultaneously weighing sponges and instantly and accurately calculating the amount of blood contained in those sponges.
Another object of the present invention is to provide a device which collects soiled surgical sponges and facilitates their disposal with minimal handling.
BRIEF DESCRIPTION OF THE DRAWINGS
For a further understanding of the nature, objects, and advantages of the present invention, reference should be had to the following detailed description, read in conjunction with the following drawings, wherein like reference numerals denote like elements and wherein:
FIG. 1 is a cutaway, side view of a first embodiment of the apparatus of the present invention;
FIG. 2 is a perspective view of the first embodiment of the apparatus of the present invention;
FIG. 3 is a detail of the control panel and display of the first embodiment of the apparatus of the present invention;
FIG. 4 is a rear view of the first embodiment of the apparatus of the present invention;
FIG. 5 is a block diagram indicating the input and output of the CPU of the first embodiment of the apparatus of the present invention;
FIG. 6 is a top view of the first embodiment of the apparatus of the present invention;
FIG. 7 is a perspective view of the preferred embodiment of the present invention; and
FIG. 8 is a top view of the preferred embodiment of the method of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The following is a list of suitable parts and materials for the various elements of the preferred embodiment of the present invention.
- 1 identification tag (indicating means) on sponge 2
- 2 sponge
- 3 opening in apparatus 30 for sponges 2
- 4 control unit (CPU)
- 5 display panel
- 6 reader (antenna)
- 7 reader electronics
- 8 disposable bag
- 9 door
- 10 weighing scale
- 11 rechargeable battery
- 12 shelf for extra bags 8
- 13 wheels
- 14 retractable electrical cord
- 15 wiring interconnecting the reader electronics 7 and the reader 6
- 16 wiring interconnecting the reader electronics 7 and the control unit 4
- 17 wiring interconnecting the reader electronics 7 and the battery 11
- 18 wiring interconnecting the control unit 4 and the scale 10
- 19 wiring interconnecting the battery 11 and the scale 10
- 20 wiring interconnecting the battery 11 and the control unit 4
- 21 bag container
- 22 handle for door 9
- 23 radio waves
- 30 automatic surgical sponge counter and blood loss determination apparatus
- 31 sloped sides of receptacle 32
- 32 receptacle
- 33 label
- 34 receptacle lid
- 35 reed switch (preferably magnetic)
- 36 electronically controlled latching mechanism
- 40 hand-held RF reader (LanLink Corporation, Advanced Long Range Reader or Trovan Model LID 500, for example)
- 41 red indicator light on reader 40
- 42 green indicator light on reader 40
- 43 LCD (liquid crystal display) readout on reader 40
- 44 antenna of reader 40
- 45 power trigger of reader 40
- 51 sponge type and quantity display screen
- 52 blood-loss display screen
- 53 battery charge indicator.
- 54 on-off switch
- 55 alarm light
- 56 hold button
- 80 surgical site
- 81 wound
- 82 patient
- 85 surgeon
The first embodiment of the present invention, automatic surgical sponge counter and blood loss determination apparatus 30, is shown in FIGS. 1 through 5.
The device (See FIG. 1) takes the place of a kickbucket which is now in use in operating rooms around the world. It is mobile (mounted on wheels 13, powered by rechargeable battery 11), compact in size (30″×18″×18″, for example) and easy to operate. During an operation all surgical sponges 2 are deposited into the apparatus 30 by dropping them into a receptacle 32 having sloped sides 31 leading to an opening 3 at the top of apparatus 30. Receptacle 32 preferably has dimensions of 15″ by 16″, more preferably has dimensions of 16″ by 18″, and most preferably has dimensions of 18″ by 18″. The top of receptacle 32 is preferably about 20-40″ above the floor, more preferably about 25-35″ above the floor, and most preferably about 30″ above the floor. Opening 3 preferably has dimensions of from 4″ by 5½″ to 6½″ by 8½″, and more preferably has dimensions of 5½″ by 7″.
To increase the chance that a sponge tossed at the apparatus of the present invention will land in receptacle 32, receptacle 32 is preferably rather large. To reduce evaporation from bag 8, opening 3 is preferably relatively small. Preferably, the ratio of the size of opening 3 to the size of receptacle 32 is rather small.
When sponge 2 passes through the opening 3 and falls into bag 8, a reader 6 interrogates the radio frequency identification tag 1 attached to sponge 2 and determines from the unique identification code on the tag what type of a sponge (Lap, Mini-Lap, Raytec, etc.) has entered the container. Control unit 4 is preferably programmed with all of the unique codes associated with different types, sizes and brands of surgical sponges. In addition, Control unit 4 is preferably programmed with the corresponding dry weight for each unique code. The control unit 4 receives data from the reader 6 along with data from the scale 10 and then processes this information. The final output is displayed on the display panel 5: a readout of the number of sponges contained in the unit, broken down by type, is displayed on screen 51; the amount of blood and other bodily fluids contained in the sponges is displayed (preferably in cubic centimeters) on screen 52. This amount will be calculated by the control unit 4 using a formula based on the weight of the sponges 2 soiled, minus the weight of the sponges 2 dry (different size sponges 2 have different dry weights; the dry weights of different sponges are preferably programmed into the software so that nurses will no longer have to do this manually).
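The calculation described above can be illustrated with a short, hypothetical sketch; it is not the actual firmware of control unit 4, and the dry weights, tag codes, and one-gram-per-cubic-centimeter conversion used below are assumptions made only for illustration.

```python
# Hypothetical sketch of the blood-loss calculation: total scale weight minus
# the summed dry weights of the identified sponges gives the fluid weight,
# which is treated here as roughly 1 cc per gram (an assumption).

# Assumed lookup table keyed by the code read from tag 1 (values are examples).
DRY_WEIGHT_GRAMS = {"LAP": 25.0, "MINI_LAP": 12.0, "RAYTEC": 5.0}

def estimated_blood_cc(scale_weight_grams: float, sponge_codes: list[str]) -> float:
    dry_total = sum(DRY_WEIGHT_GRAMS[code] for code in sponge_codes)
    return max(scale_weight_grams - dry_total, 0.0)

codes_seen = ["LAP", "LAP", "RAYTEC"]           # as reported by reader 6
print(estimated_blood_cc(180.0, codes_seen))    # 125.0 cc with the example weights
```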
The battery charge is indicated on battery charge indicator 53, with the left side being red and lighting up if the charge is low, and with the right side being green and lighting up if the charge is sufficient. An on-off switch 54 lights up with a green light when the power is on.
Label 33 displays the symbols and explanations for a number of alarm conditions which cause alarm light 55 to light up. When one of the conditions displayed on label 33 occurs, the appropriate symbol flashes in screen 51. The conditions include a low battery charge condition, a jammed door, a full bag, open receptacle lid, close receptacle lid, open door, close door and the presence of foreign objects (needles, hypos, cottonoids, bovie tips, etc.) inside of the device.
Sponge 2 is deposited in a disposable bag 8 which is suspended in a bag container 21 mounted to a scale 10. The scale 10 weighs the contents of bag 8 and sends this data to the control unit 4 as mentioned above to be processed. Apparatus 30 can be programmed to alarm once a predetermined number of sponges 2 has been reached or when the bag 8 is full. To change bag 8, the operator of the apparatus should depress hold button 56. Once the hold button is depressed, the display panel will prompt the operator to close receptacle lid 34 and simultaneously unlock the electronic latching mechanism 36. The disposable bag 8 can then be removed through a rear door 9 and replaced with a new bag 8. A compartment 12 to store extra bags is provided. To resume operation, door 9 must be closed. Once hold button 56 is released, the electronic latching mechanism 36 is locked and the operator will be prompted to open receptacle lid 34. The hold procedure will prevent sponges from being deposited accidentally when there is no bag in the device, and the locked door will prevent personnel from opening the device when it is in operation. A reed switch 35 indicates to the control unit 4 whether receptacle lid 34 is open or shut. The memory of control unit 4 will continue to give a running count of sponges 2 as well as the estimated blood loss amount for the duration of the entire surgical procedure. Once the operation is complete and all counts have been verified, the device 30 can be cleaned very easily and reset, ready for the next case. Because of the small size and mobility of apparatus 30, it can be moved from room to room effortlessly.
Tags 1 can preferably endure temperatures of up to about 400 degrees Fahrenheit (about 200 degrees Centigrade) to allow them to be autoclaved.
At the end of the day the device 30 can be plugged with plug 14 into an electrical outlet and recharged for the next day's use. Additional features can include: a gauge which indicates battery status by displaying the remaining life of the battery in hours and a low battery alert alarm. The battery 11 is rechargeable during operation of the device 30.
While other technologies may be available, radio frequency is believed to be the optimal technology. Radio frequency tags are preferred to other identifying means because they do not depend upon light for detection; they can be detected even when completely covered with blood. Other identifying means which can be attached to surgical sponges and which do not depend upon light for detection could also be used.
The preferred tag to use with the present invention is the Sokymat, SA, PICCOLO-TAG. This RFID tag has an operating frequency of 125 kHz, 64 bits of memory, a diameter of 10 mm and a thickness of 2 mm. The preferred reader 6 is the LAN-Link Corporation Advanced Long Range Reader, with a customized antenna with dimensions of nine inches by nine inches. The tag 1 is preferably attached to a surgical sponge by being sewn onto the sponge along with, and where, the radio opaque marker is currently attached. The information which tag 1 contains is preferably simply a number, a collection of numbers, or a combination of numbers and letters. The tags preferably store at least 32 binary bits of data, and more preferably at least 64 binary bits of data. Current commercially available tags can store up to 1,000 binary bits of data. These tags can be programmed so that certain bits in the data string are dedicated. For example, all tags which will be attached to Laparotomy sponges of a particular size and a particular dry weight, and which are manufactured by a particular company, will have the same code in the dedicated portion of the data string. Control unit 4 will store all of the unique codes associated with different types, sizes and brands of surgical sponges. In addition, control unit 4 will store a corresponding dry weight for each unique code.
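The "dedicated bits" idea can be illustrated with a hypothetical decoding sketch. The bit layout, field widths, and type table below are assumptions for illustration only; they are not the encoding actually used by the Sokymat tag or the LAN-Link reader.

```python
# Hypothetical decoding of a 64-bit tag code with dedicated fields.

SPONGE_TYPES = {0x1: "Laparotomy", 0x2: "Mini-Lap", 0x3: "Raytec"}  # assumed codes

def decode_tag(code_64_bits: int) -> dict:
    sponge_type   = (code_64_bits >> 60) & 0xF         # assumed: top 4 bits = sponge type
    dry_weight_dg = (code_64_bits >> 52) & 0xFF        # assumed: next 8 bits = dry weight in decigrams
    serial_number = code_64_bits & 0xFFFFFFFFFFFFF     # remaining 52 bits = unique serial
    return {
        "type": SPONGE_TYPES.get(sponge_type, "Unknown"),
        "dry_weight_g": dry_weight_dg / 10.0,
        "serial": serial_number,
    }

example = (0x1 << 60) | (250 << 52) | 123456           # a made-up Laparotomy tag, 25.0 g
print(decode_tag(example))
```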
Advantages of the Device of the Present Invention
The sponge count is an essential part of operating room procedure. It not only assures patient safety, but it also provides the medical team with an ongoing estimation of blood loss. Current methods for handling surgical sponges are antiquated and inadequate in today's modern and potentially dangerous operating room environment. Even in the newest hospitals, sponges are still counted and weighed manually. These procedures are time-consuming, prone to human error and unnecessarily expose medical staff to blood contact. The present invention addresses these shortcomings by integrating all sponge-related functions into one fully automated unit. The present invention is different from prior art on the subject of sponge management in that it has the ability to distinguish between different types of sponges, maintain a running count of each type of sponge being used in a given procedure, and automatically calculate the amount of blood contained in those sponges, instantly. These improvements will dramatically affect sponge management in the areas of safety, sponge counts and blood measurement.
Safety: The present invention will have its biggest impact in the area of increased safety for medical staff. Exposure to bloodborne pathogens will be significantly reduced due to less handling of soiled sponges and the closed environment of the device. Currently, soiled sponges are handled several times by different members of the medical team. They are first handled by scrub personnel. Next they are counted by the circulating nurse. They are then bagged, weighed when necessary, and if a count is incorrect, they are removed from the bags and recounted. Finally, an orderly has to clean the area where the sponges are handled. With the present invention, soiled sponges will only be handled once by the staff member who deposits the sponge into the device. The device will then do the counting, estimate blood loss amount and store the sponges in a disposable bag. This will be done in a closed environment as opposed to an open bucket, thereby reducing airborne contamination and also reducing the time spent cleaning areas where sponges are counted. Because the disposable bag is enclosed inside of the device, fewer bloodborne pathogens can escape due to evaporation.
Sponge Counts: The present invention will increase the accuracy of sponge counts by eliminating human error and providing a running count of sponges already used. It will give a visible readout of all different types of sponges used during a given procedure. This is important because it allows the staff to constantly check counts throughout the procedure. An increase in accuracy reduces the chances that a sponge will be left in a patient. This increases safety for the patient and reduces the time that is spent recounting sponges, thus reducing total count time. Also, because the device contains a disposable bag, staff will no longer have to bag sponges manually, thus saving time. The technology that is preferred to be used to do the scanning (radio frequency) is extremely accurate.
Estimated Blood Loss Measurement: The present invention has the ability to weigh soiled sponges, automatically compute blood loss, and give a constant visible readout of that amount. This is an important feature for several reasons. A constant readout is valuable to anesthesiologists and surgeons who use this information as one component in estimating total blood loss for a given procedure. Instant information is helpful when ordering blood components and reduces guessing on blood loss amount. In the case of small children or infants this information is critical. Currently, surgeons and anesthesiologists have to estimate the amount of blood loss by sight and the manual weighing of sponges, which is done by the circulating nurse. Besides the time saved in weighing and doing a manual calculation of blood loss, the device reduces human error in the calculation. This increases safety for the patient. Also, a reduction in time spent handling soiled sponges reduces staff exposure to blood.
The apparatus of the present invention counts surgical sponges (Laps, Raytecs, etc.) with a high degree of accuracy. It constantly calculates the amount of blood and other bodily fluids in the sponges. It includes a rechargeable battery 11 and can include a visible battery gauge which displays the remaining life of the battery in hours. It has an alarm which goes off when the charge in the battery 11 drops below a predetermined amount. The battery 11 is rechargeable during operation of device 30.
The container 30 of the present invention is compact in size, and can have exemplary dimensions of one foot by two feet, which is bigger than a standard kick bucket.
Container 30 is mobile and durable. It can distinguish between different types of sponges (Laps, Raytec, Mini-Laps, etc.). It includes disposable bags. It is simple and easy to operate, and has the operating instructions on its face. Disposable bags 8 have a capacity of at least forty sponges when properly installed upon bag container 21 of device 30.
Container 30 can interrupt the count and maintain the sponge count and blood loss amount. An alarm sounds when it is time to change bag 8 (that is, when a predetermined number of sponges have entered container 30 since the last change of the bag). An alarm could also sound when a foreign object is present in the container 30.
The device 30 of the present invention can read tags 1 even when the tag 1 is hidden or covered with blood. Device 30 is easily and quickly cleaned. It is water-resistant and does not have to be sterile.
The reader 6 can preferably detect up to fifty tags 1 at one time. It preferably can detect foreign objects, such as needles, hypodermic needles, cottonoids, bovie tips, etc. The count can be interrupted to allow the inspection of foreign matter. The reader 6 could be in either location shown in FIG. 1, or in both locations if necessary to provide 100% accuracy in detection.
The ability to distinguish between different types of sponges helps to accurately estimate the amount of blood lost during surgery. For example, Raytec sponges weigh, when dry, about five grams. Lap sponges weigh, when dry, about 20 grams. When soaked with blood and/or other bodily fluids, Raytec sponges can weigh up to about 50 grams and Lap sponges can weigh up to about 120 grams. Suppose, for example, that forty sponges are used during an operation, and half are Raytec sponges and the other half are Lap sponges. The total weight of blood and sponges is about 1,500 grams, with 500 grams representing the dry weight of the sponges and 1,000 grams representing the weight of the blood and other bodily fluids (1,000 cc's of fluid). If all of the sponges were treated as being Lap sponges, then the calculation would improperly treat 300 grams of blood as dry weight of the sponges. Thus, the amount of estimated fluid lost would be improperly reduced by 300 grams (300 cc's of blood). The weight of tags 1 is not being considered, since tags 1 weigh the same whether attached to a Raytec sponge or to a Mini-lap sponge.
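The arithmetic in that example is simple enough to write down directly. The sketch below is ours, not the patent's (the function and variable names are invented, and it follows the text's convention of treating one gram of fluid as one cc); it reproduces both the correct calculation and the error made when every sponge is treated as a Lap:

```cpp
#include <cstdio>
#include <map>
#include <string>

// Estimated fluid (cc) = scale weight of bag contents (g)
//                        minus the summed dry weights of the counted sponges.
double estimatedFluidCc(double measuredGrams,
                        const std::map<std::string, int>& counts,
                        const std::map<std::string, double>& dryWeights) {
    double dryTotal = 0.0;
    for (const auto& [type, n] : counts)
        dryTotal += n * dryWeights.at(type);
    return measuredGrams - dryTotal;
}

int main() {
    const std::map<std::string, double> dry = {{"Raytec", 5.0}, {"Lap", 20.0}};

    // Worked example from the text: 20 Raytecs + 20 Laps, 1,500 g on the scale.
    std::map<std::string, int> correct = {{"Raytec", 20}, {"Lap", 20}};
    std::printf("Types distinguished: %.0f cc\n",
                estimatedFluidCc(1500.0, correct, dry));   // 1000 cc

    // Treating all 40 sponges as Laps overstates the dry weight by 300 g,
    // so the fluid estimate is improperly reduced to 700 cc.
    std::map<std::string, int> allLaps = {{"Raytec", 0}, {"Lap", 40}};
    std::printf("All counted as Laps: %.0f cc\n",
                estimatedFluidCc(1500.0, allLaps, dry));   // 700 cc
    return 0;
}
```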
The preferred embodiment of the present invention is a hand-held surgical sponge detection system shown in FIGS. 7 and 8. A hand-held RF reader 40 will be used by surgeons 85 to detect the presence of surgical sponges 2 in the body cavity at the time of closure during a surgical procedure (see FIG. 8). The hand-held RF reader 40 will be passed over the surgical site 80 prior to the closing of the cavity by the surgeon 85. It will then identify any sponges 2 which may have been inadvertently left in the wound, thus preventing the retention of sponges 2 inside of the patient. This hand-held RF reader 40 can be used during all surgical procedures and will eliminate the dangerous and time-consuming task of manually counting and bagging soiled sponges 2. This device is small in size (preferably smaller than 10 inches in length × 7 inches in width × 10 inches in height, excluding the antenna 44), lightweight (less than three pounds) and battery operated. It can be used alone or as part of the “Automatic Surgical Sponge Counter and Blood Loss Determination System” described in co-pending International Application Number PCT/US95/09094 and U.S. patent application Ser. No. 08/286,413. The length L of antenna 44 is preferably one to 28 inches, more preferably five to 25 inches, and most preferably ten to 14 inches. Antenna 44 can be, for example, 14 inches long.
When a surgeon 85 is ready to begin closure of the body cavity, the hand-held RF reader 40 will be passed over the surgical site 80. A red light 41 on the hand-held RF reader 40 indicates the presence of a sponge 2 in the wound 81 and a green light 42 indicates that no sponges 2 are in the wound 81. If a sponge 2 is detected, an optional LCD readout 43 on the display can indicate what type of sponge 2 is in the cavity (Laparotomy, Mini-Laparotomy, Raytec, etc.). Before the device 40 is handed to the surgeon, it will be placed in a sterile plastic bag (not shown) to prevent blood from getting on the device 40. After the hand-held RF reader 40 is used, it will be removed from its protective bag, cleaned and stored until its next use. If necessary, it may be possible to sterilize the device 40. The hand-held RF reader 40 can be mounted on the wall in the operating room for easy accessibility or, if the room has an automatic surgical sponge counter and blood loss determination system 30, it can be mounted on this device 30. A trigger 45 is used to activate the reader 40.
There are several commercially available hand-held readers on the market today which could be used with slight or no modifications. These modifications could include a redesign of the handle to adjust for the ergonomic demands of the operating room and, if necessary, a redesign of the reader antenna to increase the read range of the reader. An example of a commercially available hand-held reader which could be used is the Trovan® Model LID 500 hand-held reader, which is manufactured by AEG/Telefunken. The invention disclosed herein can be demonstrated now by using animal carcasses, veterinary surgery or by having a person lie on top of an RF tag 1. The inventors have successfully demonstrated a read range of up to 12 inches through tissue using a LAN-Link Corporation Advanced Long Range Reader, an antenna with dimensions of nine inches by nine inches and a variety of commercially available, read-only, 125 kHz tags manufactured by Sokymat, SA of Switzerland and LAN-Link Corporation of St. Louis, Mo.
The hand-held RF reader 40 will totally eliminate the chance of a surgical sponge 2 being retained inside of a patient during surgery. As a result, the labor intensive, dangerous, and error-ridden methods currently being used in operating rooms worldwide to account for soiled surgical sponges will also be eliminated. By having the ability to automatically identify these sponges 2 at any time during surgery, especially at the time of closure, an increased level of safety for both patients and staff will be realized, a drastic increase in the productivity of nursing staff will occur, procedures will be streamlined, and liability for hospitals, surgeons and nurses will be reduced. Sponges are the most time consuming and dangerous foreign bodies to keep track of during surgery as well as the item most often retained. The hand held RF reader 40 is particularly well suited for trauma cases, thoracic and abdominal surgery.
The sponge management system (automatic surgical sponge counter and blood loss determination apparatus) 30 of the present invention is a fully automated medical device which will manage all sponge-related functions in an operating room. This small, mobile unit 30 will handle the counting of all sponges 2 used during surgery, regardless of the size of those sponges 2. It will also bag, weigh and automatically compute the amount of blood contained in those sponges 2, instantly. A liquid crystal display screen will display a readout in CC's of estimated blood loss amount contained in the unit, as well as a running count of all types of sponges 2 deposited. The hand-held RF reader 40 will be mounted on device 30, to be used at the time of closure to assure that no sponges 2 have been retained in the body cavity. Device 30 is particularly well suited for procedures in which above-average blood loss occurs such as cardiac, transplants, obstetrical, etc. and procedures involving small children or infants where blood loss monitoring is critical.
In the United States and many other industrialized nations worldwide, hospitals are facing tremendous pressure from both the public and private sectors to reduce costs while at the same time delivering high-quality patient care. Hospital administrators must begin looking at innovative ways to wring out excessive costs through the use of automation and job redesign. Both of the devices discussed will give operating room managers the opportunity to significantly reduce costs by automating, streamlining and eliminating many of the dangerous and time-consuming tasks currently involved in sponge management during surgical procedures. Significant productivity gains can be expected as the implementation of RF technology reduces labor time and allows for a more efficient utilization of staff and thus a reduction in payroll costs. Safety for both patients and staff will be significantly increased by eliminating the manual counting, bagging and weighing of soiled surgical sponges 2. This dangerous, labor-intensive task will be replaced by a hand-held RF reader 40 that will totally eliminate any chance of a sponge 2 being retained in the patient. Sponges left in patients are one of the leading causes of malpractice lawsuits and insurance claims following surgery. Blood exposure for medical staff will be drastically reduced by eliminating the handling of soiled sponges.
The Hand-held RF reader 40 and device 30 will improve productivity and help bring down labor costs. Labor costs account for 32% of the typical surgery department's budget. Current methods of sponge management rely on the manual counting, bagging and weighing of soiled surgical sponges. This task is almost always performed by a Registered Nurse because of patient safety and liability issues involved. In medium to heavy blood loss cases, a significant amount of labor time is committed to this task. For a two to three hour procedure such as a prostatectomy, twenty to thirty minutes of nurse labor is required to manually account for the fifty to sixty sponges utilized during the procedure. In larger cases such as cardiac, transplants, vascular, abdominal, trauma, and obstetrical, for example, the amount of labor time is even greater. The Hand-held RF reader 40 will totally eliminate any counting of sponges regardless of procedure. Nurses will now have more time to chart and do paperwork, prepare medications, order blood components, prepare for the next case and be more attentive to the needs of the surgical team and patient. Many surgery departments are currently trying to reduce the ratio of expensive Registered Nurses to inexpensive unlicensed personnel on their staff. The hand-held RF reader 40 and device 30 will help hospitals facilitate this process by utilizing their RN staff more efficiently by having RN circulators supervise less expensive unlicensed personnel in several rooms, simultaneously.
The hand-held RF reader 40 and device 30 will increase safety for both patients and medical staff. The hand-held RF reader 40 will totally eliminate the chance of a sponge 2 being retained in the patient, and device 30 will improve the accuracy and availability of blood loss estimation in cases in which blood loss is significant or vital due to the size and age of the patient. These devices will also eliminate the need for staff to manually account for soiled sponges, a task which involves excessive blood exposure and ergonomic hazards such as back injuries.
The main reason for the elaborate count procedures currently used in modern operating rooms today is to prevent the retention of foreign objects such as sponges, instruments, sutures, bovie tips, etc. in the body cavity. Sponges in particular can cause severe infections and injury if left in the patient. If a sponge is accidentally retained, then all relevant members of the team can be held responsible, either individually or jointly. This includes the surgeon, the circulating nurse and the hospital. Evidence of a retained sponge being left in a patient after closure is considered proof of negligence on the part of the medical team. From a liability standpoint, when a retained sponge case is brought to court, the question is not, “Who is responsible?” but “How much is the injury worth?” Retained sponge cases usually are settled before court proceedings unless the plaintiff asks for unrealistic compensation.
Current methods for estimating blood loss in surgical sponges are inadequate because of excessive reliance on visual estimation and manual weighing of sponges. Currently, if a surgeon or anesthesiologist needs to know how much blood is contained in the sponges, he must estimate the amount by visual inspection of sponge bags. Although the individual physician may know that he or she is in the safe zone, without actually weighing the sponges, they cannot know the exact amount. In many instances, blood contained in the sponges is the only exact amount of blood loss that is unknown by physicians during surgery. In certain procedures and in the case of infants and small children, it is vital to know this amount. The surgeon may request that the circulating nurse manually weigh the sponges and calculate the amount of fluid contained in those sponges. When sponges are weighed, the circulating nurse must individually weigh each sponge before bagging and keep a running total throughout the procedure. This involves several manual calculations. This is very time consuming, prone to human error and involves excessive handling of bloody sponges. Device 30 will improve blood loss estimation techniques just discussed. It will have the ability to weigh sponges and determine blood loss contained in those sponges, instantly. This amount will be displayed on a display panel for all staff to see. This is important for several reasons. Human error in the calculation is reduced by having the device perform the calculation instead of a busy nurse. All relevant data needed for the calculation is contained on the tag and preprogrammed software in the device. The unit will use this information along with data from an internal scale to accurately determine blood loss. A constant, visible readout of this amount may increase response time when ordering blood components. In addition, device 30 will reduce guessing by surgeons and anesthesiologists on blood loss amount contained in the sponges 2. All of these improvements will increase patient safety.
Safety for medical staff will also be improved by using the hand-held RF reader 40 and device 30. Current methods are unsafe for a variety of reasons. Exposure to blood is unacceptably high and back injuries are common. With the prevalence of Hepatitis B Virus (HBV), Human Immunodeficiency Virus (HIV), and other dangerous pathogens in today's society, blood exposure is one of the most pressing issues in the operating room today. Currently, soiled sponges are handled several times by different members of the medical team. They are first handled by scrub personnel. Next they are manually counted by the circulating nurse. They are then bagged, weighed when necessary, and if a count is incorrect, they are removed from the bags and recounted. Finally, an orderly has to clean the area where the sponges are handled. If it is a long procedure, shift changes or relief breaks can expose more personnel. With the hand-held RF reader 40 and device 30, blood exposure for the circulating nurse will be dramatically reduced. The circulator will no longer have to touch bloody sponges. Instead of several staff members handling sponges, the number is reduced to one. Regardless of which device the operating room is using, the scrub person will be the only staff member who will come in contact with soiled sponges. In rooms where both the hand-held RF reader 40 and device 30 are being used together, the scrub will deposit sponges 2 directly into device 30. The unit will then count, bag, weigh and calculate blood loss. Ninety percent of blood exposure that a circulating nurse currently encounters on a daily basis comes from handling sponges. The number of staff whose blood exposure will be reduced is amplified when the people involved in shift changes and relief crews are included.

The standard kickbucket, into which the used sponges are deposited now, is basically a stainless steel bucket with wheels. As the blood in the sponges evaporates, airborne contamination can occur. This is neither sanitary nor safe. Device 30 will store the sponges 2 inside the device in a removable disposable bag. This is a closed environment as opposed to the open environment of the kickbucket.

In addition, nurses are constantly bending over to retrieve sponges from this bucket. This is not ergonomically sound and leads to numerous back injuries. Nurses rank fifth among occupations receiving workers' compensation claims for back injuries. These back injuries average $3,000 to $4,000 per reported injury. If a procedure uses fifty sponges, the circulating nurse will have bent over anywhere from five to fifty times in order to retrieve the sponges. In large cases, sometimes the nurse will get on her hands and knees and lay out soiled sponges on the floor to get an accurate count. Device 30 will eliminate the need to bend over in order to retrieve sponges out of kickbuckets and thus will reduce the number of back injuries in the operating room. Kickbuckets will no longer be necessary in operating rooms. This should reduce the number of personnel who injure themselves by tripping over them. In conclusion, device 30 and hand-held RF reader 40 will provide a safer operating room environment for medical staff.
Repeat surgeries to extract retained sponges will be eliminated, and all associated surgery costs will be as well. X-ray costs will be reduced as X-rays will no longer be needed to determine whether a sponge has been retained. Typically it costs around sixty dollars for one of these operating room X-rays to be taken. This does not take into account the fifteen to thirty minutes of valuable room time which is needed, the protective measures, such as lead aprons, that the staff must take, and the radiation to which the patient is exposed.
The following are advantages that key personnel and hospitals who utilize the present invention will realize.
For nurses, the invention: reduces or eliminates count time; can reduce liability with respect to retained sponges; reduces exposure to blood; reduces risk of infectious disease; increases accuracy of count; increases patient contact; increases attentiveness to procedure; increases attentiveness to surgeon's needs; increases attentiveness to anesthesiologist's needs; increases attentiveness to surgical tech's needs; and increases productivity by freeing the nurse for other duties.
The surgeons and anesthesiologists benefit because the present invention: increases accuracy of blood loss amount; can reduce liability with respect to retained sponges; increases response time on checking and ordering blood components; and reduces guessing on blood loss amount.
The hospital benefits from the present invention because: it helps to provide a safer environment for operating room employees due to less exposure to bloody sponges; it increases accuracy of sponge counts; when the hand-held reader is used, it can eliminate the need to count sponges; it causes a reduction in or elimination of repeat surgeries to extract sponges left in wounds; it causes a reduction in costs and risks associated with repeat surgeries; it causes an increase in productivity of the circulating nurse; and it causes an increase in the quality of patient care due to a more attentive O.R. nurse, less chance of a repeat surgery due to a sponge left in the wound, and reduced guessing on blood loss by anesthesiologists.
While it is preferred to use radio frequency tags and an associated detector, other means for distinguishing one type of sponge from another could be used, such as an electric eye, metal indicators, or color indicators. However, tags which can be detected by non-optical detecting means are preferred, because then one does not need to be concerned about the location of the tag and whether it is clean or covered with blood.
The RF tags 1 used for the present invention are preferably inexpensive, small, durable, extremely accurate and reliable with a read range of at least ten inches. The tags 1 will be fastened to the sponges 2 when the sponges 2 are being manufactured.
The tags 1 should be small so that they are unobtrusive to the surgeon and easily attached at the factory. Several manufacturers have suitable tags which have diameters of less than a half inch and weigh less than 4 grams. Three possible means of attachment are sewing the tag into the sponge, gluing and pressing it between layers of material, and riveting the tag onto the sponge; a combination of these methods could also be used. Any method or methods utilized must be extremely secure to avoid a tag 1 being left in a patient. The tags 1 will be attached to all types and sizes of surgical sponges used during surgery. This includes Laparotomy, Mini-Laparotomy, Raytec, etc. So that the sponges 2 are compatible with the device 30, each different type of sponge 2 will have a unique code which identifies the size and type of sponge 2 being tagged (a Laparotomy sponge, for example).
The tags 1 must be durable to withstand the unique rigors of the operating room environment. Currently available tags can withstand all sterilization processes used to sterilize sponges. Examples include: gamma radiation, gas concentration, vacuum, pressure and temperature. They can withstand temperatures up to 400° F. without affecting internal components.
RF tag technology is extremely accurate and reliable. As long as the tag is within the appropriate read range of the reader, a proper scan will occur. An appropriate read range for this application for most patients is eight to fifteen inches. Several manufacturers currently have appropriate tags and readers which meet this criterion.
The preferred frequency is between 100 kHz and 150 kHz or between 10 MHz and 20 MHz. These frequencies have been proven to operate effectively through water and tissue.
One tag which could advantageously be used as tag 1 is the Sokymat, SA, PICCOLO-TAG. This RFID tag has an operating frequency of 125 kHz, 64 bits of memory, a diameter of 10 mm and a thickness of 2 mm. Sokymat also has several other models of 125 kHz tags of various encapsulations (polyester, glass) and sizes which are appropriate for this application. The inventors have tested these Sokymat, SA RFID tags using the LAN-Link Corporation ALR reader and a nine inch by nine inch customized antenna. A read range of up to twelve inches was obtained through human tissue. Other companies which manufacture appropriate tags are Trovan® Electronic Identification Systems and Texas Instruments-TIRIS.
Preferably, reader 6 is a reader which can detect, distinguish among and identify multiple sponges simultaneously, such as, or similar to, commercially available readers from Samsys Technologies of Ontario, Canada and/or LanLink Corporation of St. Louis, Mo. In this manner, even if two or more sponges are deposited into the receptacle at the same time, they will be properly detected and identified.
Preferably, reader 40 is a reader which can detect, distinguish among and identify multiple sponges simultaneously, such as, or similar to, commercially available readers from Samsys Technologies of Ontario, Canada and/or LanLink Corporation of St. Louis, Mo. In this manner, even if two or more sponges are present in a patient, they will be properly detected, identified and removed.
With a reader having a large enough read range and the ability to read multiple tags simultaneously, it would be possible to put a scanner on the bottom of the container and/or adjacent to the container and constantly read all tags in the bag. Using an “anti-collision” or “anti-clash” protocol, each tag transmits its data and then waits a period of time before repeating its message. Statistics dictate that each tag eventually transmits when no other tags are transmitting, and its data is then read.
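A toy simulation of that anti-collision idea is easy to write down; the one below is entirely illustrative (the transmit probability and slot model are not taken from any real tag protocol) and just shows that with random backoff every one of, say, fifty tags is eventually read in a clash-free slot:

```cpp
#include <cstdio>
#include <random>
#include <set>
#include <vector>

int main() {
    const int    kTags = 50;      // the reader is expected to handle ~50 tags
    const double kSend = 0.05;    // chance a tag transmits in any given slot

    std::mt19937 rng(12345);
    std::bernoulli_distribution sends(kSend);

    std::set<int> readTags;       // tags whose data has been captured
    int slot = 0;
    while (static_cast<int>(readTags.size()) < kTags) {
        ++slot;
        std::vector<int> transmitting;
        for (int tag = 0; tag < kTags; ++tag)
            if (sends(rng)) transmitting.push_back(tag);
        if (transmitting.size() == 1)          // exactly one sender: no clash
            readTags.insert(transmitting.front());
    }
    std::printf("All %d tags read after %d slots\n", kTags, slot);
    return 0;
}
```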
All measurements disclosed herein are at standard temperature and pressure, at sea level on Earth, unless indicated otherwise. All materials used or intended to be used in a human being are biocompatible, unless indicated otherwise. Also, the frequencies used are preferably biocompatible.
The foregoing embodiments are presented by way of example only; the scope of the present invention is to be limited only by the following claims.
…which of the symmetries hold at all? The universe is a curious place and cosmologists endeavour to contain this curiosity into one theory. Or maybe two, if they're unlucky.
At present we are far from proposing a theory of everything that brings the various branches of physics, in particular quantum field theory and general relativity, into one framework. The issue here is that these two theories are mutually incompatible – 'neither can live while the other survives', to quote J.K. Rowling. A successful theory of everything would have the absolute power to explain and predict all physical phenomena in the universe.
Nevertheless we do know quite a bit about what the theory must contain, and one question that we must ask is: what would the universe be like if all matter was replaced with its corresponding antimatter? This leads to the concept of CPT symmetry.
‘Ah yes, symmetry, I recognise that word!’
You may be able to relate to using cheap, bendy mirrors with tattered edges when you were younger to study reflective symmetry, or small square pieces of tracing paper to study rotational symmetry. CPT symmetry has essentially the same underlying principles. Instead of transforming a two-dimensional shape and checking if the shape matches, we are transforming the entire universe and checking if the current laws of physics still apply. If they do, the symmetry holds.
In order to do this, we can check whether the four fundamental forces are invariant under the transformation. What transformations are we testing out? This is where the three special little letters come in.
The first transformation is charge conjugation (C). This is reversing all the electrical charges and internal quantum numbers (such as lepton number, baryon number and strangeness), and as a consequence all matter becomes antimatter and vice versa.
It turns out that the electromagnetic, gravitational and strong nuclear forces obey C-symmetry, but the weak nuclear force does not. Therefore C-symmetry does not hold, and this shows that the universe can indeed tell its positives from its negatives.
The second transformation is parity inversion (P). A universe that has had its parity inverted can be thought of as a mirror image in space but not in time – left and right have swapped, up and down have swapped, and so on (strictly speaking, all three spatial axes are inverted).
The electromagnetic, gravitational and strong nuclear forces obey P-symmetry, but once again the weak nuclear force does not. It looks like the universe can also tell its left from its right!
It was originally thought that, even though C-symmetry and P-symmetry do not hold individually, applying them together would cancel the violations out, i.e. CP-symmetry would hold. The radioactive beta decay experiments of the 1950s (a weak interaction) had shown that P alone is violated, but it was experiments on the decay of neutral kaons in 1964 that showed the combined CP-symmetry to be violated as well. CP violation is thought to be one of the reasons why the universe has ended up with more matter than antimatter.
The third transformation is time reversal (T). You would expect T-symmetry to not hold either, because were you to view life in reverse, it would not look natural, and this is a consequence of the second law of thermodynamics. Otherwise you could see broken glass shards reforming themselves into a uniform sheet, or spilt drinks voluntarily flying up and containing themselves back into a mug. That for sure doesn’t happen in my household.
Or does it?
Interestingly, the gravitational, electromagnetic and strong interactions are found to respect T-symmetry, so most processes look the same on a microscopic scale whether run forwards or backwards, even though we see a clear asymmetry on a macroscopic scale. If you take a simple physical system, such as a swinging pendulum, you can't tell the difference when it is played in reverse (ignoring air resistance and friction).
If you apply all three transformations simultaneously, all experimental evidence to date indicates that this master combination holds. This is the content of the CPT theorem, proved in the 1950s, and it is regarded as one of the fundamental properties of nature. In other words, if there were another universe that was made of antimatter instead of matter, was an exact mirror image, and ran backwards in time, you wouldn't be able to tell the difference between that universe and ours.
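For readers who like to see the three operations written out, here is the standard schematic way they are usually presented in textbooks (a summary, not tied to any particular experiment):

```latex
\begin{align*}
\mathsf{C}&:\ \text{particle} \;\longleftrightarrow\; \text{antiparticle}
   \quad (q \to -q,\ \text{internal quantum numbers reversed})\\
\mathsf{P}&:\ (t,\,\vec{x}) \;\longmapsto\; (t,\,-\vec{x})\\
\mathsf{T}&:\ (t,\,\vec{x}) \;\longmapsto\; (-t,\,\vec{x})
\end{align*}
```

The CPT theorem says that applying all three at once leaves the laws of physics unchanged, even though C, P and CP individually fail in weak interactions.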
Since CPT-symmetry holds but CP-symmetry does not, the implication is that T-symmetry does not hold either, which agrees with what we see in real life. So what causes the mismatch in the 'arrow of time' between microscopic and macroscopic scales? That is a question to explore another time.
CPT symmetry is an important concept to consider as it provides gateways into further understanding the arrow of time and why we only remember the past but not the future.
| Bibliographic Entry | Result (with surrounding text) | Standardized Result |
| --- | --- | --- |
| World Book Encyclopedia. Chicago: World Book. | "The diameter of an electron is less than 1/1000 the diameter of a proton. A proton has a diameter of approximately 1/25,000,000,000,000 inch (0.000000000001 mm)." | < 10⁻¹⁸ m |
| Mac Gregor, Malcolm H. The Enigmatic Electron. Boston: Kluwer Academic, 1992: 4-5. | "Rc = 3.86 × 10⁻¹¹ cm; Rqmc = 6.64 × 10⁻¹¹ cm; Rqmc = 6.70 × 10⁻¹¹ cm" | 4–7 × 10⁻¹³ m |
| Pauling, Linus. College Chemistry. San Francisco: Freeman, 1964: 57, 4-5. | "The radius of the electron has not been determined exactly but it is known to be less than 1 × 10⁻¹³ cm" | < 10⁻¹⁵ m |
| | "Ro = 2.82 × 10⁻¹³ cm" | 2.82 × 10⁻¹⁵ m |
An electron is a negatively charged subatomic particle. Electrons are responsible for the formation of chemical compounds and are considered to be fundamental units of matter (they are not made up of smaller units). Although scientists have been studying electrons for quite a while, the exact diameter of an electron is unknown. According to Malcolm H. Mac Gregor,
The electron is a point-like particle – that is, a particle with no measurable dimensions, at least within the limitations of present-day instrumentation. However, a rather compelling case can be made for an opposing viewpoint: namely, that the electron is in fact a large particle which contains an embedded point-like charge.
The electron was the first subatomic particle to be discovered. It was discovered in 1897 by a British physicist named Sir Joseph John Thomson. Later, in 1913, an American physicist by the name of Robert A. Millikan obtained an accurate measurement of the electron's charge. Recent studies show that the charge of an electron is 1.60218 × 10⁻¹⁹ coulombs. The mass of an electron is known to be 9.10939 × 10⁻³¹ kilograms.
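The 2.82 × 10⁻¹⁵ m entry in the table above is the classical electron radius, which follows directly from the charge and mass just quoted. As a quick consistency check (values rounded):

```latex
r_e \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{e^2}{m_e c^2}
      \;\approx\; \frac{(8.99\times 10^{9})\,(1.602\times 10^{-19})^2}
                       {(9.109\times 10^{-31})\,(2.998\times 10^{8})^2}\ \text{m}
      \;\approx\; 2.82\times 10^{-15}\ \text{m}
```

This is a convenient length scale rather than a measured size; experiments only bound the electron's size from above, as the other entries in the table show.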
In all neutral atoms there are the same number of electrons as protons. Electrons are bound to the nucleus by electrostatic forces. Most of the volume of an atom is occupied by electrons, even though they barely contribute to the atomic mass. Niels Bohr, Wolfgang Pauli, and others discovered the pattern in which electrons are distributed throughout an atom in the 1920s. Electrons are arranged at various distances from the nucleus, in energy levels called shells. The average distance of outer electrons from the nucleus is a few tenths of a nanometer in all atoms. In heavy atoms inner electrons are much closer to the nucleus. The number of electrons in the outermost shell determines the chemical behavior of that atom. If an atom combines with another atom to form a molecule, then the electrons in the outermost shell are transferred from one atom to another or shared between atoms.
Danny Donohue -- 2000
This is a quick example of using RFID to control access to some piece of arbitrary electronics, in this case a Macintosh Classic computer. As a proof of concept, all the added components are outside of the Mac, but in a more finished product they could be moved inside the enclosure, for example.
I’m using a PN532 NFC/RFID breakout board connected to an Arduino. Using this the Arduino can read the ID code on any ISO14443A RFID card. If that ID exists in a list of allowed users, then the Arduino activates a relay that allows power to flow to the Mac. If the ID is not in the allowed list, the relay stays deactivated. Since the PN532 operates at 3.3V, and the Arduino at 5V, a 4050 level shifter chip is used in the communication between them.
I’ve also wired in a 16×2 character LCD screen to tell the user that their card has been read and whether access is ‘granted’ or ‘denied’.
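In case it helps anyone reproducing this, a minimal sketch along these lines might look like the following. It assumes the Adafruit PN532 and stock LiquidCrystal libraries; the pin numbers, the I2C constructor, the relay wiring, and the example UIDs are all placeholders for whatever your build actually uses, so check them against your wiring and library version.

```cpp
#include <string.h>
#include <Adafruit_PN532.h>
#include <LiquidCrystal.h>

// Pin assignments below are placeholders for this particular build.
Adafruit_PN532 nfc(2, 3);                 // IRQ, RESET (I2C wiring assumed)
LiquidCrystal lcd(7, 8, 9, 10, 11, 12);   // RS, EN, D4-D7
const int RELAY_PIN = 6;                  // relay switching the Mac's power

// Allowed 4-byte card UIDs (example values only).
const uint8_t ALLOWED[][4] = { {0xDE, 0xAD, 0xBE, 0xEF},
                               {0x12, 0x34, 0x56, 0x78} };

bool isAllowed(const uint8_t* uid, uint8_t len) {
  if (len != 4) return false;
  for (const auto& entry : ALLOWED)
    if (memcmp(uid, entry, 4) == 0) return true;
  return false;
}

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, LOW);           // Mac stays off by default
  lcd.begin(16, 2);
  lcd.print("Scan your card");
  nfc.begin();
  nfc.SAMConfig();                        // configure the PN532 to read tags
}

void loop() {
  uint8_t uid[7];
  uint8_t uidLength = 0;
  if (nfc.readPassiveTargetID(PN532_MIFARE_ISO14443A, uid, &uidLength)) {
    lcd.clear();
    lcd.print("Card read");
    lcd.setCursor(0, 1);
    if (isAllowed(uid, uidLength)) {
      lcd.print("Access granted");
      digitalWrite(RELAY_PIN, HIGH);      // relay closes; power flows to Mac
    } else {
      lcd.print("Access denied");
      digitalWrite(RELAY_PIN, LOW);
    }
    delay(2000);                          // simplification: relay stays
  }                                       // latched until the next card
}
```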
Schizophrenia is a chronic and severe mental disorder with a typical onset in adolescence and early adulthood and a lifetime prevalence of about 1%. On average, males have their illness onset 3 to 4 years earlier than females. Onset of schizophrenia is very rare before age 11, and prior to age 18 the illness has been called “early-onset schizophrenia” (EOS), while onset before age 13 has been termed “very early-onset schizophrenia” (VEOS; Werry, 1981).
Prior to examining topics in schizophrenia, we must address a basic question as to the definition of adolescents and adults. The way these groups will be defined is partly related to the question being asked. That is, research studies that emphasize the study of neural development or finding links between endocrine changes and onset of schizophrenia are likely to place more emphasis on defining adolescence in terms of body or brain maturation. For example, adolescence could be defined as the period between the onset and offset of puberty. Alternatively, it could be defined on the basis of our current knowledge of brain development, which suggests that maturational processes accelerate around the time of puberty but that they continue on into what is often considered young adulthood. Most recent studies of normal brain development suggest that brain maturation continues to the early 20s. If this rather extended definition of adolescence is used, then the appropriate adult contrast groups are likely to be somewhat older—people in their late 20s, 30s, or even 40s.
Under the general rubric of phenomenology, four major topics need to be considered as we explore the relevance of research on adults to the understanding of adolescents. These four topics are diagnostic criteria, phenomenology, the relationship of phenomenology to neural mechanisms, and the use of phenomenology to assist in identifying the phenotype for genetic studies.
Two different sets of diagnostic criteria are currently used in the world literature. For most studies that emphasize biological markers, and for almost all of those conducted in the United States, the standard diagnostic criteria are from the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV, American Psychiatric Association, 1994). However, international epidemiologic studies are likely to use the World Health Organization's International Classification of Diseases, Tenth Revision (WHO's ICD-10, World Health Organization, 1992). Differences in the choice of diagnostic criteria may affect the results of studies.
There are many similarities between the ICD and DSM, largely as a consequence of efforts by the ICD and DSM work groups to achieve as much concordance as possible. Both require 1 month of active symptoms and the presence of psychotic symptoms such as delusions or hallucinations. There are, however, important differences between the ICD and DSM. In most respects, the DSM provides a slightly narrower conceptualization of schizophrenia than does the ICD. For example, the ICD only requires 1 month of overall duration of symptoms, whereas the DSM requires 6 months. In addition, the ICD includes schizotypal disorder and simple schizophrenia within its nomenclature under the general heading of the diagnosis of schizophrenia. In the DSM, simple schizophrenia is excluded, and schizotypal disorder is placed among the personality disorders. Other less significant differences include a greater emphasis on first-rank symptoms in the ICD, as well as a much more specific and complex subtyping system.
How important is the choice of diagnostic criteria for research on adolescents? It could be very important. Setting criteria boundaries more broadly or more narrowly will have a significant impact on the groups of adolescents chosen for study. Furthermore, although the developers of these criteria paid close attention to examining the reliability and, when possible, their validity, it was assumed almost without question that the criteria could and should be the same for children, adolescents, and adults. This decision was not based on any published empirical data but rather, primarily on “clinical impressions.”
A frequently expressed clinical impression among those who study schizophrenia or psy chosis in children and adolescents, however, is that making a diagnosis in these younger age ranges is much more difficult than diagnosing individuals in their 20s. Multiple issues arise for a diagnosis in the adolescence age range. One important issue is comorbidity. Teenagers frequently may meet criteria for multiple diagnoses, such as conduct disorder or attention-deficit hyperactivity disorder (ADHD). Although the DSM tends to encourage the use of multiple diagnoses, this policy also has no empirical basis. An alternative approach that might be considered from research on adolescents is to try to identify a single “best” diagnosis that would summarize the child parsimoniously.
Many adolescents also abuse substances of many different kinds. This factor is important to consider in the diagnosis of adults who may have schizophrenia, but it poses even greater problems among adolescents. Abuse of substances such as amphetamines may potentially induce a psychotic picture that is very similar to schizophrenia. We do not know whether young people who continue to meet criteria for schizophrenia after discontinuing amphetamine use should be considered “typical schizophrenics” or whether they should in fact be given another diagnosis such as substance-induced psychotic disorder. However, because amphetamines have a significant effect on the dopamine system—a key neurotransmitter implicated in the neurochemical mechanisms of schizophrenia—it is at least plausible that amphetamines (and perhaps other substances as well) be considered triggers or inducers of schizophrenia. According to this view, substance abuse could be one of the many factors that rank among the nongenetic causes of schizophrenia. However, there is still no strong consensus on this issue.
In summary, there are many unanswered research questions under the heading of diagnosis. More studies are needed to explore how well existing diagnostic criteria actually work in children and adolescents. Specifically, studies of both the reliability and validity of these criteria are needed, as well as studies examining issues of comorbidity and longitudinal studies examining changes in both diagnosis and phenomenology in cohorts of adolescents and of adults.
The concept of phenomenology can be relatively broad, describing clinical symptoms, psychosocial functioning, cognitive functioning, and “neurological” measures such as soft signs. Here we will focus primarily on clinical symptoms and psychosocial functioning.
For the assessment of clinical symptoms in schizophrenia, choosing the appropriate informant is a key issue. Whatever the age, patients suffering from schizophrenia frequently have difficulty in reporting their symptoms and past history accurately. Optimally, one gets the best information from several informants, usually a parent plus the patient. In the case of adolescents, a friend may be a good additional informant. Another critical issue in phenomenology when assessing adolescents is to determine the distinction between “normal” adolescent behavior and psychopathology. Again, this can be difficult in assessing adults, but it is even more difficult in adolescents. It can be hard to draw the line between “teenage scruffiness” and disorganization, or a withdrawal to seek privacy versus avolition. As discussed above, drug use or abuse can also confound the picture. For example, when an adolescent known to be using marijuana regularly exhibits chronic apathy and avolition, is this due to marijuana use or is it a true negative symptom? At the moment, no data are available to help us address any of these issues pertaining to the assessment of clinical symptoms in adolescents versus those in adults. This is clearly an area in which more information is needed.
Another issue is the identification of appropriate developmental milestones and needs that are appropriate to the adolescent age range for the assessment of psychosocial functioning. For example, when we assess peer relationships in young or older adults with schizophrenia, we are evaluating the extent to which they have a circle of friends with whom they get together socially. In the case of adolescents, peer relationships are far more important and are more intensely driven by a need to establish independence from the family setting and to bond with others from the same age range. Likewise, the assessment of family relationships among adolescents is guided by quite different conditions than those for mature adults. Finally, the “work” of an adolescent is quite generally to do well in school, whereas the “work” of an adult is normally to find a paying job. Again, assessment tools have simply not been defined for assessing these aspects of psychosocial functioning in adolescents.
Expression of Early Symptoms and Illness Course
A wide range of symptoms has been described (Table 5.1) and the initial clinical features vary from one patient to another. The identification of these as prodromal symptoms is essentially retrospective, being made only after the first psychotic episode has emerged. Using detailed assessment of such symptoms by a structured interview, studies have shown that the prodromal symptoms may begin 2 to 6 years before psychosis onset (Hafner et al., 1992). Negative symptoms of the prodrome may begin earlier than the positive symptoms (Häfner, Maurer, Löffler, & Riecher-Rossler, 1993). Over the past decade, attempts have been made to characterize the prodromal phase prospectively, with operational criteria (Yung & McGorry, 1996). However, many such patients may not develop schizophrenia, leading to the problem of false positives; it is therefore critical that we identify more specific predictors of conversion to psychosis among prospectively identified prodromal patients.
Table 5.1 Prodromal Features in First-Episode Psychosis Frequently Described in Adolescent Patients
Reduced concentration, attention
Decreased motivation, drive, and energy
Mood changes: depression, anxiety
Decline in role functioning, e.g., giving less to academic performance, quitting established interests, neglecting appearance
The onset of the first episode of psychosis (the beginning of clearly evident psychotic symptoms) is to be distinguished from the illness onset, which often begins with symptoms and signs of nonspecific psychological disorder (Häfner et al., 1993). The prodromal phase refers to the period characterized by symptoms marking a change from the premorbid state to the time frank psychosis begins (Fig. 5.1). The onset of both the prodrome and the psychotic episode are difficult to define precisely.
Although the clinical features of adolescent-onset and adult-onset schizophrenia are overall quite similar, early onset of schizophrenia may have an impact on its initial clinical presentation. In general, early-onset schizophrenia patients have more severe negative symptoms and cognitive impairments and are less responsive to treatment. Children and adolescents with schizophrenia often fail to achieve expected levels of academic and interpersonal functioning. Very early-onset cases tend to have an insidious onset, whereas adolescent-onset cases tend to have a more acute onset. Patients with EOS or VEOS are also more often diagnosed as an undifferentiated subtype, because well-formed delusions and hallucinations are less frequent (Nicolson & Rapoport, 1999; Werry, McClellan, & Chard, 1991).
In summary, a challenge in the study of schizophrenia is the variability, or heterogeneity, in its clinical manifestations, associated biological changes, and course. This heterogeneity may have led to inconsistencies in research findings (Keshavan & Schooler, 1992). Identifying early symptoms and signs and functional impairment can help our efforts in improving early diagnosis and in understanding the biological and genetic heterogeneity. Knowledge of the illness onset in adolescence may also help elucidate the brain developmental and possibly neurodegenerative processes in this illness, as proposed by recent pathophysiological models. Furthermore, an understanding of the course of clinical and neurobiological characteristics in the early phase of schizophrenia, such as the duration of untreated illness, can help in predicting outcome and presents important opportunities for secondary prevention.
Some of the key research questions in the area of the phenomenology of adolescent schizophrenia are as follows:
• Should the same criteria be used for adolescents and adults?
• Are there differences in phenomenology between the two?
• What is the validity of current assessment tools in the “real” research world? The reliability?
• What is (are) the best source(s) of information?
• What impact do differences in “life developmental stages” have on phenomenology?
• What is the best way to assess comorbidity and boundary issues in relationship to other disorders such as schizotypal disorder?
The consensus on these questions is at best modest. Almost no empirical data are available to answer them. In the area of phenomenology in adolescents suffering from schizophrenia, more well-designed, empirical studies are needed to improve assessment tools and to compare adults and adolescents.
Linking Phenomenology to Its Neural Basis
Through the use of neuroimaging, neuropathology, and neurogenetics, substantial progress in understanding the neural underpinnings of schizophrenia is being made. Excellent work has been done recently that examines the relationship between brain development and the occurrence of schizophrenia in children and adolescents, as described in other chapters (DeLisi, 1997; Giedd et al., 1999; Gur, Maany, et al., 1998; 1999; Ho et al., 2003; Jacobsen et al., 1998; Kumra et al., 2000; Lieberman, Chakos, et al., 2001; Rapoport et al., 1997; Thompson, Vidal, et al., 2001). As this work continues to mature, however, more work needs to be done to examine precisely how the specific symptoms of schizophrenia arise in the human brain, and whether imaging and other tools can be used to assist in diagnosis, treatment planning, and ultimately prevention.
This work must also address several questions in the realm of phenomenology. Specifically, how should we proceed as we attempt to link phenomenology to neural mechanisms? As discussed above, the phenomenology has multiple levels and aspects—symptoms, outcome, cognitive function, and psychosocial function. Which of these should be linked to imaging and other “biological” measures?
Most work to date has taken several different approaches. At the simplest level, investigators have conducted studies linking specific symptoms to neural measures. For example, studies have used positron emission tomography (PET) to identify brain regions active during auditory hallucinations (e.g., Silbersweig et al., 1995). Other investigators have examined symptoms such as thought disorder in relation to brain measures (e.g., Shenton et al., 1992). One of the critical conceptual issues, however, is the fact that the phenomenology of schizophrenia is complex. That is, the illness cannot be characterized on the basis of a single symptom. Although auditory hallucinations are common in schizophrenia, they are not omnipresent. Therefore, other investigators have proceeded by examining groups of symptoms that are correlated with one another, or “dimensions.” Many factor analytic studies have examined the factor structure of the symptoms of schizophrenia; nearly all find that the symptoms group naturally into three dimensions: psychoticism, disorganization, and negative symptoms (Andreasen, 1986; Andreasen, O'Leary, et al., 1995; Andreasen, Olsen, & Dennert, 1982; Arndt, Alliger, & Andreasen, 1991; Arndt, Andreasen, Flaum, Miller, & Nopoulos, 1995; Bilder, Mukherjee, Rieder, & Pandurangi, 1985; Gur et al., 1991; Kulhara, Kota, & Joseph, 1986; Lenzenweger, Dworkin, & Wethington, 1989; Liddle, 1987). Some studies have used the dimensional approach to examine brain–behavior relationships. Several studies also suggest that these three dimensions may have different functional neural substrates as seen with PET, or different structural brain correlates as evaluated with magnetic resonance imaging (MRI), and may also have different and independent longitudinal courses (Andreasen, Arndt, Alliger, Miller, & Flaum, 1995; Andreasen et al., 1996, 1997; Arndt et al., 1995; Flaum et al., 1995, 1997; Gur et al., 1991; Miller, Arndt, & Andreasen, 1993; O'Leary et al., 2000).
In concert with this work examining the symptoms of schizophrenia, other investigators have pursued the study of relationships between cognition and brain measures. Some have argued that some form of cognitive dysfunction may ultimately provide the best definition of the phenotype of schizophrenia, and that ultimately cognitive measures may replace symptom measures in defining the phenomenology of schizophrenia (Andreasen, 1999). Again, however, a consensus has not been achieved.
Defining the Phenotype for Genetic Studies
Contemporary geneticists applying the tools of modern genetics have become very much aware of how important it is to have good definitions of complex disorders such as schizophrenia. In fact, reflecting this awareness, they are beginning to speak about a new (but actually old) field, referred to as “phenomics,” the genetic underpinnings of phenomenology. The emergence of this term reflects the fact that the definition of the phenotype of illnesses like schizophrenia may be the single most important component of modern genetic studies.
Here the issues are very similar to those discussed above, involving the relationship between clinical presentation and neural mechanisms. At what level should the phenotype be defined? The symptom level? Dimension level? Diagnosis level? Cognitive level? Or should we abandon these more superficial clinical measurements and attempt to find more basic definitions, often referred to as “endophenotypes,” or “measurable components unseen by the unaided eye along the pathway between disease and distal genotype” (Gottesman & Gould, 2003)?
In this instance, there may be some consensus. Many investigators believe that endophenotypic definitions may provide a better index of the presence of this disorder than classic symptom-based definitions, such as those created by the DSM or ICD. There is as of yet, however, no strong consensus on what the “best” endophenotypes may be. Some candidates that have been proposed include problems with working memory, eye tracking, or prepulse inhibition. To date, most of this work has been conducted with adults. The application of this approach to defining and identifying the schizophrenia endophenotype in children and adolescents is another important future direction, as is the search for additional new candidate endophenotypes.
Two complementary approaches have emerged as providing much needed insight into the causes and underlying substrates of schizophrenia: neurobiology and genetics. Current efforts in neurobiology are to integrate data from behavioral measurements with the increasingly informative data from work with neuroimaging and electrophysiology. Neurobiological studies were stimulated by the well-documented neurobehavioral deficits that are present in schizophrenia. Some of the impairments are evident at the premorbid phase of illness and progress during adolescence, with onset of symptoms. These have become targets for therapeutic interventions. The application of structural and functional neuroimaging has enabled researchers to obtain in vivo measures and highlight the brain circuitry affected in schizophrenia. Progress in genetics has moved the field from earlier efforts relying on family studies of the phenotype to molecular studies that probe the underlying biology. In this section, we will review neurobehavioral measures, proceed to describe studies of brain structure and function, review the impact of hormones critical during adolescence, describe the implicated brain circuitry, and conclude by presenting the genetics of schizophrenia.
Cognitive deficits have been recognized since early descriptions of schizophrenia, when it was called “dementia praecox.” More recent evidence confirms that cognitive deficits are evident in vulnerable individuals, are present at the onset of illness, and predict outcome. Furthermore, as summarized in Chapters 6 and 7, early detection and efforts at intervention may hold a key for ameliorating the ravages of schizophrenia later in life. Here we will describe evidence for deficits in neuromotor and neurocognitive functioning, with special emphasis on early presentation.
Prior to the advent of antipsychotic medications, there were reports in the scientific literature on the occurrence of movement abnormalities in patients with schizophrenia (Huston & Shakow, 1946; Walker, 1994; Yarden & Discipio, 1971). After treatment of patients with antipsychotics became widespread, attention shifted to drug-induced abnormalities in motor behavior. Because motor side effects were of such great concern, they temporarily eclipsed research on naturally occurring motor dysfunction in schizophrenia. But in recent decades, the findings from prospective and retrospective studies have rekindled interest in the signs of motor dysfunction that often accompany schizophrenia in the absence of treatment.
Because the association between motor deficits and brain dysfunction is so well established, motor behaviors are particularly interesting to researchers in the field of schizophrenia (Walker, 1994). In clinical practice, neurologists are often able to identify the locus of brain lesions based on the nature of motor impairments. To date, the motor signs observed in schizophrenia have generally been too subtle and nonspecific to suggest a lesion in a particular brain structure. Nonetheless, there is extensive evidence that motor dysfunction is common in schizophrenia, and it may offer clues about the nature of the brain dysfunction subserving the disorder.
Research has shown that motor deficits predate the onset of schizophrenia, and for some patients are present early in life. Infants who later develop schizophrenia show delays and abnormalities in motor development (Fish, Marcus, Hans, & Auerbach, 1993; Walker, Savoie, & Davis, 1994). They are slower to acquire coordinated patterns of crawling, walking, and bimanual manipulation. They also manifest asymmetries and abnormalities in their movements. These include abnormal postures and involuntary movements of the hands and arms. It is important to note, however, that these early motor signs are not specific to schizophrenia. Delays and anomalies in motor development are present in children who later manifest a variety of disorders, as well as some who show no subsequent disorder. Thus, we cannot use motor signs as a basis for early diagnosis or prediction. But the presence of motor deficits in infants who subsequently manifest schizophrenia suggests that the vulnerability to the disorder involves the central nervous system and is present at birth.
Deficits in motor function extend beyond infancy and have been detected throughout the premorbid period in schizophrenia, including adolescence. Studies of the school and medical records of individuals diagnosed with schizophrenia in late adolescence or early adulthood reveal an elevated rate of motor problems. Both school-aged children and adolescents at risk are more likely to have problems with motor coordination (Cannon, Jones, Huttunen, Tanskanen, & Murray, 1999). Similarly, prospective research has shown that children and adolescents who later develop schizophrenia score below normal controls on standardized tests of motor proficiency (Marcus, Hans, Auerbach, & Auerbach, 1993; Niemi, Suvisaari, Tuulio-Henriksson, & Loennqvist, 2003; Schreiber, Stolz-Born, Heinrich, & Kornhuber, 1992). Again, the presence of these deficits before the onset of clinical schizophrenia suggests that they are indicators of biological vulnerability.
As mentioned, there is an extensive body of research on motor functions in adult patients diagnosed with schizophrenia, both medicated and nonmedicated (Manschreck, Maher, Rucklos, & Vereen, 1982; Walker, 1994; Wolff & O'Driscoll, 1999). The research has revealed deficits in a wide range of measures, from simple finger tapping to the execution of complex manual tasks. In addition, when compared to healthy comparison subjects, schizophrenia patients manifest more involuntary movements and postural abnormalities.
It is noteworthy that motor abnormalities have also been detected in adolescents with schizotypal personality disorder. Compared to healthy adolescents, these adolescents show more involuntary movements and coordination problems (Nagy & Szatmari, 1986; Walker, Lewis, Loewy, & Palyo, 1999). Further research is needed to determine whether schizotypal adolescents with motor abnormalities are more likely to go on to develop schizophrenia.
The nature of the motor deficits observed in schizophrenia suggests abnormalities in subcortical brain areas, in particular a group of brain regions referred to as the basal ganglia (Walker, 1994). These brain regions are a part of the neural circuitry that connects subcortical with higher cortical areas of the brain. It is now known that the basal ganglia play a role in cognitive and emotional processes, as well as motor functions. As our understanding of brain function and motor circuitry expands, we will have greater opportunities for identifying the origins of motor dysfunction in schizophrenia. In addition, research on motor abnormalities in schizophrenia has the potential to shed light on the neural substrates that confer risk for schizophrenia. Some of the important questions that remain to be answered are: What is the nature and prevalence of motor dysfunction in adolescents at risk for schizophrenia? Is the presence of motor dysfunction in schizophrenia linked with a particular pattern of neurochemical or brain abnormalities? Can the presence of motor dysfunction aid in predicting which individuals with prodromal syndromes, such as schizotypal personality disorder (SPD), will develop schizophrenia? Would neuromotor assessment aid in the prediction of treatment response?
Early studies examining cognitive function in schizophrenia focused on single domains, such as attention or memory, and preceded developments in neuroimaging and cognitive neuroscience that afford better linkage between cognitive aberrations and brain circuitry. Neuropsychological batteries, which were initially developed and applied in neurological populations, attempt to link behavioral deficits to brain function. When applied in schizophrenia, such batteries have consistently indicated diffuse dysfunction, with relatively greater impairment in executive functions and in learning and memory (Bilder et al., 2000; Censits, Ragland, Gur, & Gur, 1997; Elvevag & Goldberg, 2000; Green, 1996; Gur et al., 2001; Saykin et al., 1994).
It is noteworthy that the pattern of deficits is already observed at first presentation and is not significantly changed by treatment of the clinical symptoms. Therefore, study of adolescents at risk or at onset of illness avoids confounding by effects of treatment, hospitalization, and social isolation that may contribute to compromised function. Although the literature evaluating the specificity of cognitive deficits in schizophrenia is limited, there is enough evidence to show that the profile and severity differ from those seen in bipolar disorder. Thus, early evaluation during adolescence may have diagnostic and treatment implications. Given the evidence on cognitive deficits at the premorbid stage, it would be important to evaluate whether a pattern of deficits in adolescents at risk can predict the onset and course of illness. The executive functions impaired in adults with schizophrenia are the very abilities that are essential for an adolescent to make the transition to young adulthood, when an increasingly complex array of alternatives must be navigated.
In addition to the cognitive impairment, emotion-processing deficits in identification, discrimination, and recognition of facial expressions have been observed in schizophrenia (Kohler et al., 2003; Kring, Barrett, & Gard, 2003). Such deficits may contribute to the poor social adjustment already salient before disease onset. Emotional impairment in schizophrenia is clinically well established, manifesting in flat, blunted, inappropriate affect and in depression. These affect-related symptoms are notable in adolescents during the prodromal phase of illness preceding the positive symptoms. While these may represent a component of the generalized cognitive impairment, they relate to symptoms and neurobiological measures that deserve further research.
Several brain systems are implicated by these deficits. The attention-processing circuitry includes brainstem-thalamo-striato-accumbens-temporal-hippocampal-prefrontal-parietal regions. Deficits in working memory implicate the dorsolateral prefrontal cortex, and the ventromedial temporal lobe is implicated by deficits in episodic memory. A dorsolateral-medial-orbital prefrontal cortical circuit mediates executive functions. Animal and human investigations have implicated the limbic system, primarily the amygdala, hypothalamus, mesocorticolimbic dopaminergic systems, and cortical regions including orbitofrontal, dorsolateral prefrontal, temporal, and parts of parietal cortex. These are obviously complex systems and impairment in one may interact with dysfunction in others. Studies with large samples are needed to test models of underlying pathophysiology.
The link between neurobehavioral deficits and brain dysfunction can be examined both by correlating individual differences in performance with measures of brain anatomy and through the application of neurobehavioral probes in functional imaging studies. With these paradigms, we can investigate the topography of brain activity in response to engagement in tasks in which deficits have been noted in patients. Thus, there is “online” correlation between brain activity and performance in a way that permits direct examination of brain–behavior relations (Gur et al., 1997).
The availability of methods for quantitative structural neuroimaging has enabled examination of neuroanatomic abnormalities in schizophrenia. Because the onset of schizophrenia takes place during a phase of neurodevelopment characterized by dynamic and extensive changes in brain anatomy, establishment of the growth chart is necessary to interpret findings. Two complementary lines of investigation have proved helpful. By examining the neuroanatomical differences between healthy people and individuals with childhood-onset and first-episode schizophrenia, as well as individuals at risk, regional abnormalities early in the course of illness may be identified. Complementary efforts are needed to examine changes associated with illness progression. An understanding of the neuroanatomic changes in the context of the dynamic transitions of the developing brain during adolescence, however, requires careful longitudinal studies during this critical period. A brief introduction to the methodology of quantitative MRI and its application to examine neurodevelopment is needed to appreciate findings in schizophrenia.
Several approaches were developed in the early 1990s, and these have since become standard and have been shown to produce reliable results (e.g., Filipek, Richelme, Kennedy, & Caviness, 1994; Kohn et al., 1991). These methods have provided data on the intracranial composition of the three main brain compartments related to cytoarchitecture and connectivity: gray matter (GM), the somatodendritic tissue of neurons (cortical and deep); white matter (WM), the axonal compartment of myelinated connecting fibers; and cerebrospinal fluid (CSF).
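The arithmetic behind such compartmental measures is straightforward once a scan has been segmented: each compartment's volume is the number of voxels assigned to it multiplied by the volume of a single voxel. The minimal sketch below illustrates only this final step; the label codes, voxel size, and synthetic data are illustrative assumptions and do not reproduce the procedure of any particular group.

```python
import numpy as np

# Hypothetical label codes for a segmented scan (assumptions for illustration)
GM, WM, CSF = 1, 2, 3
VOXEL_DIMS_MM = (1.0, 1.0, 1.0)  # assumed voxel size in millimeters

def compartment_volumes(labels: np.ndarray, voxel_dims=VOXEL_DIMS_MM) -> dict:
    """Return GM, WM, and CSF volumes in milliliters from a 3-D label map."""
    voxel_ml = float(np.prod(voxel_dims)) / 1000.0  # mm^3 per voxel -> mL
    return {
        "GM_ml": float(np.sum(labels == GM) * voxel_ml),
        "WM_ml": float(np.sum(labels == WM) * voxel_ml),
        "CSF_ml": float(np.sum(labels == CSF) * voxel_ml),
    }

# Example with a synthetic label map standing in for a segmented scan.
rng = np.random.default_rng(0)
fake_labels = rng.integers(0, 4, size=(128, 128, 128))
print(compartment_volumes(fake_labels))
```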
In one of the first studies examining segmented MRI in children and adults, Jernigan and Tallal (1990) documented the “pruning” process proposed by Huttenlocher's (1984) work. They found that children had higher GM volumes than adults, a finding indicating loss of GM during adolescence. This group has more recently replicated these results by use of advanced methods for image analysis (Sowell, Thompson, Holmes, Jernigan, & Toga, 1999). Their new study also demonstrated that the pruning is most “aggressive” in prefrontal and temporoparietal cortical brain regions. As a result of this work, we now recognize that both myelination and pruning are important aspects of brain development.
In a landmark paper published in 1996, a National Institutes of Health (NIH) group reported results of a brain volumetric MRI study on 104 healthy children ranging in age from 4 to 18 (Giedd et al., 1996). Although this group did not segment the MRI data into compartments, they did observe developmental changes that clearly indicated prolonged maturation beyond age 17. In a later report on this sample, in which segmentation algorithms were applied, the investigators were able to pinpoint the greatest delay in myelination, defined as WM volume, for frontotemporal pathways (Paus et al., 1999). This finding is very consistent with the Yakovlev and Lecours (1967) projections. The NIH group went on to exploit the ability of MRI to obtain repeated measures on the same individuals. Using these longitudinal data, they were able to better pinpoint the timing of preadolescent increase in GM that precipitates the pruning process of adolescence. Of importance to the question of maturation as defined by myelogenesis are results indicating that the volume of WM continued to show increases up to age 22 years (Giedd et al., 1999).
A Harvard group developed a sophisticated procedure for MRI analysis (Filipek et al., 1994), which they applied to a sample of children aged 7 to 11 years and used to compare results with those of adults (Caviness, Kennedy, Richelme, Rademacher, & Filipek, 1996). They found sex differences suggesting earlier maturation in females, and their findings generally supported the role of WM as an index of maturation. Their results also indicated that WM does not reach its peak volume until early adulthood.
Another landmark study, published by a Stanford group, examined segmented MRI on a “retrospective” sample of 88 participants ranging in age from 3 months to 30 years and a “prospective” sample of 73 healthy men aged 21 to 70 years (Pfefferbaum et al., 1994). Scans for the retrospective sample were available from the clinical caseload, although images were carefully selected to include only those with a negative clinical reading; the prospective sample was recruited specifically for research and was medically screened to be healthy. The results demonstrated a clear neurodevelopmental course for GM and WM, the former showing a steady decline during adolescence whereas the latter showed increased volume until about age 20 to 22 years.
A Johns Hopkins group used a similar approach in a sample of 85 healthy children and adolescents ranging in age from 5 to 17 years (Reiss, Abrams, Singer, Ross, & Denckla, 1996). Consistent with postmortem and the other volumetric MRI studies, these investigators reported a steady increase in WM volume with age that did not seem to peak by age 17. Unfortunately, they did not have data on older individuals. Their results are consistent with those of Blatter et al. (1995) from Utah, although the extensive Utah database combines ages 16 to 25 and therefore does not permit evaluation of changes during late adolescence and early adulthood.
In the only study to date that has examined segmented MRI volumes from a prospective sample of 28 healthy children aged 1 month to 10 years and a small adult sample, Matsuzawa et al. (2001) applied the segmentation procedures developed by the Penn group. Matsuzawa et al. demonstrated increased volume of both GM and WM in the first postnatal months, but whereas GM volume peaked at about 2 years of age, the volume of WM, which indicates brain maturation, continued to increase into adulthood (Figure 5.2). Furthermore, consistent with the postmortem and other MRI studies that have examined this issue, the frontal lobe showed the greatest maturational lag, and its myelination is unlikely to be completed before young adulthood.
Magnetic resonance imaging studies in first-episode patients have indicated smaller brain volume and an increase in CSF relative to that in healthy people (e.g., Gur et al., 1998a; Ho et al., 2003). The increase is more pronounced in ventricular than in sulcal CSF. Brain and CSF volumes have been related to phenomenological and other clinical variables such as premorbid functioning, symptom severity, and outcome. Abnormalities in these measures are likely to be more pronounced in patients with poorer premorbid functioning, more severe symptoms, and worse outcome. The concept of brain reserve or resilience may apply to schizophrenia as well, with normal brain and CSF volumes as preliminary indicators of protective capacity. As our understanding of how brain systems regulate behavior in health and disease improves, we can take advantage of neuroimaging to examine specific brain regions implicated in the pathophysiology of schizophrenia.
Gray and white matter tissue segmentation can help determine whether tissue loss and disorganization in schizophrenia are primarily the result of a GM deficit or whether abnormalities in WM are also involved. Several studies using segmentation methods have indicated that GM volume reduction characterizes individuals with schizophrenia, whereas the volume of WM is normal. The reduction in GM is apparent in first-episode, never-treated patients and supports the growing body of work that schizophrenia is a neurodevelopmental disorder (e.g., Gur et al., 1999).
In evaluating specific regions, the most consistent findings are of reduced volumes of prefrontal cortex and temporal lobe structures. Other brain regions also noted to have reduced volumes include the parietal lobe, thalamus, basal ganglia, cerebellar vermis, and olfactory bulbs. Relatively few studies have related sublobar volumes to clinical or neurocognitive measures. Available studies, however, support the hypothesis that larger regional volumes are associated with lower severity of negative symptoms and better cognitive performance (e.g., Gur et al., 2000a,b; Ho et al., 2003).
The question of progression of tissue loss has been addressed in relatively few studies and in small samples, reflecting the difficulty of recruiting patients in the early stages of illness for study. Longitudinal studies applying MRI have examined first-episode patients. One group of investigators found no ventricular changes in a follow-up study, conducted 1 to 2 years after the initial study, of 13 patients and 8 controls (Degreef et al., 1992). Another study evaluated 16 patients and 5 controls, studied 2 years after a first psychotic episode (DeLisi et al., 1991). Patients showed no consistent change in ventricular size with time, although there were individual increases or decreases. With a slightly larger group of 24 patients and 6 controls, no significant changes were observed in ventricular or temporal lobe volume at follow-up (DeLisi et al., 1992). Subsequently, 20 of these patients and 5 controls were rescanned over 4 years, and greater decreases in whole-brain volume and enlargement in left ventricular volume were observed in patients. The authors concluded that subtle cortical changes may occur after the onset of illness, suggesting progression in some cases (DeLisi et al., 1995).
In a longitudinal study with a larger sample, 40 patients (20 first-episode, 20 previously treated) and 17 healthy participants were rescanned an average of 2.5 years later. Volumes of whole brain, CSF, and frontal and temporal lobes were measured (Gur, Cowell, et al., 1998). First-episode and previously treated patients had smaller whole-brain, frontal, and temporal lobe volumes than controls at intake. Longitudinally, a reduction in frontal lobe volume was found only in patients, and was most pronounced at the early stages of illness, whereas temporal lobe reduction was seen also in controls. In both first-episode and previously treated patients, volume reduction was associated with decline in some neurobehavioral functions.
The question of specificity of neuroanatomic findings to schizophrenia was addressed in a recent study that evaluated 13 patients with first-episode schizophrenia, 15 patients with first-episode affective psychosis (mainly manic), and 14 healthy comparison subjects longitudinally, with scans separated by 1.5 years (Kasai et al., 2003a). The investigators reported that patients with schizophrenia had progressive decreases in GM volume over time in the left superior temporal gyrus relative to both of the other groups. The existence of neuroanatomical abnormalities in first-episode patients indicates that brain dysfunction occurs before clinical presentation. However, the longitudinal studies suggest evidence of progression, in which anatomic changes may impact some clinical and neurobehavioral features of the illness in some patients. There is also evidence that progression is significantly greater in early-onset patients during adolescence than it is for adult subjects (Gogate, Giedd, Janson, & Rapoport, 2001).
Findings from MRI have been most consistent for GM volume reduction, but more recently, WM changes have also been reported. In the coming years the availability of diffusion tensor imaging will enhance the efforts to examine compartmental abnormalities. The growing understanding of brain development and MRI data obtained from children suggest that the neuroanatomic neuroimaging literature in schizophrenia is consistent with diffuse disruption of normal maturation. Thus, there is clear evidence for structural abnormalities in schizophrenia that are associated with reduced cognitive capacity and less clearly with symptoms. Future work, perhaps with more advanced computerized parcellation methods, is needed to better chart the brain pathways most severely affected.
The electroencephalogram (EEG) measures the electrical activity of the brain; it originates from the summated electrical potentials generated by inhibitory and excitatory inputs onto neurons. The main source of the scalp-recorded EEG is in the cortex of the brain, which contains the large and parallel dendritic trees of pyramidal neurons whose regular ordering facilitates summation. One of the important advances in EEG-based research was the development of a technique to isolate the brain activity related to specific events from the background EEG; this activity related to specific events is termed event-related potentials, or ERPs. Using averaging techniques, it is possible to visualize events related to one of the many different brain operations reflected in the EEG. Typically, these ERPs are related to the specific processing of certain sensory stimuli.
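The averaging step described above can be made concrete with a short sketch. The code below is a rough illustration rather than any laboratory's actual pipeline: it epochs a single-channel recording around stimulus onsets, baseline-corrects each epoch, and averages them, so that background activity not time-locked to the events tends to cancel. The sampling rate, window limits, and synthetic data are all assumptions made for the example.

```python
import numpy as np

def erp_average(eeg: np.ndarray, event_samples, sfreq: float,
                tmin: float = -0.1, tmax: float = 0.5) -> np.ndarray:
    """Average stimulus-locked epochs from a single-channel EEG trace.

    eeg           : 1-D array of continuous EEG samples
    event_samples : sample indices at which the stimulus occurred
    sfreq         : sampling rate in Hz
    Returns the averaged, baseline-corrected epoch (the ERP).
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = [eeg[s - pre:s + post] for s in event_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    epochs = np.asarray(epochs, dtype=float)
    # Subtract the mean of the prestimulus baseline from each epoch
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)

# Illustrative use with synthetic data: 1000-Hz EEG, one event per second
sfreq = 1000.0
eeg = np.random.randn(10_000)
events = np.arange(500, 9_500, 1000)
erp = erp_average(eeg, events, sfreq)
```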
In recent years, many new means of measuring brain structure and function have been developed, each with its advantages in study of the brain. Electroencephalographic and ERP measures are unsurpassed in providing real-time, millisecond resolution of normal and pathological brain processing, literally at the speed of thought, whereas functional magnetic resonance imaging (fMRI) and PET have temporal resolutions some thousand-fold less. Moreover, fMRI and PET only indirectly track neural activity through its effects on blood flow or metabolism. However, the ability of EEG and ERP techniques to localize sources of activity is much less than that of fMRI and PET, and these methods, together with structural MRI, are needed to supplement EEG and ERP information.
Current Event-Related Potential Research in Schizophrenia
Space limitations preclude discussion of all ERPs. We provide here a sample of current work designed to illuminate a fundamental question in schizophrenia research—namely, how the brains of patients suffering from this disorder differ from those of healthy subjects. Event-related potentials provide a functional window on many aspects of brain processing. These range from the most elementary processes, likely involving cellular circuitry (gamma-band activity), through early, simple signal detection and gating (P50) and automatic detection of changes in the environment (mismatch negativity), to more complex activity such as conscious updating of expectations in view of unusual events (P300).
In this section we will first briefly review studies of ERP processes in adults with schizophrenia that illustrate the potential of these measures to provide clues about the cellular circuitry that may be impaired in schizophrenia. The auditory modality plays a special role because it is severely affected in schizophrenia, as evinced in the primacy of auditory hallucinations and speech and language pathology. The data presented here support the hypothesis that schizophrenia involves abnormalities in brain processing from the simplest to the most complex level, and that the anatomical substrates of auditory processing in the neocortical temporal lobe, most carefully investigated in the superior temporal gyrus, themselves evince reduction in GM volume. Next, we briefly summarize a series of studies of adolescents with schizophrenia in which ERPs are recorded while the youngsters perform cognitive tasks that make extensive demands on processing resources and on which they perform poorly. These studies use ERPs in an attempt to identify the earliest stage of cognitive processing at which deficits emerge in adolescents with schizophrenia.
Gamma-band activity and neural circuit abnormalities at the cellular level.
The first ERP we will consider is the steady-state gamma-band response. Gamma band refers to a brain oscillation at and near the frequency of 40 Hertz (Hz) or 40 times per second; steady-state refers to its being elicited by a stimulus of the same frequency. At the cellular level, gamma-band activity is an endogenous brain oscillation thought to reflect the synchronizing of activity in several columns of cortical neurons, or between cortex and thalamus, with this synchronization facilitating communication. At the cognitive level, work in humans suggests that gamma activity reflects the convergence of multiple processing streams in cortex, giving rise to a unified percept. A simple example is a “fire truck”; a particular combination of form perception, motion perception, and auditory perception is melded to form this percept. Gamma activity at its simplest, however, involves basic neural circuitry composed of projection neurons, usually using excitatory amino acid (EAA) neurotransmission, linked with inhibitory gamma-aminobutyric acid (GABA)ergic interneurons. Studies of gamma activity in schizophrenia aim to determine if there is a basic circuit abnormality present, such as might arise from a deficiency in recurrent inhibition, postulated by a number of workers (see review in McCarley, Hsiao, Freedman, Pfefferbaum, & Donchin, 1996). Gamma-band studies themselves, however, cannot reveal any specific details of neural circuitry abnormality.
Kwon and colleagues (1999) began the study of gamma in schizophrenia using an exogenous input of 40-Hz auditory clicks, leading to a steady-state gamma response. The magnitude of the brain response was measured by power, the amount of EEG energy at a specific frequency, with the capacity for gamma driving reflected in the power at and near 40 Hz. Compared with healthy controls, schizophrenia patients had markedly reduced power at the 40-Hz input, although they showed normal driving at slower frequencies, which indicated that this was not a general reduction in power but one specific to the gamma band.
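As a rough illustration of how "power at and near 40 Hz" can be quantified, the sketch below computes a crude periodogram of an epoch and takes the mean power in a band around the driving frequency. The band limits, sampling rate, and synthetic signals are illustrative assumptions, not the parameters used by Kwon and colleagues.

```python
import numpy as np

def band_power(signal: np.ndarray, sfreq: float,
               fmin: float = 38.0, fmax: float = 42.0) -> float:
    """Mean spectral power in [fmin, fmax] Hz of a single epoch.

    A crude Hann-windowed periodogram via the FFT; the 38-42 Hz band is an
    illustrative window around the 40-Hz steady-state driving frequency.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2 / n
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(spectrum[band].mean())

# Synthetic comparison: a response that follows the 40-Hz click train versus
# one that does not; reduced 40-Hz power would mimic the reported deficit.
sfreq = 500.0
t = np.arange(0, 1.0, 1.0 / sfreq)
driven = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(len(t))
undriven = 0.5 * np.random.randn(len(t))
print(band_power(driven, sfreq) > band_power(undriven, sfreq))  # usually True
```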
Spencer and colleagues (2003) took the next logical step and evaluated the gamma-band response to visual stimuli in schizophrenia, to determine whether high-frequency neural synchronization associated with the perception of visual gestalts is abnormal in schizophrenia patients. Previous studies of healthy individuals had reported enhancements of gamma-band power (Tallon-Baudry & Bertrand, 1999) and phase locking (Rodriguez et al., 1999) when gestalt objects are perceived. In the study by Spencer et al., individuals with schizophrenia and matched healthy people discriminated between square gestalt stimuli and non-square stimuli (square/no-square conditions). In schizophrenia patients, the early visual system gamma-band response to gestalt square stimuli was lacking. There were also abnormalities in gamma-band synchrony between brain regions, with schizophrenia patients showing decreasing rather than increasing gamma-band coherence between posterior visual regions and other brain regions after perceiving the visual gestalt stimuli. These findings support the hypothesis that schizophrenia is associated with a fundamental abnormality in cellular neural circuitry evinced as a failure of gamma-band synchronization, especially in the 40-Hz range.
Sensory gating and the P50—early sensory gating.
Several ERPs have been related to the search for an electrophysiologic concomitant of an early sensory gating deficit in schizophrenia. These include, for example, the startle response, for which the size of a blink to an acoustic probe is measured. Schizophrenia patients appear to be unable to modify their large startle response when forewarned that a probe is coming, in contrast with controls (e.g., Braff et al., 1978).
Another ERP thought to be sensitive to an early sensory gating abnormality in schizophrenia is the P50. In the sensory gating paradigm, an auditory click is presented to a subject, eliciting a positive deflection about 50 msec after stimulus onset, the P50 component. After a brief interval (about 500 msec), a second click elicits a much smaller-amplitude P50 in normal adult subjects, who are said to show normal gating: the first stimulus inhibits, or closes the gate to, neurophysiological processing of the second stimulus. Patients with schizophrenia, by contrast, show less reduction in P50 amplitude to the second click, which is referred to as a failure in gating (Freedman, Adler, Waldo, Pachtman, & Franks, 1983). This gating deficit occurs in about half the first-degree relatives of a schizophrenic patient, a finding suggesting that it may index a genetic factor in schizophrenia in the absence of overt psychotic symptoms (Waldo et al., 1991). Patients with affective disorder may show a gating deficit, but the deficit does not persist after successful treatment; in patients with schizophrenia, the deficit occurs in both medicated and unmedicated patients and persists after symptom remission (Adler et al., 1991; Freedman et al., 1983).
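The gating measure itself is typically reported as a simple ratio of the P50 amplitude evoked by the second (test) click to that evoked by the first (conditioning) click, so that values near zero indicate strong gating and values near one indicate a gating failure. The sketch below shows that calculation under the assumptions that stimulus onset falls at sample 0 of each averaged epoch and that P50 is taken as the largest positive deflection in a 40-80 ms window; both the window and the peak definition are illustrative simplifications.

```python
import numpy as np

def p50_gating_ratio(erp_s1: np.ndarray, erp_s2: np.ndarray,
                     sfreq: float, window=(0.04, 0.08)) -> float:
    """Ratio of the test-click P50 amplitude to the conditioning-click P50.

    erp_s1, erp_s2 : averaged epochs to the first and second clicks,
                     with stimulus onset assumed at sample 0
    window         : illustrative 40-80 ms search window for the P50 peak
    Values near 0 suggest strong gating; values near 1 suggest a gating failure.
    """
    lo, hi = int(window[0] * sfreq), int(window[1] * sfreq)
    amp_s1 = erp_s1[lo:hi].max()  # largest positive deflection to click 1
    amp_s2 = erp_s2[lo:hi].max()  # largest positive deflection to click 2
    return float(amp_s2 / amp_s1)
```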
The gating effect is thought to take place in temporal lobe structures, possibly the medial temporal lobe (Adler, Waldo, & Freedman, 1985). P50 gating is enhanced by nicotinic cholinergic mechanisms, and it is possible that smoking in patients with schizophrenia is a form of self-medication. Freedman et al. (1994) have shown that blockade of the α7-nicotinic receptor, localized to hippocampal neurons, causes loss of the inhibitory gating response to auditory stimuli in an animal model. The failure of inhibitory mechanisms to gate sensory input to higher-order processing might result in “sensory flooding,” which Freedman suggests may underlie many of the symptoms of schizophrenia.
Mismatch negativity and postonset progression of abnormalities.
Mismatch negativity (MMN) is a negative ERP that occurs about 0.2 sec after infrequent sounds (deviants) are presented in a sequence of repetitive sounds (standards). Deviant sounds may differ from the standards in a simple physical characteristic such as pitch, duration, intensity, or spatial location. Mismatch negativity is primarily evoked automatically, that is, without conscious attention. Its main source is thought to be in or near primary auditory cortex (Heschl gyrus) and to reflect the operations of sensory memory, a memory of past stimuli used by the auditory cortex in analysis of temporal patterns.
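Computationally, MMN is usually obtained as a difference wave: the averaged response to standards is subtracted from the averaged response to deviants, and the amplitude is read off as the most negative value in a window around 0.2 sec. The sketch below illustrates that subtraction; the window limits and the assumption that stimulus onset falls at sample 0 are illustrative choices rather than the parameters of any particular study.

```python
import numpy as np

def mismatch_negativity(erp_standard: np.ndarray, erp_deviant: np.ndarray,
                        sfreq: float, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave and its MMN amplitude.

    Both inputs are averaged epochs with stimulus onset assumed at sample 0.
    The amplitude is the most negative value in an illustrative 100-250 ms
    window, where MMN typically peaks.
    """
    diff = erp_deviant - erp_standard
    lo, hi = int(window[0] * sfreq), int(window[1] * sfreq)
    mmn_amplitude = diff[lo:hi].min()  # MMN is a negative deflection
    return diff, float(mmn_amplitude)
```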
There is a consistent finding of a reduction in amplitude of MMN in chronically ill schizophrenia patients that appears to be traitlike and not ameliorated by either typical (haloperidol) or atypical (clozapine) medication (Umbricht et al., 1998). A point of particular interest has been the finding that the MMN elicited by tones of different frequency (the pitch MMN) is normal in patients at the time of first hospitalization (Salisbury, Bonner-Jackson, Griggs, Shenton, & McCarley, 2001; confirmed by Umbricht, Javitt, Bates, Kane, & Lieberman, 2002), whereas the MMN elicited by the same stimuli is abnormal in chronic schizophrenia. This finding suggests that pitch MMN might index a postonset progression of brain abnormalities. Indeed, the prospective longitudinal study of Salisbury, McCarley, and colleagues (unpublished data) now has preliminary data showing that schizophrenia subjects without a MMN abnormality at first hospitalization develop an abnormality over the next 1.5 years.
In the same group of patients, the Heschl gyrus, the likely source of the MMN, demonstrates a progressive reduction in GM volume over the same time period (Kasai et al., 2003b). In participants with both MRI and MMN procedures, the degree of GM volume reduction was found to parallel the degree of MMN reduction, although the number of subjects examined is currently relatively small and this conclusion is tentative. Although the presence of postonset progression of abnormalities is controversial in the field, it is of obvious importance to our understanding of the disorder and of particular importance to the study of adolescents with onset of schizophrenia, because it would prompt a search for possible medication and/or psychosocial treatment that might ameliorate progression.
Recent multimodal imaging (Wible et al., 2001) has demonstrated a deficit in fMRI (BOLD) activation to the mismatch stimulus in schizophrenia, localized to Heschl's gyrus and the nearby posterior superior temporal gyrus.
Because MMN may reflect, in part, N-methyl-D-aspartate (NMDA)-mediated activity, a speculation about the reason for progression is that NMDA-mediated excitotoxicity might cause both a reduction in the neuropil (dendritic regression) and a concomitant reduction in the MMN in the months following first hospitalization. Only further work will determine whether this speculation is valid. It is noteworthy that the MMN abnormalities present in schizophrenic psychosis are not present in manic psychosis.
P300 and the failure to process unusual events.
The P300 is an ERP that occurs when a low-probability event is detected and consciously processed. Typically, subjects are asked to count a low-probability tone that is interspersed with a more frequently occurring stimulus. The P300 differs from the typical MMN paradigm in that the stimuli are presented at a slower rate (typically around one per second) and the subject is actively and consciously attending and processing the stimuli, whereas the MMN stimuli are not consciously processed. P300 is larger when the stimulus is rare. Whereas MMN is thought to reflect sensory memory, by definition preconscious, P300 is thought to reflect an updating of the conscious information-processing stream and of expectancy.
Reduction of the P300 amplitude at midline sites is the most frequently replicated abnormality in schizophrenia, although P300 reduction is also found in some other disorders. This widespread P300 reduction also appears to be traitlike and an enduring feature of the disease. For example, Ford and colleagues (1994) demonstrated that although P300 showed moderate amplitude increases with symptom resolution, it did not approach normal values during these periods of remission. Umbricht et al. (1998) have reported that atypical antipsychotic treatment led to a significant increase of P300 amplitudes in patients with schizophrenia.
In addition to the midline P300 reduction, both chronically ill and first-episode schizophrenic subjects display an asymmetry in P300 with smaller voltage over the left temporal lobe than over the right. The more pronounced this left temporal P300 amplitude abnormality, the more pronounced is the extent of psychopathology, as reflected in thought disorder and paranoid delusions (e.g., McCarley et al., 1993, 2002). It is possible the increased delusions reflect a failure of veridical updating of cognitive schemata. This left temporal deficit is not found in affective (manic) psychosis.
There are likely several bilateral brain generators responsible for the P300, with a generator in the superior temporal gyrus (STG) likely underlying the left temporal deficit, since, in schizophrenia, the greater the reduction in GM volume in posterior STG, the greater the reduction in P300 amplitude at left temporal sites in both chronic and first-episode schizophrenia patients. It is of note that the posterior STG, on the left in right-handed individuals, is an area intimately related to language processing and thinking (it includes part of Wernicke's area), and an area where volume reductions are associated with increased thought disorder and severity of auditory hallucinations.
Event-related Potential Measures in Children and Adolescents with Schizophrenia
Event-related indices of information processing deficits.
Brain activity reflected in ERPs recorded during performance of information-processing tasks can be used to help isolate the component or stage of information-processing that is impaired in schizophrenia. A series of ERP studies of children and adolescents with schizophrenia, conducted by the UCLA Childhood Onset Schizophrenia program, is summarized below (see Strandburg et al., 1994a and Asarnow, Brown, & Strandburg, 1995 for reviews). These studies examined ERP components while children and adolescents with schizophrenia performed tasks like the span of apprehension (Span; Strandburg, Marsh, Brown, Asarnow, & Guthrie, 1984) and a continuous performance test (CPT; Strandburg et al., 1990). Several decades of studying mental chronometry with ERPs have produced a lexicon of ERP components with well-established neurocognitive correlates (Hillyard & Kutas, 1983). These ERP components can be used to help identify the stages of information processing that are impaired in schizophrenia.
The UCLA ERP studies have focused primarily on four components: contingent negative variation (CNV), hemispheric asymmetry in the amplitude of the P1/N1 component complex, processing negativity (Np), and a late positive component (P300). The CNV measures orienting, preparation, and readiness to respond to an expected stimulus. There are at least two separate generators of the CNV: an early frontal component believed to be an orienting response to warning stimuli, and a later central component associated with preparedness for stimuli-processing and response (Rohrbaugh et al., 1986).
Healthy individuals typically have larger visual P1/N1 components over the right cerebral hemisphere. Many of the UCLA studies compared hemispheric laterality between healthy and schizophrenia individuals. Differences in lateralization during visual information-processing tasks could reflect either differences in the strategic use of processing capacity of the hemispheres or a lateralized neural deficit.
The Np is a family of negative components that occur within the first 400 msec after the onset of a stimulus, indicating the degree to which attentional and perceptual resources have been allocated to stimulus processing. Because the Np waves occur contemporaneously with other components (P1, N1, and P2), they are best seen in difference potentials resulting from the subtraction of non-attend ERPs from attend ERPs (Hillyard & Hansen, 1986; Näätänen, 1982). Finally, as described above, the P300 is a frequently studied index of the recognition of stimulus significance in relation to task demands.
Event-related potential results in child and adolescent schizophrenia.
Table 5.2 summarizes by component the ERP results from six UCLA studies of children or adults with schizophrenia. In all the studies summarized in this table there were large and robust performance differences between groups in both the accuracy and reaction times of signal detection responses. Thus, the behavioral paradigms were successful in eliciting information-processing deficits in these patients.
Table 5.2 Information-Processing Tasks in Child and Adolescent and Adult-Onset Schizophrenia: Summary of Evoked Potential Studies.
Strandburg et al., 1984: Norm > schiz; Norm > schiz; Norm > schiz; Norm > schiz (a)
Strandburg et al., 1990: Norm > schiz; Norm = schiz; Norm > schiz
Strandburg et al., 1991: Norm = schiz; Norm > schiz; Norm > schiz
Strandburg et al., 1994a: Norm > schiz; Schiz > norm; Norm > schiz; Norm > schiz
Strandburg et al., 1994b: Norm > schiz (b); Norm = schiz; Norm > schiz; Norm > schiz
Strandburg et al., 1997: Norm > schiz; Norm > schiz
(a) Task difficulty increased N1 amplitude more in normals than in schizophrenics.
(b) Normals had larger P300 than schizophrenics for targets in the single-target CPT task.
CNV, contingent negative variation; CPT, continuous performance task; Np, processing negativity; P300, late positive component.
The CNV differences between normals and schizophrenics were not consistently found across studies. In the span task (which includes a warning interval) all possible results were obtained (normals > schizophrenics; normals = schizophrenics; and normals < schizophrenics). For the CNV-like negative wave occurring in the CPT task, no group differences were found in either experiment. Because the warning interval was short and the wave was largest frontally, the CNVs in both tasks were most likely the early wave related to orienting. Thus, differences in prestimulus orienting do not seem to reliably account for the poor performance of schizophrenics on these tasks. There are mixed results in CNV experiments on adults with schizophrenia, although most studies found smaller CNVs in schizophrenics (Pritchard, 1986). A longer warning interval than that used in the UCLA experiments (500 msec in the span and 1250 msec ISI in the CPT) may be required to detect preparatory abnormalities in schizophrenia.
In every study summarized in Table 5.2 in which processing negativities were measured, Nps were found to be smaller in schizophrenics. This deficit was seen in both children and adults, with both the span and CPT (Strandburg et al., 1994c) tasks. In contrast, a group of children with ADHD studied while they performed a CPT task showed no evidence of a smaller Np. Diminished Np amplitude is the earliest consistent ERP index of schizophrenia-related information-processing deficit in the UCLA studies. These results suggest impaired allocation of attentional and perceptual resources.
Most studies of processing negativities during channel selective attention tasks (Nd) find that adults with schizophrenia produce less attention-related endogenous negative activity than do normal controls (see reviews by Cohen, 1990, and Pritchard, 1986). The UCLA results complement this finding in adults by using a discriminative processing task and extend these findings to childhood- and adolescent-onset schizophrenia. Reductions in the amplitude of Np in schizophrenia have been attributed to impairments in executive functions responsible for the maintenance of an attentional trace (Baribeau-Braun, Picton, & Gosselin, 1983; Michie, Fox, Ward, Catts, & McConaghy, 1990). Baribeau-Braun et al. (1983) observed normal Nd activity with rapid stimulus presentation rates, but reduced amplitudes with slower rates, findings suggesting that the neural substrates of Nd are intact but improperly regulated in schizophrenia. Individuals with frontal lobe lesions resemble individuals with schizophrenia in this regard, in that both groups do not show increased Np to attended stimuli in auditory selection tasks (Knight, Hillyard, Woods, & Neville, 1981).
As noted earlier, reduced P300 amplitude in schizophrenic adults has been consistently found using a wide variety of experimental paradigms (Pritchard, 1986). As can be seen in Table 5.2, the UCLA studies also consistently observed smaller P300 amplitude in studies of both schizophrenic children and adults, in the span, CPT, and idiom recognition tasks. P300 latency was also measured in two of these studies. Although prolonged P300 latency was found in one study (Strandburg et al., 1994c), no differences were found in another (Strandburg et al., 1994b). The majority of ERP studies have reported normal P300 latency in schizophrenics (Pritchard, 1986).
Absence of right-lateralized P1/N1 amplitude in visual ERPs has been a consistent finding in all five of the UCLA studies that used the CPT and span tasks. Abnormally lateralized electrophysiological responses, related either to lateralized dysfunction in schizophrenia or to a pathology-related difference in information-processing strategy, are a consistent aspect of both adult- and childhood-onset schizophrenia. These results are consistent with abnormal patterns of hemispheric laterality in schizophrenics (e.g., Tucker & Williamson, 1984).
In summary, ERP studies of schizophrenic adults and children performing discriminative processing tasks suggest that the earliest reliable electrophysiological correlate of impaired discriminative processing in schizophrenia is the Np component. It appears that children and adolescents with schizophrenia are deficient in the allocation of attentional resources necessary for efficient and accurate discriminative processing. Although diminished amplitude processing negativities have been observed in ADHD in auditory paradigms (Loiselle, Stamm, Maitinsky, & Whipple, 1980; Satterfield, Schell, Nicholar, Satterfield, & Freese, 1990), Np was found to be normal in ADHD children during the UCLA CPT task (Strandburg et al., 1994a). Diminished Np during visual processing may be specific to schizophrenic pathology. Later ERP abnormalities in schizophrenia (e.g., diminished amplitude P300) may be a “downstream” product of the uncertainty in stimulus recognition created by previous discriminative difficulties, or they may reflect additional neurocognitive deficits. Abnormalities in later ERP components are not specific to schizophrenia, having been reported in studies of ADHD children (reviewed by Klorman, 1991).
The absence of P1/N1 asymmetry in the visual ERPs of schizophrenics is contemporaneous with diminished Np. However, the fact that Np amplitude varies with the processing demands of the task, whereas P1/N1 asymmetry does not, suggests that the Np deficit plays a greater role in the information-processing deficits manifested by children and adolescents with schizophrenia.
Magnetoencephalography—A Complement to Electroencephalography
Magnetoencephalography (MEG) is the measure of magnetic fields generated by the brain. A key difference between the physical source of the MEG and that of the EEG is that the MEG is sensitive to cells that lie tangential to the brain surface and consequently have magnetic fields oriented tangentially. Cells with a radial orientation (perpendicular to the brain surface) do not generate signals detectable with MEG. The EEG and MEG are complementary in that the EEG is most sensitive to radially oriented neurons and fields. This distinction arises, of course, because magnetic fields are generated at right angles to electrical fields. One major advantage that magnetic fields have over electrical potentials is that, once generated, they are relatively invulnerable to intervening variations in the media they traverse (i.e., the skull, gray and white matter, and CSF), unlike electrical fields, which are “smeared” by different electrical conductivities. This has made MEG a favorite technology for use in source localization, in which attention has been especially focused on early potentials.
Perhaps because of the expense and nonmobility of the recording equipment needed for MEG, there has been relatively little work using MEG in schizophrenia to replicate and extend the findings of ERPs. A search of Medline in 2000 revealed only 23 published studies using MEG measures of brain activity in schizophrenia. The extant studies have shown interesting results. Reite and colleagues demonstrated that the M100 component (the magnetic analogue to the N100) showed less interhemispheric asymmetry in male schizophrenics and had different source orientations in the left hemisphere. The recent review by Reite, Teale, and Rojas (1999) should be consulted for more details of the work on MEG in schizophrenia.
In summary, electrophysiology has the advantage of providing real-time information on brain processing, with a resolution in the millisecond range. In schizophrenia, it shows abnormalities of processing from the very earliest stages (Np, mismatch negativity, P50, gamma activity) to later stages of attentive discrepancy processing (P300) and semantic processing (N400). This suggests a model of disturbance that encompasses a wide variety of processing and is most compatible with a brain model of circuit abnormalities underlying processing at each stage, particularly in the auditory modality. This is also compatible with MRI studies of abnormal GM regions associated with abnormal ERPs.
One of the more intriguing potential applications to schizophrenia in adolescence is using ERPs to track progression of brain abnormalities. The mismatch negativity ERP is normal at onset (first hospitalization) of schizophrenia but becomes abnormal in the course of the disorder (this developing abnormality is associated with a loss of GM in auditory cortex). The mismatch negativity is thus potentially of use in tracking the ability of therapeutic interventions to minimize brain changes. It is not yet known if gamma abnormalities become evident early or late in the course of schizophrenia.
In recent years, the postpubescent period has received increasing attention from researchers in the field of schizophrenia (Stevens, 2002). This interest stems largely from the fact that adolescence is associated with a significant rise in the risk for psychotic symptoms, particularly prodromal signs of schizophrenia (van Oel, Sitskoorn, Cremer, & Kahn, 2002; Walker, 2002). Further, rates of other psychiatric syndromes, including mood and anxiety disorders, escalate during adolescence. It has been suggested that hormonal changes may play an important role in these developmental phenomena, making adolescence a critical period for the emergence of mental illness (Walker, 2002).
Puberty results from increased activation of the hypothalamic-pituitary-gonadal (HPG) axis, which results in a rise in secretion of sex hormones (steroids) by the gonads in response to gonadotropin secretion from the anterior pituitary. Rising sex steroid concentrations are associated with other changes, including increased growth hormone secretion.
There is also an augmentation of activity in the hypothalamic-pituitary-adrenal (HPA) axis during adolescence. This neural system governs the release of several hormones and is activated in response to stress. Cortisol is among the hormones secreted by the HPA axis, and researchers can measure it in body fluids to index the biological response to stress. Beginning around age 12, there is an age-related increase in baseline cortisol levels in normal children. The change from pre- to postpubertal status is linked with a marked rise in cortisol (Walker, Walder, & Reynolds, 2001) and a significant rise in cortisol clearance and in the volume of cortisol distribution.
The significance of postpubertal hormonal changes has been brought into clearer focus as researchers have elucidated the role of steroid hormones in neuronal activity and morphology (Dorn & Chrousos, 1997; Rupprecht & Holsboer, 1999). Neurons contain receptors for adrenal and gonadal hormones. When activated, these receptors modify cellular function and impact neurotransmitter function. Short-term effects (nongenomic effects) of steroid hormones on cellular function are believed to be mediated by membrane receptors. Longer-term effects (genomic effects) can result from the activation of intraneuronal or nuclear receptors. These receptors can influence gene expression. Brain changes that occur during normal adolescence may be regulated by hormonal effects on the expression of genes that govern brain maturation.
Gonadal and adrenal hormone levels are linked with behavior in adolescents. In general, both elevated and very low levels are associated with greater adjustment problems. For example, higher levels of the adrenal hormones (androstenedione) are associated with adjustment problems in both boys and girls (Nottelmann et al., 1987). Children with an earlier onset of puberty have significantly higher concentrations of adrenal androgens, estradiol, thyrotropin, and cortisol. They also manifest more psychological disorders (primarily anxiety disorders), self-reported depression, and parent-reported behavior problems (Dorn, Hitt, & Rotenstein, 1999). The relationship between testosterone and aggressive behavior is more pronounced in adolescents with more conflictual parent-child relationships, and this demonstrates the complex interactions between hormonal and environmental factors (Booth, Johnson, Granger, Crouter, & McHale 2003).
It is conceivable that hormones are partially exerting their effects on behavior by triggering the expression of genes that are linked with vulnerability for behavioral disorders. Consistent with this assumption, the heritability estimates for antisocial behavior (Jacobson, Prescott, & Kendler, 2002) and depression (Silberg et al., 1999) increase during adolescence. Further, the relationship between cortisol and behavior may be more pronounced in youth with genetic vulnerabilities. For example, increased cortisol is more strongly associated with behavior problems in boys and girls with fragile X than in their unaffected siblings (Hessl et al., 2002).
To date, there has been relatively little research on the HPG axis and schizophrenia, and there is no database on gonadal hormones in adolescent schizophrenia patients. The available reports on adult schizophrenia patients suggest that estrogen may serve to modulate the severity of psychotic symptoms and enhance prognosis (Huber et al., 2001; Seeman, 1997). Specifically, there is evidence that estrogen may have an ameliorative effect by reducing dopaminergic activity.
The role of the HPA axis in schizophrenia has received greater attention. A large body of research literature suggests a link between exposure to psychosocial stress and symptom relapse and exacerbation in schizophrenia (Walker & Diforio, 1997). It has been suggested that activation of the HPA axis mediates this effect (Walker & Diforio, 1997). Dysregulation of the HPA axis, including elevated baseline cortisol and cortisol response to pharmacological challenge, is often found in unmedicated schizophrenia patients (e.g., Lammers et al., 1995; Lee, Woo, & Meltzer, 2001; Muck-Seler, Pivac, Jakovljevic, & Brzovic, 1999). Patients with higher cortisol levels have more severe symptoms (Walder, Walker, & Lewine, 2000) and are more likely to commit suicide (Plocka-Lewandowska, Araszkiewicz, & Rybakowski, 2001).
Basic research has demonstrated that cortisol affects the activity of several neurotransmitter systems. This includes dopamine, a neurotransmitter that has been implicated in the etiology of schizophrenia (Walker & Diforio, 1997). The assumption is that increased dopamine activity plays a role in psychotic symptoms. Cortisol secretion augments dopamine activity. Thus it may be that when patients are exposed to stress and elevations in cortisol ensue, dopamine activity increases and symptoms are triggered or exacerbated.
Although there are no published reports on cortisol secretion in adolescents with schizophrenia, HPA axis function has been studied in adolescents with schizotypal personality disorder (Weinstein, Diforio, Schiffman, Walker, & Bonsall, 1999). Schizotypal personality disorder (SPD) involves subclinical manifestations of the symptoms of schizophrenia, including social withdrawal and unusual perceptions and ideas. This disorder is both genetically and developmentally linked with schizophrenia. The genetic link is indicated by the higher rate of SPD in the family members of patients diagnosed with schizophrenia. From a developmental perspective, there is extensive evidence that the defining symptoms of SPD often predate the diagnosis of schizophrenia, usually arising during adolescence.
When compared to healthy adolescents, adolescents with SPD show elevated baseline levels of cortisol (Weinstein et al., 1999) and a more pronounced developmental increase in cortisol when measured over a 2-year period (Walker et al., 2001). Further, SPD adolescents who show a greater developmental rise in cortisol are more likely to have an increase in symptom severity over time. This suggests that increased activation of the HPA axis may contribute to the worsening of symptoms as the child progresses through adolescence.
Research on the role of neurohormones in schizophrenia, especially the gonadal and adrenal hormones, should be given high priority in the future. In particular, it will be important to study hormonal processes in youth at risk for schizophrenia. There are several key questions to be addressed in clinical research. Are hormonal changes linked with the emergence of the prodromal phases of schizophrenia? Do rising levels of adrenal or gonadal hormones precede the onset of symptoms? Is there a relationship between hormonal factors and the brain changes that have been observed in the prodromal phase of schizophrenia? At the same time, basic science research is expected to yield new information about the impact of hormones on gene expression. This may lead to clinical research exploring the role of adolescent hormone changes in gene expression in humans.
BRAIN CIRCUITRY IN SCHIZOPHRENIA
Information processing in the brain is a complex task, and even simple sensory information, such as recognizing a sight or a sound, engages circuits of cells in multiple regions of the brain. Scientists early in the 20th century imagined that brain function occurred in discrete steps along a linear stream of information flow. However, the recent emergence of brain imaging as an important tool for understanding the neuroscience of cognition and emotion has demonstrated that the brain operates more like a parallel processing computer with feed-forward and feedback circuitry that manages information in distributed and overlapping processing modules working in parallel. Thus, abnormal function in one brain region will have functional ripple effects in other regions, and abnormal sharing of information between regions, perhaps because of problems in the connectional wiring, can result in abnormal behavior even if individual modules are functionally intact.
In light of the elaborate and complex symptoms of schizophrenia, it is not surprising that researchers have increasingly focused on evidence of malfunction within distributed brain circuits rather than within a particular single brain region or module. Most of this work has been based on in vivo physiologic techniques, such as imaging and electrophysiology. At the same time, basic research in animals and to a lesser extent in humans has shown that the elaboration of brain circuitry is a lifelong process, especially the connection between cells in circuits within and between different regions of the cortex. This process of development and modification of connections between neurons is particularly dynamic during adolescence and early adult life. In this section, we will review some of the recent evidence that local and distributed abnormalities of brain circuitry are associated with schizophrenia and their implications for adolescent psychosis.
Two of the most often cited areas of the brain said to be abnormal in schizophrenia are the cortices of the frontal and temporal lobes. Indeed, damage to these regions caused by trauma, stroke, or neurological disease is more likely to be associated with psychosis than is damage to other brain regions. Recent studies using neuroimaging techniques have suggested that malfunction at the systems level—that is, at the relationship of processing in the temporal and frontal lobes combined—best characterizes the problem in patients with schizophrenia. For example, in a study of identical twins discordant for schizophrenia, differences within each twin pair in volume of the hippocampus predicted very strongly the difference in the function of the prefrontal cortex assayed physiologically during a cognitive task dependent on the function of the prefrontal cortex (Weinberger, Berman, Suddah, & Torrey, 1992).
A peculiar disturbance in the use of language, so-called thought disorder, is one of the cardinal signs of schizophrenia. Language is highly dependent on frontotemporal circuitry, which is disturbed in schizophrenia. When patients are asked to generate a list of words beginning with a specific consonant, instead of activating the frontal lobes and deactivating the temporal lobes, as seen in healthy subjects, they do the opposite. More detailed analyses have examined declarative memory encoding, storage, and retrieval as related to language. Encoding is manipulated by instructing subjects to process material more deeply, as, for example, to make semantic judgments about to-be-remembered words, such as whether the words represent living or nonliving, or abstract or concrete words. This deeper, more elaborate encoding is compared with a shallower, more superficial level of encoding, such as having subjects judge the font (upper case versus lower case) of each word presented. Compared with healthy controls, patients with schizophrenia show different patterns of fMRI activation for semantically encoded words, with significantly reduced left inferior frontal cortex activation but significantly increased left superior temporal cortex activation (Kubicki et al., 2003). During tests of word retrieval, patients with schizophrenia tend to show underengagement of the hippocampus, but at the same time their prefrontal cortex is overactive (Heckers et al., 1998). During performance of effortful tasks, by contrast, people with schizophrenia show increased activity in hippocampus and an alteration in the connection between hippocampus and anterior cingulate cortex (Holcomb et al., 2000; Medoff, Holcomb, Lahti, & Tamminga, 2001). These studies suggest that the information-processing strategy for encoding and retrieving learned information, which depends on an orchestrated duet between frontotemporal brain regions, is disturbed in patients with schizophrenia.
Similar results have been found in studies focused on prefrontally mediated memory, so-called working memory, in which the normal relationships between prefrontal activation and hippocampal deactivation are disrupted in schizophrenia (Callicott et al., 2000). Finally, recent statistical approaches to interpreting functional imaging results based on patterns of intercorrelated activity across the whole brain have demonstrated that abnormalities in schizophrenia are distributed across cortical regions. In particular, the pattern based on the normal relationships between prefrontal and temporal cortical activity is especially abnormal (Meyer-Lindenberg et al., 2001). This apparent functional abnormality in intracortical connectedness has been supported by anatomical evidence from diffusion tensor imaging, which has pointed to an abnormality in the white matter links between frontal and temporal lobes (e.g., Kubicki et al., 2002).
The evidence for abnormal function across distributed cortical circuitry is quite compelling in schizophrenia, and other regions representing other circuits are also implicated (Tamminga et al., 2002; Weinberger et al., 2001). Indeed, it is not clear that any particular area of cortex is normal under all conditions. This may reflect simply the interconnectedness of the brain or it may suggest that schizophrenia is especially characterized by a “dysconnectivity.” It is impossible at the current level of our understanding of the disease to differentiate between these possibilities.
Schizophrenia disrupts not only circuitry linking brain regions but also the microcircuitry within brain regions, as shown by abnormal electrophysiologic activity during simple, early-stage “automatic processing” of stimuli, processing relatively independent of directed, conscious control. For example, healthy subjects automatically generate a robust EEG response in and near primary auditory cortex to tones differing slightly in pitch from others in a series (“mismatch” response), whereas the processing response in schizophrenia to the mismatch is much less pronounced (Wible et al., 2001).
Neurophysiological studies have focused largely on function of the cerebral cortex, but the pharmacological treatment of schizophrenia targets principally the dopamine system, which has long implicated the striatum and related subcortical sites. In fact, cortical function and activity of the subcortical dopamine system are intimately related, consistent with circuitry models of brain function. Animal studies have demonstrated conclusively that perturbations in cortical function, especially prefrontal function, disrupt a normal tonic brake on dopamine neurons in the brainstem, leading to a loss of the normal regulation of these neurons and to their excessive activation (Weinberger et al., 2001). It is thought that the prefrontal cortex helps guide the dopamine reward system toward the reinforcing of contextually appropriate stimuli. In the absence of such normal regulation, reward and motivation may be less appropriately targeted.
Neuroimaging studies of the dopamine system in patients with schizophrenia, particularly those who are actively psychotic, have found evidence of overactivity in the striatum (Laruelle, 2000). Recently, two studies reported that this apparent overactivation of the subcortical dopamine system is strongly predicted by measures of abnormal prefrontal cortical function (Bertolino et al., 2000; Meyer-Lindenberg et al., 2002). Moreover, reducing dopaminergic transmission with dopamine antagonists in subcortical dopamine-rich regions is associated with substantial alterations in frontal cortex function (Holcomb et al., 1996), presumably mediated through circuits connecting the striatum to the frontal cortex (Alexander & Crutcher, 1990). These data illustrate that what happens in the prefrontal cortex is very important to how other brain systems function and that the behavioral disturbances of schizophrenia involve dysfunction of diverse and interconnected brain systems.
Brain Circuitry and Implications for Adolescence
Contrary to long-held ideas that the brain was mostly grown-up after childhood, it is now clear that adolescence is a time of explosive growth and development of the brain. While the number of nerve cells does not change after birth, the richness and complexity of the connections between cells do, and the capacity for these networks to process increasingly complex information changes accordingly. Cortical regions that handle abstract information and that are critical for learning and memory of abstract concepts—rules, laws, codes of social conduct—seem to become much more likely to share information in a parallel processing fashion as adulthood approaches. This pattern of increased cortical information sharing is reflected in the patterns of connections between neurons in different regions of the cortex. Thus, the dendritic trees of neurons in the prefrontal cortex become much more complex during adolescence, which indicates that the information flow between neurons has become more complex (Lambe, Krimer, & Goldman-Rakic, 2000). The possibility that schizophrenia involves molecular and functional abnormalities of information flow in these circuits suggests that such abnormalities may converge on the dynamic process of brain maturation during adolescence and increase the risk of a psychotic episode in predisposed individuals.
Pathophysiology of Schizophrenia in Adolescence
Despite over a century of research, we have only a limited understanding of what causes schizophrenia and related psychotic disorders. Early studies of the biological basis of schizophrenia relied mostly on either postmortem studies of brains of people with this illness or brain imaging studies typically of older patients with chronic schizophrenia, many of whom were treated with medications. It was therefore difficult to know to what extent the observed changes were the results of aging, illness chronicity, or medication effects. One can avoid such difficulties by conducting studies of individuals in the early phases of schizophrenia (Keshavan & Schooler, 1992). First, these studies allow us to clarify which of the biological processes may be unique to the illness and which ones might be a result of medications or of persistent illness. Second, first-episode studies allow us to longitudinally evaluate the course of the brain changes, and how such changes can help us predict outcome with treatment. Follow-up studies suggest that less than half of early psychosis patients go on to develop a chronic form of schizophrenia with poor level of functioning and intellectual deficits (Harrison et al., 2001). An understanding of which patients may have such an outcome will greatly help treatment decisions early in the illness. Finally, not all who have features of the prodromal phases of the illness go on to develop the psychotic illness (Yung et al., 2003). Studies of the prodromal and early course of psychotic disorders provide an opportunity to elucidate the neurobiological processes responsible for the transition from the prodromal to psychotic phase of the illness.
Several conceptual models of the biology and causation of schizophrenia have been recently suggested, and serve to guide research into the early phase of this illness. One view, which dates back to the late 1980s, is the so-called early neurodevelopmental model (Murray & Lewis, 1987; Weinberger, 1987). This model posits abnormalities early during brain development (perhaps at or before birth) as mediating the failure of brain functions in adolescence and early adulthood. Several lines of evidence, such as an increased rate of birth complications, minor physical and neurological abnormalities, and subtle behavioral difficulties in children who later developed schizophrenia, support this view. However, many nonaffected persons in the population also have these problems; their presence cannot inform us with confidence whether or not schizophrenia will develop later in life. The fact that the symptoms typically begin in adolescence or early adulthood suggests that the illness may be related to some biological changes related to adolescence occurring around or prior to the onset of psychosis. Childhood is characterized by proliferation of synapses and dendrites, and normal adolescence is characterized by elimination or pruning of unnecessary synapses in the brain, a process that serves to make nerve cell transmission more efficient (Huttenlocher et al., 1982). This process could go wrong, and an excessive pruning before or around the onset phase of illness (Feinberg, 1982b; Keshavan, Anderson, & Pettegrew, 1994) has been thought to mediate the emergence of psychosis in adolescence or early adulthood. Our understanding of the underlying neurobiology of this phase of illness remains poor, however. Another view is that active biological changes could occur after the onset of illness, during the commonly lengthy period of untreated psychosis. This model proposes progressive neurodegenerative changes (Lieberman, Perkins, et al., 2001). It is possible that all three processes are involved in schizophrenia (Keshavan & Hogarty, 1999); additionally, environmental factors such as drug misuse (Addington & Addington, 1998) and psychosocial stress (Erickson, Beiser, Iacono, Fleming, & Lin, 1989) may trigger the onset and influence the course of schizophrenia. Careful studies of the early phase of schizophrenia can shed light on these apparently contrasting models. The three proposed pathophysiological models might reflect different critical periods for prevention and therapeutic intervention.
THE GENETICS OF SCHIZOPHRENIA
Remarkable progress has been made in understanding genetic factors related to schizophrenia. We will summarize this work in the following section. Since almost no work has been done specifically on the genetics of adolescent-onset schizophrenia, we focus on studies of typical samples of adult-onset cases.
Is Schizophrenia Familial?
The most basic question in the genetics of schizophrenia is whether the disorder aggregates (or “runs”) in families. Technically, familial aggregation means that a close relative of an individual with a disorder is at increased risk for that disorder, compared to a matched individual chosen at random from the general population. Twenty-six early family studies, conducted prior to 1980 and lacking modern diagnostic procedures and appropriate controls, consistently showed that first-degree relatives of schizophrenia patients had a risk for schizophrenia that was roughly 10 times greater than would be expected in the general population (Kendler, 2000). Since 1980, 11 major family studies of schizophrenia have been reported that used blind diagnoses, control groups, personal interviews, and operationalized diagnostic criteria. The level of agreement in results is impressive. Every study showed that the risk of schizophrenia was higher in first-degree relatives of schizophrenic patients than in matched controls. The mean risk for schizophrenia in these 11 studies was 0.5% in relatives of controls and 5.9% in the relatives of schizophrenics. Modern studies suggest that, on average, parents, siblings, and offspring of individuals with schizophrenia have a risk of illness about 12 times greater than that of the general population, a figure close to that found in the earlier studies.
Recently, results of the first methodologically rigorous family study of child-onset schizophrenia have been reported. Compared to parents of matched normal controls and children with ADHD, parents of childhood-onset schizophrenia had an over 10-fold increased risk for schizophrenia. This finding supports the hypothesis of etiologic continuity between childhood-and adult-onset schizophrenia (Asarnow, Tompson, & Goldstein, 2001).
To What Extent Is the Familial Aggregation of Schizophrenia Due to Genetic Versus Environmental Factors?
Resemblance among relatives can be due to either shared or family environment (nurture), to genes (nature), or to both. A major goal in psychiatric genetics is to determine the degree to which familial aggregation for a disorder such as schizophrenia results from environmental or genetic mechanisms. Although sophisticated analysis of family data can begin to make this discrimination, nearly all of our knowledge about this problem in schizophrenia comes from twin and adoption studies.
Twin studies are based on the assumption that “identical,” or monozygotic (MZ), and “fraternal,” or dizygotic (DZ), twins share a common environment to approximately the same degree. However, MZ twins are genetically identical, whereas DZ twins (like full siblings) share on average only half of their genes. Results are available from 13 major twin studies of schizophrenia published from 1928 to 1998 (Kendler, 2000). Although modest differences are seen across studies, overall, the agreement is impressive. Across all studies, the average concordance rate for schizophrenia in MZ twins is 55.8% and in DZ twins, 13.5%. When statistical models are applied to these data to estimate heritability (the proportion of variance in liability in the population that is due to genetic factors), the average across all 13 studies is 72%. This figure, which is higher than that found for most common biomedical disorders, means that, on average, genetic factors are considerably more important than environmental factors in affecting the risk for schizophrenia.
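To make the arithmetic behind such figures concrete, the short sketch below (in Python, purely as an illustration) applies a liability-threshold version of Falconer's method to the pooled concordances quoted above. The 1% lifetime prevalence and the treatment of the quoted rates as probandwise concordances are assumptions made here for illustration; the published 72% average comes from full model fitting of all 13 studies, so this rough calculation lands in the same general range but will not reproduce that figure exactly.

# Rough sketch: converting pooled twin concordances into a Falconer-style
# heritability estimate under a liability-threshold model.
# Assumptions (not from the chapter): lifetime prevalence K = 1%, and the
# quoted rates treated as P(co-twin affected | proband affected).
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

K = 0.01                       # assumed population prevalence of schizophrenia
t = norm.ppf(1 - K)            # liability threshold corresponding to K

def liability_correlation(concordance):
    """Solve for the liability-scale (tetrachoric) correlation r that
    reproduces the observed probandwise concordance."""
    def gap(r):
        bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        joint_tail = 2 * K - 1 + bvn.cdf([t, t])   # P(both twins above threshold)
        return joint_tail / K - concordance
    return brentq(gap, 0.01, 0.99)

r_mz = liability_correlation(0.558)   # pooled MZ concordance from the text
r_dz = liability_correlation(0.135)   # pooled DZ concordance from the text
h2 = 2 * (r_mz - r_dz)                # Falconer's heritability estimate
print(f"liability correlations: MZ={r_mz:.2f}, DZ={r_dz:.2f}, h^2 ~ {h2:.2f}")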
Adoption studies can clarify the role of genetic and environmental factors in the transmission of schizophrenia by studying two kinds of rare but informative relationships: (1) individuals who are genetically related but do not share their rearing environment, and (2) individuals who share their rearing environment but are not genetically related. Three studies conducted in Oregon, Denmark, and Finland all found significantly greater risk for schizophrenia or schizophrenia-spectrum disorders in the adopted-away offspring of schizophrenic parents than that for the adopted-away offspring of matched control mothers. The second major adoption strategy used for studying schizophrenia begins with ill adoptees rather than with ill parents and compares rates of schizophrenia between groups of biologic parents and groups of adoptive parents. In two studies from Denmark using this strategy, the only group with elevated rates of schizophrenia and schizophrenia-spectrum disorders were the biological relatives of the schizophrenic adoptees (Kety et al., 1994).
Twin and adoption studies provide strong and consistent evidence that genetic factors play a major role in the familial aggregation of schizophrenia. Although not reviewed here, evidence for a role for nongenetic familial factors is much less clear. Some studies suggest they may contribute modestly to risk for schizophrenia, but most studies find no evidence for significant nongenetic familial factors for schizophrenia.
What Psychiatric Disorders Are Transmitted Within Families of Individuals With Schizophrenia?
Since the earliest genetic studies of schizophrenia, a major focus of such work has been to clarify more precisely the nature of the psychiatric syndromes that occur in excess in relatives of schizophrenic patients. To summarize a large body of evidence, relatives of schizophrenia patients are at increased risk for not only schizophrenia but also schizophrenia-like personality disorders (best captured by the DSM-IV categories of schizotypal and paranoid personality disorder) and other psychotic disorders (Kendler, 2000). However, there is good evidence that relatives of schizophrenia patients are not at increased risk for other disorders, such as anxiety disorders and alcoholism. The most active debate in this area is the relationship between schizophrenia and mood disorders. Most evidence suggests little if any genetic relationship between these two major groups of disorders, but some research does suggest a relationship particularly between schizophrenia and major depression.
The evidence that other disorders in addition to schizophrenia occur at greater frequency in the close relatives of individuals with schizophrenia has led to the concept of the schizophrenia-spectrum—a group of disorders that all bear a genetic relationship with classic or core schizophrenia.
What Is the Current Status for Identifying Specific Genes That Predispose to Schizophrenia?
Given the evidence that genetic factors play an important role in the etiology of schizophrenia, a major focus of recent work has been to apply the increasingly powerful tools of human molecular genetics to localize and identify the specific genes that predispose to schizophrenia. Two strategies have been employed in this effort: linkage and association. The goal of linkage studies is to identify areas of the human genome that are shared more frequently than would be expected by relatives who are affected. If such areas can be reliably identified, then these regions may contain one or more specific genes that influence the liability to schizophrenia. The method of linkage analysis has been extremely successful in identifying the location of genes for simple, usually rare medical genetic disorders (termed “Mendelian” disorders) in which there is a one-to-one relationship between having the defective gene and having the disorder. This method, however, has had more mixed results when applied to disorders such as schizophrenia that are genetically “complex.” Such complex disorders are likely to be the result of multiple genes, none of which have a very large impact on risk, interacting with a range of environmental risk factors.
Eighteen genome scans for schizophrenia have been published between 1994 and 2002. None of these scans has revealed evidence for a single gene with a large impact on risk for schizophrenia. Indeed, these results suggest that the existence of a single susceptibility locus that accounts for a large majority of the genetic variance for schizophrenia can now be effectively ruled out.
The most pressing scientific issue in the interpretation of linkage studies of schizophrenia has been whether there is agreement at above-chance levels across studies on which individual regions of the genome contain susceptibility genes for schizophrenia. Until recently, the across-study agreement had not been very impressive.
Two recent findings have increased our confidence that linkage studies of schizophrenia may be producing reliable results. First, in a large-scale study of families containing two or more cases of schizophrenia, conducted in Ireland, the sample was divided, prior to analysis, into three random subsets (Straub, MacLean, et al., 2002). When a genome scan was performed on these three subsets, three of the four regions that most prominently displayed evidence for linkage (on chromosomes 5q, 6p, and 8p) were replicated across all three subsets. Interestingly, one region, on chromosome 10p, was not replicated even within the same study. Probably more important, Levinson and collaborators were able to obtain raw data from nearly all major published genome scans of schizophrenia to perform a meta-analysis—a statistical method for rigorously combining data across multiple samples (Lewis et al., 2003). Ten regions produced nominally significant results including 2q, 5q, 6p, 22q, and 8p. The authors concluded: “There is greater consistency of linkage results across studies than had been previously recognized. The results suggest that some or all of these regions contain loci that increase susceptibility to schizophrenia in diverse populations.”
On the Cusp of Gene Discovery in Schizophrenia
The evidence for replicated linkages in schizophrenia represents an important step toward the ultimate goal of identifying susceptibility genes and characterizing their biologic effects. Because the human genome contains within its 23 pairs of chromosomes over three billion nucleotides (i.e., "letters" in the genetic alphabet) and 30,000 genes (i.e., protein-encoding units), it is a large territory to explore. Linkage is a strategy to narrow the search and to provide a map of where the treasure (i.e., the genes) may lie. The linkage results in schizophrenia so far have highlighted several regions of the genome for a more thorough search. Association (also called linkage disequilibrium) is the next critical step in this search for the treasure. Linkage represents a relationship between regions of the genome shared by family members who also share the phenotype of interest—here, schizophrenia. It provides a low-resolution map because family members share relatively large regions of any chromosome. Association, however, represents a relationship between specific alleles (i.e., specific variation in a gene or in a genetic marker) and illness in unrelated individuals. It provides a high-resolution map because unrelated people share relatively little genetic information. If a given allele is found more frequently in unrelated individuals with a similar disease than it is in the general population, the probability that this specific allele is a causative factor in the disease is enhanced. If the frequency of a specific allele (i.e., a specific genetic variation) is greater in a sample of unrelated individuals who have the diagnosis of schizophrenia than it is in a control population, the allele is said to be associated with schizophrenia. This association represents one of three possibilities: the allele is a causative mutation related to the etiology of the disease; the allele is a genetic variation that is physically close to the true causative mutation (i.e., in "linkage disequilibrium" with the true mutation); or the association is a spurious relationship reflecting population characteristics not related to the phenotype of interest. This latter possibility is often referred to as a population stratification artifact, meaning that differences in allele frequencies between the cases and control samples are not because of disease but because of systematic genetic differences between the comparison populations.
Association has become the strategy of choice for fine mapping of susceptibility loci and for preliminary testing of whether specific genes are susceptibility genes for schizophrenia. The strategy involves identifying variations (“polymorphisms”) in a gene of interest and then performing a laboratory analysis of the DNA samples to “type” each variation in each individual and determine its frequency in the study populations. Genetic sequence variations are common in the human genome and public databases have been established to catalog them. The most abundant sequence variations are single nucleotide polymorphisms (SNPs), which represent a substitution in one DNA base. Common SNPs occur at a frequency of approximately one in every 1,000 DNA bases in the genome and over two million SNPs have been identified. While SNPs are relatively common, most SNPs within genes either do not change the amino acid code or are in noncoding regions of genes (“introns”) and are thus not likely to have an impact on gene function.
Early association studies in schizophrenia focused on genes based on their known function and the possibility that variations in their function might relate to the pathogenesis of the disease. These so-called functional candidate gene studies had no a priori probability of genetic association. A number of studies compared frequencies of variations in genes related to popular neurochemical hypotheses about schizophrenia, such as the dopamine and glutamate hypotheses, in individuals with schizophrenia with those in control samples. In almost every instance the results were mixed, with some positive but mostly negative reports. Many of the positive studies were compromised by potential population stratification artifacts. However, because the effect on risk of any given variation in any candidate gene (e.g., a dopamine or glutamate receptor gene) is likely to be small (less than a twofold increase in risk), most studies have been underpowered to establish association or to rule it out.
Recent association studies have been much more promising, primarily because of the linkage results. Using the linkage map regions as a priori entry points into the human genetic sequence databases, genes have been identified in each of the major linkage regions that appear to represent at least some of the basis for the linkage results. Moreover, confirmations of association in independent samples have appeared, which, combined with the linkage results, comprise convergent evidence for the validity of these genetic associations. In the August and October 2002 issues of the American Journal of Human Genetics, the first two articles appeared that claimed to identify susceptibility genes for schizophrenia, starting with traditional linkage followed by fine association mapping. Both of these were in chromosomal regions previously identified by multiple linkage groups: dysbindin (DTNBP1) on chromosome 6p22.3 (Straub, Jiang, et al., 2002) and neuregulin 1 (NRG1) on chromosome 8p-p21 (Stefansson et al., 2002). Both groups identified the genes in these regions from public databases and then found variations (SNPs) within the genes that could be tested via an association analysis. In both studies, the statistical signals were strong and unlikely to occur by chance. In the January 2003 issue of the same journal, two further articles were published, replicating, in independent population data sets also from Europe, association to variations in the same genes (Schwab et al., 2003; Stefansson et al., 2003). In the December 2002 issue of the same journal, authors of a study on a large population sample from Israel reported very strong statistical association to SNPs in the gene for catechol-O-methyltransferase (COMT), which was mapped to the region of 22q that had been identified as a susceptibility locus in several linkage studies (Shifman et al., 2002). Positive association to variation in COMT had also been reported in earlier studies in samples from China, Japan, France, and the United States (Egan et al., 2001). Starting with the linkage region on chromosome 13q34, a group from France discovered a novel gene, called G72, and reported in two population samples association between variations in this gene and schizophrenia (Chumakov et al., 2002). The SNP variations in G72 have recently been reported to be associated with bipolar disorder as well.
In addition to these reports based on relatively strong linkage regions, several other promising associations have emerged from genes found in weaker linkage regions. For example, a weak linkage signal was found in several genome scans in 15q, a region containing the gene for the α7-nicotinic receptor (CHRNA7; Raux et al., 2002). This gene has been associated with an intermediate phenotype related to schizophrenia, the abnormal P50 EEG evoked response. Preliminary evidence has been reported that variants in CHRNA7 are associated with schizophrenia as well. DISC-1 is a gene in 1q43, which was a positive linkage peak in a genome linkage scan from Finland. A chromosomal translocation originating in this gene has been found to be very strongly associated with psychosis in Scottish families having this translocation (Millar et al., 2000). Finally, in a study of gene expression profiling from schizophrenic brain tissue, a gene called RGS4 was found to have much lower expression in schizophrenic brains than in normal brains. This gene is found in another 1q region that was positive in a linkage scan from Canada, and SNPs identified in RGS4 have now been shown to be associated with schizophrenia in at least three population samples (Chowdari et al., 2002). This convergent evidence from linkage and association studies implicates at least seven specific genes as potentially contributing risk for schizophrenia.
From Genetic Association to Biological Mechanisms of Risk
Genetic association identifies genes but it does not identify disease mechanisms. Most of the genes implicated thus far are based on associations with variations that are not clearly functional, in the sense that they do not appear to change the integrity of the gene. Most are SNPs in intronic regions of genes, which do not have an impact on traditional aspects of gene function, such as the amino acid sequence or regulation of transcription. So, the associations put a flag on the gene but they do not indicate how inheritance of a variation in the gene affects the function of the gene or the function of the brain. More work is needed in searching for variations that may have obvious functional implications and in basic cell biology to understand how gene function affects cell function.
In two of the genes implicated to date, there is evidence of a potential mechanism of increased risk. Preliminary evidence suggests that SNPs in the promoter region of the CHRNA7 gene that are associated with schizophrenia affect factors that turn on transcription of the CHRNA7 gene, presumably accounting for lower abundance of CHRNA7 receptors, which has been reported in schizophrenic brain tissue (Leonard et al., 2002). This receptor is important in many aspects of hippocampal function and in regulation of the response of dopamine neurons to environmental rewards. Both hippocampal function and dopaminergic responsivity have been prominently implicated in the biology of schizophrenia. The COMT valine allele, which has been associated with schizophrenia in the COMT studies, translates into a more active enzyme, which appears to diminish dopamine in the prefrontal cortex. This leads to various aspects of poorer prefrontal function, in terms of cognition and physiology, which are prominent clinical aspects of schizophrenia, and to intermediate phenotypes associated with risk for schizophrenia (Weinberger et al., 2001). The COMT valine allele also is associated with abnormal control of dopamine activity in the parts of the brain where it appears to be overactive in schizophrenia (Akil et al., 2003). Thus, inheritance of the COMT valine allele appears to increase risk for schizophrenia because it biases toward biological effects implicated in both the negative and positive symptoms of the illness.
Schizophrenia-Susceptibility Genes and Adolescence
It is not obvious how the genes described would specifically relate to adolescence and the emergence of schizophrenia during this time of life. The evidence so far suggests that each of the candidate susceptibility genes has an impact on fundamental aspects of how a brain grows and how it adapts to experience. Each gene may affect the excitability of glutamate neurons—directly or through GABA neuron intermediates, and indirectly through the regulation of dopamine neurons by the cortex. These are fundamental processes related to the biology of schizophrenia. These are also processes that may be especially crucial to adolescence because cortical development and plasticity are changing dramatically during this period. Thus, it is conceivable that the variations in the functions of these genes associated with schizophrenia lead to compromises and bottlenecks in these processes.
The Potential Gene-Finding Utility of Intermediate or Endophenotypes
Despite encouraging results from recent linkage and association studies, the literature also contains prominent failures and inconsistencies. Failures to replicate linkage and association signals for schizophrenia suggest that genomic strategies may benefit from a redirection based on our current understanding of the pathophysiology of schizophrenia. For example, the power of genetic studies may increase by examining linkage with quantitative traits that relate to schizophrenia rather than with a formal diagnosis itself. The concept of using intermediate phenotypes, or endophenotypes, is not new (Gottesman & Gould, 2003), but has only recently started to enjoy widespread popularity among those seeking genes for schizophrenia. Gottesman and Shields suggested over 30 years ago that features such as subclinical personality traits, measures of attention and information processing, or the number of dopamine receptors in specific brain regions might lie “intermediate to the phenotype and genotype of schizophrenia” (Gottesman & Shields, 1973). Today, other traits, such as eye-movement dysfunctions, altered brain-wave patterns, and neuropsychological and neuroimaging abnormalities, are under consideration as potentially useful endophenotypes of schizophrenia, because all of these are more common or more severe in schizophrenic patients and their family members than in the general population or among control subjects (Faraone et al., 1995). These deficits may relate more directly than the diagnosis of schizophrenia to the aberrant genes. At the biological level, this is a logical assumption, as genes do not encode for hallucinations or delusions; they encode primarily for proteins that have an impact on molecular processes within and between cells. Thus, endophenotypes may serve as proxies for schizophrenia that are closer to the biology of the underlying risk genes.
Early Findings from Molecular Genetic Studies of Endophenotypes
While much recent work has been dedicated toward establishing the heritability of endophenotypes, only a handful of molecular genetic studies of endophenotypes have emerged. Results observed to date have been encouraging, in that some chromosomal loci that have been found to harbor genes for schizophrenia have also shown evidence for linkage with an endophenotype. For example, linkage with an auditory-evoked brain wave pattern (the P50 endophenotype) has been observed independently in two samples of schizophrenia pedigrees on chromosome 15 at the locus of the α7-nicotinic receptor gene, where some evidence for linkage had previously been observed using traditional diagnostic classifications (Leonard et al., 1996; Raux et al., 2002). However, the greater potential of endophenotype studies is that genes might be identified that would not be implicated from regions of the genome highlighted in linkage regions. This is because minor genes for schizophrenia may turn out to be major genes for some index of central nervous system dysfunction. This possibility has been supported by evidence that COMT, which is a weak susceptibility gene for schizophrenia, is a relatively strong factor in normal human frontal lobe function (Weinberger et al., 2001).
Whether classical criteria or quantitative phenotypes are used to further study schizophrenia, refining the definition of an "affected" individual is a top priority for genetic studies. Because not all individuals with schizophrenia-susceptibility genes develop the actual disorder, understanding the measurable effects of these aberrant genes is a critical step in tracking their passage through affected pedigrees and in identifying their clinical biology. In the near future, the amount and types of expressed protein products of these disease genes may be used as the ultimate endophenotype for schizophrenia. To the extent that we can reduce measurement error and create measures that are more closely tied to individual schizophrenia genes, we will greatly improve our understanding of the genetics of schizophrenia.
Genetic Counseling Issues and Schizophrenia
With increasing attention in the media to issues relating to genetics and particularly the role of genetic factors in mental illness, an increasing number of individuals will likely be seeking genetic counseling for issues related to schizophrenia. In our experience, by far the most common situation is a married couple who are contemplating having children and the husband or wife has a family history of schizophrenia. They typically ask any combination of three questions: First, is there a genetic test that can be performed on us to determine whether we have the gene for schizophrenia and whether we might pass it on to our children? Second, is there an in utero test that can be given that would determine the risk of the fetus to develop schizophrenia later in life? Third, what is the risk for schizophrenia to our children?
Unfortunately, given the current state of our knowledge, the answers to the first two questions are no: we are not yet in the position of having a genetic test that can usefully predict risk for schizophrenia. We would also often add a statement to the effect that this is a very active area of research and there is hope that in the next few years, some breakthrough might occur that would allow us to develop such a test. But, right now we really do not know when or even if that will be possible.
By contrast, useful information can be provided for the third question. Most typically, the husband or wife has a parent or sibling with schizophrenia and they themselves have been mentally healthy. Therefore, the empirical question is what is known about the risk of schizophrenia to the grandchild or niece or nephew of an individual with schizophrenia. Interestingly, this is a subject that has not been systematically studied since the early days of psychiatric genetics in the first decades of the 20th century. The results of these early studies have been summarized in several places, most notably by Gottesman (Gottesman & Shields, 1982), with aggregate risk estimates for schizophrenia of 3.7% and 3.0%, respectively, in grandchildren or nieces and nephews of an individual with schizophrenia. However, this is a considerable overestimate if the parent with the positive family history remains unaffected. That is, the risk to a grandchild or niece or nephew of an individual with schizophrenia when the intervening parent never develops the illness is probably under 2%. Most individuals find this information helpful and broadly reassuring.
By the time this chapter is read, a great deal more information is likely to have accumulated about the scientific status of these findings. At this early stage, several trends are noteworthy. First, including unpublished reports known to the authors, at least some of these potential gene discoveries have now been replicated enough times that it is increasingly unlikely that they are false-positive findings (due, for example, to the performance of many statistical tests). Second, we can expect that the biochemical pathways represented by these genes will be explored at the level of basic cell biology and new leads about pathogenesis and potential new targets for prevention and treatment will be found. Third, we can expect a number of studies to emerge that will try to understand whether expression of these genes is changed in the brains of schizophrenia patients. Fourth, efforts are already under way to try to understand how these genes influence psychological functions such as attention, sensory gating, and memory that are disturbed in schizophrenia. Fifth, intense efforts will be made to try to determine whether these different genes are acting through a common pathway as, for example, has been postulated for the four known genes for Alzheimer's disease.
Videos relating to teaching by the Smithsonian.
Build in students:
Students who feel competent in the subject and confident will make a connection. Students who make a connection then care, and it becomes a part of their character. Their character makes them contribute!!!
If you are looking for motivation, here it is!!!
Some of the tech I encourage you to research include: Booktrack, Trello, MyHistro, Web Whiteboard, Big Blue Button, Google Classroom, and Fotor.
Booktrack- Brings books to life; it adjusts to students' reading level and is great for audio learners. The site is mostly used for audiobooks, which you purchase, but you can also create audio for books. This is great to use with students in an English course; I can see myself using this when tackling classical literature. For example, I would use an audiobook and play excerpts of Shakespeare's Julius Caesar. With older English it may be difficult for students to interpret/understand well, but this technology could help. The only issue with it is the compatibility of the site with Google Chrome/Internet Explorer. You would definitely need a plan B for your lesson if there is difficulty. The site also gives you the ability to create classes for the books and to track your readers.
Google Classroom- Gives you the ability to create classes; it is like an educational Facebook/Twitter. You have your profile and students can comment and contribute online. A useful part of this is that you can poll your students; for example, over Thanksgiving Break you could post a poll on Sunday asking if they have reviewed for Monday's class. It is also useful as a reminder app even without polling the students: you as a teacher can post "statuses" such as "Bible Club, let's meet in room 307 today instead of in room 212 after lunch." Google Classroom also lets you have access to each student's email to contact them, and gives you the ability to upload assignments. For example, if a student is absent or you weren't able to hand out the day's homework, you could post a PDF of the assignment and the students could access it. Google Classroom is very popular with secondary teachers because it is like Canvas, but simplified. The plus to Google Classroom is that it is FREE!
MyHistro- A free timeline-creation website. As a teacher you can create a timeline; this is especially useful in history classes, but I could see it being used in English classes too. The site allows you to make quizzes for students… It is much like a ThingLink but allows you to use an accurate map. Here is a link to a timeline of the French Revolution: http://www.myhistro.com/story/the-french-revolution/30635/0/0/0/1#!oath-of-the-tennis-court-59389
Big Blue Button- A site where you can hold conferences and meetings; this would be useful for tutoring purposes or virtual school.
Fotor- A photo editing site. It allows you to create posters, photos, and other media. It is free and compatible with all technology such as Mac and Windows. For example, if you were teaching preschool and wanted to create a collage of things that are the color red, you could do it on Fotor. This is also great to recommend to students. It offers only basic editing, but it is the closest free alternative to Photoshop. I would use this in my high school English classes when having my students write personal narratives during the first two weeks of school. I would have them write about themselves or their summer vacation and create a collage of photos relating to their essay.
Also, I have a blog post for reference with several other resources: List of 35 Technology Resources
This is a powerful video that puts schools on trial. Please watch!!
Meme- a humorous image, video, piece of text, etc. that is copied (often with slight variations) and spread rapidly by Internet users.
You can actually use memes in the classroom as explained in the web link below:
Recently I was required to create several memes referencing cyber security for my technology class. I am planning on teaching secondary education so I thought these would be funny.
This one is about the scam/question circulating around on social media about "how much money would you have if it was your social security number?" The people who answer it are putting themselves in a bad situation with the possibility of getting their information/identity stolen.
This one is about having a good password on your accounts.
This one is about your account settings, like turning off location on your posts/having good privacy settings to avoid stalkers and other problems.
This one just addresses scams again and people out to get your personal information. For example, anything that is "click bait" might not be safe to go to and could result in a virus.
I learned that there is a large population of students in Florida taking online courses. I also learned that there are various situations a student may be in that would require them to go to an online school.
What surprised me was that one of the speakers believed that through online classes there was much more one-on-one focus. I believe this is very untrue; I have taken multiple online classes that lack communication between the instructor and students. I feel that in an actual classroom setting, getting one-on-one help is much easier, because even if there are many students in the class you have a better chance at a response with your hand raised than with an email.
I would consider using online tools like Canvas/Blackboard in my classroom as an aid, but I do not plan on teaching online. If I went on to teach at the college level I would incorporate teaching online classes, but I would make them less intense than a face-to-face course. I would definitely focus on breaking down my lessons and keeping up with grades and student concerns like in a regular classroom. The difference is I would shorten the workload some; online classes should not be overbearing and have the students sitting in front of a screen for an entire day nonstop. I'd want to create a system with my online class that really breaks up the time portions/workload so that my students could spend time equal to the credit hours of the course while still finding my class accommodating to them. Flexible and not overbearing, but still providing them a fair education. I'm not the greatest fan of technology in classrooms yet; I really feel there is too much dependence on it. For example, I was observing a second grade class that had access to iPads. The students were in groups with their own Apple TV, and to learn division they watched a video about it. The teacher did not go to the board and actually teach it; she let the video teach it and let them do the work on their iPads. It was shocking, so I feel online classes should only be allowed at the high school and college level, because this is moving on from just the basics. I'd only ever teach an online college class.
Here are the links to two files that are transcripts of videos pertaining to online teaching in Florida: | 1 | 2 |
About a month ago I read a piece of news that mentioned Turkey dropping a constitutional clause which protected children from sexual abuse, and leafed through the ensuing panic of the internet as there were fears that no legislation would come in to fill the void created by the actions of the country's Constitutional Court. With hundreds of child abuse allegations and several million child brides already in the country, it became quite the fuss as NGOs, foreign press, and even diplomats feared what Turkey might do in light of recent events.
It’s probable that the Court’s ruling was misread and that they had improvement of the law in mind – even if through the creation of a void, meant to be later developed into new legislation. But that got me thinking: can I understand anything about a nation by knowing its practices and taboos around age of consent?
I started researching this for lack of knowledge. Given the absence of a true international law around the age of consent, what I knew were bits and pieces out of what I now see as a huge puzzle that nobody knows exactly how to put together.
Age of consent is something contextual – so to speak. It comes with the laws of the land, and in some places it can be greater, whilst in others it can be lower – or even differ for boys and girls. Some places have even set ages of consent for gay sexual relationships.
But the concept of sex has, for all its universality, a very wide palette of nuances. Sexual behavior is, in some places, as all-encompassing as to include things like kissing – whilst it is true that for the most part, sexual behavior means actually having intercourse.
Age of consent is a legal notion – and it is thus enforced by law. Simply put, the age of consent is the age when you are considered to be capable of agreeing to have sex, and until you reach this specific age limit you’re barred from having sex with anyone, no matter how old they are. Or, on a different perspective, you’re protected from having sex with anyone, especially older people – since sometimes legislation can be a bit more lenient if partners are in the same age group.
Sure, some kids are ahead of their time – but that doesn’t exclude the possibility that they might get exploited, and this is specifically what age of consent laws have in mind – keeping kids safe from ill-willed adults. Even if it becomes something familially acceptable for people to start having sex at an early age, it’s still against the law. Just the same way you can’t legally consume certain substances before a certain age – no matter what your legal guardians might allow you to do. Age of consent legislation applies regardless of previous sexual encounters or emotional attachments between partners: if you’re too young to do it, then it’s illegal.
And this is the point of interest: when are you too young to have consensual sex?
As mentioned by Stephen Robertson of the University of Sydney, the concept of age of consent first came about in a secular fashion in 1275 England, as part of the rape law of the time. Put simply, it prohibited having intercourse with a virgin, regardless of her consent – and set the limit to lawful intercourse at 12 years of age.
Over the following centuries, it became normal in Western Europe to protect underage girls: men were very easily prosecuted for rape. But this also meant, as Robertson puts it, that girls who were not of age could not engage in any form of sexual activity – regardless of their willingness – with the only loophole being marriage, since it was inconceivable at that time that sex within marriage could be rape, and hence the age of consent no longer applied.
Yet it must be mentioned that in Medieval Europe seldom were laws enforced solely on the basis of age since little to no proof of age was ever provided.
It wasn’t until modern politics started that we see a clear direction. In the 1800s, moral reformers drew on the notion of consent to campaign against prostitution or, more specifically, child prostitution. Following a few such campaigns, Britain jumped its age of consent from 13 to 16 – which created a snowball of reform in the US as well. By the 1920s, most age of consent laws followed British example, with a few going the extra mile: imposing a limit on intercourse until 18 years of age.
By the second half of the 20th century, feminist reformers had expanded these laws and also challenged the centuries-old view of female passivity by pointing out that such laws protected all youth (female as well as male) from exploitation, rather than ‘ensuring their virginity’.
All this has led to considerable shifts in legal ages everywhere on Earth. Researching for this post I’ve found a total of eleven ages of consent around the world. Ten of them are maturity-defined and span from as young as 11 up to 21 years of age in consecutive fashion – but skipping 19. The single remaining age of consent was not itself age-dependent, as it was based on marital status (i.e. you can have sex with someone if you’re married to that someone).
Looking at the numbers I’ve also found vast fluctuations in sociodemographic and economic indicators, even within the same geographic units. I decided to highlight my results based on continents. Here’s the story.
I’ll start with North America since most studies published that look at information such as the age of consent and age of marriage are based on US populations.
What I’ve found is that the most common age of consent here is 16 years of age. These countries have the largest average expenditure on education, the best civil liberties (as measured by Freedom House), relatively few homicides, but above-average counts of rape (by which I mean sexual intercourse without valid consent).
The US is host to not one, but three ages of consent, which vary by state policy. These are 16 (the lowest, and hence the one taken here into consideration), 17, and 18 years of age.
Just three nations in this region had an age of consent of 15 – the lowest in the continent. But they also happened to have just as good a situation with respect to civil liberties, despite higher counts of rape.
Demographically we see that the oldest populations happen to be in the countries that maintain the age of consent at 16. Should this surprise us? These nations also have the lowest mortality rate among neonates and the highest age of childbearing – with the (contextually) lowest population growth rate.
This is because one of those eleven countries is the United States. Here things might get a bit more complex since officially the US is host to not one, but three ages of consent, which vary by state policy. These are 16 (the lowest, and hence the one taken here into consideration), 17, and 18 years of age – with 27 states having close-in-age exemptions for situations in which both partners are young.
While 16 may not be even close to the worldwide minimum for the age of consent, US estimates of the number of children who are sexually abused vary wildly (from 3% to as much as 54%). Such a wide variation is caused by the lack of standardized definitions of terms and actions (i.e. what a child is and what molestation implies).
Estimates of the number of children in the US who are sexually abused vary wildly, from 3% to as much as 54%.
A study by Gene Abel and Nora Harlow published in 2001 found that in the US, child molesters match the average US population in education, percentage married or formerly married, and religious observance – and that the overwhelming majority of molesters (roughly 68%) sexually abuse children of families in their social circle – with pedophilia being the most significant cause of child molestation. (Just as a sidenote, pedophilia has fuzzy borders when the perpetrator is young. For a person to be diagnosed with it, he or she must be at least 16 years old and at least five years older than the abused child.)
So do the numbers check out? I can’t really say. Whilst it might be true that the US has an above-average rape rate (36.5 per 100,000), this figure doesn’t distinguish precise counts of child sexual abuse.
Some studies mention that between 15% and 25% of American women – and 5% to 15% of American men – had been subjected to some form of sexual abuse during their early years. In most cases the offenders were known to them: roughly a third were relatives (notably first-degree relatives) and about 60% were members of the community. Strangers came in last, with only 1 in 10 cases of child molestation occurring this way. And in over a third of cases, the perpetrator was also underage.
Studies researching US demographic figures have also highlighted the hardships couples face in maxing out their individual earnings (given both of them are employed) – and in particular, the career sacrifices women are shown to make for their husbands. So while married men and unmarried women tend to be more agile in the job market, the same cannot be said for married women – whose wages take a nosedive. So it seems that marriage does not affect each sex the same way; at least in the US.
There’s a similar situation across the pond. Most European countries also have an age of consent of 16. But we also see big clusters of nations with 15 and 14 years as legal ages of consent.
Within some cultural groups, the practice of child marriage survives.
In developed nations which have a good industrial infrastructure, few women commit to marriage or are coerced into this prior to the age of 18. Yet within some cultural groups, the practice of child marriage survives – which is the case of the Roma people in Central and Southeastern Europe. Overall it seems that even in less rich and highly traditional countries of the developed world all strata of society have started giving up on early marriage and have started shunning early pregnancy.
Yet countries with low ages of consent seem to have lower mean ages of giving birth – little or negative population growth and more women than men. These countries also had among the lowest counts of rape and homicide – whilst at the opposite range, we see the lowest counts of rape where nations are far more religious and the age of consent is set at 18.
In Africa, early marriage is generally more prevalent in the central and western parts of the continent. Many of these brides, as reported by UNICEF, are second or third wives in polygamous households – and apparently in some situations the ‘stress of contracting HIV’ contributes to men seeking young brides. It must be said that in a few African countries the number of girls that marry young is low and that the whole continent is pushing towards later marriages – but then again, Africa also hosts nations which go completely against this trend.
Here my dataset started getting a bit blurry. Africa is host to a very diverse range of legislation with respect to consensual sex. The worldwide minimum of 11 is in Nigeria – where I’ve found a very young median population, low life expectancy and more men than women.
What did stand out is that the more extreme the age of consent – whether lower or higher – the fewer civil liberties the population enjoys. But I failed to find any reliable data on most nations with respect to some indicators.
Many African and Asian peoples continue to support the notion (cultural or not) that puberty is the single most important sign that a girl is ready to marry. These cultures tend to look upon marriage with a strategic perspective, something akin to family politics, or an economic arrangement which also happens to protect young girls from undesirable sexual experiences – even if the groom accepted by the family can be twice the age of the would-be bride.
In some situations the ‘stress of contracting HIV’ contributes to men seeking young brides.
The assumption behind these marriage practices is that once a girl is bound by wedlock, she effectively becomes a woman – even if her age is scarcely that of a teenager. UNICEF states that even while the age of marriage tends to be on the rise, very early marriage (i.e. marrying children) is something of a widespread practice. And this, in turn, is a violation of human rights; very young girls that marry are bound almost certainly to a very low mean age of first pregnancy and are thus likely to be thrown into a life cycle by which they become domestically and sexually subservient.
Here it seems that marriage does not follow a single trend. With all of its diversity, Asia sees both extremes of the marriage-age scale, with some places having a mean age of first marriage in a girl’s early teens – whilst others see mean ages of marriage well into the late 20s.
A study published in 2004 by David Loughran and Julie Zissimopoulos shows that educational attainment stands out as the most significant difference between early and late marriers: 14% of individuals marrying before 23 years of age earn a college degree or higher as compared to 43% of people that marry after the age of 27. What’s more, the difference in educational attainment between early and late marriers is ‘reflected in occupational choice, wages and family income’.
Almost a quarter of Asia has no specific age of consent – meaning that marriage is the way in which intercourse becomes legal.
Their findings show that the hourly wage for people that marry later in life comes to about $8.50, as compared to $6.50 for early marriers. Added to this, later marriers are also more likely to work in professional occupations and have overall family incomes 29% higher than people who marry early.
But in Asia, this would seem to be off target. Countries with higher ages of consent also encounter far worse conditions with respect to their civil societies. What’s more, well-developed nations (Japan stands out in this respect) tended to have ages of consent of 16 or lower.
Almost a quarter of Asia has no specific age of consent – meaning that marriage is the way in which intercourse becomes legal. That, in turn, means that in some situations, these marriages can (and do) include child brides – a case similar to those found in Africa.
Both African and Asian nations with either very high or marriage-determined consent ages showed up as being sub-average when it came to how much they were spending on education. A 2004 study showed that marriage has little to no impact on wage growth for men. But Loughran and Zissimopoulos themselves found that marriage does have a detrimental effect on wage growth among women with ‘potentially high returns to career development’. That is to say, it cancels out much of a woman’s potential.
South America and Oceania
Since these continents tend to have some similar counts, I thought it best to save some space and highlight their takeaways together.
Roughly one in ten girls under the age of 19 are married in South America – and the picture in Oceania is not that clear since I’ve not found any relevant data on the topic.
Populations here seem to be more homogenous with respect to their neighbors, and yet it’s easy to see that the lower the age of consent, the higher the counts of either homicide or rape.
So what does this all mean?
Regretfully, in many of the world’s nations early marriage falls into what UNICEF describes as a sanctions limbo (prohibited by legislation yet condoned by customs and religious practice).
Studies show that in populations with little reproductive control, age at marriage impacts fertility in a very direct way – that is to say that the longer the risk of conception (i.e. the younger the bride, the more time she spends with her husband), the better the odds of conception taking place. As put forward by Larry Bumpass and Edward Mburugu, ‘other things being equal, the remaining years of risk after the birth of the last wanted child will be fewer for those marrying later.’
But a different picture gets painted when looking at populations that do have reproductive control: fertility is not so much influenced by the brute duration of time in which a woman may become pregnant, but more so by social factors such as selective marriage patterns and contraceptive use.
‘Other things being equal, the remaining years of risk after the birth of the last wanted child will be fewer for those marrying later.’
Where having kids before being married is frowned upon, taking one’s time before committing to wedlock reduces the overall fertility of that specific area. That’s because there are few years of a woman’s life in which she could bear children without fear of being rejected by her peers and community. A corollary of this is that women who eventually have kids in places where it’s frowned upon to do so prior to being married end up having fewer offspring – which further impacts the growth rate of the population.
By tallying up the numbers in the above graph I’ve highlighted that most countries nowadays have the age of consent set at 16 – and that most of them are considered to be free nations. What all of this shows is that the more the age of consent is skewed toward the extremes – either 11 (which could very well coincide with being required to have been married) or 21 – the higher the odds of that country being less than an ideal place to live. Especially if you’re a child bride.
The takeaway is this: raising the age of consent can and will prevent sexual exploitation, decrease teen pregnancy and control teen sexuality in developing nations – but in developed countries, the age of consent ceases to be the preferred route, as education and training, medical access, decreased poverty and the general attitude of the population keep child abuse from happening – most of the time, at least.
Have any questions about this? Tweet me at alexgabriel_i
- Stephen Robertson, “Age of Consent Laws,” in Children and Youth in History, Item #230, http://chnm.gmu.edu/cyh/website-reviews/230 (accessed August 17, 2016).
- Abel and Harlow Child Molestation Prevention Study – The Stop Child Molestation Book, Xlibris 2001
- Whealin, Julia (22 May 2007). “Child Sexual Abuse”. National Center for Post Traumatic Stress Disorder, US Department of Veterans Affairs.
- Finkelhor D (1994). “Current information on the scope and nature of child sexual abuse” (PDF). The Future of Children. Princeton University. 4 (2): 31–53.doi:10.2307/1602522. JSTOR 1602522. PMID 7804768.
- Gorey KM, Leslie DR (April 1997). “The prevalence of child sexual abuse: integrative review adjustment for potential response and measurement biases”. Child Abuse & Neglect. 21 (4): 391–8. doi:10.1016/S0145-2134(96)00180-9. PMID 9134267.
- Finkelhor, David; Richard Ormrod; Mark Chaffin (2009). “Juveniles Who Commit Sex Offenses Against Minors” (PDF). Washington, DC: Office of Juvenile Justice and Delinquency Prevention, Office of Justice Programs, Department of Justice. Retrieved 25 February 2012.
- “Diagnostic and Statistical Manual of Mental Disorders, 5th Edition”.American Psychiatric Publishing. 2013. Retrieved July 25, 2013.
- Paedophilia. “The ICD-10 Classification of Mental and Behavioural Disorders: Diagnostic Criteria for Research.” World Health Organization/ICD-10. 1993. Retrieved 2012-10-10. (B. A persistent or a predominant preference for sexual activity with a prepubescent child or children. C. The person is at least 16 years old and at least five years older than the child or children in B.)
- Maitra P. “Effect of socioeconomic characteristics on age at marriage and total fertility in Nepal.” J Health Popul Nutr. 2004 Mar;22(1):84–96.
- Bumpass, Larry; Mburugu, Edward. “Age at Marriage and Completed Family Size.”
- Krashinsky, H. A. (2004). “Do Marital Status and Computer Usage Really Change the Wage Structure?” Journal of Human Resources 29(3):774–791.
- Loughran, David S.; Zissimopoulos, Julie M. “Are There Gains to Delaying Marriage? The Effect of Age at First Marriage on Career Development and Wages.” RAND, 1776 Main St., Santa Monica, CA 90407-2138. [email protected], [email protected]. November 8, 2004.
Asynchronous programming in the .NET Framework has evolved significantly over the years. What is fascinating is the simplicity with which developers can achieve the same goal using the latest async/await implementation in .NET Framework 4.5.
The BackgroundWorker class was hugely popular among developers for offloading I/O-heavy operations to a background thread in order to keep the UI responsive. Even though BackgroundWorker is built on top of Thread, the thread-specific implementation is hidden from you, making the code less complex than a plain thread-based implementation. Here we deal with the DoWork and RunWorkerCompleted event handlers.
The BackgroundWorker class is essentially built on top of the Thread class, but the Thread part is largely hidden from you. You get to work with two very important parts of the BackgroundWorker, though: the DoWork and RunWorkerCompleted events. As illustrated below, you invoke RunWorkerAsync (optionally passing an argument), which then executes the DoWork handler on a different thread. On completion it raises the RunWorkerCompleted event handler, and you can pass results back via the event args.
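A minimal, self-contained sketch of this pattern is shown below. The console host, the URL, and the simulated download are illustrative stand-ins rather than a specific real-world scenario:

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new BackgroundWorker();

        // DoWork runs on a worker thread; the argument passed to
        // RunWorkerAsync arrives in e.Argument.
        worker.DoWork += (sender, e) =>
        {
            string url = (string)e.Argument;
            Thread.Sleep(2000);                      // simulate a slow I/O operation
            e.Result = "Fetched 42 KB from " + url;  // hand the result to the completion handler
        };

        // RunWorkerCompleted fires when DoWork finishes (or throws).
        worker.RunWorkerCompleted += (sender, e) =>
        {
            if (e.Error != null)
                Console.WriteLine("Failed: " + e.Error.Message);
            else
                Console.WriteLine(e.Result);         // the value assigned to e.Result in DoWork
        };

        worker.RunWorkerAsync("http://example.com"); // kick off the background operation
        Console.ReadLine();                          // keep the demo alive until completion prints
    }
}
```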
Note that the DoWork handler executes on a different thread, so when you use a BackgroundWorker in a UI application (where the UI runs on the main thread) you should avoid touching controls from DoWork to prevent cross-thread exceptions. The RunWorkerCompleted handler, on the other hand, is marshaled back to the thread that started the worker (typically the UI thread), which makes it the right place to update the UI. Also note how the result and state from the DoWork handler are passed on to RunWorkerCompleted via the RunWorkerCompletedEventArgs.Result property.
Gone are the days of writing that complex code to implement asynchronous behavior. The async and await keywords (C# 5.0) let you implement asynchronous operations with much simpler code that reads like synchronous code. async and await are built on the Task-based Asynchronous Pattern (TAP), which is the currently recommended asynchronous design pattern.
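Below is a comparable sketch using async/await. The HttpClient download is just an illustrative stand-in for whatever long-running work you need to await, and the comments call out where the asynchrony happens:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Kick off the asynchronous operation and wait for the demo to finish.
        DownloadAndReportAsync("http://example.com").Wait();
    }

    static async Task DownloadAndReportAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // The await frees the calling thread while the I/O is in flight;
            // execution resumes here when the download completes.
            string content = await client.GetStringAsync(url);

            // This line reads like synchronous code, yet no thread was blocked above.
            Console.WriteLine("Downloaded {0} characters from {1}", content.Length, url);
        }
    }
}
```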
You would agree that the above async/await implementation does not require additional effort to describe or understand beyond the comments placed there. You get the simplicity of asynchronous programming, and the compiler handles the job behind the scenes.
Is that all there is to it, just syntax and simplicity? No. Unlike BackgroundWorker, async/await does not create a dedicated thread; instead it makes use of the thread pool as required, and there are no cross-thread worries. Note that it would be wrong to treat BackgroundWorker and async/await as equivalent: while the former is a specialized component for executing part of your code on a background thread, the latter is a more flexible mechanism for implementing asynchronous operations.
Imagine your team’s primary skillset is in .NET development. You get requirements for a website, and your customer wants to stick with low-cost Linux/Apache hosting. What will you choose when you weigh the options: (a) hiring new talent, or (b) training your people on technologies such as J2EE? While such a decision depends on several factors, it is important to know that there is a third option too – the open-source Mono Framework! Mono lets you develop applications using .NET and host them on a Linux server that has the Mono framework installed.
We have used this model for a couple of applications and it works pretty well. There were glitches initially, but the framework is stabilizing as new versions come out.
Mono is an open source implementation of Microsoft’s .Net Framework based on the ECMA standards for C# and the Common Language Runtime.
Cross-platform? This framework is not just for Linux. Mono runs on Linux, Microsoft Windows, Mac OS X, BSD, Sun Solaris, Nintendo Wii, Sony PlayStation 3, and Apple iPhone. It also runs on x86, x86-64, IA64, PowerPC, SPARC (32-bit), ARM, Alpha, s390, s390x (32 and 64 bits), and more. Developing your application with Mono allows you to run it on nearly any computer in existence.
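As a quick illustration, an ordinary C# source file builds and runs unchanged on a Linux machine with Mono installed; the mcs/mono commands in the comments assume the standard Mono packages are present:

```csharp
// Hello.cs -- compile on Linux with:   mcs Hello.cs
// run the resulting assembly with:     mono Hello.exe
// (the same source builds with the Microsoft C# compiler on Windows without changes)
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("Running on: " + Environment.OSVersion);
        Console.WriteLine("Runtime version: " + Environment.Version);
    }
}
```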
It is quite likely that many of you get requirements to extend the capabilities of MS Office applications based on your business needs. Since we are talking about .NET development, the framework provides capabilities for developing Office-based solutions using Visual Studio Tools for Office.
This portal (Office Development with Visual Studio) provides many resources related to developing Office solutions with the .NET Framework, including add-in development for MS Office 2003 and Office 2007.
How do add-ins work with MS Word, Excel, etc.?
Add-ins that are created by using Visual Studio Tools for Office consist of an assembly that is loaded into a Microsoft Office application as an add-in. Add-ins that are created by using Visual Studio Tools for Office have access to the Microsoft .NET Framework as well as the application’s object model. When you build an add-in project, Visual Studio compiles the assembly into a .dll file and creates a separate application manifest file. The application manifest points to the assembly, or to the deployment manifest if the solution uses one.
Visual Studio Tools for Office provides a loader for add-ins that are created by using Visual Studio Tools for Office. This loader is named AddinLoader.dll. When a user starts the Microsoft Office application, this loader starts the common language runtime (CLR) and the Visual Studio Tools for Office runtime, and then loads the add-in assembly. The assembly can capture events that are raised in the application.
The CLR enables the use of managed code that is written in a language supported by the Microsoft .NET Framework. In your solution, you can do the following:
- Respond to events that are raised in the application itself (for instance, when a user clicks a menu item).
- Write code against the object model to automate the application.
After the assembly has been loaded, the add-in has access to the application’s objects, such as documents or mail. The managed code communicates with the application’s COM components through the primary interop assembly in your add-in project. | 1 | 3 |
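As a rough illustration of this model, the sketch below shows roughly what the add-in entry point of a Word add-in built from the standard VSTO project template looks like. The project/namespace name is hypothetical, the designer-generated InternalStartup wiring is omitted, and the DocumentOpen handler is just one example of reacting to a host event and automating the application through the primary interop assembly:

```csharp
using System;
using Word = Microsoft.Office.Interop.Word;

namespace SampleWordAddIn
{
    public partial class ThisAddIn
    {
        private void ThisAddIn_Startup(object sender, EventArgs e)
        {
            // The add-in can respond to events raised by the host application...
            this.Application.DocumentOpen += Application_DocumentOpen;
        }

        private void Application_DocumentOpen(Word.Document doc)
        {
            // ...and automate the application through its object model; these calls
            // travel through the Word primary interop assembly referenced by the project.
            doc.Content.InsertAfter("\nOpened via the add-in on " + DateTime.Now);
        }

        private void ThisAddIn_Shutdown(object sender, EventArgs e)
        {
            // Clean-up code would go here.
        }
    }
}
```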
To network with other Language Arts teachers, please join the ISEnet ning and share favorite websites with our group on Diigo. This wiki page is two years old and needs your help! Please edit anything and everything on this page. It is intended to be a comprehensive resource for teachers of language arts who are interested in integrating technology. Perhaps there would be a better organizational format for the page. Feel free to change it up! Please add links to your favorite L.A. websites to the listings below.
Language arts as a subject area includes several interrelated learning domains: vocabulary, reading fluency, comprehension, composition, and literary analysis. Because the five domains are interrelated, increased proficiency in one area will often yield higher proficiency in one or more of the others. For example, problems with reading fluency are often tied to limitations in vocabulary, and therefore educational strategies that expand a student's vocabulary will help the child's fluency, comprehension, and most likely their analysis as well.
Using technology in the language arts classroom opens a new world of educational possibilities for both teachers and students. This wiki chapter hopes to expand new possibilities for language arts instruction by offering various ways that technology can be utilized for student learning in language arts.
This chapter will review national standards in both language arts and technology. It will also examine some of the issues and problems facing middle school language arts educators. The chapter includes several strategies for integrating technology into the language arts and also provides ten case studies of excellent uses of technology to aid language arts instruction. Finally, it will review some of the major software, hardware, and web resources associated with successfully integrating technology into a language arts classroom.
National Language Arts Standards
- General Summary Of National Standards
The national standards for language arts and reading as outlined by the National Council of Teachers of English cover twelve particular skills and competencies that are integral to a firm understanding of reading and language arts. These standards are presented in list form, but should be looked at as indistinct and inseparable since each point is interrelated and cannot be looked at without exploring related points. The national language arts standards can be found at the following link: Language Arts National Standards
- National Technology Standards
The national standards for technology for students as outlined by the International Society for Technology in Education covers six distinct domains that have been identified as important when integrating technology into a classroom setting. While these six domains cross every curricular subject, we will apply these domains predominately to the middle school language arts classroom. The national technology standards can be found at the following link: Language Arts Technology Standards
- Overview of Synthesis Between Technology and Content Standards
While each of the twelve national language arts standards can most certainly be influenced and directed through the use of technology, five standards in particular (standards 1, 3, 5, 7, and 8) present a direct synthesis between national language arts and national technology standards.
- National language arts standard one deals with using forms of "non-print text in order to develop an understanding of the text". Through the incorporation of the internet for reading and gathering information students can use technology as a resource allowing them to be exposed to non-print media. Furthermore, tools like webquests and blogging can help student develop a greater understanding and analysis of the text by dialoging with others.
- National language arts standard three deals with drawing on "prior experience... [and] interactions with other readers and writers" in order to give meaning to literature. The incorporation of technology provides practically unlimited means for students to expand their prior and outside knowledge on a topic being studied. Using streamed video, internet queries, or a myriad of other technology based references in order to expand outside knowledge will enable language arts students to draw the inferences and conclusions necessary for understanding the text.
- National language arts standard five focuses on the writing process, which most certainly warrants one of the most basic forms of technological integration: the use of a word processor. By incorporating word processing skills into a language arts classroom, an educator provides his or her students with a more efficient means of editing written materials and preparing these materials for publishing. Incorporating technology into a language arts writing program also prepares students for both college and the workplace, where word processing skills are a necessity in written communication.
- National language arts standards seven and eight both deal with research gathering and synthesis. These standards both directly mention "non-print texts, databases, computer networks, and video" as plausible sources for gathering research information. Both of these standards highlight the importance of technology in gathering, analyzing, and synthesizing information necessary for a research project to be completed thoroughly.
Advantages Of Technology Integration
Technology integration can be very beneficial for both teachers and students in the classroom. Technology enables untapped media like video and pictures to be more readily exploited, allow students to collaborate in ways that were before impossible, and provide tools to increase teacher productivity from lesson planning to record keeping. These benefits over non-technology based instructional methods have been given the term 'relative advantage'. Relative advantage is the advantage that is instructionally gained using technology in the classroom as opposed to using more classical forms of instruction. For example, several software and technology based applications offer a relative advantage over instructional strategies that do not incorporate technology by incorporating streaming video, interactive activities, etc. Some further examples of relative advantage would be that technology frequently offers the ability to save information for archival purposes. These examples are just the short list of possibilities available to a classroom that incorporates technology. Listed below are some other commonly recognized relative advantages of employing technology to a fuller extent in the classroom.
- Accessibility- Technology may offer increased accessibility if the students who you are working with have computer and internet access in their homes. In these situations, assignments, activities, assessments, etc. can be scheduled over the web and completed at home, providing for a wider range of instructional opportunities for at-home assignments.
- Cooperative Grouping- Educators frequently utilize cooperative grouping strategies and technology integration at the same time since technology is very conducive to group work, team problem solving, etc. Having students work in partnerships or small groups with a computer based activity as a guide allows students to work collaboratively to complete a technology-based activity.
- Exposure to Technology- With computers and various forms of technology coming more prominently into the mainstream of everyday life and business, it is important to expose our students to different types of technology. Gaining experience in word processing, various software programs, internet research, etc. is an essential educational need for students to become successful in the workplace.
- Interactivity- Technology often allows educators to capture the attention of students through interactive instructional activities. Technology allows opportunities for multimedia and interactivity that are impossible with more traditional instructional techniques.
- Differentiation- Technology also frequently provides greater opportunities for differentiation for your students. Computers targeted at multimedia applications showing pictures, sounds, and videos are conducive to the learning styles of your various learners. Additionally, several software programs have exercises that are targeted at differentiating to the various academic levels of students in the classroom by assigning an initial pre-assessment and developing tailor made activities to improve areas of instructional deficiency.
- Archiving- Technology allows teachers to more efficiently save and document student work for archival purposes. Whether it is students working on a paper that is saved under their name or scanning in classroom worksheets that identify where a student is in their learning at a set point in time, technology gives the power to save information in ways that are impossible through traditional means. This leads to several advantages like tracking student progress over time or looking back to see what areas need improvement.
Issues and Problems
- Hardware/Software- Often the most common factor deterring teachers from integrating technology into the classroom is the lack of hardware and software necessary to make true technology integration attainable. Many classrooms suffer from few computers, slow computers, limited internet connectivity, broken hardware, or incorrect software. A lack of appropriate hardware and software makes technology integration extremely challenging, but still doable. Strategies outlined in the sections below will hopefully generate ideas for activities that can be utilized in a classroom even with limited hardware and software.
- Professional Development- Another large problem with technology integration is the lack of professional development directed towards integrating technology into the classroom. Most teachers recognize the benefits of technology integration, but are unequipped to present instructional information via technology to their classes.
- Construction Time- To successfully incorporate many thoughtful and beneficial technology applications, there is a large amount of production and preparation time required. A webquest, for example, may take several hours for even an experienced teacher to program, identify links for, and upload to the internet. Often, even installing and setting up software programs is tedious and time consuming, leading many teachers to avoid technology integration completely.
- Limited Familiarity- Depending on the age of your students and how accessible computers are in their lives, limited familiarity with technology amongst students could be a major stumbling block in technology integration. It is difficult to provide instruction using computers when students have low familiarity with basic applications like using a mouse, saving a file, etc.
Strategies For Integration
Teaching language arts gives a variety of opportunities for integrating technology into education. Since language arts and reading are so interdisciplinary, strategies you may also incorporate into the teaching of the arts, social studies, science, and several other subject fields can provide a framework for developing the context needed to make inferences, analyze texts and authors, and draw conclusions within the language arts framework. The following list contains a series of strategies that can be used to integrate technology into the classroom. These strategies are organized for different technological environments based on the accessibility of computers, internet, projectors, etc. within the school setting. The section immediately following tracks ten outstanding case studies where teachers have very practically incorporated technology usage into their classrooms in order to guide and assist instruction.
- Word Processing/Desktop Publishing- While word processing is the most common and obvious means of integrating technology into a language arts classroom, it is still worth mentioning because of the profound impact it can have on enhancing and displaying students' analytical and writing skills. Assigning writing activities that require a typewritten submission helps encourage students to complete their assignments with the use of a computer. Students may also complete writing assignments in a school computing lab or on classroom computers if computers are not available in the home. Technology can also be integrated through student production of forms of desktop publishing beyond a simple typewritten essay. By requiring the insertion of pictures, video, hyperlinks, borders, various fonts, etc. into the assignment, the teacher can ensure that computing skills beyond simple typing are addressed.
- Provide Authentic Audiences- Check out this site for a list of websites that accept submission of student writing: http://www.noodletools.com/debbie/literacies/basic/yngwrite.html. One popular example of a site like this is http://www.fanfiction.net. Students can submit their creative writing there (such as Harry Potter stories) and other kids read and comment on their writing. Providing your students with a real audience for their writing is a tremendous motivator.
- Video- Using video files to enhance reading in language arts is a powerful tool for technology integration. Whenever books or stories are read, students often lack background information about the author or time period where the story is taking place. This makes it difficult for language arts students to draw the important inferences that help them better comprehend and analyze the story. With video streaming programs like United Streaming and Discovery Learning several thousand video clips are available to be searched, downloaded, and broadcasted for students to see in the classroom on a computer or at home over the internet. Video that helps students reinforce knowledge being learned in a language arts classroom through a visual format is an effective way to integrate technology and expand upon topics being covered.
- Powerpoint Presentations- Microsoft Powerpoint and other presentation software can be very useful when integrating technology into language arts. Powerpoint allows students to make presentation-style slides that can be used as a teaching aid. Language arts students may be required to make presentations on a book being read, a favorite author, an interesting poem, or a short story. Powerpoint also makes it practical to give oral presentations with a visual component to accompany them. This allows for a jigsaw of information, since students will have an in-depth knowledge of their topic along with a surface-level understanding of the other books or topics being presented.
- Language Arts Software Packages- There are several software programs available for download and purchase that deal directly with improving reading fluency and comprehension, increasing vocabulary, and addressing student writing. These software packages often allow for interactive learning through the integration of video, activities, and games. Software packages can be utilized effectively in a single-computer classroom with students working individually or in groups, either as a center-based activity or as an opportunity for enrichment after completion of classroom work. Similar opportunities on a larger scale exist in multiple-computer classrooms. Popular language arts software packages are listed in the software section of this wiki.
- Language Arts Online Activities- Similar to the language arts software packages available, several computer applications can be downloaded or purchased over the web. These activities often provide similar interactive games and activities, yet give additional flexibility, since an internet-based application can be used at a student's home as long as a computer and internet access are available. These online activities can serve as reinforcement, homework, etc., depending upon how many of your students can access the activities from home.
- Concept Mapping Software- Concept mapping software, which can be found in common programs like Microsoft Word or in more advanced applications like Inspiration can be integrated successfully into language arts. Since several language arts standards aim at analyzing things and forming connections between various topics, concept mapping software can be utilized to give a visual depiction of the linkages existing between two seemingly different subjects. Concept maps can be used to brainstorm writing ideas, analyze character traits, examine book themes, etc. Since concept maps made in Microsoft Word or Inspiration can be saved, the class can always come back to the map and make appropriate additions or changes.
- Audiobooks- Several websites are available providing digital audio files of books for either free download or purchase. These downloadable audio books are a useful resource in a language arts classroom, particularly if one or more students have difficulty with reading fluency. Audio books allow good reading with proper punctuation, expression, and grammar to be recited to students in a format that can be forwarded or rewound to an appropriate place.
- E-mail- E-mail can definitely have a place in a language arts classroom, since much research shows that one of the best ways to improve writing is to increase the amount of writing. E-mail provides a non-threatening way for students to express their thoughts. Setting up a pen pal system where students correspond over the web with students from around the country or around the globe also provides some interesting opportunities for exposure to new cultures and ideas.
- Webquests- Webquests have been a successful way to incorporate technology seamlessly into a classroom setting. Webquests can be particularly appropriate in language arts, since there is so much literature on a range of subjects that the information you can link to for your class to use is practically limitless. A webquest is a self-contained activity over the internet where the teacher has already designated certain links for his or her students to connect to and gather their information. More information on webquests can be found at the following link: http://webquest.org/.
- Classroom Information On The Web- Placing classroom information onto a class website can be helpful in keeping your students, their parents, and the greater school community informed about what is occurring in the classroom. Successful class websites have several components and vary dramatically from class to class. Some of the most common features of a class website are classroom announcements, links to related websites, internet webquests, pictures from the classroom, posted grades, enrichment materials, etc. Many school districts will provide space on a district server, and therefore creating and uploading web information may be as easy as creating a page in a number of web design programs (e.g., Microsoft FrontPage, Macromedia Dreamweaver) and uploading your site to the public server.
- Blogging- The use of blogs as a means of communication for students who are studying a particular piece of literary work can be a particularly effective strategy for using technology in a language arts classroom. Weblogs, or 'blogs' as they are more commonly known, give students the opportunity to reflect on things being discussed in the classroom in an asynchronous manner. By posting a particular topic or reflection question to address and having their students sign in and respond, educators can create a forum for open discussion. More information on blogging can be found at the following link: http://schoolcomputing.wikia.com/wiki/Weblogs.
- Podcasts- Podcasting is a relatively new phenomenon; however, it has enormous potential for enhancing education through technology integration. A podcast is anything recorded in a digital format and broadcast over the internet. There are several applications for language arts, including using podcasts to cover important course content and reading books aloud and broadcasting them over the internet to improve reading fluency. Once an audio file has been captured in digital format and placed on the internet for download, parents and students can access the audio file by downloading it from a class website and listening to the file either on a computer or an Apple iPod. More information on podcasting can be found at the following link: http://schoolcomputing.wikia.com/wiki/Podcasts.
- Wikis- A wiki is a piece of Web server software that allows users to create and modify Web site content using any Web browser. The characteristic that sets wikis apart from other web-based forums and discussions is that a wiki may be authored and edited by a number of people. Some speak of wiki pages as never being completed and always in the process of being edited and expanded. One application for wikis in a language arts classroom is a collaborative paper on which two or more students work on and edit the same document. This tool makes for a more collaborative learning experience in the classroom. More information on wikis can be found at the following link: http://schoolcomputing.wikia.com/wiki/Wikis.
- Examples Of Successful Vocabulary Technology Integration
Case Study #1- Powerpoint And Vocabulary
Educator Anne Marie Guerrettaz, who teaches at Maryvale Preparatory School in Baltimore, Maryland, uses Microsoft Powerpoint as a unique tool to help her students better remember and understand sight vocabulary words. By grouping her students, Ms. Guerrettaz uses constructivist learning principles to have them create visual depictions, sight vocabulary used in context, and mnemonics in order to build Powerpoint slides based on a group of assigned vocabulary words. After each group has created a Powerpoint presentation that will help the class remember those select words, they present their slides to the class using a laptop connected to a multimedia projector. This unique jigsaw activity exemplifies how technology can be effectively used to increase vocabulary usage and exposure in the classroom. While Ms. Guerrettaz uses Microsoft Powerpoint in her classroom, several other slideshow presentation programs (HyperCard, Keynote, etc.) would also serve a similar purpose.
- Examples Of Successful Comprehension Technology Integration
Case Study #2- The Non-Traditional Book Report
Educators Marilyn and David Forest who teaches at James Logan High School in Union City, California have moved their students away from creating a traditional handwritten book reports and replaced them with Hyper Card Projects. Through a Hyper Card Project, students are asked to summarize some of the major settings, characters, and themes from either fiction or non-fiction texts that they are reading by placing both written and visual depictions of what they are reading into a multimedia presentation. Using a program like Microsoft Powerpoint, the students at James Logan can create text and upload pictures relevant to their book. Students can also include outside knowledge they have gathered through research and provide links to their sources through the Hyper Card Projects. Finally, these projects can be uploaded to the web and categorized so that other students can review whether it would be a novel or text they would be interested in reading in the future.
For more information about Marilyn and David Forest's technology integration follow this link. http://www.nhusd.k12.ca.us/cue/cue.html
To see an example of a Hyper Card Project follow this link. http://www.jlhs.nhusd.k12.ca.us/Classes/Social_Science/Latin_America/Che/Che.frames.html
- Examples Of Successful Fluency Technology Integration
Case Study #3- Podcasting Your Novels
At Willowdale Elementary School in Omaha, Nebraska, Ms. Sandbourn’s class has taken to the air, podcasting several aspects of their fifth grade classroom. Ms. Sandbourn has her students record their voices and give lectures based on information they have gathered on topics ranging from the United States Constitution to various books they have been reading in class. Podcasts can be beneficial in providing an audio record of material covered during a class period that can continually be referenced via the internet. Other uses of podcasts could be to help improve reading fluency by recording a book or story being read by a fluent reader to model good pronunciation and expression for struggling readers.
For more information about Podcasting in Ms. Sandbourn’s classroom follow this link. http://www.mpsomaha.org/willow/radio/listen.html
- Examples Of Successful Analysis Technology Integration
Case Study #4- Blogging To Analyze and Understand
Supervisor of Instructional Technology and Communications Will Richardson from Hunterdon Central Regional High School in Flemington, New Jersey has pushed the incorporation of blogging into several classes and disciplines with astounding results. Weblogs, or 'blogs' for short, provide an interactive discussion forum where individuals can post and respond to one another. Through blogs, students at Hunterdon have the ability to express themselves in written form, thereby indirectly improving their writing; more importantly, blogging provides a forum for students to have intellectual discourse on a topic related to what is being discussed in the classroom. Hunterdon students have even blogged with authors of the novels they are reading, asking in-depth questions and receiving analysis directly from the source.
For more information about the impact of blogging at Hunterdon Central Regional High School follow this link.http://curriculum.enoreo.on.ca/ontario_blogs/why_blog.html
For more information about incorporating weblogs into the classroom follow this link. http://www.glencoe.com/sec/teachingtoday/educationupclose.phtml/47
For more information about the impact of blogging at a New Jersey High School follow this link. http://weblogs.hcrhs.k12.nj.us/bees/
Case Study #5- Character Mapping Through Inspiration
By using Inspiration software it is possible to complete a thorough analysis of the characters, settings, and themes in a short story, poem, novel, or other work of literature. The Inspiration software allows the user to create concept maps that use a visual depiction to display connections between several different concepts. Through Inspiration, a classroom computer, and an LCD projector, the instructor can lead the class in a concept brainstorm about a piece of literature. For example, you can cluster the personality or physical traits of characters in a story being read. Since the brainstorm is created in the Inspiration software, concept clusters can be reviewed later and added to as a during-reading activity. Using Inspiration in the classroom allows for a deeper analysis of literature and assists visual learners in better understanding the connections seen in stories.
- Examples Of Successful Writing Technology Integration
Miguel Guhlin blogs, "We need to thrill our learners to be readers and writers. To be successful in life, what kind of writing will help children in their life? If you're like me, you're writing persuasive writing. In K-2 classrooms, 95% of writing experiences were with personal narrative and story. By 6th grade, children will have spent 84% of writer's workshop composing personal narratives, stories, and writing from prompts. Kids wrote a brochure and dedicated it to everyone who is scared of bats. For us, the use of technology is to get online and find out about stuff. With every book, there's a web site. Kids went to batconservation.com. Bats Conservation said, 'If you send us the information, we'll produce it and send it to all 1000 of our members.' Those kids were screaming with absolute joy. All day, all they want to do is write persuasive brochures. Our kids sit in those classrooms and do what they're told. They write and read without ever understanding why."
E-mail Pen Pals
Third grade educator Ben Lewis has found e-mailing to be an exceptional way of improving the writing skills of his students by partnering them up with pen pals from a different school. Through his pen pal activity, students not only improve their writing skills, but also gain exposure to new students from different regions of the country and globe. http://k-6educators.about.com/od/languagearts/l/aa090201.htm
Give each student a digital camera as part of an essay writing assignment and have them include digital photos that illustrate their writing.
Using A Tablet PC In The Classroom
Educator Joseph Manko, who teaches at Rosemont Elementary/Middle School in Baltimore, Maryland, uses technology for his students to see good writing modeled, critique the writing of their peers, and evaluate what can be done to improve written responses. Using a tablet PC hooked up to a multimedia projector, Mr. Manko allows students to come to the front of the class and enter examples of their written responses onto the tablet PC for the class to see. Afterwards, students have the opportunity to evaluate the piece of writing and edit or make any changes that would help improve it. The tablet PC also allows for the saving of each written response, thereby leaving a means to assess each student's progress in writing over time.
While the tablet PC provides a unique educational tool, similar strategies can be implemented effectively with lower technologies like an overhead projector and separate transparencies on which students can compose their written responses or evaluate and edit the work of their peers.
For more information about Joe Manko's technology integration follow this link. http://staff.hcpss.org/~jmanko/intropage.html
Software To Improve Writing
- Microsoft Word- Microsoft Word is probably the most common form of word processing/desktop publishing software, although several other software titles exist that will allow you to edit and manipulate texts, pictures, etc. Word processing software is integral to improving writing in a language arts classroom. http://office.microsoft.com/en-us/default.aspx
- Writing Analysis Programs - See: http://www.techlearning.com/showArticle.jhtml?articleID=193700228 for a description of three software programs that analyze student writing: Pearson's WriteToLearn, Vantage Learning's MyAccess!, and Criterion from ETS
Software To Improve Literary Analysis
- Inspiration- The Inspiration Graphical Mapping Software allows students and teachers to create concepts maps that can be helpful to the study of several aspects of language arts learning. Through Inspiration, students can create concepts maps for characters, themes, settings, and summaries of books and stories they are reading. Inspiration allows for a graphical depiction which shows links and connections between various pieces within the literature being studied. For primary school language arts classroom the program kidspiration (produced by the same company) provides concept mapping tools that are more visually based for younger learners. http://www.inspiration.com/
- Bride Media- Bride Media publishes multimedia CD-ROMs on Shakespeare plays including classics like Macbeth, Romeo and Juliet, Julius Caesar, and many others. The CD-ROMs include several interactive activities to reinforce the skills and analysis that students will gain while reading the text. http://www.bridemedia.com/bmi/products/order/index.html
- SAS - American Literature InterActivities- SAS - American Literature InterActivities is a unique language arts development program that addresses culture, themes, and stylistic devices associated with various literary periods and ethnic groups. Students will be taken through a series of pre-, during, and after- reading activities in order to better comprehend and analyze themes and devices in a literary selection. http://www.sasinschool.com/software/americanlit/index.shtml
Software To Aid Test Preparation
- The Princeton Review- Educational Testing Activities- The Princeton Review, one of the industry leaders in educational testing now has online software available for the use of educators. By subscribing to the Princeton Review program, school districts will gain access to CD-ROMS, online tests, and paper assessments that can help in preparation for standardized tests in math and language arts. http://www.homeroom.com
- Classworks- Classworks by Curriculum Advantage, Inc. is comprehensive, instructional software that gives students the edge to succeed. Dynamic, interactive lessons engage students and offer new ways to address difficult concepts.
  Classworks provides comprehensive K-12 math, reading, and language arts content (plus elementary science) aligned to local, state, and national standards:
  - 180 award-winning software titles with thousands of lessons
  - The ability to import your high-stakes test scores to ensure automatic delivery of the right content for each student
  - Research-based instruction that has proven successful across the nation
  - Automatic delivery of customized learning based on your high-stakes test results
  Not every student learns the same way or at the same pace, and helping teachers find easy ways to individualize instruction is critical in today's classroom. But with all the student assessment conducted in America's schools, the amount of raw data is overwhelming. This innovative K-12 solution automatically sifts through the mountain of test-generated information to create individualized instruction for each student. With the advanced technology of the Classworks solution, teachers are free to do what they do best: teach. (Updated 12.08)
Software To Improve Reading Fluency & Literacy
- Soliloquy Reading Assistant- is a program which analyzes students' reading by recording their voice and analyzing it in real time. http://www.soliloquylearning.com/
- Wiggleworks- An old favorite that has not been upgraded, Wiggleworks can run on Windows XP or on a Citrix or terminal server for newer operating systems. http://teacher.scholastic.com/products/wiggleworks/index.htm
- Renaissance Learning- Reading (reading practice, accelerated vocabulary and literature skills) and Math. It allows you to create a customized, individualized reading/math program for every student. It is web based. http://www.renlearn.com/
- Orchard Educational Products- Orchard offers language arts products for grades K-12. The Orchard program focuses on vocabulary building, phonics, and more. Orchard Learning also produces state-specific assessments for over 35 states that can be purchased along with their software. http://www.orchardsoftware.com/
- Gamco Educational Software- Gamco Educational Software, created by the Siboney Learning Group, has a series of educational games and activities that will help improve phonemic awareness, reading fluency, writing, and basic comprehension skills. Products address curriculum standards and skills for grades K-12. http://www.gamco.com/products.htm
- Scholastic- Scholastic is recognized by many as the nation's leader in educational products and has several software packages available to help students with language, writing, and vocabulary development. Included under Scholastic are software packages designed by Tom Snyder Productions, including the Fizz and Martina series, Clifford Reading materials, etc. http://www.tomsnyder.com/products/products.asp?Subject=LanguageArts
- Weaver Instructional Systems- Weaver Instructional Systems designs and develops computer software programs for reading, language, and study skills. Weaver also offers a reading intervention program targeted for grades K-3. http://www.wisesoft.com/
- One More Story- One More Story is an online database of hundreds of children's books that are available in audio and pictorial form. This is a great resource for students who need to increase their fluency, and it can be used to model excellent reading in a very interactive way. http://www.onemorestory.com/
- Lexia Strategies for Older Students (Reading SOS)- from Lexia Learning. It is primarily a decoding program, but it has a few comprehension sections. I think a teacher needs to be involved in assigning the skills to be addressed, and I believe they also have diagnostic software. I use Lexia Strategies for Older Students with my middle school dyslexic students.
Software To Improve Vocabulary
- Centaur Systems- Centaur Systems publishes educational software dealing with vocabulary development. The software focuses on an in-depth understanding of Greek, Roman, and Latin roots. http://www.centaursystems.com/
- Fast ForWord- Fast ForWord software is targeted at developing fundamental language, listening, and reading skills. This software package helps to build fundamental cognitive skills of memory, attention, processing and sequencing. Several interactive activities help in developing listening accuracy, phonological awareness and language structures. http://www.scilearn.com/prod2/
Web Resources
Software To Improve Literary Analysis
- United Streaming- United Streaming uses video as instructional media. It is a great site for providing context and background knowledge for many topics being read and discussed in a typical language arts classroom. The site is password protected and costs money to use. However, most local school districts (that I know of) have purchased this service. United Streaming allows you to select videos based on grade level, topic, or Voluntary State Curriculum indicator. You can create your own playlist and create and/or print a quiz to accompany the video. The downside is the amount of time it takes to stream video if you are not using a broadband or other high-speed connection. http://www.unitedstreaming.com
- Teachers Domain- Teachers Domain has a series of lessons, classroom materials, and video clips that can be incorporated into a K-12 language arts classroom. Materials are broken down by both grade level and topic, so information is easy to find. http://www.teachersdomain.org
- Brain Pop- Brain Pop provides animated movies for grades K-8 on a variety of academic disciplines including language arts. There are lessons with reproducible activities, experiments, comic strips, and timelines. Students can interact by completing the online quiz and asking questions. While the service is not free, many school systems have purchased district licenses. You can also register for a two-week trial for free. http://www.brainpop.com
- Discovery Learning Connection- The Discovery Learning Connection website contains a database of over 30,000 streaming videos to tutor students from ages 3-10. These videos help to provide the context needed to better understand passages being read and therefore form conclusions and make inferences. There are also games and quizzes that go along with the content. You can register for a two-week trial for free. http://www.discoverylearningconnection.com
- Thinkport- Thinkport is a website that contains lesson plans, classroom materials, and several interactive activities to be used with students. On the website you can take online field trips, where students learn, in an interactive format, about material that often provides background information for what is being discussed in a language arts classroom. The site contains material for multiple grade levels and disciplines. http://www.thinkport.org
- Mr. Manko's World- Mr. Manko's World is a website targeted towards middle school social studies and language arts teachers and students. The website has an abundance of downloadable resources including worksheets, assessments, and reading guides for several popular young adult novels. http://www.hcpsss.org/~jmanko/intropage.html
Software To Improve Reading Fluency
- Starfall- Starfall is an excellent multimedia site for educators (typically grade K-3) who are looking for resources to better teach language arts skills and content. It uses video to help teach phonemic awareness/phonics. Once students have developed phonics skills they can advance to reading genres, etc. It's very interactive, but not overwhelming or distracting. http://www.starfall.com
- JumpStart - JumpStart is an award-winning 3D online virtual world for kids. Ideal for preschoolers through fifth graders, JumpStart is the perfect mix of fun and learning, offering kids the opportunity to engage in interactive learning games on a wide array of subjects. http://www.jumpstart.com
- Audio Books For Free- Audio Books For Free is a website that provides recordings of books read aloud in MP3 format. Students or teachers can download books they are reading in class and listen to them for improved understanding and fluency. This is a great resource for students with reading difficulties. While the books are provided for free, the site does advertise paid versions in other formats. It also may take a high-speed connection and some time to download. http://www.dvdaudiobooks.com/screen_main.asp
- Scholastic- Scholastic is seen by many as one of the premier companies in children's literacy. Their website provides lesson plans, movies, games, quizzes, and interviews, all targeted at improving student reading. The site contains materials perfect for a language arts classroom and covers several grade levels. http://teacher.scholastic.com/scholasticnews/index.asp
- Storyline- Storyline is a website that helps storybooks come to life for students. The site allows children to read and follow along with the book. Oftentimes Storyline features celebrities reading picture books aloud while students follow along with their own paper version of the book. The site is targeted towards younger readers and English as a Second Language students who struggle with reading. http://www.storylineonline.net/
- PBS Kids- PBS Kids contains an entire section dedicated to the Clifford the Dog books. Through the many activities and interactive read-alouds younger students who are just beginning to read can hone their comprehension and fluency skills. This is a great resource for struggling readers or English as a Second Language students. http://pbskids.org/clifford/
- Cyber Kids- The Cyber Kids reading activities are based on the Choose Your Own Adventure books and allow students to navigate through several stories and make decisions about where the characters will go next. The site may be particularly beneficial to boys or weak readers, who can become more involved in the stories that they read. http://www.cyberkids.com/cw/mul/
- Tools for Teachers- The Readability Analysis tool allows you to paste a block of text into the page, which it then analyzes for readability (see the sketch below for the kind of formula such tools typically compute).
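The Readability Analysis page does not say which formula it applies, but most readability tools score a passage from average sentence length and average word length. The Python sketch below is only an illustration of that idea, not the tool's actual implementation: it estimates the widely used Flesch Reading Ease score, and the function name, syllable heuristic, and sample passage are all invented for this example.

import re

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores (up to about 100 or slightly above) mean easier text.
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return None

    def count_syllables(word):
        # Rough heuristic: count vowel groups, subtract one for a trailing silent 'e'.
        groups = re.findall(r'[aeiouy]+', word.lower())
        count = len(groups)
        if word.lower().endswith('e') and count > 1:
            count -= 1
        return max(count, 1)

    total_syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = total_syllables / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a short passage a teacher might paste into such a tool.
sample = "Clifford is a big red dog. He likes to run and play with his friends."
print(round(flesch_reading_ease(sample), 1))

Very simple passages like the sample score near or even above 100 on this scale, while dense academic prose scores far lower; a real readability tool may use this or a related grade-level formula.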
1. Name three benefits of utilizing technology in a language arts classroom.
2. What are two hurdles/challenges of utilizing technology in the language arts classroom?
3. What are possible ways you can utilize weblogs to provide excellent language arts instruction?
4. Provide some possible strategies you can use to improve analytic reading by incorporating technology into the classroom.
5. What is one successful language arts web package or software package that can provide a relative advantage in terms of instruction? (Name the software and explain what it does.)
6. How can podcasts be used to improve reading fluency?
- Cramer, S. & Smith, A. (2002). Technology’s Impact On Student Writing At The Middle School Level. Journal Of Instructional Psychology, Vol. 29, No. 1, pp. 3-14.
- Banaszewski, T. (2002). Digital Storytelling Finds Its Place In The Classroom. Multimedia Schools, January/February 2002, pp. 32-35.
- Scharf, E. & Cramer, J. (2002). Desktop Poetry Project. Learning And Leading With Technology, Vol. 29, No. 26, pp. 28-31, 50-51.
- McNabb, M.L. (2005). Raising The Bar On Technology Research In English Language Arts. Journal Of Research On Technology In Education. Fall 2005, Vol. 38, No. 1, pp. 113-119.
- Warburton, J. (2001). Finding The Poetic In A Technological World: Integrating Poetry And Computer Technology In A Teacher Education Program. Journal Of Technology and Teacher Education. Vol. 9, No. 4, pp. 585-597.
- Cardiner, S. (2001). Teaming Up To Integrate Technology Into A Writing Lesson. Learning And Leading With Technology. Vol. 28, No. 4, pp. 22-27.
- McGrail, Ewa. (2005). Teachers, Technology, And Change: English Teachers’ Perspective. Journal Of Technology and Teacher Education. Vol. 13, No. 1, pp. 5-24.
- Merkley, D., Schmidt, D. & Allen, G. (2001). Addressing The English Language Arts Technology Standard In A Secondary Reading Methodology Course. Journal Of Adolescent And Adult Literacy. Vol. 45, No. 3, pp. 220-231.
- Roblyer, M. D. (2005). Integrating Educational Technology Into Teaching,(4th edition). Upper Saddle River, NJ: Prentice Hall.
Back to Teaching With Technology
Sadism refers to sexual or non-sexual gratification in the infliction of pain or humiliation upon another person. Masochism refers to sexual or non-sexual gratification in having pain or humiliation inflicted upon oneself.
Often interrelated, the practices are collectively known as sadomasochism as well as S&M or SM. These terms usually refer to consensual practices within the BDSM community.
Distinction between S&M, BDSM and D/s
Sadists desire to inflict pain; this may or may not be sexual in nature. Masochists desire to receive pain, which again may or may not be sexual.
Dominance and submission—control over another, or being controlled by another, respectively—typically describes a relationship power dynamic rather than a set of acts, and may or may not involve sadomasochism. Bondage and discipline describes a set of acts that sometimes involve D/s or S&M; although discipline often implies a level of suffering (real or pretend), participants may stop short of causing actual pain.
The development of the term sadomasochism is very complex. Originally, "Sadism" and "Masochism" were purely technical terms for psychological features, which were classified as psychological illnesses. The terms are derived from the authors Marquis de Sade and Leopold von Sacher-Masoch.
In 1843 the Hungarian physician Heinrich Kaan published Psychopathia sexualis ("Psychopathy of Sex"), a work in which he recast the Christian conception of sin as medical diagnoses. With this work the originally theological terms "perversion", "aberration" and "deviation" became part of scientific terminology for the first time.
The German psychiatrist Richard von Krafft-Ebing introduced the terms "Sadism" and "Masochism" into the medical terminology in his work Neue Forschungen auf dem Gebiet der Psychopathia sexualis ("New research in the area of Psychopathy of Sex") in 1890.
In 1905, Sigmund Freud described "Sadism" and "Masochism" in his Drei Abhandlungen zur Sexualtheorie ("Three Papers on Sexual Theory") as conditions arising from a faulty development of the child psyche, and laid the groundwork for the scientific perspective on the subject in the following decades. This led to the first use of the compound term Sado-Masochism (German "Sado-Masochismus") by the Viennese psychoanalyst Isidor Isaak Sadger in his work Über den sado-masochistischen Komplex ("Regarding the sadomasochistic complex") in 1913.
BDSM activists have repeatedly objected to these conceptual models, which derive from individual historical figures and carry a clearly pathological connotation. They argued that it makes little sense to attribute a phenomenon as complex as BDSM to two individuals, any more than one might speak of "Leonardism" instead of homosexuality. The BDSM scene also tried to distance itself from the pejoratively connoted term "S&M" with the expression "B&D", for Bondage and Discipline.
The abbreviation BDSM was probably coined in the early 1990s in the subculture around the Newsgroup news:alt.sex.bondage. This new term is first recorded as appearing in July 1991.
Psychological categorization
Both terms were coined by German psychiatrist Richard von Krafft-Ebing in his 1886 compilation of case studies Psychopathia Sexualis. Pain and physical violence are not essential in Krafft-Ebing's conception, and he defined masochism (German "Masochismus") entirely in terms of control. Sigmund Freud, a psychoanalyst and a contemporary of Krafft-Ebing, noted that both were often found in the same individuals, and combined the two into a single dichotomous entity known as sadomasochism (German "Sadomasochismus"; often abbreviated as S&M or S/M). This observation is commonly verified in both literature and practice; many sadists and masochists define themselves as "switchable"—capable of taking pleasure in either role. However, it has also been argued (Deleuze, Coldness and Cruelty) that the concurrence of sadism and masochism in Freud's model should not be taken for granted.
Freud introduced the terms "primary" and "secondary" masochism. Though this idea has come under a number of interpretations, in a primary masochism the masochist undergoes a complete, not just a partial, rejection by the model or courted object (or sadist), possibly involving the model taking a rival as his or her preferred mate. This complete rejection is related to the death drive in Freud's psychoanalysis (Todestrieb). In a secondary masochism, by contrast, the masochist experiences a less serious, more feigned rejection and punishment by the model. Secondary masochism, in other words, is the relatively casual version, more akin to a charade, and most commentators are quick to point out its contrivedness.
Rejection is not desired by a primary masochist in quite the same sense as the feigned rejection occurring within a relatively equal relationship--or even where the masochist happens to be the one having true power (this is the problematic that underlies the analyses of Deleuze and Sartre, for example). In Things Hidden Since the Foundation of The World Rene Girard attempts to resuscitate and reinterpret Freud's distinction of primary and secondary masochism, in connection with his own philosophy.
Both Krafft-Ebing and Freud assumed that sadism in men resulted from the distortion of the aggressive component of the male sexual instinct. Masochism in men, however, was seen as a more significant aberration, contrary to the nature of male sexuality. Freud doubted that masochism in men was ever a primary tendency, and speculated that it may exist only as a transformation of sadism. Sadomasochism in women received comparatively little discussion, as it was believed that it occurred primarily in men. Both also assumed that masochism was so inherent to female sexuality that it would be difficult to distinguish as a separate inclination.
Havelock Ellis, in Studies in the Psychology of Sex, argued that there is no clear distinction between the aspects of sadism and masochism, and that they may be regarded as complementary emotional states. He also made the important point that sadomasochism is concerned only with pain in regard to sexual pleasure, and not in regard to cruelty, as Freud had suggested. In other words, the sadomasochist generally desires that the pain be inflicted or received in love, not in abuse, for the pleasure of either one or both participants. This mutual pleasure may even be essential for the satisfaction of those involved.
Here Ellis touches upon the often paradoxical nature of consensual S&M. It is not only pain to initiate pleasure, but violence—or the simulation of violence—to express love. This contradictory character is perhaps most evident in the observation by some that not only are sadomasochistic activities usually done for the benefit of the masochist, but that it is often the masochist that controls them, through subtle emotional cues received by the sadist.
In his essay Coldness and Cruelty, (originally Présentation de Sacher-Masoch, 1967) Gilles Deleuze rejects the term 'sadomasochism' as artificial, especially in the context of the prototypical masochistic work, Sacher-Masoch's Venus In Furs. Deleuze instead argues that the tendency toward masochism is based on desire brought on from the delay of gratification. Taken to its extreme, an infinite delay, this is manifested as perpetual coldness. The masochist derives pleasure from, as Deleuze puts it, The Contract: the process by which he can control another individual and turn the individual into someone cold and callous. The Sadist, in contrast, derives pleasure from The Law: the unavoidable power that places one person below another. The sadist attempts to destroy the ego in an effort to unify the id and super-ego, in effect gratifying the most base desires the sadist can express while ignoring or completely suppressing the will of the ego, or of the conscience. Thus, Deleuze attempts to argue that Masochism and Sadism arise from such different impulses that the combination of the two terms is meaningless and misleading. The perceived sadistic capabilities of masochists are treated by Deleuze as reactions to masochism. Indeed, in the epilogue of Venus In Furs, the character of Severin has become bitter from his experiment in masochism, and advocates instead the domination of women.
Before Deleuze, however, Sartre had presented his own theory of sadism and masochism, at which Deleuze's deconstructive attack, which took away the symmetry of the two roles, was probably directed. Because the pleasure or power in looking at the victim figures prominently in sadism and masochism, Sartre was able to link these phenomena to his famous philosophy of the Look of the Other. Sartre argued that masochism is an attempt by the For-itself (consciousness) to reduce itself to nothing, becoming an object that is drowned out by the "abyss of the Other's subjectivity." By this Sartre means that, given that the For-itself desires to attain a point of view in which it is both subject and object, one possible strategy is to gather and intensify every feeling and posture in which the self appears as an object to be rejected, tested, and humiliated; and in this way the For-itself strives toward a point of view in which there is only one subjectivity in the relationship, which would be both that of the abuser and the abused. Conversely, of course, Sartre held sadism to be the effort to annihilate the subjectivity of the victim. That would mean that the sadist, who is exhilarated in the emotional distress of the victim, is such because he or she also seeks to assume a subjectivity which would take a point of view on the victim, and on itself, as both subject and object.
This argument may appear stronger if it is somehow understood that the Look of the Other is either only an aspect of the other faculties of desire, or somehow its primary faculty. It does not account for the turn that Deleuze took for his own philosophy of these matters, but this premise of desire-as-Look is associated with the view always attacked by Deleuze, in what he regarded as the essential error of "desire as lack," and which he identified in the philosophical temperament of Plato, Socrates, and Lacan. For Deleuze, insofar as desire is a lack it is reducible to the Look.
Finally, after Deleuze, Rene Girard included his account of sado-masochism in Things Hidden Since the Foundation of The World, originally Des choses cachées depuis la fondation du monde, 1978, making the chapter on masochism a coherent part of his theory of mimetic desire. In this view of sado-masochism, the violence of the practices is an expression of a peripheral rivalry that has developed around the actual love-object. There is clearly a similarity to Deleuze, since both in the violence surrounding the memory of mimetic crisis and its avoidance, and in the resistance to affection that is focused on by Deleuze, there is an understanding of the value of the love object in terms of the processes of its valuation, acquisition and the test it imposes on the suitor.
Many theorists, particularly feminist theorists, have suggested that sadomasochism is an inherent part of modern Western culture. According to these theories, sex and relationships are both consistently taught to be formulated within a framework of male dominance and female submission. Some theorists further link this hypothesized framework to inequalities of gender, class, and race which remain a substantial part of society, despite the efforts of the civil rights movement and feminism.
There are a number of reasons commonly given for why a sadomasochist finds the practice of S&M enjoyable, and the answer is largely dependent on the individual. For some, taking on a role of compliance or helplessness offers a form of therapeutic escape; from the stresses of life, from responsibility, or from guilt. For others, being under the power of a strong, controlling presence may evoke the feelings of safety and protection associated with childhood. They likewise may derive satisfaction from earning the approval of that figure (see: Servitude (BDSM)). A sadist, on the other hand, may enjoy the feeling of power and authority that comes from playing the dominant role, or receive pleasure vicariously through the suffering of the masochist. It is poorly understood, though, what ultimately connects these emotional experiences to sexual gratification, or how that connection initially forms. Dr. Joseph Merlino, author and psychiatry adviser to the New York Daily News, said in an interview that a sadomasochistic relationship, as long as it is consensual, is not a psychological problem:
"It's a problem only if it is getting that individual into difficulties, if he or she is not happy with it, or it's causing problems in their personal or professional lives. If it's not, I'm not seeing that as a problem. But assuming that it did, what I would wonder about is what is his or her biology that would cause a tendency toward a problem, and dynamically, what were the experiences this individual had that led him or her toward one of the ends of the spectrum."
It is usually agreed on by psychologists that experiences during early sexual development can have a profound effect on the character of sexuality later in life. Sadomasochistic desires, however, seem to form at a variety of ages. Some individuals report having had them before puberty, while others do not discover them until well into adulthood. According to one study, the majority of male sadomasochists (53%) developed their interest before the age of 15, while the majority of females (78%) developed their interest afterwards (Breslow, Evans, and Langley 1985). Like sexual fetishes, sadomasochism can be learned through conditioning—in this context, the repeated association of sexual pleasure with an object or stimulus.
With the publication of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) in 1994, new diagnostic criteria became available that no longer describe sadomasochism per se as a disorder of sexual preference; sadism and masochism are no longer regarded as illnesses in and of themselves. The DSM-IV asserts that "The fantasies, sexual urges, or behaviors" must "cause clinically significant distress or impairment in social, occupational, or other important areas of functioning" in order for sexual sadism or masochism to be considered a disorder. The manual's latest edition (DSM-IV-TR) requires that the activity must be the sole means of sexual gratification for a period of six (6) months, and either cause "clinically significant distress or impairment in social, occupational, or other important areas of functioning" or involve a violation of consent to be diagnosed as a paraphilia. Overlap between sexual preference disorders and sadomasochistic practice can occur, however.
Real life
The term BDSM describes the activities between consenting partners that contain sadistic and masochistic elements. Many behaviors, such as erotic spanking, tickling and love-bites, that many people think of only as "rough" sex also contain elements of sado-masochism. Note that legal consent may not be accepted as a defense to criminal charges in some jurisdictions, and very few jurisdictions will permit consent as a defense to serious bodily injury.
In certain extreme cases, sadism and masochism can include fantasies, sexual urges or behavior that cause significant distress or impairment in social, occupational, or other important areas of functioning, to the point that they can be considered part of a mental disorder. However, this is an uncommon case, and psychiatrists are now moving towards regarding sadism and masochism not as disorders in and of themselves, but only as disorders when associated with other problems such as a personality disorder.
"Sadism" and "masochism," in the context of consensual sexual activities, are not strictly accurate terms, at least by the psychological definitions. "Sadism" in absolute terms refers to someone whose pleasure in causing pain does not depend on the consent of the "victim." Indeed, a lack of consent may be a requisite part of the experience for a true sadist. Similarly, the masochist in consensual BDSM is someone who enjoys sexual fantasies or urges for being beaten, humiliated, bound, tortured, or otherwise made to suffer, either as an enhancement to or a substitute for sexual pleasure, usually according to a certain scripted and mutually agreed upon "scene." These "masochists" do not usually enjoy pain in other scenarios, such as accidental injury, medical procedures, and so on.
Similarly, the exchange of power in S&M may not be along the expected lines. While it might be assumed that the "top"—the person who gives the sensation or causes the humiliation—is the one with the power, the actual power may lie with the "bottom," who typically creates the script, or at least sets the boundaries, by which the S&M practitioners play. Ernulf and Innala (1995) observed discussions among individuals with such interests, one of whom described the goal of hyperdominants (p. 644).
See also
- Bondage (BDSM)
- Bottom (BDSM)
- Domination & submission (BDSM)
- Masochistic personality
- Sadism and masochism as medical terms
- Sadomasochistic personality
- Self harm
- Sexual sadism
- Top (BDSM)
- ↑ For details on the development of the theoretical construct "perversion" by Krafft-Ebing and its relation to these terms, see Andrea Beckmann, Journal of Criminal Justice and Popular Culture, 8(2) (2001) 66-95, available online at Deconstructing Myths
- ↑ Isidor Isaak Sadger: Über den sado-masochistischen Komplex. in: Jahrbuch für psychoanalytische und psychopathologische Forschungen, Bd. 5, 1913, S. 157–232 (German)
- ↑ von Krafft-Ebing, Richard . "Masochism" Psychopathia Sexualis, 131. "[The masochist] is controlled by the idea of being completely and unconditionally subject to the will of a person of the opposite sex; of being treated by this person as by a master, humiliated and abused. This idea is coloured by lustful feeling; the masochist lives in fancies, in which he creates situations of this kind and often attempts to realise them"
- ↑ Jean-Paul Sartre, Being and Nothingness
- ↑ Interview with Dr. Joseph Merlino, David Shankbone, Wikinews, October 5, 2007.
- ↑ Letter to the Editor of The American Journal of Psychiatry: Change in Criterion for Paraphilias in DSM-IV-TR. Russell B. Hilliard, Robert L. Spitzer. 2002. Retrieved: 23 November, 2007.
- ↑ Ernulf, K. E., & Innala, S. M. (1995). Sexual bondage: A review and unobtrusive investigation. Archives of Sexual Behavior, 24, 631–654.
- Alize, M. (2007). Experiences of a pro-domme. New York, NY: Palgrave Macmillan.
- Barker, M. (2007). Turning the world upside down: Developing a tool for training about SM. New York, NY: Palgrave Macmillan.
- Barker, M., Gupta, C., & Iantaffi, A. (2007). The power of play: The potentials and pitfalls in healing narratives of BDSM. New York, NY: Palgrave Macmillan.
- Bauer, R. (2007). Playgrounds and new territories--The potential of BDSM practices to queer genders. New York, NY: Palgrave Macmillan.
- Beckmann, A. (2007). The 'bodily practices' of consensual 'SM', spirituality and 'transcendence'. New York, NY: Palgrave Macmillan.
- Brenner, I. (1991). The unconscious wish to develop AIDS: A case report. Madison, CT: International Universities Press, Inc.
- Bridoux, D. (2000). Kink therapy: SM and sexual minorities. Maidenhead, BRK, England: Open University Press.
- Brothers, D. (1997). The leather princess: Sadomasochism as the rescripting of trauma scenarios. Mahwah, NJ: Analytic Press.
- Brothers, D. (2003). Cinderella's Gender Trouble: Sadomasochism as the Intersubjective Regulation of Uncertainty. Westport, CT: Praeger Publishers/Greenwood Publishing Group.
- Chaline, E. (2007). On becoming a gay SMer: A sexual scripting perspective. New York, NY: Palgrave Macmillan.
- Chancer, L. S. (2000). Fromm, sadomasochism, and contemporary American crime. Champaign, IL: University of Illinois Press.
- Coen, S. J. (1988). Sadomasochistic excitement: Character disorder and perversion. Hillsdale, NJ, England: Analytic Press, Inc.
- Denkinson, G. (2007). SM and sexual freedom: A life history. New York, NY: Palgrave Macmillan.
- De Masi, F. (1999). The sadomasochistic perversion: The entity and the theories. London, England: Karnac Books.
- Dillon-Weston, M. (1997). From sado-masochism to shared sadness. London, England: Jessica Kingsley Publishers.
- Downing, L. (2007). Beyond safety: Erotic asphyxiation and the limits of SM discourse. New York, NY: Palgrave Macmillan.
- Easton, D. (2007). Shadowplay: S/M journeys to our selves. New York, NY: Palgrave Macmillan.
- Glenn, J. (1998). Dora as an adolescent: Sadistic and sadomasochistic fantasies. Mahwah, NJ: Analytic Press.
- Glickauf-Hughes, C. (1996). Sadomasochistic interactions. Oxford, England: John Wiley & Sons.
- Gosselin, C. (1984). Fetishism, sadomasochism and related behaviours. Cambridge, MA: Basil Blackwell.
- Gosselin, C. C. (1987). The sadomasochistic contract. Baltimore, MD: Johns Hopkins University Press.
- Green, R. (2007). Total power exchange in a modern family: A personal perspective. New York, NY: Palgrave Macmillan.
- Halberstadt-Freud, H. C. (1991). Freud, Proust, perversion and love. Lisse, Netherlands: Swets & Zeitlinger Publishers.
- Hemphill, R. E., & Zabow, T. (1992). Clinical vampirism: A presentation of 3 cases and a reevaluation of Haigh, the "acid-bath-murderer." Philadelphia, PA: Brunner/Mazel.
- Henkin, W. A. (2007). Some beneficial aspects of exploring personas and role play in the BDSM context. New York, NY: Palgrave Macmillan.
- Joffe, H. (2006). Dynamics of sadomasochism in the film The night porter. Lanham, MD: Jason Aronson.
- Kantor, M. (2002). Passive-aggression: A guide for the therapist, the patient and the victim. Westport, CT: Praeger Publishers/Greenwood Publishing Group.
- Langdridge, D. (2007). Speaking the unspeakable: S/M and the eroticisation of pain. New York, NY: Palgrave Macmillan.
- Langdridge, D., & Barker, M. (2007). Safe, sane and consensual: Contemporary perspectives on sadomasochism. New York, NY: Palgrave Macmillan.
- Langdridge, D., & Barker, M. (2007). Situating sadomasochism. New York, NY: Palgrave Macmillan.
- McGrath, M., & Turvey, B. E. (2008). Sexual asphyxia. San Diego, CA: Elsevier Academic Press.
- Merck, M. (2006). The feminist ethics of lesbian sadomasochism. London, England: Karnac Books.
- Montgomery, J. D. (1989). The return of masochistic behavior in the absence of the analyst. Madison, CT: International Universities Press, Inc.
- Moser, C., & Kleinplatz, P. J. (2007). Themes of SM expression. New York, NY: Palgrave Macmillan.
- Novick, J., & Novick, K. K. (1997). Not for barbarians: An appreciation of Freud's "A Child is Being Beaten". New Haven, CT: Yale University Press.
- Phillips, Anita (1998). A Defense of Masochism. ISBN 0-312-19258-4.
- Reiersol, O., & Skeid, S. (2006). The ICD Diagnoses of Fetishism and Sadomasochism. Journal of Homosexuality, Harrington Park Press, Vol. 50, No. 2/3, pp. 243-262.
- Rathbone, J. (2001). Anatomy of masochism. New York, NY: Kluwer Academic/Plenum Publishers.
- Ross, J. M. (1997). The sadomasochism of everyday life: Why we hurt ourselves--and others--and how to stop. New York, NY: Simon & Schuster.
- Sacksteder, J. L. (1989). Sadomasochistic relatedness to the body in anorexia nervosa. Madison, CT: International Universities Press, Inc.
- Saez, F., & Viñuales, O. (2007). Armarios de Cuero. Editorial Bellaterra. ISBN 84-7290-345-6
- Santtila, P., Sandnabba, N. K., & Nordling, N. (2006). Sadomasochism. Westport, CT: Praeger Publishers/Greenwood Publishing Group.
- Schad-Somers, S. P. (1982). Sadomasochism: Etiology and treatment. New York, NY: Human Sciences Press.
- Schapiro, B. (2000). Sadomasochism as intersubjective breakdown in D. H. Lawrence's "The Woman Who Rode Away". Albany, NY: State University of New York Press.
- Shengold, L. (1997). Comments on Freud's "'A Child is Being Beaten': A Contribution to the Study of the Origin of Sexual Perversions". New Haven, CT: Yale University Press.
- Sisson, K. (2007). The cultural formation of S/M: History and analysis. New York, NY: Palgrave Macmillan.
- Sophia. (2007). Who is in charge in an SM scene? New York, NY: Palgrave Macmillan.
- Stoller, R. J. (1989). Consensual sadomasochistic perversions. Madison, CT: International Universities Press, Inc.
- Weait, M. (2007). Sadomasochism and the law. New York, NY: Palgrave Macmillan.
- Yost, M. R. (2007). Sexual fantasies of S/M practitioners: The impact of gender and S/M role on fantasy content. New York, NY: Palgrave Macmillan.
- Ahrens, S. (2006). The paradox of sadomasochism: Zeitschrift fur Sexualforschung Vol 19(4) Dec 2006, 279-294.
- Alison, L., Santtila, P., & Sandnabba, N. K. (2001). Sadomasochistically oriented behavior: Diversity in practice and meaning: Archives of Sexual Behavior Vol 30(1) Feb 2001, 1-12.
- Arroyo, A. P., & Escarcega, J. S. (2006). The sado-masochist perversion pair. A clinical case: Revista Intercontinental de Psicologia y Educacion Vol 8(2) Jul-Dec 2006, 41-60.
- Avery, N. C. (1977). Sadomasochism: A defense against object loss: Psychoanalytic Review Vol 64(1) Spr 1977, 101-109.
- Bach, S. (2002). Sadomasochism in Clinical Practice and Everyday Life: Journal of Clinical Psychoanalysis Vol 11(2) Spr 2002, 225-235.
- Bach, S., & Hacker, A.-L. (2002). Sadomasochism in clinical practice and everyday life: Revue Francaise de Psychanalyse Vol 66(4) Oct-Dec 2002, 1215-1224.
- Bauduin, A. (1994). The erotic alienation of the daughter to her mother: Revue Francaise de Psychanalyse Vol 58(1) Jan-Mar 1994, 17-32.
- Berg, T. (1981). Object-splitting, dominance and submission in families of borderline adolescents: Tidsskrift for Norsk Psykologforening Vol 18(11) Nov 1981, 571-577.
- Berg, T. (1986). Narcissus: Master/slave in the mirror of each other: Tidsskrift for Norsk Psykologforening Vol 23(3) Mar 1986, 152-160.
- Berner, W. (1991). Sado-masochism in a woman: Report on a psychoanalytical therapy: Zeitschrift fur Sexualforschung Vol 4(1) Mar 1991, 45-57.
- Berner, W. (1997). Forms of sadism: Zeitschrift fur Psychoanalytische Theorie und Praxis Vol 12(2) 1997, 166-182.
- Birkett, D. (2004). Review of The Sado-Masochistic Perversion: The Entity and the Theories: British Journal of Psychotherapy Vol 21(2) Win 2004, 341-344.
- Biven, B. M. (1997). Dehumanization as an enactment of serial killers: A sadomasochistic case study: Journal of Analytic Social Work Vol 4(2) 1997, 23-49.
- Blizard, R. A. (2001). Masochistic and sadistic ego states: Dissociative solutions to the dilemma of attachment to an abusive caretaker: Journal of Trauma & Dissociation Vol 2(4) 2001, 37-58.
- Blos, P., Jr. (1991). Sadomasochism and the defense against recall of painful affect: Journal of the American Psychoanalytic Association Vol 39(2) 1991, 417-430.
- Blum, H. P. (1978). Psychoanalytic study of an unusual perversion: Discussion: Journal of the American Psychoanalytic Association Vol 26(4) 1978, 785-792.
- Blum, H. P. (1991). Sadomasochism in the psychoanalytic process, within and beyond the pleasure principle: Discussion: Journal of the American Psychoanalytic Association Vol 39(2) 1991, 431-450.
- Breslow, N., Evans, L., & Langley, J. (1985). On the prevalence and roles of females in the sadomasochistic subculture: Report of an empirical study: Archives of Sexual Behavior Vol 14(4) Aug 1985, 303-317.
- Calogeras, R. C. (1994). Sadomasochistic object relations: Some clinical observations: Forum der Psychoanalyse: Zeitschrift fur klinische Theorie & Praxis Vol 10(2) Jun 1994, 97-115.
- Cappon, J. (1975). Masochism: A trait in the Mexican national character: International Mental Health Research Newsletter Vol 17(1) Spr 1975, 2.
- Catano, J. V. (2003). Labored language: Anxiety and sadomasochism in steel industry tales of masculinity: Men and Masculinities Vol 6(1) Jul 2003, 3-30.
- Celenza, A. (2000). Sadomasochistic relating: What's sex got to do with it? : Psychoanalytic Quarterly Vol 69(3) Jul 2000, 527-543.
- Chancer, L. S. (2004). Rethinking domestic violence in theory and practice: Deviant Behavior Vol 25(3) May-Jun 2004, 255-275.
- Chasseguet-Smirgel, J. (1991). Sadomasochism in the perversions: Some thoughts on the destruction of reality: Journal of the American Psychoanalytic Association Vol 39(2) 1991, 399-415.
- Chefetz, R. A. (2000). Disorder in the therapist's view of the self: Working with the person with dissociative identity disorder: Psychoanalytic Inquiry Vol 20(2) 2000, 305-329.
- Claus, C., & Lidberg, L. (2003). Ego-boundary disturbances in sadomasochism: International Journal of Law and Psychiatry Vol 26(2) Mar-Apr 2003, 151-163.
- Corman, L. (1977). Moral masochism identified by the use of projective tests: Bulletin de Psychologie Vol 31(18) Sep-Oct 1977-1978, 915-922.
- Cross, P. A., & Matheson, K. (2006). Understanding Sadomasochism: An Empirical Examination of Four Perspectives: Journal of Homosexuality Vol 50(2-3) 2006, 133-166.
- Cycon, R. (1994). Sadomasochism in the transference/countertransference as a defense against psychic pain: Psychoanalytic Inquiry Vol 14(3) 1994, 441-450.
- Damon, W. (2002). Dominance, Sexism, and Inadequacy: Testing a Compensatory Conceptualization in a Sample of Heterosexual Men Involved in SM: Journal of Psychology & Human Sexuality Vol 14(4) 2002, 25-45.
- Dancer, P. L., Kleinplatz, P. J., & Moser, C. (2006). 24/7 SM Slavery: Journal of Homosexuality Vol 50(2-3) 2006, 81-101.
- De Groot, M. (2008). Review of Sadomasochism in everyday life--The dynamics of power and powerlessness: Sexual and Relationship Therapy Vol 23(2) May 2008, 171.
- Dekker, A. (2007). Splash and Clash in Regensburg? A conference of the German Society for Sexuality Research on "Perspectives on Sadomasochism." Zeitschrift fur Sexualforschung Vol 20(3) Sep 2007, 263-266.
- Dervin, D. (1986). Edmond: Is there such a thing as a sick play? : Psychoanalytic Review Vol 73(1) Spr 1986, 111-119.
- Donnelly, D., & Fraser, J. (1998). Gender differences in sado-masochistic arousal among college students: Sex Roles Vol 39(5-6) Sep 1998, 391-407.
- Downing, L. (2004). On the limits of sexual ethics: The phenomenology of autassassinophilia: Sexuality & Culture: An Interdisciplinary Quarterly Vol 8(1) Win 2004, 3-17.
- Dubinsky, A. (1986). The sado-masochistic phantasies of two adolescent boys suffering from congenital physical illnesses: Journal of Child Psychotherapy Vol 12(1) 1986, 73-85.
- Durkin, K. F. (2007). Show me the money: Cybershrews and on-line money masochists: Deviant Behavior Vol 28(4) 2007, 355-378.
- Fakhry Davids, M. (1997). Sado-masochism as a defence: Psycho-analytic Psychotherapy in South Africa Vol 5(2) 1997, 51-64.
- Finell, J. S. (1992). Sadomasochism and complementarity in the interaction of the narcissistic and borderline personality type: Psychoanalytic Review Vol 79(3) Fal 1992, 361-379.
- Gabbard, K. (1997). The circulation of sado-masochistic desire in the Lolita texts: PsyART Vol 1 1997,
- Gagnier, T. T., & Robertiello, R. C. (1993). Sado-masochism as a defense against merging: Six case studies: Journal of Contemporary Psychotherapy Vol 23(3) Fal 1993, 183-192.
- Geltner, P. (2005). Countertransference in Projective Identification and Sadomasochistic States: Modern Psychoanalysis Vol 30(1) 2005, 73-91.
- Glasser, M. (1998). On violence: A preliminary communication: International Journal of Psycho-Analysis Vol 79(5) Oct 1998, 887-902.
- Glasser, M. (1999). "On violence": Reply: International Journal of Psycho-Analysis Vol 80(3) Jun 1999, 627-628.
- Gosselin, C. C., Wilson, G. D., & Barrett, P. T. (1991). The personality and sexual preferences of sadomasochistic women: Personality and Individual Differences Vol 12(1) 1991, 11-15.
- Goulding, M. M. (1998). Sadomasochism in psychotherapy with nonpsychotic clients: Comments on Ken Wood's "The danger of sadomasochism in the reparenting of psychotics." Transactional Analysis Journal Vol 28(1) Jan 1998, 55-56.
- Green, R. (2001). (Serious) sadomasochism: A protected right of privacy? : Archives of Sexual Behavior Vol 30(5) Oct 2001, 543-550.
- Grindstaff, D. (2003). Queering Marriage: An Ideographic Interrogation of Heteronormative Subjectivity: Journal of Homosexuality Vol 45(2-4) 2003, 257-275.
- Grossman, W. I. (1991). Pain, aggression, fantasy, and concepts of sadomasochism: Psychoanalytic Quarterly Vol 60(1) Jan 1991, 22-52.
- Groth, M. (2000). On the Umbilicus as a bisexual symbol: Psychoanalytic Psychology Vol 17(2) Spr 2000, 360-365.
- Hanly, M.-A. F. (1993). Sado-masochism in Charlotte Bronte's Jane Eyre: A ridge of lighted heath: International Journal of Psycho-Analysis Vol 74(5) Oct 1993, 1049-1061.
- Hekma, G. (2007). Review of Sadomasochism: Powerful pleasures: Sexualities Vol 10(3) Jul 2007, 391-392.
- Henny, R. (1998). Metapsychological position of anality: Revue Francaise de Psychanalyse Vol 62(5) Nov-Dec 1998, 1749-1755.
- Henry-Sejourne, M. (1997). Anais Nin: Letter to her father: Cahiers Jungiens de Psychanalyse No 89 Sum 1997, 63-79.
- Hillman, C. (2006). Relational treatment of a borderline analysand: International Forum of Psychoanalysis Vol 15(3) Sep 2006, 178-182.
- Hitzler, R. (1993). Agonising choices: A glimpse into the small realm of the algophile: Zeitschrift fur Sexualforschung Vol 6(3) Sep 1993, 228-242.
- Hollan, D. (2004). Self systems, cultural idioms of distress, and the psycho-bodily consequences of childhood suffering: Transcultural Psychiatry Vol 41(1) Mar 2004, 62-79.
- Houlberg, R. (1991). The magazine of a sadomasochism club: The tie that binds: Journal of Homosexuality Vol 21(1-2) 1991, 167-183.
- Hughes, M. A. (1983). Transfer perversion: Revue Francaise de Psychanalyse Vol 47(1) Jan-Feb 1983, 357-363.
- Joseph, B. (1982). Addiction to near-death: International Journal of Psycho-Analysis Vol 63(4) 1982, 449-456.
- Joshi, S. (2003). 'Watcha gonna do when they cum all over you?' What police themes in male erotic video reveal about (leather)sexual subjectivity: Sexualities Vol 6(3-4) Nov 2003, 325-342.
- Karol, C. (1980). The role of primal scene and masochism in asthma: International Journal of Psychoanalytic Psychotherapy Vol 8 1980-1981, 577-592.
- Keiter, R. H. (1975). Psychotherapy of moral masochism: American Journal of Psychotherapy Vol 29(1) Jan 1975, 56-65.
- Kennedy, H. (1989). Sadomasochistic perversion in adolescence: A developmental-historical observation: Zeitschrift fur Psychoanalytische Theorie und Praxis Vol 4(4) 1989, 348-360.
- Kennedy, K. (2000). Writing trash: Truth and the sexual outlaw's reinvention of lesbian identity: Feminist Theory Vol 1(2) Aug 2000, 151-172.
- Kernberg, O. F. (1991). Sadomasochism, sexual excitement, and perversion: Journal of the American Psychoanalytic Association Vol 39(2) 1991, 333-362.
- Kernberg, O. F. (1993). Sadomasochism, sexual excitement and perversion: Zeitschrift fur Psychoanalytische Theorie und Praxis Vol 8(4) 1993, 319-341.
- Klein, M., & Moser, C. (2006). SM (Sadomasochistic) Interests as an Issue in a Child Custody Proceeding: Journal of Homosexuality Vol 50(2-3) 2006, 233-242.
- Kleinplatz, P. J. (2006). Learning from Extraordinary Lovers: Lessons from the Edge: Journal of Homosexuality Vol 50(2-3) 2006, 325-348.
- Kolmes, K., Stock, W., & Moser, C. (2006). Investigating Bias in Psychotherapy with BDSM Clients: Journal of Homosexuality Vol 50(2-3) 2006, 301-324.
- Kovel, C. C. (2000). Cross-cultural dimensions of sadomasochism in the psychoanalytic situation: Journal of the American Academy of Psychoanalysis & Dynamic Psychiatry Vol 28(1) Spr 2000, 51-62.
- Krambeck, K. (1989). Erotic forms of hatred: Psicopatologia Vol 9(1) Jan-Mar 1989, 1-9.
- Kulick, D. (2003). No: Language & Communication Vol 23(2) Apr 2003, 139-151.
- Langdridge, D., & Butt, T. (2004). A Hermeneutic Phenomenological Investigation of the Construction of Sadomasochistic Identities: Sexualities Vol 7(1) Feb 2004, 31-53.
- Langdridge, D., & Butt, T. (2005). The Erotic Construction of Power Exchange: Journal of Constructivist Psychology Vol 18(1) Jan-Mar 2005, 65-73.
- Lawner, P. (1979). Sado-masochism and imperiled self: Issues in Ego Psychology Vol 2(1) 1979, 22-29.
- Lawrence, A. A., & Love-Crowell, J. (2008). Psychotherapists' experience with clients who engage in consensual sadomasochism: A qualitative study: Journal of Sex & Marital Therapy Vol 34(1) Jan-Feb 2008, 63-81.
- Leonhard, K. (1986). Sadomasochism and dream in the background of Kafka's works: Psychiatrie, Neurologie und Medizinische Psychologie Vol 38(6) Jun 1986, 315-323.
- Lerner, H. D. (2001). A two-systems approach to the treatment of a disturbed adolescent: Psychoanalytic Social Work Vol 8(3-4) 2001, 123-142.
- Lerner, P. M., & Lerner, H. D. (1996). Further notes on a case of possible multiple personality disorder: Masochism, omnipotence, and entitlement: Psychoanalytic Psychology Vol 13(3) Sum 1996, 403-416.
- Levitt, E. E., Moser, C., & Jamison, K. V. (1994). The prevalence and some attributes of females in the sadomasochistic subculture: A second report: Archives of Sexual Behavior Vol 23(4) Aug 1994, 465-473.
- Lichtenberg, J. D. (2002). Intimacy with the gendered self: Selbstpsychologie: Europaische Zeitschrift fur psychoanalytische Therapie und Forschung/ Self Psychology: European Journal for Psychoanalytic Therapy and Research Vol 3(7) 2002, 13-60.
- Luca, M. (2002). Containment of the sexualized and erotized transference: Journal of Clinical Psychoanalysis Vol 11(4) Fal 2002, 649-662.
- Luca, M. (2002). Response to Dr. Baudry's commentary: Journal of Clinical Psychoanalysis Vol 11(4) Fal 2002, 672-674.
- Mahoney, J. M. (1998). Strategic sadomasochism in lesbian relationships: Psychology: A Journal of Human Behavior Vol 35(1) 1998, 41-43.
- Maidi, H. (1997). Can an innocent person love a guilty one? : Topique: Revue Freudienne Vol 27(62) 1997, 137-154.
- Manninen, V., & Absetz, K. (2000). The face of fear: Castration and perversion: Scandinavian Psychoanalytic Review Vol 23(2) 2000, 193-215.
- Meloy, J. R. (1999). "On violence": Comment: International Journal of Psycho-Analysis Vol 80(3) Jun 1999, 626-627.
- Messer, J. M., & Fremouw, W. J. (2008). A critical review of explanatory models for self-mutilating behaviors in adolescents: Clinical Psychology Review Vol 28(1) Jan 2008, 162-178.
- Mintz, I. L. (1980). Multideterminism in asthmatic disease: International Journal of Psychoanalytic Psychotherapy Vol 8 1980-1981, 593-600.
- Mollinger, R. N. (1982). Sadomasochism and developmental stages: Psychoanalytic Review Vol 69(3) Fal 1982, 379-389.
- Morgenstern, S. (1987). The Personality structure and its deviations: Revue Francaise de Psychanalyse 51(1) Jan-Feb 1987, 119-122.
- Moser, C. (1988). Sadomasochism: Journal of Social Work & Human Sexuality Vol 7(1) 1988, 43-56.
- Moser, C., & Kleinplatz, P. J. (2006). Introduction: The State of Our Knowledge on SM: Journal of Homosexuality Vol 50(2-3) 2006, 1-15.
- Naylor, B. A. (1986). Sadomasochism in children and adolescents: A contemporary treatment approach: Psychotherapy: Theory, Research, Practice, Training Vol 23(4) Win 1986, 586-592.
- Nichols, M. (2006). Psychotherapeutic Issues with "Kinky" Clients: Clinical Problems, Yours and Theirs: Journal of Homosexuality Vol 50(2-3) 2006, 281-300.
- Nordling, N., Sandnabba, N. K., & Santtila, P. (2000). The prevalence and effects of self-reported childhood sexual abuse among sadomasochistically oriented males and females: Journal of Child Sexual Abuse Vol 9(1) 2000, 53-63.
- Nordling, N., Sandnabba, N. K., Santtila, P., & Alison, L. (2006). Differences and Similarities Between Gay and Straight Individuals Involved in the Sadomasochistic Subculture: Journal of Homosexuality Vol 50(2-3) 2006, 41-57.
- Novick, J. (2003). Naughty love: PsycCRITIQUES Vol 48 (5), Oct, 2003.
- Novick, J., & Novick, K. K. (1998). Fearful symmetry: The development and treatment of sadomasochism: Psychoanalytic Psychology Vol 15(1) Win 1998, 168-171.
- Novick, J., & Novick, K. K. (2004). The Superego and the Two-System Model: Psychoanalytic Inquiry Vol 24(2) 2004, 232-356.
- Novick, J., Novick, K. K., & Hacker, A.-L. (2002). A developmental theory of sadomasochism: Revue Francaise de Psychanalyse Vol 66(4) Oct-Dec 2002, 1133-1155.
- Ogden, D. S. (1996). Richardson's narrative space-off: Freud, vision and the (heterosexual) problem of reading Clarissa: Literature and Psychology Vol 42(4) 1996, 37-52.
- Ormerod, D. (1994). Sado-masochism: Journal of Forensic Psychiatry Vol 5(1) May 1994, 123-136.
- Pizzato, M. (2005). A Post-9/11 Passion: Review of Mel Gibson's The Passion of the Christ: Pastoral Psychology Vol 53(4) Mar 2005, 371-376.
- Queen, C. (1996). Women, S/M, and therapy: Women & Therapy Vol 19(4) 1996, 65-73.
- Quinodoz, J.-M. (1992). Homosexuality and separation anxiety: Revue Francaise de Psychanalyse Vol 56 May 1992, 1643-1650.
- Quinsey, V. L., Chaplin, T. C., & Upfold, D. (1984). Sexual arousal to nonsexual violence and sadomasochistic themes among rapists and non-sex-offenders: Journal of Consulting and Clinical Psychology Vol 52(4) Aug 1984, 651-657.
- Ramsay, R. L. (1991). The sado-masochism of representation in French texts of modernity: The power of the erotic and the eroticization of power in the work of Marguerite Duras and Alain Robbe-Grillet: Literature and Psychology Vol 37(3) 1991, 18-28.
- Rea, J. E. (1985). James Joyce's Bloom: The mongrel imagery: American Imago Vol 42(1) Spr 1985, 39-43.
- Reed, G. S. (1999). Analysts who submit and patients who comply: Sadomasochistic transference/countertransference interchanges and their rationalizations: Psychoanalytic Inquiry Vol 19(1) 1999, 82-96.
- Reiersol, O., & Skeid, S. (2006). The ICD Diagnoses of Fetishism and Sadomasochism: Journal of Homosexuality Vol 50(2-3) 2006, 243-262.
- Richards, A. K. (1989). A romance with pain: A telephone perversion in a woman? : International Journal of Psycho-Analysis Vol 70(1) 1989, 153-164.
- Richards, A. K. (2002). Sadomasochistic Perversion and the Analytic Situation: Journal of Clinical Psychoanalysis Vol 11(3) Sum 2002, 359-377.
- Richters, J., de Visser, R. O., Rissel, C. E., Grulich, A. E., & Smith, A. M. A. (2008). Demographic and psychosocial features of participants in bondage and discipline, "sadomasochism" or dominance and submission (BDSM): Data from a national survey: Journal of Sexual Medicine Vol 5(7) Jul 2008, 1660-1668.
- Ridinger, R. B. (2006). Negotiating Limits: The Legal Status of SM in the United States: Journal of Homosexuality Vol 50(2-3) 2006, 189-216.
- Roussillon, R. (2002). The clinical decomposition of sadism: Revue Francaise de Psychanalyse Vol 66(4) Oct-Dec 2002, 1167-1180.
- Ryle, A. (1993). Addiction to the death instinct? A critical review of Joseph's paper "Addiction to Near Death." British Journal of Psychotherapy Vol 10(1) Fal 1993, 88-92.
- Salvage, D. (2006). Review of The Sadomasochistic Perversion: The Entity and the Theories: The Journal of the American Academy of Psychoanalysis and Dynamic Psychiatry Vol 34(3) Fal 2006, 559-561.
Klippel–Trénaunay syndrome (KTS or KT), formerly Klippel–Trénaunay–Weber syndrome and sometimes angioosteohypertrophy syndrome and hemangiectatic hypertrophy, is a rare congenital medical condition in which blood vessels and/or lymph vessels fail to form properly. The three main features are nevus flammeus (port-wine stain), venous and lymphatic malformations, and soft-tissue hypertrophy of the affected limb. It is similar to, though distinctly separate from, the less common Parkes-Weber syndrome.
There is disagreement as to how cases of KTS should be classified if there is an arteriovenous fistula present. Although several authorities have suggested that the term Parkes Weber syndrome is applied in those cases, ICD-10 currently uses the term "Klippel–Trénaunay–Weber syndrome".
Signs and symptoms
The birth defect is diagnosed by the presence of a combination of these symptoms (often on approximately ¼ of the body, though some cases may present more or less affected tissue):
- One or more distinctive port-wine stains with sharp borders
- Varicose veins
- Hypertrophy of bony and soft tissues, that may lead to local gigantism or shrinking, most typically in the lower body/legs.
- An improperly developed lymph system
In some cases, port-wine stains (capillary port wine type) may be absent. Such cases are very rare and may be classified as "atypical Klippel–Trenaunay syndrome".
KTS can either affect blood vessels, lymph vessels, or both. The condition most commonly presents with a mixture of the two. Those with venous involvement experience increased pain and complications, such as venous ulceration in the lower extremities.
Those with large AVMs are at risk of formation of blood clots in the vascular lesion, which may migrate to the lungs (pulmonary embolism). If there is large-volume blood flow through the lesion, "high-output heart failure" may develop due to the inability of the heart to generate sufficient cardiac output.
Port-wine stains are reported in about 98% of cases.
The birth defect affects men and women equally, and is not limited to any racial group. It is not certain if it is genetic in nature, although testing is ongoing. There is some evidence that it may be associated with a translocation at t(8;14)(q22.3;q13). Some researchers have suggested VG5Q has an association.
KTS is a complex syndrome, and no single treatment is applicable for everyone. Treatment is decided on a case-by-case basis with the individual's doctors.
At present, many of the symptoms may be treated, but there is no cure for Klippel–Trenaunay syndrome.
Debulking has been the most common treatment for KTS for several decades and while improvements have been made, the procedure is still considered invasive and has several risks associated with it. More effective and less invasive treatment choices now exist for KTS patients and therefore debulking is generally only recommended as a last resort. Debulking operations can result in major deformities and also leave patients with permanent nerve damage.
Mayo Clinic has reported the largest experience in managing KTS with major surgery. In 39 years at Mayo clinic the surgery team evaluated 252 consecutive cases of KTS, of which only 145 (57.5%) could be treated by primary surgery. The immediate success rate for treating varicose veins was only 40%, excision of vascular malformation was possible in 60%, debulking operations in 65%, and correction of bone deformity and limb length correction (epiphysiodesis) had 90% success. All the procedures demonstrated high recurrence rate in the follow-up. Mayo clinic studies demonstrate that primary surgical management of KTS has limitations and non-surgical approaches need to be developed in order to offer a better quality of life for these patients. Major surgery including amputation and debulking surgery does not seem to offer any benefit on a long-term basis.
Sclerotherapy is a treatment for specific veins and vascular malformations in the affected area. It involves the injection of a chemical into the abnormal veins to cause thickening and obstruction of the targeted vessels. Such treatment may allow normal blood flow to resume. It is a non-surgical medical procedure and is not nearly as invasive as debulking. Ultrasound guided foam sclerotherapy is the state of the art new treatment which could potentially close many large vascular malformations.
Compression therapies are finding more use as of the last ten years. The greatest issue with KTS syndrome is that the blood flow and/or lymph flow may be impeded, and will pool in the affected area. This can cause pain, swelling, inflammations, and in some cases, even ulceration and infection. Among older children and adults, compression garments can be used to alleviate almost all of these, and when combined with elevation of the affected area and proper management, can result in a comfortable lifestyle for the patient without any surgery. Compression garments are also used lately after a debulking procedure to maintain the results of the procedure. For early treatment of infants and toddlers with KTS, custom compression garments are impractical because of the rate of growth. When children may benefit from compression therapies, wraps and lymphatic massage may be used. While compression garments or therapy are not appropriate for everyone, they are relatively cheap (compared to surgery), and have few side-effects. Possible side-effects include a slight risk that the fluids may simply be displaced to an undesirable location (e.g., the groin), or that the compression therapy itself further impedes circulation to the affected extremities.
The condition was first described by French physicians Maurice Klippel and Paul Trénaunay in 1900; they referred to it as naevus vasculosus osteohypertrophicus. The German-British physician Frederick Parkes Weber described cases in 1907 and 1918 that were similar but not identical to those described by Klippel and Trénaunay.
Notable cases
- Billy Corgan, lead singer for The Smashing Pumpkins
- Patience Hodgson, lead singer for The Grates
- Casey Martin, professional golfer
- Matthias Schlitte, Arm Wrestler
- "Klippel-Trenaunay syndrome". Archived from the original on July 4, 2013. Retrieved May 15, 2014.
- James, William; Berger, Timothy; Elston, Dirk (2005). Andrews' Diseases of the Skin: Clinical Dermatology (10th ed.). Saunders. p. 585. ISBN 0-7216-2921-0.
- Lindenauer, S. Martin (1965). "The Klippel-Trenaunay Syndrome". Annals of Surgery. 162 (2): 303–14. PMC . PMID 14327016. doi:10.1097/00000658-196508000-00023.
- Cohen, M. Michael (2000). "Klippel-Trenaunay syndrome". American Journal of Medical Genetics. 93 (3): 171–5. PMID 10925375. doi:10.1002/1096-8628(20000731)93:3<171::AID-AJMG1>3.0.CO;2-K.
- Mendiratta, V; Koranne, RV; Sardana, K; Hemal, U; Solanki, RS (2004). "Klippel trenaunay Parkes-Weber syndrome". Indian journal of dermatology, venereology and leprology. 70 (2): 119–22. PMID 17642585.
- Klippel-Trenaunay syndrome: Spectrum and management
- Tian XL, Kadaba R, You SA, Liu M, Timur AA, Yang L, Chen Q, Szafranski P, Rao S, Wu L, Housman DE, DiCorleto PE, Driscoll DJ, Borrow J, Wang Q (2004). "Identification of an angiogenic factor that when mutated causes susceptibility to Klippel–Trenaunay syndrome" (PDF). Nature. 427 (6975): 640–5. PMC . PMID 14961121. doi:10.1038/nature02320. Archived from the original (PDF) on December 9, 2006.
- Wang, Q.; Timur, A.A.; Szafranski, P.; Sadgephour, A.; Jurecic, V.; Cowell, J.; Baldini, A.; Driscoll, D.J. (2001). "Identification and molecular characterization of de novo translocation t(8;14)(q22.3;q13) associated with a vascular and tissue overgrowth syndrome". Cytogenetic and Genome Research. 95 (3–4): 183–8. PMC . PMID 12063397. doi:10.1159/000059343.
- Barker, K T; Foulkes, WD; Schwartz, CE; Labadie, C; Monsell, F; Houlston, RS; Harper, J (2005). "Is the E133K allele of VG5Q associated with Klippel-Trenaunay and other overgrowth syndromes?". Journal of Medical Genetics. 43 (7): 613–4. PMC . PMID 16443853. doi:10.1136/jmg.2006.040790.
- Black, Rosemary (May 19, 2009). "What is Klippel–Trenaunay Syndrome? Brooklyn writer Carla Sosenko shares facts about condition". New York Daily News.
- Jacob, A G; Driscoll, D J; Shaughnessy, W J; Stanson, A W; Clay, R P; Gloviczki, P (1998). "Klippel-Trenaunay syndrome: Spectrum and management". Mayo Clinic Proceedings. 73 (1): 28–36. PMID 9443675. doi:10.4065/73.1.28.
- Cabrera, Juan; Cabrera Jr, J; Garcia-Olmedo, MA; Redondo, P (2003). "Treatment of Venous Malformations with Sclerosant in Microfoam Form". Archives of Dermatology. 139 (11): 1409–16. PMID 14623700. doi:10.1001/archderm.139.11.1409.
- McDonagh, B; Sorenson, S; Cohen, A; Eaton, T; Huntley, D E; La Baer, S; Campbell, K; Guptan, R C (2005). "Management of venous malformations in Klippel–Trenaunay syndrome with ultrasound-guided foam sclerotherapy". Phlebology. 20 (2): 63–81. doi:10.1258/0268355054069188.
- synd/1812 at Who Named It?
- Klippel M, Trénaunay P (1900). "Du naevus variqueux ostéohypertrophique". Archives générales de médecine. 3: 641–72.
- Weber FP (1907). "Angioma-formation in connection with hypertrophy of limbs and hemi-hypertrophy". British Journal of Dermatology. 19: 231–5.
- Weber FP (1918). "Hemangiectatic hypertrophy of limbs – congenital phlebarteriectasis and so-called congenital varicose veins". British Journal of Children's Diseases. 25: 13.
- Information from The Klippel–Trenaunay Syndrome Support Group
- KTS gene discovery implications
- New imaging techniques avoid unnecessary diagnostic tests for Klippel–Trénaunay vascular malformation from Basque Research | 1 | 5 |
A monitor or display (sometimes called a visual display unit) is an electronic visual display for computers. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube about as deep as the screen size.
Originally computer monitors were used for data processing and television receivers for entertainment; increasingly computers are being used both for data processing and entertainment. Displays exclusively for data use tend to have an aspect ratio of 4:3; those used also (or solely) for entertainment are usually 16:9 widescreen. Sometimes a compromise is used, e.g. 16:10.
Main articles: Viewable image size and Computer display standard
For any rectangular section on a round tube, the diagonal measurement is also the diameter of the tube
The area of displays with identical diagonal measurements can vary substantially.
The size of an approximately rectangular display is usually given as the distance between two opposite screen corners, that is, the diagonal of the rectangle. One problem with this method is that it does not take into account the display aspect ratio, so that for example a 16:9 21 in (53 cm) widescreen display is far less high, and has less area, than a 21 in (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 × 12.6 in (43 × 32 cm) and area 211 sq in (1,360 cm2), while the widescreen is 18.3 × 10.3 in (46 × 26 cm), 188 sq in (1,210 cm2).
For many purposes the height of the display is the main parameter; a 16:9 display needs a diagonal 22% larger than a 4:3 display for the same height.
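The geometry behind these comparisons is simple enough to check directly. The short Python sketch below is added purely as an illustration (it is not taken from any monitor specification); it reproduces the 21-inch figures and the roughly 22% diagonal difference quoted above.

import math

def screen_dimensions(diagonal, aspect_w, aspect_h):
    # Pythagoras: diagonal**2 = width**2 + height**2, with width:height = aspect_w:aspect_h.
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    width, height = aspect_w * unit, aspect_h * unit
    return width, height, width * height

print(screen_dimensions(21, 4, 3))    # roughly 16.8 x 12.6 in, 211 sq in
print(screen_dimensions(21, 16, 9))   # roughly 18.3 x 10.3 in, 188 sq in

# Diagonal a 16:9 screen needs, relative to a 4:3 screen, to match its height:
print((3 / 5) / (9 / math.hypot(16, 9)))   # roughly 1.22, i.e. about 22% larger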
This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, only their diameter was needed to describe their size.
Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangle was equivalent to the diameter of the tube's face. This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3.
A problematic practice was the use of the size of a monitor's imaging element, rather than the size of its viewable image, when describing its size in publicity and advertising materials. On CRT displays a substantial portion of the CRT's screen is concealed behind the case's bezel or shroud in order to hide areas outside the monitor's "safe area" due to overscan. These practices were seen as deceptive, and widespread consumer objection and lawsuits eventually forced most manufacturers to instead measure viewable size.
The performance of a monitor is measured by the following parameters:
Luminance is measured in candelas per square meter (cd/m2 also called a Nit).
Viewable image size is measured diagonally. For CRTs, the viewable size is typically 1 in (25 mm) smaller than the tube itself.
Aspect ratio is the ratio of the horizontal length to the vertical length. 4:3 is the standard aspect ratio, for example, so that a screen with a width of 1024 pixels will have a height of 768 pixels. If a widescreen display has an aspect ratio of 16:9, a display that is 1024 pixels wide will have a height of 576 pixels.
Display resolution is the number of distinct pixels in each dimension that can be displayed. Maximum resolution is limited by dot pitch.
Dot pitch is the distance between subpixels of the same color in millimeters. In general, the smaller the dot pitch, the sharper the picture will appear (see the illustrative sketch after this list).
Refresh rate is the number of times in a second that a display is illuminated. Maximum refresh rate is limited by response time.
Response time is the time a pixel in a monitor takes to go from active (black) to inactive (white) and back to active (black) again, measured in milliseconds. Lower numbers mean faster transitions and therefore fewer visible image artifacts.
Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing.
Power consumption is measured in watts.
Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically.
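The sketch referred to above: assuming square pixels, dot pitch is simply the physical screen width divided by the horizontal pixel count. The Python below is purely illustrative; the 17-inch 1280 x 1024 panel is a hypothetical example rather than any specific product.

import math

def dot_pitch_mm(diagonal_in, aspect_w, aspect_h, horizontal_pixels):
    # Physical width in millimetres divided by the number of pixels across it.
    width_in = diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)
    return width_in * 25.4 / horizontal_pixels

# Hypothetical 17-inch 5:4 panel at its native 1280 x 1024 resolution:
print(round(dot_pitch_mm(17, 5, 4, 1280), 3))   # roughly 0.26 mm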
Cathode ray tube (CRT)
Advantages
High dynamic range (up to around 15,000:1), excellent color, wide gamut and low black level.
Can display natively in almost any resolution and refresh rate
No input lag
Sub-millisecond response times
Near zero color, saturation, contrast or brightness distortion. Excellent viewing angle.
Usually much cheaper than LCD or Plasma screens.
Allows the use of light guns/pens
Disadvantages
Large size and weight, especially for bigger screens (a 20-inch unit weighs about 50 lb (23 kg))
High power consumption
Generates a considerable amount of heat when running
Geometric distortion caused by variable beam travel distances
Can suffer screen burn-in
Produces noticeable flicker at low refresh rates
Normally only produced in 4:3 aspect ratio
Hazardous to repair/service
Effective vertical resolution limited to 1024 scan lines.
Color displays cannot be made in sizes smaller than 7 inches (5 inches for monochrome). Maximum size is 21 inches (for computer monitors; televisions run up to 40 inches).
Liquid crystal display (LCD)
Advantages
Very compact and light
Low power consumption
No geometric distortion
Little or no flicker depending on backlight technology
Not affected by screen burn-in
No high voltage or other hazards present during repair/service
More reliable than CRTs
Can be made in almost any size or shape
No theoretical resolution limit
Disadvantages
Limited viewing angle, causing color, saturation, contrast and brightness to vary, even within the intended viewing angle, by variations in posture.
Bleeding and uneven backlighting in some monitors, causing brightness distortion, especially toward the edges.
Slow response times, which cause smearing and ghosting artifacts. Modern LCDs have response times of 8 ms or less.
Only one native resolution. Displaying other resolutions either requires a video scaler, lowering perceptual quality, or display at 1:1 pixel mapping, in which images will be physically too large or won't fill the whole screen.
Fixed bit depth, many cheaper LCDs are only able to display 262,000 colors. 8-bit S-IPS panels can display 16 million colors and have significantly better black level, but are expensive and have slower response time
Dead pixels may occur either during manufacturing or through use.
In a constant on situation, thermalization may occur, which is when only part of the screen has overheated and therefore looks discolored compared to the rest of the screen.
Not all LCD displays are designed to allow easy replacement of the backlight
Cannot be used with light guns/pens
Main article: Plasma display
Advantages
High contrast ratios (10,000:1 or greater), excellent color, wide gamut and low black level.
High speed response.
Near zero color, saturation, contrast or brightness distortion. Excellent viewing angle.
No geometric distortion.
Softer and less blocky-looking picture than LCDs
Highly scalable, with less weight gain per increase in size (from less than 30 in (760 mm) wide to the world's largest at 150 in (3,800 mm)).
Disadvantages
Large pixel pitch, meaning either low resolution or a large screen.
Color plasma displays cannot be made in sizes under 32 inches
Noticeable flicker when viewed at close range
Glass screen can induce glare and reflections
High operating temperature and power consumption
Only has one native resolution. Displaying other resolutions requires a video scaler, which degrades image quality at lower resolutions.
Fixed bit depth
Can suffer image burn-in. This was a severe problem on early plasma displays, but much less on newer ones
Cannot be used with light guns/pens
Dead pixels are possible during manufacturing
Phosphor burn-in is localized aging of the phosphor layer of a CRT screen where it has displayed a static image for long periods of time. This results in a faint permanent image on the screen, even when turned off. In severe cases, it can even be possible to read some of the text, though this only occurs where the displayed text remained the same for years.
Burn-in is most commonly seen in the following applications:
Security monitors
Screen savers were developed as a means to avoid burn-in, which was a widespread problem on IBM Personal Computer monochrome monitors in the 1980s. Monochrome displays are generally more vulnerable to burn-in because the phosphor is directly exposed to the electron beam while in color displays, the shadow mask provides some protection. Although still found on newer computers, screen savers are not necessary on LCD monitors.
Phosphor burn-in can be "fixed" by running a CRT with the brightness at 100% for several hours, but this merely hides the damage by burning all the phosphor evenly. CRT rebuilders can repair monochrome displays by cutting the front of the picture tube off, scraping out the damaged phosphor, replacing it, and resealing the tube. Color displays cannot be repaired.
Burn-in re-emerged as an issue with early plasma displays, which are more vulnerable to this than CRTs. Screen savers with moving images may be used with these to minimize localized burn. Periodic change of the color scheme in use also helps.
Glare is a problem caused by the relationship between lighting and screen or by using monitors in bright sunlight. Matte finish LCDs and flat screen CRTs are less prone to reflected glare than conventional curved CRTs or glossy LCDs, and aperture grille CRTs, which are curved on one axis only and are less prone to it than other CRTs curved on both axes.
If the problem persists despite moving the monitor or adjusting lighting, a filter using a mesh of very fine black wires may be placed on the screen to reduce glare and improve contrast. These filters were popular in the late 1980s. They do also reduce light output.
Such a filter will only work against reflective glare; direct glare (such as sunlight) will completely wash out most monitors' internal lighting, and can only be dealt with by use of a hood or transreflective LCD.
With the exceptions of correctly aligned video projectors and stacked LEDs, most display technologies, especially LCD, have an inherent misregistration of the color channels; that is, the centers of the red, green, and blue dots do not line up perfectly. Sub-pixel rendering depends on this misalignment; technologies making use of this include the Apple II from 1976, and more recently Microsoft (ClearType, 1998) and XFree86 (X Rendering Extension).
RGB displays produce most of the visible color spectrum, but not all. This can be a problem where good color matching to non-RGB images is needed. This issue is common to all monitor technologies with three color channels.
Main article: Computer terminal
Early CRT-based VDUs (Visual Display Units) such as the DEC VT05 without graphics capabilities gained the label glass teletypes, because of the functional similarity to their electromechanical predecessors.
Some historic computers had no screen display, using a teletype, modified electric typewriter, or printer instead.
Early home computers such as the Apple II and the Commodore 64 used a composite signal output to drive a TV or color composite monitor (a TV with no tuner). This resulted in degraded resolution due to compromises in the broadcast TV standards used. This method is still used with video game consoles. The Commodore monitor had S-Video input to improve resolution, but this was not common on televisions until the advent of HDTV.
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS, or TMDS protocols.
IBM PC with green monochrome display.
Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters used in early IBM PC's (Personal Computer) and clones were controlled via TTL logic. Such monitors can usually be identified by a male DB-9 connector used on the video cable.
The disadvantage of TTL monitors was the limited number of colors available due to the low number of digital bits used for video signaling.
Modern monochrome monitors use the same 15-pin SVGA connector as standard color monitors. They are capable of displaying 32-bit grayscale at 1024x768 resolution, making them able to interface with modern computers.
TTL monochrome monitors only made use of five out of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals, a video bit, and an intensity bit to control the brightness of the drawn pixels. Only four shades were possible: black, dim, medium or bright.
CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signaling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey.
A CGA monitor is only capable of rendering 16 colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.
EGA monitors used six digital signals to control the three electron guns in a signaling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 colors.
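The shade and colour counts in the last few paragraphs follow directly from the number of digital signal lines, since each TTL line is simply on or off. The small Python calculation below is only an illustration of that arithmetic, not part of any original specification.

schemes = {
    "TTL monochrome (video + intensity)": 2,   # black, dim, medium, bright
    "CGA RGBI (red, green, blue + intensity)": 4,
    "EGA RrGgBb (two lines per primary)": 6,
}
for name, lines in schemes.items():
    # Each on/off line doubles the number of possible states.
    print(name, "->", 2 ** lines, "combinations")   # prints 4, 16 and 64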
Although not supported in the original IBM specification, many vendors of clone graphics adapters have implemented backwards monitor compatibility and auto detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.
Single color screens
Green and amber phosphors were used on most monochrome displays in the 1970s and 1980s. White was uncommon because it was more expensive to manufacture, although Apple used it on the Lisa and early Macintoshes.
Most modern computer displays can show the various colors of the RGB color space by changing red, green, and blue analog video signals in continuously variable intensities. These are almost exclusively progressive scan. Although televisions used an interlaced picture, this was too flickery for computer use. In the late 1980s and early 1990s, some VGA-compatible video cards in PCs used interlacing to achieve higher resolution, but the advent of SVGA quickly put an end to them.
While many early plasma and liquid crystal displays have exclusively analog connections, all signals in such monitors pass through a completely digital section prior to display.
While many similar connectors (13W3, BNC, etc.) were used on other platforms, the IBM PC and compatible systems standardized on the VGA connector in 1987.
CRTs remained the standard for computer monitors through the 1990s. The first standalone LCD displays appeared in the early 2000s and over the next few years, they gradually displaced CRTs for most applications. First-generation LCD monitors were only produced in 4:3 aspect ratios, but current models are generally 16:9. The older 4:3 monitors have been largely relegated to point-of-service and some other applications where widescreen is not required.
Digital and analog combination
The first popular external digital monitor connectors, such as DVI-I and the various breakout connectors based on it, included both analog signals compatible with VGA and digital signals compatible with new flat-screen displays in the same connector. Older 4:3 LCD monitors had only VGA inputs, but the newer 16:9 models have added DVI.
Monitors are being made which have only a digital video interface. Some digital display standards, such as HDMI and DisplayPort, also specify integrated audio and data connections. Many of these standards enforce DRM, a system intended to deter copying of entertainment content.
Configuration and usage
Main article: Multi-monitor
More than one monitor can be attached to the same device. Each display can operate in two basic configurations:
The simpler of the two is mirroring (sometimes cloning), in which at least two displays are showing the same image. It is commonly used for presentations. Hardware with only one video output can be tricked into doing this with an external splitter device, commonly built into many video projectors as a pass through connection.
The more sophisticated of the two, extension allows each monitor to display a different image, so as to form a contiguous area of arbitrary shape. This requires software support and extra hardware, and may be locked out on "low end" products by crippleware.
Primitive software is incapable of recognizing multiple displays, so spanning must be used, in which case a very large virtual display is created, and then pieces are split into multiple video outputs for separate monitors.
Hardware with only one video output can be made to do this with an expensive external splitter device; this is most often used for very large composite displays made from many smaller monitors placed edge to edge.
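To illustrate what spanning amounts to in practice, the virtual display is essentially the combined size of the physical outputs; for monitors arranged side by side the widths add and the height is that of the tallest screen. The Python sketch below is only an illustration, and the resolutions used are made-up examples.

def spanned_size(monitors):
    # monitors: list of (width, height) pixel sizes arranged left to right.
    return sum(w for w, _ in monitors), max(h for _, h in monitors)

print(spanned_size([(1920, 1080), (1280, 1024)]))   # (3200, 1080)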
Multiple video sources
Multiple devices can be connected to the same monitor using a video switch. In the case of computers, this usually takes the form of a "keyboard, video, mouse" (KVM) switch, which is designed to switch all of the user interface devices for a workstation between different computers at once.
Main article: Virtual desktop
Screenshot of workspaces laid out by Compiz
Much software and video hardware supports the ability to create additional, virtual pieces of desktop, commonly known as workspaces. Spaces is Apple's implementation of virtual displays.
Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life.
Some monitors will also switch themselves off after a time period on standby.
Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.
Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers.
Main article: Glossy display
Some displays, especially newer LCD monitors, replace the traditional anti-glare matte finish with a glossy one. This increases saturation and sharpness but reflections from lights and windows are very visible.
Narrow viewing angle screens are used in some security conscious applications.
Main article: Autostereoscopy
Autostereoscopic displays use a directional screen to generate 3D images without the need for special headgear.
Touch screens
These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints.
Tablet screens
Main article: Graphics tablet/screen hybrid
A combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch without the use of one or more special tools' pressure. Newer models however are now able to detect touch from any pressure and often have the ability to detect tilt and rotation as well.
Touch and tablet screens are used on LCD displays as a substitute for the light pen, which can only work on CRTs (HP Pavilion DV7 Battery) . | 1 | 25 |
Tag - Programming
Apple has opened registrations for its annual summer camp for kids in the US, Canada, China, France, Germany, Hong Kong, Italy, Japan, The Netherlands, Spain, Sweden, Switzerland, Turkey, and the United Kingdom, allowing children ages 8 to 12 who are accompanied by a parent or guardian to attend a series of workshops at a local Apple Store focused on iMovie, iBooks Author, and in some countries, basic programming. The sign-ups are on a first-come, first-served basis, and tend to fill up quickly.
Apple is capitalizing on the popularity of the Swift programming language, which is being used by over 100,000 apps and is the most popular language project on GitHub, by making it easier for younger users to pick up. Swift Playgrounds is an app for iPad that aims to teach younger users and those new to programming the fundamentals of creating Swift code, with the possibility of learning to produce their own apps.
Commenting to the BBC on the Hour of Code initiative that encourages young people to understand some of the basic concepts behind the creation of the games they play, the services they use, and the devices their lives revolve around, Apple SVP of Mac and iOS Software Craig Federighi says that programming -- often thought of as a lonely, isolated activity, is "an incredibly creative medium, not unlike music" and that he wants Apple to do more to "set off a spark" in young learners by using Apple Stores as classrooms more often.
Fulfilling a promise it made last spring, Apple has posted source code for the core libraries, parts of Foundation, and the raw language compiler for Swift, the company's development language -- including some features planned for the future Swift 3, but published now to gain feedback and assist in development. The move enables a number of new use cases for the language, which is deeply integrated into the company's Xcode IDE.
Again this year, Apple Stores around the US are now offering free registration for a child-oriented introductory workshop called "Hour of Code" in conjunction with Computer Science Education Week, which runs December 7–12. The workshops in the US will happen on Thursday, December 10, and the class itself is designed by Code.org, a group that uses popular characters to teach kids computer programming skills.
Cato's Hike: A Programming and Logic Odyssey by Hesham Wahba is an iOS app for helping kids, and adults if they're so inclined, to learn about programming. The app utilizes an object-oriented language of "cards" and customizable maps to demonstrate the various principles. The app is very cute, and we liked the idea of the cards format for the programs. We think Cato's Hike is a good tool for a parent or teacher to guide a would-be programmer to a better understanding of how it's done.
The BBC is continuing in its attempts to promote digital literacy in the United Kingdom, by providing approximately 1 million devices to children. The broadcaster will be providing the Micro Bit, a compact electronic board inspired by the BBC Micro from the 1980s, to students in secondary schools this fall, which it hopes will encourage a new generation of coders to create software.
Edutainment took off in the early 1990s, with games like Math Blaster and The Incredible Machine. For the most part, edutainment games have usually been math or science-centric, with enough history and reading thrown in to keep the mix interesting. However, that's beginning to change with a new dawn of kid-targeted apps. Tynker is one such app, which promises to teach elementary-age children how to code their own video games.
Apple's Swift language, introduced just last June at its Worldwide Developers Conference, has risen from 68th place to 22nd in the last six months on a ranking of the most widely-used programming languages. Enterprise developer liaison firm RedMonk said it had never seen a growth rate so "meteoric" in the history of its rankings, which first appeared in 2010. When "ties" are discounted, the streamlined language has entered the top 20 just seven months after its debut.
Continuing its effort to promote its own streamlined object-oriented programming code Swift, Apple has followed up from its Swift blog with a full-blown mini-site on Apple.com. The new site takes a similar approach to Apple's dedicated mini-sites for education and business, highlighting some of the many apps now built using swift and featuring case studies, profiles and links to tutorials and free resources. The new promotional mini-site is in addition to the regular Swift developer site.
Cirrus creates Lightning-headphone dev kit
Apple supplier Cirrus Logic has introduced a MFi-compliant new development kit for companies interested in using Cirrus' chips to create Lightning-based headphones, which -- regardless of whether rumors about Apple dropping the analog headphone jack in its iPhone this fall -- can offer advantages to music-loving iOS device users. The kit mentions some of the advantages of an all-digital headset or headphone connector, including higher-bitrate support, a more customizable experience, and support for power and data transfer into headphone hardware. Several companies already make Lightning headphones, and Apple has supported the concept since June 2014. http://bit.ly/29giiZj
Apple Store app offers Procreate Pocket
The Apple Store app for iPhone, which periodically rewards users with free app gifts, is now offering the iPhone "Pocket" version of drawing app Procreate for those who have the free Apple Store app until July 28. Users who have redeemed the offer by navigating to the "Stores" tab of the app and swiping past the "iPhone Upgrade Program" banner to the "Procreate" banner have noted that only the limited Pocket (iPhone) version of the app is available free, even if the Apple Store app is installed and the offer redeemed on an iPad. The Pocket version currently sells for $3 on the iOS App Store. [32.4MB]
Porsche adds CarPlay to 2017 Panamera
Porsche has added a fifth model of vehicle to its CarPlay-supported lineup, announcing that the 2017 Panamera -- which will arrive in the US in January -- will include Apple's infotainment technology, and be seen on a giant 12.3-inch touchscreen as part of an all-new Porsche Communication Management system. The luxury sedan starts at $99,900 for the 4S model, and scales up to the Panamera Turbo, which sells for $146,900. Other vehicles that currently support CarPlay include the 2016 911 and the 2017 models of Macan, 718 Boxster, and 718 Cayman. The company did not mention support for Google's corresponding Android Auto in its announcement. http://bit.ly/295ZQ94
Apple employees testing wheelchair features
New features included in the forthcoming watchOS 3 are being tested by Apple retail store employees, including a new activity-tracking feature that has been designed with wheelchair users in mind. The move is slightly unusual in that, while retail employees have previously been used to test pre-release versions of OS X and iOS, this marks the first time they've been included in the otherwise developer-only watchOS betas. The company is said to have gone to great lengths to modify the activity tracker for wheelchair users, including changing the "time to stand" notification to "time to roll" and including two wheelchair-centric workout apps. http://bit.ly/2955JDa
SanDisk reveals two 256GB microSDXC cards
SanDisk has introduced two 256GB microSDXC cards. Arriving in August for $150, the Ultra microSDXC UHS-I Premium Edition card offers transfer speeds of up to 95MB/s for reading data. The Extreme microSDXC UHS-I card can read at a fast 100MB/s and write at up to 90MB/s, and will be shipping sometime in the fourth quarter for $200. http://bit.ly/294Q1If
Apple's third-quarter results due July 26
Apple has advised it will be issuing its third-quarter results on July 26, with a conference call to answer investor and analyst queries about the earnings set to take place later that day. The stream of the call will go live at 2pm PT (5pm ET) via Apple's investor site, with the results themselves expected to be released roughly 30 minutes before the call commences. Apple's guidance for the quarter put revenue at between $41 billion and $43 billion. http://apple.co/1oi1Pbm
Twitter stickers slowly roll out to users
Twitter has introduced "stickers," allowing users to add extra graphical elements to their photos before uploading them to the micro-blogging service. A library of hundreds of accessories, props, and emoji will be available to use as stickers, which can be resized, rotated, and placed anywhere on the photograph. Images with stickers will also become searchable with viewers able to select a sticker to see how others use the same graphic in their own posts. Twitter advises stickers will be rolling out to users over the next few weeks, and will work on both the mobile apps and through the browser. http://bit.ly/29bbwUE | 1 | 2 |
Forgotten Fragments: An Introduction to Japanese Silent Cinema
- 16 July 2002
The general consensus abroad seems to be that Japanese cinema really only began in earnest during the 1950s, when films such as Akira Kurosawa's Rashomon (1950), Kimisaburo Yoshimura's A Tale of Genji (Genji Monogatari, 1952), Kenji Mizoguchi's Ugetsu (Ugetsu Monogatari, 1953), and Teinosuke Kinugasa's Gate of Hell (Jigokumon, 1953) were first unveiled to a wider world. In reality, the Japanese film industry began shortly after the new medium was born at the tail end of the 19th century, yet still hardly anything has been written outside of Japan on the subject of pre-War Japanese cinema (an obvious exception being Anderson and Richie's The Japanese Film: Art and Industry, first published in 1959).
The obvious reason for this is that these films are virtually impossible to see. With one of the lowest survival rates in the world, the embryonic development of the nation's cinema is a difficult area to chart and many important works of this period including early films by important directors including Daisuke Ito, Yasujiro Ozu and Kenji Mizoguchi are now unfortunately considered to be irretrievably lost. The National Film Center still lists only around seventy titles made in Japan before 1930 in their collection, yet it has been estimated that around 7000 were produced in the 1920s alone. Some were carelessly misplaced by the studios that produced them, a great number were destroyed in the chaos of the Kanto earthquake in 1923 and still more of the country's cinematic legacy went up in smoke either in the conflagration of the bombing raids during the Pacific War or in its immediate aftermath under the Allied Occupation where films that fell under a list of 13 forbidden subjects (the most serious offence being 'feudal loyalty') sketched up by the Supreme Commander for the Allied Powers' Civil Information and Education Section were banned outright. In spring 1946, a match was put to all films deemed unnecessary to be kept for further analysis and a large chapter of cinematic history has been lost in the ashes forever.
That any films do remain from this period is largely down to the work of one man, Shunsui Matsuda. Born in 1925, Matsuda began his vocation working as a child benshi. In 1947, when post-war shortages meant there really weren't a lot of films being shown in the more provincial areas of Japan, he found himself part of a troupe of itinerant benshi travelling around Kyushu, whose burgeoning coal mining industry had attracted a lot of workers to the region. The desperate shortage of any means of entertainment in the area meant that reruns of old silent films were still immensely popular. The story goes that Matsuda discovered one of the projectionists snipping out footage from one of these films because it "dragged the film down", and thereupon decided to dedicate his life to the act of preserving these early cinematic documents.
During the 1950s Matsuda was appointed president of the Friends of Silent Films Association, dedicated to the appreciation of Japanese silent cinema and keeping alive the benshi tradition of the katsuben silent film narration. He continued to give performances all the way up until his death in 1987, also producing Bantsuma - Bando Tsumasaburo no Shogai (Bantsuma - The Life and Times of Tsumasaburo Bando) in 1980 and his own silent film, Jigoku no Mushi (Maggots of Hell) in 1979. Matsuda Film Productions, the company he founded, still runs regular narrated screenings of silent films in Tokyo, and not only Japanese ones, but also classics by such notables as Friedrich Murnau and DW Griffiths.
Outside of Japan, however, aside from sporadic festival screenings of Teinosuke Kinugasa's A Page of Madness (a pretty atypical work from the period), Japanese silent film remains virtually an unknown entity. One company dedicated to changing all this, in conjunction with Matsuda Productions, is Urban Connections, whose fascinating book The Benshi: Japanese Silent Film Narrators provides the best English-language introduction to the subject there is. For those that wish to dig deeper, the same company have also put out an incredibly informative Japanese/English bilingual DVD-Rom entitled Masterpieces of Japanese Silent Cinema (PC only - there's no provision for Mac users, unfortunately). Admittedly, at 18,000 Yen, the cost of this product is going to keep it out of the hands of most people, though with the Yen at its current low rate, it's perhaps a little less painful on the wallet than it could have been. However, I think it's fair to say that with essays from a number of prominent Japanese critics and film historians and a searchable database with information, credits and images on around 12,000 titles, it represents the most complete resource there is on the subject.
However, the package's greatest strength is the footage included from 45 of these films, all accompanied by a recorded benshi katsuben (narration), mainly from Midori Sawato, the leading practitioner of this unique narrative art form still active today, or Shunsui Matsuda himself, but sometimes from some of the major benshi of the era such as Shiro Otsuji. These recordings reveal that the styles and approaches of the various benshi differed radically. The footage from the films included here is a fascinating selection, not only jidai-geki period swashbucklers such as Kageboshi, Noble Thief of Edo (Edo Kaizokuden Kageboshi, 1925) or Sakamoto Ryoma (1928), but also contemporary-set melodramas, such as Mother's Milk (Chibusa, 1937) and a fragment of Ozu's lost I Graduated, But... (Daigaku wa Detakeredo, 1929). Of particular interest are the portions of the early animated piece of war propaganda, Private Norakuro (Norakuro Gocho, 1934), Heinosuke Gosho's 1933 adaptation of Yasunari Kawabata's The Izu Dancer (Izu no Odoriko), and Junsuke Sawata's The Missing Ball (Mari no Yukue, 1930), in which the social class inequalities of the Taisho era are analysed by means of a girls' high school athletic meeting.
Impressive as this package is, however, it can't hope to do full justice to the full-blown experience of a live benshi performance. I was recently lucky enough to catch one of these delivered by Midori Sawato in Tokyo, accompanying two films from Nikkatsu's early Mito Komon series of jidai-geki directed by Ryohei Arai, Mito Komon: The Story of Raikunitsugu (Mito Komon Raikunitsugu no Maki, 1934) and Mito Komon: The Secret Letter (Mito Komon Missho no Maki, 1935), plus a 15-minute fragment of what remains of Tomiyasa Ikeda's Yaji and Kita - Yasuda's Rescue (Yajikita Sonno no Maki, 1927).
An Interview with Midori Sawato
Midori Sawato's spoken commentaries regularly accompany screenings of silent films on the Japanese TV channel NHK, and she even accompanied a recent screening in Japan of Finnish director Aki Kaurismaki's homage to the silent days, Juha (2000). She kindly agreed to talk to Midnight Eye about the unique phenomenon of the katsuben commentary, the origins of cinema in Japan, and her training as a benshi under Shunsui Matsuda:
First I'd like to ask you how you became a benshi.
In 1972, I saw the silent film The Water Magician [Taki no Shiraito, dir. Kenji Mizoguchi, 1933] and was very impressed. A man named Shunsui Matsuda was the benshi for that showing. I ended up becoming his pupil. I'd always been a big fan of old films, so I thought that working as a benshi - someone who provides the narration for silent films - would be very fulfilling for me. I love silent movies so much!
In Japan now, are there still many people who are aware of the katsuben phenomenon - that silent films in Japan were regularly shown with a spoken narration accompaniment?
In the past people did know, and now there are still many elderly people who know about it, but among people in their forties or younger... Well, probably even among people in their fifties or younger, most people don't know about it at all. I spend a lot of time thinking about what would be the best way to make those people more aware of the benshi.
There was a film directed by Kaizo Hayashi in 1986 called To Sleep So As To Dream (Yume Miru yo ni Nemuritai") in which Shunsui Matsuda actually appeared doing a benshi commentary, and that you appeared in as well. Do you think that film helped to promote or advertise the benshi at all?
Well, the film itself is about nostalgia really, but I do think it also helped to spread awareness of the benshi.
Regarding the film's final scene in the denkikan film theater, I was wondering if that was an accurate reconstruction of what an original benshi performance would have been like.
I don't think it reconstructed the era perfectly, but much of it was accurate. In the past there really were many varieties of theaters (katsudo shashinkan). The sizes and numbers of benshi working varied from theater to theater, so it's hard to say what would have been "standard" at the time. However, I do think that final scene in the katsudo shashinkan had many elements from what the theaters of the time really were like. I can't say it was 100% accurate, but I think it was very close.
How many benshi are currently in operation in Japan?
There are very few, probably fewer than 10 people...
Right, so you're...
Actually there's probably not even that many. Several people, at the most.
What exactly is your relationship to Matsuda Productions? They work as a film archives, but you're not involved in the archiving, just the promotion?
I am a benshi based at Matsuda Productions. I do many different things: performances within Japan, even performances in foreign countries. I've gone overseas for performances more than 10 times now. In fact I'm going to the University of California at Berkeley this September. Right now I do pretty much all of the work that comes into Matsuda Productions.
Can you give me some background on how the benshi began in Japan?
Well, in 1896 Edison's Kinetoscope came into Japan. Following that was the Lumiere brothers' Cinematograph. This technology arrived around Meiji 29 or 30 - that is, 1896 or 1897. When that happened, there was a need to explain the foreign films, or katsudo shashin [moving photographs] as they were called - the Cinematograph, Kinetoscope, Vitascope - in simple terms to the Japanese audience. All of the films were of foreign scenes and settings, so someone had to explain to the audience what a movie is, what film is, what the images are of and so on. People were being charged to watch, so it would be have been rude to the audience if they couldn't understand what they were seeing. The benshi were born out of that need.
In addition, in Japan there is an old tradition of spoken performance arts like rokyoku, gidayu and rakugo. The benshi weren't born out of the need for an explanation alone, but also from this tradition and the love of spoken performance arts that the Japanese held within them. I believe those two reasons together are what led to the development of the benshi.
A lot of western writing on the benshi focuses on the 'traditional' element, and as a reader, before I saw my first film - or heard it, rather - on video with a katsuben, I always had this idea of it being very much like kabuki, with this sort of chanting kabuki style of speaking. I thought a lot of writing was quite misleading in that regard. When I did hear a benshi performance, it was almost like an energetic sports commentary. It was much less formal than I expected.
With a film it becomes a little lighter and a bit more, how should I say, rhythmical. The benshi style of talking can also include aspects of kabuki, gidayu and storytelling (kodan), so it does contain a variety of traditional Japanese elements as well. In the early period of Japanese film there were many movies that depicted the kabuki stage as is, or just filmed a play performance. These were called jissha ["true to life"] films. Stories of heroes like Minamoto no Yoshitsune that were performed as kabuki plays, or Shibukawa Bangoro - these were filmed as is. So that influence did exist, and while the benshi didn't just copy that, they did borrow and mix such elements within themselves in order to create their own performance styles.
Was there an equivalent to the benshi in other countries at the time? In the early days of cinema when they were screening the sort of newsreel footage in other countries, would there not have been a commentator to explain what was going on as well?
When I went to the Pordenone Silent Film Festival last year there was a professor from the University of California at Berkeley, Russell Merritt, who said until about 1908 there were people similar to the Japanese benshi who narrated and explained films in America as well. I've also been reading about Italian film history recently, and it seems that in Italy during the early years they would have someone giving an explanation from the projection room. For one example, there was this Italian comedian filmmaker who would film his own act, and then at the film showings he would perform an explanation from behind the screen. There was something like the benshi in France, and in Germany as well. There's a book called The Film Explainer written by Gert Hoffman, and it tells that there were benshi in Germany as well. It even says where they performed. The story is more or less fiction, though. I have the book and have been trying my best to read it. [Laughs]
To go back to what I was saying about America, there were many immigrants from different places then. Apparently they would have narrators at screenings that would read the film intertitles for the sake of people in the audience who weren't yet used to English. Or actors would be speaking from behind the screen, I believe. According to what I heard from Frances Loden of Berkeley, who came to Japan to do research on the benshi, there is even a book written on the subject of film explainers in America. So as you say, in the early years of film there were narrators explaining the films around the world... well, I can't say everywhere in the world, but they did exist in many places. In Japan though, at the peak of the benshi era there were more than 7000 benshi in Japan, and they were active over a period of 30 or 40 years. Japan is the only place in the world where the benshi became such a big popular attraction. In that sense it is a somewhat unique phenomenon.
Yes, because I understand in Japan as well the means of distribution was very different and that the Denki-kan theater was purpose-built (in 1903) only to show films, whereas in other countries films were initially shown as a sort of adjunct to other activities, so they'd be shown in carnival fairs and so forth.
They did have specialty theaters fairly early on even in America though. While at first there would only be one person at a time looking at the Kinetoscope, they did start to show more and more films on screens after that. Two men named Harris and Davis made the first exclusive movie theater in Pittsburgh in 1905, the Nickelodeon. In Italy too they had theaters made specifically for films. At first it was entertainment for the working class that high society looked down on, but as cinema advanced the producers knew that it could attract a more varied audience, and so theaters started to develop. I think I see what you mean, but there were places like the Japanese theaters (josetsukan) very early on as well.
It seems that the earliest fictional kabuki adaptations were shot without any cinematic device at all. When did cinematic techniques start to be used in Japan? When did cinema become a bit more removed from theater?
There are many different explanations, but one theory attributes it to a short film called Armed Robber: Shimizu Sadakichi (Pisutoru Goto Shimizu Sadakichi, 1899), which was based on a real-life event. Some say that this was the first geki eiga. Others look to the films made by producer-director Shozo Makino in Kyoto, who's known as the father of Japanese film. He filmed stories that were well known through kabuki and kodan, with actors like Matsunosuke Onoe. They weren't using close ups and cutbacks like Griffith, and the camera was pretty much left in one place, but you could say that they were geki eiga, filming fictional stories instead of simply live scenes. Many think that Shimizu Sadakichi was the first film to do this, but it's hard to be sure.
Up until this period in Japan were there more foreign releases or domestic productions?
There were more foreign films then. Lumiere and Edison's films, and films from Italy and France... Towards the end of the Meiji period a French film entitled Zigomar (dir. Victorin-Hippolyte Jasset, 1911) was imported but then banned. Zigomar was about a thief, and Japanese children would watch that and want to imitate him, playing thief. It was said that this had a negative effect on their education. At any rate, in the early period of film in Japan, there were more foreign films than Japanese films. But after the end of the Meiji period and into the Taisho period Japan struggled to catch up. Nikkatsu's Kyoto and Mukojima studios were built, later Shochiku, and Makino had been around for a while. Different studios were established, and from the Taisho to Showa period films were being mass-produced in Japan. In the old katsudo shashinkan the film program would change about every week, so they had to make a lot of films to keep up with that schedule. At the start western films were dominant, but then Japanese filmmakers absorbed the foreign films, learning technical matters and other things from them, and from the Taisho to Showa periods many high quality films were being made.
Moving on from film history, it was assumed with the benshi that films would have a katsuben commentary. Would it have been possible to understand the films without the katsuben? Every benshi obviously added their own interpretation to the films, but would the film as a product stand alone?
That's a really good question, but again very difficult to answer. At the time - and even now - there were very few people in Japan who could read English, French or German, so a lot of people couldn't understand the intertitles in foreign films. They didn't have translation subtitles of the sort we see in films today. Back then foreign films were just shown as is, and if you couldn't read the titles you probably couldn't understand two-thirds of the story. For those sorts of films, the benshi were necessary. On the other hand, many Japanese films were made from stories everyone already knew, for example hero adventures like Sarutobi Sasuke or Kanjincho, a popular kabuki story about Yoshitsune trying to escape from his older brother. The average person in Japan would have already heard these, so strictly speaking they probably would understand what was going on if they just saw the film. However, most people wouldn't have been satisfied with that. The audiences also wanted to enjoy the performance, not just understand it, and so they wanted someone who could explain the stories in an interesting, funny way. Viewers could understand the pictures, but the issue isn't simply about intellectually understanding what was going on but also about enjoying the entertainment, with a narrator in a live performance. They even had live bands playing with the films. I understand very well what you're saying, and strictly speaking perhaps the benshi weren't necessary for some films, but the average viewer wouldn't have been satisfied without them.
However, there were some young Japanese at the time who complained that Japanese films were "old-fashioned" compared to foreign films. For example, in Japanese love stories the woman would always be secondary to the man, but in American films the man and woman were much more equal. Also, there was a genre of film called the shimpa higeki, or tragic stories about women. These young viewers argued that Japanese stories like these, with the benshi narration, were behind the times. They looked up to foreign films and argued that Japanese films should depict contemporary life in a more realistic way as well. They also complained about the kowairo voice actors. At first there would be several benshi at film screenings, and different people would "perform" the different voices of the characters in the film. It was something like the dubbing they do for animation today. The benshi didn't always work in groups like this, just in the very early years. By the later half of the Taisho period this started to go out of style.
So Japanese film had always been carrying along these traditions from kabuki and shimpa theater, both the good and bad aspects of them. One of the bad aspects was - as those young film fans were complaining - that the tear-jerking stories about kowakare (child separation), tragic women, or heroes were too old-fashioned, and that Japan should make more films based on contemporary reality. Japanese films, with the influence from kabuki and shimpa and the kowairo voice acting, were branded as being old-fashioned. They said that Japanese films should be made more like foreign films. Those are the people who argued that the benshi were unnecessary. They wouldn't have minded if the benshi were just doing a descriptive explanation like they did for western films, but some people really disliked the benshi, kowairo, the Japanese music, and those tear-jerking stories. Norimasa Kaeriyama, for example. That's not quite what you were asking, but there were people like that who protested against the benshi.
So on the one hand the benshi were stars and were very popular with the audience, but there was also a group of people who said they were too old-fashioned, and that Japanese cinema should be made with more western-like themes. If benshi were involved, they wanted them to give a more efficient, intellectual performance. Basically, they didn't want them. But on the whole, as I said, benshi were necessary for foreign films, and even with Japanese films the viewers wouldn't have been satisfied with just images alone. Sorry I got carried away with such a long explanation!
The best-known silent Japanese film outside of Japan at the moment is A Page of Madness...
That's a wonderful film, so avant garde.
...which doesn't seem to be well known inside Japan. It gets shown in America and Europe now with various musical accompaniments, but never a benshi. Do you think that's changing the way it was originally meant to be seen?
It seems that it was played with a benshi narration when it was originally released. The famed Musei Tokugawa is said to have performed with that film. Director Teinosuke Kinugasa made that and Crossroads (Jujiro, 1928), films that are both very avant garde. Rather than following a linear story, the films were more about the images themselves, about playing with images. Personally, I doubt that Kinugasa himself would have insisted that A Page of Madness be shown with a benshi. He had taken a lot of influence from avant garde film and foreign art, and various literary figures were involved with the film's production, so it was made influenced by a lot of the most progressive art of the time. It probably didn't matter to him if the audience understood it or not. The audience's reaction to the twisted images and contorted visuals was more important, even if it was a negative response. So I don't think Kinugasa felt he needed someone to explain the film from start to end. At the time the film was shown with a benshi, but now it's shown with live music from a variety of genres. Perhaps that's something Kinugasa would have approved of.
A few years ago in Tokyo a group named the Blindman Kwartet played with the film. I was called to that to provide a summary of the story before the screening, but I didn't narrate as a benshi. Instead, once the film started the Blindman Kwartet started playing their music, and it was really exciting. A Page of Madness is set in a mental institution, and the images are crazy too. Put together with that music the viewers really enjoyed it.
There were some (but probably not many) directors at that time making such avant-garde films who didn't feel they needed a benshi, and I believe that's what Kinugasa felt about his film. For this film I think presentation without a benshi is just fine. I think it'd be great if they show it more with exciting music too. In fact I think it would probably be very difficult to perform a narration with that film.
Now when the films were originally made, did the filmmakers work with the benshi? Did the filmmakers leave guidelines for the benshi to work around, or cooperate with the benshi in preparing the katsuben script?
No, not at all.
How much of the katsuben commentary is improvised, and how much is scripted?
The benshi devise and write their own scripts beforehand, but at times they do ad lib lines during a performance. As the years go by the understanding of a film changes, so occasionally they'll revise the script too. Sometimes a benshi will speak lines that are not in the script, or skip lines that are. It depends on the circumstances, but it's left to the benshi's discretion.
In the early days, the benshi were big stars in their own right, and I heard that they were more of a box office draw than the actors. Is this true?
Yes, well, how should I put it? For example the katsudo shashinkan would have posters and banners up advertising the films, and the benshi's name would be written bigger than the actors'. "So-and-so's reading of this-and-that film." Some benshi were paid very well and had movie star-level popularity. If you ask why, well, movie stars then were really beyond the audience's reach. Nowadays actors aren't quite so distant, but back then they were very charismatic, often extremely beautiful men and women who were far away from most people's daily lives. On the other hand, the benshi who narrated at these stars' films were literally close enough to call out to. The benshi were very familiar to the audiences, so they were very popular. I've heard a lot of stories about their popularity.... At times popular benshi would use their fame to pick up women and make lots of girlfriends! [Laughs] Sometimes you hear things like that.
As I was saying a few minutes ago, the basis of this popularity was that, in the Taisho and Showa periods, before WWII, Japanese people loved beautiful language. They loved hearing the spoken word. For example, "Higashiyama sanjuroppo shizuka ni nemuru ushimitsu toki..." [ 1 ] If you take that literally, there is no sanjuroppo in Kyoto's Higashiyama. But said in those terms it sounds beautiful. It's called bibun, or beautiful prose; taking an embellished style, or speaking in a certain tone. "Aa, sono omoide no haru no tsuki wa, reiho hakusan no yuki o terashi." [ 2 ] When you hear it performed in that style the scene really comes to life. People truly loved that kind of beautiful language. A benshi who could perform like that would be beautiful in the eyes of the audience, even if they didn't have such an attractive face. People would just be intoxicated by their speech and thrilled with the benshi themselves. This love of beautiful words was at the core of the benshi's popularity. The audiences truly enjoyed hearing the words spoken with that kind of musical tone and rhythm.
When did the era of the benshi officially end?
It was over by about 1937. Even after sound films had started to be made, some companies were still making silent films, and there were still benshi for a while, but I believe they were completely gone by about 1936 or 1937.
Right, because in the Matsuda Productions book on the benshi it's mentioned that Shunsui Matsuda was touring Kyushu in 1947 as a benshi. When did he begin, because I heard he was a child benshi?
Yes, before the war he was active as a child benshi, and then the war began and he became a soldier. Afterwards he went to the Soviet Union, and when he came back he toured around as a benshi, like you said. He was upset to see film prints being destroyed, so he decided to start collecting film. He established the Friends of Silent Film Association and began to show the films he gathered to paying audiences.
In 1952 he founded Matsuda Film Productions. Did they produce any films at all?
Matsuda Film Productions made a film called The Insects of Hell (Jigoku no Mushi, dir. Tatsuo Yamada, 1979) and a documentary called The Life of Tsumasaburo Bando (Bando Tsumasaburo no Shogai, 1980). They produced those two.
Jigoku no Mushi is a very intriguing title. Can you tell me a bit more about it?
The famous director Hiroshi Inagaki actually made that film a long time ago, but it ran into a lot of problems with the censors. Years later, Shunsui Matsuda and Hiroshi Inagaki were friends, and Matsuda decided to make the film that Inagaki had tried to make earlier. The story is about outlaws who steal money and the internal struggles and problems that develop afterwards. The thieves fall one by one and are finally captured by the authorities. It's really a dark, hopeless story.
On to you more personally, how do you go about preparing a katsuben for a film?
I watch the film several times then write a script. Also I look into the historical setting and background.
Do you have to explain some of the historical and cultural differences to the audience when you show such old films?
Yes, I give an introduction before the film starts when I explain what kind of time it was made in and what kind of people the director and actors were. People watching it today might think some of the relationships between men and women seem a bit old, for example, but those were the limitations of the times, and I explain that to the young audiences. Of course old films show things differently than what we see in contemporary life, but that's how things were. When narrating I don't use any modern language or slang to make it easier for people to understand. If the titles are written using older vocabulary, then I try to be consistent and use that kind of language throughout the film.
Do you have any personal favorites, of the films you've narrated?
Yes... Orochi (dir. Buntaro Futagawa, 1925), starring Tsumasaburo Bando. Also The Water Magician by Mizoguchi. Memories of Mother (Mabuta no Haha, 1931) by Hiroshi Inagaki. Chaplin, Keaton, Griffith, Fritz Lang, Murnau, Maurice Tourneur... There are so many!
Finally, in the book there were a lot of fascinating films mentioned, and I'd love to see them. Can you tell me a little bit about the Friends of Silent Film Association screenings? How often do you meet, and what sort of films do you show?
There's always one gathering per month in Tokyo, and each time we choose a theme. In May we're showing Mito Komon films, starring Denjiro Okochi. We show many different films, Japanese and foreign.
- [ 1 ] "Higashiyama sanju roppo shizuka ni nemuru ushimitsu toki..." - Here Ms. Sawato is quoting one of the "famous lines" from benshi Koto Goro's narration of the 1926 Bantsuma Productions picture A Royalist (Son-oh, dir. Seika Shiba). "...[The] 36th peak of the Higashiyama mountain range, at midnight when all is quietly asleep."
- [ 2 ] "Aa, sono omoide no haru no tsuki wa, reiho hakusan no yuki o terashi." - This is a famous katsuben line from Shunsui Matsuda's reading of Kenji Mizoguchi's The Water Magician (1933). "Ah, the moon on that memorable spring night left a glow of light upon the snow on Mt. Haku."
The Matsuda Film Productions book on the katsudo benshi, The Benshi - Japanese Silent Film Narrators, contains descriptions of fifty different silent films and quotes of famous benshi lines from each of them. The book is available in both English and Japanese language versions. There are several other books on the subject in Japanese, with famous quotes, information on theater conditions and details about film releases, although some may be out of print. For two examples, see Kyohei Misono's Katsuben Jidai (Iwanami Shoten, 1990) and Chieo Yoshida's Mo Hitotsu no Eigashi: Katsuben no Jidai (Jidai Tsushinsha, 1978). | 1 | 3 |
ICD-10-CM Code A77.1
Spotted fever due to Rickettsia conorii
Billable Code: Billable codes are sufficient justification for admission to an acute care hospital when used as a principal diagnosis.
A77.1 is a billable ICD code used to specify a diagnosis of spotted fever due to Rickettsia conorii. A 'billable code' is detailed enough to be used to specify a medical diagnosis.
The ICD code A771 is used to code Boutonneuse fever
Boutonneuse fever (also called Mediterranean spotted fever, fièvre boutonneuse, Kenya tick typhus, Marseilles fever, or African tick-bite fever) is a fever as a result of a rickettsial infection caused by the bacterium Rickettsia conorii and transmitted by the dog tick Rhipicephalus sanguineus. Boutonneuse fever can be seen in many places around the world, although it is endemic in countries surrounding the Mediterranean Sea. This disease was first described in Tunisia in 1910 by Conor and Bruch and was named boutonneuse (French for "spotty") due to its papular skin rash characteristics.
ICD-9 Code: 082.1
Coding Notes for A77.1: Information for medical coders on how to properly use this ICD-10 code.
Inclusion Terms: Inclusion terms are a list of concepts for which a specific code is used. The list of inclusion terms is useful for determining the correct code in some cases, but the list is not necessarily exhaustive.
- African tick typhus
- Boutonneuse fever
- India tick typhus
- Kenya tick typhus
- Marseilles fever
- Mediterranean tick fever
- DRG Group #867-869 - Other infectious and parasitic diseases diagnoses with MCC.
- DRG Group #867-869 - Other infectious and parasitic diseases diagnoses with CC.
- DRG Group #867-869 - Other infectious and parasitic diseases diagnoses without CC or MCC.
ICD-10-CM Alphabetical Index References for 'A77.1 - Spotted fever due to Rickettsia conorii'
The ICD-10-CM Alphabetical Index links various medical terms to the ICD code A77.1.
Equivalent ICD-9 Code: General Equivalence Mappings (GEM)
This is the official exact match mapping between ICD-9 and ICD-10, as provided by the General Equivalence Mapping (GEM) crosswalk. This means that in all cases where the ICD-9 code 082.1 was previously used, A77.1 is the appropriate modern ICD-10 code.
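As a minimal illustration of how such a crosswalk can be applied in practice, the Python sketch below looks up an ICD-9 code in a tiny, hypothetical mapping table containing only the entry described above; the table contents, function name, and error handling are illustrative assumptions, not an official mapping file or API.

# Minimal sketch of a GEM-style crosswalk lookup (illustrative only).
# A real GEM file contains thousands of rows plus flag columns; this
# dictionary holds just the single mapping described in the text above.
ICD9_TO_ICD10_GEM = {
    "082.1": "A77.1",  # Boutonneuse fever / spotted fever due to Rickettsia conorii
}

def map_icd9_to_icd10(icd9_code):
    # Return the ICD-10-CM code mapped to the given ICD-9 code, if known.
    try:
        return ICD9_TO_ICD10_GEM[icd9_code]
    except KeyError:
        raise ValueError("No GEM mapping found for ICD-9 code " + repr(icd9_code))

if __name__ == "__main__":
    print(map_icd9_to_icd10("082.1"))  # prints A77.1

In a real workflow the dictionary would be loaded from the published GEM files rather than hard-coded.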
Kerala (കേരളം · Kēraḷaṁ)
|Nickname||"God's Own Country"|
|Time zone||IST (UTC+5:30)|
|Area||38,863 km² (15,005 sq mi)|
|Largest metro||Kochi urban agglomeration|
|Population||31,838,619 (12th) (2001)|
|Density||819 /km² (2,121 /sq mi)|
|Governor||R. L. Bhatia|
|Chief Minister||V.S. Achuthanandan|
|Established||November 1, 1956|
|Legislature (seats)||Unicameral (141‡)|
|‡ 140 elected, 1 nominated|
Kerala refers to a state on the Malabar Coast of southwestern India. To its east and northeast, Kerala borders Tamil Nadu and Karnataka respectively; to its west and south lie the Arabian Sea and the Indian Ocean, with the islands of Lakshadweep and the Maldives, respectively. Kerala nearly envelops Mahé, a coastal exclave of Pondicherry. Kerala is one of the four states of South India.
First settled in the tenth century B.C.E. by speakers of Proto-South Dravidian, Kerala later came under the influence of the Maurya Empire. Later, the Cheran kingdom and feudal Namboothiri Brahminical city-states became major powers in the region. Early contact with overseas lands culminated in struggles between colonial and native powers. The States Reorganisation Act of November 1, 1956, elevated Kerala to statehood. Social reforms enacted in the late nineteenth century by Cochin and Travancore, expanded upon by post-independence governments, made Kerala among the Third World's longest-lived, healthiest, most gender-equitable, and most literate regions. Paradoxically, Kerala's suicide, alcoholism, and unemployment rates rank among India's highest. A survey conducted in 2005 by Transparency International ranked Kerala as the least corrupt state in the country.
Linguists widely dispute the etymology of Kerala, casting the issue into the realm of conjecture. Common wisdom considers Kerala an imperfect Malayalam portmanteau fusing kera ('coconut palm tree') and alam ('land', 'location', or 'abode of'). Another theory holds that the name originated from the phrase chera alam (Land of the Chera). Natives of Kerala—Keralites or Malayalees—thus refer to their land as Keralam. Kerala's tourism industry, among others, also uses the phrase God's own country.
Myths and legends persist concerning the origin of Kerala. One such myth depicts the creation of Kerala by Parasurama, a warrior sage. Parasurama embodied an incarnation of Maha Vishnu; he was the sixth of the ten avatars (incarnations) of Vishnu. The word Parasu means 'axe' in Sanskrit, and therefore the name Parasurama means 'Ram with Axe'. The gods gave birth to him with the intention of delivering the world from the arrogant oppression of the ruling caste, the Kshatriyas. He killed all the male Kshatriyas on earth and filled five lakes with their blood. After destroying the Kshatriya kings, he approached an assembly of learned men to find a way of penitence for his sins. They advised him, to save his soul from damnation, to hand over the lands he had conquered to the Brahmins. He did as they advised and sat in meditation at Gokarnam. There, Varuna, the God of the Oceans, and Bhumidevi, the Goddess of Earth, blessed him. From Gokarnam he reached Kanyakumari and threw his axe northward across the ocean. The place where the axe landed he named Kerala. 160 katam (an old measure) of land lay between Gokarnam and Kanyakumari. The Puranas say that Parasurama planted 64 Brahmin families in Kerala, whom he brought down from the north to expiate his slaughter of the Kshatriyas. According to the Puranas, Kerala also went by the name Parasurama Kshetram, i.e., 'The Land of Parasurama', as he reclaimed the land from the sea.
During Neolithic times, humans largely avoided Kerala's rain forests and wetlands. Evidence exists that speakers of a proto-Tamil language produced prehistoric pottery and granite burial monuments (dolmens) in the tenth century B.C.E., resembling their counterparts in Western Europe and the rest of Asia. Thus, Kerala and Tamil Nadu once shared a common language, ethnicity and culture; that common area went by the name Tamilakam. Kerala became a linguistically separate region by the early fourteenth century. The ancient Cherans, who spoke Tamil as their mother tongue and court language, ruled Kerala from their capital at Vanchi, the first major recorded kingdom. Allied with the Pallavas, they continually warred against the neighboring Chola and Pandya kingdoms. A Keralite identity—distinct from the Tamils and associated with the second Chera empire—and the development of Malayalam evolved between the eighth and fourteenth centuries. In written records, the Sanskrit work Aitareya Aranyaka contains the first mention of Kerala. Later, figures such as Katyayana, Patanjali, Pliny the Elder, and the unknown author of the Periplus of the Erythraean Sea displayed familiarity with Kerala.
The Chera kings' dependence on trade meant that merchants from West Asia established coastal posts and settlements in Kerala. Many—especially Jews and Christians—also escaped persecution, establishing the Nasrani Mappila and Muslim Mappila communities. According to several scholars, the Jews first arrived in Kerala in 573 B.C.E. The works of scholars and Eastern Christian writings state that Thomas the Apostle visited Muziris in Kerala in 52 C.E. to proselytize amongst Kerala's Jewish settlements. The first verifiable migration of Jewish-Nasrani families to Kerala occurred with the arrival of Knai Thoma in 345 C.E., who brought with him 72 Syrian Christian families. Muslim merchants (Malik ibn Dinar) settled in Kerala by the eighth century C.E. After Vasco Da Gama's arrival in 1498, the Portuguese sought to control the lucrative pepper trade by subduing Keralite communities and commerce.
Conflicts between the cities of Kozhikode (Calicut) and Kochi (Cochin) provided an opportunity for the Dutch to oust the Portuguese. In turn, Marthanda Varma of Travancore (Thiruvathaamkoor) defeated the Dutch at the 1741 Battle of Colachel, ousting them. Hyder Ali, ruler of Mysore, conquered northern Kerala, capturing Kozhikode in 1766. In the late eighteenth century, Tipu Sultan, Ali's son and successor, launched campaigns against the expanding British East India Company; those resulted in two of the four Anglo-Mysore Wars. He ultimately ceded Malabar District and South Kanara to the Company in the 1790s. The Company then forged tributary alliances with Kochi (1791) and Travancore (1795). Malabar and South Kanara became part of the Madras Presidency.
Kerala saw comparatively little defiance of the British Raj—nevertheless, several rebellions occurred, including the 1946 Punnapra-Vayalar revolt, and heroes like Velayudan Thampi Dalava, Pazhassi Raja and Kunjali Marakkar earned their place in history and folklore. Many actions, spurred by such leaders as Sree Narayana Guru and Chattampi Swamikal, instead protested such conditions as untouchability; notably the 1924 Vaikom Satyagraham. In 1936, Chitra Thirunal Bala Rama Varma of Travancore issued the Temple Entry Proclamation that opened Hindu temples to all castes; Cochin and Malabar soon did likewise. The 1921 Moplah Rebellion involved Mappila Muslims battling Hindus and the British Raj.
After India's independence in 1947, Travancore and Cochin merged to form Travancore-Cochin on July 1, 1949. On January 1, 1950 (Republic Day), Travancore-Cochin received recognition as a state. Meanwhile, the Madras Presidency had become Madras State in 1947. Finally, the Government of India's November 1, 1956 States Reorganisation Act inaugurated the state of Kerala, incorporating Malabar district, Travancore-Cochin (excluding four southern taluks that merged with Tamil Nadu), and the taluk of Kasargod, South Kanara. The government also created a new legislative assembly, with the first elections held in 1957. Those resulted in a communist-led government—one of the world's earliest—headed by E. M. S. Namboodiripad. Subsequent social reforms favored tenants and laborers. That facilitated, among other things, improvements in living standards, education, and life expectancy.
Kerala's 38,863 km² landmass (1.18 percent of India) wedges between the Arabian Sea to the west and the Western Ghats—identified as one of the world's 25 biodiversity hotspots—to the east. Lying between north latitudes 8°18' and 12°48' and east longitudes 74°52' and 77°22', Kerala sits well within the humid equatorial tropics. Kerala's coast runs for some 580 km (360 miles), while the state itself varies between 35 and 120 km (22–75 miles) in width. Geographically, Kerala divides into three climatically distinct regions: the eastern highlands (rugged and cool mountainous terrain), the central midlands (rolling hills), and the western lowlands (coastal plains). Located at the extreme southern tip of the Indian subcontinent, Kerala lies near the center of the Indian tectonic plate; as such, most of the state experiences comparatively little seismic and volcanic activity. Geologically, pre-Cambrian and Pleistocene formations compose the bulk of Kerala's terrain.
Eastern Kerala lies immediately west of the Western Ghats' rain shadow; it consists of high mountains, gorges and deep-cut valleys. Forty one of Kerala's west-flowing rivers, and three of its east-flowing ones, originate in this region. Here, the Western Ghats form a wall of mountains interrupted only near Palakkad, where the Palakkad Gap breaks through to provide access to the rest of India. The Western Ghats rise on average to 1,500 m (4920 ft) above sea level, while the highest peaks may reach 2,500 m (8200 ft). Just west of the mountains lie the midland plains composing central Kerala; rolling hills and valleys dominate. Generally ranging between elevations of 250–1,000 m (820–3300 ft), the eastern portions of the Nilgiri and Palni Hills include such formations as Agastyamalai and Anamalai.
Kerala's western coastal belt lies relatively flat, criss-crossed by a network of interconnected brackish canals, lakes, estuaries, and rivers known as the Kerala Backwaters. Lake Vembanad—Kerala's largest body of water—dominates the Backwaters; it lies between Alappuzha and Kochi and covers more than 200 km². Around 8 percent of India's waterways (measured by length) exist in Kerala. The most important of Kerala's 44 rivers include the Periyar (244 km), the Bharathapuzha (209 km), the Pamba (176 km), the Chaliyar (169 km), the Kadalundipuzha (130 km) and the Achankovil (128 km). The average length of the rivers of Kerala measures 64 km. Most of the remainder extend short distances, depending entirely on monsoon rains. Those conditions result in the nearly year-round waterlogging of such western regions as Kuttanad, 500 km² of which lies below sea level. Kerala's rivers, small and lacking deltas, are especially vulnerable to environmental pressures, including summer droughts, the building of large dams, sand mining, and pollution.
With 120–140 rainy days per year, Kerala has a wet and maritime tropical climate influenced by the seasonal heavy rains of the southwest summer monsoon. In eastern Kerala, a drier tropical wet and dry climate prevails. Kerala's rainfall averages 3,107 mm annually. Some of Kerala's drier lowland regions average only 1,250 mm; the mountains of eastern Idukki district receive more than 5,000 mm of orographic precipitation, the highest in the state.
In summers, most of Kerala endures gale force winds, storm surges, cyclone-related torrential downpours, occasional droughts, and rises in sea level and storm activity resulting from global warming. Kerala’s maximum daily temperature averages 36.7 °C; the minimum measures 19.8 °C. Mean annual temperatures range from 25.0–27.5 °C in the coastal lowlands to 20.0–22.5 °C in the highlands.
Much of Kerala's notable biodiversity concentrates in the Agasthyamalai Biosphere Reserve in the eastern hills, protected by the Indian government. Almost a fourth of India's 10,000 plant species grow in the state. Among the almost 4,000 flowering plant species (1,272 endemic to Kerala and 159 threatened) 900 species constitute highly sought medicinal plants.
Its 9,400 km² of forests include tropical wet evergreen and semi-evergreen forests (lower and middle elevations—3,470 km²), tropical moist and dry deciduous forests (mid-elevations—4,100 km² and 100 km², respectively), and montane subtropical and temperate (shola) forests (highest elevations—100 km²). Altogether, forests cover 24 percent of Kerala. Kerala hosts two of the world’s Ramsar Convention listed wetlands—Lake Sasthamkotta and the Vembanad-Kol wetlands, as well as 1455.4 km² of the vast Nilgiri Biosphere Reserve. Subjected to extensive clearing for cultivation in the twentieth century, much of Kerala's forest cover has been protected from clearfelling. Kerala's fauna has received notice for their diversity and high rates of endemism: 102 species of mammals (56 endemic), 476 species of birds, 202 species of freshwater fishes, 169 species of reptiles (139 of them endemic), and 89 species of amphibians (86 endemic). The fauna has been threatened by extensive habitat destruction (including soil erosion, landslides, desalinization, and resource extraction).
Eastern Kerala's windward mountains shelter tropical moist forests and tropical dry forests common in the Western Ghats. Here, sonokeling (Indian rosewood), anjili, mullumurikku (Erythrina), and Cassia number among the more than 1000 species of trees in Kerala. Other plants include bamboo, wild black pepper, wild cardamom, the calamus rattan palm (a type of climbing palm), and aromatic vetiver grass (Vetiveria zizanioides). Such fauna as the Asian Elephant, Bengal Tiger, Leopard (Panthera pardus), Nilgiri Tahr, Common Palm Civet, and Grizzled Giant Squirrel live among them. Reptiles include the king cobra, viper, python, and crocodile. Kerala has an abundance of bird species—several emblematic species include the Peafowl, the Great Hornbill, Indian Grey Hornbill, Indian Cormorant, and Jungle Myna. Lakes, wetlands, and waterways host fish such as kadu (stinging catfish) and choottachi (orange chromide, Etroplus maculatus, valued as an aquarium specimen).
Kerala's 14 districts distribute among Kerala's three historical regions: Malabar (northern Kerala), Kochi (central Kerala), and Travancore (southern Kerala). Kerala's modern-day districts (listed in order from north to south) correspond to them as follows:
Mahé, a part of the Indian union territory of Puducherry (Pondicherry), constitutes a coastal exclave surrounded by Kerala on all of its landward approaches. Thiruvananthapuram (Trivandrum) serves as the state capital and most populous city. Kochi counts as the most populous urban agglomeration and the major port city in Kerala. Kozhikode and Thrissur make up the other major commercial centers of the state. The High Court of Kerala convenes at Ernakulam. Kerala's districts, divided into administrative regions for levying taxes, are further subdivided into 63 taluks; those have fiscal and administrative powers over settlements within their borders, including maintenance of local land records.
Like other Indian states and most Commonwealth countries, a parliamentary system of representative democracy governs Kerala; state residents receive universal suffrage. The government has three branches. The unicameral legislature, known as the legislative assembly, comprises elected members and special office bearers (the Speaker and Deputy Speaker) elected by assemblymen. The Speaker presides over Assembly meetings while the Deputy Speaker presides whenever in the Speaker's absence. Kerala has 140 Assembly constituencies. The state sends 20 members to the Lok Sabha and nine to the Rajya Sabha, the Indian Parliament's upper house.
As in other Indian states, the Governor of Kerala serves as the constitutional head of state, appointed by the President of India. The Chief Minister of Kerala, the de facto head of government vested with most of the executive powers, heads the executive authority; the Governor appoints the Legislative Assembly's majority party leader to that position. The Council of Ministers, which answers to the Legislative Assembly, has its members appointed by the Governor on the advice of the Chief Minister.
The judiciary comprises the Kerala High Court (including a Chief Justice combined with 26 permanent and two additional (pro tempore) justices) and a system of lower courts. The High Court of Kerala constitutes the highest court for the state; it also decides cases from the Union Territory of Lakshadweep. Auxiliary authorities known as panchayats, elected through local body elections, govern local affairs.
The state's 2005–2006 budget reached 219 billion INR. The state government's tax revenues (excluding the shares from Union tax pool) amounted to 111,248 million INR in 2005, up from 63,599 million in 2000. Its non-tax revenues (excluding the shares from Union tax pool) of the Government of Kerala as assessed by the Indian Finance Commissions reached 10,809 million INR in 2005, nearly double the 6,847 million INR revenues of 2000. Kerala's high ratio of taxation to gross state domestic product (GSDP) has failed to alleviate chronic budget deficits and unsustainable levels of government debt, impacting social services.
Kerala hosts two major political alliances: the United Democratic Front (UDF—led by the Indian National Congress) and the Left Democratic Front (LDF—led by the Communist Party of India (Marxist) CPI(M). At present, the LDF stands as the ruling coalition in government; V.S. Achuthanandan of the CPI(M) sits as the Chief Minister of Kerala.
Kerala stands as one of the few regions in the world where communist parties have been democratically elected in a parliamentary democracy. Compared with most other Indians, Keralites research issues well and participate vigorously in the political process; razor-thin margins decide many elections.
Since its incorporation as a state, Kerala's economy has largely operated under welfare-based democratic socialist principles. Nevertheless, the state has been increasingly liberalizing its economy, thus moving toward a more mixed economy with a greater role played by the free market and foreign direct investment. Kerala's nominal gross domestic product (as of 2004–2005) has been calculated at an estimated 89451.99 crore INR, while recent GDP growth (9.2 percent in 2004–2005 and 7.4 percent in 2003–2004) has been robust compared to historical averages (2.3 percent annually in the 1980s and between 5.1 percent and 5.99 percent in the 1990s). Rapid expansion in services like banking, real estate, and tourism (13.8 percent growth in 2004–2005) outpaced growth in both agriculture (2.5 percent in 2004–2005) and the industrial sector (−2 percent in 2004–2005). Nevertheless, relatively few major corporations and manufacturing plants choose to operate in Kerala. Overseas Keralites help mitigate that through remittances sent home, which contribute around 20 percent of state GDP. Kerala's per capita GDP of 11,819 INR ranks significantly higher than the all-India average, although it still lies far below the world average. Additionally, Kerala's Human Development Index and standard of living statistics rank as the nation's best. That apparent paradox—high human development and low economic development—has been dubbed the Kerala phenomenon or the Kerala model of development, and arises mainly from Kerala's strong service sector.
The service sector (including tourism, public administration, banking and finance, transportation, and communications—63.8 percent of statewide GDP in 2002–2003) along with the agricultural and fishing industries (together 17.2 percent of GDP) dominate Kerala's economy. Nearly half of Kerala's people are dependent on agriculture alone for income. Some 600 varieties of rice (Kerala's most important staple food and cereal crop) harvest from 3105.21 km² (a decline from 5883.4 km² in 1990) of paddy fields; 688,859 tons per annum. Other key crops include coconut (899,198 ha), tea, coffee (23 percent of Indian production, or 57,000 tonnes), rubber, cashews, and spices—including pepper, cardamom, vanilla, cinnamon, and nutmeg. Around 10.50 lakh (1.050 million) fishermen haul an annual catch of 6.68 lakh (668,000) tons (1999–2000 estimate); 222 fishing villages line the 590 km coast, while an additional 113 fishing villages spread throughout the hinterland.
Traditional industries manufacturing such items as coir, handlooms, and handicrafts employ around ten lakh (one million) people. Around 1.8 lakh (180,000) small-scale industries employ around 909,859 Keralites, while some 511 medium and large scale manufacturing firms headquarter in Kerala. Meanwhile, a small mining sector (0.3 percent of GDP) involves extraction of such minerals and metals as ilmenite (136,908.74 tonnes in 1999–2000), kaolin, bauxite, silica, quartz, rutile, zircon, and sillimanite. Home vegetable gardens and animal husbandry also provide work for hundreds of thousands of people. Other significant economic sectors include tourism, manufacturing, and business process outsourcing. Kerala's unemployment rate has been variously estimated at 19.2 percent and 20.77 percent, although underemployment of those classified as "employed," low employability of many job-seeking youths, and a mere 13.5 percent female participation rate comprise significant problems. Estimates of the statewide poverty rate range from 12.71 percent to as high as 36 percent.
Kerala, situated on the lush and tropical Malabar Coast, was named one of the "ten paradises of the world" by National Geographic Traveler magazine, and the state has become famous for its ecotourism initiatives. Its unique culture and traditions, coupled with its varied demographics, have made Kerala an attractive destination. Growing at a rate of 13.31 percent, the state's tourism industry makes a major contribution to the state's economy.
Until the early 1980s, Kerala had been a relatively unknown destination, with most tourism circuits concentrated around the north of the country. Aggressive marketing campaigns launched by the Kerala Tourism Development Corporation, the government agency that oversees tourism prospects of the state, laid the foundation for the growth of the tourism industry. In the decades that followed, Kerala's tourism industry transformed the state into one of the niche holiday destinations in India. The tag line God's Own Country, used in its tourism promotions, soon became synonymous with the state. In 2006, Kerala attracted 8.5 million tourists–an increase of 23.68 percent in foreign tourist arrivals compared to the previous year, thus making it one of the fastest growing tourism destinations in the world.
Popular attractions in the state include the beaches at Kovalam, Cherai and Varkala; the hill stations of Munnar, Nelliampathi, Ponmudi and Wayanad; and national parks and wildlife sanctuaries at Periyar and Eravikulam National Park. The "backwaters" region, which comprises an extensive network of interlocking rivers, lakes, and canals that center on Alleppey, Kumarakom, and Punnamada (the site of the annual Nehru Trophy Boat Race held every August), also sees heavy tourist traffic. Heritage sites, such as the Padmanabhapuram Palace and the Mattancherry Palace, draw many visitors as well. Cities such as Kochi and Thiruvananthapuram have become popular centers for their shopping and traditional theatrical performances. During the summer months the popular temple festival Thrissur Pooram attracts many tourists.
Kerala has 145,704 kilometers (90,538.7 mi) of roads (4.2 percent of India's total). That translates to about 4.62 kilometers (2.87 mi) of road per thousand population, compared to an all India average of 2.59 kilometers (1.61 mi). Roads connect virtually all of Kerala's villages. Traffic in Kerala has been growing at a rate of 10–11 percent every year, resulting in high traffic and pressure on the roads. Kerala's road density measures nearly four times the national average, reflecting the state's high population density.
India's national highway network includes a Kerala-wide total of 1,524 km, comprising 2.6 percent of the national total. Eight designated national highways traverse the state. The Kerala State Transport Project (KSTP), including the GIS-based Road Information and Management Project (RIMS), maintains and expands the 1,600 kilometers (994.2 mi) of roadways that comprise the state highways system; it also oversees major district roads. Two national highways, NH 47 and NH 17, provide access to most of Kerala's west coast.
The state has major international airports at Thiruvananthapuram, Kochi, and Kozhikode that link the state with the rest of the nation and the world. The Cochin International Airport at Kochi represents the first international airport in India built without Central Government funds. The backwaters traversing the state constitute an important mode of inland navigation. The Indian Railways' Southern Railway line runs throughout the state, connecting all major towns and cities except the highland districts of Idukki and Wayanad. Trivandrum Central, Kollam Junction, Ernakulam Junction, Thrissur, Kozhikode, Shoranur Junction, and Palakkad comprise Kerala's major railway stations. Kerala has excellent connections to Coimbatore and Tirupur.
Kerala's population of 3.18 crore (31.8 million) has predominantly Malayali Dravidian ethnicity, while the rest belong mostly to Indo-Aryan, Jewish, and Arab elements in both culture and ancestry (usually mixed). Some 321,000 indigenous tribal Adivasis (1.10 percent of the populace) call Kerala home, mostly concentrated in the eastern districts. Malayalam serves as Kerala's official language; ethnic minorities also speak Tamil and various Adivasi languages.
Kerala has 3.44 percent of India's population; at 819 persons per km², its population density measures about three times that of India as a whole. Kerala has the lowest rate of population growth in India, and Kerala's decadal growth (9.42 percent in 2001) numbers less than half the all-India average of 21.34 percent. Whereas Kerala's population more than doubled between 1951 and 1991, adding 156 lakh (15.6 million) people to reach a total of 291 lakh (29.1 million) residents in 1991, the population stood at less than 320 lakh (32 million) by 2001. The coastal regions of Kerala have the highest density, leaving the eastern hills and mountains comparatively sparsely populated.
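As a rough back-of-the-envelope check (an illustrative calculation, not an official statistic), the quoted density follows directly from the population and area figures given above:

\[
\frac{31{,}838{,}619\ \text{persons}}{38{,}863\ \text{km}^2} \approx 819\ \text{persons per km}^2
\]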
Women comprise 51.42 percent of the population. The principal religions of Kerala include Hinduism (56.1 percent), Islam (24.7 percent), and Christianity (19 percent). Remnants of a once substantial Cochin Jewish population also practice Judaism. In comparison with the rest of India, Kerala experiences relatively little sectarianism. Nevertheless, there have been signs of increasing influences from religious extremist organizations including the Hindu Aikya Vedi.
Kerala's society practices patriarchy to a lesser degree than the rest of the Third World. Certain Hindu communities (such as the Nairs), Travancore Ezhavas and the Muslims around Kannur used to follow a traditional matrilineal system known as marumakkathayam, which ended in the years after Indian independence. Christians, Muslims, and some Hindu castes such as the Namboothiris and the Ezhavas follow makkathayam, a patrilineal system. Gender relations in Kerala have been reputed to be among the most equitable in India and the Third World. Forces such as the patriarchy-enforced oppression of women threaten that status.
Kerala's human development indices—elimination of poverty, primary level education, and health care—rate among the best in India. Kerala's literacy rate (91 percent) and life expectancy (73 years) now stand the highest in India. Kerala's rural poverty rate fell from 69 percent (1970–1971) to 19 percent (1993–1994); the overall (urban and rural) rate fell 36 percent between the 1970s and 1980s. By 1999–2000, the rural and urban poverty rates dropped to 10.0 percent and 9.6 percent respectively. Those changes stem largely from efforts begun in the late nineteenth century by the kingdoms of Cochin and Travancore to boost social welfare. Kerala's post-independence government maintained that focus.
Kerala's health care system has garnered international acclaim, with UNICEF and the World Health Organization designating Kerala the world's first "baby-friendly state." Representative of that condition, more than 95 percent of Keralite births have been hospital-delivered. Aside from ayurveda (both elite and popular forms), siddha, and unani, people practice many endangered and endemic modes of traditional medicine, including kalari, marmachikitsa, and vishavaidyam. Those propagate via gurukula discipleship, and comprise a fusion of both medicinal and supernatural treatments, drawing increasing numbers of medical tourists.
A steadily aging population (with 11.2 percent of Keralites over age 60) and low birthrate (18 per 1,000) make Kerala one of the few regions of the Third World to have undergone the "demographic transition" characteristic of such developed nations as Canada, Japan, and Norway. In 1991, Kerala's TFR (children born per woman) measured the lowest in India. Hindus had a TFR of 1.66, Christians 1.78, and Muslims 2.97.
Kerala's female-to-male ratio (1.058) numbers significantly higher than that of the rest of India. The same holds true for its sub-replacement fertility level and infant mortality rate (estimated at 12 to 14 deaths per 1,000 live births). Kerala's morbidity rate stands higher than that of any other Indian state—118 (rural Keralites) and 88 (urban) per 1,000 people. The corresponding all-India figures tally 55 and 54 per 1,000, respectively. Kerala's 13.3 percent prevalence of low birth weight stands substantially higher than that of First World nations. Outbreaks of water-borne diseases, including diarrhoea, dysentery, hepatitis, and typhoid, among the more than 50 percent of Keralites who rely on some 30 lakh (3 million) water wells pose another problem, worsened by the widespread lack of sewers.
The life expectancy of the people of Kerala reached 68 years according to the 1991 census.
The government or private trusts and individuals run schools and colleges in Kerala. The schools affiliate with either the Indian Certificate of Secondary Education (ICSE), the Central Board for Secondary Education (CBSE), or the Kerala State Education Board. Most private schools use English as the medium of instruction, though government-run schools offer both English and Malayalam. After completing their secondary education, which involves ten years of schooling, students typically enroll at Higher Secondary School in one of three streams—liberal arts, commerce or science. Upon completing the required coursework, the student can enroll in general or professional degree programs.
Thiruvananthapuram serves as one of the state's major academic hubs; it hosts the University of Kerala. The city also has several professional education colleges, including 15 engineering colleges, three medical colleges, three Ayurveda colleges, two colleges of homeopathy, six other medical colleges, and several law colleges. Trivandrum Medical College, Kerala's premier health institute, stands as one of the finest in the country, currently undergoing an upgrade in status to an All India Institute of Medical Sciences (AIIMS). The College of Engineering, Trivandrum ranks as one of the top engineering institutions in the country. The Asian School of Business and IIITM-K stand as two of the other premier management study institutions in the city, both situated inside Technopark. The Indian Institute of Space Technology, the unique and first of its kind in India, has a campus in the state capital.
Kochi constitutes another major educational hub. The Cochin University of Science and Technology (also known as "Cochin University") operates in the city. Most of the city's colleges offering tertiary education affiliate either with the Mahatma Gandhi University or Cochin University. Other national educational institutes in Kochi include the Central Institute of Fisheries Nautical and Engineering Training, the National University of Advanced Legal Studies, the National Institute of Oceanography and the Central Marine Fisheries Research Institute.
Kottayam also acts as a main educational hub; the district has attained near-universal literacy. Mahatma Gandhi University, CMS College (the first institution to start English education in Southern India), Medical College, Kottayam, and the Labour India Educational Research Center number among some of the important educational institutions in the district.
Kozhikode hosts two of the premier institutions in the country; the Indian Institute of Management, IIMK and the National Institute of Technology, NITC.
Kerala's culture blends Dravidian and Aryan influences, deriving from both a greater Tamil-heritage region known as Tamilakam and southern coastal Karnataka. Kerala's culture developed through centuries of contact with neighboring and overseas cultures. Native performing arts include koodiyattom, kathakali – from katha ("story") and kali ("performance") – and its offshoot Kerala natanam, koothu (akin to stand-up comedy), mohiniaattam ("dance of the enchantress"), thullal, padayani, and theyyam.
Other forms of art have a more religious or tribal nature. Those include chavittu nadakom and oppana (originally from Malabar), which combines dance, rhythmic hand clapping, and ishal vocalisations. Many of those art forms largely play to tourists or at youth festivals and enjoy less popularity with Keralites, who look to more contemporary art and performance styles, including those employing mimicry and parody.
Kerala's music also has ancient roots. Carnatic music dominates Keralite traditional music, the result of Swathi Thirunal Rama Varma's popularization of the genre in the nineteenth century. Raga-based renditions known as sopanam accompany kathakali performances. Melam (including the paandi and panchari variants) represents a more percussive style of music performed at Kshetram centered festivals using the chenda. Melam ensembles comprise up to 150 musicians, and performances may last up to four hours. Panchavadyam represents a different form of percussion ensemble; up to 100 artists use five types of percussion instruments. Kerala has various styles of folk and tribal music, the most popular music of Kerala being the filmi music of Indian cinema. Kerala's visual arts range from traditional murals to the works of Raja Ravi Varma, the state's most renowned painter.
Kerala has its own Malayalam calendar, used to plan agricultural and religious activities. Keralites typically serve cuisine as a sadhya on green banana leaves, including such dishes as idli, payasam, pulisherry, puttucuddla, puzhukku, rasam, and sambar. Keralites—both men and women alike—traditionally don flowing and unstitched garments. Those include the mundu, a loose piece of cloth wrapped around men's waists. Women typically wear the sari, a long and elaborately wrapped banner of cloth, wearable in various styles.
Malayalam literature, ancient in origin, includes such figures as the fourteenth century Niranam poets (Madhava Panikkar, Sankara Panikkar and Rama Panikkar), whose works mark the dawn of both modern Malayalam language and indigenous Keralite poetry. The "triumvirate of poets" (Kavithrayam), Kumaran Asan, Vallathol Narayana Menon, and Ulloor S. Parameswara Iyer, have been recognized for moving Keralite poetry away from archaic sophistry and metaphysics, and towards a more lyrical mode.
In the second half of the twentieth century, Jnanpith awardees like G. Sankara Kurup, S. K. Pottekkatt, and M. T. Vasudevan Nair have added to Malayalam literature. Later, such Keralite writers as O. V. Vijayan, Kamaladas, M. Mukundan, and Booker Prize winner Arundhati Roy, whose 1996 semi-autobiographical bestseller The God of Small Things takes place in the Kottayam town of Ayemenem, have gained international recognition.
Dozens of newspapers publish in Kerala in nine major languages. Malayalam and English constitute the principal languages of publication. The most widely circulating Malayalam-language newspapers include Mathrubhumi, Malayala Manorama, Deepika, Kerala Kaumudi, and Desabhimani. India Today Malayalam, Chithrabhumi, Kanyaka, and Bhashaposhini count among major Malayalam periodicals.
Doordarshan, the state-owned television broadcaster, provides a multi-system mix of Malayalam, English, and international channels via cable television. Manorama News (MM TV) and Asianet number among the Malayalam-language channels that compete with the major national channels. All India Radio, the national radio service, reaches much of Kerala via its Thiruvananthapuram 'A' Malayalam-language broadcaster. BSNL, Reliance Infocomm, Tata Indicom, Hutch and Airtel compete to provide cellular phone services. Selected towns and cities offer broadband internet provided by the state-run Kerala Telecommunications (run by BSNL) and by other private companies. BSNL and other providers provide Dial-up access throughout the state.
A substantial Malayalam film industry effectively competes against both Bollywood and Hollywood. Television (especially "mega serials" and cartoons) and the Internet have affected Keralite culture. Yet Keralites maintain high rates of newspaper and magazine subscriptions; 50 percent spend an average of about seven hours a week reading novels and other books. A sizeable "people's science" movement has taken root in the state, and such activities as writers' cooperatives have become increasingly common.
Several ancient ritualised arts have Keralite roots. Those include kalaripayattu—kalari ("place," "threshing floor," or "battlefield") and payattu ("exercise" or "practice"). Among the world's oldest martial arts, oral tradition attributes kalaripayattu's emergence to Parasurama. Other ritual arts include theyyam and poorakkali. Growing numbers of Keralites follow sports such as cricket, kabaddi, soccer, and badminton. Dozens of large stadiums, including Kochi's Jawaharlal Nehru Stadium and Thiruvananthapuram's Chandrashekaran Nair Stadium, attest to the mass appeal of such sports among Keralites.
Football stands as the most popular sport in the state. Some notable football stars from Kerala include I. M. Vijayan and V. P. Sathyan. Several Keralite athletes have attained world-class status, including Suresh Babu, P. T. Usha, Shiny Wilson, K. M. Beenamol, and Anju Bobby George. Volleyball, another popular sport, is often played on makeshift courts on sandy beaches along the coast. Jimmy George, born in Peravoor, Kannur, was arguably the most successful volleyball player ever to represent India; at his prime he rated among the world's ten best players.
Cricket, the most-followed sport in the rest of India and South Asia, enjoys less popularity in Kerala. Shanthakumaran Sreesanth, born in Kothamangalam and often referred to as simply "Sreesanth," has earned fame as a controversial right-arm fast-medium-pace bowler and a right-handed tail-ender batsman whose actions proved pivotal in sealing, among other games, the 2007 ICC World Twenty20. Tinu Yohannan, son of Olympic long jumper T. C. Yohannan, counts among the less successful Keralite cricketers.
DynASM is a preprocessor and tiny runtime library for creating assemblers and JIT compilers in C or C++.
DynASM was written for, and is maintained as part of, LuaJIT. LuaJIT 1 used DynASM in a JIT role. LuaJIT 2 doesn't use DynASM in a JIT role, but LuaJIT 2's interpreter is hand-written in assembly, and it uses DynASM as a powerful cross-platform assembler.
To get the latest copy of DynASM, run the following:
git clone http://luajit.org/git/luajit-2.0.git
cd luajit-2.0/dynasm
The official documentation for DynASM is extremely spartan, which can make it difficult to get started with DynASM. For using DynASM in a JIT role, this unofficial documentation's tutorial is recommended as a starting point. Once you're more familiar with DynASM, the reference and instruction listing pages are recommended reading for fleshing out your DynASM knowledge.
Note that DynASM supports the x86, x64, ARM, PowerPC, and MIPS instruction sets, but this unofficial documentation only covers x86 and x64.
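To give a concrete feel for the workflow that the tutorial walks through, below is a minimal, illustrative sketch of a mixed C/DynASM source file that JIT-compiles a two-argument add function for x64 Linux. It is a sketch under stated assumptions rather than an excerpt from the documentation: the file name (add.dasc), the label name (addfn), and the exact build command are assumptions, and error checking is omitted. The lines beginning with | are DynASM directives and instructions; the file must first be run through the dynasm.lua preprocessor (for example, lua dynasm.lua -o add.c add.dasc), and the generated C file compiled together with dasm_proto.h and dasm_x86.h from the dynasm directory.

// add.dasc -- illustrative sketch only; see the assumptions noted above.
#include <stdio.h>
#include <sys/mman.h>
#include "dasm_proto.h"
#include "dasm_x86.h"

|.arch x64
|.section code
|.globals lbl_
|.actionlist actions

#define Dst &state   // dasm_put() calls generated from the | lines use Dst

int main(void) {
  dasm_State *state;
  void *labels[lbl__MAX];

  dasm_init(&state, DASM_MAXSECTION);          // sections declared above
  dasm_setupglobal(&state, labels, lbl__MAX);  // storage for global labels
  dasm_setup(&state, actions);                 // bind the generated action list

  // Emit "return arg1 + arg2" under the SysV x64 calling convention.
  |.code
  |->addfn:
  | mov eax, edi
  | add eax, esi
  | ret

  size_t size;
  dasm_link(&state, &size);                    // resolve labels, compute code size
  void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  dasm_encode(&state, mem);                    // write the machine code into mem
  mprotect(mem, size, PROT_READ | PROT_EXEC);  // make the buffer executable
  dasm_free(&state);

  int (*jit_add)(int, int) = (int (*)(int, int))labels[lbl_addfn];
  printf("%d\n", jit_add(2, 3));               // prints 5
  munmap(mem, size);
  return 0;
}

The same init, setup, emit, link, and encode sequence is the basic skeleton that larger DynASM-based code generators tend to follow.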
Breast-feeding is the predominant postnatal transmission route for HIV-1 infection in children. However, a majority of breast-fed infants do not become HIV-infected despite continuous exposure to the virus through their mothers’ milk over many months. What protects some breast-fed infants from HIV-1 infection? HIV-1 entry across the infant’s mucosal barrier is partially mediated through binding of the HIV-1 surface glycoprotein gp120 to DC-SIGN (Dendritic Cell-Specific ICAM3-Grabbing Non-Integrin) on human dendritic cells. Lewis antigen glycans, present in human milk, bind to DC-SIGN and inhibit HIV-1 transfer to CD4+ T lymphocytes. Human Milk Oligosaccharides (HMO) carry one or more Lewis antigen epitopes. We hypothesize that HMO compete with gp120 for DC-SIGN binding.
In collaboration with Dr. Benhur Lee’s lab at the University of California - Los Angeles, we have shown in two independent assays that physiological concentrations of HMO significantly reduce gp120-binding to DC-SIGN by more than 80% (Hong et al. 2009). These results may provide an additional explanation for the inhibitory effects of human milk on HIV-1 mother-to-child-transmission.
Our lab now aims to identify the specific individual HMO that interact with DC-SIGN. The results may guide the development of glycan-based drugs that prevent transmission of HIV-1 and other pathogens that use DC-SIGN as an entry point. However, blocking DC-SIGN may be a double-edged sword.
In collaboration with Dr. Grace Aldrovandi’s lab at the Children’s Hospital Los Angeles and Dr. Louise Kuhn’s lab at Columbia University, we also aim to identify whether the presence of specific oligosaccharides in human milk correlates with a reduced risk for HIV-1 mother-to-child-transmission (Bode et al. 2012).
Philippine Legal Research
By Milagros Santos-Ong
Published July 2005
Table of Contents
3.1 Executive Branch
3.3 Judicial System
4. Legal System
4.2 Sources of Law
6.1 Law Schools
6.2 Bar Associations
The Philippines is an archipelago of 7,107 islands with a land area of 299,740 sq. kilometers. It is surrounded by the Pacific Ocean on the East, the South China Sea on the North and the West, and the Celebes Sea on the South. This comprises the National Territory of the Philippines. Article I of the 1987 Constitution provides that the "national territory comprises the Philippine archipelago, with all the islands and waters embraced therein and all other territories over which the Philippines has sovereignty or jurisdiction."
The Filipino culture was molded by more than a hundred ethnic groups consisting of 91% Christian Malay, 4% Muslim Malay, 1.5% Chinese and 3% others. As of the August 2007 national census, the population of the Philippines had increased to 88.57 million and is estimated to reach 92.23 million in 2009. The next census is scheduled to be undertaken in 2009.
Filipino is the national language (1987 Constitution, Art. XIV, sec. 6). However, Filipino and English are the official languages for the purpose of communication and instruction (Art. XIV, sec 7). There are several dialects or regional languages spoken throughout the different islands of the country, but there are eight major dialects, which include Bicolano, Cebuano, Hiligaynon or Ilongo, Ilocano, Pampango, Pangasinense, Tagalog, and Waray.
There are two major religions of the country: Christianity and Islam. Christianity, more particularly Catholicism, is practiced by more than 80% of the population. It was introduced by Spain in 1521. The Protestant religion was introduced by American missionaries.
Aglipay, or the Philippine Independent Church, and the Iglesia ni Kristo are two Filipino independent churches. Other Christian religious organizations like the El Shaddai, Ang Dating Daan, and 'Jesus is Lord' have been established and have a great influence to the nation.
The Constitution of the Philippines specifically provides that the separation of Church and State is inviolable (Constitution (1987), Art. II, sec. 6). However, religion has a great influence in the legal system of the Philippines. For the Muslim or Islamic religion, a special law, the Code of Muslim Personal Laws (Presidential Decree No. 1083), was promulgated and special courts, the Shari’a courts, were established. The Church has also affected the present political system. A priest had to take a leave from the priesthood when he was elected governor of a province in Region 3. A movement was even started to help choose the President of the Philippines and other government officials in the May 2009 national election.
The Constitution is the fundamental law of the land. The present political structure of the Philippines was defined by the 1987 Constitution, duly ratified in a plebiscite held on February 2, 1987. There is a move now in Congress which was started at the House of Representatives to revise/amend the present Constitution. One of the major problems to be resolved by both Houses of Congress is the mode or method in revising/amending the Constitution.
The 1987 Constitution provides that the Philippines is a democratic and republican state where sovereignty resides in the people and all government authority emanates from them (Article II, section 1).
The government structure differs as one goes through the history of the Philippines, which may be categorized as follows: a) Pre-Spanish; b) Spanish period; c) American period; d) Japanese period; e) Republic; f) Martial Law period; and g) Republic Revival.
a) Pre-Spanish (before 1521)
The Barangays, or independent communities, were the units of government before Spain colonized the Philippines. The head of each barangay was the Datu. He governed the barangay using native rules, which were customary and unwritten. There were two codes during this period: the Maragtas Code issued by Datu Sumakwel of Panay Island and the Code of Kalantiao issued by Datu Kalantiaw in 1433. The existence of these codes is questioned by some historians.
Just like many ancient societies, trial by ordeal was practiced.
b) Spanish period (1521-1898)
The Spanish period can be traced from the time Magellan discovered the Philippines when he landed on Mactan Island (Cebu) on March 16, 1521. Royal decrees, Spanish laws, and/or special issuances of special laws for the Philippines were extended to the Philippines from Spain by the Spanish Crown through the councils. The chief legislator was the governor-general, who exercised legislative functions by promulgating executive decrees, edicts or ordinances with the force of law. The Royal Audencia, or Spanish Supreme Court, in the Philippines also exercised legislative functions when laws were passed in the form of autos accordados. Melquiades Gamboa, in his book entitled "An Introduction to Philippine Law" (7th ed., 1969), listed the most prominent laws in this period: Fuero Juzgo, Fuero Real, Las Siete Partidas, Leyes de Toros, Nueva Recopilacion de las Leyes de Indias and the Novisima Recopilacion. Some of these laws were also in force in other Spanish colonies. Laws in force at the end of the Spanish rule in 1898 are as follows: Codigo Penal de 1870, Ley Provisional para la Aplicacion de las Disposiciones del Codigo Penal en las Islas Filipinas, Ley de Enjuiciamiento Criminal, Ley de Enjuiciamiento Civil, Codigo de Comercio, Codigo Civil de 1889, Ley Hipotecaria, Ley de Minas, Ley Notarial de 1862, Railway Law of 1877, Law of Foreigners for Ultramarine Provinces and the Code of Military Justice. Some of these laws remained in force even during the early American period and/or until Philippine laws were promulgated.
In between the Spanish and the American period is what Philippine historians consider the first Philippine Republic. This was when General Emilio Aguinaldo proclaimed the Philippine Independence in Kawit , Cavite on June 12, 1898. The Malolos Congress also known as Assembly of the Representatives, which can be considered as revolutionary in nature, was convened on September 15, 1898. The first Philippine Constitution, the Malolos Constitution was approved on January 20, 1899. General Emilio Aguinaldo was the President and Don Gracio Gonzaga as the Chief Justice. A Republic, although with de facto authority, was in force until the start of the American Sovereignty when the Treaty of Paris was signed on December 10, 1898.
c) American period (1898-1946)
The start of this period can be traced after the Battle of Manila Bay when Spain ceded the Philippines to the United States upon the signing of the Treaty of Paris on December 10, 1898. A military government was organized with the military governor as the chief executive exercising executive, legislative and judicial functions. Legislative function was transferred to the Philippine Commission in 1901 which was created by the United States President as commander-in-chief of the Armed forces and later ratified by the Philippine Bill of 1902. This same Bill provided for the establishment of the First Philippine Assembly which convened on October 16, 1907. The Jones law provided for the establishment of a bicameral legislative body on October 16, 1916, composed of the Senate and the House of Representatives.
The United States Constitution was recognized until the promulgation of the Philippine Constitution on February 8, 1935, signed by U.S. President Franklin Delano Roosevelt on March 23, 1935 and ratified at a plebiscite held on May 14, 1935.
The organic laws that governed the Philippines during this period were: President McKinley’s Instruction to the Second Philippine Commission on April 7, 1900; Spooner Amendment of 1901; Philippine Bill of 1902; Jones Law of 1916 and the Tydings McDuffie Law of May 1, 1934. The later law is significant for it allowed the establishment of a Commonwealth government and the right to promulgate its own Constitution. The 1935 Constitution initially changed the legislative system to a unicameral system. However, the bicameral system was restored pursuant to the 1940 Constitutional amendment. The Commonwealth government is considered as a transition government for ten years before the granting of the Philippine independence. Cayetano Arellano was installed as the first Chief Justice in 1901. The Majority of the Justices of the Philippine Supreme Court were Americans. Decisions rendered by the Supreme Court of the Philippines were appealed to the United States Supreme Court, which were reported in the United States Supreme Court Reports.
Manuel L. Quezon and Sergio Osmeña were elected as President and Vice-President respectively during the September 14, 1935 elections. In this election, President Quezon won over General Emilio Aguinaldo and Bishop Gregorio Aglipay, the President of the First Philippine Republic (1898) and the head of the Aglipayan church, respectively. This Commonwealth government went into exile in Washington DC during the Japanese period from May 13, 1942 to October 3, 1944. President Manuel L. Quezon died on August 1, 1944 and was succeeded by President Sergio Osmena who brought back the government to Manila on February 28, 1945.
d) Japanese period (1941-1944)
The Japanese period began with the invasion of Japanese forces, when Clark Field, an American military airbase in Pampanga, was bombed on December 8, 1941. A Japanese-sponsored Republic was established with Jose P. Laurel as its President. Jose Yulo was the Chief Justice of the Supreme Court. This period was considered a military rule by the Japanese Imperial Army. The 1943 Constitution was ratified by a special national convention of the Kapisanan sa Paglilingkod ng Bagong Pilipinas (KALIBAPI). The period lasted for three years and ended in 1944 with the defeat of the Japanese forces.
e) Republic period (1946-1972)
July 4, 1946 was the inauguration of Philippine independence. A Philippine Republic was born. A republic means a government by the people and sovereignty resides in the entire people as a body politic. The provisions of the 1935 Constitution defined the government structure which provided for the establishment of three co-equal branches of government. Executive power rests in the President, legislative power in two Houses of Congress and judicial power in the Supreme Court, and inferior courts. Separation of powers is recognized.
Efforts to amend the 1935 Constitution started on August 24, 1970 with the approval of Republic Act No. 6132 where 310 delegates were elected on November 10, 1970. On June 1, 1971, the Constitutional Convention met. While it was still in session, President Ferdinand E. Marcos declared Martial Law on September 21, 1972. The Constitutional Convention completed the draft Constitution on November 29, 1972. It was submitted for ratification through citizens’ assemblies on January 17, 1973. This is known as the 1973 Constitution.
f) Martial Law Period (1972-1986).
The Congress of the Philippines was abolished when Martial Law was declared on September 21, 1972. The Martial Law period was governed by the 1973 Constitution, which established a parliamentary form of government. Executive and legislative powers were merged, and the Chief Executive was the Prime Minister, who was elected by a majority of all members of the National Assembly (Parliament). The Prime Minister had the power to advise the President, who was the symbolic head of state. This parliamentary government was never implemented due to the transitory provisions of the 1973 Constitution. Military tribunals were also established. Amendments to the Constitution were made: by virtue of Amendment No. 3, the powers of the President and the Prime Minister were merged in the incumbent, President Ferdinand E. Marcos. Amendment No. 6 authorized President Marcos to continue exercising legislative powers for as long as Martial Law was in effect. Amendment No. 7 provided for the barangays as the smallest political subdivision and for the sanggunians, or councils. The 1981 amendment introduced a modified presidential/parliamentary system of government: the President was to be elected by the people for a term of six years, while the Prime Minister was to be elected by a majority of the Batasang Pambansa (Parliament) upon the nomination of the President. The Prime Minister was the head of the Cabinet and had supervision over all the ministries.
Proclamation No. 2045 (1981) lifted Martial Law and abolished military tribunals. Elections were held on June 16, 1981, and President Marcos was re-elected into office as President. The Constitution was again amended in 1984, and a plebiscite was held on January 27, 1984 pursuant to Batas Pambansa Blg. 643 (1984). Elections were held on May 14, 1984 for the 183 elective seats of the 200-member Batasang Pambansa.
An impeachment resolution by 57 members of the opposition was filed against President Marcos but was dismissed. A special presidential election, popularly known as the Snap Election, was called by President Marcos on November 3, 1985 and was held on February 7, 1986. Results from the National Movement for Free Elections, or NAMFREL, showed that Corazon Aquino led by over a million votes. However, the Batasang Pambansa declared that Ferdinand E. Marcos and Arturo M. Tolentino won over Corazon C. Aquino and Salvador H. Laurel as President and Vice-President, respectively. This event led to the People Power revolution, which ousted President Marcos on February 25, 1986.
g) Republic Revival (1986-present)
The Republic period was revived after the bloodless revolution popularly known as People Power or the EDSA Revolution.
Corazon C. Aquino and Salvador H. Laurel took their oath of office as President and Vice President of the Philippine Republic on February 25, 1986. Proclamation No. 1 (1986) was promulgated wherein the President and the Vice President took power in the name and by the will of the Filipino people. Proclamation No. 3 (1986) adopted as the Provisional Constitution or Freedom Constitution, provided for a new government.
A Constitutional Commission was constituted by virtue of Article V of the Provisional Constitution and Proclamation No. 9 (1986). The Constitutional Commission, composed of 48 members, was mandated to draft a Constitution. After 133 days, the draft constitution was submitted to the President on October 15, 1986 and ratified by the people in a plebiscite held on February 2, 1987. Under the transitory provisions of the 1987 Constitution, the President and Vice President elected in the February 7, 1986 elections were given a six-year term of office until June 30, 1992. Congressional elections were held on May 11, 1987. The republican form of government was officially revived when the 1987 Constitution was ratified and Congress was convened in 1987. Legislative power again rested in Congress, and Republic Acts were again issued, their numbering continuing from the last number used before Martial Law was declared (Republic Act No. 6636 (1987) followed Republic Act No. 6635 (1972)). The republican government under the 1987 Constitution was of the same type as the pre-Martial Law government under the 1935 Constitution, with three co-equal branches: the Executive, the Legislative and the Judiciary.
Aside from the three co-equal branches, the following are other offices in government:
The President is vested with the executive power. (Art. VII, sec. 1, 1987 Constitution). The President is both the Chief of State (head of government) and the Commander-in-Chief of all the Armed Forces of the Philippines (Art. VII, sec. 18). Since 1898 when the First Philippine Republic was established, the Philippines has had thirteen (13) Presidents.
The following are the Departments under the Executive Branch:
- Department of Agrarian Reform
- Department of Agriculture
- Department of Budget and Management
- Department of Education
- Department of Energy
- Department of Environment and Natural Resources
- Department of Finance
- Department of Foreign Affairs
- Department of Health
- Department of Interior and Local Government
- Department of Justice
- Department of Labor and Employment
- Department of National Defense
- Department of Public Works and Highways
- Department of Science and Technology
- Department of Social Welfare and Development
- Department of Tourism
- Department of Trade and Industry
- Department of Transportation and Communications
- National Economic and Development Authority
- Office of the Press Secretary
There are specific bureaus and offices directly under the Office of the President.
Both the President and the Vice-President are elected by direct vote of the Filipino people for a term of six years. The President is not eligible for reelection, while the Vice President cannot serve for more than two terms. Congress is empowered to promulgate rules for the canvassing of certificates of election. The Supreme Court sitting en banc is the sole judge of all election contests relating to their election, returns and qualifications (Art. VII, sec. 4); it thus acts as the Presidential Electoral Tribunal, and it promulgated the 2005 Rules of the Presidential Electoral Tribunal (A.M. No. 05-11-06-SC). Both may be removed from office by impeachment (Art. XI, sec. 2), to be initiated by the House of Representatives (Art. XI, sec. 3) and tried and decided by the Senate (Art. XI, sec. 3 (6)).
Cabinet members are nominated by the President, subject to the confirmation of the Commission on Appointments (Art. VII, sec. 16), which consists of the President of the Senate, as ex officio Chairman, twelve Senators and twelve members of the House of Representatives (Art. VI, sec. 1).
The President exercises control over all the executive departments, bureaus and offices (Art. VII, sec. 17).
Legislative power is vested in the Congress of the Philippines, consisting of the Senate and the House of Representatives (Art. VI, sec. 1). The legislative structure has undergone numerous changes throughout history. To better appreciate this transition, the Philippine Senate provides a detailed account on the Senate website.
The Senate is composed of twenty-four (24) Senators who are elected at large by qualified voters and serve a term of not more than six (6) years. No Senator may be elected for more than two consecutive terms (Art. VI, sec. 4). The Senate is led by the Senate President, the Senate President Pro Tempore, the Majority Leader and the Minority Leader. The Senate President is elected by majority vote of its members. There are thirty-six (36) permanent committees and five (5) oversight committees. The sole judge of contests relating to the election, returns and qualifications of members of the Senate is the Senate Electoral Tribunal (SET), which is composed of nine members, three of whom are Justices of the Supreme Court and six of whom are members of the Senate (Art. VI, sec. 17). The Senate Electoral Tribunal approved its Revised Rules on November 12, 2003.
The House of Representatives is composed of not more than two hundred fifty (250) members, elected by legislative districts for a term of three years. No representative shall serve for more than three consecutive terms. The party-list representatives, who come from registered national, regional and sectoral parties and organizations, shall constitute twenty percent (20%) of the total number of representatives. The election of party-list representatives is governed by Republic Act No. 7941, which was approved on March 3, 1995. In a decision penned by Justice Antonio T. Carpio on April 21, 2009 in Barangay Association for National Advancement and Transparency (BANAT) v. Commission on Elections (G.R. No. 17971) and Bayan Muna, Advocacy for Teacher Empowerment Through Action, Cooperation and Harmony Towards Educational Reforms, Inc. and Abono (G.R. No. 179295), the Supreme Court declared Republic Act No. 7941 unconstitutional with regard to the two percent threshold in the distribution of additional party-list seats. The Court in this decision provided a procedure for the allocation of additional seats under the party-list system. Major political parties are disallowed from participating in party-list elections.
The officials of the House are the Speaker of the House, the Deputy Speaker for Luzon, the Deputy Speaker for the Visayas, the Deputy Speaker for Mindanao, the Majority Leader, and the Minority Leader. The Speaker of the House is elected by majority vote of its members. There are fifty-seven (57) standing committees and sixteen (16) special committees of the House of Representatives. The sole judge of contests relating to the election, returns and qualifications of members of the House of Representatives is the House of Representatives Electoral Tribunal (HRET), which is composed of nine members, three of whom are Justices of the Supreme Court and six of whom are members of the House of Representatives (Art. VI, sec. 17). The House of Representatives Electoral Tribunal adopted its 1998 Internal Rules on March 24, 1998.
3.3 Judicial System
The organizational chart of the whole judicial system, and those of each type of court, are available in the 2002 Revised Manual of Clerks of Court (Manila: Supreme Court, 2002). The organizational chart was amended following the passage of Republic Act No. 9282, which elevated the Court of Tax Appeals (CTA).
Judicial power rests with the Supreme Court and the lower courts, as may be established by law (Art. VIII, sec. 1). The judiciary enjoys fiscal autonomy; its appropriation may not be reduced by the legislature below the amount appropriated the previous year (Art. VIII, sec. 2). The Rules of Court of the Philippines, as amended, and the rules and regulations issued by the Supreme Court define the rules and procedures of the Judiciary. These rules and regulations take the form of Administrative Matters, Administrative Orders, Circulars, Memorandum Circulars, Memorandum Orders and OCA Circulars. To inform the members of the Judiciary, the legal profession and the public of these rules and regulations, the Supreme Court disseminates them to all courts, publishes important ones in newspapers of general circulation, prints them in book or pamphlet form, and now posts them on the Supreme Court website and the Supreme Court E-Library website.
Department of Justice Administrative Order No. 162, dated August 1, 1946, provided for the Canons of Judicial Ethics. The Supreme Court of the Philippines promulgated a new Code of Judicial Conduct for the Philippine Judiciary, effective June 1, 2004 (A.M. No. 03-05-01-SC), which was published in two newspapers of general circulation on May 3, 2004 (Manila Bulletin & Philippine Star) and is available on its website and the Supreme Court E-Library website.
The Supreme Court promulgated on June 21, 1988 the Code of Professional Responsibility for the legal profession. The draft was prepared by the Committee on Responsibility, Discipline and Disbarment of the Integrated Bar of the Philippines.
A Code of Conduct for Court Personnel (A.M. No. 03-06-13-SC) was adopted on April 13, 2004, effective June 1, 2004, published in two newspapers of general circulation on April 26, 2004 (Manila Bulletin & Philippine Star) and available at its website and the Supreme Court E-Library website.
The barangay chiefs exercised judicial authority prior to the arrival of the Spaniards in 1521. During the early years of the Spanish period, judicial powers were vested in Miguel Lopez de Legaspi, the first governor-general of the Philippines, who administered civil and criminal justice under the Royal Order of August 14, 1569.
The Royal Audencia was established on May 5, 1583, composed of a president, four oidores (justices) and a fiscal. The Audencia exercised both administrative and judicial functions. Its functions and structure were modified in 1815, when its president was replaced by a chief justice and the number of justices was increased. It came to be known as the Audencia Territorial de Manila, with two branches, civil and criminal. A Royal Decree issued on July 24, 1861 converted it into a purely judicial body whose decisions were appealable to the Supreme Court of Spain in Madrid. A territorial Audencia in Cebu and an Audencia for criminal cases in Vigan were organized on February 26, 1898. The Audencias were suspended by General Wesley Merritt when a military government was established after Manila fell to American forces in 1898. Major General Elwell S. Otis re-established the Audencia on May 29, 1899 by virtue of General Order No. 20. Said Order provided for six Filipino members of the Audencia. Act No. 136 abolished the Audencia and established the present Supreme Court on June 11, 1901, with Cayetano Arellano as the first Chief Justice together with associate justices, the majority of whom were Americans. Filipinization of the Supreme Court started only during the Commonwealth, in 1935. The Administrative Code of 1917 provided for a Supreme Court with a Chief Justice and eight Associate Justices. With the ratification of the 1935 Constitution, the membership was increased to 11, with two divisions of five members each. The 1973 Constitution further increased its membership to 15, with two (2) divisions.
Pursuant to the provisions of the 1987 Constitution, the Supreme Court is composed of a Chief Justice and fourteen Associate Justices who shall serve until the age of seventy (70). The Court may sit En Banc or in its three (3) divisions composed of five members each. A vacancy must be filled up by the President within ninety (90) days of occurrence.
Article VIII, sec. 4 (2) explicitly provides for the cases that must be heard En Banc, and sec. 4 (3) for cases that may be heard by divisions (Constitution, Art. VIII, sec. 4, par. 1). The Judiciary Reorganization Act of 1980 transferred from the Department of Justice to the Supreme Court the administrative supervision of all courts and their personnel. This was affirmed by Art. VIII, sec. 6 of the 1987 Constitution. To effectively discharge this constitutional mandate, the Office of the Court Administrator (OCA) was created under Presidential Decree No. 828, as amended by Presidential Decree No. 842, and its functions were further strengthened by a Resolution of the Supreme Court En Banc dated October 24, 1996. Its principal function is the supervision and administration of the lower courts throughout the Philippines and all their personnel. It reports and recommends to the Supreme Court all actions that affect lower court management. The OCA is headed by the Court Administrator, three (3) Deputy Court Administrators and three (3) Assistant Court Administrators.
According to the 1987 Constitution, Art. VIII, sec. 5, the Supreme Court exercises the following powers:
- Exercise jurisdiction over cases affecting ambassadors, other public ministers and consuls, and over petitions for certiorari, prohibition, mandamus, quo warranto, and habeas corpus.
- Review, revise, reverse, modify, or affirm on appeal or certiorari, as the law or the Rules of Court may provide final judgments and orders of lower courts in:
- All cases in which the constitutionality or validity of any treaty, international or executive agreement, law, presidential decree, proclamation, order, instruction, ordinance, or regulation is in question.
- All cases involving the legality of any tax, impost, assessment, or toll, or any penalty imposed in relation thereto.
- All cases in which the jurisdiction of any lower court is in issue.
- All criminal cases in which the penalty imposed is reclusion perpetua or higher.
- All cases in which only an error or question of law is involved.
- Assign temporarily judges of lower court to other stations as public interest may require. Such temporary assignment shall not exceed six months without the consent of the judge concerned.
- Order a change of venue or place of trial to avoid a miscarriage of justice.
- Promulgate rules concerning the protection and enforcement of constitutional rights, pleading, practice, and procedure in all courts, the admission to the practice of law, the Integrated Bar, and legal assistance to the underprivileged. Such rules shall provide a simplified and inexpensive procedure for the speedy disposition of cases, shall be uniform for all courts of the same grade, and shall not diminish, increase or modify substantive rights. Rules of procedure of special courts and quasi-judicial bodies shall remain effective unless disapproved by the Supreme Court.
- Appoint all officials and employees of the Judiciary in accordance with the Civil Service Law (Sec. 5, id.).
The Supreme Court has adopted and promulgated the Rules of Court for the protection and enforcement of constitutional rights, pleadings and practice and procedure in all courts, and the admission to the practice of law. In line with this mandate, and in response to extrajudicial killings and disappearances, the Supreme Court passed two important Resolutions: the Rule on the Writ of Amparo, approved on September 25, 2007 and effective on October 24, 2007, and the Rule on the Writ of Habeas Data, approved on January 22, 2008 and effective February 2, 2008. Amendments are promulgated through the Committee on Revision of Rules. The Court also issues administrative rules and regulations in the form of court issuances, which are posted on the Supreme Court website and the Supreme Court E-Library website.
The Judicial and Bar Council was created by virtue of Art. VIII, sec. 8, under the supervision of the Supreme Court. Its principal function is to screen prospective appointees to any judicial post. The Judicial and Bar Council promulgated its Rules (JBC-009) on October 31, 2000 in the performance of this function. It is composed of the Chief Justice as ex officio Chairman, the Secretary of Justice and representatives of Congress as ex officio members, and a representative of the Integrated Bar, a professor of law, a retired member of the Supreme Court and a representative of the private sector as members.
The Philippine Judicial Academy (PHILJA) is the "training school for justices, judges, court personnel, lawyers and aspirants to judicial posts." It was originally created by the Supreme Court on March 16, 1996 by virtue of Administrative Order No. 35-96 and was institutionalized on February 26, 1998 by virtue of Republic Act No. 8557. It is an important component of the Supreme Court because of its mission of judicial education. No appointee to the Bench may commence the discharge of his adjudicative functions without completing the prescribed course in the Academy. Its organizational structure and administrative set-up are provided for by the Supreme Court in its En Banc resolution (Revised A.M. No. 01-1-04-SC-PHILJA).
The Philippine Mediation Center was organized “pursuant to Supreme Court “en banc” Resolution A.M. No. 01-10-5-SC-PHILJA, dated October 16, 2001, and in line with the objectives of the Action Program for Judicial Reforms (APJR) to decongest court dockets, among others, the Court prescribed guidelines in institutionalizing and implementing the mediation program in the Philippines. The same resolution designated the Philippine Judicial Academy as the component unit of the Supreme Court for Court-Annexed Mediation and other Alternative Dispute Resolution (ADR) Mechanisms, and established the Philippine Mediation Center (PMC).”
The Mandatory Continuing Legal Education Office was organized to implement the rules on Mandatory Continuing Legal Education for members of the Integrated Bar of the Philippines (B.M. No. 850, "Mandatory Continuing Legal Education (MCLE)"). It holds office in the Integrated Bar of the Philippines main office.
Commonwealth Act No. 3 (December 31, 1935), pursuant to the 1935 Constitution (Art. VIII, sec. 1), established the Court of Appeals. It was formally organized on February 1, 1936 and was composed of eleven justices, with Justice Pedro Concepcion as the first Presiding Justice. Its composition was increased to 15 in 1938 and further increased to 17 in 1942 by virtue of Executive Order No. 4. The Court of Appeals was regionalized in the later part of 1944, when five District Courts of Appeals were organized for Northern, Central and Southern Luzon, for Manila, and for the Visayas and Mindanao. It was abolished by President Osmeña in 1945, pursuant to Executive Order No. 37, due to the prevailing abnormal conditions. However, it was re-established on October 4, 1946 by virtue of Republic Act No. 52, with a Presiding Justice and fifteen (15) Associate Justices. Its composition was increased by the following enactments: Republic Act No. 1605 to eighteen (18); Republic Act No. 5204 to twenty-four (24); Presidential Decree No. 1482 to one (1) Presiding Justice and thirty-four (34) Associate Justices; Batas Pambansa Blg. 129 to fifty (50); and Republic Act No. 8246 to sixty-nine (69). With Republic Act No. 8246, the Court of Appeals stations in Cebu and Cagayan de Oro were established.
Batas Pambansa Blg. 129 changed the name of the Court of Appeals to Intermediate Appellate Court. Executive Order No. 33 brought back its name to Court of Appeals.
Section 9 of Batas Pambansa Blg. 129 as amended by Executive Order No. 33 and Republic Act No. 7902 provides for the jurisdiction of the Court of Appeals as follows:
- Original jurisdiction to issue writs of mandamus, prohibition, certiorari, habeas corpus, and quo warranto, and auxiliary writs or processes, whether or not in aid of its appellate jurisdiction;
- Exclusive original jurisdiction over actions for annulment of judgment of Regional Trial Courts; and
- Exclusive appellate jurisdiction over all final judgments, decisions, resolutions, orders or awards of Regional Trial Courts and quasi-judicial agencies, instrumentalities, boards or commissions, including the Securities and Exchange Commission, the Social Security Commission, the Employees Compensation Commission and the Civil Service Commission, except those falling within the appellate jurisdiction of the Supreme Court in accordance with the Constitution, the Labor Code of the Philippines under Presidential Decree No. 442, as amended, the provisions of this Act, and of subparagraph (1) of the third paragraph and subparagraph (4) of the fourth paragraph of Section 17 of the Judiciary Act of 1948.
The Supreme Court, acting on the recommendation of the Committee on Revision of the Rules of Court, resolved to approve the 2002 Internal Rules of the Court of Appeals (A.M. No. 02-6-13-CA), which were amended by a resolution of the Court En Banc on July 13, 2004 (A.M. No. 03-05-03-SC).
Pursuant to Republic Act No. 9372, otherwise known as the Human Security Act of 2007, the Chief Justice issued Administrative Order No. 118-2007, designating the First, Second and Third Divisions of the Court of Appeals to handle cases involving the crimes of terrorism or conspiracy to commit terrorism, and all other matters incident to the said crimes, emanating from Metropolitan Manila and the rest of Luzon. For those emanating from the Visayas, all divisions of the Court of Appeals stationed in Cebu are designated to handle these cases, while the Court of Appeals stationed in Cagayan de Oro handles cases from Mindanao.
The Anti-Graft Court, or Sandiganbayan, was created to maintain integrity, honesty and efficiency in the bureaucracy and to weed out misfits and undesirables in government service (1973 Constitution, Art. XIII, sec. 5; 1987 Constitution, Art. XI, sec. 4). It was restructured by Presidential Decree No. 1606, as amended by Republic Act No. 8249. It is composed of a Presiding Justice and fourteen (14) Associate Justices sitting in five Divisions of three (3) Justices each.
The Supreme Court, acting on the recommendation of the Committee on Revision of the Rules of Court, resolved with modification the Revised Internal Rules of the Sandiganbayan on August 28, 2002 (A.M. No. 02-6-07-SB).
Created by Republic Act No. 1125 on June 16, 1954, it serves as an appellate court to review tax cases. Under Republic Act No. 9282, its jurisdiction has been expanded where it now enjoys the same level as the Court of Appeals. This law has doubled its membership, from three to six justices.
The Supreme Court, acting on the recommendation of the Committee on Revision of the Rules of Court, resolved to approve the Revised Rules of the Court of Tax Appeals (A.M. No. 05-11-07-CTA) by a resolution of the Court En Banc on November 22, 2005.
The Court of Tax Appeals has exclusive appellate jurisdiction to review by appeal the following:
- Decisions of the Commissioner of Internal Revenue in cases involving disputed assessments, refunds of internal revenue taxes, fees or other charges, penalties imposed in relation thereto, or other matters arising under the National Internal Revenue Code or other laws administered by the Bureau of Internal Revenue;
- Inaction by the Commissioner of Internal Revenue in cases involving disputed assessments, refunds of internal revenue taxes, fees or other charges, penalties in relation thereto, or other matters arising under the National Internal Revenue Code or other laws administered by the Bureau of Internal Revenue, where the National Internal Revenue Code provides a specific period of action, in which case the inaction shall be deemed a denial;
- Decisions, orders or resolutions of the Regional Trial Courts in local tax cases originally decided or resolved by them in the exercise of their original or appellate jurisdiction;
- Decisions of the Commissioner of Customs in cases involving liability for customs duties, fees, or other money charges; seizure, detention or release of property affected; fines, forfeitures or other penalties imposed in relation thereto; or other matters arising under the Customs Law or other laws administered by the Bureau of Customs.
- Decisions of the Central Board of Assessment Appeals in the exercise of its appellate jurisdiction over cases involving the assessment and taxation of real property originally decided by the provincial or city board of assessment appeals;
- Decisions of the Secretary of Finance on customs cases elevated to him automatically for review from decisions of the Commissioner of Customs which are adverse to the Government under Section 2315 of the Tariff and Customs Code;
- Decisions of the Secretary of Trade and Industry, in the case of a nonagricultural product, commodity or article, and of the Secretary of Agriculture, in the case of an agricultural product, commodity or article, involving dumping and countervailing duties under Sections 301 and 302, respectively, of the Tariff and Customs Code, and safeguard measures under R.A. No. 8800, where either party may appeal the decision to impose or not to impose said duties.
It also has jurisdiction over cases involving criminal offenses as herein provided:
- Exclusive original jurisdiction over all criminal offenses arising from violations of the National Internal Revenue Code or Tariff and Customs Code and other laws administered by the Bureau of Internal Revenue or the Bureau of Customs: Provided, however, That offenses or felonies mentioned in this paragraph where the principal amount of taxes and fees, exclusive of charges and penalties, claimed is less than One million pesos (P 1,000,000.00) or where there is no specified amount claimed shall be tried by the regular Courts and the jurisdiction of the CTA shall be appellate. Any provision of law or the Rules of Court to the contrary notwithstanding, the criminal action and the corresponding civil action for the recovery of civil liability for taxes and penalties shall at all times be simultaneously instituted with, and jointly determined in the same proceeding by, the CTA, the filing of the criminal action being deemed to necessarily carry with it the filing of the civil action, and no right to reserve the filing of such action separately from the criminal action will be recognized.
- Exclusive appellate jurisdiction in criminal offenses:
- Over appeals from judgments, resolutions or orders of the Regional Trial Courts in tax collection cases originally decided by them, in their respective territorial jurisdiction.
- Over petitions for review of the judgments, resolution or orders of the RTCs in the exercise of their appellate jurisdiction over tax collection cases originally decided by the MeTCs, MTCs and MCTCs, in their respective jurisdiction.
Regional Trial Courts
They are called the second level courts and are divided into thirteen (13) judicial regions: the National Capital Region (Metro Manila) and the twelve (12) regions of the country, which are divided into several branches. Their jurisdictions are defined in secs. 19-23 of Batas Pambansa Blg. 129, as amended by Republic Act No. 7691. The Supreme Court designates certain branches of Regional Trial Courts as special courts to handle exclusively criminal cases, juvenile and domestic relations cases, agrarian cases, and urban land reform cases which do not fall under the jurisdiction of quasi-judicial bodies. The Supreme Court issues resolutions designating specific branches of the Regional Trial Courts as special courts for heinous crimes, dangerous drugs cases, commercial courts and intellectual property rights violations. Special rules are likewise promulgated. A.M. No. 00-8-10-SC is a resolution of the Court En Banc on the Rules of Procedure on Corporate Rehabilitation; the Interim Rules were promulgated in November 2000, and the December 2008 rules affect the special commercial courts. Some Regional Trial Courts are specifically designated to try and decide cases formerly cognizable by the Securities and Exchange Commission (A.M. No. 00-11-030SC).
Because the family courts to be established pursuant to Republic Act No. 8369, the Family Courts Act of 1997, have not yet been organized, some branches of the Regional Trial Courts have been designated as family courts (A.M. No. 99-11-07).
The Regional Trial Courts’ jurisdictions are defined as follows:
- Exercise exclusive original jurisdiction in Civil Cases as follows:
- All civil actions in which the subject of the litigation is incapable of pecuniary estimation;
- All civil actions which involve the title to, or possession of real property, or any interest therein, where the assessed value of the property involved exceeds twenty thousand pesos (P 20,000.00) or, civil actions in Metro Manila, where such value exceeds Fifty thousand pesos (P 50,000.00) except actions for forcible entry into and unlawful detainer of lands or buildings, original jurisdiction over which is conferred upon the MeTCs, MTCs, and MCTCs;
- All actions in admiralty and maritime jurisdiction where the demand or claim exceeds one hundred thousand pesos (P 100,000.00) or, in Metro Manila, where such demand or claim exceeds two hundred thousand pesos (P 200,000.00);
- All matters of probate, both testate and intestate, where the gross value of the estate exceeds One hundred thousand pesos (P 100,000.00) or, in probate matters in Metro Manila, where such gross value exceeds Two hundred thousand pesos (P 200,000.00);
- All actions involving the contract of marriage and marital relations;
- All cases not within the exclusive jurisdiction of any court, tribunal, person or body exercising judicial or quasi-judicial functions;
- All civil actions and special proceedings falling within the exclusive original jurisdiction of a Juvenile and Domestic Relations Court and of the Court of Agrarian Relations as now provided by law; and
- All other cases in which the demand, exclusive of interest, damages of whatever kind, attorney's fees, litigation expenses and costs, or the value of the property in controversy, exceeds One hundred thousand pesos (P 100,000.00) or, in such other cases in Metro Manila, where the demand, exclusive of the above-mentioned items, exceeds Two hundred thousand pesos (P 200,000.00) (Sec. 19, Batas Pambansa Blg. 129, as amended by R.A. No. 7691).
- Exercise original jurisdiction in other cases as follows:
- The issuance of writs of certiorari, prohibition, mandamus, quo warranto, habeas corpus, and injunction which may be enforced in any part of their respective regions; and
- Actions affecting ambassadors and other public ministers and consuls.
They shall exercise appellate jurisdiction over MeTCs, MTCCs, MTCs, and MCTCs in their respective territorial jurisdiction.
Metropolitan Trial Courts (MeTC), Municipal Trial Courts in Cities (MTCC), Municipal Trial Courts (MTC) and Municipal Circuit Trial Courts (MCTC)
These are called the first level courts, established in each city and municipality. Their jurisdiction is provided for by sections 33 and 35 of Batas Pambansa Blg. 129. Their jurisdiction has been expanded by special laws, namely Republic Act Nos. 9276, 9252, 9305, 9306, and 9308.
MeTCs, MTCCs, MTCs, and MCTCs shall exercise original jurisdiction in Civil Cases as provided for in section 33 of Batas Pambansa Blg. 129 is as follows:
- Exclusive original jurisdiction over civil actions and probate proceedings, testate and intestate, including the grant of provisional remedies in proper cases, where the value of the personal property, estate or amount of the demand does not exceed One hundred thousand pesos (P 100,000.00) or, in Metro Manila, where such personal property, estate or amount of the demand does not exceed Two hundred thousand pesos (P 200,000.00), exclusive of interests, damages of whatever kind, attorney's fees, litigation expenses, and costs, the amount of which must be specifically alleged: Provided, That interests, damages of whatever kind, attorney's fees, litigation expenses and costs shall be included in the determination of the filing fees: Provided further, That where there are several claims or causes of action between the same or different parties embodied in the same complaint, the amount of the demand shall be the totality of the claims in all the causes of action, irrespective of whether the causes of action arose out of the same or different transactions;
- Exclusive original jurisdiction over cases of forcible entry and unlawful detainer: Provided, That when, in such cases, the defendant raises the question of ownership in his pleadings and the question of possession cannot be resolved without deciding the issue of ownership, the issue of ownership shall be resolved only to determine the issue of possession; and
- Exclusive original jurisdiction in all civil actions which involve title to, or possession of, real property, or any interest therein where the assessed value of the property or interest therein does not exceed Twenty thousand pesos (P 20,000.00) or, in civil actions in Metro Manila, where such assessed value does not exceed Fifty thousand pesos (P 50,000.00) exclusive of interest, damages of whatever kind, attorney’s fees, litigation expenses and costs: Provided, That in cases of land not declared for taxation purposes the value of such property shall be determined by the assessed value of the adjacent lots (Sec. 33, Batas Pambansa Blg. 129).
Section 34 of Batas Pambansa Blg. 129 provides that the Supreme Court may designate MeTCs, MTCCs, MTCs, and MCTCs to hear and determine cadastral or land registration cases where the value does not exceed One hundred thousand pesos (P 100,000.00). Their decisions can be appealed in the same manner as decisions of the Regional Trial Courts.
The MeTCs, MTCCs, MTCs, and MCTCs are empowered to hear and decide petitions for a writ of habeas corpus or applications for bail in criminal cases in the province or city in the absence of the Regional Trial Court Judges.
By virtue of A.M. No. 08-8-7-SC, enacted September 9, 2008 and effective October 1, 2008, the Metropolitan Trial Courts, Municipal Trial Courts in Cities, Municipal Trial Courts and Municipal Circuit Trial Courts were designated to try small claims cases for payment of money where the value of the claim does not exceed One Hundred Thousand Pesos (P100,000.00), exclusive of interest and costs. These courts shall apply the rules of procedure provided in A.M. No. 08-8-7-SC in all actions "which are: (a) purely civil in nature where the claim or relief prayed for by the plaintiff is solely for payment or reimbursement of sum of money, and (b) the civil aspect of criminal actions, either filed before the institution of the criminal action, or reserved upon the filing of the criminal action in court, pursuant to Rule 111 of the Revised Rules of Criminal Procedure."
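The peso ceilings quoted in this and the preceding section lend themselves to a simple decision rule. The sketch below is only an illustration of how those figures (P100,000/P200,000 for ordinary money claims and P20,000/P50,000 of assessed value for real-property actions) divide original jurisdiction between the first level courts and the Regional Trial Courts; it ignores the many statutory exceptions and any later adjustment of the amounts, and the function and variable names are the author's own.

```python
# Illustrative sketch only: applies the monetary thresholds quoted above
# (Secs. 19 and 33, Batas Pambansa Blg. 129, as amended by R.A. No. 7691).
# It ignores special cases (admiralty, probate, marriage, ejectment, etc.)
# and any subsequent adjustment of the amounts.

def trial_court_for_money_claim(demand_pesos: float, in_metro_manila: bool) -> str:
    """Demand is exclusive of interest, damages, attorney's fees and costs."""
    ceiling = 200_000 if in_metro_manila else 100_000
    return "MeTC/MTCC/MTC/MCTC" if demand_pesos <= ceiling else "Regional Trial Court"

def trial_court_for_real_property(assessed_value_pesos: float, in_metro_manila: bool) -> str:
    """Uses the assessed value of the property, not its market value."""
    ceiling = 50_000 if in_metro_manila else 20_000
    return "MeTC/MTCC/MTC/MCTC" if assessed_value_pesos <= ceiling else "Regional Trial Court"

if __name__ == "__main__":
    print(trial_court_for_money_claim(150_000, in_metro_manila=True))    # first level court
    print(trial_court_for_real_property(30_000, in_metro_manila=False))  # Regional Trial Court
```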
Shari’a District Courts (SDC) and Shari’a Circuit Courts (SCC)
These special courts were created by sec. 137 of Presidential Decree No. 1083, or the Code of Muslim Personal Laws. The judges should possess all the qualifications of a Regional Trial Court Judge and should also be learned in Islamic law and jurisprudence. Articles 143, 144, and 155 of Presidential Decree No. 1083 provide the jurisdiction of the said courts as follows:
Shari’a District Courts (SDC) as provided for in paragraph (1), Article 143 of Presidential Decree No. 1083, shall have exclusive jurisdiction over the following cases:
- All cases involving custody, guardianship, legitimacy, paternity and filiations arising under the Code;
- All cases involving disposition, distribution and settlement of the estates of deceased Muslims, probate of wills, issuance of letters of administration or appointment of administrators or executors regardless of the nature or aggregate value of the property.
- Petitions for the declaration of absence and death and for the cancellation or correction of entries in the Muslim Registries mentioned in Title VI of Book Two of the Code;
- All actions arising from customary contracts in which the parties are Muslims, if they have not specified which law shall govern their relations; and
- All petitions for mandamus, prohibition, injunction, certiorari, habeas corpus, and all other auxiliary writs and processes in aid of its appellate jurisdiction.
The SDC in concurrence with existing civil courts shall have original jurisdiction over the following cases (paragraph (2) of Article 143):
- Petitions by Muslims for the constitution of family home, change of name and commitment of an insane person to any asylum:
- All other personal and real actions not mentioned in paragraph (1) (d) wherein the parties involved are Muslims except those for forcible entry and unlawful detainer, which shall fall under the exclusive original jurisdiction of the MTCs;
- All special civil actions for interpleader or declaratory relief wherein the parties are Muslims or the property involved belongs exclusively to Muslims.
Article 144 of Presidential Decree No. 1083 provides that the SDCs shall have appellate jurisdiction over all cases tried in the Shari’a Circuit Courts (SCC) within their territorial jurisdiction.
Article 155 of Presidential Decree No. 1083 provides that the SCCs have exclusive original jurisdiction over:
- All cases involving offenses defined and punished under the Code;
- All civil actions and proceedings between parties who are Muslims or have been married in accordance with Article 13 of the Code involving disputes relating to:
- Divorce recognized under the Code;
- Betrothal or breach of contract to marry;
- Customary dower (mahr);
- Disposition and distribution of property upon divorce;
- Maintenance and support, and consolatory gifts (mut’a); and
- Restitution of marital rights.
- All cases involving disputes relative to communal properties.
Rules of procedure are provided for in Articles 148 and 158. An En Banc Resolution of the Supreme Court in 1983 provided the special rules of procedure in the Shari’a courts (Ijra-at-Al Mahakim Al Sharia’a).
Shari’a courts and personnel are subject to the administrative supervision of the Supreme Court. Appointment of judges, qualifications, tenure, and compensation are subject to the provisions of the Muslim Code (Presidential Decree No. 1083). SDCs and SCCs have the same officials and other personnel as those provided by law for RTCs and MTCs, respectively.
Quasi-Courts or Quasi-Judicial Agencies
Quasi-judicial agencies are administrative agencies, more properly belonging to the Executive Department, but are empowered by the Constitution or statutes to hear and decide certain classes or categories of cases.
Quasi-judicial agencies which are empowered by the Constitution are the Constitutional Commissions: Civil Service Commission, Commission on Elections and the Commission on Audit.
Quasi-judicial agencies empowered by statutes are: the Office of the President, Department of Agrarian Reform, Securities and Exchange Commission, National Labor Relations Commission, National Telecommunications Commission, Employees Compensation Commission, Insurance Commission, Construction Industry Arbitration Commission, Philippine Atomic Energy Commission, Social Security System, Government Service Insurance System, Bureau of Patents, Trademarks and Technology Transfer, National Conciliation and Mediation Board, Land Registration Authority, Civil Aeronautics Board, Central Board of Assessment Appeals, National Electrification Administration, Energy Regulatory Board, Agricultural Inventions Board and the Board of Investments. When needed, the Supreme Court issues rules and regulations for these quasi-judicial agencies in the performance of their judicial functions. Republic Act No. 8799, known as the “Securities Regulation Code,” reorganized the Securities and Exchange Commission (Chapter II) and provided for its powers and functions (sec. 5). Specifically provided for in these powers and functions is the Commission’s jurisdiction over all cases previously provided for in sec. 5, Pres. Decree No. 902-A (sec. 5.2). The Supreme Court promulgated rules of procedure governing intra-corporate controversies under Republic Act No. 8799 (A.M. No. 01-2-04-SC).
Decisions of these quasi-courts can be appealed to the Court of Appeals except those of the Constitutional Commissions: Civil Service Commission, Commission on Elections and the Commission on Audit, which can be appealed by certiorari to the Supreme Court (Art. IX-A, sec. 7).
Other Judicial Procedures
Katarungang Pambarangay - Presidential Decree No. 1508, or the Katarungang Pambarangay Law, took effect December 11, 1978, and established a system of amicably settling disputes at the barangay level. Rules and procedures were provided by this decree and by the Local Government Code (Title I, Chapter 7, sec. 399-422). This system of amicable settlement of disputes aims to promote the speedy administration of justice by easing the congestion of court dockets. The courts do not take cognizance of cases that are not first filed with the Katarungang Pambarangay.
Alternative Dispute Resolution (ADR) System - Republic Act No. 9285 institutionalized the use of an alternative dispute resolution system which serves to promote the speedy and impartial administration of justice and unclog the court dockets. This act shall be without prejudice to the adoption by the Supreme Court of any ADR system such as mediation, conciliation, arbitration or any combination thereof. The Supreme Court, by virtue of an En Banc Resolution dated October 16, 2001 (Administrative Matter No. 01-10-5-SC-PHILJA), designated the Philippine Judicial Academy as the component unit of the Supreme Court for court-referred or court-related mediation cases and alternative dispute resolution mechanisms, and established the Philippine Mediation Center. Muslim law provides its own arbitration council, called the Agama Arbitration Council.
Civil Service Commission - Act No. 5 (1900) established the Philippine civil service and was reorganized as a Bureau in 1905. It was established in the 1935 Constitution. Republic Act No. 2260 (1959) converted it from a Bureau into the Civil Service Commission. Presidential Decree No. 807 further redefined its role. Its present status is provided for in the 1987 Constitution, Art. IX-B and reiterated by the provision of the 1987 Administrative Code (Executive Order No. 292).
Commission on Elections - It is the constitutional commission created by a 1940 amendment to the 1935 Constitution whose primary function is to enforce and administer all laws relative to the conduct of elections. The COMELEC exercises administrative, quasi-judicial and judicial powers. Its membership was increased to nine, with a term of nine years, by the 1973 Constitution. It was, however, decreased to seven, with a term of seven years without re-appointment, by the 1987 Constitution.
Commission on Audit - Article IX-D, sec. 2 of the 1987 Constitution provides the powers and authority of the Commission on Audit, which is to examine, audit and settle all accounts pertaining to the revenue and receipts of, and expenditures or uses of, funds and property owned or held in trust by, or pertaining to, the Government, including government owned and controlled corporations with original charters.
Article X of the 1987 Constitution provides for the territorial and political subdivisions of the Philippines as follows: provinces, cities, municipalities and barangays. The 1991 Local Government Code, or Republic Act No. 7160, as amended by Republic Act No. 9009, provides the details that implement the provisions of the Constitution. The officials, namely the governor, vice-governor, city mayor, city vice-mayor, municipal mayor, municipal vice-mayor and punong barangay, are elected by their respective units (1991 Local Government Code, Title II, Chapter 1, sec. 41 (a)). The regular members of the sangguniang panlalawigan (for provinces), sangguniang panlungsod (for cities) and sangguniang bayan (for municipalities) are elected by district, while the members of the sangguniang barangay are elected at large.
Each territorial or political subdivision enjoys local autonomy as defined in the Constitution. The President exercises supervision over local Governments.
Each region is composed of several provinces, while each province is composed of a cluster of municipalities and component cities (Local Government Code, Title IV, Chapter 1, sec. 459). The provincial government is composed of the governor, vice-governor, members of the sangguniang panlalawigan and other appointed officials.
The city consists of more urbanized and developed barangays and is created, divided, merged, abolished or its boundary altered by law or act of Congress, subject to the approval of the majority of the votes cast by its residents in a plebiscite conducted by the COMELEC (Local Government Code, Title III, Chapter 1, sec. 448-449). A city may be classified as either component or highly urbanized. The city government is composed of the mayor, vice-mayor, members of the sangguniang panlungsod (which include the president of the city chapter of the liga ng mga barangay, the president of the panlungsod ng mga pederasyon ng mga sangguniang kabataan and the sectoral representatives) and other appointed officials.
The municipality consists of a group of barangays and is created, divided, merged, abolished or its boundary altered by law or act of Congress, subject to the approval of the majority of the votes cast in a plebiscite conducted by the COMELEC (Local Government Code, Title II, Chapter 1, sec. 440-441). The municipal government is composed of the mayor, vice-mayor, sangguniang bayan members (which include the president of the municipal chapter of the liga ng mga barangay, the president of the pambayang pederasyon ng mga sangguniang kabataan and the sectoral representatives) and other appointed officials. In order for a municipality to be converted into a city, a law or act of Congress must be passed by virtue of the provisions of the Local Government Code and the Constitution. A plebiscite must then be conducted to determine whether a majority of the people in the said municipality are in favor of converting it into a city. Although such laws have been passed, their constitutionality can be questioned in the Supreme Court. This can be seen in the November 18, 2008 decision penned by Justice Antonio T. Carpio. The League of Cities of the Philippines, the City of Iloilo and the City of Calbayog filed consolidated petitions questioning the constitutionality of the Cityhood Laws and seeking to enjoin the Commission on Elections and the respondent municipalities from conducting plebiscites. The Cityhood Laws were declared unconstitutional for violating sections 6 and 10, Article X of the 1987 Constitution. The Cityhood Laws referred to in this case are: Republic Acts 9389, 9390, 9391, 9392, 9293, 9394, 9398, 9404, 9405, 9407, 9408, 9409, 9434, 9435, 9436 and 9491 (League of Cities of the Philippines (LCP), represented by LCP National President Jerry Trenas v. Commission on Elections, G.R. Nos. 176951, 177499, 178056, November 18, 2008).
The barangay is the smallest local government unit and is created, divided, merged, abolished or its boundary altered by law or by an ordinance of the sangguniang panlalawigan or sangguniang panlungsod, subject to the approval of the majority of the votes cast in a plebiscite conducted by the COMELEC (Local Government Code, Title I, Chapter 1, sec. 384-385).
The Philippines is divided into the following local government units:
- Region I (ILOCOS REGION)
- Region II (CAGAYAN VALLEY)
- Region III (CENTRAL LUZON)
- Region IV (CALABARZON & MIMAROPA)
- Region V (BICOL REGION)
- Region VI (WESTERN VISAYAS)
- Region VII (CENTRAL VISAYAS)
- Region VIII (EASTERN VISAYAS)
- Region IX (ZAMBOANGA PENINSULA)
- Region X (NORTHERN MINDANAO)
- Region XI (DAVAO REGION)
- Region XII (SOCCSKSARGEN)
- Region XIII (CARAGA)
- Autonomous Region in Muslim Mindanao (ARMM)
- Cordillera Administrative Region (CAR)
- National Capital Region (NCR)
The Caraga Administrative Region (Region XIII) was created by Republic Act No. 7901, which was passed by both houses of Congress and approved by the President on February 23, 1995. The Autonomous Region in Muslim Mindanao was created by Republic Act No. 6734; its expanded organic act was passed by both houses of Congress on February 7, 2001 and lapsed into law without the signature of the President, in accordance with Article VI, Section 27 (1) of the Constitution, on March 31, 2001. The Cordillera Autonomous Region was created by Republic Act No. 6766, which was approved on October 23, 1989.
Commission on Human Rights - The Commission on Human Rights was created as an independent office for cases of human rights violations (Art. XIII, sec. 17). Its specific powers and duties are expressly provided for by section 18 of the 1987 Constitution. It is composed of a Chairperson and four (4) members.
Office of the Ombudsman - The 1987 Constitution explicitly provides that the Ombudsman and his deputies are the protectors of the people, for they are tasked to act promptly on complaints filed against public officials or employees of the government, including government owned and controlled corporations (Art. XI, sec. 12). Its powers, duties and functions are provided for in section 13. It is responsible for prosecuting government officials for their alleged crimes. However, Republic Act No. 6770, sec. 15 provides that the Ombudsman shall give priority to complaints filed against high ranking government officials and those occupying supervisory positions. It is composed of the Ombudsman and six (6) deputies.
The President, Vice President, members of the Supreme Court, members of the Constitutional Commissions and the Ombudsman may be removed from office by impeachment for, and conviction of, culpable violation of the Constitution, treason, bribery, graft and corruption, other high crimes or betrayal of public trust (Art. XI, sec. 2). The House of Representatives has the exclusive power to initiate all cases of impeachment (Art. XI, sec. 3 (1)), while the Senate has the sole power to try and decide impeachment cases (Art. XI, sec. 3 (6)). All other public officials and employees may be removed from office as provided by law, but not by impeachment (Art. XI, sec. 2; the Civil Service Law).
The Philippine legal system may be considered a unique legal system because it is a blend of civil law (Roman), common law (Anglo-American), Muslim (Islamic) law and indigenous law.
Like other legal systems, it has two primary sources of law:
- Statutes or statutory law - Statutes are defined as the written enactments of the will of the legislative branch of the government, rendered authentic by certain prescribed forms or solemnities, and are also known as enactments of Congress. Generally they consist of two types: the Constitution and legislative enactments. In the Philippines, statutory law includes constitutions, treaties, statutes proper or legislative enactments, municipal charters, municipal legislation, court rules, administrative rules and orders, legislative rules and presidential issuances.
- Jurisprudence or case law - consists of cases decided or written opinions rendered by courts and by persons performing judicial functions. Also included are rulings of administrative and legislative tribunals, such as decisions made by the Presidential, Senate or House of Representatives Electoral Tribunals. Only decisions of the House of Representatives Electoral Tribunal are available in print, as the House of Representatives Electoral Tribunal Reports, volume 1 (January 28, 1988-October 3, 1990) to present. They will be made available electronically at the Supreme Court E-Library and as a separate CD.
- For Muslim law, the primary sources of Shariah are the Quran, Sunnah, Ijma and Qiyas. Jainal D. Razul, in his book Commentaries and Jurisprudence on the Muslim Law of the Philippines (1984), further stated that there are newer sources of Muslim law, which some jurists rejected, such as Istihsan or juristic preference; Al-Masalih Al-Mursalah or public interest; Istidlal (custom); and Istishab (deduction based on continuity or permanence).
Classification of Legal Sources
Primary Authority is the only authority that is binding on the courts.
Classification by Authority
“Authority is that which may be cited in support of an action, theory or hypothesis.” Legal materials of primary authority are those that contain the actual law, or the law created by government. Each of the three branches of government (Legislative, Executive and Judiciary) promulgates law.
The legislature promulgates statutes, namely: Acts, Commonwealth Acts, Republic Acts and Batas Pambansa. The Executive promulgates presidential issuances (Presidential Decrees, Executive Orders, Memorandum Circulars, Administrative Orders, Proclamations, etc.) as well as rules and regulations through its various departments, bureaus and agencies. The Judiciary promulgates judicial doctrines embodied in decisions. We need to clarify, however, that the Presidential Decrees or laws issued by President Ferdinand E. Marcos during Martial Law and the Executive Orders issued by President Corazon C. Aquino before the opening of Congress in July 1987 can be classified as legislative acts, there being no legislature during these two periods.
Primary Authority or sources may be further subdivided into the following:
- Mandatory primary authority is law created by the jurisdiction in which the law operates like the Philippines;
- Persuasive mandatory authority is law created by other jurisdictions but which has persuasive value to our courts, e.g., Spanish and American laws and jurisprudence. These sources are used especially when there are no Philippine authorities available or when the Philippine statute or jurisprudence under interpretation is based on either Spanish or American law;
It is in this regard that the collections of law libraries in the Philippines include United States court reports, West’s national reporter system, court reports of England and of international tribunals, and important reference materials such as the American Jurisprudence, Corpus Juris Secundum, Words and Phrases and different law dictionaries. Some of these law libraries subscribe to Westlaw and/or LexisNexis. The Supreme Court, the University of the Philippines, the University of Santo Tomas and a number of prominent law libraries also have Spanish collections, from which a great number of our laws originated.
Secondary authority or sources are commentaries or books, treatises, writings and journal articles that explain, discuss or comment on primary authorities. Also included in this category are the opinions of the Department of Justice and the Securities and Exchange Commission, and circulars of the Bangko Sentral ng Pilipinas. These materials are not binding on courts, but they have persuasive effect, and the degree of persuasiveness of commentaries, books, treatises, writings and journal articles depends on the reputation or expertise of the author. Some of the authors of good reputation and considered experts in their fields are Chief Justice Ramon C. Aquino and Justice Carolina Griño-Aquino on the Revised Penal Code or Criminal Law, Senator Arturo M. Tolentino on Civil Law, Chief Justice Enrique M. Fernando and Fr. Joaquin Bernas on Constitutional Law, Prof. Perfecto Fernandez on Labor Law, Vicente Francisco and Chief Justice Manuel Moran on Remedial Law, and Justice Vicente Abad Santos and Senator Jovito Salonga on International Law.
Classification by Source
It is important for legal research experts to know the source where the materials were taken from. One has to determine whether they came from primary (official) sources or secondary (unofficial sources). Primary and secondary sources for the sources of law are found in the Philippine Legal Information Resources and Citations section - part II - of the 2009 Update.
Primary sources are those published by the issuing agency itself or the official repository, the Official Gazette. Thus, for Republic Acts and other legislative enactments or statutes, the primary sources are the Official Gazette published by the National Printing Office and the Laws and Resolutions published by Congress. For Supreme Court decisions, the primary sources are the Philippine Reports, the individually mimeographed Advance Supreme Court decisions (discontinued by the Supreme Court effective January 2009) and the Official Gazette. Publication of Supreme Court decisions in the Official Gazette is selective. Complete court reports for Supreme Court decisions from 1901 to the present can be found in the Philippine Reports.
Secondary sources are the unofficial sources, generally referred to as those commercially published or those that are not published by government agencies or instrumentalities.
Among the secondary sources of statutes is Vital Legal Documents, published by Central Book Supply, which contains a compilation of Presidential Decrees (1973); its second edition contains Republic Acts. Prof. Sulpicio Guevara published three books which contain the full text of legislative enactments or laws, namely: (a) Public Laws Annotated (7 vols.), a compilation of all laws from 1901 to 1935; (b) Commonwealth Acts Annotated (3 vols.), a compilation of laws from 1935-1945; and (c) The Laws of the First Philippine Republic (The Laws of Malolos) 1898-1899. For Supreme Court decisions, the Supreme Court Reports Annotated (SCRA), a secondary source published by Central Book Supply, is more updated and popular in the legal community than the Philippine Reports, the primary and official source. Citations in commentaries or books, treatises, writings, journal articles, pleadings and even court decisions show SCRA’s popular acceptance. The general rule is that in the absence of a primary source, the secondary source may be cited. This was the primary rationale for the SCRA’s popularity: there was no primary source for a complete compilation of Supreme Court decisions for more than twenty (20) years. The publication of the Philippine Reports by the National Printing Office ceased in the 1960s. It was only in 1982 that the publication of the Philippine Reports was revived by then Chief Justice Enrique M. Fernando, who requested then President Ferdinand E. Marcos to allow the Court to take charge of its publication, with a special appropriation in the Judiciary’s annual budget.
With the advent of new information technology, electronic or digitized sources have become popular sources of legal information for the following reasons: (a) updated legal information is readily available; (b) the search engines used facilitate research; and (c) there are no complete and updated manually published search tools for statute and case law. These electronic sources come in the form of CD-ROMs, the online or virtual libraries of the issuing government agencies or instrumentalities, and the growing number of websites of law offices, such as the Chan Robles Law Firm Library and the Jaromay, Laurente Law Office On Line Library, or of law schools, such as the Arellano Law Foundation's Lawphil.Net. In case of conflict between the printed and electronic sources, the printed version coming from the issuing government agency prevails. This policy applies even to the Supreme Court E-Library, where it is explicitly provided on its website.
Legal research for statute law in the Philippines has benefited remarkably from the use of the latest technology because of two major problems: (a) there are no complete and updated published or printed search tools or law finders for statute law, and (b) no complete compilation of statute law from 1901 to the present is available. The problem of the publication of compilations of statute law, and of the existence of the full text of Presidential Decrees, was even brought to the Supreme Court in Tanada v. Tuvera, G.R. No. 63915, April 24, 1985 (220 Phil 422), December 29, 1986 (146 SCRA 446). This case, first decided before the bloodless revolution popularly known as People Power or the EDSA Revolution, was modified on December 29, 1986, after the EDSA Revolution.
Still, with regard to statute law in the Philippines, another problem is how to classify sources published in newspapers. Since 1987, based on the definition of primary and secondary sources, they may be considered primary sources pursuant to Executive Order No. 200, s. 1987, which provides that laws become effective fifteen (15) days after publication in the Official Gazette or in two newspapers of general circulation. In case of conflict between the two versions, the version in the Official Gazette prevails.
In finding the law, our ultimate goal is to locate mandatory primary authorities which have a bearing on the legal problem at hand. If these authorities are scarce or nonexistent, the next alternative is to find any relevant persuasive mandatory authority. If the search is still negative, the next alternative is secondary authorities. There are, however, instances where secondary authorities, more particularly commentaries by experts in the field, take precedence over persuasive mandatory authorities. When both are available, using both sources is highly recommended.
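The search order just described can be pictured as a short priority walk over whatever sources the researcher has gathered. The sketch below is a minimal illustration; the category labels and the function are the author's own, and the occasional precedence of expert commentaries over persuasive mandatory authority is deliberately not modeled.

```python
# Sketch of the research priority described above: mandatory primary
# authority first, then persuasive mandatory authority, then secondary
# authority. The exceptional precedence of expert commentaries is not modeled.

PRIORITY = ["mandatory primary", "persuasive mandatory", "secondary"]

def pick_authorities(found: dict) -> list:
    """Return the highest-ranked non-empty group of sources already located."""
    for category in PRIORITY:
        sources = found.get(category, [])
        if sources:
            return sources
    return []  # nothing relevant located; widen the search

# Example: no Philippine authority in point, so persuasive mandatory
# authority (e.g. Spanish or American jurisprudence) is used.
research_results = {
    "mandatory primary": [],
    "persuasive mandatory": ["relevant American case law"],
    "secondary": ["commentary by a recognized expert"],
}
print(pick_authorities(research_results))
```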
Classification by Character
This refers to the nature of the subject treated in books. This classification categorizes books as: (a) Statute Law Books, (b) Case Law Books or Law Reports, (c) a combination of both, and (d) “Law Finders.”
Law Finders refer to indexes, citators, encyclopedias, legal dictionaries, thesauri or digests. A major problem in the Philippines is that there are no up-to-date Law Finders. Federico Moreno’s Philippine Law Dictionary, the only available Philippine law dictionary, was last published in 1988, and Jose Agaton Sibal’s Philippine Legal Thesaurus, which is likewise considered a dictionary, was published in 1986. Foreign law dictionaries like Black’s Law Dictionary and Words and Phrases are used as alternatives. To search for legal information, legal researchers go to online virtual libraries such as the Supreme Court E-Library (http://elibrary.judiciary.gov.ph) and the Chan Robles Virtual Law Library, and to the different databases in CD-ROM format from CD Asia Technologies Inc. The databases developed by CD Asia include not only compilations of laws (statutes) and jurisprudence, but also compilations of legal information that are not available in printed form, such as Opinions of the Department of Justice and the Securities and Exchange Commission and Bangko Sentral (Central Bank) rules and regulations. The search engines used in these databases make up for the lack of complete and updated indexes of legal information. In this regard, effective legal research can be conducted with one cardinal rule in mind: "ALWAYS START FROM THE LATEST." The exception is when the research has defined or provided a SPECIFIC period.
Statute laws are the rules and regulations promulgated by competent authority: enactments of legislative bodies (national or local), or rules and regulations of administrative (department or bureau) or judicial agencies. Research in statutory law does not end with consulting the law itself. At times it extends to the intent of each provision or even the words used in the law. In this regard, the deliberations on these laws must be consulted. The deliberations on laws, except Presidential Decrees and other Martial Law issuances, are available.
The different Constitutions of the Philippines are provided in some history books such as Gregorio F. Zaide’s Philippine Constitutional History and Constitutions of Modern Nations (1970) and Founders of Freedom: The History of Three Constitutions, by a seven-man board. The Philippine legal system recognizes the following Constitutions: the Malolos, 1935, 1973, Provisional or Freedom, and 1987 Constitutions.
Text of the Malolos Constitution is available in some history books such as Gregorio F. Zaide’s Philippine Constitutional History and Constitutions of Modern Nations, p. 176 (1970). For the rest of the above mentioned Constitutions, the texts are available in published Philippine constitutional law books. Full text of these Constitutions will be available at the Supreme Court E-Library.
The Constitutional Convention proceedings provide the intent and background of each provision of the Constitution. Sources for the 1934-1935 Constitutional Convention are: the 10-volume Constitutional Convention Record published by the House of Representatives (1966); Salvador Laurel’s seven-volume work entitled Proceedings of the Constitutional Convention (1966); the 6-volume The Philippine Constitution: Origins, Making, Meaning and Application by the Philippine Lawyers Association, with Jose Aruego as one of its editors (1970); and the Journal of the Constitutional Convention of the Philippines by Vicente Francisco.
Proceedings of the 1973 Constitutional Convention were never published. A photocopy and softcopy of the complete compilation is available at the Filipiniana Reading Room of the National Library of the Philippines.
The Journals (3 volumes) and Records (5 volumes) of the 1986 Constitutional Commission were published by the Constitutional Commission. This publication does not have an index. This problem was remedied when CD Technologies Asia Inc. came out with a CD-ROM version, which facilitated research because it has a search engine.
The proceedings and text of the 1935, 1973 and 1987 Constitutions will be available at the Supreme Court E-Library.
Commentaries on or interpretations of the Constitution, decisions of the Supreme Court and other courts, textbooks or treatises, and periodical articles on the different Constitutions are available. (See Legal Bibliography on page 34)
Treaties and other International Agreements
A treaty is an agreement or a contract between two (bilateral) or more (multilateral) nations or sovereigns, entered into by agents appointed for the purpose (generally the Secretary of Foreign Affairs or ambassadors) and duly sanctioned by the supreme powers of the respective countries. Treaties that do not have legislative sanction are executive agreements, which may or may not have legislative authorization, and whose execution is limited by constitutional restrictions.
In the Philippines, a treaty or international agreement shall not be valid and effective unless concurred in by at least two-thirds of all members of the Senate (Constitution, Article VII, section 21). Those without the concurrence of the Senate are considered as Executive Agreements.
The President of the Philippines may enter into international treaties or agreements as the national welfare and interest may require, and may contract and guarantee foreign loans on behalf of the Republic, subject to such limitations as may be provided by law. During the time of Pres. Marcos, there was the so-called Tripoli Agreement.
The official texts of treaties are published in the Official Gazette, the Department of Foreign Affairs Treaty Series (DFATS), the United Nations Treaty Series (UNTS) and the University of the Philippines Law Center's Philippine Treaty Series (PTS). To locate the latest treaties, there are two possible sources: the Department of Foreign Affairs and the Senate of the Philippines. There is no complete repository of all treaties entered into by the Philippines, and publication of treaties in the Official Gazette is selective. The DFATS was last published in the 1970s, while the PTS's last volume, vol. 8, contains treaties entered into until 1981. The UN Treaty Series is available only in UN depository libraries in the country and at the United Nations Information Center in Makati. Forthcoming is a compilation of treaties from 1946-2007 on CD-ROM at the Supreme Court Library.
For tax treaties Eustaquio Ordoño has published a series on the Philippine tax treaties. Other sources of important treaties are appended in books on the subject or law journals such as the American Journal of International Law or the Philippine Yearbook of International Law.
To locate these treaties, the Foreign Service Institute published the Philippine Treaties Index (1946-1982); there is also the UN Multilateral Treaties Deposited with the Secretary-General. Electronically, major law libraries use the Treaties and International Agreements Researchers Archives (TIARA), WESTLAW, LEXIS, other online sources and the Internet.
Statutes Proper (Legislative Enactments)
Statutes are enactments of the different legislative bodies since 1900, broken down as follows:
- 4,275 Acts - enactments from 1900-1935
- 733 Commonwealth Acts - enactments from 1935-1945
- 2,034 Presidential Decrees - enactments from 1972-1985
- 884 Batas Pambansa - enactments from 1979-1985
- 9,547 Republic Acts - enactments from 1946-1972 and 1987-April 1, 2009
The above figures clearly show that during Martial Law, both President Marcos and the Batasang Pambansa (Parliament) were issuing laws at the same time - Presidential Decrees by President Marcos and Batas Pambansa by the Philippine Parliament.
During Martial Law, aside from Presidential Decrees, the President promulgated other issuances, namely: 57 General Orders, 1,525 Letters of Instruction, 2,489 Proclamations, 832 Memorandum Orders, 1,297 Memorandum Circulars, 157 Letters of Implementation, Letters of Authority, 504 Administrative Orders and 1,093 Executive Orders.
As previously stated, the Presidential Decrees issued by Pres. Marcos during Martial Law and the Executive Orders issued by Pres. Aquino before the opening of Congress may be classified as legislative acts for there was no legislature during those two periods.
Laws passed by the new Congress in 1987 started with Rep. Act No. 6636, since the last Republic Act promulgated by Congress before Martial Law was Rep. Act No. 6635.
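For quick reference, the numbering eras enumerated above can be kept as a small lookup table. The sketch below reproduces only the counts and year ranges given in this section (counts as of April 1, 2009); the data structure and function are the author's own illustration.

```python
# Illustrative lookup of the statute series and the periods they cover,
# taken from the figures in this section (counts as of April 1, 2009).

STATUTE_SERIES = {
    "Act":                 {"count": 4_275, "years": [(1900, 1935)]},
    "Commonwealth Act":    {"count": 733,   "years": [(1935, 1945)]},
    "Republic Act":        {"count": 9_547, "years": [(1946, 1972), (1987, 2009)]},
    "Presidential Decree": {"count": 2_034, "years": [(1972, 1985)]},
    "Batas Pambansa":      {"count": 884,   "years": [(1979, 1985)]},
}

def series_in_force(year: int) -> list:
    """Which series were being issued in a given year, per the ranges above."""
    return [name for name, info in STATUTE_SERIES.items()
            if any(start <= year <= end for start, end in info["years"])]

# The Martial Law overlap noted above: both series were issued in 1982.
print(series_in_force(1982))  # ['Presidential Decree', 'Batas Pambansa']
```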
The following are the Philippine codes adopted from 1901 to present:
- Child and Youth Welfare Code
- Civil Code
- Comprehensive Agrarian Reform Code
- Coconut Industry Code
- Code of Commerce
- Cooperative Code
- Corporation Code
- Family Code
- Fire Code
- Forest Reform Code
- Intellectual Property Code
- Labor Code
- Land Transportation and Traffic Code
- Local Government Code
- Muslim Code of Personal Laws
- National Building Code
- National Code of Marketing of Breast-milk Substitutes and Supplements
- National Internal Revenue Code
- Omnibus Election Code
- Philippine Environment Code
- Revised Administrative Code
- Revised Penal Code
- Sanitation Code
- State Auditing Code
- Tariff and Customs Code
- Water Code
From the above list of codes, the most recently revised is the Fire Code of the Philippines: Republic Act No. 9514, "An Act Establishing a Comprehensive Fire Code of the Philippines, Repealing Presidential Decree No. 1185 and for Other Purposes," approved by the President on December 19, 2008.
The House of Representatives prepared the procedure on how a bill becomes a law. This procedure is pursuant to the Constitution and is recognized by both Houses of Congress. To better illustrate the procedure, a diagram was prepared by the House of Representatives.
[Diagram: How a bill becomes a law. Source: Congressional Library; House Printing Division, Administrative Support Bureau, July 1996.]
Presidential Issuances
Administrative acts, orders and regulations of the President touching on the organization or mode of operation of the government, re-arranging or adjusting districts, divisions or parts of the Philippines, and acts and commands governing the general performance of duties of public officials and employees or disposing of issues of general concern are made effective by Executive Orders. Those orders fixing the dates when specific laws, resolutions or orders cease to take effect, and any information concerning matters of public moment determined by law, resolution or executive order, take the form of Executive Proclamations.
Executive Orders and Proclamations of the Governor-General were published annually in a set entitled Executive Orders and Proclamations; thirty-three (33) volumes were published until 1935 by the Bureau of Printing. Administrative Acts and Orders of the President and Proclamations were likewise published. Only a few libraries in the Philippines have these publications, for the majority were destroyed during World War II. Copies are available at the Law Library of Congress, the Cincinnati Law Library Association (which offered to donate them to the Supreme Court of the Philippines) and the Library of the Institute of Southeast Asian Studies in Singapore.
In researching Proclamations, Administrative Orders, Executive Orders and Memorandum Orders and Circulars of the President, the year of promulgation is a must; if no year is available, the issuing President must be identified. As each new President is sworn in, the Presidential issuances start again with No. 1. The only exception was the Executive Orders issued by President Carlos Garcia after he assumed the Presidency upon the death of President Magsaysay in a plane crash: he continued the numbering started by President Magsaysay. When President Garcia was later elected President, he again started with Executive Order No. 1.
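Because each administration restarts its issuance numbers at No. 1, a number alone does not identify a presidential issuance; the type, the number and the issuing President (or at least the year) are needed together. A minimal sketch of such a compound key follows; the names and sample entries are hypothetical.

```python
# Sketch: presidential issuances are only unambiguous when keyed by
# (type, number, president), since numbering restarts with every
# administration. Sample entries are placeholders, not real citations.

from collections import namedtuple

Issuance = namedtuple("Issuance", "kind number president year title")

index = {}

def register(issuance: Issuance) -> None:
    # (kind, number) alone would collide across administrations,
    # so the issuing President forms part of the key.
    key = (issuance.kind, issuance.number, issuance.president)
    if key in index:
        raise ValueError(f"duplicate issuance key: {key}")
    index[key] = issuance

register(Issuance("Executive Order", 1, "President A", 1990, "placeholder title"))
register(Issuance("Executive Order", 1, "President B", 1999, "placeholder title"))  # no collision
print(len(index))  # 2
```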
To look for the intent of Republic Acts, we have to go through the printed Journals and Records of both houses of Congress, which contain their deliberations. To facilitate the search, the House Bill No. or Senate Bill No. (or both), found on the upper left portion of the first page of the law, is important. The proceedings of the House of Representatives and the Philippine Senate are now available on their websites. The Batasang Pambansa has likewise published its proceedings. There are no available proceedings for the other laws: Acts, Commonwealth Acts and Presidential Decrees.
Administrative Rules and Regulations
Administrative rules and regulations are orders, rules and regulations issued by the heads of departments, bureaus and other agencies of the government for the effective enforcement of laws within their jurisdiction. However, in order for such rules and regulations to be valid, they must be within the authorized limits and jurisdiction of the office issuing them and in accordance with the provisions of the law authorizing their issuance. Access to administrative rules and regulations has been facilitated by two developments: (a) government agencies, including government owned and controlled corporations, have their own websites where they post the full text of their issuances, and (b) the National Administrative Register is available in print, on CD-ROM and on the Supreme Court website.
In handling these types of materials, two items are needed: (a) the issuing agency and (b) the year of promulgation. This is because all departments, bureaus and other government agencies use administrative orders, memorandum orders and memorandum circulars for their administrative rules and regulations, and they always start with number 1 every year. Even the Supreme Court issues Administrative Orders, Circulars, Memorandum Orders and Administrative Matters.
Before the Administrative Code of 1987, these orders, rules and regulations were only selectively published in the Official Gazette. Thus, the only source for the text of these rules and regulations was the issuing government agency itself.
When the 1987 Administrative Code (Executive Order No. 292) was promulgated, all government agencies, including government owned and controlled corporations, were required to file three (3) certified copies of their orders, rules and regulations with the University of the Philippines Law Center's Office of the National Administrative Register, and these are published quarterly in a publication called the National Administrative Register. Aside from the printed copies, the National Administrative Register is available electronically on CD-ROM (CD Asia Technologies Inc.) and online at the Supreme Court E-Library. Rules in force on the date the Code took effect which were not filed within three months from that date shall not thereafter be the basis of any sanction against any party or person. Each rule becomes effective fifteen (15) days after filing, unless a different date is fixed by law or specified in the rule, such as in cases of imminent danger to public health, safety and welfare, the existence of which must be expressed in a statement accompanying the rule. The court shall take judicial notice of the certified copy of each rule duly filed or as published in the bulletin or codified rules.
The University of the Philippines Law Center’s Office of the National Administrative Register is not only tasked to publish this quarterly register but must also keep an up-to-date codification of all rules thus published and remaining in effect, together with a complete index and appropriate tables. Every rule establishing an offense or defining an act which, pursuant to law, is punishable as a crime or subject to a penalty shall in all cases be published in full text. Exceptions to the “filing requirement” are Congress, the Judiciary, the Constitutional Commissions, military establishments in all matters relating to Armed Forces personnel, the Board of Pardons and Parole, and state universities and colleges.
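As a rough illustration of the default effectivity rule described above (fifteen days after filing, unless a different date is fixed by law or specified in the rule), here is a small sketch; the function is hypothetical and does not cover the publication requirements or the imminent-danger exception.

```python
# Sketch of the default effectivity computation for a filed administrative
# rule under the 1987 Administrative Code as described above. Illustrative only.

from datetime import date, timedelta
from typing import Optional

def effectivity_date(filing_date: date, fixed_date: Optional[date] = None) -> date:
    """Return the date the rule takes effect under the default 15-day rule."""
    if fixed_date is not None:  # a different date was fixed by law or in the rule itself
        return fixed_date
    return filing_date + timedelta(days=15)

print(effectivity_date(date(2009, 1, 5)))                    # 2009-01-20
print(effectivity_date(date(2009, 1, 5), date(2009, 1, 6)))  # date fixed in the rule
```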
As previously stated, there are no up-to-date or complete statute finders. Those published are listed in the Philippine Legal Information Resources and Citations (part II of the 2009 Update). To facilitate legal research, one has to go to online virtual libraries such as the Supreme Court E-Library, the Chan Robles Virtual Law Library and the Arellano Law Foundation’s LawPhil Project, or to the different databases on CD-ROM such as those of CD Asia Technologies Inc., !e-library! A Century and 4 Years of Supreme Court Decisions and i-Law Instant CD.
[Organizational chart of the Philippine court system. Source: 2002 Revised Manual of Clerks of Court, Manila, Supreme Court, 2002. The organizational chart was amended due to the passage of Republic Act No. 9282 (Court of Tax Appeals).]
Case law or judicial decisions are official interpretations or manifestations of the law made by persons and agencies of the government performing judicial and quasi-judicial functions. At the apex of the Philippine judicial system is the Supreme Court, or what is referred to as the court of last resort. The reorganization of the Judiciary in 1980 (Batas Pambansa Blg. 129) established the following courts:
- Court of Appeals;
- Regional Trial Courts divided into different judicial regions,
- Metropolitan Trial Court;
- Municipal Trial Court in Cities;
- Municipal Trial Courts;
- Municipal Circuit Trial Courts.
The Shariah (Sharia’a) Circuit and District Courts (Presidential Decree No. 1083), the Court of Tax Appeals (Republic Act No. 1125) and the Sandiganbayan (Presidential Decree Nos. 1486 and 1606; sec. 4, Art. XI of the 1987 Constitution) were created by separate laws.
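One simple way to picture the structure just enumerated is as a small outline, with the Supreme Court at the apex; the sketch below lists only the courts named in this section, and the grouping labels are the author's own.

```python
# Illustrative outline of the courts named above: the Supreme Court at the
# apex, the courts established by Batas Pambansa Blg. 129, and the special
# courts created by separate laws.

PHILIPPINE_COURTS = {
    "Supreme Court (court of last resort)": {
        "Established by B.P. Blg. 129": [
            "Court of Appeals",
            "Regional Trial Courts",
            "Metropolitan Trial Courts",
            "Municipal Trial Courts in Cities",
            "Municipal Trial Courts",
            "Municipal Circuit Trial Courts",
        ],
        "Created by separate laws": [
            "Shari'a District and Circuit Courts (P.D. No. 1083)",
            "Court of Tax Appeals (R.A. No. 1125)",
            "Sandiganbayan (P.D. Nos. 1486 and 1606)",
        ],
    }
}

def print_outline(tree, indent: int = 0) -> None:
    """Print the nested court listing as an indented outline."""
    for name, children in tree.items():
        print(" " * indent + name)
        if isinstance(children, dict):
            print_outline(children, indent + 2)
        else:
            for child in children:
                print(" " * (indent + 2) + child)

print_outline(PHILIPPINE_COURTS)
```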
Conventional decisions are decisions or rulings made by regularly constituted courts of justice. Subordinate decisions are those made by administrative agencies performing quasi-judicial functions.
One major problem in conducting research on case law is the availability of published or printed decisions of courts from the Court of Appeals down to the rest of the judicial and quasi-judicial agencies. The Judicial Reform Program of the Supreme Court, with the establishment of the Supreme Court E-Library, aims to address this problem as well as those concerning statute law. The decisions of the Supreme Court, Court of Appeals, Sandiganbayan and Court of Tax Appeals will be made available in the Supreme Court E-Library. Downloading of the decisions of the appellate courts has started from the most recent and will continue until all their decisions from their creation are completed. The Reporters Office of the Supreme Court and the Court of Appeals keep all the original and complete copies of the court decisions. For the rest of the members of the Judiciary and the quasi-judicial agencies, copies of their decisions may be obtained from their Legal Office, Office of the Clerk of Court or their libraries.
Supreme Court Decisions
Decisions of the Supreme Court bind the lower courts and are a source of law, the law of the land. It is the judgment of this Court which determines whether a law is constitutional or not. An unconstitutional law, even though signed by the President and passed by both houses of Congress, cannot take effect in the Philippines.
Decisions of the Supreme Court are classified as follows:
- "Regular decisions" and extended Resolutions are published in court reports either in primary or secondary sources. These decisions provide the justice who penned the decision or ponente and the other justices responsible for promulgating the decision, whether En Banc or by Division. Separate dissenting and/or concurring opinions are likewise published with the main decision. These regular and extended resolutions are available electronically in the Supreme Court E-Library under Decsions.
- Unsigned Minute Resolutions are not published. Although they bear the same force and effect as the regular decisions or extended resolutions, they are signed and issued by the respective Clerks of Court of the En Banc or of the three (3) Divisions. Since these Minute Resolutions are not published, the Supreme Court has now incorporated in the Supreme Court E-Library, under RESOLUTIONS: (1) Minute Resolutions, more particularly those that resolve a motion for reconsideration or those that explain or affirm a decision; and (2) Administrative Matters.
Case reports in the Philippines such as the Philippine Reports, the Supreme Court Reports Annotated (SCRA) and the Supreme Court Advance Decisions (SCAD) come in bound volumes which generally cover a month per volume. The Official Gazette and the Philippine Reports are the official repositories of decisions and extended resolutions of the Supreme Court. The difference between the two lies in the fact that the Official Gazette selectively publishes Supreme Court decisions, while the Philippine Reports contains all decisions of the Supreme Court except minute resolutions. However, from 1901 until 1960, there were unpublished decisions of the Supreme Court. The list, with subject field, is found at the back of each volume of the Philippine Reports. Some of these decisions are cited in treatises or annotations. In view of the importance of these decisions, the late Judge Nitafan of the Regional Trial Court of Manila started publishing Supreme Court Unpublished Decisions; vol. 1 covers decisions from March 1946 to February 1952.
Even before the war, there were unpublished decisions of the Court. The source of these unpublished decisions is the Office of the Reporter of the Supreme Court. Due to World War II, a number of the original decisions were burned, so there is no complete compilation of the original decisions of the Supreme Court. This problem is being addressed by the Supreme Court E-Library, where a great number of these pre-war unpublished decisions of the Supreme Court have been retrieved from different sources, such as the United States National Archives in Maryland, the private collections of former Supreme Court Justices such as Chief Justice Ramon Avancena and Justice George Malcolm (whose collection is found at the University of Michigan), and private law libraries that were able to save some of their collections, such as the University of Santo Tomas, the oldest university in the Philippines. The search for the unpublished decisions still continues. A list of these unpublished decisions is in the Supreme Court E-Library, Project COMPUSDEC, under JURISPRUDENCE.
The early volumes, particularly those before the war, were originally published in Spanish in the Jurisprudencia Filipina; they were translated into English to become the Philippine Reports. Some decisions after the second Philippine independence were still in Spanish, and a number of decisions are now in Filipino. The Philippine Reports, until volume 126 (1960s), was published by the Bureau of Printing, now the National Printing Office. Printing was transferred to the Supreme Court in the 1980s due to the need for a complete official publication of the Court’s decisions. The Supreme Court’s Philippine Reports started with volume 127.
The most popular secondary source is the Supreme Court Reports Annotated (SCRA); legal practitioners actually cite it more often than the Philippine Reports and the Lex Libris Jurisprudence CD-ROM.
Supreme Court decisions can be searched in the following ways:
- Topic or Subject Approach (please see the complete titles of the publications in the Philippine Legal Bibliography chapter):
- Philippine Digest
- Republic of the Philippine Digest
- Velayo's digest
- Magsino's Compendium
- Supreme Court's unpublished Subject Index
- Martinez's Summary of Supreme Court rulings 1984 to 1997
- UP Law Center's Supreme Court Decisions: Subject Index and Digests
- SC's Case Digests
- Philippine Law and Jurisprudence
- Castigador’s Citations
- SCRA Quick Index Digest
- Lex Libris Jurisprudence
- Title Approach or Title of the Case Approach (please see the complete titles of the publications in the Philippine Legal Bibliography chapter):
- Philippine Digest - Case Index
- Republic of the Philippines Digest
- Ong, M. Title Index to SC decisions 1946-1978 2v.; 1978-1981 1st Suppl; 1981-1985, 2nd Suppl; 1986 to present is unpublished but available at the SC Lib
- Ateneo's Index & Aquirre's Index
- Lex Libris Jurisprudence/Template search
Court of Appeals decisions
Decisions of the Court of Appeals are merely persuasive on lower courts. They are cited in cases where there are no Supreme Court decisions in point. In this regard, they are considered judicial guides to the lower courts, and the conclusions or pronouncements they make may be raised as doctrine.
Sources of Court of Appeals decisions are:
- Official Gazette (selective publication)
- Court of Appeals Reports which was published by the Court of Appeals until 1980. Even this publication is not a complete compilation. It is still considered selective for not all CA decisions are published.
- Court of Appeals Reports (CAR) by Central Book Supply. One volume was published
- Philippine Law and Jurisprudence
- Reports Office of the Court of Appeals
- Subject or Topic Approach:
- Velayo's Digest;
- Moreno's Philippine Law dictionary
Decisions of Special Courts
The Sandiganbayan and the Court of Tax Appeals do not have regularly published decisions. The Sandiganbayan has only one published volume: Sandiganbayan Reports, vol. 1, which covers decisions promulgated from December 1979 to 1980.
Court of Tax Appeals decisions from 1980 to 2004 are found in the Lex Libris particularly in Taxation CD ROM.
Decisions of Administrative Agencies, Commissions and Boards
Laws have been promulgated which grant some administrative agencies the power to perform quasi-judicial functions. These functions are distinct from their regular administrative or regulatory functions, in which rules and regulations are promulgated. The Securities Regulation Code (Republic Act No. 8799), signed by President Joseph E. Estrada on July 19, 2000, affects the Securities and Exchange Commission's (SEC) quasi-judicial functions. The other agencies performing said functions are the National Labor Relations Commission (NLRC), Insurance Commission, Housing and Land Use Regulatory Board (HLURB), Government Service Insurance System (GSIS), Social Security System (SSS) and even the Civil Service Commission (CSC). Some of their decisions are published in the Official Gazette. Some agencies, such as the SEC and the CSC, have their own publications, while others include their decisions in their own websites.
CD Asia Technologies’ Lex Libris series has individual CD-ROMs for the Department of Justice, Securities and Exchange Commission, Bangko Sentral ng Pilipinas (Central Bank of the Philippines) and the Bureau of Internal Revenue. Included in these individual CD-ROMs are the pertinent laws, the agencies’ respective issuances, as well as Supreme Court decisions. Its CD-ROM on Labor (vol. VII) incorporates issuances from the Department of Labor and Employment and its affiliated agencies and offices. The Trade, Commerce and Industry CD-ROM includes Supreme Court decisions, laws and issuances of its various agencies such as the Department of Trade and Industry, Board of Investments, Bureau of Customs, Bangko Sentral and the Philippine Stock Exchange.
The Practice of Law and Legal Education
The Constitution (Art. VIII, sec. 5) vests the Supreme Court with the power over admission to the practice of law. The judicial function to admit to the legal profession is exercised by the Supreme Court through a Bar Examination Committee. The requirements for admission to the bar are provided in Rule 138, sec. 2 and sections 5-6 (academic requirements). Every applicant for admission must be a Filipino citizen and at least 21 years of age. As to the academic requirements, the applicant should have finished a four-year pre-law course and a four-year law degree. The Bar Examinations are given during the four (4) Sundays of September of each year. The lists of lawyers who are allowed to practice are found in the Rolls of Attorneys of the Supreme Court and in the publication of the Court entitled Law List. The online version of the Law List, available on the Supreme Court website and in the Supreme Court E-Library, includes the annual lists of additional members of the bar.
Special Bar Examinations for Shari’a Court lawyers are provided for by virtue of the Court En Banc Resolution dated September 20, 1983. The exam is given every two years. Although the exam is conducted by the Supreme Court Bar Office, it is the Office of Muslim Affairs which certifies who are qualified to take the exam.
Republic Act No. 7662, approved on December 23, 1993, provided for reforms in legal education and created a Legal Education Board. The Board shall be composed of a Chairman, who shall preferably be a former justice of the Supreme Court or Court of Appeals, and regular members composed of a representative of each of the following: the Integrated Bar of the Philippines (IBP), Philippine Association of Law Schools (PALS), Philippine Association of Law Professors (PALP), ranks of active law practitioners and the law students’ sector. The reforms in the legal education system envisioned by Republic Act No. 7662 will require proper selection of law students, maintain the quality of law schools and require legal apprenticeship and continuing legal education.
All attorneys whose names are in the Rolls of Attorneys of the Supreme Court (those who have qualified for and passed the bar examinations conducted annually and taken the attorney’s oath), unless otherwise disbarred, must be members of the Integrated Bar of the Philippines. Bar Matter No. 850 was promulgated by the Resolution of the Supreme Court En Banc on August 22, 2000, as amended on October 2, 2001, providing for the rules on Mandatory Continuing Legal Education (MCLE) for Active Members of the Integrated Bar of the Philippines (IBP). The members of the IBP have to complete, every three (3) years, at least thirty-six (36) hours of continuing legal education activities approved by the MCLE Committee. An IBP member who fails to comply with the said requirement shall pay a non-compliance fee and shall be listed as a delinquent member of the IBP. A Mandatory Continuing Legal Education Office was established by the Supreme Court (SC Administrative Order No. 113-2003) to implement said MCLE. Under the Resolution of the Court en Banc dated September 2, 2008 (Bar Matter No. 1922), the counsel’s MCLE Certificate of Compliance must be indicated in all pleadings filed with the Courts.
The Office of the Bar Confidant of the Supreme Court as of January 2009 has the following one hundred six (106) law schools throughout the Philippines:
- Abra Valley Colleges, Taft St., Bangued, Abra
- Adamson University, 900 San Marcelino St., Manila
- Aemilianum College Inc., Sorsogon City
- Aklan Colleges, Kalibo Aklan
- Andres Bonifacio College, College Park, Dipolog City
- Angeles University Foundation, Mac Arthur Highway, Angeles City Pampanga
- Aquinas University, 2-S King’s Building, JAA Penaranda St., Legaspi City
- Araullo University, Bitas, Cabanatuan City
- Arellano University, Taft Ave., cor. Menlo St, Pasay City
- Ateneo de Davao University, Jacinto St., Davao City
- Ateneo de Manila University, Rockwell Drive, Rockwell Center, Makati City
- Bicol University, Daraga Albay
- Bohol Institute of Technology, Tagbilaran, Bohol
- Bukidnon State College, Malaybalay, Bukidnon
- Bulacan State University, Malolos, Bulacan
- Cagayan Colleges-Tuguegarao, Tuguegarao, Cagayan
- Cagayan State University, Tuguegarao, Cagayan
- Camarines Norte School of Law, Itomang, Talisay, Camarines Norte
- Central Philippines University, Jaro, Iloilo City
- Christ the King College, Calbayog City
- Colegio dela Purisima Concepcion, IBP Office, Hall of Justice, Roxas City
- Cor Jesus College, Digos, Davao del Sur
- Cordillera Career Development College, Buyagan, La Trinidad, Benguet
- Don Mariano Marcos Memorial State University, San Fernando, La Union
- Dr. Vicente Orestes Romualdez Education Foundation Inc., Tacloban City, Leyte
- East Central Colleges, San Fernando City, Pampanga
- Eastern Samar State University, Borongan, Eastern Samar
- Far Eastern University, Nicanor Reyes Sr. St., Sampaloc, Manila
- Fernandez College of Arts & Technology, Gil Carlos St., Baliuag, Bulacan
- Foundation University, Dr. Miciano St., Dumaguete City
- Harvadian Colleges, San Fernando City, Pampanga
- Holy Name University, Tagbilaran City, Bohol
- Isabela State University, Cauayan Campus, Cauayan, Isabela
- Josefina Cerilles State College, Pagadian City
- Jose Rizal University, 82 Shaw Blvd., Mandaluyong City
- Leyte Colleges, Zamora St., Tacloban City
- Liceo de Cagayan University, Rodolfo N. Pelaez Blvd, Carmen, Cagayan de Oro City
- Luna Goco Colleges, Calapan, Oriental Mindoro
- Lyceum of the Philippines, L.P. Leviste St., Makati City
- Lyceum-Northwestern University, Dagupan City, Pangasinan
- Manila Law College Foundation, Sales St., Sta. Cruz, Manila
- Manuel L. Quezon University, R. Hidalgo St., Quiapo, Manila
- Manuel S. Enverga University Foundation, Foundation St., Lucena City
- Masbate Colleges, Masbate, Masbate
- Medina Colleges, Ozamiz City
- Mindanao State University, Marawi City
- Misamis University, Bonifacio St., Ozamis City
- Naval Institute of Technology-UEP
- New Era University, St Joseph St., Milton Hills Subd., Bgy, New Era, Quezon City
- Northeastern College, Santiago City, Isabela
- Northwestern University, Laoag City
- Notre Dame University, Notre Dame Ave., Cotabato City
- Our Lady of Mercy College, Borongan, Eastern Samar
- Pagadian College of Criminology & Sciences, Pagadian City
- Palawan State University, Sta. Monica, Puerto Princesa, Palawan
- Pamantasan ng Lungsod ng Maynila, Intramuros, Manila
- Pamantasan ng Lungsod ng Pasay, Pasadera St., Pasay City
- Philippine Advent College, Sindangan, Zamboanga del Norte
- Philippine Law School, F.B. Harrison St., Pasay City
- Polytechnic College of La Union, La Union
- Polytechnic University of the Philippines, Pureza St., Sta. Mesa, Manila
- Samar Colleges, Catbalogan, Samar
- San Beda College, Mendiola St., San Miguel, Manila
- San Pablo Colleges, San Pablo City
- San Sebastian College-Recoletos, IBP Bldg., Surigao City
- St. Ferdinand College, Santa Ana, Centro, Ilagan, Isabela
- Saint Louis College, San Fernando City, La Union
- St. Louis University, Bonifacio St., Baguio City
- St. Mary’s University, 3700 Bayombong, Nueva Vizcaya
- Silliman University, Hubbard Avenue, Dumaguete City, Negros Oriental
- Southwestern University, Urgello St., Sambag District, Cebu City
- Tabaco Colleges, Tabaco, Albay
- University of Baguio, Baguio City
- University of Batangas, Batangas City
- University of Bohol, Tagbilaran City
- University of Eastern Philippines, Catarman, Northern Samar
- University of Iloilo, Iloilo City
- University of Manila, M.V. delos Santos, Manila
- University of Mindanao, Bolton St., Davao City
- University of Negros Occidental-Recoletos, Lizares St., Bacolod City
- University of Northeastern Philippines, San Roque, Iriga City, Camarines Sur
- University of Northwestern Philippines, Vigan, Ilocos Sur
- University of Northern Philippines, Vigan, Ilocos Sur
- University of Nueva Caceres, Igualdad St., Naga City
- University of Pangasinan, Dagupan City, Pangasinan
- University of Perpetual Help-Rizal, Las Pinas City
- University of Perpetual Help System, Binan, Laguna
- University of San Agustin, Gen. Luna St., Iloilo City
- University of San Carlos, P. del Rosario St., Cebu City
- University of San Jose-Recoletos, Cebu City
- University of Santo Tomas, Espana, Manila
- University of Southern Philippines Foundation, Cebu City
- University of St. La Salle, La Salle Ave., Bacolod City
- University of the Cordilleras, Harrison Road, Baguio City
- University of the East, C.M. Recto Avenue, Manila
- University of the Philippines, Diliman, Quezon City
- University of Visayas, Gullas Law School, Colon St., Cebu City
- Urios College, San Francisco St. cor. J.C. Aquino Avenue, Butuan City
- Virgen de los Remedios College, 10 Fontaine St, East Bajac-Bajac, Olongapo City
- Virgen Milagrosa University, San Carlos City, Pangasinan
- Western Mindanao State University, Zamboanga City
- Xavier University, Corales Ave., Cagayan de Oro City
The above list from the Office of the Bar Confidant does not include newly organized law schools and/or law schools that do not yet have any graduates qualified to take the annual bar examination.
The following are educational associations and/or organizations:
- Philippine Association of Law Deans
- Philippine Association of Law Professors
- Philippine Association of Law Students
The official organization for the legal profession is the Integrated Bar of the Philippines (IBP), established by virtue of Republic Act No. 6397. This confirmed the power of the Supreme Court to adopt rules for the integration of the Philippine Bar. Presidential Decree 181 (1973) constituted the IBP into a corporate body.
There are now about 40,000 attorneys who compose the IBP. These are the attorneys whose names are in the Rolls of Attorneys of the Supreme Court, who have qualified for and passed the bar examinations conducted annually and taken the attorney’s oath, and who have not been disbarred. Membership in the IBP is compulsory. The Supreme Court, in its resolution Court En Banc dated November 12, 2002 (Bar Matter No. 1132), as amended by resolution Court En Banc dated April 1, 2003 (Bar Matter No. 112-2002), requires all lawyers to indicate their Roll of Attorneys Number in all papers and pleadings filed in judicial and quasi-judicial bodies, in addition to the previously required current Professional Tax Receipt (PTR) and IBP Official Receipt or Life Member Number.
Other Bar Associations
Philippine Bar Association is the oldest voluntary national organization of lawyers in the Philippines which traces its roots to the Colegio de Abogados de Filipinas organized on April 8, 1891. It was formally incorporated as a direct successor of the Colegio de Abogados de Filipinas on March 27, 1958.
The other voluntary bar associations are the Philippine Lawyers Association, Trial Lawyers Association of the Philippines, Vanguard of the Philippine Constitution, PHILCONSA, All Asia Association, Catholic Lawyers Guild of the Philippines, Society of International Law, WILOCI, Women Lawyers Association of the Philippines (WLAP), FIDA. The Philippines is also a member of international law associations such as the ASEAN Law Association, and LAWASIA. | 1 | 7 |
In a larger network it is likely that these services will be provided by dedicated servers and should be disabled here.
A web proxy server is a program that makes requests for web pages on behalf of all the other machines on your intranet. The proxy server will cache the pages it retrieves from the web so that if 3 machines request the same page only one transfer from the Internet is required. If your organization has a number of commonly used web sites this can save on Internet accesses.
Normally you must configure the web browsers used on your network to use the proxy server for Internet access. You should set the name/address of the proxy to that of the IPCop machine and the port to the one you have entered into the box, default 800. This configuration allows browsers to bypass the proxy if they wish. It is also possible to run the proxy in “transparent” mode. In this case the browsers need no special configuration and the firewall automatically redirects all traffic on port 80, the standard HTTP port, to the proxy server.
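As a rough illustration, assuming the IPCop machine sits at 192.168.1.1 on the Green network (a made-up address) and the proxy port has been left at the default of 800, command-line clients could be pointed at the proxy with environment variables like these; graphical browsers have an equivalent setting in their connection preferences:

# hypothetical client-side settings - substitute your own IPCop address and proxy port
export http_proxy=http://192.168.1.1:800/
export ftp_proxy=http://192.168.1.1:800/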
You can choose if you want to proxy requests from your Green (private) network and/or your Blue (wireless) network. Just tick the relevant boxes.
Log enabled. If you choose to enable the proxy then you can also log web accesses by ticking the box. Accesses made through the proxy can be seen by clicking the Proxy Logs choice of the Logs menu.
If your ISP requires you to use their cache for web access then you should specify the hostname and port in the text box provided. If your ISP's proxy requires a user name and password then enter them in the username and password boxes.
Squid only knows about standard HTTP request methods. Unknown methods are denied, unless you add them to the list. You can add up to 20 additional "extension" methods. For example, subversion uses some non-standard methods that squid blocks. To allow subversion to work through IPCop's transparent proxy, you will have to add MERGE to the list.
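Behind the web interface this corresponds to squid's extension_methods directive from the squid 2.x series; how IPCop stores the setting internally is an implementation detail. A hand-edited equivalent for subversion might look like the following, where the extra WebDAV method names are commonly cited ones and may vary with your setup:

# squid.conf sketch - allow the non-standard methods subversion uses
extension_methods REPORT MERGE MKACTIVITY CHECKOUT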
Disallow local proxying on blue/green networks. Check this option to disable proxying to green and blue networks (if blue is available). This closes a possible hole between Green and Blue if they are run in “transparent” mode.
Alternatively, you can specify a list of destinations which are not to be proxied. This gives somewhat more flexibility, allowing you to define which destination networks are to be DENIED through the proxy. You can specify a network (or networks) with an IP Address and Netmask, for example:
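For instance, an entry for a destination network that should bypass the proxy could be written as an address and netmask pair; the values below are placeholders only:

# destination network that will not be proxied (example values)
192.168.10.0 / 255.255.255.0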
Cache Management. You can choose how much disk space should be used for caching web pages in the Cache Management section. You can also set the size of the smallest object to be cached, normally 0, and the largest, 4096KB. For privacy reasons, the proxy will not cache pages received via https, or other pages where a username and password are submitted via the URL.
Repair cache. You can repair the proxy cache by clicking the button.
Clear cache. You can flush all pages out of the proxy cache at any time by clicking the button.
Transfer limits. The web proxy can also be used to control how your users access the web. The only control accessible via the web interface is the maximum size of data received from and sent to the web. You can use this to prevent your users downloading large files and slowing Internet access for everyone else. Set the two fields to 0, the default, to remove all restrictions.
Save. To save any changes, press the button.
Caching can take up a lot of space on your hard drive. If you use a large cache, then the minimum size hard drive listed in the IPCop documentation will not be large enough.
The larger the cache you choose the more memory is required by the proxy server to manage the cache. If you are running IPCop on a machine with low memory do not choose a large cache.
DHCP (Dynamic Host Configuration Protocol) allows you to control the network configuration of all your computers or devices from your IPCop machine. When a computer (or a device like a printer, pda, etc.) joins your network it will be given a valid IP address and its DNS and WINS configuration will be set from the IPCop machine. To use this feature new machines must be set to obtain their network configuration automatically.
You can choose if you want to provide this service to your Green (private) network and/or your Blue (wireless) network. Just tick the relevant box.
For a full explanation of DHCP you may want to read Linux Magazine's “Network Nirvana - How to make Network Configuration as easy as DHCP”.
The following DHCP parameters can be set from the web interface:
Enabled. Check this box to enable the DHCP server for this interface.
IP Address/Netmask. The IP Address of the network interface and its Netmask are displayed here for reference.
Start Address (optional). You can specify the lowest and highest addresses that the server will hand out to other requestors. The default is to hand out all the addresses within the subnet you set up when you installed IPCop. If you have machines on your network that do not use DHCP, and have their IP addresses set manually, you should set the start and end address so that the server will not hand out any of these manual IPs.
You should also make sure that any addresses listed in the fixed lease section (see below) are also outside this range.
End Address (optional). Specify the highest address you will handout (see above).
To enable DHCP to provide fixed leases without handing out dynamic leases, leave both Start and End Address fields blank. However, if you provide a Start Address, you also have to provide an End Address, and vice versa.
Base IP for fixed lease creation (optional). The ability to add fixed leases from the list of dynamic leases was added in v1.4.12.
You can specify an IP Address which will be used as the base from which new fixed leases will be incremented.
Default lease time. This can be left at its default value unless you need to specify your own value. The default lease time is the time interval IP address leases are good for. Before the lease time for an address expires your computers will request a renewal of their lease, specifying their current IP address. If DHCP parameters have been changed, when a lease renewal request is made the changes will be propagated. Generally, leases are renewed by the server.
Maximum lease time. This can be left at its default value unless you need to specify your own value. The maximum lease time is the time interval during which the DHCP server will always honor client renewal requests for their current IP addresses. After the maximum lease time, client IP addresses may be changed by the server. If the dynamic IP address range has changed, the server will hand out an IP address in the new dynamic range.
Domain name suffix (optional). There should not be a leading period in this box. Sets the domain name that the DHCP server will pass to the clients. If any host name cannot be resolved, the client will try again after appending the specified name to the original host name. Many ISP's DHCP servers set the default domain name to their network and tell customers to get to the web by entering “www” as the default home page on their browser. “www” is not a fully qualified domain name. But the software in your computer will append the domain name suffix supplied by the ISP's DHCP server to it, creating a FQDN for the web server. If you do not want your users to have to unlearn addresses like www, set the Domain name suffix identically to the one your ISP's DHCP server specifies.
Allow bootp clients. Check this box to enable bootp Clients to obtain leases on this network interface. By default, IPCop's DHCP server ignores Bootstrap Protocol (BOOTP) request packets.
Primary DNS. Specifies what the DHCP server should tell its clients to use for their Primary DNS server. Because IPCop runs a DNS proxy, you will probably want to leave the default alone so the Primary DNS server is set to the IPCop box's IP address. If you have your own DNS server then specify it here.
Secondary DNS (optional). You can also specify a second DNS server which will be used if the primary is unavailable. This could be another DNS server on your network or that of your ISP.
Primary NTP Server (optional). If you are using IPCop as an NTP Server, or want to pass the address of another NTP Server to devices on your network, you can put its IP address in this box. The DHCP server will pass this address to all clients when they get their network parameters.
Secondary NTP Server (optional). If you have a second NTP Server address, put it in this box. The DHCP server will pass this address to all clients when they get their network parameters.
Primary WINS server address (optional). If you are running a Windows network and have a Windows Naming Service (WINS) server, you can put its IP address in this box. The DHCP server will pass this address to all hosts when they get their network parameters.
Secondary WINS server address (optional). If you have a second WINS Server, you can put its IP address in this box. The DHCP server will pass this address to all hosts when they get their network parameters.
When you press the Save button, the change is acted upon.
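For orientation, the settings above correspond to a conventional ISC dhcpd subnet declaration roughly like the sketch below. This assumes IPCop's DHCP service is dhcpd-based and uses invented addresses and times; IPCop generates its own configuration, so this is for illustration only:

# illustrative dhcpd.conf fragment - all values are examples
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;          # Start/End Address
    option domain-name "example.lan";           # Domain name suffix
    option domain-name-servers 192.168.1.1;     # Primary DNS (the IPCop box)
    option ntp-servers 192.168.1.1;             # Primary NTP Server
    option netbios-name-servers 192.168.1.2;    # Primary WINS server address
    default-lease-time 3600;                    # Default lease time, in seconds
    max-lease-time 7200;                        # Maximum lease time, in seconds
}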
If you have any special parameters you want to distribute to your network via the DHCP server, you add them here. (This functionality was added in v1.4.6).
You can add additional DHCP Options here:
Option name. You specify the name of the DHCP option here.
Option value. The value, appropriate to the option, goes here. It could be a string, an integer, an IP Address, or an on/off flag, depending on the option.
Possible option formats are: boolean, integer 8, integer 16, integer 32, signed integer 8, signed integer 16, signed integer 32, unsigned integer 8, unsigned integer 16, unsigned integer 32, ip-address, text, string, array of ip-address.
The following formats were added in v1.4.12: array of integer 8, array of integer 16, array of integer 32, array of signed integer 8, array of signed integer 16, array of signed integer 32, array of unsigned integer 8, array of unsigned integer 16, array of unsigned integer 32.
Option scope (optional). The scope of the option will be Global, unless one of the interface checkboxes is checked, in which case it will only apply to that interface.
Enabled. Click on this check box to tell the DHCP server to hand out this option. If the entry is not enabled, it will be stored in IPCop's files, but the DHCP server will not issue the option.
Add. Click on this button to add the option.
List options. Click on this button to display a list of options with possible values.
If the option you want is not included in the built-in list of options, you can add your own custom definitions. The syntax required is listed at the foot of the Options List.
For example, to add the ldap-server option (code 95) to the list, add a DHCP Option with name: ldap-server and value: code 95=string (be sure to enter the value correctly, 1 space between code and 95 and no spaces around the = sign). You should then see an entry with Option name: ldap-server, Option value: code 95=string and Option scope: Global. Now you can add an ldap-server as you would with any built-in DHCP option, with Option name: ldap-server and Option value: the address of your LDAP server.
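In conventional dhcpd terms the two steps above amount to first declaring the custom option and then assigning it a value, roughly as in the sketch below; the server address is a placeholder and the exact statements IPCop writes may differ:

# sketch of the equivalent dhcpd.conf statements
option ldap-server code 95 = string;        # custom option definition (code 95=string)
option ldap-server "ldap://192.168.1.10";   # value handed out to clients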
If you have machines whose IP addresses you would like to manage centrally but require that they always get the same fixed IP address you can tell the DHCP server to assign a fixed IP based on the MAC address of the network card in the machine.
This is different to using manual addresses as these machines will still contact the DHCP server to ask for their IP address and will take whatever we have configured for them.
Add a new fixed lease. You can specify the following fixed lease parameters:
Enabled. Click on this check box to tell the DHCP server to hand out this static lease. If the entry is not enabled, it will be stored in IPCop's files, but the DHCP server will not issue this lease.
MAC Address (optional). The six octet/byte colon separated MAC address of the machine that will be given the fixed lease.
If you leave the MAC Address field blank, the DHCP server will try to assign a fixed lease based on the hostname or fully qualified domain name (FQDN) of the client.
If you provide a MAC Address and a hostname in the Hostname or FQDN field, the DHCP server will provide that hostname to the client.
The format of the MAC address is xx:xx:xx:xx:xx:xx, not xx-xx-xx-xx-xx-xx, as some machines show, i.e. 00:e5:b0:00:02:d2.
It is possible to assign different fixed leases to the same device, provided the IP addresses are in different subnets. Duplicated addresses are highlighted in the table in bold text.
IP Address. The static lease IP address that the DHCP server will always hand out for the associated MAC address. Do not use an address in the server's dynamic address range.
It is possible to assign an IP Address outwith the local subnets to a device. The IP address will be highlighted in orange in the table.
Hostname or FQDN (optional). The client will receive a hostname, or in the case of a Fully Qualified Domain Name, a hostname and domain name, if a MAC address is also provided. If the MAC Address field is blank, the DHCP server will try to assign a fixed lease based on the hostname or FQDN of the client, using the name entered in this field.
Remark (optional). If you want, you can include a string of text to identify the device using the fixed lease.
Router IP Address (optional). For fixed leases, it is possible to send the Client a router (gateway) address that is different from the IPCop address.
DNS Server (optional). Send the Client another DNS server, not the DNS server(s) configured in the DHCP settings section.
Enter optional bootp pxe data for this fixed lease. Some machines on your network may be thin clients that need to load a boot file from a network server.
next-server (optional). You can specify the server here if needed.
filename (optional). Specify the boot file for this machine.
root-path (optional). If the boot file is not in the default directory then specify the full path to it here.
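Taken together, a fixed lease with optional boot parameters corresponds roughly to a dhcpd host declaration like the one sketched below; the host name, addresses and paths are invented, and only the MAC address format follows the example given above:

# illustrative host declaration for a fixed lease
host thinclient1 {
    hardware ethernet 00:e5:b0:00:02:d2;       # MAC Address
    fixed-address 192.168.1.50;                # IP Address, outside the dynamic range
    option routers 192.168.1.1;                # optional Router IP Address
    option domain-name-servers 192.168.1.1;    # optional DNS Server
    next-server 192.168.1.3;                   # optional bootp/pxe server
    filename "pxelinux.0";                     # optional boot file
    option root-path "/tftpboot/clients";      # optional root path
}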
The current fixed leases are displayed at the foot of this section, and they can be enabled/disabled, edited or deleted.
You can sort the display of the fixed leases by clicking on the underlined column headings. Another click on the heading will reverse the sort order.
To edit an existing lease, click on its pencil icon. The fixed leases values will be displayed in the Edit an existing lease section of the page. The fixed lease being edited will be highlighted in yellow. Click the button to save any changes.
To remove an existing profile, click on its trash can icon. The lease will be removed.
If DHCP is enabled, this section lists the dynamic leases contained in the leases file. The IP Address, MAC Address, hostname (if available) and lease expiry time of each record are shown, sorted by IP Address.
You can re-sort the display of dynamic leases by clicking on any of the four underlined column headings. A further click will reverse the sort order.
It is easy to cut and paste a MAC Address from here into the fixed lease section, if needed.
A new method of adding fixed leases from the list of dynamic leases was added in v1.4.12. Used in conjunction with the Base IP for fixed lease creation field, you can select one or more checkboxes, and click the button to quickly add a number of devices to the fixed lease list.
Lease times that have already expired are “struck through”.
Dynamic DNS (DYNDNS) allows you to make your server available to the Internet even though it does not have a static IP address. To use DYNDNS you must first register a subdomain with a DYNDNS provider. Then whenever your server connects to the Internet and is given an IP address by your ISP it must inform the DYNDNS server of that IP address. When a client machine wishes to connect to your server it will resolve the address by going to the DYNDNS server, which will give it the latest value. If this is up to date then the client will be able to contact your server (assuming your firewall rules allow this). IPCop makes the process of keeping your DYNDNS address up to date easier by providing automatic updates for many of the DYNDNS providers.
The following DYNDNS parameters can be set from the web interface:
Service. Choose a DYNDNS provider from the dropdown. You should have already registered with that provider.
Behind a proxy. This tick box should be ticked only if you are using the no-ip.com service and your IPCop is behind a proxy. This tick box is ignored by other services.
Enable wildcards. Enable Wildcards will allow you to have all the subdomains of your dynamic DNS hostname pointing to the same IP as your hostname (e.g. with this tick box enabled, www.ipcop.dyndns.org will point to the same IP as ipcop.dyndns.org). This tick box is useless with no-ip.com service, as they only allow this to be activated or deactivated directly on their website.
Hostname. Enter the hostname you registered with your DYNDNS provider.
Domain. Enter the domain name you registered with your DYNDNS provider.
Username. Enter the username you registered with your DYNDNS provider.
Password. Enter the password for your username.
Enabled. If this is not ticked then IPCop will not update the information on the DYNDNS server. It will retain the information so you can re-enable DYNDNS updates without reentering the data.
This section shows the DYNDNS entries you have currently configured.
To edit an entry click on its pencil icon. The entry's data will be displayed in the form above. Make your changes and click the button on the form.
You can also update the Behind a proxy, Use wildcards and Enabled tick boxes directly from the current hosts list entry.
You can force IPCop to refresh the information manually by pressing the button; however, it is best to only update when the IP address has actually changed, as dynamic DNS service providers don't like to handle updates that make no changes. Once the host entries have been enabled your IP will automatically be updated each time your IP changes.
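For background, most of the supported providers accept a simple authenticated HTTP request carrying the hostname and the new IP address. Conceptually the update IPCop sends is similar to the request below; the URL, credentials and addresses are placeholders, and you should rely on IPCop itself (or your provider's documentation) rather than issuing such requests by hand:

# rough sketch of a members.dyndns.org-style update request (placeholder values)
curl --user myuser:mypassword \
  "https://members.dyndns.org/nic/update?hostname=ipcop.dyndns.org&myip=203.0.113.5"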
As well as caching DNS information from the Internet, the DNS proxy on IPCop allows you to manually enter hosts whose address you want to maintain locally. These could be addresses of local machines or machines on the Internet whose address you might want to override.
The following parameters can be set from the web interface:
Host IP Address. Enter the IP address here.
Hostname. Enter the host name here.
Domain name (optional). If the host is in another domain then enter it here.
Enabled. Check this box to enable the entry.
When you press the Add button, the details will be saved.
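The effect is much the same as publishing an extra hosts-style record through the DNS proxy. An entry with the invented values below, for example, would let every client resolve the name without touching their individual hosts files:

# host IP address   hostname (FQDN)          plain hostname
192.168.1.20         fileserver.example.lan   fileserver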
This section shows the local DNS entries you have currently configured.
You can re-sort the display by clicking on any of the three underlined column headings. A further click will reverse the sort order.
To enable or disable an entry - click on the “Enabled” icon (the checkbox in the Action column) for the particular item you want to enable or disable. The icon changes to an empty box when a rule is disabled. Click on the checkbox to enable it again.
To edit an entry click on its Pencil icon. The entry's data will be displayed in the form above. Make your changes and click the button on the form.
To delete an entry click on its Trash Can icon.
IPCop can be configured to obtain the time from a known accurate timeserver on the Internet. In addition to this it can also provide this time to other machines on your network.
To configure the time system, make sure that the Enabled box is ticked and enter the full name of the timeserver you want to use in the Primary NTP Server box. You can also enter an optional Secondary NTP Server if you want.
We suggest that, for efficiency, you synchronize IPCop with your ISP's time servers, where available. If they are not provided, try the www.pool.ntp.org project, which is “a big virtual cluster of timeservers striving to provide reliable easy to use NTP service for millions of clients without putting a strain on the big popular timeservers.”
Follow their instructions on how to use country zones (for example 0.us.pool.ntp.org) rather than the global zone (0.pool.ntp.org), to further improve efficiency.
In January 2008 the IPCop vendor pool became available. Please use 0.ipcop.pool.ntp.org 1.ipcop.pool.ntp.org or 2.ipcop.pool.ntp.org instead of the previous default zone names.
If you want to provide a time service to the rest of your network then tick the Provide time to local network checkbox.
You can choose to update the time on IPCop on a periodic basis, for instance every hour, or to update it when you wish from this web page (just click Set Time Now).
To save your configuration click the Save button.
Although IPCop can act as a timeserver for your network, it uses the ntpdate command to update its time on a periodic basis instead of allowing the more accurate ntpd server to maintain the time continuously. This means that the IPCop clock is more likely to drift out of synchronisation with the real time but does not require that IPCop is permanently connected to the Internet.
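In other words the periodic update is a one-shot query rather than a running daemon, conceptually equivalent to a command such as the following (the pool name is just the example suggested above):

# one-shot clock synchronisation against an NTP pool server
ntpdate 0.ipcop.pool.ntp.org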
If you find IPCop's onboard clock is being stepped by a large amount when it synchronizes with another NTP Server, you can apply a correction factor in the /etc/ntp/drift file to compensate.
You can find the step amount in the System Logs, in the NTP section. You should see something like:
10:40:00 ntpdate step time server 192.168.1.1 offset 3.371245 sec
If you divide the time error by the time passed, and multiply by one million, you get the value (in parts per million) to put in the drift file.
In the example below, 3.37 is the daily offset; 86400 equals the number of seconds in a day; and the result in PPM is 39.004:
(3.37 ÷ 86400 × 1000000) = 39.004
Change the value in the drift file with the command (as root):
echo 39.004 > /etc/ntp/drift
If you do not want to use an Internet timeserver you can enter the time manually and click the button.
If you correct the time by a large amount, and offset the clock ahead of itself, the fcron server that runs regular cron jobs can appear to stop while it waits for the time to catch up. This can affect graph generation and other regular tasks that run in the background.
If this happens, try running the command fcrontab -z in a terminal to reset the fcron server.
Traffic Shaping allows you to prioritize IP traffic moving through your firewall. IPCop uses WonderShaper to accomplish this. WonderShaper was designed to minimize ping latency and ensure that interactive traffic like SSH stays responsive, all while downloading or uploading bulk traffic.
Many ISPs sell speed as download rates, not as latency. To maximize download speeds, they configure their equipment to hold large queues of your traffic. When interactive traffic is mixed into these large queues, their latency shoots way up, as ACK packets must wait in line before they reach you. IPCop takes matters into its own hands and prioritizes your traffic the way you want it. This is done by setting traffic into High, Medium and Low priority categories. Ping traffic always has the highest priority — to let you show off how fast your connection is while doing massive downloads.
To use Traffic Shaping in IPCop:
Use well known fast sites to estimate your maximum upload and download speeds. Fill in the speeds in the corresponding boxes of the Settings portion of the web page.
Enable traffic shaping by checking the Enable box.
Identify what services are used behind your firewall.
Then sort these into your 3 priority levels. For example:
Interactive traffic such as SSH (port 22) and VOIP (voice over IP) go into the high priority group.
Your normal surfing and communicating traffic like the web (port 80) and streaming video/audio go into the medium priority group.
Put your bulk traffic such as P2P file sharing into the low traffic group.
Create a list of services and priorities using the Add service portion of the web page.
The services, above, are only examples of the potential Traffic Shaping configuration. Depending on your usage, you will undoubtedly want to rearrange your choices of high, medium and low priority traffic.
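As a purely hypothetical starting point, a service list built along those lines might look like the sketch below; the port numbers are common defaults and should be adjusted to your own usage:

# priority   port    service (illustrative values only)
# high       22      SSH - interactive shell sessions
# high       5060    SIP/VoIP signalling
# medium     80      HTTP - web browsing
# medium     443     HTTPS - secure web browsing
# low        6881    P2P / bulk file sharing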
IPCop contains a powerful intrusion detection system, Snort, which analyses the contents of packets received by the firewall and searches for known signatures of malicious activity.
Snort is a passive system which requires management by the user. You need to monitor the logs and interpret the information. Snort only logs suspicious activity, so if you need an active system, consider snort_inline or a similar inline alternative.
You should also note that Snort is memory hungry, with newer versions using about 80Mb per interface. This depends in part on the ruleset used, and can be reduced by selection of the rules used.
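Rule selection is normally done by commenting out the rule categories you do not need in snort's configuration; a trimmed-down sketch might look like the fragment below, where the file names depend entirely on the ruleset you downloaded:

# snort.conf fragment - keep only the rule categories you actually need
include $RULE_PATH/exploit.rules
include $RULE_PATH/scan.rules
# include $RULE_PATH/chat.rules         # disabled to save memory
# include $RULE_PATH/multimedia.rules   # disabled to save memory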
IPCop can monitor packets on the Green, Blue, Orange and Red interfaces. Just tick the relevant boxes and click the Save button.
A standard installation of IPCop comes with a set of Snort's default rules. As more attacks are discovered, the rules Snort uses to recognize them will be updated. To utilize Sourcefire VRT Certified rules you need to register on Snort's website www.snort.org and obtain an “Oink Code”.
Select the correct radio button, add your Oink Code and click the Save button before your first attempt to download a ruleset.
Then, click the Refresh update list button, followed by the Download new ruleset button, and finally click Apply now.
After a successful operation the date and time will be displayed beside each button.
The final button - Read last ruleset installation log - will display the last installation log. | 1 | 2 |
Simplicity is the ultimate sophistication.—Leonardo da Vinci
Recent years have seen rapid proliferation of ablative and antiarrhythmic therapies for treating various ventricular and supraventricular arrhythmias. Yet cardioversion and defibrillation remain the main modalities to restore normal sinus rhythm. Their simplicity, reliability, safety, and, most important, their efficacy in promptly restoring normal sinus rhythm are unmatched in our current treatment armamentarium.
Contemporary cardiology has been significantly affected by the ready availability of this simple method for terminating atrial and ventricular tachyarrhythmias. However, fascination with electricity and its use in biological systems is hardly contemporary. The first capacitor that was able to store electric energy in a glass container was discovered in 1745. It was named the Leyden jar, and its use was shortly thereafter tested in the electrocution of small animals. There is a large body of literature in Italy, France, and England on biological and medical application of electricity dating from the 17th and 18th centuries. Although physicians across Europe started using electricity as an experimental treatment, the earliest recorded scientific approach with the use of electric shocks was that of Peter Abildgaard in 1775.1 He systematically shocked hens, delivering electric charges in different parts of their body. Electric stimuli applied anywhere across the body of the hen, particularly in the head, could render the animal lifeless, but subsequent shocks delivered to the chest could revive the heart.
Abildgaard was only one of the several scientists who studied the effects of electricity on animals. Some reported similar findings, and others could not reproduce his results. However, Luigi Galvani in 1781 first clearly described the link between electricity and its presence in biological systems.2 He was the first to use the term animal electricity, coined after his famous experiments in which he caused the legs of a skinned frog to kick when touched with a pair of scissors during an electric storm. The recognition of electricity in living organisms sparked intense interest and excitement and led to application of electricity to revive the dead. Possibly the first description of successful resuscitation with the use of electric shock was reported by Charles Kite in 1788, when a 3-year-old girl, a victim of a fall, was shocked through the chest by an electric generator and a Leyden jar by a Mr Squires of London.3 A similar report by Fell appeared in Gentlemen’s Magazine in 1792, with the description of this first prototype of a modern defibrillator (Figure 1).
In 1792, the British scientist James Curry published a review of resuscitation cases and recommended that “moderate shocks be passed through the chest in different directions in order, if possible, to rouse the heart to act.”4 Several other successful attempts at resuscitation led the Royal Humane Society in England to publish a report in 1802 suggesting the application of electric shock to distinguish “real from apparent death” and praising the potential of electric resuscitation.
Scientists at that time were unaware that, at least in some cases, revival with electricity was perhaps due to successful termination of ventricular fibrillation (VF). Ludwig and Hoffa were the first to describe this arrhythmia in 1849 when they observed bizarre and chaotic action of the ventricles when exposed directly to electric current.5 The nature of this arrhythmia was subjected to speculation. Neurogenic theory that explained VF as a consequence of abnormal generation and conduction within the neural network was favored. A French neurophysiologist, Edme Vulpian, coined the term fibrillation and first suggested that the heart itself was responsible for originating and sustaining this irregular rhythm that results in mechanical disarray.6
In 1889, John McWilliam of Aberdeen, Scotland, was the first to suggest that VF, and not cardiac standstill, was the mechanism of sudden death in humans.7 Previously, he had experimented with mammalian animal hearts and was able to induce VF by applying electricity directly to the heart.8 Two physicians, Jean-Louis Prevost, a former trainee of Vulpian, and Frederic Battelli, worked together at the University of Geneva, Switzerland, on the mechanism of electrically induced VF.9 They confirmed the observations of Ludwig, Vulpian, and McWilliam in 1899 by showing that a small amount of electricity delivered across the chest can induce VF.9 It is fascinating that their secondary observation, mentioned only in a footnote, that larger electric shocks successfully restored normal sinus rhythm stirred little interest until the first defibrillation experiments some 30 years later. Even well-respected figures, giants in the field like Carl Wiggers, who later made significant contributions to the theory of fibrillation and defibrillation, were skeptical of the report of Prevost and Battelli and did not find “their claims worthy of the time, effort or expense.”10
Nevertheless, Prevost and Battelli proposed the so-called incapacitation theory, whereby VF is terminated by complete electromechanical incapacitation of the myocardium established by the electric shock that also stopped and abolished the return of normal electric and mechanical work of the heart. Consequently, direct massage of the heart was suggested to support the circulation until electromechanical function of the heart was restored. This method was perfected by Carl Wiggers10 and used later during the pioneering studies with defibrillation in humans by Claude Beck.11
The late 19th and early 20th centuries brought rapid expansion of commercially available electric power. This progress was followed by a growing number of accidents involving electrocution. It soon became apparent that most of the deaths were due to VF. Orthello Langworthy and Donald Hooker, both physicians at Johns Hopkins University, and William Kouwenhoven, an electrical engineer, were funded by the Consolidated Edison Electric Company of New York City to investigate the possible remedies for these frequent accidents. They studied both alternating current (AC) and direct current (DC) shocks and concluded that AC shock was more effective in terminating VF.12 In 1933, the Johns Hopkins group succeeded in terminating VF in a dog when they accidently applied a second shock, hence the term countershock.13 In 1936, Ferris and colleagues, another team composed of engineers and cardiologists, reported the first closed-chest defibrillation in sheep with the use of an AC shock.14
All of these experiments culminated with the first reported defibrillation of the exposed human heart performed by Claude Beck (Figure 2), a cardiothoracic surgeon at Western Reserve University/University Hospitals of Cleveland, Ohio, in 1947.11 Beck was aware of Carl Wiggers’ work on the mechanisms of fibrillation and defibrillation. Wiggers, also of Western Reserve University, had described the induction of VF through the concept of the vulnerable period.15 He was also a proponent of defibrillation, although he did not believe in transthoracic delivery of electric shocks. These conclusions influenced Beck when he performed the first known defibrillation of VF in humans. He was operating on a 14-year-old boy. During the closure of the wound, the pulse stopped, at which time the wound was reopened, and cardiac massage was performed for the next 45 minutes. An ECG confirmed VF, and seeing no other option, Beck delivered a single shock that failed to defibrillate the VF. After intracardiac administration of procaine hydrochloride, he delivered the second shock that restored sinus rhythm. This success triggered immediate acceptance of defibrillation across the world. Beck’s defibrillator used AC directly from the wall socket (Figure 3). He built it together with his friend James Rand III of the RAND Development Corporation. The most significant drawback, however, was that it could be used only to defibrillate exposed hearts. Therefore, for years it was used only in operating rooms.
Concurrent with the studies in the 1930s and 1940s in the West, a different approach to defibrillation was being developed in the Soviet Union. The latter provided further insight into the mechanisms of defibrillation and paved the way for development of modern defibrillation waveforms and the use of DC shock. The director of the Institute of Physiology at the Second Medical University in Moscow was Professor Lina Stern, who, as a former trainee and then associate of Prevost and Battelli, had studied VF and defibrillation. She assigned a PhD project on the study of arrhythmogenesis and defibrillation to Naum Gurvich (Figure 4), a young physician member of her laboratory. Gurvich later became a key figure and made fundamental discoveries in the fields of fibrillation and defibrillation. In 1939, in their classic work,16,17 Gurvich and Yuniev proposed using a single discharge from a capacitor to defibrillate VF, thus effectively introducing DC shock for defibrillation purposes. Until then, an AC shock was favored and was being developed as the most effective way to defibrillate VF. Parenthetically, in the West, AC shock continued to be used exclusively until the early 1960s. During his doctoral research (1933–1939), Gurvich found that an AC shock at a frequency of 50 to 500 Hz could not be tolerated and, in fact, led to VF. However, he also showed that a single discharge from a capacitor with a DC shock terminated VF. Another advantage of a DC shock was that large amounts of energy could be delivered in a relatively short period of time. In the 1940s, combining his studies with the Wiggers-Wegria model of the vulnerable period, he proposed a completely new concept in the field of defibrillation that was based on using biphasic defibrillation waveforms. Gurvich first reported using rounded biphasic waveforms, produced by a capacitor and inductor, for defibrillation as early as 1939, although at that time he was unaware of the superiority of this waveform over the monophasic waveform. More importantly, these advances allowed Gurvich18 to propose his “excitatory” theory of defibrillation, which suggested that direct excitation of the myocardium prevents further propagation of fibrillatory waves without preventing resumption of normal sinus rhythm. He also introduced the concept of the mother-reentrant circuit as a foundation for the development and sustainability of VF.19 In the United States, MacKay and Leeds in 1953 reported on their first experience with DC shock in dogs.20 Their conclusion was similar to that of Gurvich: They pointed out that DC shock is more efficacious and safer than AC shock, and they also suggested the use of DC shock in humans. All of these reports had opened the way to explore the use of DC or capacitor shocks. In 1952, Gurvich designed the first commercially available transthoracic DC defibrillator (Figure 5) in the world.19,21,22 The application of this device was described in great detail in the Soviet Ministry of Health resuscitation guidelines, published first in 1952.23 The guidelines required every operating room of a major hospital to have a defibrillator. This first DC defibrillator, ID-1-VEI, used a monophasic waveform that, 10 years later, became known as the Lown waveform.
Following the work of Gurvich in Moscow, another physician-scientist behind the Iron Curtain made the next important defibrillation contribution. In 1957, Bohumil Peleška, from Prague, Czechoslovakia, reported on both direct and transthoracic use of DC shock for defibrillation purposes.24 He constructed his own DC defibrillator, modifying Gurvich’s design by including an iron core in the inductor,25 and is credited with improving the procedure of cardioversion by using lower voltage and describing the effects of DC shock. Thus, the original work on biphasic defibrillation waveforms and DC cardioversion and defibrillation had originated initially in the East.
It was again in the Soviet Union in February 1959 that Vishnevskii and Tsukerman performed the first reported cardioversion of atrial fibrillation (AF) using a DC shock.26,27 The patient had AF for 3 years, and the restoration of normal sinus rhythm took place during mitral valve surgery. The same team reported the first successful transthoracic cardioversion of atrial arrhythmias in 20 patients using DC cardioversion in 1960.28 In 1970, Gurvich introduced the first biphasic transthoracic defibrillator, which became standard in Soviet medical practice from that time, preceding Western analogs by at least 2 decades.29
Of note, as part of “an international trip to further international cooperation in medical research for the good of people,” in 1958, the well-known and influential senator Hubert H. Humphrey visited Moscow.30 During that trip, Humphrey visited the Research Laboratory of General Reanimatology (Resuscitation), where he met with its director, Vladimir Negovsky, and the laboratory’s leading defibrillation researcher, Naum Gurvich. “There, I saw his successful animal experiments on the reversibility of death, that is, on the revival of ‘clinically dead’ animals through massive electric shocks. When I returned to our country, I reported publically on his experiments.”31 Later, Humphrey urged the development of programs through the National Institutes of Health “on the physiology of death, on resuscitation, and related topics.”31 Nevertheless, the work behind the Iron Curtain remained virtually unrecognized in the West. However, as we shall see, the work became known to an electrical engineer working for the American Optical Company, and this had a profound impact on the field.
In 1956, Paul Zoll of Beth Israel Hospital and Harvard Medical School in Boston, Mass, demonstrated successful closed-chest defibrillation in humans, again using an AC shock.32 Not long after, in 1960, working at Lariboisiere Hospital in Paris, France, an electrical engineer and physician, Fred Zacouto, completed the design of the first external automatic defibrillator/pacer33 (Figure 6). He had invented it in March 1953 and filed the related patent in July 1953 in Paris. His “Bloc Réanimateur” was able to sense a slow pulse from an infrared device attached to different parts of patient’s body (ear lobe and a finger) and provide transcutaneous pacing until spontaneous return of heart activity. At the same time, it could detect VF from an ECG and deliver an AC shock of adjustable voltage and duration with the ability to redetect VF and redeliver a shock if needed. It was first used to successfully defibrillate a patient in November 1960. A total of 68 devices were produced and sold by 1968, first by Zacouto’s Savita company and later by Thomson-CFTH. The device was used in hospitals in France, Switzerland, and Germany.
Bernard Lown (Figure 7) of the Peter Bent Brigham Hospital in Boston, Mass, is credited in the Western world with initiating the modern era of cardioversion. He was the first in the West to combine defibrillation and cardioversion with portability and safety. In 1959, in a patient with recurrent bouts of ventricular tachycardia (VT), Lown was the first to transthoracically apply AC shock using the Zoll defibrillator to successfully terminate an arrhythmia other than VF.34 This event is notable because intravenous administration of procainamide had failed to terminate the patient’s VT, and application of the transthoracic shock became a dire necessity to try to save a human life.35 Because the procedure was unplanned and on an urgent basis and because there was not any information of which Lown’s team was aware to provide data on the safety and efficacy of the procedure, it was done despite the hospital’s resistance and only after Lown took sole responsibility.35 Lown later recalled the following: “Never having seen an AC defibrillator, I hadn’t the remotest idea how to use one. A host of questions needed prompt answers: Was the shock painful? Was the anesthesia required? Was there an appropriate voltage setting to reverse ventricular tachycardia? If the shock failed, how many additional ones could be delivered? Did the electric discharge traumatize the heart or injure the nervous system? Could it burn the skin? Were there any hazards for bystanders? Was it explosive for the patients receiving oxygen? My head was migrainous from the avalanche of questions.”35 At that time, clearly, Lown knew little about defibrillation and the intricacies of AC versus DC shock.
In early 1961, Lown “fortunately, and quite accidentally, met a brilliant young electrical engineer, Baruch [sic] Berkowitz [sic]”35 (Figure 8), who was helping Lown’s laboratory with instruments for research projects unrelated to the problem of cardioversion and defibrillation. Barouh Berkovits had been developing a DC defibrillator while working for the American Optical Corporation as the Director of Cardiovascular Research. Although the American Optical Corporation manufactured an AC defibrillator, Berkovits was very aware of its shortcomings because he was familiar with the previous work of Gurvich.36 Thus, aware that DC shock was safer and more effective, Berkovits had decided to build a DC defibrillator for possible commercial use. After the “accidental” meeting of Berkovits with Lown, when they learned of each other’s interests, Berkovits asked Lown if he would be interested in testing his device. In April 1961, Lown formally asked Berkovits to study his DC defibrillator in canines and for possible clinical application.37 A series of intense experiments followed that involved testing the efficacy of multiple waveforms and evaluating the safety of DC shock in a very large number of canines. During these experiments, the Lown-Berkovits investigation group, aware of the importance of avoiding the vulnerable period, introduced for the first time the novel concept of synchronizing delivery of the shock with the QRS complex sensed from the ECG. During these studies, they also developed a monophasic waveform, later known as the “Lown waveform,” with high efficacy and safety for shock delivery during a rhythm other than VF. These studies culminated with the use of the DC cardioverter-defibrillator in patients. Lown is also credited with coining the term cardioversion for delivery of a synchronized shock during an arrhythmia other than VF. Noting the previous work with DC defibrillation in humans by Gurvich in the Soviet Union and Peleška in Czechoslovakia, as well as the adverse effects of AC shock, in 1962 Lown et al reported their success in terminating VT with a single DC monophasic shock in 9 patients.38 Lown subsequently went on to expand DC cardioversion to successfully convert both atrial and ventricular arrhythmias using the monophasic DC shock.39–41 This success promptly resulted in the acceptance and worldwide spread of DC cardioversion. One result of the success of the DC cardioverter-defibrillator was the development of the modern cardiac care unit, where Lown again played an important role. In 1962, Berkovits patented the DC defibrillator for the American Optical Corporation.
The impact of this “new technique” was indeed profound. The ability to “reverse death” with a simple shock had dramatically improved in-hospital cardiac arrest outcomes. However, it was widely known that the highest mortality was taking place in the immediate period after an individual suffered a heart attack, mainly outside hospital premises.
This problem was boldly addressed by J. Frank Pantridge, who, working together with John Geddes at the Royal Victoria Hospital in Belfast, UK, created the first Mobile Coronary Care Unit, which began operation on January 1, 1966.42 The initial assembly of the defibrillator for this mobile unit, which consisted of 2 car batteries, a static inverter, and an American Optical defibrillator, weighed 70 kg. Any initial skepticism that defibrillation out of the hospital would not be feasible, and may even be detrimental, disappeared when the initial 15-month experience with the “flying squad” was published.43 Aware of the work of Peleška, Pantridge’s team made further improvements in the design of the defibrillator. A key stage in the development of the mobile intensive care unit came with the design of a small, portable defibrillator. Using the miniature capacitor developed for the US National Aeronautics and Space Administration, Pantridge, together with John Anderson, a biomedical engineer, developed a 3.2-kg portable defibrillator that became available in 1971.
With great passion, Pantridge advocated his approach of making early defibrillation readily available everywhere. His ideas first became widely accepted in the United States. Subsequently, Anderson and Jennifer Adgey, another physician from the Belfast group, were among the first to develop the semiautomatic and automatic portable external defibrillator in the late 1970s and early 1980s. With continued development, the portable defibrillator gradually evolved from exclusive use by physicians and was given to paramedics, then to firemen, and finally to members of the public. The benefits of this approach are more than obvious today.44
Although external transthoracic DC cardioversion gained wide acceptance and radically improved patient outcomes, the work on defibrillation did not stop here. Defibrillation from an implantable device was the next major achievement that dramatically changed our approach to treat sudden cardiac death. Michel Mirowski conceived the idea for an implantable cardiac defibrillator while working in Israel. Mirowski trained at Tel Hashomer Hospital in Israel, where his mentor was Harry Heller.45 Heller had developed repetitive bouts of VT that were treated with quinidine or procain-amide. However, Mirowski was very aware that, sooner or later, this arrhythmia would take Heller’s life. It was the sudden death of his mentor in 1966 and the recognition that sudden arrhythmic death was a major problem without, at that time, a solution that influenced Mirowski to dedicate his career to design and develop the implantable cardiac defibrillator. Mirowski recognized that it would be very difficult to accomplish his goal in Israel. In 1968, he accepted a position at Sinai Hospital of Baltimore, Md, as a director of the Coronary Care Unit, with 50% of his time for research. He arrived there in the summer of 1969, and in November 1969 he began working toward his goal with Morton Mower, a young cardiologist and a vital coinvestigator. Together, they produced and tested in dogs the first prototype of an automatic defibrillator46 (Figure 9). Virtually simultaneously and independently, John Schuder, a PhD in Electrical Engineering and then an Associate Professor of Biophysics and Surgery at the University of Missouri in Columbia, also began work on an implantable defibrillator.47 While contemplating future projects during an American Heart Association meeting in 1969, and having been steeped in “transthoracic defibrillation, knowledge about waveform efficacy, and an appreciation of circuit design and component problems,” Schuder later commented, “it was almost immediately apparent that the automatic implantable defibrillator was a doable project. I decided to go home and do it.”47 In fact, Schuder was the first to implant and successfully use a cardiac defibrillator in a dog in January 1970.48 He subsequently abandoned his work on the implantable defibrillator, instead concentrating his work on optimization of shocking waveforms. Schuder’s continued contributions laid the foundation for the miniature, low-energy, reliable, high-voltage, biphasic waveform, which ultimately made contemporary implantable cardioverter-defibrillator (ICD) therapy possible.
The continued path to the first implantable cardiac defibrillator in humans was anything but simple or short. As stated by William Staewen, the Director of the Biomedical Engineering Department at Sinai Hospital of Baltimore, Md, and Morton Mower, “The design had to be virtually unflawed. It had to reliably sense ventricular defibrillation and deliver a high energy electric shock to correct the arrhythmia in less than one minute. This had to be accomplished with a device placed remotely in the hostile environment of the body. It had to function as designed for years and must not, if it would fail for any reason, cause injury to the patient.”49 When one considers the technical challenges with the potential for both harmful effects and lack of clinical benefit, it comes as no surprise that many leading medical and engineering authorities, including Lown himself, challenged this novel and original idea.50 Nevertheless, Mirowski and Mower, ultimately working with Dr Stephen Heilman and his small company, Medrad (later, Intec Systems, a subsidiary of Medrad), persevered in their project, overcoming many obstacles, from the enormous to the small. They finally achieved their goal. In February 1980, after 11 years of development, the first internal cardiac defibrillator was implanted in a patient at the Johns Hopkins Hospital in Baltimore by Levi Watkins, the cardiothoracic surgeon, and Philip Reid, the cardiac electrophysiologist. After the third patient implantation, the device also included cardioversion. The cardioversion-defibrillation device obtained Food and Drug Administration approval in 1985. Soon after, antitachycardia pacing was added. The Food and Drug Administration approval ended a century-long era of investigation, description of basic mechanisms of arrhythmias, and attempts at resuscitation of the dead that finally culminated in an implantable device that safely and effectively aborted sudden cardiac death. The ICD device continued to improve and has now been developed to the point that it can be used virtually at any time and in any place to treat ventricular arrhythmias, if needed. The dedication of many individuals and groups has made this possible. Unfortunately, the space limitation for this article prevents us from mentioning all those who have and still are contributing to the developments in this field. Finally, we should note that an implantable atrial defibrillator was also developed,51 but its use is limited by the pain associated with delivered therapy.
Little has changed in the technique of cardioversion since Lown’s article in the early 1960s. Progress has been made in reducing the already low associated complication rate and in understanding the factors responsible for success. Successful cardioversion or defibrillation occurs when a shock with sufficient current density reaches the myocardium. Because the maximum energy stored in the capacitor is fixed, the principal determinant of current density is transthoracic impedance to DC discharge. The factors influencing transthoracic impedance that can be modified by the technique of cardioversion include the interface between the electrode and skin, the electrode size, and the electrode placement. Although a variety of chest placements have been used, there are 2 conventional positions for the electrode paddles: anteroposterior and anterolateral. In the anterolateral position, paddles are placed between the ventricular apex and the right infraclavicular area, whereas in the anteroposterior position, one paddle is placed over the sternum and the second interscapularly. Lown originally advocated that the anteroposterior position is superior because it requires less energy to reverse AF.52 Some studies have confirmed this notion,53,54 whereas others have shown no advantage to either paddle position.55,56 Because only ≈4% to 5% of the shocking energy actually reaches the heart,57 minor deviation of this electric field probably has little effect on the final outcome. In today’s era of biphasic waveforms, the position of the paddles most likely plays an even smaller role.
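As a rough illustration of why transthoracic impedance matters, the following sketch computes the energy stored in a defibrillator capacitor (E = 1/2 C V^2) and the peak current delivered into chests of different impedance. The capacitance, charge voltage, and impedance values are illustrative assumptions, not figures taken from this article or from any particular device.

# Illustrative sketch: stored energy and peak current of a DC defibrillator
# discharge. All component values are assumed for illustration only.

def stored_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    """Energy stored in the capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

def peak_current_amps(voltage_volts: float, impedance_ohms: float) -> float:
    """Peak current at the start of discharge: I = V / R (inductance ignored)."""
    return voltage_volts / impedance_ohms

if __name__ == "__main__":
    capacitance = 32e-6   # 32 microfarads, an assumed order of magnitude
    voltage = 3500.0      # charge voltage in volts, assumed
    print(f"Stored energy: {stored_energy_joules(capacitance, voltage):.0f} J")
    # Lower transthoracic impedance means higher peak current for the same charge.
    for impedance in (25.0, 50.0, 100.0):   # ohms, an assumed plausible range
        print(f"Impedance {impedance:5.0f} ohm -> peak current "
              f"{peak_current_amps(voltage, impedance):5.1f} A")

With these assumed values the stored energy comes out near 200 J, which illustrates why lowering impedance at the electrode-skin interface, rather than raising stored energy, is the practical way to increase the current that actually reaches the myocardium.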
The size of the electrodes through which the shock is delivered has been shown to significantly influence the transthoracic impedance.58–60 A larger electrode leads to lower impedance and higher current, but an increase in size of the electrode beyond the optimal size leads to a decrease in current density.61 In humans, paddle electrode size with a diameter between 8 and 12 cm appears to be optimal.62,63 Improved skin-to-electrode contact also leads to reduction of transthoracic impedance and an increase in the success rate. Hand-held paddles may be more effective than self-adhesive patch electrodes, perhaps because of better skin contact.64 In addition, the usage of non–salt-containing gels has been associated with an increase in transthoracic resistance.65
Gurvich was the first to demonstrate the superiority of the biphasic waveform over the monophasic in dogs in 1967.66 Most of the external defibrillators in the Soviet Union from the early 1970s used biphasic waveforms,67 which are known in Russia as the Gurvich-Venin waveform. It took much longer for the West to realize the benefit of the biphasic waveform over the original Lown monophasic waveform. The first experiments comparing the monophasic and biphasic waveforms for transthoracic defibrillation were done independently by Schuder et al in the 1980s.68,69 Ventritex’s Cadence V-100, approved by the Food and Drug Administration in 1993, was the first ICD that used a biphasic waveform. Curiously, this waveform was first used in ICDs and only a few years later in external defibrillators. The efficacy of an ICD is limited by the maximum stored energy. In their attempt to limit the device size, manufacturers of the ICD finally chose the more effective biphasic waveform. Although the Gurvich-Venin biphasic waveform was superior to the monophasic waveform, its requirement for an inductor precluded major reduction in size for use in ICDs. It was the work of John Schuder47 and also Raymond Ideker,70 then at Duke University, on optimization of biphasic waveforms that made miniaturization of implantable defibrillators possible. After 2000, most defibrillators developed for either external or internal use were “biphasic” devices, meaning that they reverse polarity 5 to 10 ms after the discharge begins. The biphasic waveform has been shown in humans to defibrillate both AF and VF more effectively than monophasic waveform.71–75 Despite the clear superiority of the biphasic waveform, the recommended initial shock energy remains unclear. The 2006 American College of Cardiology/American Heart Association/European Society of Cardiology guidelines on the management of AF recommend starting at 200 J with a monophasic waveform. “A similar recommendation to start with 200 J applies to biphasic waveforms, particularly when cardioverting patients with AF of long duration.”76 The American Heart Association Advanced Cardiac Life Support guidelines recommend initially defibrillating VF with the use of a 360-J monophasic shock or a default 200-J biphasic shock if the type of the biphasic waveform is unknown.
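To make the waveform terminology concrete, the sketch below generates a monophasic truncated-exponential pulse and a biphasic one that reverses polarity partway through the discharge, as the devices described above do. The time constant, durations, and switch-over point are assumptions chosen only to produce a recognizable shape; they are not taken from any specific defibrillator.

# Illustrative sketch of defibrillation waveform shapes (assumed parameters).
import math

def truncated_exponential(v0, tau_ms, duration_ms, step_ms=0.1):
    """Capacitor discharge V(t) = V0 * exp(-t/tau), truncated at duration_ms."""
    n = int(duration_ms / step_ms)
    return [v0 * math.exp(-(i * step_ms) / tau_ms) for i in range(n)]

def biphasic(v0, tau_ms, phase1_ms, phase2_ms, step_ms=0.1):
    """First phase positive, second phase negative, continuing the same decay."""
    phase1 = truncated_exponential(v0, tau_ms, phase1_ms, step_ms)
    phase2 = [-v for v in truncated_exponential(phase1[-1], tau_ms, phase2_ms, step_ms)]
    return phase1 + phase2

if __name__ == "__main__":
    v0, tau = 1.0, 7.0                 # normalized amplitude, time constant in ms (assumed)
    mono = truncated_exponential(v0, tau, 10.0)
    bi = biphasic(v0, tau, 6.0, 4.0)   # polarity reverses 6 ms into the discharge
    print(f"monophasic samples: {len(mono)}, biphasic samples: {len(bi)}")
    print(f"voltage just before/after reversal: {bi[59]:+.2f} / {bi[60]:+.2f}")

Plotting the two lists makes the difference in shape obvious; the clinical superiority of the biphasic shape discussed above is an experimental finding, not something this sketch itself demonstrates.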
Besides the waveform shape, success in restoring normal sinus rhythm is related directly to the type and duration of the arrhythmia. Successful termination of organized tachycardias requires less energy than disorganized rhythms such as polymorphic VT, AF, or VF. Similarly, tachycardias of shorter duration have higher immediate conversion success rates. For instance, the overall success rate in restoring sinus rhythm in patients with AF is ≥90% when the arrhythmia is of <1 year’s duration compared with 50% when AF has been present for >5 years.77
The risks associated with DC cardioversion are related mainly to inadvertent initiation of new tachyarrhythmias, the unmasking of bradycardia, and postshock thromboembolism. More than 25% of patients have bradycardia immediately after cardioversion, and this incidence is higher in patients with underlying sinus node dysfunction.78 Ventricular arrhythmias are uncommon after cardioversion unless an unsynchronized shock was applied, VT previously existed,79,80 or digitalis toxicity was present. In the latter instance, DC cardioversion is contraindicated. A major risk associated with cardioversion is thromboembolism. Thromboembolic events are more likely to occur in patients with AF who have not been anticoagulated adequately before cardioversion. The incidence varies and has been reported to be between 1% and 7%.81,82 In a large series, the incidence was reduced to 0.8% from 5.3% with proper anticoagulation.81
Nevertheless, the efficacy and safety of cardioversion in its current form have withstood the test of time, and it continues to be used widely by clinicians as the most frequent approach to restoring sinus rhythm. This success, associated with a very favorable risk profile, has initiated a trend toward wider use of cardioversion/defibrillation not only by medical personnel but also by the general public. Although portable automatic external defibrillators have existed since 1979,83 the accumulation of clinical studies confirming their safety, efficacy, and diagnostic accuracy has recently prompted several US federal initiatives to expand public access to defibrillators.84–87
It is hard to imagine the changes that the future may bring to a technique that has changed so little over the last several decades. Progress usually occurs when light is shed on the unknown. Clearly, as we more fully understand all the intricacies of fibrillation and defibrillation, advances in this field will be made.
We ultimately need to prevent sudden cardiac death in a more effective manner. Currently, we are only partially successful in this task.88 By far, the vast majority of sudden cardiac death episodes occur in subjects without any identifiable or recognized heart disease. Our current attention is focused only on the relatively small percentage of patients with identifiable or recognized risk factors for sudden cardiac death, mainly subjects with structural heart disease. It is obvious that we do not have an effective solution for the largest part of the population at risk. Further expansion of defibrillation in public spaces is needed. Early warning systems detecting the location of cardiac arrest victims and rapid use of a nearby defibrillator should be developed.89
Our success in preventing sudden cardiac death will depend on our ability to identify the subjects at risk for future events and/or to reduce the adverse effects and risks that are associated with our current treatment strategies. At the present time, in subjects without clear and identifiable risk markers, we are unable to predict who will suffer from sudden cardiac death. Hence, it is necessary to focus our attention on improving the risk profile of our most effective available treatment for sudden cardiac death: defibrillation. For these subjects, only by reducing the risk of the therapy without affecting the quality of life can we improve the risk-benefit ratio and expand the use of cardioversion/defibrillation to combat this serious problem effectively. Eventually, the use of defibrillation may be similar to the current use of seat belts. If the risks are sufficiently low and major inconveniences are avoided, there would be a good reason to expand their use to populations at much lower relative risk for sudden cardiac death.
In this regard, several areas of potential improvement can be identified. The continued development of a less invasive initial implantation procedure that can also avoid intravascular housing of the leads and the device should be pursued. Already, prototypes of an ICD with subcutaneous leads whose implantation does not require intravenous access have been designed.90 Their approval is currently under review. Further improvement in the technology of wearable vest defibrillators can result in even better outcomes with less risk. Having the device on the human body rather than in it will eliminate the risks associated with implantation and will avoid all future complications associated with maintaining the device in the intravascular and intracardiac space. The device will have to be much smaller and less cumbersome than currently available wearable vests to avoid interference with daily activities. It would still have to provide accurate diagnosis and safe and effective treatment of lethal arrhythmias. The benefits of obviating invasive implantation and having a device that will completely eliminate known adverse issues associated with the presence of leads would be indispensable.
Another very important area for future improvement would be to further reduce the defibrillation threshold. This would serve the ultimate goal of eventually eliminating pain, anesthesia, and sedation during shocks, if possible. To achieve this, several different strategies, perhaps in combination with each other, will be used. Current research points toward the direction this is already taking. In all likelihood, more effective cardioversion/defibrillation waveforms will be used. Shocks from ≥2 sites simultaneously or sequentially will further improve cardioversion/defibrillation effectiveness. In addition, combination of shocks with cardiac pacing may prove particularly useful. We already know that pacing can influence and terminate reentrant or triggered arrhythmias.91 Work on animal models and humans on the mechanisms of VF and AF suggests the presence of 1 or more drivers92–95 that may make the strategy of combining shocks with pacing plausible. The hope would be that this combination will result in the need for less energy to restore normal sinus rhythm. This would certainly benefit internal as well as external cardioversion/defibrillation. Clearly, this approach requires more work on the mechanisms of these arrhythmias and the technology used to cardiovert and defibrillate them. Just as in the past, dedicated individuals and teams will be needed to fully solve the puzzles of fibrillation and defibrillation. It may be a while before we come close to this goal, but if the past is the harbinger of the future, then we look forward to the future with great optimism.
As we reviewed the beginnings and subsequent development of defibrillation and cardioversion for this article, we were surprised to learn how much seminal work had been done behind the Iron Curtain that was almost completely unknown in the West. We can only speculate on the reasons for this, but the final result was that for too many years, humanity was deprived of life-saving treatment that should have been available much earlier. It was fortunate that Barouh Berkovits bridged the gap between East and the West by making the DC transthoracic cardioverter/defibrillator available to Dr Bernard Lown, an interested and dedicated clinician. However, it took another 4 decades for both West and East to merge on the uniform use of biphasic shock waveforms for external as well as internal defibrillators. It seems obvious to us that if we can learn anything from this history, it would be to facilitate cooperation and avoid the barriers that have existed and even still exist among us. Whatever those barriers may be, they are, after all, only made by humans.
The authors wish to thank the following individuals with whom personal communication by 1 or more of the authors made this article possible: A.A. Jennifer Adgey; Sidney Alexander; Raghavan Amarasingham; Barouh Berkovits; Amy Beeman; Betsy Bogdansky; Ian Clement; Leonard Dreifus; Edwin Duffin, Jr; John Fisher; Gregory Flaker; Bruce Fye; John Geddes; Boris Gorbunov; Robert J. Hall; M. Stephen Heilman; Raymond Ideker; James Jude; Alan Kadish; Claudia Kenney; Richard Kerber; G. Guy Knickerbocker; Bernard Kosowsky; Peter Kowey; Mark Kroll; Samuel Levy; Bernard Lown; Frank Marcus; Morton Mower; John Muller; Michael Orlov; Phillip Podrid; Christine Riegel; Jeremy Ruskin; Ariella Rosengard; Vikas Saini; John Schuder; Hein Wellens; Roger Winkle; Fred Zacouto; and special thanks to Jayakumar Sahadevan.
Sources of Funding
This work was supported in part by grants from the Jennie Zoline Foundation, Blue Dot Foundation, Gemstone Foundation, and National Institutes of Health/National Heart, Lung, and Blood Institute grants HL067322 and HL074283. | 1 | 9 |
Microwaves can be used to transmit power over long distances, and post-World War II research examined the possibilities. In the 1970s and early 1980s NASA studied Solar Power Satellite (SPS) systems: large orbiting solar arrays that would beam power down to the Earth's surface via microwaves.
Van Allen radiation belt
The presence of a radiation belt had been theorized before the Space Age, and the belt's existence was confirmed by the Explorer I (January 31, 1958) and Explorer III missions under Dr. James Van Allen. The trapped radiation was first mapped out by Explorer IV and Pioneer III.
FM radio gained a stereo capability in the early 1960s: the pilot-tone multiplex system encodes two audio channels within a single frequency-modulated signal while remaining compatible with mono receivers.
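The pilot-tone multiplex idea can be sketched numerically: the composite signal that frequency-modulates the carrier contains the mono-compatible sum (L+R), a 19 kHz pilot tone, and the difference (L-R) as double-sideband suppressed-carrier modulation on a 38 kHz subcarrier locked to twice the pilot. The sketch below builds such a composite signal; the 0.45/0.45/0.1 amplitude weights and the sample rate are illustrative assumptions rather than the exact broadcast specification.

# Illustrative sketch of an FM stereo pilot-tone multiplex baseband signal.
import math

SAMPLE_RATE = 192_000          # Hz, high enough to represent the 38 kHz subcarrier
PILOT_HZ = 19_000              # pilot tone
SUBCARRIER_HZ = 2 * PILOT_HZ   # 38 kHz, phase-locked to the pilot

def stereo_multiplex(left, right):
    """Return the composite baseband: (L+R) + pilot + (L-R) DSB-SC at 38 kHz."""
    composite = []
    for n, (l, r) in enumerate(zip(left, right)):
        t = n / SAMPLE_RATE
        pilot = 0.1 * math.sin(2 * math.pi * PILOT_HZ * t)
        difference = (l - r) * math.sin(2 * math.pi * SUBCARRIER_HZ * t)
        composite.append(0.45 * (l + r) + pilot + 0.45 * difference)
    return composite

if __name__ == "__main__":
    # One millisecond of a 1 kHz tone on the left channel only.
    n_samples = SAMPLE_RATE // 1000
    left = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(n_samples)]
    right = [0.0] * n_samples
    mpx = stereo_multiplex(left, right)
    print(f"{len(mpx)} composite samples, peak level {max(abs(x) for x in mpx):.2f}")

A mono receiver simply low-pass filters the composite and hears L+R, which is why the scheme is backward compatible; a stereo receiver uses the pilot to regenerate the 38 kHz subcarrier and recover L-R.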
On December 29, 1949 KC2XAK of Bridgeport, Connecticut became the first UHF television station to operate on a regular daily schedule.
In Britain, UHF television began with the launch of BBC TWO in 1964. BBC ONE and ITV soon followed, and colour was introduced on UHF only in 1967 - 1969. Today all British terrestrial television channels (both analog and digital) are on UHF.
The Federal Communications Commission (FCC) is an independent United States government agency, created, directed, and empowered by Congressional statute.
The FCC was established by the Communications Act of 1934 as the successor to the Federal Radio Commission and is charged with regulating all non-Federal Government use of the radio spectrum (including radio and television broadcasting), and all interstate telecommunications (wire, satellite and cable) as well as all international communications that originate or terminate in the United States. The FCC took over wire communication regulation from the Interstate Commerce Commission. The FCC's jurisdiction covers the 50 states, the District of Columbia, and U.S. possessions.
Organization The FCC is directed by five Commissioners appointed by the President and confirmed by the Senate for 5-year terms, except when filling an unexpired term. The President designates one of the Commissioners to serve as Chairperson. Only three Commissioners may be members of the same political party. None of them can have a financial interest in any Commission-related business.
As the chief executive officer of the Commission, the Chairman delegates management and administrative responsibility to the Managing Director. The Commissioners supervise all FCC activities, delegating responsibilities to staff units and Bureaus. The current FCC Chairman is Michael Powell, son of Secretary of State Colin Powell. The other four current Commissioners are Kathleen Abernathy, Michael Copps, Kevin Martin, and Jonathon Adelstein.
History
Report on Chain Broadcasting
In 1940 the Federal Communications Commission issued the "Report on Chain Broadcasting." The major point in the report was the breakup of NBC (See American Broadcasting Company), but there were two other important points. One was network option time, the culprit here being CBS. The report limited the amount of time during the day, and what times the networks may broadcast. Previously a network could demand any time it wanted from an affiliate. The second concerned artist bureaus. The networks served as both agents and employees of artists, which was a conflict of interest the report rectified.
Allocation of television stations
The Federal Communications Commission assigned television the Very High Frequency (VHF) band and gave TV channels 1-13. The 13 channels could only accommodate 400 stations nationwide and could not accommodate color in its state of technology in the early 1940s. So in 1944 CBS proposed to convert all of television to the Ultra High Frequency (UHF) band, which would have solved the frequency and color problem. There was only one flaw in the CBS proposal: everyone else disagreed. In 1945 and 1946 the Federal Communications Commission held hearings on the CBS plan. RCA said CBS wouldn't have its color system ready for 5-10 years. CBS claimed it would be ready by the middle of 1947. CBS also gave a demonstration with a very high quality picture. In October of 1946 RCA presented a color system of inferior quality which was partially compatible with the present VHF black and white system. In March 1947 the Federal Communications Commission said CBS would not be ready, and ordered a continuation of the present system. RCA promised its electronic color system would be fully compatible within five years; in 1947 an adaptor was still required to see color programs in black and white on a black and white set.
In 1945 the Federal Communications Commission moved FM radio to a higher frequency. The Federal Communications Commission also allowed simulcasting of AM programs on FM stations. Regardless of these two disadvantages, CBS placed its bets on FM and gave up some TV applications. CBS had thought TV would be moved according to its plan and thus delayed. Unfortunately for CBS, FM was not a big moneymaker and TV was. That year the Federal Communications Commission set 150 miles as the minimum distance between TV stations on the same channel.
There was interference between TV stations in 1948 so the Federal Communications Commission froze the processing of new applications for TV stations. On September 30, 1948, the day of the freeze, there were thirty-seven stations in twenty-two cities and eighty-six more were approved. Another three hundred and three applications were sent in and not approved. After all the approved stations were constructed, or weren't, the distribution was as follows: New York and Los Angeles, seven each; twenty-four other cities had two or more stations; most cities had only one including Houston, Kansas City, Milwaukee, Pittsburgh, and St. Louis. A total of just sixty-four cities had television during the freeze, and only one-hundred-eight stations were around. The freeze was for six months only, initially, and was just for studying interference problems. Because of the Korean Police Action, the freeze wound up being three and one half years. During the freeze, the interference problem was solved and the Federal Communications Commission made a decision on color TV and UHF. In October of 1950 the Federal Communications Commission made a pro-CBS color decision for the first time. The previous RCA decisions were made while Charles Denny was chairman. He later resigned in 1947 to become an RCA vice president and general consel. The decision approved CBS' mechanical spinning wheel color TV system, now able to be used on VHF, but still not compatible with black-and-white sets.
RCA, with a new compatible system that was of comparable quality to CBS' according to TV critics, appealed all the way to the U.S. Supreme Court and lost in May, 1951, but its legal action did succeed in toppling CBS' color TV system, as during the legal battle, many more black-and-white television sets were sold. When CBS did finally start broadcasting using its color TV system in mid-1951, most American television viewers already had black-and-white receivers that were incompatible with CBS' color system. In October of 1951 CBS was ordered to stop work on color TV by the National Production Authority, supposedly to help the situation in Korea. The Authority was headed by a lieutenant of William Paley, the head of CBS.
The Federal Communications Commission, under chairman Wayne Coy, issued its Sixth Report and Order in early 1952. It established seventy UHF channels (14-83) providing 1400 new potential stations. It also set aside 242 stations for education, most of them in the UHF band. The Commission also added 220 more VHF stations. VHF was reduced to 12 channels with channel 1 being given over to other uses and channels 2-12 being used solely for TV, this to reduced interference. This ended the freeze. In March of 1953 the House Committee on Interstate and Foreign Commerce held hearings on color TV. RCA and the National Television Systems Committee, NTSC, presented the RCA system. The NTSC consisted of all of the major television manufacturers at the time. On March 25, CBS president Frank Stanton conceded it would be "economically foolish" to pursue its color system and in effect CBS lost.
December 17, 1953 the Federal Communications Commission reversed its decision on color and approved the RCA system. Ironically, color didn't sell well. In the first six months of 1954 only 8,000 sets were sold, there were 23,000,000 black and white sets. Westinghouse made a big, national push and sold thirty sets nationwide. The sets were big, expensive and didn't include UHF.
The problem was that UHF stations would not be successful unless people had UHF tuners, and people would not voluntarily pay for UHF tuners unless there were UHF broadcasters. Of the 165 UHF stations that went on the air between 1952 and 1959, 55% went off the air. Of the UHF stations on the air, 75% were losing money. UHF's problems were the following:(1) technical inequality of UHF stations as compared with VHF stations; (2) intermixture of UHF and VHF stations in the same market and the millions of VHF only receivers; (3) the lack of confidence in the capabilities of and the need for UHF television. Suggestions of de-intermixture (making some cities VHF only and other cities UHF only) were not adopted, because most existing sets did not have UHF capability. Ultimately the FCC required all TV sets to have UHF tuners. However over four decades later, UHF is still considered inferior to VHF, despite cable television, and ratings on VHF channels are generally higher than on UHF channels.
The allocation between VHF and UHF in the 1950s, and the lack of UHF tuners is entirely analogous to the dilemma facing digital television of high definition television fifty years later.
Regulatory powers The Federal Communications Commission has one major regulatory weapon, revoking licenses, but short of that has little leverage over broadcast stations. It is reluctant to do this since it operates in a near vacuum of information on most of the tens of thousands of stations whose licences are renewed every three years. Broadcast licenses are supposed to be renewed if the station met the "public interest, convenience, or necessity." The Federal Communications Commission rarely checked except for some outstanding reason, burden of proof would be on the compaintant. Fewer than 1% of station renewals are not immediately granted, and only a small fraction of those are actually denied.
Note: Similar authority for regulation of Federal Government telecommunications is vested in the National Telecommunications and Information Administration (NTIA).
Source: from Federal Standard 1037C
See also: concentration of media ownership, Fairness Doctrine, frequency assignment, open spectrum
There was an urgent need during radar development in World War II for a microwave generator that worked in shorter wavelengths - around 10cm rather than 150cm - available from generators of the time. In 1940, at Birmingham University in the UK, John Randall and Harry Boot produced a working prototype of the cavity magnetron, and soon managed to increase its power output 100-fold. In August 1941, the first production model was shipped to the United States.
FM radio is a broadcast technology invented by Edwin Howard Armstrong that uses frequency modulation to provide high-fidelity broadcast radio sound.
W1XOJ was the first FM radio station, granted a construction permit by the FCC in 1937. On January 5, 1940 FM radio was demonstrated to the FCC for the first time. FM radio was assigned the 42 to 50 MHz band of the spectrum in 1940.
After World War II, the FCC moved FM to the frequencies between 88 and 106 MHz on June 27, 1945, making all prewar FM radios worthless. This action severely set back the public confidence in, and hence the development of, FM radio. On March 1, 1945 W47NV began operations in Nashville, Tennessee becoming the first modern commercial FM radio station.
Television
Television is a telecommunication system for broadcasting and receiving moving pictures and sound over a distance. The term has come to refer to all the aspects of television programming and transmission as well. The televisual has become synonymous with postmodern culture. The word television is a hybrid word, coming from both Greek and Latin. "Tele-" is Greek for "far", while "-vision" is from the Latin "visio", meaning "vision" or "sight".
History
Paul Gottlieb Nipkow proposed and patented the first electromechanical television system in 1884.
A. A. Campbell Swinton wrote a letter to Nature on the 18th June 1908 describing his concept of electronic television using the cathode ray tube invented by Karl Ferdinand Braun. He lectured on the subject in 1911 and displayed circuit diagrams.
A semi-mechanical analogue television system was first demonstrated in London in February 1924 by John Logie Baird and a moving picture by Baird on October 30, 1925. The first long distance public television broadcast was from Washington, DC to New York City and occurred on April 7, 1927. The image shown was of then Commerce Secretary Herbert Hoover. A fully electronic system was demonstrated by Philo Taylor Farnsworth in the autumn of 1927. The first analogue service was WGY, Schenectady, New York inaugurated on May 11, 1928. The first British Television Play, "The Man with the Flower in his Mouth", was transmitted in July 1930. CBS's New York City station began broadcasting the first regular seven days a week television schedule in the U. S. on July 21, 1931. The first broadcast included Mayor James J. Walker, Kate Smith, and George Gershwin. The first all-electronic television service was started in Los Angeles, CA by Don Lee Broadcasting. Their start date was December 23, 1931 on W6XAO - later KTSL. Los Angeles was the only major U. S. city that avoided the false start with mechanical television.
In 1932 the BBC launched a service using Baird's 30-line system and these transmissions continued until 11th September 1935. On November 2, 1936 the BBC began broadcasting a dual-system service, alternating on a weekly basis between Marconi-EMI's high-resolution (405 lines per picture) service and Baird's improved 240-line standard from Alexandra Palace in London. Six months later, the corporation decided that Marconi-EMI's electronic picture gave the superior picture, and adopted that as their standard. This service is described as "the world's first regular high-definition public television service", since a regular television service had been broadcast earlier on a 180-line standard in Germany. The outbreak of the Second World War caused the service to be suspended. TV transmissions only resumed from Alexandra Palace in 1946.
The first live transcontinental television broadcast took place in San Francisco, California from the Japanese Peace Treaty Conference on September 4, 1955.
Programming is broadcast on television stations (sometimes called channels). At first, terrestrial broadcasting was the only way television could be distributed. Because bandwidth was limited, government regulation was normal. In the US, the Federal Communications Commission allowed stations to broadcast advertisements, but insisted on public service programming commitments as a requirement for a license. By contrast, the United Kingdom chose a different route, imposing a television licence fee (effectively a tax) to fund the BBC, which had public service as part of its Crown Charter. Development of cable and satellite means of distribution in the 1970s pushed businessmen to target channels towards a certain audience, and enabled the rise of subscription-based television channels, such as HBO and Sky. Practically every country in the world now has developed at least one television channel. Television has grown up all over the world, enabling every country to share aspects of their culture and society with others.
TV standards
See broadcast television systems.
There are many means of distributing television broadcasts, including both analogue and digital versions of:
* Terrestrial television
* Satellite television
* Cable television
* MMDS (Wireless cable)
TV aspect ratio
All of these early TV systems shared the same aspect ratio of 4:3, which was chosen to match the Academy Ratio used in cinema films at the time. This ratio was also square enough to be conveniently viewed on round Cathode Ray Tubes (CRTs), which were all that could be produced given the manufacturing technology of the time; today's CRT technology allows the manufacture of much wider tubes. However, due to the negative heavy metal health effects associated with disposal of CRTs in landfills, and the space-saving attributes of flat screen technologies that lack the aspect ratio limitations of CRTs, CRTs are slowly becoming obsolete.
In the 1950s movie studios moved towards wide screen aspect ratios such as Cinerama in an effort to distance their product from television.
The switch to digital television systems has been used as an opportunity to change the standard television picture format from the old ratio of 4:3 (1.33:1) to an aspect ratio of 16:9 (1.78:1). This enables TV to get closer to the aspect ratio of modern wide-screen movies, which range from 1.85:1 to 2.35:1. The 16:9 format was first introduced on "widescreen" DVDs. DVD provides two methods for transporting wide-screen content, the better of which uses what is called anamorphic wide-screen format. This format is very similar to the technique used to fit a wide-screen movie frame inside a 1.33:1 35mm film frame. The image is squashed horizontally when recorded, then expanded again when played back. The U.S. ATSC HDTV system uses straight wide-screen format, no image squashing or expanding is used.
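One way to see how anamorphic storage works is to compute the pixel aspect ratio implied by squeezing a 16:9 picture into a 4:3 frame. The short sketch below does that arithmetic; the 720x480 frame size is the common NTSC DVD raster and is used here only as a familiar example, not as a claim about any particular disc.

# Illustrative arithmetic for anamorphic wide-screen storage.
from fractions import Fraction

def pixel_aspect_ratio(display_ratio, width_px, height_px):
    """How much wider than tall each stored pixel must be displayed so that a
    width_px x height_px frame fills a picture of the given display ratio."""
    return display_ratio / Fraction(width_px, height_px)

if __name__ == "__main__":
    width, height = 720, 480   # common NTSC DVD raster, used only as an example
    for name, ratio in (("4:3", Fraction(4, 3)), ("16:9 anamorphic", Fraction(16, 9))):
        par = pixel_aspect_ratio(ratio, width, height)
        print(f"{name:>16}: pixel aspect ratio {par} (about {float(par):.3f})")

The same stored frame therefore yields either picture shape; the player or display stretches the pixels horizontally by the appropriate factor on playback.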
There is no technical reason why the introduction of digital TV demands this aspect ratio change; the change was made largely for marketing reasons.
Aspect ratio incompatibility
Displaying a wide-screen original image on a conventional aspect television screen presents a considerable problem since the image must be shown either:
* in "letterbox" format, with black stripes at the top and bottom * with part of the image being cropped, usually the extreme left and right of the image being cut off (or in "pan and scan", parts selected by an operator) * with the image horizontally compressed
A conventional aspect image on a wide screen television can be shown:
* with black vertical bars to the left and right
* with upper and lower portions of the image cut off
* with the image horizontally distorted
A common compromise is to shoot or create material at an aspect ratio of 14:9, and to lose some image at each side for 4:3 presentation, and some image at top and bottom for 16:9 presentation.
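The compromises listed above reduce to simple arithmetic. Given a source and a screen aspect ratio, the sketch below reports how much of the screen is left to black bars when the whole image is shown, or how much of the image is lost when cropping to fill the screen instead; the 14:9 compromise can be checked the same way. The ratios used are only the examples discussed in this section.

# Illustrative aspect-ratio conversion arithmetic.

def conversion_loss(source_ratio, screen_ratio):
    """Fraction lost when converting between aspect ratios: if the whole image is
    shown, this is the fraction of the screen under black bars; if the screen is
    filled, it is the fraction of the image cropped away. The two are the same number."""
    wider = max(source_ratio, screen_ratio)
    narrower = min(source_ratio, screen_ratio)
    return 1.0 - narrower / wider

if __name__ == "__main__":
    cases = [("16:9 film on a 4:3 set", 16 / 9, 4 / 3),
             ("4:3 programme on a 16:9 set", 4 / 3, 16 / 9),
             ("14:9 compromise on a 4:3 set", 14 / 9, 4 / 3),
             ("14:9 compromise on a 16:9 set", 14 / 9, 16 / 9)]
    for name, src, scr in cases:
        print(f"{name:32} bars or crop: {conversion_loss(src, scr):5.1%}")

For these examples, a 16:9 film on a 4:3 set loses a quarter of the screen to bars (or a quarter of the picture to cropping), while the 14:9 compromise splits the loss into roughly an eighth on each type of set.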
Horizontal expansion has advantages in situations in which several people are watching the same set; it compensates for watching at an oblique angle.
New developments
* Digital television (DTV)
* High Definition TV (HDTV)
* Pay Per View
* Web TV
* programming on-demand.
TV sets
The earliest television sets were radios with the addition of a television device consisting of a neon tube with a mechanically spinning disk (the Nipkow disk, invented by Paul Gottlieb Nipkow) that produced a red postage-stamp-size image. The first publicly broadcast electronic service was in Germany in March 1935. It had 180 lines of resolution and was only available in 22 public viewing rooms. One of the first major broadcasts involved the 1936 Berlin Olympics. The Germans had a 441-line system in the fall of 1937. (Source: Early Electronic TV)
Television usage skyrocketed after World War II with war-related technological advances and additional disposable income. (1930s TV receivers cost the equivalent of $7000 today (2001) and had little available programming.)
For many years different countries used different technical standards. France initially adopted the German 441 line standard but later upgraded to 819 lines, which gave the highest picture definition of any analogue TV system, approximately four times the resolution of the British 405 line system. Eventually the whole of Europe switched to the 625 line standard, once more following Germany's example. Meanwhile in North America the original 525 line standard was retained.
[Image: A television with a VHF "rabbit ears" antenna and a loop UHF antenna.]
Television in its original and still most popular form involves sending images and sound over radio waves in the VHF and UHF bands, which are received by a receiver (a television set). In this sense, it is an extension of radio. Broadcast television requires an antenna (UK: aerial). This can be an external antenna mounted outside or smaller antennas mounted on or near the television. Typically this is an adjustable dipole antenna called "rabbit ears" for the VHF band and a small loop antenna for the UHF band.
Color television became available on December 30, 1953, backed by the CBS network. The government approved the color broadcast system proposed by CBS, but when RCA came up with a system that made it possible to view color broadcasts in black and white on unmodified old black and white TV sets, CBS dropped their own proposal and used the new one.
European colour television was developed somewhat later, in the 1960s, and was hindered by a continuing division on technical standards. The German PAL system was eventually adopted by West Germany, the UK, Australia, New Zealand, much of Africa, Asia and South America, and most West European countries except France. France produced its own SECAM standard, which was eventually adopted in much of Eastern Europe. Both systems broadcast on UHF frequencies and adopted a higher-definition 625 line system.
Starting in the 1990s, modern television sets diverged into three different trends:
* standalone TV sets;
* integrated systems with DVD players and/or VHS VCR capabilities built into the TV set itself (mostly for small TVs with up to 17" screens; the main idea is to have a complete portable system);
* component systems with a separate big-screen video monitor, tuner, and audio system, which the owner connects together as a high-end home theater system. This approach appeals to videophiles who prefer components that can be upgraded separately.
There are many kinds of video monitors used in modern TV sets. The most common are direct view CRTs for up to 40" (4:3) and 46" (16:9) diagonally. Most big screen TVs (up to over 100") use projection technology. Three types of projection systems are used in projection TVs: CRT based, LCD based, and reflective imaging chip based. Modern advances have brought flat screens to TV that use active matrix LCD or plasma display technology. Flat panel displays are as little as 4" thick and can be hung on a wall like a picture. They are extremely attractive and space-saving but they remain expensive.
Nowadays some TVs include a port to connect peripherals to it or to connect the set to an A/V home network (HAVI), like LG RZ-17LZ10 that includes a USB port, where one can connect a mouse, keyboard and so on (for WebTV, now branded MSN TV).
Even for simple video, there are five standard ways to connect a device. These are as follows:
* Component Video- three separate connectors, with one brightness channel and two color channels (hue and saturation), and is usually referred to as Y, B-Y, R-Y, or Y Pr Pb. This provides for high quality pictures and is usually used inside professional studios. However, it is being used more in home theater for DVDs and high end sources. Audio is not carried on this cable.
* SCART - A large 21 pin connector that may carry Composite video, S-Video or, for better quality, separate red, green and blue (RGB) signals and two-channel sound, along with a number of control signals. This system is standard in Europe but rarely found elsewhere.
* S-Video - two separate channels, one carrying brightness, the other carrying color. Also referred to as Y/C video. Provides most of the benefit of component video, with slightly less color fidelity. Use started in the 1980s for S-VHS, Hi-8, and early DVD players to relay high quality video. Audio is not carried on this cable.
* Composite video - The most common form of connecting external devices, putting all the video information into one stream. Most televisions provide this option with a yellow RCA jack. Audio is not carried on this cable.
* Coaxial or RF (coaxial cable) - All audio channels and picture components are transmitted through one wire and modulated on a radio frequency. Most TVs manufactured during the past 15-20 years accept coaxial connection, and the video is typically "tuned" on channel 3 or 4. This is the type of cable usually used for cable television.
Advertising
From the earliest days of the medium, television has been used as a vehicle for advertising. Since their inception in the USA in the late 1940s, TV commercials have become far and away the most effective, most pervasive, and most popular method of selling products of all sorts. US advertising rates are determined primarily by Nielsen Ratings.
US networks In the US, the three original commercial television networks (ABC, CBS, and NBC) provide prime-time programs for their affiliate stations to air from 8pm-11pm Monday-Saturday and 7pm-11pm on Sunday. (7pm to 10pm, 6pm to 10pm respectively in the Central and Mountain time zones). Most stations procure other programming, often syndicated, off prime time. The FOX Network does not provide programming for the last hour of prime time; as a result, many FOX affiliates air a local news program at that time. Three newer broadcasting networks, The WB, PAX, and UPN, also do not provide the same amount of network programming as so-called traditional networks.
European networks In much of Europe television broadcasting has historically been state dominated, rather than commercially organised, although commercial stations have grown in number recently. In the United Kingdom, the major state broadcaster is the BBC (British Broadcasting Corporation), commercial broadcasters include ITV (Independent Television), Channel 4 and Channel 5, as well as the satellite broadcaster British Sky Broadcasting. Other leading European networks include RAI (Italy), Télévision Française (France), ARD (Germany), RTÉ (Ireland), and satellite broadcaster RTL (Radio Télévision Luxembourg). Euronews is a pan-European news station, broadcasting both by satellite and terrestrially (timesharing on State TV networks) to most of the continent. Broadcast in several languages (English, French, German, Spanish, Russian, etc.) it draws on contributions from State broadcasters and the ITN news network.
Colloquial names
* Telly
* The Tube/Boob Tube
* The Goggle Box
* The Cyclops
* Idiot Box
* List of 'years in television' * Lists of television channels * List of television programs * List of television commercials * List of television personalities * List of television series o List of Canadian television series o List of US television series o List of UK television series * Animation and Animated series * Nielsen Ratings * Home appliances * Reality television * Television network * Video * Voyager Golden Record * V-chip * Wasteland Speech * DVB * Television in the United States
* "Television History" * Early Television Foundation and Museum * Television History site from France * TV Dawn * British TV History Links * UK Television Programmes * aus.tv.history - Australian Television History * TelevisionAU - Australian Television History * Federation Without Television
See also: Charles Francis Jenkins Federation Without Television
Further Reading
TV as social pathogen, opiate, mass mind control, etc.
* Jerry Mander Four Arguments for the Elimination of Television * Marie Winn The Plug-in Drug * Neil Postman Amusing Ourselves to Death * Terence McKenna Food of the Gods * Joyce Nelson The Perfect Machine * Andrew Bushard Federation Without Television: the Blossoming Movement
Renewable energy
Renewable energy is energy from a source which can be managed so that it is not subject to depletion in a human timescale. Sources include the sun's rays, wind, waves, rivers, tides, biomass, and geothermal. Renewable energy does not include energy sources which are dependent upon limited resources, such as fossil fuels and nuclear fission power.
General Information
Most renewable forms of energy, other than geothermal, are in fact stored solar energy. Water power and wind power represent very short-term solar storage, while biomass represents slightly longer-term storage, but still on a very human time-scale, and so renewable within that human time-scale. Fossil fuels, on the other hand, while still stored solar energy, have taken millions of years to form, and so do not meet the definition of renewable.
Renewable energy resources may be used directly as energy sources, or used to create other forms of energy for use. Examples of direct use are solar ovens, geothermal heat pumps, and mechanical windmills. Examples of indirect use in creating other energy sources are electricity generation through wind generators or photovoltaic cells, or production of fuels such as ethanol from biomass (see alcohol as a fuel).
Pros and cons of renewable energy
Renewable energy sources are fundamentally different from fossil fuel or nuclear power plants because of their widespread occurrence and abundance: the sun will 'power' these 'power plants' (meaning sunlight, the wind, flowing water, etc.) for the next 4 billion years. Some renewable sources do not emit any additional carbon dioxide and do not introduce any new risks such as nuclear waste. In fact, one renewable energy source, wood, actively sequesters carbon dioxide while growing.
A visible disadvantage of renewables is their visual impact on local environments. Some people dislike the aesthetics of wind turbines or bring up nature conservation issues when it comes to large solar-electric installations outside of cities. Some people try to utilize these renewable technologies in an efficient and aesthetically pleasing way: fixed solar collectors can double as noise barriers along highways, roof-tops are available already and could even be replaced totally by solar collectors, etc.
Some renewable energy capture systems entail unique environmental problems. For instance, wind turbines can be hazardous to flying birds, while hydroelectric dams can create barriers for migrating fish, a serious problem in the Pacific Northwest that has decimated the numbers of many salmon populations.
Another inherent difficulty with renewables is their variable and diffuse nature (the exception being geothermal energy, which is only accessible where the Earth's crust is thin, such as near hot springs and natural geysers). Because renewable sources deliver relatively low-intensity energy, the "power plants" needed to convert them into usable energy must be spread over large areas. To put 'low-intensity' and 'large area' in perspective: producing 1000 kWh of electricity per month (roughly the total per-capita electricity consumption in some Western countries) would require a home owner in cloudy Europe to install on the order of 100 square meters of solar panels, depending on module efficiency and local insolation. Systematic electrical generation requires reliable overlapping sources or some means of storage on a reasonable scale (pumped-storage hydro systems, batteries, future hydrogen fuel cells, etc.). Because energy storage systems are currently expensive, a small stand-alone system is only economic in rare cases.
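That order-of-magnitude figure can be checked with a back-of-the-envelope calculation. The sketch below estimates the panel area needed to average 1000 kWh per month from assumed values for insolation, module efficiency, and system losses; all three inputs vary widely by site and technology and are marked as assumptions in the code.

# Back-of-the-envelope solar array sizing. All inputs are assumptions.

def required_area_m2(monthly_kwh, peak_sun_hours_per_day, module_efficiency,
                     system_losses=0.25):
    """Panel area needed to average monthly_kwh of output.

    peak_sun_hours_per_day expresses average insolation as equivalent hours of
    1 kW/m^2 sunlight per day; it is strongly site dependent.
    """
    daily_kwh = monthly_kwh / 30.0
    usable_kw_per_m2 = 1.0 * module_efficiency * (1.0 - system_losses)
    return daily_kwh / (peak_sun_hours_per_day * usable_kw_per_m2)

if __name__ == "__main__":
    # Assumed values: a cloudy northern-European site and a mid-2000s module.
    area = required_area_m2(monthly_kwh=1000.0,
                            peak_sun_hours_per_day=2.8,
                            module_efficiency=0.15)
    print(f"Estimated array area: {area:.0f} m^2")

With these assumptions the answer comes out near 100 square meters; sunnier sites or more efficient modules bring it down, but not by an order of magnitude.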
If renewable and distributed generation were to become widespread, electric power transmission and electricity distribution systems would no longer be the main distributors of electrical energy but would operate to balance the electricity needs of local communities. Those with surplus energy would sell to areas needing "top ups".
Renewable energy history
The original energy source for all human activity was the sun via growing plants. Solar energy's main human application throughout most of history has thus been in agriculture and forestry, via photosynthesis.
Wood
Firewood was the earliest manipulated energy source in human history, being used as a thermal energy source through burning, and it is still important in this context today. Burning wood was important for both cooking and providing heat, enabling human presence in cold climates. Special types of wood cooking, such as food dehydration and smoke curing, also enabled human societies to safely store perishable foodstuffs through the year. Eventually, it was discovered that partial combustion in the relative absence of oxygen could produce charcoal, which provided a hotter and more compact and portable energy source. However, this was not a more efficient energy source, because it required a large input of wood to create the charcoal.
Animal Traction
Motive power for vehicles and mechanical devices was originally produced through animal traction. Animals such as horses and oxen not only provided transportation but also powered mills. Animals are still extensively in use in many parts of the world for these purposes.
Water Power
Animal power for mills was eventually supplanted by water power, the power of falling water in rivers, wherever it was exploitable. Direct use of water power for mechanical purposes is today fairly uncommon, but still in use.
Originally, water power in the form of hydroelectricity was the most important source of electrical generation, and it remains an important source today. Throughout most of the history of electricity generation, hydroelectricity has been the only renewable source significantly tapped.
Wind Power
Wind power has been used for several hundred years. It was originally used via large sail-blade windmills with slow-moving blades, such as those seen in the Netherlands and mentioned in Don Quixote. These large mills usually either pumped water or powered small mills. Newer windmills featured smaller, faster-turning, more compact units with more blades, such as those seen throughout the Great Plains. These were mostly used for pumping water from wells. Recent years have seen the rapid development of wind generation farms by mainstream power companies, using a new generation of large, tall wind turbines with two or three immense and relatively slow-moving blades.
Solar power
Solar power as a direct energy source was not captured by mechanical systems until recent human history, but it was captured as an energy source through architecture in certain societies for many centuries. Not until the twentieth century was direct solar input extensively explored via more carefully planned architecture (passive solar), via heat capture in mechanical systems (active solar), or via electrical conversion (photovoltaic). Increasingly today the sun is harnessed for heat and electricity.
The renewable energy movement Renewable energy as an issue was virtually unheard-of before the middle of the twentieth century. There were experiments with passive solar energy, including daylighting, in the early part of the twentieth century, but little beyond what had actually been practiced as a matter of course in some locales for hundreds of years. The renewable energy movement gained awareness, credence and strength with the great burgeoning of interest in environmental affairs in the 1960s, which in turn was largely due to Rachel Carson's 'Silent Spring'.
The first US politician to focus significantly on solar energy was Jimmy Carter, in response to the long term consequences of the 1973 energy crisis. No president since has paid much attention to renewable energy.
Renewable Energy Today Around 80% of energy requirements are focused around heating or cooling buildings and powering the vehicles that ensure mobility (cars, trains, airplanes). This is the core of society's energy requirements. However, most uses of renewable power focus on electricity generation.
Geothermal heat pumps (also called ground-source heat pumps) are a means of extracting heat in the winter or cold in the summer from the earth to heat or cool buildings.
Modern sources of renewable energy There are several types of renewable energy, including the following:
* Solar power.
* Wind power.
* Geothermal energy.
* Electrokinetic energy.
* Hydroelectricity.
* Biomatter, including Biogas Energy.
Smaller-scale sources Of course there are some smaller-scale applications as well:
* Piezoelectric crystals embedded in the sole of a shoe can yield a small amount of energy with each step. Vibration from engines can also stimulate piezoelectric crystals.
* Some watches are already powered by movement of the arm.
* Special antennae can collect energy from stray radio waves or even light (EM radiation).
Renewables as solar energy Most renewable energy sources can trace their roots to solar energy, with the exceptions of geothermal and tidal power. For example, wind is caused by the sun heating the earth unevenly: hot air is less dense, so it rises, causing cooler air to move in to replace it. Hydroelectric power can ultimately be traced to the sun too. When the sun evaporates water from the ocean, the vapor forms clouds that later fall on mountains as rain, which is routed through turbines to generate electricity. The transformation goes from solar energy to potential energy to kinetic energy to electric energy.
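The chain from potential to kinetic to electrical energy can be put in numbers with the basic relation E = mgh. The sketch below uses assumed figures (one cubic meter of water, a 100 m head, 90% conversion efficiency) purely for illustration.

```python
# Energy delivered by water falling through a hydro turbine, E = m * g * h.
# The head, volume and efficiency are illustrative assumptions, not from the text.
g = 9.81                 # gravitational acceleration, m/s^2
mass_kg = 1000.0         # one cubic meter of water
head_m = 100.0           # assumed height of the reservoir above the turbine
efficiency = 0.90        # assumed turbine plus generator efficiency

potential_energy_j = mass_kg * g * head_m
electrical_energy_kwh = potential_energy_j * efficiency / 3.6e6   # 1 kWh = 3.6e6 J

print(f"{electrical_energy_kwh:.3f} kWh per cubic meter of water")   # about 0.25 kWh
```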
Solar energy per se Since most renewable energy is ultimately "solar energy", this term is slightly confusing and is used in two different ways: firstly as a synonym for "renewable energies" as a whole (as in the political slogan "Solar not nuclear") and secondly for the energy that is directly collected from solar radiation. In this section it is used in the latter sense.
There are actually two separate approaches to solar energy, termed active solar and passive solar.
Solar electrical energy For electricity generation, ground-based solar power has serious limitations because of its diffuse and intermittent nature. First, ground-based solar input is interrupted by night and by cloud cover, which means that solar electric generation inevitably has a low capacity factor, typically less than 20%. Also, there is a low intensity of incoming radiation, and converting this to high grade electricity is still relatively inefficient (14% - 18%), though increased efficiency or lower production costs have been the subject of much research over several decades.
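The capacity factor mentioned above is simply the energy actually delivered divided by what the installation would produce running at full rated output all year. A minimal sketch, with assumed figures for a small rooftop array:

```python
# Capacity factor = delivered energy / (nameplate power * hours in the year).
# The array size and annual output below are assumptions for illustration.
nameplate_kw = 3.0            # rated peak output of the array (assumed)
annual_output_kwh = 4200.0    # metered production over one year (assumed)

hours_per_year = 8760
capacity_factor = annual_output_kwh / (nameplate_kw * hours_per_year)

print(f"Capacity factor: {capacity_factor:.1%}")   # about 16%, below the 20% noted above
```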
Two methods of converting the Sun's radiant energy to electricity are the focus of attention. The better-known method uses sunlight acting on photovoltaic (PV) cells to produce electricity. This has many applications in satellites, small devices and lights, grid-free applications, earthbound signaling and communication equipment, such as remote area telecommunications equipment. Sales of solar PV modules are increasing strongly as their efficiency increases and price diminishes. But the high cost per unit of electricity still rules out most uses.
Several experimental PV power plants mostly of 300 - 500 kW capacity are connected to electricity grids in Europe and the USA. Japan has 150 MWe installed. A large solar PV plant was planned for Crete. In 2001 the world total for PV electricity was less than 1000 MWe with Japan as the world's leading producer. Research continues into ways to make the actual solar collecting cells less expensive and more efficient. Other major research is investigating economic ways to store the energy which is collected from the Sun's rays during the day.
Alternatively, many individuals have installed small-scale PV arrays for domestic consumption. Some, particularly in isolated areas, are totally disconnected from the main power grid, and rely on a surplus of generation capacity combined with batteries and/or a fossil fuel generator to cover periods when the cells are not operating. Others in more settled areas remain connected to the grid, using the grid to obtain electricity when solar cells are not providing power, and selling their surplus back to the grid. This works reasonably well in many climates, as the peak time for energy consumption is on hot, sunny days where air conditioners are running and solar cells produce their maximum power output. Many U.S. states have passed "net metering" laws, requiring electrical utilities to buy the locally-generated electricity for price comparable to that sold to the household. Photovoltaic generation is still considerably more expensive for the consumer than grid electricity unless the usage site is sufficiently isolated, in which case photovoltaics become the less expensive.
System problems with solar electric Frequently renewable electricity sources are disadvantaged by regulation of the electricity supply industry which favors 'traditional' large-scale generators over smaller-scale and more distributed generating sources. If renewable and distributed generation were to become widespread, electric power transmission and electricity distribution systems would no longer be the main distributors of electrical energy but would operate to balance the electricity needs of local communities. Those with surplus energy would sell to areas needing "top ups". Some Governments and regulators are moving to address this, though much remains to be done. One potential solution is the increased use of active management of electricity transmission and distribution networks.
Solar thermal electric energy The second method for utilizing solar energy is solar thermal. A solar thermal power plant has a system of mirrors to concentrate the sunlight on to an absorber, the resulting heat then being used to drive turbines. The concentrator is usually a long parabolic mirror trough oriented north-south, which tilts, tracking the Sun's path through the day. A black absorber tube is located at the focal point and converts the solar radiation to heat (about 400°C) which is transferred into a fluid such as synthetic oil. The oil can be used to heat buildings or water, or it can be used to drive a conventional turbine and generator. Several such installations in modules of 80 MW are now operating. Each module requires about 50 hectares of land and needs very precise engineering and control. These plants are supplemented by a gas-fired boiler which ensures full-time energy output. The gas generates about a quarter of the overall power output and keeps the system warm overnight. Over 800 MWe capacity worldwide has supplied about 80% of the total solar electricity to the mid-1990s.
One proposal for a solar electrical plant is the solar tower, in which a large area of land would be covered by a greenhouse made of something as simple as transparent foil, with a tall lightweight tower in the centre, which could also be composed largely of foil. The heated air would rush to and up the centre tower, spinning a turbine. A system of water pipes placed throughout the greenhouse would allow the capture of excess thermal energy, to be released throughout the night and thus providing 24-hour power production. A 200 MWe tower is proposed near Mildura, Australia.
Solar thermal energy Solar energy need not be converted to electricity for use. Many of the world's energy needs are simply for heat: space heating, water heating, process water heating, oven heating, and so forth. The main role of solar energy in the future may be that of direct heating. Much of society's energy need is for heat below 60°C (140°F) - e.g. in hot water systems. A lot more, particularly in industry, is for heat in the range 60 - 110°C. Together these may account for a significant proportion of primary energy use in industrialized nations. The first need can readily be supplied by solar power much of the time in some places, and the second application commercially is probably not far off. Such uses will diminish to some extent both the demand for electricity and the consumption of fossil fuels, particularly if coupled with energy conservation measures such as insulation.
Solar water heating Domestic solar hot water systems were once common in Florida until they were displaced by highly-advertised natural gas. Such systems are today common in the hotter areas of Australia, and simply consist of a network of dark-colored pipes running beneath a window of heat-trapping glass. They typically have a backup electric or gas heating unit for cloudy days. Such systems can actually be justified purely on economic grounds, particularly in some remoter areas of Australia where electricity is expensive.
Solar heat pumps With adequate insulation, heat pumps utilizing the conventional refrigeration cycle can be used to warm and cool buildings, with very little energy input other than energy needed to run a compressor. Eventually, up to ten percent of the total primary energy need in industrialized countries may be supplied by direct solar thermal techniques, and to some extent this will substitute for base-load electrical energy.
Solar ovens Large scale solar thermal powerplants, as mentioned before, can be used to heat buildings, but on a smaller scale solar ovens can be used on sunny days. Such an oven or solar furnace uses mirrors or a large lens to focus the Sun's rays onto a baking tray or black pot which heats up as it would in a standard oven.
Wind Energy Wind turbines have been used for household electricity generation in conjunction with battery storage over many decades in remote areas. Generator units of more than 1 MWe are now functioning in several countries. The power output is a function of the cube of the wind speed, so such turbines require a wind in the range 3 to 25 m/s (11 - 90 km/h), and in practice relatively few land areas have significant prevailing winds. Like solar, wind power requires alternative power sources to cope with calmer periods.
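The cube-law dependence on wind speed follows from the standard rotor power relation P = 1/2 * rho * A * v^3 * Cp. The rotor diameter and power coefficient below are assumptions chosen only to illustrate the scaling, not figures from the text.

```python
import math

# Power available to a wind turbine: P = 0.5 * rho * A * v^3 * Cp, where Cp is the
# power coefficient (Betz limit about 0.59; real machines are lower).
# Rotor size and Cp are assumptions for illustration.
rho = 1.225                    # air density, kg/m^3
rotor_diameter_m = 60.0        # assumed
cp = 0.40                      # assumed power coefficient

area_m2 = math.pi * (rotor_diameter_m / 2) ** 2

def turbine_power_kw(wind_speed_ms):
    """Electrical power in kW at a given wind speed (cut-in and cut-out ignored)."""
    return 0.5 * rho * area_m2 * wind_speed_ms ** 3 * cp / 1000.0

for v in (5, 10, 15):
    print(f"{v} m/s -> {turbine_power_kw(v):.0f} kW")
# Doubling the wind speed multiplies output by eight, which is why siting matters so much.
```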
There are now many thousands of wind turbines operating in various parts of the world, with utility companies having a total capacity of over 39,000 MWe, of which Europe accounts for 75% (as of the end of 2003). Additional wind power is generated by private windmills, both on-grid and off-grid. Germany is the leading producer of wind-generated electricity, with over 14,600 MWe in 2003. In 2003 the U.S.A. had over 6,300 MWe of wind capacity, second only to Germany.
New wind farms and offshore wind parks are being planned and built all over the world. This has been the most rapidly-growing means of electricity generation at the turn of the 21st century and provides a complement to large-scale base-load power stations. Denmark generates over 10% of its electricity with wind turbines, whereas wind turbines account for 0.4% of the total electricity production on a global scale (as of the end of 2002). The most economical and practical size of commercial wind turbines seems to be around 600 kWe to 1 MWe, grouped into large wind farms. Most turbines operate at about 25% load factor over the course of a year, but some reach 35%.
Geothermal Energy Where hot underground steam or water can be tapped and brought to the surface it may be used to generate electricity. Such geothermal power sources have potential in certain parts of the world such as New Zealand, the United States, the Philippines and Italy. The two most prominent areas for this in the United States are the Yellowstone basin and northern California. Iceland produced 170 MWe of geothermal power and heated 86% of all houses in the year 2000. Some 8,000 MWe of capacity is operating overall.
There are also prospects in certain other areas for pumping water underground to very hot regions of the Earth's crust and using the steam thus produced for electricity generation. An Australian startup company, Geodynamics, proposes to build a commercial plant in the Cooper Basin region of South Australia using this technology by 2004.
Water power Energy inherent in water can be harnessed and used, in the forms of kinetic energy or temperature differences.
Electrokinetic energy This type of energy harnesses what happens to water when it is pumped through tiny channels. See electrokinetics (water).
Hydroelectric Energy Hydroelectric energy produces essentially no carbon dioxide, in contrast to burning fossil fuels or gas, and so is not a significant contributor to global warming. Hydroelectric power from potential energy of rivers, now supplies about 715,000 MWe or 19% of world electricity. Apart from a few countries with an abundance of it, hydro capacity is normally applied to peak-load demand, because it is so readily stopped and started. It is not a major option for the future in the developed countries because most major sites in these countries having potential for harnessing gravity in this way are either being exploited already or are unavailable for other reasons such as environmental considerations.
The chief advantage of hydrosystems is their capacity to handle seasonal (as well as daily) high peak loads. In practice the utilization of stored water is sometimes complicated by demands for irrigation which may occur out of phase with peak electrical demands.
Tidal power Harnessing the tides in a bay or estuary has been achieved in France (since 1966) and Russia, and could be achieved in certain other areas where there is a large tidal range. The trapped water can be used to turn turbines as it is released through the tidal barrage in either direction. Worldwide this technology appears to have little potential, largely due to environmental constraints.
Tidal stream power A relatively new technology development, tidal stream generators draw energy from underwater currents in much the same way that wind generators are powered by the wind. The much higher density of water means that a single generator has the potential to provide significant levels of power. Tidal stream technology is at a very early stage of development, though, and will require significantly more research before it becomes a significant contributor to electrical generation needs.
Wave power Harnessing power from wave motion is a possibility which might yield much more energy than tides. The feasibility of this has been investigated, particularly in the UK. Generators either coupled to floating devices or turned by air displaced by waves in a hollow concrete structure would produce electricity for delivery to shore. Numerous practical problems have frustrated progress.
OTEC Ocean Thermal Energy Conversion is a relatively unproven technology; the concept was first proposed by the French engineer Jacques Arsène d'Arsonval in 1881. The difference in temperature between water near the surface and deeper water can be as much as 20°C. The warm water is used to make a liquid such as ammonia evaporate, causing it to expand. The expanding gas forces its way through turbines, after which it is condensed using the colder water, and the cycle can begin again.
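The small temperature difference is what limits OTEC. A minimal sketch of the Carnot upper bound on efficiency, using assumed surface and deep-water temperatures consistent with the 20°C difference quoted above:

```python
# Carnot limit on efficiency: eta = 1 - T_cold / T_hot, with temperatures in kelvin.
# The 25 C / 5 C split is an assumption consistent with the 20 C difference above.
t_warm_k = 25.0 + 273.15    # warm surface water (assumed)
t_cold_k = 5.0 + 273.15     # deep water (assumed)

carnot_limit = 1 - t_cold_k / t_warm_k
print(f"Carnot limit: {carnot_limit:.1%}")   # about 6.7%; real OTEC plants reach only a few percent
```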
Biomass Biomass, also known as biomatter, can be used directly as fuel or to produce liquid biofuel. Agriculturally produced biomass fuels, such as biodiesel, ethanol and bagasse (a byproduct of sugar cane cultivation) are burned in internal combustion engines or boilers.
Liquid biofuel Liquid biofuel is usually bioalcohols -like methanol and ethanol- or biodiesel. Biodiesel can be used in modern diesel vehicles with little or no modification and can be obtained from waste and crude vegetable and animal oil and fats (lipids). In some areas corn, sugarbeets, cane and grasses are grown specifically to produce ethanol (also known as alcohol) a liquid which can be used in internal combustion engines and fuel cells.
Solid biomass Direct use is usually in the form of combustible solids, either firewood or combustible field crops. Field crops may be grown specifically for combustion or may be used for other purposes, and the processed plant waste then used for combustion. Most sorts of biomatter, including dried manure, can actually be burnt to heat water and to drive turbines. Plants partly use photosynthesis to store solar energy, water and CO2. Sugar cane residue, wheat chaff, corn cobs and other plant matter can be, and is, burnt quite successfully. The process releases no net CO2.
Biogas Animal feces (manure) release methane under the action of anaerobic bacteria, and this biogas can be burned to generate electricity. See biogas.
Renewable energy storage systems One of the great problems with renewable energy, as mentioned above, is transporting it in time or space. Since most renewable energy sources are periodic, storage for off-generation times is important, and storage for powering transportation is also a critical issue.
Hydrogen fuel cells Hydrogen as a fuel has been touted lately as a solution in our energy dilemmas. However, the idea that hydrogen is a renewable energy source is a misunderstanding. Hydrogen is not an energy source, but a portable energy storage method, because it must be manufactured by other energy sources in order to be used. However, as a storage medium, it may be a significant factor in using renewable energies. It is widely seen as a possible fuel for hydrogen cars, if certain problems can be overcome economically. It may be used in conventional internal combustion engines, or in fuel cells which convert chemical energy directly to electricity without flames, in the same way the human body burns fuel. Making hydrogen requires either reforming natural gas (methane) with steam, or, for a renewable and more ecologic source, the electrolysis of water into hydrogen and oxygen. The former process has carbon dioxide as a by-product, which exacerbates (or at least does not improve) greenhouse gas emissions relative to present technology. With electrolysis, the greenhouse burden depends on the source of the power, and both intermittent renewables and nuclear energy are considered here.
With intermittent renewables such as solar and wind, matching the output to grid demand is very difficult, and beyond about 20% of the total supply, apparently impossible. But if these sources are used for electricity to make hydrogen, then they can be utilized fully whenever they are available, opportunistically. Broadly speaking it does not matter when they cut in or out, the hydrogen is simply stored and used as required.
Nuclear advocates note that using nuclear power to manufacture hydrogen would help solve plant inefficiencies. Here the plant would be run continuously at full capacity, with perhaps all the output being supplied to the grid in peak periods and any not needed to meet civil demand being used to make hydrogen at other times. This would mean far better efficiency for the nuclear power plants.
About 50 kWh (180 MJ) of electricity is required to produce a kilogram of hydrogen by electrolysis, so the cost of the electricity is clearly crucial.
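Since the electricity input dominates, the economics follow almost directly from the 50 kWh/kg figure. In the sketch below the electricity price is an assumption; the energy figures come from the text.

```python
# Electricity cost of electrolytic hydrogen, based on the 50 kWh/kg figure above.
# The electricity price is an assumed off-peak value.
electricity_kwh_per_kg = 50.0        # from the text
electricity_price_per_kwh = 0.05     # assumed price, e.g. USD per kWh off-peak

cost_per_kg = electricity_kwh_per_kg * electricity_price_per_kwh
joules_per_kg = electricity_kwh_per_kg * 3.6e6       # 50 kWh = 1.8e8 J = 180 MJ

print(f"Electricity cost per kg of hydrogen: {cost_per_kg:.2f}")
print(f"Electrical input per kg: {joules_per_kg:.2e} J")
```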
Other renewable energy storage systems Sun, wind, tides and waves cannot be controlled to provide directly either reliably continuous base-load power, because of their periodic natures, or peak-load power when it is needed. In practical terms, without proper energy storage methods these sources are therefore limited to some twenty percent of the capacity of an electricity grid, and cannot directly be applied as economic substitutes for fossil fuels or nuclear power, however important they may become in particular areas with favorable conditions. If there were some way that large amounts of electricity from intermittent producers such as solar and wind could be stored efficiently, the contribution of these technologies to supplying base-load energy demand would be much greater.
Pumped water storage Already in some places pumped storage is used to even out the daily generating load by pumping water to a high storage dam during off-peak hours and weekends, using the excess base-load capacity from coal or nuclear sources. During peak hours this water can be used for hydroelectric generation. However, relatively few places have the scope for pumped storage dams close to where the power is needed.
Battery storage Many "off-the-grid" domestic systems rely on battery storage, but means of storing large amounts of electricity as such in giant batteries or by other means have not yet been put to general use. Batteries are generally expensive, have maintenance problems, and have limited lifespans. One possible technology for large-scale storage exists: large-scale flow batteries.
Electrical grid storage One of the most important storage methods advocated by the renewable energy community is to rethink the whole way that we look at power supply, in its 24-hour, 7-day cycle, using peak load equipment simply to meet the daily peaks. Solar electric generation is a daylight process, whereas most homes have their peak energy requirements at night. Domestic solar generation can thus feed electricity into the grid during grid peaking times during the day, and domestic systems can then draw power from the grid during the night when overall grid loads are down. This results in using the power grid as a domestic energy storage system, and relies on 'net metering', where electrical companies can only charge for the amount of electricity used in the home that is in excess of the electricity generated and fed back into the grid. Many states now have net metering laws.
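Under the simple form of net metering described above, the utility bills only for consumption in excess of what the household fed back into the grid. A minimal sketch with illustrative numbers (the rate and the monthly figures are assumptions):

```python
# Simple net metering as described above: bill only the net consumption.
# Rate and monthly figures are illustrative assumptions.
def monthly_bill(consumption_kwh, generation_kwh, rate_per_kwh=0.10):
    """Charge for consumption in excess of on-site generation; never below zero."""
    net_kwh = max(0.0, consumption_kwh - generation_kwh)
    return net_kwh * rate_per_kwh

print(monthly_bill(consumption_kwh=900, generation_kwh=600))    # billed for the net 300 kWh
print(monthly_bill(consumption_kwh=900, generation_kwh=1000))   # surplus month, nothing billed
```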
Today's peak-load equipment could also be used to some extent to provide infill capacity in a system relying heavily on renewables. The peak capacity would complement large-scale solar thermal and wind generation, providing power when they were unable to. Improved ability to predict the intermittent availability of wind enables better use of this resource. In Germany it is now possible to predict wind generation output with 90% certainty 24 hours ahead. This means that it is possible to deploy other plants more effectively so that the economic value of that wind contribution is greatly increased.
Renewable energy use by nation Iceland is a world leader in renewable energy due to its abundant hydro- and geothermal energy sources. Over 99% of the country's electricity is from renewable sources and most of its urban household heating is geothermal. Israel is also notable as much of its household hot water is heated by solar means. These countries' successes are at least partly based on their geographical advantages.
Leading countries by renewable electricity production (2000):

Rank  Hydro   Geothermal   Wind      PV Solar
1     Canada  U.S.         Germany   Japan
2     U.S.    Philippines  U.S.      Germany
3     Brazil  Italy        Spain     U.S.
4     China   Mexico       Denmark   India
5     Russia  Indonesia    India     Australia
Share of renewable energy in total power consumption in EU countries (percent):

Country          1985   1990   1991   1992   1993   1994
EUR-15           5,61   5,13   4,92   5,16   5,28   5,37
Belgium          1,04   1,01   1,01   0,96   0,84   0,80
Denmark          4,48   6,32   6,38   6,80   7,03   6,49
Germany          2,09   2,06   1,61   1,73   1,75   1,79
Greece           8,77   7,14   7,63   7,13   7,33   7,16
Spain            8,83   6,70   6,56   6,49   6,50   5,73
France           7,24   6,34   6,75   7,32   7,98   7,54
Ireland          1,75   1,65   1,68   1,59   1,59   1,63
Italy            5,60   4,64   5,16   5,34   5,50   5,19
Luxembourg       1,28   1,21   1,14   1,26   1,21   1,34
The Netherlands  1,36   1,35   1,35   1,37   1,38   1,43
Austria          24,23  22,81  20,99  23,39  24,23  23,71
Portugal         25,07  17,45  17,03  13,88  15,98  16,61
Finland          18,29  16,71  17,02  18,10  18,48  18,28
Sweden           24,36  24,86  22,98  26,53  27,31  24,04
United Kingdom   0,47   0,49   0,48   0,56   0,54   0,65
Renewable energy controversies As with anything, even renewable energy generates controversies.
The funding dilemma Research and development in renewable energies has been severely hampered by only receiving a tiny fraction of energy R&D budgets, with conventional energy sources getting the lion's share.
Centralization versus decentralization As noted above, renewable electricity sources are frequently disadvantaged by regulation of the electricity supply industry which favors 'traditional' large-scale generators over smaller-scale and more distributed generating sources. Widespread renewable and distributed generation would change the role of transmission and distribution systems from primary distributors of electrical energy to balancers of the electricity needs of local communities, with surplus areas selling to areas needing "top ups". Some governments and regulators are moving to address this, though much remains to be done; one potential solution is the increased use of active management of electricity transmission and distribution networks.
The nuclear "renewable" claim Some nuclear advocates claim that nuclear energy should be regarded as renewable energy. Arguments they put forward include:
* The view that nuclear energy does not contribute to global warming (although evaporative cooling has a minor effect by introducing additional water vapor into the atmosphere, along with the heat production of the process).
* Fast breeder reactors can produce more fuel than they consume.
* The view that uranium and thorium, being radioactive, are not theoretically long-term resources.
* The view that nuclear waste, since it will eventually become less radioactive than the original ore bodies, is not theoretically a long-term problem.
This viewpoint is strongly rejected by most renewable energy advocates. The fact that nuclear power uses a depleting resource (uranium or thorium), that the half-life of uranium 238 is 4.5 billion years, and that the decay of the waste to a safe level may take three thousand years or longer (depending on the technology used) means that it cannot be included in such a classification. Breeder reactors consume uranium or thorium to produce fissile fuel, so this particular argument is a simple misunderstanding of the basic processes involved. Similar arguments can also be applied against proposed nuclear fusion power stations using deuterium and tritium, the latter bred from lithium, as fuel.
Solar power has become of increasing interest as other finite power sources such as fossil fuels and hydroelectric power become both more scarce and more expensive (in both fiscal and environmental terms). As the earth orbits the sun it receives 1,410 W/m2 as measured upon a surface kept normal (at a right angle) to the sun. Of this, approximately 19% of the energy is absorbed by the atmosphere, while clouds reflect 35% of the total energy on average.
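Applying the quoted percentages to the quoted top-of-atmosphere figure (treating both as fractions of the total, as the text states) gives a rough average for what reaches a sunlit, sun-facing surface:

```python
# Rough surface irradiance implied by the figures quoted above.
top_of_atmosphere_w_m2 = 1410.0     # from the text
absorbed_by_atmosphere = 0.19       # from the text
reflected_by_clouds = 0.35          # from the text, on average

reaching_surface = top_of_atmosphere_w_m2 * (1 - absorbed_by_atmosphere - reflected_by_clouds)
print(f"Average reaching a sunlit surface: {reaching_surface:.0f} W/m^2")   # about 650 W/m^2
```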
After passing through the Earth's atmosphere, most of the sun's energy is in the form of visible and ultraviolet light. Plants use solar energy to create chemical energy through photosynthesis. We use this energy when we burn wood or fossil fuels. There have been experiments to create fuel by absorbing sunlight in a chemical reaction in a way similar to photosynthesis, but without using living organisms.
Most solar energy used today is converted into heat or electricity.
Types of solar power
Methods of solar energy have been classified using the terms direct, indirect, passive and active.
Direct solar energy involves only one transformation into a usable form. Examples:
* Sunlight hits a photovoltaic cell creating electricity. (Photovoltaics are classified as direct despite the fact that the electricity is usually converted to another form of energy such as light or mechanical energy before becoming useful.)
* Sunlight hits a dark surface and the surface warms when the light is converted to heat by interacting with matter. The heat is used to heat a room or water.
Indirect solar energy involves more than one transformation to reach a usable form. Example:
* systems to close insulating shutters or move shades. Passive solar systems are considered direct systems although sometimes they involve convective flow which technically is a conversion of heat into mechanical energy.
Active solar energy refers to systems that use electrical, mechanical or chemical mechanisms to increase the effectiveness of the collection system. Indirect collection systems are almost always active systems.
Solar design is the use of architectural features to replace the use of electricity and fossil fuels with the use of solar energy and decrease the energy needed in a home or building with insulation and efficient lighting and appliances.
Architectural features used in solar design:
* South-facing windows with insulated glazing that has high ultraviolet transmittance.
* Thermal masses.
* Insulating shutters for windows, to be closed at night and on overcast days.
* Fixed awnings positioned to create shade in the summer and exposure to the sun in the winter.
* Movable awnings to be repositioned seasonally.
* A well insulated and sealed building envelope.
* Exhaust fans in high humidity areas.
* Passive or active warm air solar panels.
* Passive or active Trombe walls.
* Active solar panels using water or antifreeze solutions.
* Passive solar panels for preheating potable water.
* Photovoltaic systems to provide electricity.
* Windmills to provide electricity.
Solar hot water systems are quite common in some countries where a small flat panel collector is mounted on the roof and able to meet most of a household's hot water needs. Cheaper flat panel collectors are also often used to heat swimming pools, thereby extending their swimming seasons.
Solar cooking is helping in many developing countries, both reducing the demands for local firewood and maintaining a cleaner environment for the cooks. The first known record of a western solar oven is attributed to Horace de Saussure, a Swiss naturalist experimenting as early as 1767. A solar box cooker traps the sun's power in an insulated box; these have been successfully used for cooking, pasteurization and fruit canning.
Solar cells (also referred to as photovoltaic cells) are devices or banks of devices that use the photoelectric effect of semiconductors to generate electricity directly from sunlight. As their manufacturing costs remained high during the twentieth century, their use was limited to very low power devices such as calculators with LCD displays, or to generating electricity for isolated locations which could afford the technology. The most important use to date has been to power orbiting satellites and other spacecraft. As manufacturing costs decreased in the last decade of the twentieth century, solar power has become cost effective for many remote low power applications such as roadside emergency telephones, remote sensing, and limited "off grid" home power applications.
Solar power plants generally use reflectors to concentrate sunlight into a heat absorber.
* Heliostat mirror power plants focus the sun's rays upon a collector tower. The vast amount of energy is generally transported from the tower and stored by use of a high temperature fluid. Liquid sodium is often used as the transport and storage fluid. The energy is then extracted as needed, for example by heating water for use in steam turbines.
* Trough concentrators have been used successfully in the State of California (in the U.S.) to generate 350 MW of power in the past two decades. The parabolic troughs can increase the amount of solar radiation striking the tubes up to 30 or 60 times, where synthetic oil is heated to 390°C. The oil is then pumped into a generating station and used to power a steam turbine.
* Parabolic reflectors are most often used with a stirling engine or similar device at the focus. As a single parabolic reflector achieves a greater focusing accuracy than any larger bank of mirrors can achieve, the focus is used to reach a higher temperature, which in turn allows a very efficient conversion of heat into mechanical power to drive an electrical generator. Parabolic reflectors can also be used to generate steam to power turbines to generate electricity.
Applying Solar Power
Deployment of solar power depends largely upon local conditions and requirements. For example, while certain European or U.S. states could benefit from a public hot water utility, such systems would be both impractical and counter-productive in countries like Australia or states like New Mexico. As all industrialised nations share a need for electricity, it is clear that solar power will increasingly be used to supply a cheap, reliable electricity supply.
Many other types of power generation are indirectly solar-powered. Plants use photosynthesis to convert solar energy to chemical energy, which can later be burned as fuel to generate electricity; oil and coal originated as plants. Hydroelectric dams and wind turbines are indirectly powered by the sun.
In some areas of the U.S., solar electric systems are already competitive with utility systems. The basic cost advantage is that the home-owner does not pay income tax on electric power that is not purchased. As of 2002, there is a list of technical conditions: there must be many sunny days; the systems must sell power to the grid, avoiding battery costs; the solar systems must be inexpensively mass-purchased, which usually means they must be installed at the time of construction; and finally, the region must have high power prices. For example, Southern California has about 260 sunny days a year, making it an excellent venue. It yields about 9%/yr return on investment when systems are installed at $9/watt (not cheap, but feasible), and utility prices are at $0.095 per kilowatt-hour (the current base rate). On-grid solar power can be especially feasible when combined with time-of-use net metering, since the time of maximum production is largely coincident with the time of highest pricing.
For a stand-alone system, some means must be employed to store the collected energy for use during hours of darkness or cloud cover, either electrochemically in batteries or in some other form such as hydrogen (produced by electrolysis of water), flywheels in a vacuum, or superconductors. Storage always adds an extra stage of energy conversion, with consequent energy losses, greatly increasing capital costs.
Several experimental photovoltaic (PV) power plants of 300 - 500 kW capacity are connected to electricity grids in Europe and the U.S. Japan has 150 MWe installed. A large solar PV plant is planned for the island of Crete. Research continues into ways to make the actual solar collecting cells less expensive and more efficient. Other major research is investigating economic ways to store the energy which is collected from the sun's rays during the day.
Main Renewable resource, Renewable energy, Sustainable design
Solar: Solar box cooker, Solar thermal energy, Sun, Solar power satellite, Current solar income
Energy crisis: 1973 energy crisis, 1979 energy crisis
Electricity: Electricity generation, Electricity retailing, Energy storage, Green electricity, Direct current, Photoelectric effect, Power station, Power supply, Microwave power transmission, Solar cell, Power plant
Lists: List of conservation topics, List of physics topics
People: Leonardo da Vinci, Charles Eames, Charles Kettering, Menachem Mendel Schneerson
Other: Autonomous building, Solar-Club/CERN-Geneva-Switzerland, Electric vehicle, Lightvessel, Mass driver, Clock of the Long Now, Tidal power, Cumulonimbus Smart 1, Science in America, Slope Point, Back to the land, Architectural engineering, Ecology, Geomorphology, List of conservation topics, Nine Nations of North America
Department of Economics
Bayero University, Kano-Nigeria
One of the key factors that play a pivotal role in a region's economic growth is the presence of a reliable and efficient transportation system. This is mainly because a well developed transportation system provides adequate access to the region, which in turn is a necessary condition for the efficient operation of manufacturing, retail, labour and housing markets.
Transportation is a critical factor in economic growth and development, and it is a wealth-creating industry in its own right. Inadequate transportation limits a nation's ability to utilise its natural resources, distribute food and other finished goods, integrate the manufacturing and agriculture sectors, and supply education, medical and other infrastructural facilities. There is therefore a need to maintain and improve existing transportation and to build new infrastructure for national wealth. National wealth is measured by gross domestic product (GDP), an indicator of the rate of economic growth.
Transportation infrastructure is critical to sustaining economic growth because people want to improve their standard of living and they see increased income as the way to achieve that goal; transportation system enhancements are in turn a means of maintaining or improving economic opportunities, quality of life and ultimately income for people in a particular region (Lucas, 1998).
Transportation also has a broader role in shaping development and the environment. Policy concerns in the next millennium will increasingly focus on the effects of transportation on where people live and on where businesses locate, and on the effects that these location decisions have on land use patterns, congestion of urban transportation systems, use of natural resources, air and water quality, and the overall quality of life. Issues of urban sprawl, farmland preservation, and air and water quality have already pushed their way to the forefront of policy debates at both the national and local levels. To make prudent decisions, policy makers must be equipped with the best information and analysis possible about the interactions among these various factors.
Transportation is the backbone of any economy, especially in countries like Nigeria. An anatomy of the inefficiencies and the lack of a good transportation network in Nigeria, coupled with a low rate of economic growth (GDP), is therefore crucial. Added to this is poor government policy on transportation: lack of regulation of fees charged by private transporters, inadequate fuel, lack of spare parts and, above all, the prevalence of bad roads and lack of security have succeeded in trimming down the transport system in Nigeria, which has a negative effect on economic growth.
Investment in transportation infrastructure is critical to sustained economic growth. Mobility studies show that transportation is absolutely essential to economic productivity and to remaining competitive in the global economy. An international study found that for every 10 percent increase in travel speed, the labour market expands by 15 percent and productivity by 3 percent (Banister and Berechman, 2000).
It is universally recognized that transport is crucial for sustained economic growth and modernization of a nation. Adequacy of this vital infrastructure is an important determinant of the success of a nation's effort in diversifying its production base, expanding trade and linking together resources and markets into an integrated economy. It is also necessary for connecting villages with towns, market centres and in bringing together remote and developing regions closer to one another. Transport, therefore, forms a key input for production processes and adequate provision of transport infrastructure and services helps in increasing productivity and lowering production costs.
The provision of transport infrastructure and services helps in reducing poverty. It needs no emphasis that various public actions aimed at reducing poverty cannot be successful without adequate transport infrastructure and services. It is difficult to visualize meeting the targets of universal education and healthcare for all without first providing adequate transport facilities.
All sectors, including transport, operate within the socioeconomic framework provided by the State. Specific policies are designed within this framework for each sector in order to meet national goals and objectives. Currently, the main objective of development planning in India is higher growth in Gross Domestic Product (GDP). The aim is to achieve a target of 8 percent growth in GDP by 2007, i.e. by the end of the Tenth Five Year Plan. The higher rate of economic growth must also be accompanied by wider dispersal of economic activity and has to go together with the objectives of reduction in poverty, provision of gainful and high quality employment, improvement in literacy rates, reduction in the growth of population, reduction in gender inequality in illiteracy and wage rates, reduction in infant mortality, etc. As a service industry, transport does not exist for its own sake. It serves as a means to achieve other objectives. In formulating policy for the development of the transport sector, the various macro objectives mentioned above therefore have to be taken into account. Some of these are economic in character while others are of a socio-political nature. Economic and non-economic objectives are not always consistent. However, their mix is one of the important factors which determine the pattern of investment and its funding in various sectors of the economy.
Transport demand, both freight and passenger, is linked to the level of economic activity and development needs. It runs parallel to the growth of GDP. A higher rate of growth will therefore mean higher transport demand. However, as growth of GDP results in dispersal of economic activity, the demand for transport will go up further.
The demand for transport services is also affected by the structural changes that are taking place in the Indian economy. As a result, the share of high value low volume commodities has been increasing, which in turn demands more flexible modes such as road transport. There has been an increase in the level of urbanization owing to migration and growth of population. The share of urban areas in the total GDP therefore has been on the rise. Such a spatial shift in the distribution and concentration of economic activity has a profound effect on the nature and level of transport demand. The most obvious result was the increase in demand for urban transport services. Taking various factors into account, it is expected that the elasticity of demand for freight traffic with respect to GDP growth will decline in the future but will still be more than one. With India's resolve to move to a higher growth path, it means that the demand for transport will continue to experience a high growth rate.
TRANSPORTATION AND ECONOMIC GROWTH
Transportation also contributes to the economy by providing millions of jobs. It allows men and women to earn their living by manufacturing vehicles and by driving, maintaining, and regulating them to allow for the safe and efficient movement of goods and people. One out of every seven jobs in the United States is transportation related. Transportation jobs in transportation industries as well as in non-transportation industries employed nearly 20 million people in 2002, accounting for 16 percent of U.S. total occupational employment. For example, the for-hire transportation sector employed over 4.4 million workers in 2002. More than 60 percent of these for-hire workers are either in freight-related occupations or in jobs that directly support freight transportation. An additional 1.7 million workers are employed in transportation equipment manufacturing and another 4.5 million in transportation-related industries such as automotive service and repair, highway construction, and motor vehicle and parts dealers (USDOT BTS 2004). Transportation-related occupations also make up a significant portion of the employment of non-transportation industries, such as truck drivers, freight arrangement agents, and freight-moving workers in the wholesale and retail industries. In 2002, there were about 9.2 million people employed in transportation-related occupations in non-transportation industries.
Growth in productivity is the fundamental driving force for economic growth. Productivity growth in freight transportation has long been a driving force for the growth of U.S. overall productivity and has contributed directly to the growth of U.S. GDP. For example, from 1991 to 2000 labor productivity rose 21 percent in the overall non-farm business sector. During the same time period, labor productivity rose 53 percent for rail, 23 percent for trucking, and 143 percent for pipelines. All three of these modes are primarily engaged in freight transportation. Such productivity gains result in lower transportation costs and lower prices for consumers. This brings savings to consumers and reduces business costs.
Measuring Economic Benefits of Transportation
If all of the steps described above were followed, planners and policy makers would be left with a list of investments that have the potential to generate economic benefits. The three-part analysis shown in Figure A-7 provides a reasonably comprehensive analysis of each project's likely contribution to economic development.
When a highway improvement is proposed, the economic evaluation must first identify which industries will be impacted. This involves the following sequence of three analytical steps within the Commodity Flow analysis (a schematic sketch follows the list).
1. Locate the improvement on the highway or rail network.
2. Identify what commodities are being shipped and person trips on the roadway that will have the proposed improvements and forecast the growth of these commodities.
3. Locate the origins and destinations of these commodities and identify the industries that are involved in shipping and receiving.
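A minimal sketch of how these three screening steps might be organized in code. The link name, commodity flows, growth rates and origin-destination data are hypothetical placeholders, not outputs of any actual planning model.

```python
# Hypothetical data for the three screening steps above; all names and figures
# are illustrative placeholders.
improvement_link = "Highway segment 12"   # step 1: locate the improvement on the network

# Step 2: commodities using the link, with assumed annual growth rates.
link_flows = {
    "grain":     {"tons_per_year": 120_000, "growth_rate": 0.02},
    "machinery": {"tons_per_year": 45_000, "growth_rate": 0.03},
}

# Step 3: origins, destinations and the industries shipping and receiving.
od_industries = {
    "grain":     {"origin": "County A", "destination": "Port B",
                  "industries": ["agriculture", "food processing"]},
    "machinery": {"origin": "County C", "destination": "County A",
                  "industries": ["manufacturing", "wholesale trade"]},
}

def forecast_tons(tons_now, growth_rate, years):
    """Compound the current flow forward to the analysis horizon."""
    return tons_now * (1 + growth_rate) ** years

for commodity, flow in link_flows.items():
    future = forecast_tons(flow["tons_per_year"], flow["growth_rate"], years=20)
    affected = od_industries[commodity]["industries"]
    print(f"{improvement_link} / {commodity}: {future:,.0f} tons/yr in 20 years; "
          f"industries affected: {affected}")
```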
An automobile is a wheeled passenger vehicle that carries its own motor. Different types of automobiles include cars, buses, trucks, and vans. Some include motorcycles in the category, but cars are the most typical automobiles. As of 2002 there were 590 million passenger cars worldwide (roughly one car for every ten people), of which 170 million in the U.S. (roughly one car for every two people).Wikipedia, (2007)
The automobile was thought of as an environmental improvement over horses when it was first introduced in the 1890s. Before its introduction, in New York City alone, more than 1,800 tons of manure had to be removed from the streets daily, although the manure was used as natural fertilizer for crops and to build top soil. In 2006, the automobile is recognized as one of the primary sources of world-wide air pollution and a cause of substantial noise pollution and adverse health effects.
The first forms of road transport were horses, oxen or even humans carrying goods over dirt tracks that often followed game trails. As commerce increased, the tracks were often flattened or widened to accommodate the activities. Later, the travois, a frame used to drag loads, was developed. The wheel came still later, probably preceded by the use of logs as rollers.
With the advent of the Roman Empire, there was a need for armies to be able to travel quickly from one area to another, and the roads that existed were often muddy, which greatly delayed the movement of large masses of troops. To resolve this issue, the Romans built great roads. The Roman roads used deep roadbeds of crushed stone as an underlying layer to ensure that they kept dry, as the water would flow out from the crushed stone, instead of becoming mud in clay soils.
During the Industrial Revolution, and because of the increased commerce that came with it, improved roadways became imperative. The problem was that rain combined with dirt roads created commerce-miring mud. John Loudon McAdam (1756-1836) designed the first modern highways. He developed an inexpensive paving material of soil and stone aggregate (known as macadam), and he embanked roads a few feet higher than the surrounding terrain to cause water to drain away from the surface.
Various systems had been developed over centuries to reduce bogging and dust in cities, including cobblestones and wooden paving. Tar-bound macadam (tarmac) was applied to macadam roads towards the end of the 19th century in cities such as Paris. In the early 20th century tarmac and concrete paving were extended into the countryside.
Transport on roads can be roughly grouped into two categories: transportation of goods and transportation of people. In many countries licensing requirements and safety regulations ensure a separation of the two industries.
The nature of road transportation of goods depends, apart from the degree of development of the local infrastructure, on the distance the goods are transported by road, the weight and volume of the individual shipment and the type of goods transported. For short distances and light, small shipments a van or pickup truck may be used. For large shipments even if less than a full truckload (Less than truckload) a truck is more appropriate. In some countries cargo is transported by road in horse drawn carriages, donkey carts or other non-motorized mode. Delivery services are sometimes considered a separate category from cargo transport. In many places fast food is transported on roads by various types of vehicles. For inner city delivery of small packages and documents bike couriers are quite common.
Rail transport is the transport of passengers and goods by means of wheeled vehicles specially designed to run along railways or railroads. Rail transport is part of the logistics chain, which facilitates the international trading and economic growth in most countries.
Typical railway/railroad tracks consist of two parallel rails, normally made of steel, secured to cross-beams, termed sleepers (U.K.) or 'ties' (U.S.). The sleepers maintain a constant distance between the two rails; a measurement known as the 'gauge' of the track. To maintain the alignment of the track it is either laid on a bed of ballast or else secured to a solid concrete foundation. The whole is referred to as permanent way (UK usage) or right-of-way (North American usage).
Railway rolling stock, which is fitted with metal wheels, moves with low frictional resistance when compared to road vehicles. On the other hand, locomotives and powered cars normally rely on the point of contact of the wheel with the rail for traction and adhesion (the part of the transmitted axle load that makes the wheel "adheres" to the smooth rail). While this is usually sufficient under normal dry rail conditions, adhesion can be reduced or even lost through the presence of unwanted material on the rail surface, such as moisture, grease, ice or dead leaves.
Rail transport is an energy-efficient and capital-intensive means of mechanized land transport and is a component of logistics. Along with various engineered components, rails constitute a large part of the permanent way. They provide smooth and hard surfaces on which the wheels of the train can roll with a minimum of friction. As an example, a typical modern wagon can hold up to 125 tons of freight on two four-wheel bogies/trucks (100 tons in UK). The contact area between each wheel and the rail is tiny, a strip no more than a few millimeters wide, which minimizes friction. In addition, the track distributes the weight of the train evenly, allowing significantly greater loads per axle / wheel than in road transport, leading to less wear and tear on the permanent way. This can save energy compared with other forms of transportation, such as road transport, which depends on the friction between rubber tires and the road. Trains also have a small frontal area in relation to the load they are carrying, which cuts down on forward air resistance and thus energy usage, although this does not necessarily reduce the effects of side winds.
Due to these various benefits, rail transport is a major form of public transport in many countries. In Asia, for example, many millions use trains as regular transport in India, China, South Korea and Japan. It is also widespread in European countries. By comparison, intercity rail transport in the United States is relatively scarce outside the Northeast Corridor, although a number of major U.S. cities have heavily-used, local rail-based passenger transport systems or light rail or commuter rail operations.
The vehicles travelling on the rails, collectively known as rolling stock, are arranged in a linked series of vehicles called a train, which can include a locomotive if the vehicles are not individually powered. A locomotive (or 'engine') is a powered vehicle used to haul a train of unpowered vehicles. In the U.S.A., individual unpowered vehicles are known generically as cars. These may be passenger carrying or used for freight purposes. For passenger-carrying vehicles, the term carriage or coach is used, while a freight-carrying vehicle is known as a freight car in the United States and a wagon or truck in Great Britain. An individually-powered passenger vehicle is known as a railcar or a power car; when one or more as these are coupled to one or more unpowered trailer cars as an inseparable unit, this is called a railcar set.
Previous studies on the economic development of the United States emphasized infrastructure, business climate, taxation, cost and availability of raw materials, labour, capital, access to markets, and climate when explaining growth of the region.
Plaut and Pluita (1983) in their state level analysis of industrial growth used labor and energy cost, availability and productivity variables, land and raw materials, environment, business climate, taxes and government expenditures as explanatory variables. They found market accessibility, labor variables, land, environment, business climate, and property taxes to be highly significant in explaining all three measures of industrial growth: production, employment and capital stock growth.
Carlino and Mills (1987) looked at the determinants of county growth. County level data were used to analyze what variables had an impact on the growth of population and employment during the 1970s and 1980s. Structural equations were estimated using a two-stage least-squares technique for total employment and population, and for manufacturing employment and population, since the manufacturing sector appeared to influence regional economic growth. Eight regional dummies were used to identify the association of a county with a particular region. Population density, interstate-highway density, and family income were shown to contribute significantly to employment density growth, whereas employment, interstate-highway density, family income, and the central city dummy contributed to population density growth.
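The two-stage least-squares idea referred to above can be sketched in a few lines. The sketch below uses randomly generated placeholder data and illustrative variable names; it is not a reproduction of Carlino and Mills' actual model or data.

```python
import numpy as np

# Minimal two-stage least-squares sketch with synthetic data. Variable names
# (population growth, highway density, income) are illustrative only.
rng = np.random.default_rng(0)
n = 500

highway_density = rng.normal(size=n)    # exogenous regressor
income = rng.normal(size=n)             # exogenous variable used as the instrument

# Population growth is endogenous; employment growth depends on it.
population_growth = 0.5 * highway_density + 0.3 * income + rng.normal(size=n)
employment_growth = 0.6 * population_growth + 0.2 * highway_density + rng.normal(size=n)

def ols(y, X):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous regressor on all exogenous variables.
Z = np.column_stack([np.ones(n), highway_density, income])
population_hat = Z @ ols(population_growth, Z)

# Stage 2: replace the endogenous regressor with its fitted values.
X2 = np.column_stack([np.ones(n), population_hat, highway_density])
print(ols(employment_growth, X2))   # intercept, instrumented population effect, highway effect
```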
Deller, Tsai, Marcouiller and English (2001) looked at how amenities influence rural economic growth. Economic growth was represented in their study by three types of growth: growth in population, growth in employment, and growth in per capita income. Results of their analysis showed that higher levels of income inequality are associated with lower levels of growth in terms of population. Property taxes had a negative effect on population and income growth; population over age sixty-five was negatively related with economic growth; climate strongly influenced growth levels of population; and all amenity attributes, such as levels of water amenities, developed recreational infrastructure and winter recreational activities, were statistically significant and positively related to economic growth.
Government policies can have an impact on the firm's decision- making process, particularly taxation and incentive policies. Corporate income and property tax rates can affect a firm's profits either directly or indirectly (Gerking and Morgan, 1991). It is obvious that a firm's profits will decrease if the burden of an increase in taxes is borne directly by the firm. This study proved that a firm's profits decrease if the increase in taxes is passed forward to the consumer. By passing the tax to the consumer through higher prices, the firm's market will decline, thus indirectly reducing profit.
On the other hand, Newman and Sullivan argue that business taxes should not be viewed strictly as another cost to the firm (Newman and Sullivan, 1988). They perceive business taxes in part as benefit taxes. "Firms derive some benefit from local or state expenditures for fire, public safety, transportation, and perhaps education" (Newman and Sullivan, 1988, p. 216). The relevant question for the firm now would not be which location would minimize the tax burden to the firm, but what location would provide the firm with the most desirable overall fiscal package.
Agglomeration economies represent the cost savings that accrue to firms that locate in communities with a relatively large concentration of manufacturing and commercial business activity (Henry and Drabenstott, 1996; Johnson, 2001; McNamara, Kriesel, and Rainey, 1995). The concentration of activity tends to provide broader access to markets, business services, and technological expertise. In addition, agglomeration forces are generally associated with an abundant supply of skilled labor. Thus, communities in or near large Metropolitan Statistical Areas (MSAs) have location advantages over smaller and more remote communities.
As expected, agricultural agglomeration was highly significant and negatively related to the gross county product, since agriculture represents an industry that offers an alternative use of land (Blum, 1982). Wages in agriculture also tend to be lower than in other sectors. Employment agglomeration in the construction and retail industries was insignificant.
The concentration of roads, measured as the number of miles in all roads divided by land area, represented infrastructure in this study. This variable was highly significant and positively related to the gross county product.
The number of person-trips per year variable represented the ability of the county to attract outside residents for business and/or personal activities in the area. This variable was chosen for its relation to the business and personal travel and service usage. The number of person-trips was highly significant and positively related to the gross county product. The amenity index showed that rural amenities contributed to the increase in income growth in the county.
Another outcome of this research is that economic development was significantly and positively related to the level of human capital in the area. The coefficient for the percentage of the population with a high school diploma was the highest among all variables, followed by the coefficient on infrastructure. These results imply that counties seeking to increase income growth should ensure that they have a comparative advantage, or are at least comparable with competing communities, in their levels of human capital and infrastructure.
Large investments have been made in the development of the transport sector in India. This has resulted in the expansion of transport infrastructure and facilities. There have also been impressive qualitative developments, including the emergence of a multi-modal transport system, training centres of excellence and a reduction in the arrears of over-aged assets. In spite of these impressive achievements, the transport infrastructure has not been developed to the extent that it can effectively address the problems of accessibility and mobility in the movement of people and goods. About 40 percent of villages are yet to be linked with all-weather roads. India has made remarkable progress in many areas while remaining regressive in many others. The ongoing liberalization of the Indian economy, despite some noticeable bumps, has rekindled a global interest in this slumbering giant. Indian economic growth accelerated to 6.9 percent in 2004-05 as compared to 5.8 percent in 2001-02, 6.1 percent in 1999-00, 6.7 percent in 1989-90, 5.2 percent in 1979-80 and 1.0 percent in 1971-72 in terms of real GDP at factor cost. The combined gross fiscal deficit as a percentage of GDP was 8.3 percent in 2004-05 (RE) as compared to 9.9 percent in 2001-02, 9.5 percent in 1999-00, 8.9 percent in 1989-90 and 7.5 percent in 1980-81. Gross domestic capital formation at constant prices (a proxy for domestic real investment) as a percentage of GDP was 16.8 percent in 2004-05 as compared to 14.8 percent in 2001-02, 18.2 percent in 1999-00, 35.4 percent in 1989-90, 76.9 percent in 1979-80 and 136.3 percent in 1971-72.
Indian Railways is one of the largest railway systems in the world. By carrying about 1.1 million passengers and over 1.20 million tonnes of freight per day, the rail system occupies a unique position in the socio-economic map of the country and is considered a means and a barometer of growth. Rail is one of the principal modes of transport for carrying long-haul bulk freight and passenger traffic. It also has an important role as the mass rapid transit mode in the suburban areas of large metropolitan cities. The growth of railway route length was 0.4 percent in 2004-05 as compared to 0.2 percent in 2001-02, -0.1 percent in 1999-00, 0.4 percent in 1989-90, 0.3 percent in 1979-80 and 0.5 percent in 1971-72.
The road network in India is seemingly very large, with a length of about 3 million kilometers. However, it cannot meet the accessibility and mobility requirements of a country of India's size and population. The growth of road length was higher than the growth of railway route length in all the years considered: 1.8 percent in 2004-05 as compared to 1.5 percent in 2001-02, 0.8 percent in 1999-00, 3.3 percent in 1989-90, 3.2 percent in 1979-80 and 10.3 percent in 1971-72. It is also found that while the growth of road length has been continuously increasing since 1999-00, the growth of railway route length has been increasing since 2002-03.
From the above trends it is clear that India's real economic growth is partly a result of growth in rail and road route length.
The next step is to divide the value added by transportation into the respective modes. Goods movement-intensive industries have less flexibility in the modes they use than is often understood by economic development officials and transportation planners. Careful analysis of each industry's logistics indicates which mode dominates the industry's shipping patterns. The analysis may reveal opportunities for mode shifts that in turn provide significant cost savings and/or improved productivity, but these opportunities are few and far between. The Alameda Corridor project in Los Angeles, for example, will probably increase significantly the volume of containers moving out of the Ports of Los Angeles and Long Beach by rail, but only as on-dock rail capacity is increased by terminal operators and only over time. Figure A-4 illustrates the breakout of the value added from transportation by mode at the national level.
During the past few decades, continued shifts in the U.S. economy towards more services, increased production of high-value and light-weight goods, expanded trade with Mexico and China, and the current pattern of global production and distribution systems have influenced trends in U.S. freight transportation. As the nation's economy shifted towards more services, the goods share of GDP declined relative to total GDP. Thirty-four years ago, in 1970, goods accounted for 43 percent of U.S. GDP, only slightly lower than the 46-percent share of services in GDP. But by 2002 the share of goods in GDP had decreased to 33 percent, while the share of services had increased to 58 percent. Because freight transportation is, in general, more closely associated with goods production than with services production, the decline in the goods share of GDP contributed to slower growth in freight transportation (measured in ton-miles) than the overall growth of GDP in the past few decades. Between 1970 and 2002, U.S. real GDP, measured in 2000 chain-type dollars, grew 167 percent. During the same period, U.S. freight transportation, measured in ton-miles, grew only 73 percent. Consequently, the freight transportation intensity of the U.S. economy declined from its 1970 level of 0.59 ton-miles per dollar of GDP.
Freight transportation intensity declined even within the goods-producing sector. In 1970, it took 2.1 ton-miles of freight transportation to produce $1 of goods GDP. In 2002, it took only half that amount, 1.1 ton-miles, to produce the same value of goods GDP (in real terms). This trend reflects two underlying changes in the U.S. economy:
· the downsizing of products towards lighter-weight goods (such as computers, cell phones, and hand-held digital devices), and
· improvements in the efficiency of the freight transportation system, not only in terms of faster and more timely delivery, but also in higher direct accessibility.
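As a rough illustration (this calculation is not in the original text; it simply combines the growth figures quoted above), the implied change in aggregate freight intensity between 1970 and 2002 is (1 + 0.73) / (1 + 1.67) = 1.73 / 2.67, or about 0.65. A 1970 intensity of roughly 0.59 ton-miles per dollar of GDP would therefore fall to roughly 0.38 ton-miles per dollar by 2002, broadly consistent with the halving of intensity within the goods-producing sector noted above.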
Within those industries that need help and would likely benefit, an understanding of how much each industry (both those currently located in Oregon and those targeted by economic development officials) uses the various modes provides the first step in targeting transportation investments. Figure A-5 presents a qualitative rating of the modal intensity for major industry groups based on national averages.
Unfortunately, it will not be sufficient to have this understanding at the national level and it may not suffice at the state level. The successes of most transportation investments vary by region and by rural versus urban corridors. Research on the role of highways and lane expansions in economic development, for example, shows that improvements to rural highway connections between communities can have significant benefits even if there is no congestion.
Two-sided markets, also called two-sided networks, are economic platforms having two distinct user groups that provide each other with network benefits. The organization that creates value primarily by enabling direct interactions between two (or more) distinct types of affiliated customers is called a multi-sided platform (MSP).
Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. Example markets include credit cards (composed of cardholders and merchants); HMOs (patients and doctors); operating systems (end-users and developers); yellow pages (advertisers and consumers); video-game consoles (gamers and game developers); recruitment sites (job seekers and recruiters); search engines (advertisers and users); and communication networks, such as the Internet. Examples of well known companies employing two-sided markets include such organizations as American Express (credit cards), eBay (marketplace), Taobao (marketplace in China), Facebook (social medium), Mall of America (shopping mall), Match.com (dating platform), Monster.com (recruitment platform), and Sony (game consoles).
Benefits to each group exhibit demand economies of scale. Consumers, for example, prefer credit cards honored by more merchants, while merchants prefer cards carried by more consumers. Two-sided markets are particularly useful for analyzing the chicken-and-egg problem of standards battles, such as the competition between VHS and Beta. They are also useful in explaining many free pricing or "freemium" strategies where one user group gets free use of the platform in order to attract the other user group.
Two-sided markets represent a refinement of the concept of network effects. There are both same-side and cross-side network effects. Each network effect can be either positive or negative. An example of a positive same-side network effect is end-user PDF sharing or player-to-player contact in PlayStation 3; a negative same-side network effect appears when there is competition between suppliers in an online auction market or competition for dates on Match.com. The theory of two-sided markets was developed independently by Geoffrey Parker and Marshall Van Alstyne (2000, 2005) to explain behavior in software markets and by Rochet and Tirole to explain behavior in credit card markets. The first known peer-reviewed paper on interdependent demands was published in 2000.
Multi-sided platforms exist because there is a need for an intermediary to match the two sides of the market more efficiently. Such an intermediary minimizes the overall cost, for instance by avoiding duplication or by reducing transaction costs, and makes possible exchanges that would not occur without it, creating value for both sides. Because a two-sided platform, in its intermediary role, produces value for both groups of users it interconnects, both sides may be regarded as customers, unlike in the traditional seller-buyer dichotomy.
A two-sided network typically has two distinct user groups. Members of at least one group exhibit a preference regarding the number of users in the other group; these are called cross-side network effects. Each group's members may also have preferences regarding the number of users in their own group; these are called same-side network effects. Cross-side network effects are usually positive, but can be negative (as with consumer reactions to advertising). Same-side network effects may be either positive (e.g., the benefit from swapping video games with more peers) or negative (e.g., the desire to exclude direct rivals from an online business-to-business marketplace).
For example, in marketplaces such as eBay or Taobao, buyers and sellers are the two groups. Buyers prefer a large number of sellers, and sellers, in turn, prefer a large number of buyers, such that members of one group can easily find trading partners in the other group. Therefore, the cross-side network effect is positive. On the other hand, a large number of sellers means severe competition among sellers. Therefore, the same-side network effect is negative. Figure 1 depicts these relationships.
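To make the two effects concrete, the toy model below (an illustrative sketch that is not part of the original article; the function names and every parameter value are assumptions chosen only for clarity) encodes a positive cross-side effect and a negative same-side effect for a marketplace of buyers and sellers:

```python
# Illustrative sketch (not from the original article): a toy utility model of
# cross-side and same-side network effects on a two-sided marketplace.
# All parameter values below are arbitrary assumptions chosen for clarity.

def buyer_utility(n_sellers, n_buyers, price=2.0, cross=0.05, same=-0.005):
    # Buyers benefit from more sellers (cross > 0) and are only mildly
    # crowded by other buyers (same < 0 but small).
    return cross * n_sellers + same * n_buyers - price

def seller_utility(n_buyers, n_sellers, fee=5.0, cross=0.05, same=-0.02):
    # Sellers benefit from more buyers but compete with other sellers,
    # so their same-side effect is more strongly negative.
    return cross * n_buyers + same * n_sellers - fee

if __name__ == "__main__":
    for sellers in (50, 200, 800):
        for buyers in (100, 1000):
            print(f"sellers={sellers:4d} buyers={buyers:5d} "
                  f"buyer_utility={buyer_utility(sellers, buyers):7.2f} "
                  f"seller_utility={seller_utility(buyers, sellers):7.2f}")
```

Holding the number of buyers fixed, adding sellers raises buyer utility (the positive cross-side effect) while lowering seller utility (the negative same-side effect), mirroring the marketplace example above.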
Neither cross-side network effects nor same-side network effects are sufficient for an organization to be an MSP. Examining traditional supermarkets, it is clear that shoppers prefer a larger number of suppliers and a bigger variety of goods, while suppliers value a higher number of buyers. Nevertheless, a supermarket does not qualify as an MSP because it does not enable direct contact between shoppers and suppliers. On the other hand, such network effects are not required for a firm to be considered an MSP. One example is the situation in which niche event organizers embed in their websites a ticketing service managed by a small on-line ticket provider. Consumers affiliate with the on-line ticket provider only when they go to the website to buy a ticket. However, cross-side network effects and same-side network effects are common in MSPs.
In two-sided networks, users on each side typically require very different functionality from their common platform. In credit card networks, for example, consumers require a unique account, a plastic card, access to phone-based customer service, a monthly bill, etc. Merchants require terminals for authorizing transactions, procedures for submitting charges and receiving payment, "signage" (decals that show the card is accepted), etc. Given these different requirements, platform providers may specialize in serving users on just one side of a two-sided network. A key feature of two-sided markets is the novel pricing strategies and business models they employ. In order to attract one group of users, the network sponsor may subsidize the other group of users. Historically, for example, Adobe's portable document format (PDF) did not succeed until Adobe priced the PDF reader at zero, substantially increasing sales of PDF writers.
In the operating systems market for home computers, created in the early 1980s with the introduction of the Macintosh and the IBM PC, Microsoft decided to steeply discount the software development kit (SDK) for its operating system relative to Apple's pricing at that time, lowering the barrier to entry to the home computer market for software businesses. This resulted in a big increase in the number of applications being developed for home computers, with Microsoft Windows on the IBM PC becoming the operating system/computer combination of choice for both software businesses and software users.
Because of network effects, successful platforms enjoy increasing returns to scale. Users will pay more for access to a bigger network, so margins improve as user bases grow. This sets network platforms apart from most traditional manufacturing and service businesses. In traditional businesses, growth beyond some point usually leads to diminishing returns: Acquiring new customers becomes harder as fewer people, not more, find the firm's value proposition appealing.
Fueled by the promise of increasing returns, competition in two-sided network industries can be fierce. Platform leaders can leverage their higher margins to invest more in R&D or lower their prices, driving out weaker rivals. As a result, mature two-sided network industries are usually dominated by a handful of large platforms, as is the case in the credit card industry. In extreme situations, such as PC operating systems, a single company emerges as the winner, taking almost all of the market.
Platform managers must choose the right price to charge each group in a two-sided network, and ignoring network effects can lead to mistakes. In figure 2, pricing without taking network effects into account means finding prices that maximize the areas of the two blue rectangles. Adobe initially used this approach when it launched PDF and charged for both reader and writer software.
In two-sided networks, such pricing logic can be misguided. If firms account for the fact that adoption on one side of the network drives adoption on the other side, they can do better. Demand curves are not fixed: with positive cross-side network effects, demand curves shift outward in response to growth in the user base on the network's other side. When Adobe changed its pricing strategy and made its reader software freely available, its managers uncovered a key rule of two-sided network pricing. They subsidized the more price-sensitive side, and charged the side whose demand increased more strongly in response to growth on the other side. As illustrated in figure 3, giving consumers a free reader created demand for the document writer, the network's "money side".
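The logic of that rule can be sketched numerically. The toy model below assumes linear demand on each side with a positive cross-side term; the `participation` and `profit` helpers and all parameter values are hypothetical and are not Adobe's actual figures:

```python
# Illustrative sketch (not from the original article): a toy linear model of
# two-sided pricing in the spirit of the reader/writer example. All demand
# parameters, prices, and helper names below are assumptions.

def participation(p_reader, p_writer,
                  a_r=1000.0, b_r=100.0, c_r=0.5,  # reader demand: intercept, own-price slope, cross-side effect
                  a_w=200.0, b_w=2.0, c_w=0.2,     # writer demand: intercept, own-price slope, cross-side effect
                  rounds=200):
    """Solve for reader/writer participation by fixed-point iteration.
    Each side's demand falls in its own price and rises with the size of
    the other side (a positive cross-side network effect)."""
    n_r = n_w = 0.0
    for _ in range(rounds):
        n_r = max(0.0, a_r - b_r * p_reader + c_r * n_w)
        n_w = max(0.0, a_w - b_w * p_writer + c_w * n_r)
    return n_r, n_w

def profit(p_reader, p_writer):
    n_r, n_w = participation(p_reader, p_writer)
    return p_reader * n_r + p_writer * n_w

if __name__ == "__main__":
    # Charging both sides (as with the originally priced reader) versus giving
    # the price-sensitive reader side free access and charging writers more.
    print("charge both sides:", round(profit(p_reader=4, p_writer=60)))
    print("free reader side :", round(profit(p_reader=0, p_writer=100)))
```

The fixed-point loop captures the feedback described above: lowering the reader-side price enlarges the reader base, which shifts the writer-side demand curve outward, so under these assumed parameters the platform earns more from the money side than it forgoes in subsidies.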
Similarly, gaming manufacturers very often subsidize gamers and sell their consoles at substantial losses (e.g., Sony's PS3 lost $250 per unit sold) in order to penetrate the market and receive royalties on software sold for their consoles.
On the other hand, even though two-sided pricing strategies generally increase total platform profits compared to traditional one-sided strategies, the actual end value of the two-sided pricing strategy is contingent on market characteristics and may not offset the costs of implementation. For example, profits of an application provider increase with the implementation of a two-sided pricing strategy by the platform provider only if the application is subsidized by the provider. Platform providers should also be more cautious when the giveaway product has appreciable unit costs, as with tangible goods. Free-PC incurred $80 million in losses in 1999 when it decided to give away computers and Internet access at no cost to consumers who agreed to view Internet-delivered ads that could not be minimized or hidden. Unfortunately, willingness to pay did not materialize on the money side, as few marketers were eager to target consumers who were so cost conscious.
If building a bigger network is one reason to subsidize adoption, then stimulating value adding innovations is the other. Consider, for example, the value of an operating system with no applications. While Apple initially tried to charge both sides of the market, like Adobe did in figure 2, Microsoft uncovered a second pricing rule: subsidize those who add platform value. In this context, consumers, not developers are the money side.
Which market represents the money side and which represents the subsidy side depends on a critical tradeoff: increasing network size versus growing network value. Following the size rule increases adoption, while following the value rule allows the platform to charge higher prices.
Although only recently formalized in economic theory, two-sided networks help to explain many classic battles, for example, Beta vs. VHS, Mac vs. Windows, CBS vs. RCA in color TV, American Express vs. Visa, and more recently Blu-ray vs. HD DVD.
In the case of color TV, CBS and RCA offered rival formats but initially neither gained market traction. Viewers had little reason to buy expensive color TVs in the absence of color programming. Likewise, broadcasters had little reason to develop color programming when households lacked color TVs. RCA won the battle in two ways. It flooded the market with low cost black-and-white TVs incompatible with the CBS format but compatible with its own. Broadcasters then needed to use the RCA format to reach established viewers. RCA also subsidized Walt Disney's Wonderful World of Color, which gave consumers reason to buy the new technology.
When two-sided markets contain more than one competing platform, the condition of users affiliating with more than one such platform is called multihoming. Instances arise, for example, when consumers carry credit cards from more than one banking network or they continue using computers based on two different operating systems. This condition implies an increase of "homing" costs, which comprise all the expenses network users incur in order to establish and maintain platform affiliation. These ongoing costs of platform affiliation should be distinguished from switching costs, which refer to the one time costs of terminating one network and adopting another.
Their significance in industry and antitrust law arises from the fact that the greater the multihoming costs, the greater is the tendency toward market concentration. Higher multihoming costs reduce user willingness to maintain affiliation with competing networks providing similar services.
Winner takes all
Attracted by the prospect of large margins, platforms may compete to become the winner-take-all in two-sided markets with strong network effects, meaning that a single platform ends up serving the mature networked market. Examples of such standards battles include VHS vs. Betamax, Microsoft vs. Netscape, and the DVD format. Not all two-sided markets with strong positive network effects end up being served by one platform: for that to happen, multihoming costs must be high and consumers' needs similar.
Even if the market is destined to be dominated by one platform, companies can choose to cooperate rather than compete to be the winner-take-all. For instance, DVD companies pooled their technologies to create the DVD format in 1995.
If they fight, companies run the risk of being stranded in the short term, but in the long term the winner will be able to price as a monopolist. Consequently, the winner-take-all outcome can be threatened by government intervention.
Threat of envelopment
Since platforms frequently have overlapping user bases, it is not uncommon for a platform to be "enveloped" by an adjacent provider.
Usually, this occurs when a rival provides the same functionality as part of a multiplatform bundle. If the money side perceives that such a bundle delivers more value at a lower price, a stand-alone platform is in danger. If one cannot reduce the price on the money side or enhance one's value proposition, one can try to change one's business model or find a "bigger brother" to help. The last option when facing envelopment is to resort to legal remedies, since antitrust law for two-sided networks is still in dispute. However, in many cases a stand-alone business facing envelopment has little choice but to sell out to the attacker or exit the field.
- http://www.hbs.edu/research/pdf/12-024.pdf Andrei Hagiu and Julian Wright (2011). Multi-Sided Platforms, Harvard Working Paper 12-024.
- http://ssrn.com/abstract=249585 Geoffrey Parker and Marshall Van Alstyne (2000) "Information Complements, Substitutes, and Strategic Product Design"
- http://rje.org Bernard Caillaud and Bruno Jullien (2003). "Chicken & Egg: Competing Matchmakers". Rand Journal of Economics 34(2) 309–328.
- http://idei.fr/doc/wp/2005/2sided_markets.pdf Jean-Charles Rochet and Jean Tirole (2005). [Two-Sided Markets: A Progress Report]
- http://ssrn.com/abstract=1177443 Geoffrey Parker and Marshall Van Alstyne (2005). "Two-Sided Network Effects: A Theory of Information Product Design." Management Science, Vol. 51, No. 10.
- http://hbr.harvardbusiness.org/2006/10/strategies-for-two-sided-markets/ar/1 Thomas Eisenmann, Geoffrey Parker, and Marshall Van Alstyne (2006). [ "Strategies for Two-Sided Markets." Harvard Business Review].
- Parker, Geoffrey G.; Van Alstyne, Marshall W. (2000). "Internetwork externalities and free information goods": 107–116. doi:10.1145/352871.352883.
- Chen, Jianqing; Ming Fan; Mingzhi Li (2015). "Advertising versus Brokerage Model for Online Trading Platforms". MIS Quarterly: forthcoming.
- http://www.hbsp.com Eisenmann (2006), "Managing Networked Businesses: Course Overview."
- http://www.hbs.edu/research/pdf/12-024.pdf Hagiu A., Wright J. "Multi-Sided Platforms" Harvard Working Paper 12-024.
- http://hbr.org/2006/10/strategies-for-two-sided-markets/ar/1 Eisenmann T., Parker G., and Van Alstyne M.W. "Strategies for Two-Sided Markets" Article Preview, Harvard Business Review, October 2006
- http://www.hbsp.com Tripsas (2000), "Adobe Systems, Inc.", Case 9-801-199.
- http://ssrn.com/abstract=1177443 Parker and Marshall Van Alstyne (2005), page 1498.
- Kenji Hall, "The PlayStation 2 Still Rocks."
- Economides, Nicholas; Katsamakas, Evangelos (July 2006). "Two-Sided Competition of Proprietary vs. Open Source Technology Platforms and the Implications for the Software Industry". Management Science. 52 (7): 1057–1071. doi:10.1287/mnsc.1060.0549.
- Thomas R. Eisenmann, Geoffrey Parker, Marshall W. Van Alstyne (2006). Strategies for Two-Sided Markets, Harvard Business Review.
- J. Gregory Sidak, The Impact of Multisided Markets on the Debate over Optional Transactions for Enhanced Delivery over the Internet, 7 POLÍTICA ECONÓMICA Y REGULATORIA EN TELECOMUNICACIONES 94, 96 (2011).
- Carl Shapiro, Hal R. Varian (1999). Art of Standards Wars (http://faculty.haas.berkeley.edu/shapiro/wars.pdf)
- http://ssrn.com/abstract=1496336 Thomas Eisenmann, Geoffrey Parker, and Marshall Van Alstyne (2011). "Platform Envelopment." Strategic Management Journal.
- Sangeet Paul Choudary (2013) "Platform Thinking", A Comprehensive Guide to Platform Business Models
- Geoffrey G Parker and Marshall Van Alstyne (2000). "Internetwork Externalities and Free Information Goods," Proceedings of the 2nd ACM conference on Electronic commerce; also available at SSRN: Information Complements, Substitutes, and Strategic Product Design
- Jean-Charles Rochet and Jean Tirole (2001). Platform Competition in Two-Sided Markets.
- Jean-Charles Rochet and Jean Tirole (2003). Platform Competition in Two-Sided Markets. Journal of the European Economic Association, 1(4): 990-1029.
- Jean-Charles Rochet and Jean Tirole (2005). Two-Sided Markets: A Progress Report
- Bernard Caillaud and Bruno Jullien (2003). "Chicken & Egg: Competing Matchmakers." Rand Journal of Economics 34(2) 309–328.
- Geoffrey Parker and Marshall Van Alstyne (2005). "Two-Sided Network Effects: A Theory of Information Product Design." Management Science, Vol. 51, No. 10.
- Thomas Eisenmann (2006) "Managing Networked Businesses: Course Overview." Harvard Business Online
- Thomas Eisenmann, Geoffrey Parker, and Marshall Van Alstyne (2006). "Strategies for Two-Sided Markets." Harvard Business Review.
- Mark Armstrong (2006). "Competition in two-sided markets"
- Book : Invisible Engines : How Software Platforms Drive Innovation and Transform Industries – David Evans, Andrei Hagiu, and Richard Schmalensee (2006). http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10937
- Weyl, E. Glen (2010). "A Price Theory of Multi-sided Platforms". American Economic Review. 100 (4): 1642–1672. doi:10.1257/aer.100.4.1642.
- Thomas Eisenmann, Geoffrey Parker, and Marshall Van Alstyne (2011). "Platform Envelopment." Strategic Management Journal, Vol 32.
- Institut D'Economie Industrielle: Two-Sided Market Papers | 1 | 2 |
Tycoon. The word comes from Japan, where it means the equivalent of shogun. But long ago America confiscated the title. It was in the U.S. that tycoons became paragons of power and influence. American tycoons swung deals, not swords, and changed the business landscape forever. The Gilded Age was the greatest era of tycoons. At the end of the nineteenth century, the Industrial Revolution was in full swing and every emerging market seemed to have at least one power broker ready to corner it. No antitrust laws stopped monopolies.
No income tax sapped personal wealth. Men were free to form trusts and build great monuments with their riches. To be sure, tycoons had already existed, but new technology created a boomtown. The courts and taxes did their best to remove tycoons from the playing field, breaking up the great trusts and making it harder to pass wealth through generations. What arose was an age of managers. Corporate power was consolidated by the likes of General Motors' Alfred Sloan--great businessmen but not really tycoons. F. Scott Fitzgerald seemed to think that the tycoon phenomenon had run its course when he wrote about a movie mogul in The Last Tycoon. Not by a long shot. The Electronics Age has ushered in a new round of tycoons itching to take over computer and communications markets.
Herewith, we've assembled a list of the nation's great tycoons. What do they have in common? It isn't higher education. Most of our choices never graduated from college, although five have their names on universities. The two who went to Harvard dropped out. It isn't technical preeminence. Astor never set a fur trap. Rockefeller never dug an oil well. Gates didn't design his operating system. Tycoons are set apart by their ability to change the world and their belief that the world should be theirs. Add a ravenous work ethic and you have a fair description of the men who follow.
ANDREW CARNEGIE (1835-1919) STEEL WILL
When Andrew Carnegie was a boy in Dunfermline, Scotland, his mother impressed upon him a favorite credo: "Look after the pennies, and the pounds will look after themselves." That tenet would guide Carnegie throughout his life. In 1848 Carnegie's father, a skilled weaver, lost his career to the power loom, and the family sought opportunity in America, settling near Pittsburgh. The 12-year-old Andrew started as a bobbin boy in a textile mill, then ran telegraph messages. Pennsylvania Railroad superintendent Thomas Scott hired 17-year-old Carnegie as an assistant. The young apprentice repaid his boss by boosting the railroad's profits with cost-cutting measures that would become his hallmarks: he increased the flow of freight on the lines, lowered costs with revolutionary cost-accounting techniques, and cut wages.
By 24, Carnegie was running the western branch and investing his own money in oil, iron and other companies. (During the panic of 1873, Scott would ask Carnegie for financial help. Carnegie turned him down.) By his mid-30s, Carnegie was a wealthy man with an affinity for iron and the metal of the future, steel. He built his own steel mills using the latest technology and hiring the best employees. He secured every facet needed for steelmaking, from iron deposits and coal mines to railroads and ore ships. He plowed his profits back into the business and continued to cut costs. Extremely competitive, the diminutive (5-foot-2-inch) Steel King was soon outproducing, underbidding and crushing rivals, while supplying low-priced rails and bridges to burgeoning America.
Carnegie Steel became the largest steelmaker in the world. Carnegie's business techniques belied his public persona. Son and nephew of social reformers, Carnegie portrayed himself as a friend of the working class, publicly supporting eight-hour workdays (the standard was 12) and union labor. Yet his workers labored at blast furnaces 12 hours a day, seven days a week for a Carnegie-reduced wage of 14 cents an hour. In 1892, Carnegie's partner, Henry Clay Frick, crushed Carnegie Steel's last union in bloody clashes at its Homestead steel plant outside Pittsburgh. The public was horrified, and Carnegie tried to blame Frick.
In 1901, Carnegie sold out to financier J. P. Morgan for a stunning $480 million, with Carnegie's $250 million cut making him one of the richest men in the world. Whether it was his relatives' reformist influence or a Presbyterian regard for the Final Judgment, Carnegie embarked on philanthropy with the same zeal he had brought to building his empire. Declaring that "the man who dies rich dies disgraced," Carnegie disbursed $350 million of his $400 million fortune, building more than 2,500 libraries and other institutions before his death in 1919.--Terrence Fagan
J.D. ROCKEFELLER (1839-1937) TRUST IN GOD
In 1855, a devout young Cleveland Baptist took a bookkeeping job with a local merchant and found it so fulfilling to pore over the minutiae of accounts that he thereafter celebrated the anniversary of his first payday as joyfully as his own birthday. John D. Rockefeller was so convinced that it was his providence to become rich through figures that he would later say, "God gave me my money."
Rockefeller soon embarked on a produce business, which boomed during the Civil War and must have further convinced him of his destiny. Young Rockefeller had strong antislavery sentiments, but believed so fervently in his fate to be wealthy that he paid a substitute to serve for him. His eye for opportunity was drawn to the nascent oil industry in the wilds of Pennsylvania.
Rockefeller and his partners built a business refining crude into kerosene. He brought to the venture a talent for watching the books and scrupulously securing every economy in a boomtown business full of imprudent wildcatters. His other genius was for spotting failings as well as abilities in his partners; when associates displayed weaknesses he eliminated them and replaced them with colleagues, such as Henry Flagler, Oliver Payne and John Archbold, who offered strengths.
The fledgling industry was rife with problems, not the least of which was the cost of transporting oil to market. Rockefeller solved that by securing clandestine contracts with the railroads that afforded huge rebates. With that edge, he was able to squeeze out the competition, which he usually then enveloped. With fewer bidders, he could pinch suppliers and regulate wildly fluctuating prices.
Industrial espionage and close association with politicos (his son married the daughter of trust-friendly Sen. Nelson Aldrich) smoothed out other snags for the tycoon. Another problem was retaining control of the empire while amassing the capital it would take to run it. With Flagler's aid Rockefeller incorporated. Later, Samuel Dodd helped him create a byzantine holding company, or trust, to circumvent laws that blocked businesses from owning property out of state.
Called Standard Oil, the company would come to control every facet of the business, from production and refining to shipping and barrel making. The trust garnered not only a worldwide monopoly, but the lasting enmity of ruined competitors and a horrified public. The lightning rod for disdain was the work of muckraking journalist Ida Tarbell. Rockefeller would argue that his business methods, common practice at the time, had slashed oil prices. Nevertheless, antitrust suits dissolved Standard Oil into dozens of smaller companies in 1911.
The ironic upshot was that his holdings gained so much on the open market that Rockefeller, already retired, became the country's first billionaire as a result.--Jack Bettridge
JAMES B. DUKE (1856-1925) THE DUKE OF TOBACCO
When Washington Duke returned from the Civil War to his farm near Durham, North Carolina, he had 50 cents, two blind mules and some cured tobacco. Seeing the success of the Bull Durham brand of chewing tobacco, Duke determined that selling rather than growing tobacco held a better future.
It was Washington's irrepressible son, James Buchanan "Buck" Duke, who saw that prerolled cigarettes represented a market segment not controlled by Bull Durham. On April 23, 1889, the 33-year-old Duke, then a rough-hewn thorn in the side of the cigarette-making industry, met with his four largest competitors in a lower Manhattan hotel. He would emerge with an agreement that he would head the American Tobacco Co., the newly minted cigarette trust that would come to control the market for most tobacco products in the United States and gobble up more than 250 companies in its wake.
It was a combination of keen foresight, unmatched negotiating ability and cutthroat competitiveness that had put the young tobacco farmer in the position to rule his world. He attacked with a vengeance with marketing schemes such as brand names, in-store displays, trading cards, athletic endorsements, coupons and innovative packaging. Determining early on that sex sells, he even procured the endorsement of a comely actress. It was his introduction of the cigarette-making machine that allowed him the economies to go toe-to-toe with the serious competition in New York City.
Undersold at the store counter and outspent on promotions, his enemies had little choice but to join his tobacco trust. Manufacturing more than 90 percent of the country's cigarettes, Duke's trust squeezed farmers, distributors and even the manufacturer of the cigarette-making machine. Soon American Tobacco was able to corner the plug, snuff and pipe tobacco, as well as cheroots markets and start a British subsidiary. The steamroller could be stopped only when the government stepped in to divide it, in 1911. Duke groused that "in England, if a fellow had built up a whale of a business, he'd be knighted. Here, they want to put him in jail." Undeterred, Buck sought fortune in another business--electricity.
The Duke utility controlled most of the Carolina region before his death. After his will made his daughter, Doris, the country's richest woman, enough was left to fund the hastily renamed Duke University and the Duke Endowment, which is hailed as a model charity in philanthropy circles. He also left his daughter a lifelong distrust of others, and she died heirless in 1993, her affairs in as much disarray as her father's were organized.--JB
HENRY FORD (1863-1947) AUTO PILOT
Throughout his life, automotive pioneer Henry Ford had a love-hate relationship with reporters. In 1943, after columnist Drew Pearson dared to suggest that the government take control of the Ford Motor Co. because its chief was too old and frail, the 80-year-old Detroit industrialist and exercise fanatic responded, "I can lick him in any contest he suggests." Ford's adept handling of the press resulted in his being one of the best-known, best-loved and most-publicized figures of his time, despite his famous obstinacy and anti-Semitic writings. Although he didn't invent the automobile, he reinvented it, making it accessible to the public and spurring the American auto industry.
The eldest child of Irish immigrant farmers who settled near Dearborn, Michigan, Ford loved machinery and loathed farm work. He dropped out of school at age 15 and became an engineer. Ford struggled for many years to create a gas-powered car, succeeding in 1896, at age 32, when he drove his first gas-powered car, the Quadricycle, amidst little fanfare through downtown Detroit. Three years later, he built his second car, which caught the attention of several local businessmen. With their financial backing, Ford Motor Co. was launched in June 1903. Named vice president and chief engineer, Ford received a quarter interest in the firm. He publicized his cars through racing and advertising. But it wasn't until he built the now-famous Model T, in 1908, that he changed history.
At the time, cars were priced for the affluent. The Model T, the firm's ninth model, was sold at a more accessible $850, and was met with great enthusiasm. Introducing the first moving assembly line and other economies, Ford continued to reduce the price (it cost only $360 by 1916), and demand increased, with sales reaching more than 472,000 by the First World War. For the next 18 years, Ford was the preeminent automaker, producing more than half the cars sold (a whopping 15.5 million).
Profits soared and so did Ford's reputation as a man with a formidable business acumen. Workers revered him because he believed in paying them well, reducing their work hours and sharing profits. Not content with building cars, Ford made several forays into politics, embarking on an ill-fated 1915 peace campaign to Norway to end the First World War. He ran for the U.S. Senate three years later but was defeated by less than 4,400 votes.
He would never seek public office again but remained politically active, vehemently opposing U.S. involvement in foreign wars and the formation of labor unions. Groomed to run the business, Ford's son, Edsel, died before his father and was later commemorated by the hapless car model named for him. A grandson, Henry Ford II, eventually took over the reins of the company.--Shandana Durrani
W.R. HEARST (1863-1951) PUBLISHING PRINCE
In 1897, when renewed hostilities between Spanish colonialists and Cuban rebels seeking independence threatened, publisher William Randolph Hearst wanted lavish headlines with which to sell newspapers. He dispatched the artist Frederic Remington to Havana to supply images. Remington cabled him: "Everything is quiet. There is no trouble here. There will be no war. I want to return."
Hearst's alleged reply was in keeping with the tone of "yellow journalism," a style he helped create: "Please remain. You furnish the pictures and I'll furnish the war." The son of a millionaire gold miner turned senator, Hearst wasn't your average silver-spooned brat content to take over daddy's business. At 23, Hearst begged his father to let him run the San Francisco Examiner, which the elder Hearst had won as a gambling debt. Spending lavishly for writers, the son built circulation with sensational reportage on scandal and corruption. The newspaper became the cornerstone of an influential media juggernaut.
During the Spanish-American War, Hearst blurred the line between reporting the news and creating it. A Hearst reporter aided the escape of a Cuban political prisoner, and his New York Journal implied that the Maine, an American warship, had been sunk by Spain. A skilled editor, Hearst put his stamp on the empire. In memos, he dispensed helpful hints that became editorial guidelines. Among his dictums: "Don't print a lot of dull stuff that people are supposed to like and don't." Hearst's free spending built and ultimately endangered his empire, which at its height included more than 20 newspapers, nine magazines, telegraphic news facilities, radio stations and motion picture production syndicates. But projects such as San Simeon, his opulent modern-day castle, drained cash.
The Depression hit Hearst's empire hard, and he was forced to borrow money and sell holdings. Like his father, Hearst pursued political ambitions, serving as a U.S. representative from New York and losing gubernatorial and mayoralty races. His wife bore him five sons, before their relationship fell apart. Never bothering to divorce, Hearst lived his final 30 years with film actress Marion Davies. This and other peccadilloes were exposed in the 1941 Orson Welles film Citizen Kane, loosely based on the publisher's life. When he couldn't quash the film's release, Hearst instructed his media empire not to acknowledge it. The film was roundly hailed, but it is also credited with ruining Welles's promising career by tweaking the vindictive Hearst. Hearst passed his still-vital empire onto his sons. His granddaughter Patty became sensationalized in her own right when kidnapped by the Symbionese Liberation Army in the 1970s.--Jason Sheftell
DAVID SARNOFF (1891-1971) NETWORK NABOB
On April 15, 1912, when the Titanic plunged to the depths of the north Atlantic, David Sarnoff was a young telegraph operator manning the wireless machine atop the Wanamaker Department Store in Manhattan. Sarnoff glued himself to the wireless for a reported 72 hours, relaying news of the tragedy to newspapers and the families of the survivors.
"The Titanic disaster brought radio to the front, and also me," he would later say. Sarnoff would use his genius for self-promotion and sense of emerging technology to become the electronic media visionary who popularized radio and television. He created a multinational communication's powerhouse by turning the wireless radio into a mass device of entertainment and news.
He fathered the radio network NBC, which for a time operated two programming networks (the Federal Communications Commission would force RCA to spin one off, which became ABC), and was instrumental in ushering in the age of television. A Russian immigrant, Sarnoff started working in his early teens hawking newspapers, became an office boy and then worked as a telegraph operator for the Marconi Wireless Telegraph Co. of America.
Although fascinated by wireless technology, he focused his efforts on commercializing radio. In 1915, Sarnoff advocated "bringing music into the house by wireless." Scoffed at by the Marconi executive team, Sarnoff repositioned his idea and moved on. A few years later at RCA, he would pen a memo to senior executives, saying: "We must have a suitable apparatus for sale before we can sell [the radio]."
Seeking that apparatus himself, Sarnoff pushed through the first radio sports broadcast in 1921. Radio owners listened as heavyweight boxing champion Jack Dempsey dropped challenger Georges Carpentier. Radio sales skyrocketed as America scanned the radio dial for Glenn Miller and the news. As head of RCA, Sarnoff was the first to string together radio sounds through telephone lines.
In 1926, the National Broadcasting Company's radio network was born. People in Iowa could now listen to a news broadcast from New York. Sarnoff turned the company's resources towards an invention known as the iconoscope, an early television. At the 1939 World's Fair in New York, he delivered another first: a television broadcast. He also felt a duty to deliver programming of high culture to the masses, creating an NBC orchestra and helping to develop high-fidelity FM. The tycoon also had a touch of the imperious. After offering Franklin Roosevelt "all the facilities and personnel" of RCA for the Second World War effort, Sarnoff was named a brigadier general. He liked the title, and was so addressed by colleagues and co-workers thereafter.--JS
RAY KROC (1902-1984) MAC DADDY
The story goes that toward the end of his life McDonald's head Ray Kroc was driving in Southern California when he decided to indulge his penchant for surprise checks of franchise restaurants. Recognizing the Burger Meister, employees rolled out the red carpet for their boss. A subsequent story in a local paper reported that Kroc was chagrined to have been caught in his little ruse, but suggested that his arrival in an $80,000 limousine might have been a tip-off to the countermen. When a friend ribbed him about it, Kroc called the story pure nonsense, saying: "It was a $40,000 limousine. Only an idiot would pay $80,000 for a car."
Apocryphal or not, the story is pure Kroc. He sold burgers by the billions by scrupulously patrolling his preserve and securing every economy. It was at the not-so-tender age of 52 that he had the brainstorm that would put the country on a fast-food diet. Kroc, a milkshake machine salesman, had noticed that one San Bernadino, California, restaurant ordered inordinate numbers of his product, so he went to see it for himself.
What he found was the first McDonald's, a drive-in offering quick, efficient and friendly service coupled with low prices and a carefully selected menu. It had a devoted and passionate customer base, with people driving many miles just to taste its hamburgers, fries and milkshakes. Kroc fell in love with the idea. The owners, Mac and Dick McDonald, were hesitant, but soon approved Kroc's expansive desire to multiply the concept into a nationwide chain. Within a year, Kroc had opened a McDonald's franchise in Des Plaines, Illinois, near his hometown.
Within 10 years, the chain, by then owned by Kroc (he bought out the brothers for $2.7 million in 1961), had expanded to 700 eateries. Today, that number has increased to more than 25,000 McDonald's worldwide, with franchises in Central America, Russia and Japan. Kroc relied heavily on perseverance and hard work. During the first years of operation, Kroc refused to take a dime from McDonald's profits, opting instead to live off his salesman salary.
He wore many hats, often cooking fries, ordering supplies or helping the janitor clean the restaurant. He insisted on exacting standards for every restaurant in the chain (the parking lots of each establishment are cleaned on the same strict schedule and condiments are dabbed out in uniform proportions on every burger) and created a Hamburger U. for prospective franchisees to make sure those standards were kept. He also listened to his employees and franchisees; some of McDonald's most popular items, such as the Big Mac, Filet-O-Fish and Egg McMuffin, were invented by franchise operators.
His passion to make McDonald's succeed did not jibe with a happy home life, however, and he lost two wives to divorce. A third became his widow and oversaw charities funded by the millions he had earned.--SD
THOMAS WATSON JR. (1914-1993) COMPUTER MAGNATE
Many men determine to outdo their father in business. But not many have to go to the extent that Thomas Watson Jr. did. Taking over an already successful IBM from his father, he spent $5 billion (the most expensive private undertaking of its time) to take the company into a new age of computing.
Pronounced as "arguably the greatest capitalist who ever lived" by Fortune magazine in 1987, Watson inherited Big Blue from his father, Tom Sr. Of the transition, Watson said, "Fear of failure became the most powerful force in my life. I think anybody who gets a job like mine, unless he's stupid, must be a little bit afraid." From the time he was born in 1914 (the year his father joined International Business Machines, then called Computing-Tabulating-Recording Co.), trouble seemed to stick to Watson Jr., who was known to his neighbors as Terrible Tommy.
Never a good student, it took Watson six years and three schools to graduate from high school. Growing up in the shadow of his father's success often left him feeling inadequate and lost. Watson went to Brown University where he excelled in drinking and carousing. He joined IBM as a salesman, but was less than enthusiastic. Watson Jr.'s playboy behavior was at odds with his father's puritanical rules. A devotion to flying also distracted young Watson. By his early 20s, he had logged more than 1,000 hours of flying time. A stint in the Second World War as a pilot changed Watson.
Suddenly in a position of authority, he found himself using his father's management techniques. He returned mature and focused. Watson Sr. had built IBM to be the leader in punch-card tabulators. He also built an intensely loyal workforce of dark-suited men. Under Watson Sr., IBM men sang IBM pride songs and worshipped their CEO--who challenged them to think. Things would be different under Watson Jr. He saw the future of the company in electronic computers and for 10 years, battled his father to make the move from punch cards to circuit boards.
Watson Jr. was a tougher boss than his father and executives worked under high stress. During Watson's two decades at the helm, IBM saw revenue grow from $900 million to $7.5 billion and the number of employees rise from 72,500 to 270,000. He introduced the idea of unbundling the technology package, breaking sales down to each aspect of a computer's hardware and software.
At 57, Watson retired. After having a heart attack Watson decided that "I wanted to live more than I wanted to run IBM." But Watson was always more than his corporate persona. While battling antitrust lawsuits, Watson regularly referred to a list he kept in his desk drawer of adventures he had yet to take, such as flying a helicopter and sailing the Arctic. In retirement he served as President Carter's diplomat to the Soviet Union for two years. When he died, at 79, Time called him the "oldest living jet pilot."--Stacey C. Rivera
SAM WALTON (1918-1992) DIMESTORE COWBOY
When Sam Walton finished a peripatetic hitch in the Army at the end of the Second World War, he had a college degree, some experience working at J.C. Penney's, and a hankering to go into retail for himself. He was considering a department store in St. Louis when his wife, Helen, spoke up: "Sam, we've been married two years and we've moved 16 times. Now, I'll go with you anyplace you want so long as you don't ask me to live in a big city."
Mrs. Walton's dictum would aptly define the Wal-Mart retailing empire to come: a chain of discount stores set up in small towns with little competition and low overhead. By following it, Mr. Sam would come from nowhere to be the richest man in America by 1985. But Walton's was a long day's journey into overnight success. It was two decades before he would fully roll out the Wal-Mart concept. The young Missourian made his bones as owner-manager of a number of Ben Franklin variety stores throughout Arkansas.
As he added stores he stuck to small towns and always undercut the competition, feeling he could make up in volume what he lost in margin. As he worked, the gregarious Walton learned--from his own experience and by watching others in the burgeoning world of discount. By 1962, he was ready to start a chain of his own design. When Ben Franklin wouldn't back him, he went out on his own. Hands-on attentiveness, imaginative promotions, relentless expansion, strong customer service and gung-ho employees were the secrets to the Wal-Mart formula. Walton became the cheerleader for the chain.
He once cajoled workers by promising to do the hula on Wall Street if they surpassed projections. Sam danced that jig. After taking Wal-Mart public in 1970, he focused on paying back "associates," or employees, instituting profit-sharing and stock-option programs that left many longtime workers millionaires. He entertained every employee's idea and was willing to try anything on a small basis until it failed. The chain led the industry in using computers for inventory and distribution. Soon Walton was able to circumvent manufacturer's reps and buy directly from the maker getting lower prices and faster delivery.
From this stemmed Walton's Buy American program. Yes, it was flag waving, but Mr. Sam also helped U.S. suppliers compete. Innovation draws detractors. They said he was destroying the character of downtown America, he was killing the small businessman, he was antilabor. But to defend himself Walton pointed to his low prices, the number of retailers who had thrived in his wake, and the employees he had made wealthy. "The whole thing is driven by customers, who are free to choose where to shop."--JB
TED TURNER (1938- ) MEDIA MAVERICK
It was the worst disaster in yachting history. About 300 yachts, including Ted Turner's Tenacious, were competing in 1979's Fastnet race off Ireland when a fierce storm hit. The race became a fight for survival, and Turner and his crew, among many others, were reported missing. By the time the storm passed, 22 sailors had died; only 92 boats crossed the finish line. The Tenacious was first across, with Ted regaling the press at dockside. He's been called Captain Outrageous and Terrible Ted.
But cable TV pioneer Robert Edward Turner III, Time Warner's vice chairman (and its largest shareholder), has also been compared to broadcasting legend William Paley. And no one's ever called him boring. Son of a disciplinarian father, young Ted grew up in military schools. After expulsion from college, he worked for his father Ed's billboard company. A natural salesman, Ted quickly moved up.
In 1963, Ed Turner committed suicide. Few expected Ed's playboy son, then 24, to take over. But Turner expanded the business, buying ailing radio stations and promoting them with his billboards. In the early 1970s, he bought two bankrupt UHF TV stations, investments so controversial that his accountant quit. But behind his impulsiveness was a gift for seeing potential in overlooked properties. He broadcast old movies and TV shows, wrestling and Atlanta Braves games, selling his programming as escapism.
By 1973 he was broadcasting from Atlanta via microwave, creating the first cable network. Four years later he began beaming his signal via satellite to cable systems nationwide, then a radical idea. TBS, his Atlanta "Superstation," would become the country's most profitable. During this time Turner, an avid sailor, raced his yachts, earning four Yachtsman of the Year titles and, in 1977, the America's Cup. In 1980 he launched the Cable News Network, the first global channel. By the 1990s, TBS was cable's largest network with 18 channels, a vast film archive and an unprecedented global reach. In 1996, Turner sold TBS to Time Warner in a $7.5 billion stock swap; his cut--nine percent of Time Warner's common stock.
In January, America Online and Time Warner agreed to merge. Worth an estimated $10 billion, Turner will be vice chairman of the new company. Though Turner's impulsiveness often pays off, his shoot-from-the-lip style has offended many. His mercurial behavior and decades of reputed womanizing contributed to the breakups of his first two marriages. He separated from third wife, Jane Fonda, in January. America's largest private landowner, with 1.4 million acres, Turner has become a conservationist. Promoting global understanding, he created the Goodwill Games and has pledged $1 billion to the United Nations. Critics say his save-the-world dreams are unrealistic. To Turner, they're just another challenge.--TF
BILL GATES (1955- ) SOFTWARE POKER
William H. Gates III had business on his mind right from the beginning. At the ripe old age of 10, he wrote a $5 contract giving him unlimited access to his older sister's baseball mitt. In 1980, Gates inked a far shrewder deal, licensing a disk operating system to IBM Corp. for its new personal computer.
The move gave Gates's young Microsoft Corp. revenues from every computer sold with that system and created a pair of juggernauts--Microsoft has revenues of $19.7 billion, and Gates is, by a comfortable margin, the wealthiest man in the world. Forbes magazine estimates his net worth at $90 billion. The IBM deal was like a poker game, which Gates was ready to play.
The Harvard dropout spent many nights playing cards until daybreak, and his negotiating talents were considerable. He didn't have the software that IBM needed, but he was negotiating with a competitor who did. Gates kept the details cloaked from both parties and bought the system he needed for $50,000. That program became MS-DOS, which powers more than 80 percent of the computers sold in the world today.
Born in 1955 in Seattle, Gates grew up in a large home, the son of a successful lawyer. He wrote his first computer program, a clunky tic-tac-toe game, at age 13, and bonded with fellow computer whiz Paul Allen, later Microsoft's cofounder, over a prehistoric computer, which lacked a screen. All that devotion to poker and computers seemed to leave Gates little room to ponder everyday life.
Steven Ballmer, a college buddy who became Microsoft's president (and recently chief executive officer), once said that Gates never put sheets on his bed, and once left for vacation with the windows and door to his room wide open. And it was raining. Business focus has never been a problem.
Gates is a driven boss, known for shouting "That's the stupidest thing I've ever heard" at an unsavory idea. But the rewards of working for Gates are undeniable. Allen is the third richest man in America ($30 billion), according to Forbes, and Ballmer ($19.5 billion) is No. 4. An estimated 2,500 past and present Microsoft employees are millionaires. Gates has long worried about the long-term success of his company.
"Success is a lousy teacher," he wrote in his 1995 book The Road Ahead. "It seduces smart people into thinking they can't lose." Losing now looks possible--in November a judge found that the company's near monopoly harms consumers. Microsoft could go the way of Ma Bell, but Wall Street hasn't slammed the company's stock, which still trades high.--David Savona
STEVE CASE (1958- ) WEB MASTER
On August 7, 1996, the power went out for more than 6 million America Online subscribers. For nearly 19 hours, customers were cut off from e-mails, the Internet and other interactive offerings provided by the world's biggest online service. A few years earlier, such an outage would have drawn scant attention. But by 1996, the new communications medium had begun to rival the telephone as a way to keep in touch.
"If AOL five years ago had been inaccessible for a weekend, nobody would have known or cared," CEO Steve Case would spin-doctor the cataclysm to The Washington Post the following year. "We were like a little hobby people played with. Suddenly now we were more part of the everyday life." The little company has grown into an Internet powerhouse almost overnight.
Case has led AOL on a buying binge the past few years, acquiring such companies as Netscape Communications, Hughes Electronics and MapQuest.com. But the deal that rocked the media and online worlds came this January, when AOL agreed to purchase Time Warner, the entertainment, publishing and cable behemoth, for a record $165 billion. Just as radio and television transformed earlier eras, online communications has become an indispensable part of modern life.
Under Case, a shy, low-key executive who often wears khakis, AOL has grown to more than 20 million members worldwide, with annual revenues approaching $900 million and e-mail traffic equaling 80 million a day. In spite of well-publicized access and pricing problems, the online service has grown to control more than half of the U.S. home market. When Case attended Williams College in the late 1970s, his least favorite subject was computer programming.
But he was intrigued by the ability of the college's computers to talk to computers in far-off places, and he envisioned a time when computers would facilitate human interaction. Case eventually became involved with a start-up company called Control Video Corp., which struggled unsuccessfully to sell Atari video games for PCs. Rising quickly through the ranks in the '80s, he made deals for the company, now known as Quantum Computer Services, to develop online services for Apple, Tandy and IBM. In 1991 Case was named president and CEO of the company, which he had renamed America Online. He persuaded the board to resist the advances of Bill Gates, who wanted to buy the company as an inroad into the Internet world. After a direct mail campaign put 250 million diskettes into consumers' hands, AOL's membership exploded in 1994. Two years later, AOL usurped Prodigy and CompuServe (it would buy the latter the following year) as the online services leader.--Bruce Goldman
Natural killer (NK) cells are a vital component of the innate immune response to virus-infected cells. It is important to understand the ability of NK cells to recognize and lyse HIV-1 infected cells because identifying any aberrancy in NK cell function against HIV-infected cells could potentially lead to therapies that would enhance their cytolytic activity. There is a need to use HIV-infected primary T-cell blasts as target cells rather than infected-T-cell lines in the cytotoxicity assays. T-cell lines, even without infection, are quite susceptible to NK cell lysis. Furthermore, it is necessary to use autologous primary cells to prevent major histocompatibility complex class I mismatches between the target and effector cell that will result in lysis. Early studies evaluating NK cell cytolytic responses to primary HIV-infected cells failed to show significant killing of the infected cells 1,2. However, using HIV-1 infected primary T-cells as target cells in NK cell functional assays has been difficult due to the presence of contaminating uninfected cells 3. This inconsistent infected cell to uninfected cell ratio will result in variation in NK cell killing between samples that may not be due to variability in donor NK cell function. Thus, it would be beneficial to work with a purified infected cell population in order to standardize the effector to target cell ratios between experiments 3,4. Here we demonstrate the isolation of a highly purified population of HIV-1 infected cells by taking advantage of HIV-1's ability to down-modulate CD4 on infected cells and the availability of commercial kits to remove dead or dying cells 3-6. The purified infected primary T-cell blasts can then be used as targets in either a degranulation or cytotoxic assay with purified NK cells as the effector population 5-7. Use of NK cells as effectors in a degranulation assay evaluates the ability of an NK cell to release the lytic contents of specialized lysosomes 8 called "cytolytic granules". By staining with a fluorochrome conjugated antibody against CD107a, a lysosomal membrane protein that becomes expressed on the NK cell surface when the cytolytic granules fuse to the plasma membrane, we can determine what percentage of NK cells degranulate in response to target cell recognition. Alternatively, NK cell lytic activity can be evaluated in a cytotoxic assay that allows for the determination of the percentage of target cells lysed by release of 51Cr from within the target cell in the presence of NK cells.
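For reference, the cytotoxic readout described above is conventionally reduced to percent specific lysis computed from experimental, spontaneous, and maximum 51Cr release. The short Python sketch below illustrates only that arithmetic; the counts-per-minute values and effector-to-target ratios are invented for illustration and are not data from this protocol.
```python
# Generic percent-specific-lysis calculation for a 51Cr-release assay.
# Counts per minute (cpm) values below are invented for illustration only.

def percent_specific_lysis(experimental_cpm, spontaneous_cpm, maximum_cpm):
    """Standard formula: 100 * (E - S) / (M - S)."""
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

spontaneous = 250.0   # targets alone (spontaneous release)
maximum = 2600.0      # targets lysed with detergent (maximum release)

# Hypothetical triplicate-averaged cpm at several effector:target ratios
experimental = {"10:1": 1900.0, "5:1": 1300.0, "2.5:1": 800.0}

for ratio, cpm in experimental.items():
    print(f"E:T {ratio}: {percent_specific_lysis(cpm, spontaneous, maximum):.1f}% specific lysis")
```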
21 Related JoVE Articles!
Isolation of Mouse Lung Dendritic Cells
Institutions: Louisiana State University .
Lung dendritic cells (DC) play a fundamental role in sensing invading pathogens 1,2 as well as in the control of tolerogenic responses 3 in the respiratory tract. At least three main subsets of lung dendritic cells have been described in mice: conventional DC (cDC) 4, plasmacytoid DC (pDC) 5 and the IFN-producing killer DC (IKDC) 6,7. The cDC subset is the most prominent DC subset in the lung 8.
The common marker known to identify DC subsets is CD11c, a type I transmembrane integrin (β2) that is also expressed on monocytes, macrophages, neutrophils and some B cells 9. In some tissues, using CD11c as a marker to identify mouse DC is valid, as in spleen, where most CD11c+ cells represent the cDC subset which expresses high levels of the major histocompatibility complex class II (MHC-II). However, the lung is a more heterogeneous tissue where, besides DC subsets, there is a high percentage of a distinct cell population that expresses high levels of CD11c but low levels of MHC-II. Based on its characterization and mostly on its expression of F4/80, a splenic macrophage marker, the CD11chi lung cell population has been identified as pulmonary macrophages 10 and more recently, as a potential DC precursor 11.
In contrast to mouse pDC, the study of the specific role of cDC in the pulmonary immune response has been limited due to the lack of a specific marker that could help in the isolation of these cells. Therefore, in this work, we describe a procedure to isolate highly purified mouse lung cDC. The isolation of pulmonary DC subsets represents a very useful tool to gain insights into the function of these cells in response to respiratory pathogens as well as environmental factors that can trigger the host immune response in the lung.
Immunology, Issue 57, Lung, dendritic cells, classical, conventional, isolation, mouse, innate immunity, pulmonary
Expansion, Purification, and Functional Assessment of Human Peripheral Blood NK Cells
Institutions: MD Anderson Cancer Center - University of Texas.
Natural killer (NK) cells play an important role in immune surveillance against a variety of infectious microorganisms and tumors. Limited availability of NK cells and ability to expand in vitro has restricted development of NK cell immunotherapy. Here we describe a method to efficiently expand vast quantities of functional NK cells ex vivo using K562 cells expressing membrane-bound IL21, as an artificial antigen-presenting cell (aAPC).
NK cell adoptive therapies to date have utilized a cell product obtained by steady-state leukapheresis of the donor followed by depletion of T cells or positive selection of NK cells. The product is usually activated in IL-2 overnight and then administered the following day 1. Because of the low frequency of NK cells in peripheral blood, relatively small numbers of NK cells have been delivered in clinical trials.
The inability to propagate NK cells in vitro has been the limiting factor for generating sufficient cell numbers for optimal clinical outcome. Some expansion of NK cells (5-10 fold over 1-2 weeks) has been achieved through high-dose IL-2 alone 2. Activation of autologous T cells can mediate NK cell expansion, presumably also through release of local cytokine 3. Support with mesenchymal stroma or artificial antigen presenting cells (aAPCs) can support the expansion of NK cells from both peripheral blood and cord blood 4. Combined NKp46 and CD2 activation by antibody-coated beads is currently marketed for NK cell expansion (Miltenyi Biotec, Auburn CA), resulting in approximately 100-fold expansion in 21 days.
Clinical trials using aAPC-expanded or -activated NK cells are underway, one using leukemic cell line CTV-1 to prime and activate NK cells 5 without significant expansion. A second trial utilizes EBV-LCL for NK cell expansion, achieving a mean 490-fold expansion in 21 days 6. The third utilizes a K562-based aAPC transduced with 4-1BBL (CD137L) and membrane-bound IL-15 (mIL-15) 7, which achieved a mean NK expansion of 277-fold in 21 days. Although the NK cells expanded using K562-41BBL-mIL15 aAPC are highly cytotoxic in vitro and in vivo compared to unexpanded NK cells, and participate in ADCC, their proliferation is limited by senescence attributed to telomere shortening 8. More recently a 350-fold expansion of NK cells was reported using K562 expressing MICA, 4-1BBL and IL15 9.
Our method of NK cell expansion described herein produces rapid proliferation of NK cells without senescence achieving a median 21,000-fold expansion in 21 days.
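For context, the reported median 21,000-fold expansion over 21 days corresponds to roughly 14 population doublings, or a doubling time on the order of a day and a half; the snippet below simply shows that arithmetic.
```python
import math

fold_expansion = 21000   # median fold expansion reported above
days = 21

doublings = math.log2(fold_expansion)          # about 14.4 population doublings
print(f"{doublings:.1f} doublings, ~{days / doublings:.1f} days per doubling")
```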
Immunology, Issue 48, Natural Killer Cells, Tumor Immunology, Antigen Presenting Cells, Cytotoxicity
Analysis of Pulmonary Dendritic Cell Maturation and Migration during Allergic Airway Inflammation
Institutions: McMaster University, Hamilton, University of Toronto.
Dendritic cells (DCs) are the key players involved in initiation of adaptive immune response by activating antigen-specific T cells. DCs are present in peripheral tissues in steady state; however in response to antigen stimulation, DCs take up the antigen and rapidly migrate to the draining lymph nodes where they initiate T cell response against the antigen 1,2. Additionally, DCs also play a key role in initiating autoimmune as well as allergic immune response 3.
DCs play an essential role in both initiation of immune response and induction of tolerance in the setting of lung environment 4. Lung environment is largely tolerogenic, owing to the exposure to vast array of environmental antigens 5. However, in some individuals there is a break in tolerance, which leads to induction of allergy and asthma. In this study, we describe a strategy, which can be used to monitor airway DC maturation and migration in response to the antigen used for sensitization. The measurement of airway DC maturation and migration allows for assessment of the kinetics of immune response during airway allergic inflammation and also assists in understanding the magnitude of the subsequent immune response along with the underlying mechanisms.
Our strategy is based on the use of ovalbumin as a sensitizing agent. Ovalbumin-induced allergic asthma is a widely used model to reproduce the airway eosinophilia, pulmonary inflammation and elevated IgE levels found during asthma 6,7. After sensitization, mice are challenged by intranasal delivery of FITC labeled ovalbumin, which allows for specific labeling of airway DCs which uptake ovalbumin. Next, using several DC specific markers, we can assess the maturation of these DCs and can also assess their migration to the draining lymph nodes by employing flow cytometry.
Immunology, Issue 65, Medicine, Physiology, Dendritic Cells, allergic airway inflammation, ovalbumin, lymph nodes, lungs, dendritic cell maturation, dendritic cell migration, mediastinal lymph nodes
Cell-based Flow Cytometry Assay to Measure Cytotoxic Activity
Institutions: Vaccine and Gene Therapy Institute of Florida.
Cytolytic activity of CD8+ T cells is rarely evaluated. We describe here a new cell-based assay to measure the capacity of antigen-specific CD8+ T cells to kill CD4+ T cells loaded with their cognate peptide. Target CD4+ T cells are divided into two populations, labeled with two different concentrations of CFSE. One population is pulsed with the peptide of interest (CFSE-low) while the other remains un-pulsed (CFSE-high). Pulsed and un-pulsed CD4+ T cells are mixed at an equal ratio and incubated with an increasing number of purified CD8+ T cells. The specific killing of autologous target CD4+ T cells is analyzed by flow cytometry after coculture with CD8+ T cells containing the antigen-specific effector CD8+ T cells detected by peptide/MHCI tetramer staining. The specific lysis of target CD4+ T cells measured at different effector versus target ratios allows for the calculation of lytic units, LU30 cells. This simple and straightforward assay allows for the accurate measurement of the intrinsic capacity of CD8+ T cells to kill target CD4+ T cells.
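A common way to turn the flow cytometry readout described above into a killing value is to compare the ratio of peptide-pulsed (CFSE-low) to un-pulsed (CFSE-high) targets in wells containing effectors against the same ratio in a control well without effectors. The sketch below illustrates that calculation with invented event counts; it is a generic example rather than the exact formula mandated by this protocol.
```python
# Hypothetical flow-cytometry event counts (illustrative only).
# CFSE-low = peptide-pulsed targets, CFSE-high = un-pulsed targets.

def percent_specific_killing(pulsed, unpulsed, pulsed_ctrl, unpulsed_ctrl):
    """100 * (1 - (pulsed/unpulsed with effectors) / (pulsed/unpulsed without effectors))."""
    ratio_effector = pulsed / unpulsed
    ratio_control = pulsed_ctrl / unpulsed_ctrl
    return 100.0 * (1.0 - ratio_effector / ratio_control)

# Control well: targets only (no CD8+ effectors)
pulsed_ctrl, unpulsed_ctrl = 5000, 5100

# Wells with increasing effector:target ratios
wells = {"3:1": (4200, 5050), "10:1": (2900, 5000), "30:1": (1600, 4950)}

for et_ratio, (pulsed, unpulsed) in wells.items():
    killing = percent_specific_killing(pulsed, unpulsed, pulsed_ctrl, unpulsed_ctrl)
    print(f"E:T {et_ratio}: {killing:.1f}% specific lysis")
```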
Immunology, Issue 82, Cytotoxicity, Effector CD8+ T cells, Tetramers, Target CD4+ T cells, CFSE, Flow cytometry
Culturing of Human Nasal Epithelial Cells at the Air Liquid Interface
Institutions: The University of North Carolina at Chapel Hill, The University of North Carolina at Chapel Hill, The University of North Carolina at Chapel Hill, The University of North Carolina at Chapel Hill.
In vitro models using human primary epithelial cells are essential in understanding key functions of the respiratory epithelium in the context of microbial infections or inhaled agents. Direct comparisons of cells obtained from diseased populations allow us to characterize different phenotypes and dissect the underlying mechanisms mediating changes in epithelial cell function. Culturing epithelial cells from the human tracheobronchial region has been well documented, but is limited by the availability of human lung tissue or invasiveness associated with obtaining the bronchial brush biopsies. Nasal epithelial cells are obtained through much less invasive superficial nasal scrape biopsies and subjects can be biopsied multiple times with no significant side effects. Additionally, the nose is the entry point to the respiratory system and therefore one of the first sites to be exposed to any kind of air-borne stressor, such as microbial agents, pollutants, or allergens.
Briefly, nasal epithelial cells obtained from human volunteers are expanded on coated tissue culture plates, and then transferred onto cell culture inserts. Upon reaching confluency, cells continue to be cultured at the air-liquid interface (ALI), for several weeks, which creates more physiologically relevant conditions. The ALI culture condition uses defined media leading to a differentiated epithelium that exhibits morphological and functional characteristics similar to the human nasal epithelium, with both ciliated and mucus producing cells. Tissue culture inserts with differentiated nasal epithelial cells can be manipulated in a variety of ways depending on the research questions (treatment with pharmacological agents, transduction with lentiviral vectors, exposure to gases, or infection with microbial agents) and analyzed for numerous different endpoints ranging from cellular and molecular pathways, functional changes, morphology, etc.
In vitro models of differentiated human nasal epithelial cells will enable investigators to address novel and important research questions by using organotypic experimental models that largely mimic the nasal epithelium in vivo.
Cellular Biology, Issue 80, Epithelium, Cell culture models, ciliated, air pollution, co-culture models, nasal epithelium
High-throughput Detection Method for Influenza Virus
Institutions: Blood Research Institute, Mount Sinai School of Medicine , Blood Research Institute, City of Milwaukee Health Department Laboratory, Medical College of Wisconsin , Medical College of Wisconsin .
Influenza virus is a respiratory pathogen that causes a high degree of morbidity and mortality every year in multiple parts of the world. Therefore, precise diagnosis of the infecting strain and rapid high-throughput screening of vast numbers of clinical samples is paramount to control the spread of pandemic infections. Current clinical diagnoses of influenza infections are based on serologic testing, polymerase chain reaction, direct specimen immunofluorescence and cell culture 1,2.
Here, we report the development of a novel diagnostic technique used to detect live influenza viruses. We used the mouse-adapted human A/PR/8/34 (PR8, H1N1) virus 3 to test the efficacy of this technique using MDCK cells 4. MDCK cells (10^4 or 5 x 10^3 per well) were cultured in 96- or 384-well plates, infected with PR8 and viral proteins were detected using anti-M2 followed by an IR dye-conjugated secondary antibody. M2 5 and hemagglutinin 1 are two major marker proteins used in many different diagnostic assays. Employing IR-dye-conjugated secondary antibodies minimized the autofluorescence associated with other fluorescent dyes. The use of anti-M2 antibody allowed us to use the antigen-specific fluorescence intensity as a direct metric of viral quantity. To enumerate the fluorescence intensity, we used the LI-COR Odyssey-based IR scanner. This system uses two channel laser-based IR detections to identify fluorophores and differentiate them from background noise. The first channel excites at 680 nm and emits at 700 nm to help quantify the background. The second channel detects fluorophores that excite at 780 nm and emit at 800 nm. Scanning of PR8-infected MDCK cells in the IR scanner indicated a viral titer-dependent bright fluorescence. A positive correlation of fluorescence intensity to virus titer starting from 10^2 PFU could be consistently observed. Minimal but detectable positivity consistently seen with 10^2 PFU PR8 viral titers demonstrated the high sensitivity of the near-IR dyes. The signal-to-noise ratio was determined by comparing the mock-infected or isotype antibody-treated MDCK cells.
Using the fluorescence intensities from 96- or 384-well plate formats, we constructed standard titration curves. In these calculations, the first variable is the viral titer while the second variable is the fluorescence intensity. Therefore, we used the exponential distribution to generate a curve-fit to determine the polynomial relationship between the viral titers and fluorescence intensities. Collectively, we conclude that IR dye-based protein detection system can help diagnose infecting viral strains and precisely enumerate the titer of the infecting pathogens.
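Since the abstract above only sketches how the titration curve relates fluorescence intensity to viral titer, here is a minimal, hypothetical illustration of one way such a standard curve could be fit and then inverted for an unknown sample. The intensity values, the log-log polynomial form, and the example sample are assumptions for illustration, not the curve fit actually used in the cited work.
```python
# Sketch of a standard curve relating known viral titers to measured IR
# fluorescence, then estimating the titer of an unknown sample.
# All numbers are invented for illustration; they are not data from this assay.
import numpy as np

known_pfu = np.array([1e2, 1e3, 1e4, 1e5, 1e6])           # reference titers (PFU)
fluorescence = np.array([1.2, 3.8, 11.5, 30.2, 82.0])      # integrated intensities

# Fit log10(titer) as a 2nd-order polynomial of log10(fluorescence)
coeffs = np.polyfit(np.log10(fluorescence), np.log10(known_pfu), deg=2)
curve = np.poly1d(coeffs)

unknown_signal = 20.0                                       # measured intensity of a sample
estimated_pfu = 10 ** curve(np.log10(unknown_signal))
print(f"Estimated titer: {estimated_pfu:.2e} PFU")
```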
Immunology, Issue 60, Influenza virus, Virus titer, Epithelial cells
Quantitative Analyses of all Influenza Type A Viral Hemagglutinins and Neuraminidases using Universal Antibodies in Simple Slot Blot Assays
Institutions: Health canada, The State Food and Drug Administration, Beijing, University of Ottawa, King Abdulaziz University, Public Health Agency of Canada.
Hemagglutinin (HA) and neuraminidase (NA) are two surface proteins of influenza viruses which are known to play important roles in the viral life cycle and the induction of protective immune responses 1,2. As the main target for neutralizing antibodies, HA is currently used as the influenza vaccine potency marker and is measured by single radial immunodiffusion (SRID) 3. However, the dependence of SRID on the availability of the corresponding subtype-specific antisera causes a minimum of 2-3 months delay for the release of every new vaccine. Moreover, despite evidence that NA also induces protective immunity 4, the amount of NA in influenza vaccines is not yet standardized due to a lack of appropriate reagents or analytical method 5. Thus, simple alternative methods capable of quantifying HA and NA antigens are desirable for rapid release and better quality control of influenza vaccines.
Universally conserved regions in all available influenza A HA and NA sequences were identified by bioinformatics analyses 6-7. One sequence (designated as Uni-1) was identified in the only universally conserved epitope of HA, the fusion peptide 6, while two conserved sequences were identified in neuraminidases, one close to the enzymatic active site (designated as HCA-2) and the other close to the N-terminus (designated as HCA-3) 7. Peptides with these amino acid sequences were synthesized and used to immunize rabbits for the production of antibodies. The antibody against the Uni-1 epitope of HA was able to bind to 13 subtypes of influenza A HA (H1-H13) while the antibodies against the HCA-2 and HCA-3 regions of NA were capable of binding all 9 NA subtypes. All antibodies showed remarkable specificity against the viral sequences as evidenced by the observation that no cross-reactivity to allantoic proteins was detected. These universal antibodies were then used to develop slot blot assays to quantify HA and NA in influenza A vaccines without the need for specific antisera 7,8. Vaccine samples were applied onto a PVDF membrane using a slot blot apparatus along with reference standards diluted to various concentrations. For the detection of HA, samples and standard were first diluted in Tris-buffered saline (TBS) containing 4M urea while for the measurement of NA they were diluted in TBS containing 0.01% Zwittergent as these conditions significantly improved the detection sensitivity. Following the detection of the HA and NA antigens by immunoblotting with their respective universal antibodies, signal intensities were quantified by densitometry. Amounts of HA and NA in the vaccines were then calculated using a standard curve established with the signal intensities of the various concentrations of the references used.
Given that these antibodies bind to universal epitopes in HA or NA, interested investigators could use them as research tools in immunoassays other than the slot blot only.
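As a minimal sketch of the final quantification step (reading antigen amounts off a standard curve built from the reference dilutions), the snippet below interpolates hypothetical densitometry signals against invented reference loadings; the numbers, units, and the use of simple linear interpolation are assumptions for illustration only.
```python
# Hypothetical densitometry signals for a reference antigen dilution series
# and linear interpolation of unknown vaccine samples against that curve.
import numpy as np

reference_ug = np.array([0.125, 0.25, 0.5, 1.0, 2.0])      # µg HA loaded per slot (assumed)
reference_signal = np.array([310, 640, 1250, 2480, 4900])   # band intensities (assumed)

sample_signals = np.array([820, 1900, 3600])                # unknown vaccine slots (assumed)

# np.interp expects increasing x values; the reference signals above are increasing.
estimated_ug = np.interp(sample_signals, reference_signal, reference_ug)
for sig, ug in zip(sample_signals, estimated_ug):
    print(f"signal {sig}: ~{ug:.2f} µg HA per slot")
```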
Immunology, Issue 50, Virology, influenza, hemagglutinin, neuraminidase, quantification, universal antibody
Rescue of Recombinant Newcastle Disease Virus from cDNA
Institutions: Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, University of Rochester.
Newcastle disease virus (NDV), the prototype member of the Avulavirus genus of the family Paramyxoviridae 1, is a non-segmented, negative-sense, single-stranded, enveloped RNA virus (Figure 1) with potential applications as a vector for vaccination and treatment of human diseases. In-depth exploration of these applications has only become possible after the establishment of reverse genetics techniques to rescue recombinant viruses from plasmids encoding their complete genomes as cDNA 2-5. Viral cDNA can be conveniently modified in vitro by using standard cloning procedures to alter the genotype of the virus and/or to include new transcriptional units. Rescue of such genetically modified viruses provides a valuable tool to understand factors affecting multiple stages of infection, as well as allows for the development and improvement of vectors for the expression and delivery of antigens for vaccination and therapy. Here we describe a protocol for the rescue of recombinant NDVs.
Immunology, Issue 80, Paramyxoviridae, Vaccines, Oncolytic Virotherapy, Immunity, Innate, Newcastle disease virus (NDV), MVA-T7, reverse genetics techniques, plasmid transfection, recombinant virus, HA assay
Expression of Functional Recombinant Hemagglutinin and Neuraminidase Proteins from the Novel H7N9 Influenza Virus Using the Baculovirus Expression System
Institutions: Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai.
The baculovirus expression system is a powerful tool for expression of recombinant proteins. Here we use it to produce correctly folded and glycosylated versions of the influenza A virus surface glycoproteins - the hemagglutinin (HA) and the neuraminidase (NA). As an example, we chose the HA and NA proteins expressed by the novel H7N9 virus that recently emerged in China. However, the protocol can be easily adapted for HA and NA proteins expressed by any other influenza A and B virus strains. Recombinant HA (rHA) and NA (rNA) proteins are important reagents for immunological assays such as ELISPOT and ELISA, and are also in wide use for vaccine standardization, antibody discovery, isolation and characterization. Furthermore, recombinant NA molecules can be used to screen for small molecule inhibitors and are useful for characterization of the enzymatic function of the NA, as well as its sensitivity to antivirals. Recombinant HA proteins are also being tested as experimental vaccines in animal models, and a vaccine based on recombinant HA was recently licensed by the FDA for use in humans. The method we describe here to produce these molecules is straightforward and can facilitate research in influenza laboratories, since it allows for production of large amounts of proteins fast and at a low cost. Although here we focus on influenza virus surface glycoproteins, this method can also be used to produce other viral and cellular surface proteins.
Infection, Issue 81, Influenza A virus, Orthomyxoviridae Infections, Influenza, Human, Influenza in Birds, Influenza Vaccines, hemagglutinin, neuraminidase, H7N9, baculovirus, insect cells, recombinant protein expression
Killer Artificial Antigen Presenting Cells (KaAPC) for Efficient In Vitro Depletion of Human Antigen-specific T Cells
Institutions: Johns Hopkins University, University of Regensburg, Asklepios Medical Center.
Current treatment of T cell mediated autoimmune diseases relies mostly on strategies of global immunosuppression, which, in the long term, is accompanied by adverse side effects such as a reduced ability to control infections or malignancies. Therefore, new approaches need to be developed that target only the disease mediating cells and leave the remaining immune system intact. Over the past decade a variety of cell based immunotherapy strategies to modulate T cell mediated immune responses have been developed. Most of these approaches rely on tolerance-inducing antigen presenting cells (APC). However, in addition to being technically difficult and cumbersome, such cell-based approaches are highly sensitive to cytotoxic T cell responses, which limits their therapeutic capacity. Here we present a protocol for the generation of non-cellular killer artificial antigen presenting cells (KaAPC), which allows for the depletion of pathologic T cells while leaving the remaining immune system untouched and functional. KaAPC is an alternative solution to cellular immunotherapy which has potential for treating autoimmune diseases and allograft rejections by regulating undesirable T cell responses in an antigen specific fashion.
Immunology, Issue 90, Autoimmunity, Apoptosis, antigen-specific CD8+ T cells, HLA-A2-Ig, Fas/FasL, KaAPC
Activation and Measurement of NLRP3 Inflammasome Activity Using IL-1β in Human Monocyte-derived Dendritic Cells
Institutions: New York University School of Medicine, Mount Sinai Medical Center, Mount Sinai Medical Center.
Inflammatory processes resulting from the secretion of Interleukin (IL)-1 family cytokines by immune cells lead to local or systemic inflammation, tissue remodeling and repair, and virologic control 1,2. Interleukin-1β is an essential element of the innate immune response and contributes to eliminate invading pathogens while preventing the establishment of persistent infection 1-5.
Inflammasomes are the key signaling platform for the activation of interleukin 1 converting enzyme (ICE or Caspase-1). The NLRP3 inflammasome requires at least two signals in DCs to cause IL-1β secretion 6. Pro-IL-1β protein expression is limited in resting cells; therefore a priming signal is required for IL-1β transcription and protein expression. A second signal sensed by NLRP3 results in the formation of the multi-protein NLRP3 inflammasome. The ability of dendritic cells to respond to the signals required for IL-1β secretion can be tested using a synthetic purine, R848, which is sensed by TLR8 in human monocyte derived dendritic cells (moDCs) to prime cells, followed by activation of the NLRP3 inflammasome with the bacterial toxin and potassium ionophore, nigericin.
Monocyte derived DCs are easily produced in culture and provide significantly more cells than purified human myeloid DCs. The method presented here differs from other inflammasome assays in that it uses in vitro human, instead of mouse derived, DCs thus allowing for the study of the inflammasome in human disease and infection.
Immunology, Issue 87, NLRP3, inflammasome, IL-1beta, Interleukin-1 beta, dendritic, cell, Nigericin, Toll-Like Receptor 8, TLR8, R848, Monocyte Derived Dendritic Cells
A Mouse Tumor Model of Surgical Stress to Explore the Mechanisms of Postoperative Immunosuppression and Evaluate Novel Perioperative Immunotherapies
Institutions: Ottawa Hospital Research Institute, University of Ottawa, University of Ottawa, The Second Hospital of Shandong University, University of Tabuk, Ottawa General Hospital.
Surgical resection is an essential treatment for most cancer patients, but surgery induces dysfunction in the immune system and this has been linked to the development of metastatic disease in animal models and in cancer patients. Preclinical work from our group and others has demonstrated a profound suppression of innate immune function, specifically NK cells in the postoperative period and this plays a major role in the enhanced development of metastases following surgery. Relatively few animal studies and clinical trials have focused on characterizing and reversing the detrimental effects of cancer surgery. Using a rigorous animal model of spontaneously metastasizing tumors and surgical stress, the enhancement of cancer surgery on the development of lung metastases was demonstrated. In this model, 4T1 breast cancer cells are implanted in the mouse mammary fat pad. At day 14 post tumor implantation, a complete resection of the primary mammary tumor is performed in all animals. A subset of animals receives additional surgical stress in the form of an abdominal nephrectomy. At day 28, lung tumor nodules are quantified. When immunotherapy was given immediately preoperatively, a profound activation of immune cells which prevented the development of metastases following surgery was detected. While the 4T1 breast tumor surgery model allows for the simulation of the effects of abdominal surgical stress on tumor metastases, its applicability to other tumor types needs to be tested. The current challenge is to identify safe and promising immunotherapies in preclinical mouse models and to translate them into viable perioperative therapies to be given to cancer surgery patients to prevent the recurrence of metastatic disease.
Medicine, Issue 85, mouse, tumor model, surgical stress, immunosuppression, perioperative immunotherapy, metastases
Development, Expansion, and In vivo Monitoring of Human NK Cells from Human Embryonic Stem Cells (hESCs) and Induced Pluripotent Stem Cells (iPSCs)
Institutions: University of Minnesota, Minneapolis, University of Minnesota, Minneapolis.
We present a method for deriving natural killer (NK) cells from undifferentiated hESCs and iPSCs using a feeder-free approach. This method gives rise to high levels of NK cells after 4 weeks culture and can undergo further 2-log expansion with artificial antigen presenting cells. hESC- and iPSC-derived NK cells developed in this system have a mature phenotype and function. The production of large numbers of genetically modifiable NK cells is applicable for both basic mechanistic as well as anti-tumor studies. Expression of firefly luciferase in hESC-derived NK cells allows a non-invasive approach to follow NK cell engraftment, distribution, and function. We also describe a dual-imaging scheme that allows separate monitoring of two different cell populations to more distinctly characterize their interactions in vivo. This method of derivation, expansion, and dual in vivo imaging provides a reliable approach for producing NK cells and their evaluation which is necessary to improve current NK cell adoptive therapies.
Stem Cell Biology, Issue 74, Bioengineering, Biomedical Engineering, Medicine, Physiology, Anatomy, Cellular Biology, Molecular Biology, Biochemistry, Hematology, Embryonic Stem Cells, ESCs, ES Cells, Hematopoietic Stem Cells, HSC, Pluripotent Stem Cells, Induced Pluripotent Stem Cells, iPSCs, Luciferases, Firefly, Immunotherapy, Immunotherapy, Adoptive, stem cells, differentiation, NK cells, in vivo imaging, fluorescent imaging, turboFP650, FACS, cell culture
Development of an IFN-γ ELISpot Assay to Assess Varicella-Zoster Virus-specific Cell-mediated Immunity Following Umbilical Cord Blood Transplantation
Institutions: Université de Montréal, Université de Montréal, Université de Montréal.
Varicella zoster virus (VZV) is a significant cause of morbidity and mortality following umbilical cord blood transplantation (UCBT). For this reason, antiherpetic prophylaxis is administered systematically to pediatric UCBT recipients to prevent complications associated with VZV infection, but there is no strong, evidence based consensus that defines its optimal duration. Because T cell mediated immunity is responsible for the control of VZV infection, assessing the reconstitution of VZV specific T cell responses following UCBT could provide indications as to whether prophylaxis should be maintained or can be discontinued. To this end, a VZV specific gamma interferon (IFN-γ) enzyme-linked immunospot (ELISpot) assay was developed to characterize IFN-γ production by T lymphocytes in response to in vitro stimulation with irradiated live attenuated VZV vaccine. This assay provides a rapid, reproducible and sensitive measurement of VZV specific cell mediated immunity suitable for monitoring the reconstitution of VZV specific immunity in a clinical setting and assessing immune responsiveness to VZV antigens.
Immunology, Issue 89, Varicella zoster virus, cell-mediated immunity, T cells, interferon gamma, ELISpot, umbilical cord blood transplantation
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Assessing the Development of Murine Plasmacytoid Dendritic Cells in Peyer's Patches Using Adoptive Transfer of Hematopoietic Progenitors
Institutions: The University of Texas MD Anderson Cancer Center, The University of Texas Graduate School of Biomedical Sciences.
This protocol details a method to analyze the ability of purified hematopoietic progenitors to generate plasmacytoid dendritic cells (pDC) in intestinal Peyer's patch (PP). Common dendritic cell progenitors (CDPs, lin-) were purified from the bone marrow of C57BL6 mice by FACS and transferred to recipient mice that lack a significant pDC population in PP; in this case, Ifnar-/- mice were used as the transfer recipients. In some mice, overexpression of the dendritic cell growth factor Flt3 ligand (Flt3L) was enforced prior to adoptive transfer of CDPs, using hydrodynamic gene transfer (HGT) of Flt3L-encoding plasmid. Flt3L overexpression expands DC populations originating from transferred (or endogenous) hematopoietic progenitors. At 7-10 days after progenitor transfer, pDCs that arise from the adoptively transferred progenitors were distinguished from recipient cells on the basis of CD45 marker expression, with pDCs from transferred CDPs being CD45.1+ and recipients being CD45.2+. The ability of transferred CDPs to contribute to the pDC population in PP and to respond to Flt3L was evaluated by flow cytometry of PP single cell suspensions from recipient mice. This method may be used to test whether other progenitor populations are capable of generating PP pDCs. In addition, this approach could be used to examine the role of factors that are predicted to affect pDC development in PP, by transferring progenitor subsets with an appropriate knockdown, knockout or overexpression of the putative developmental factor and/or by manipulating circulating cytokines via HGT. This method may also allow analysis of how PP pDCs affect the frequency or function of other immune subsets in PPs. A unique feature of this method is the use of Ifnar-/- mice, which show severely depleted PP pDCs relative to wild type animals, thus allowing reconstitution of PP pDCs in the absence of confounding effects from lethal irradiation.
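Because donor- and recipient-derived cells are distinguished only by the CD45.1/CD45.2 markers, the contribution of transferred CDPs to the PP pDC pool reduces to a simple frequency calculation on the gated pDC events; the event counts in the sketch below are hypothetical and are shown purely to illustrate that step.
```python
# Hypothetical gated pDC event counts from one recipient's Peyer's patch sample.
donor_cd45_1 = 1350     # pDCs derived from transferred CD45.1+ CDPs
recipient_cd45_2 = 410  # endogenous CD45.2+ pDCs

total = donor_cd45_1 + recipient_cd45_2
print(f"Donor-derived pDCs: {100.0 * donor_cd45_1 / total:.1f}% of PP pDCs")
```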
Immunology, Issue 85, hematopoiesis, dendritic cells, Peyer's patch, cytokines, adoptive transfer
In Vitro Analysis of Myd88-mediated Cellular Immune Response to West Nile Virus Mutant Strain Infection
Institutions: The University of Texas Medical Branch, The University of Texas Medical Branch, The University of Texas Medical Branch.
An attenuated West Nile virus (WNV), a nonstructural (NS) 4B-P38G mutant, induced higher innate cytokine and T cell responses than the wild-type WNV in mice. Recently, myeloid differentiation factor 88 (MyD88) signaling was shown to be important for initial T cell priming and memory T cell development during WNV NS4B-P38G mutant infection. In this study, two flow cytometry-based methods – an in vitro T cell priming assay and an intracellular cytokine staining (ICS) – were utilized to assess dendritic cells (DCs) and T cell functions. In the T cell priming assay, cell proliferation was analyzed by flow cytometry following co-culture of DCs from both groups of mice with carboxyfluorescein succinimidyl ester (CFSE)-labeled CD4+ T cells of OTII transgenic mice. This approach provided an accurate determination of the percentage of proliferating CD4+ T cells with significantly improved overall sensitivity compared to the traditional assays with radioactive reagents.
A microcentrifuge tube system was used in both cell culture and cytokine staining procedures of the ICS protocol. Compared to the traditional tissue culture plate-based system, this modified procedure was easier to perform at biosafety level (BL) 3 facilities. Moreover, WNV-infected cells were treated with paraformaldehyde in both assays, which enabled further analysis outside BL3 facilities. Overall, these in vitro immunological assays can be used to efficiently assess cell-mediated immune responses during WNV infection.
Artificial Antigen Presenting Cell (aAPC) Mediated Activation and Expansion of Natural Killer T Cells
Institutions: University of Maryland .
Natural killer T (NKT) cells are a unique subset of T cells that display markers characteristic of both natural killer (NK) cells and T cells 1. Unlike classical T cells, NKT cells recognize lipid antigen in the context of CD1 molecules 2. NKT cells express an invariant TCRα chain rearrangement: Vα14Jα18 in mice and Vα24Jα18 in humans, which is associated with Vβ chains of limited diversity 3-6, and are referred to as canonical or invariant NKT (iNKT) cells. Similar to conventional T cells, NKT cells develop from CD4-CD8- thymic precursor T cells following the appropriate signaling by CD1d 7.
The potential to utilize NKT cells for therapeutic purposes has significantly increased with the ability to stimulate and expand human NKT cells with α-Galactosylceramide (α-GalCer) and a variety of cytokines 8. Importantly, these cells retained their original phenotype, secreted cytokines, and displayed cytotoxic function against tumor cell lines. Thus, ex vivo expanded NKT cells remain functional and can be used for adoptive immunotherapy. However, NKT cell-based immunotherapy has been limited by the use of autologous antigen presenting cells and the quantity and quality of these stimulator cells can vary substantially. Monocyte-derived DC from cancer patients have been reported to express reduced levels of costimulatory molecules and produce less inflammatory cytokines 9,10. In fact, murine DC rather than autologous APC have been used to test the function of NKT cells from CML patients 11. However, this system can only be used for in vitro testing since NKT cells cannot be expanded by murine DC and then used for adoptive immunotherapy. Thus, a standardized system that relies on artificial Antigen Presenting Cells (aAPC) could produce the stimulating effects of DC without the pitfalls of allo- or xenogeneic cells 12,13. Herein, we describe a method for generating CD1d-based aAPC. Since the engagement of the T cell receptor (TCR) by CD1d-antigen complexes is a fundamental requirement of NKT cell activation, antigen: CD1d-Ig complexes provide a reliable method to isolate, activate, and expand effector NKT cell populations.
Immunology, Issue 70, Medicine, Molecular Biology, Cellular Biology, Microbiology, Cancer Biology, Natural killer T cells, in vitro expansion, cancer immunology, artificial antigen presenting cells, adoptive transfer
In Vitro Assay to Evaluate the Impact of Immunoregulatory Pathways on HIV-specific CD4 T Cell Effector Function
Institutions: The Ragon Institute of MGH, MIT and Harvard, Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM).
T cell exhaustion is a major factor in failed pathogen clearance during chronic viral infections. Immunoregulatory pathways, such as PD-1 and IL-10, are upregulated upon this ongoing antigen exposure and contribute to loss of proliferation, reduced cytolytic function, and impaired cytokine production by CD4 and CD8 T cells. In the murine model of LCMV infection, administration of blocking antibodies against these two pathways augmented T cell responses. However, there is currently no in vitro assay to measure the impact of such blockade on cytokine secretion in cells from human samples. Our protocol and experimental approach enable us to accurately and efficiently quantify the restoration of cytokine production by HIV-specific CD4 T cells from HIV infected subjects.
Here, we depict an in vitro experimental design that enables measurements of cytokine secretion by HIV-specific CD4 T cells and their impact on other cell subsets. CD8 T cells were depleted from whole blood and remaining PBMCs were isolated via Ficoll separation method. CD8-depleted PBMCs were then incubated with blocking antibodies against PD-L1 and/or IL-10Rα and, after stimulation with an HIV-1 Gag peptide pool, cells were incubated at 37 °C, 5% CO2. After 48 hr, supernatant was collected for cytokine analysis by beads arrays and cell pellets were collected for either phenotypic analysis using flow cytometry or transcriptional analysis using qRT-PCR. For more detailed analysis, different cell populations were obtained by selective subset depletion from PBMCs or by sorting using flow cytometry before being assessed in the same assays. These methods provide a highly sensitive and specific approach to determine the modulation of cytokine production by antigen-specific T-helper cells and to determine functional interactions between different populations of immune cells.
Immunology, Issue 80, Virus Diseases, Immune System Diseases, HIV, CD4 T cell, CD8 T cell, antigen-presenting cell, Cytokines, immunoregulatory networks, PD-1: IL-10, exhaustion, monocytes
Enrichment of NK Cells from Human Blood with the RosetteSep Kit from StemCell Technologies
Institutions: University of California, Irvine (UCI).
Natural killer (NK) cells are large granular cytotoxic lymphocytes that belong to the innate immune system and play major roles in fighting against cancer and infections, but are also implicated in the early stages of pregnancy and transplant rejection. These cells are present in peripheral blood, from which they can be isolated. Cells can be isolated using either positive or negative selection. For positive selection we use antibodies directed to a surface marker present only on the cells of interest whereas for negative selection we use cocktails of antibodies targeted to surface markers present on all cells but the cells of interest. This latter technique presents the advantage of leaving the cells of interest free of antibodies, thereby reducing the risk of unwanted cell activation or differentiation. In this video-protocol we demonstrate how to separate NK cells from human blood by negative selection, using the RosetteSep kit from StemCell technologies. The procedure involves obtaining human peripheral blood (under an institutional review board-approved protocol to protect the human subjects) and mixing it with a cocktail of antibodies that will bind to markers absent on NK cells, but present on all other mononuclear cells present in peripheral blood (e.g., T lymphocytes, monocytes...). The antibodies present in the cocktail are conjugated to antibodies directed to glycophorin A on erythrocytes. All unwanted cells and red blood cells will therefore be trapped in complexes. The mix of blood and antibody cocktail is then diluted, overlayed on a Histopaque gradient, and centrifuged. NK cells (>80% pure) can be collected at the interface between the Histopaque and the diluted plasma. Similar cocktails are available for enrichment of other cell populations, such as human T lymphocytes.
Immunology, issue 8, blood, cell isolation, natural killer, lymphocyte, primary cells, negative selection, PBMC, Ficoll gradient, cell separation
Interview: Glycolipid Antigen Presentation by CD1d and the Therapeutic Potential of NKT cell Activation
Institutions: La Jolla Institute for Allergy and Immunology.
Natural Killer T cells (NKT) are critical determinants of the immune response to cancer, regulation of autoimmune disease, clearance of infectious agents, and the development of atherosclerotic plaques. In this interview, Mitch Kronenberg discusses his laboratory's efforts to understand the mechanism through which NKT cells are activated by glycolipid antigens. Central to these studies is CD1d - the antigen presenting molecule that presents glycolipids to NKT cells. The advent of CD1d tetramer technology, a technique developed by the Kronenberg lab, is critical for the sorting and identification of subsets of specific glycolipid-reactive T cells. Mitch explains how glycolipid agonists are being used as therapeutic agents to activate NKT cells in cancer patients and how CD1d tetramers can be used to assess the state of the NKT cell population in vivo following glycolipid agonist therapy. The current status of ongoing clinical trials using these agonists is discussed, as well as Mitch's prediction for areas in the field of immunology that will have emerging importance in the near future.
Immunology, Issue 10, Natural Killer T cells, NKT cells, CD1 Tetramers, antigen presentation, glycolipid antigens, CD1d, Mucosal Immunity, Translational Research | 1 | 8 |
This is the online version of the glossary in DVD Demystified. To request a definition of a term not listed, e-mail Jim.
Other good online glossaries are available elsewhere on the web.
1080i - 1080 lines of interlaced video (540 lines per field). Usually refers to 1920x1080 resolution in 1.78 aspect ratio.
1080p - 1080 lines of progressive video (1080 lines per frame). Usually refers to 1920x1080 resolution in 1.78 aspect ratio.
2-2 pulldown - The process of transferring 24-frame-per-second film to video by repeating each film frame as two video fields. (See Chapter 3 for details.) When 24-fps film is converted via 2-2 pulldown to 25-fps 625/50 (PAL) video, the film runs 4 percent faster than normal.
2-3 pulldown - The process of converting 24-frame-per-second film to video by repeating one film frame as three fields, then the next film frame as two fields. (See Chapter 3 for details.)
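For illustration, a minimal Python sketch (hypothetical, not from any DVD specification) of the 2-3 cadence, showing which film frame supplies each video field:

    # 2-3 pulldown sketch: 4 film frames (A, B, C, D) become 10 video fields (2+3+2+3).
    film_frames = ["A", "B", "C", "D"]
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 2 if i % 2 == 0 else 3   # alternate two fields, then three
        fields.extend([frame] * repeat)

    print(fields)  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
    # 4 film frames become 10 fields, so 24 frames/s maps to 60 fields/s
    # (59.94 in practice, with the film slowed by 0.1 percent).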
3-2 pulldown - An uncommon variation of 2-3 pulldown, where the first film frame is repeated for 3 fields instead of two. Most people mean 2-3 pulldown when they say 3-2 pulldown.
4:1:1 - The component digital video format with one Cb sample and one Cr sample for every four Y samples. 4:1 horizontal downsampling with no vertical downsampling. Chroma is sampled on every line, but only for every four luma pixels (i.e., 1 pixel in a 1 x 4 grid). This amounts to a subsampling of chroma by a factor of two compared to luma (and by a factor of four for a single Cb or Cr component). DVD uses 4:2:0 sampling, not 4:1:1 sampling.
4:2:0 - The component digital video format used by DVD, where there is one Cb sample and one Cr sample for every four Y samples (i.e., 1 pixel in a 2 x 2 grid). 2:1 horizontal downsampling and 2:1 vertical downsampling. Cb and Cr are sampled on every other line, in between the scan lines, with one set of chroma samples for each two luma samples on a line. This amounts to a subsampling of chroma by a factor of two compared to luma (and by a factor of four for a single Cb or Cr component).
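As a rough sketch of what 4:2:0 subsampling does to a chroma plane, the following Python fragment averages each 2 x 2 block into one sample (real encoders use more careful filtering and sample siting; this only illustrates the idea):

    def subsample_420(chroma):
        """Average each 2x2 block of a chroma plane into one sample, 4:2:0 style."""
        out = []
        for y in range(0, len(chroma), 2):          # assumes even width and height
            row = []
            for x in range(0, len(chroma[0]), 2):
                block = [chroma[y][x], chroma[y][x + 1],
                         chroma[y + 1][x], chroma[y + 1][x + 1]]
                row.append(sum(block) / 4)
            out.append(row)
        return out

    print(subsample_420([[100, 102], [98, 100]]))  # [[100.0]] -- four chroma samples become one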
4:2:2 - The component digital video format commonly used for studio recordings, where there is one Cb sample and one Cr sample for every two Y samples (i.e., 1 pixel in a 1 x 2 grid). 2:1 horizontal downsampling with no vertical downsampling. This allocates the same number of samples to the chroma signal as to the luma signal. The input to MPEG-2 encoders used for DVD is typically in 4:2:2 format, but the video is subsampled to 4:2:0 before being encoded and stored.
4:4:4 - A component digital video format for high-end studio recordings, where Y, Cb, and Cr are sampled equally.
480i - 480 lines of interlaced video (240 lines per field). Usually refers to 720 x 480 (or 704 x 480) resolution.
480p - 480 lines of progressive video (480 lines per frame). 480p60 refers to 60 frames per second; 480p30 refers to 30 frames per second; and 480p24 refers to 24 frames per second (film source). Usually refers to 720 x 480 (or 704 x 480) resolution.
4C - The four-company entity: IBM, Intel, Matsushita, Toshiba.
525/60 - The scanning system of 525 lines per frame and 60 interlaced fields (30 frames) per second. Used by the NTSC television standard.
5C - The five-company entity: IBM, Intel, Matsushita, Toshiba, Sony.
625/50 - The scanning system of 625 lines per frame and 50 interlaced fields (25 frames) per second. Used by PAL and SECAM television standards.
720p - 720 lines of progressive video (720 lines per frame). Higher definition than standard DVD (480i or 480p). 720p60 refers to 60 frames per second; 720p30 refers to 30 frames per second; and 720p24 refers to 24 frames per second (film source). Usually refers to 1280 x 720 resolution in 1.78 aspect ratio.
8/16 modulation - The form of modulation block code used by DVD to store channel data on the disc. See modulation.
AAC - Advanced audio coder. An audio-encoding standard for MPEG-2 that is not backward-compatible with MPEG-1 audio.
AC - Alternating current. An electric current that regularly reverses direction. Adopted as a video term for a signal of non-zero frequency. Compare to DC.
AC-3 - The former name of the Dolby Digital audio-coding system, which is still technically referred to as AC-3 in standards documents. AC-3 is the successor to Dolby’s AC-1 and AC-2 audio coding techniques.
access time - The time it takes for a drive to access a data track and begin transferring data. In an optical jukebox, the time it takes to locate a specific disk, insert it in an optical drive, and begin transferring data to the host system.
ActiveMovie - The former name for Microsoft’s DirectShow technology.
ADPCM - Adaptive differential pulse code modulation. A compression technique which encodes the difference between one sample and the next. Some variations are lossy, others lossless.
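A minimal sketch of the differential idea behind it (plain DPCM without the adaptive step size that gives ADPCM its name; purely illustrative):

    # Store the first sample, then only the sample-to-sample differences.
    samples = [100, 102, 105, 105, 103, 99]

    encoded = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    # encoded == [100, 2, 3, 0, -2, -4] -- small differences need fewer bits than raw samples

    decoded = [encoded[0]]
    for delta in encoded[1:]:
        decoded.append(decoded[-1] + delta)

    assert decoded == samples  # this simple form is lossless; real ADPCM quantizes the deltas (lossy)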
AES - Audio Engineering Society.
AES/EBU - A digital audio signal transmission standard for professional use, defined by the Audio Engineering Society and the European Broadcasting Union. S/P DIF is the consumer adaptation of this standard.
AGC - Automatic gain control. A circuit designed to boost the amplitude of a signal to provide adequate levels for recording. Also see Macrovision.
aliasing - A distortion (artifact) in the reproduction of digital audio or video that results when the signal frequency is more than twice the sampling frequency. The resolution is insufficient to distinguish between alternate reconstructions of the waveform, thus admitting additional noise that was not present in the original signal.
AMGM_VOBS - Video Object Set for Audio Manager Menu.
analog - A signal of (theoretically) infinitely variable levels. Compare to digital.
angle - In DVD-Video, a specific view of a scene, usually recorded from a certain camera angle. Different angles can be chosen while viewing the scene.
ANSI - American National Standards Institute. (See Appendix C.)
AOTT_AOBS - Audio Object Set for Audio Only Title.
apocryphal - Of questionable authorship or authenticity. Erroneous or fictitious. The author of DVD Demystified is fond of saying that the oft-cited 133-minute limit of DVD-Video is apocryphal.
application format - A specification for storing information in a particular way to enable a particular use.
artifact - An unnatural effect not present in the original video or audio, produced by an external agent or action. Artifacts can be caused by many factors, including digital compression, film-to-video transfer, transmission errors, data readout errors, electrical interference, analog signal noise, and analog signal crosstalk. Most artifacts attributed to the digital compression of DVD are in fact from other sources. Digital compression artifacts will always occur in the same place and in the same way. Possible MPEG artifacts are mosquitoes, blocking, and video noise.
aspect ratio - The width-to-height ratio of an image. A 4:3 aspect ratio means the horizontal size is a third again wider than the vertical size. Standard television ratio is 4:3 (or 1.33:1). Widescreen DVD and HTDV aspect ratio is 16:9 (or 1.78:1). Common film aspect ratios are 1.85:1 and 2.35:1. Aspect ratios normalized to a height of 1 are often abbreviated by leaving off the :1.
ASV (Audio Still Video) - A still picture on a DVD-Audio disc.
ASVOBS - Audio Still Video Object Set.
ATAPI - Advanced Technology Attachment (ATA) Packet Interface. An interface between a computer and its internal peripherals such as DVD-ROM drives. ATAPI provides the command set for controlling devices connected via an IDE interface. ATAPI is part of the Enhanced IDE (E-IDE) interface, also known as ATA-2. ATAPI was extended for use in DVD-ROM drives by the SFF 8090 specification.
ATSC - Advanced Television Systems Committee. In 1978, the Federal Communications Commission (FCC) empaneled the Advisory Committee on Advanced Television Service (ACATS) as an investigatory and advisory committee to develop information that would assist the FCC in establishing an advanced broadcast television (ATV) standard for the United States. This committee created a subcommittee, the ATSC, to explore the need for and to coordinate development of the documentation of Advanced Television Systems. In 1993, the ATSC recommended that efforts be limited to a digital television system (DTV), and in September 1995 issued its recommendation for a Digital Television System standard, which was approved with the exclusion of compression format constraints (picture resolution, frame rate, and frame sequence).
ATV - Advanced television. TV with significantly better video and audio than standard TV. Sometimes used interchangeably with HDTV, but more accurately encompasses any improved television system, including those beyond HDTV. Also sometimes used interchangeably with the final recommended standard of the ATSC, which is more correctly called DTV.
authoring - For DVD-Video, authoring refers to the process of designing, creating, collecting, formatting, and encoding material. For DVD-ROM, authoring usually refers to using a specialized program to produce multimedia software.
autoplay (or automatic playback) - A feature of DVD players which automatically begins playback of a disc if so encoded.
B picture (or B frame) - One of three picture types used in MPEG video. B pictures are bidirectionally predicted, based on both previous and following pictures. B pictures usually use the least number of bits. B pictures do not propagate coding errors since they are not used as a reference by other pictures.
bandwidth - Strictly speaking, the range of frequencies (or the difference between the highest and the lowest frequency) carried by a circuit or signal. Loosely speaking, the amount of information carried in a signal. Technically, bandwidth does not apply to digital information; the term data rate is more accurate.
BCA - Burst cutting area. A circular section near the center of a DVD disc where ID codes and manufacturing information can be inscribed in bar-code format. (See Figure 4.4.)
birefringence - An optical phenomenon where light is transmitted at slightly different speeds depending on the angle of incidence. Also light scattering due to different refractions created by impurities, defects, or stresses within the media substrate.
bit rate - The volume of data measured in bits over time. Equivalent to data rate.
bit - A binary digit. The smallest representation of digital data: zero/one, off/on, no/yes. Eight bits make one byte.
bitmap - An image made of a two-dimensional grid of pixels. Each frame of digital video can be considered a bitmap, although some color information is usually shared by more than one pixel.
bits per pixel - The number of bits used to represent the color or intensity of each pixel in a bitmap. One bit allows only two values (black and white), two bits allows four values, and so on. Also called color depth or bit depth.
bitstream recorder - A device capable of recording a stream of digital data but not necessarily able to process the data.
bitstream - Digital data, usually encoded, designed to be processed sequentially and continuously.
BLER - Block error rate. A measure of the average number of raw channel errors when reading or writing a disc.
block - In video encoding, an 8 x 8 matrix of pixels or DCT values representing a small chunk of luma or chroma. In DVD MPEG-2 video, a macroblock is made up of 6 blocks: 4 luma and 2 chroma.
blocking - A term referring to the occasional blocky appearance of compressed video (an artifact). Caused when the compression ratio is high enough that the averaging of pixels in 8 x 8 blocks becomes visible.
Blue Book - The document that specifies the CD Extra interactive music CD format (see also Enhanced CD). The original CDV specification was also in a blue book.
Book A - The document specifying the DVD physical format (DVD-ROM). Finalized in August 1996.
Book B - The document specifying the DVD-Video format. Mostly finalized in August 1996.
Book C - The document specifying the DVD-Audio format.
Book D - The document specifying the DVD record-once format (DVD-R). Finalized in August 1997.
Book E - The document specifying the rewritable DVD format (DVD-RAM). Finalized in August 1997.
bps - Bits per second. A unit of data rate.
brightness - Defined by the CIE as the attribute of a visual sensation according to which area appears to emit more or less light. Loosely, the intensity of an image or pixel, independent of color; that is, its value along the axis from black to white.
buffer - Temporary storage space in the memory of a device. Helps smooth data flow.
burst - A short segment of the color subcarrier in a composite signal, inserted to help the composite video decoder regenerate the color subcarrier.
B-Y, R-Y - The general term for color-difference video signals carrying blue and red color information, where the brightness (Y) has been subtracted from the blue and red RGB signals to create B-Y and R-Y color-difference signals. (See Chapter 3.)
byte - A unit of data or data storage space consisting of eight bits, commonly representing a single character. Digital data storage is usually measured in bytes, kilobytes, megabytes, and so on.
caption - A textual representation of the audio information in a video program. Captions are usually intended for the hearing impaired, and therefore include additional text to identify the person speaking, offscreen sounds, and so on.
CAV - Constant angular velocity. Refers to rotating disc systems in which the rotation speed is kept constant, where the pickup head travels over a longer surface as it moves away from the center of the disc. The advantage of CAV is that the same amount of information is provided in one rotation of the disc. Contrast with CLV and ZCLV.
Cb, Cr - The components of digital color-difference video signals carrying blue and red color information, where the brightness (Y) has been subtracted from the blue and red RGB signals to create B-Y and R-Y color-difference signals. (See Chapter 3.)
CBEMA - Computer and Business Equipment Manufacturers Association. (See Appendix C.)
CBR - Constant bit rate. Data compressed into a stream with a fixed data rate. The amount of compression (such as quantization) is varied to match the allocated data rate, but as a result quality may suffer during high compression periods. In other words, data rate is held constant while quality is allowed to vary. Compare to VBR.
CCI - Copy control information. Information specifying if content is allowed to be copied.
CCIR Rec. 601 - A standard for digital video. The CCIR changed its name to ITU-R, and the standard is now properly called ITU-R BT.601.
CD - Short for compact disc, an optical disc storage format developed by Philips and Sony.
CD+G - Compact disc plus graphics. A variation of CD which embeds graphical data in with the audio data, allowing video pictures to be displayed periodically as music is played. Primarily used for karaoke.
CD-DA - Compact disc digital audio. The original music CD format, storing audio information as digital PCM data. Defined by the Red Book standard.
CD-i - Compact disc interactive. An extension of the CD format designed around a set-top computer that connects to a TV to provide interactive home entertainment, including digital audio and video, video games, and software applications. Defined by the Green Book standard.
CD-Plus - A type of Enhanced CD format using stamped multisession technology.
CD-R - An extension of the CD format allowing data to be recorded once on a disc by using dye-sublimation technology. Defined by the Orange Book standard.
CD-ROM XA - CD-ROM extended architecture. A hybrid version of CD allowing interleaved audio and video.
CD-ROM - Compact disc read-only memory. An extension of the Compact disc digital audio (CD-DA) format that allows computer data to be stored in digital format. Defined by the Yellow Book standard.
CDV - A combination of laserdisc and CD which places a section of CD-format audio on the beginning of the disc and a section of laserdisc-format video on the remainder of the disc.
cDVD - DVD-Video content stored on a CD (or CD-R/RW). Also called mini DVD. Most consumer DVD players can't play a cDVD.
cell - In DVD-Video, a unit of video anywhere from a fraction of a second to hours long. Cells allow the video to be grouped for sharing content among titles, interleaving for multiple angles, and so on.
CEMA - Consumer Electronics Manufacturers Association. A subsidiary of the Electronics Industry Association (EIA). (See Appendix C.)
CGMS - Copy guard management system. A method of preventing copies or controlling the number of sequential copies allowed. CGMS/A is added to an analog signal (such as line 21 of NTSC). CGMS/D is added to a digital signal, such as IEEE 1394.
challenge key - Data used in the authentication key exchange process between a DVD-ROM drive and a host computer, where one side determines if the other side contains the necessary authorized keys and algorithms for passing encrypted (scrambled) data.
channel bit - The bits stored on the disc, after being modulated.
channel data - The bits physically recorded on an optical disc after error-correction encoding and modulation. Because of the extra information and processing, channel data is larger than the user data contained within it.
channel - A part of an audio track. Typically there is one channel allocated for each loudspeaker.
chapter - In DVD-Video, a division of a title. Technically called a part of title (PTT).
chroma (C´) - The nonlinear color component of a video signal, independent of the luma. Identified by the symbol C´ (where ´ indicates nonlinearity) but usually written as C because it’s never linear in practice.
chroma subsampling - Reducing color resolution by taking fewer color samples than luminance samples. (See 4:1:1 and 4:2:0.)
chrominance (C) - The color component (hue and saturation) of light, independent of luminance. Technically, chrominance refers to the linear component of video, as opposed to the transformed nonlinear chroma component.
CIE - Commission Internationale de l’Éclairage/International Commission on Illumination. (See Appendix C.)
CIF - Common intermediate format. Video resolution of 352x288.
CIRC - Cross-interleaved Reed Solomon code. An error-correction coding method which overlaps small frames of data.
clamping area - The area near the inner hole of a disc where the drive grips the disc in order to spin it.
closed caption - Textual video overlays that are not normally visible, as opposed to open captions, which are a permanent part of the picture. Captions are usually a textual representation of the spoken audio. In the United States, the official NTSC Closed Caption standard requires that all TVs larger than 13 inches include circuitry to decode and display caption information stored on line 21 of the video signal. DVD-Video can provide closed caption data, but the subpicture format is preferred for its versatility.
CLUT - Color lookup table. An index that maps a limited range of color values to a full range of values such as RGB or YUV.
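A hypothetical illustration of the mechanism (the actual DVD subpicture CLUT indexes a YCrCb palette; this sketch only shows the lookup step):

    # CLUT sketch: low-bit-depth pixel indexes expand to full color values.
    clut = {0: (0, 0, 0), 1: (255, 255, 255), 2: (200, 0, 0), 3: (0, 0, 200)}  # index -> (R, G, B)

    indexed_pixels = [0, 1, 1, 2, 3, 0]
    rgb_pixels = [clut[i] for i in indexed_pixels]
    print(rgb_pixels)  # each 2-bit index is replaced by a full 24-bit color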
CLV - Constant linear velocity. Refers to a rotating disc system in which the head moves over the disc surface at a constant velocity, requiring that the motor vary the rotation speed as the head travels in and out. The further the head is from the center of the disc, the slower the rotation. The advantage of CLV is that data density remains constant, optimizing use of the surface area. Contrast with CAV and ZCLV.
CMF - Cutting master format. Specification for storing information needed for full DVD mastering (including CSS protection) in the control area of a DVD-R(A) disc. See also DDP.
CMI - Content management information. General information about copy protection and allowed use of protected content. Includes CCI.
codec - Coder/decoder. Circuitry or computer software that encodes and decodes a signal.
color depth - The number of levels of color (usually including luma and chroma) that can be represented by a pixel. Generally expressed as a number of bits or a number of colors. The color depth of MPEG video in DVD is 24 bits, although the chroma component is shared across 4 pixels (averaging 12 actual bits per pixel).
color difference - A pair of video signals that contain the color components minus the brightness component, usually B-Y and R-Y (G-Y is not used, since it generally carries less information). The color-difference signals for a black-and-white picture are zero. The advantage of color-difference signals is that the color component can be reduced more than the brightness (luma) component without being visually perceptible.
colorburst - See burst.
colorist - The title used for someone who operates a telecine machine to transfer film to video. Part of the process involves correcting the video color to match the film.
combo drive - A DVD-ROM drive capable of reading and writing CD-R and CD-RW media. May also refer to a DVD-R or DVD-RW or DVD+RW drive with the same capability. (Also see RAMbo).
component video - A video system containing three separate color component signals, either red/green/blue (RGB) or chroma/color difference (YCbCr, YPbPr, YUV), in analog or digital form. The MPEG-2 encoding system used by DVD is based on color-difference component digital video. Very few televisions have component video inputs.
composite video - An analog video signal in which the luma and chroma components are combined (by frequency multiplexing), along with sync and burst. Also called CVBS. Most televisions and VCRs have composite video connectors, which are usually colored yellow.
compression - The process of removing redundancies in digital data to reduce the amount that must be stored or transmitted. Lossless compression removes only enough redundancy so that the original data can be recreated exactly as it was. Lossy compression sacrifices additional data to achieve greater compression.
constant data rate or constant bit rate - See CBR.
contrast - The range of brightness between the darkest and lightest elements of an image.
control area - A part of the lead-in area on a DVD containing one ECC block (16 sectors) repeated 192 times. The repeated ECC block holds information about the disc.
CPPM - Content Protection for Prerecorded Media. Copy protection for DVD-Audio.
CPRM - Content Protection for Recordable Media. Copy protection for writable DVD formats.
CPSA - Content Protection System Architecture. An overall copy protection design for DVD.
CPTWG - Copy Protection Technical Working Group. The industry body responsible for developing or approving DVD copy protection systems.
CPU - Central processing unit. The integrated circuit chip that forms the brain of a computer or other electronic device. DVD-Video players contain rudimentary CPUs to provide general control and interactive features.
crop - To trim and remove a section of the video picture in order to make it conform to a different shape. Cropping is used in the pan & scan process, but not in the letterbox process.
CVBS - Composite video baseband signal. Standard single-wire video, mixing luma and chroma signals together.
DAC - Digital-to-analog converter. Circuitry that converts digital data (such as audio or video) to analog data.
DAE - Digital audio extraction. Reading digital audio data directly from a CD audio disc.
DAT - Digital audio tape. A magnetic audio tape format that uses PCM to store digitized audio or digital data.
data area - The physical area of a DVD disc between the lead in and the lead out (or middle area) which contains the stored data content of the disc.
data rate - The volume of data measured over time; the rate at which digital information can be conveyed. Usually expressed as bits per second with notations of kbps (thousand/sec), Mbps (million/sec), and Gbps (billion/sec). Digital audio data rate is generally computed as the number of samples per second times the bit size of the sample. For example, the data rate of uncompressed 16-bit, 48-kHz, two-channel audio is 1536 kbps. Digital video bit rate is generally computed as the number of bits per pixel times the number of pixels per line times the number of lines per frame times the number of frames per second. For example, the data rate of a DVD movie before compression is usually 12 x 720 x 480 x 24 = 99.5 Mbps. Compression reduces the data rate. Digital data rate is sometimes inaccurately equated with bandwidth.
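The arithmetic in this entry can be reproduced directly (a simple illustration using the figures quoted above):

    # Uncompressed audio: samples per second x bits per sample x channels.
    audio_kbps = 48_000 * 16 * 2 / 1000
    print(audio_kbps)            # 1536.0 (kbps)

    # Uncompressed video: bits/pixel x pixels/line x lines/frame x frames/second.
    video_mbps = 12 * 720 * 480 * 24 / 1_000_000
    print(round(video_mbps, 1))  # 99.5 (Mbps), before MPEG-2 compression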
dB - See decibel.
DBS - Digital broadcast satellite. The general term for 18-inch digital satellite systems.
DC - Direct current. Electrical current flowing in one direction only. Adopted in the video world to refer to a signal with zero frequency. Compare to AC.
DCC - Digital compact cassette. A digital audio tape format based on the popular compact cassette. Abandoned by Philips in 1996.
DCT - Discrete cosine transform. An invertible, discrete, orthogonal transformation. Got that? A mathematical process used in MPEG video encoding to transform blocks of pixel values into blocks of spatial frequency values with lower-frequency components organized into the upper-left corner, allowing the high-frequency components in the lower-right corner to be discounted or discarded. Also digital component technology, a videotape format.
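For reference, the 8 x 8 forward transform used in MPEG-style coding can be written in the standard textbook form (general DCT literature, not quoted from the DVD or MPEG documents):

    F(u,v) = \frac{1}{4}\, C(u)\, C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y)\,
             \cos\!\left[\frac{(2x+1)u\pi}{16}\right]
             \cos\!\left[\frac{(2y+1)v\pi}{16}\right],
    \qquad C(0) = \frac{1}{\sqrt{2}},\; C(k) = 1 \text{ for } k > 0

Here f(x,y) is a pixel (or prediction-error) value in the 8 x 8 block and F(u,v) is the corresponding spatial-frequency coefficient; F(0,0) is the DC (average) term that lands in the upper-left corner.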
DDP - Disc description protocol. A specification for storing all the information needed to master a DVD (including CSS protection) on a DLT.
DDWG - Digital Display Working Group. (See DVI.)
decibel (dB) - A unit of measurement expressing ratios using logarithmic scales related to human aural or visual perception. Many different measurements are based on a reference point of 0 dB; for example a standard level of sound or power.
decimation - A form of subsampling which discards existing samples (pixels, in the case of spatial decimation, or pictures, in the case of temporal decimation). The resulting information is reduced in size but may suffer from aliasing.
decode - To reverse the transformation process of an encoding method. Decoding processes are usually deterministic.
decoder - 1) A circuit that decodes compressed audio or video, taking an encoded input stream and producing output such as audio or video. DVD players use the decoders to recreate information that was compressed by systems such as MPEG-2 and Dolby Digital; 2) a circuit that converts composite video to component video or matrixed audio to multiple channels.
delta picture (or delta frame) - A video picture based on the changes from the picture before (or after) it. MPEG P pictures and B pictures are examples. Contrast with key picture.
deterministic - A process or model whose outcome does not depend upon chance, and where a given input will always produce the same output. Audio and video decoding processes are mostly deterministic.
digital signal processor (DSP) - A digital circuit that can be programmed to perform digital data manipulation tasks such as decoding or audio effects.
digital video noise reduction (DVNR) - Digitally removing noise from video by comparing frames in sequence to spot temporal aberrations.
digital - Expressed in digits. A set of discrete numeric values, as used by a computer. Analog information can be digitized by sampling.
digitize - To convert analog information to digital information by sampling.
DIN - Deutsches Institut für Normung/German Institute for Standardization. (See Appendix C.)
directory - The part of a disc that indicates what files are stored on the disc and where they are located.
DirectShow - A software standard developed by Microsoft for playback of digital video and audio in the Windows operating system. Replaces the older MCI and Video for Windows software.
disc key - A value used to encrypt and decrypt (scramble) a title key on DVD-Video discs.
disc menu - The main menu of a DVD-Video disc, from which titles are selected. Also called the system menu or title selection menu. Sometimes confusingly called the title menu, which more accurately refers to the menu within a title from which audio, subpicture, chapters, and so forth can be selected.
discrete cosine transform (DCT) - An invertible, discrete, orthogonal transformation. A mathematical process used in MPEG video encoding to transform blocks of pixel values into blocks of spatial frequency values with lower-frequency components organized into the upper-left corner, allowing the high-frequency components in the lower-right corner to be discounted or discarded.
discrete surround sound - Audio in which each channel is stored and transmitted separate from and independent of other channels. Multiple independent channels directed to loudspeakers in front of and behind the listener allow precise control of the soundfield in order to generate localized sounds and simulate moving sound sources.
display rate - The number of times per second the image in a video system is refreshed. Progressive scan systems such as film or HDTV change the image once per frame. Interlace scan systems such as standard television change the image twice per frame, with two fields in each frame. Film has a frame rate of 24 fps, but each frame is shown twice by the projector for a display rate of 48 fps. 525/60 (NTSC) television has a rate of 29.97 frames per second (59.94 fields per second). 625/50 (PAL/SECAM) television has a rate of 25 frames per second (50 fields per second).
Divx - Digital Video Express. A short-lived pay-per-viewing-period variation of DVD.
DLT - Digital linear tape. A digital archive standard using half-inch tapes, commonly used for submitting a premastered DVD disc image to a replication service.
Dolby Digital - A perceptual coding system for audio, developed by Dolby Laboratories and accepted as an international standard. Dolby Digital is the most common means of encoding audio for DVD-Video and is the mandatory audio compression system for 525/60 (NTSC) discs.
Dolby Pro Logic - The technique (or the circuit which applies the technique) of extracting surround audio channels from a matrix-encoded audio signal. Dolby Pro Logic is a decoding technique only, but is often mistakenly used to refer to Dolby Surround audio encoding.
Dolby Surround - The standard for matrix encoding surround-sound channels in a stereo signal by applying a set of defined mathematical functions when combining center and surround channels with left and right channels. The center and surround channels can then be extracted by a decoder such as a Dolby Pro Logic circuit which applies the inverse of the mathematical functions. A Dolby Surround decoder extracts surround channels, while a Dolby Pro Logic decoder uses additional processing to create a center channel. The process is essentially independent of the recording or transmission format. Both Dolby Digital and MPEG audio compression systems are compatible with Dolby Surround audio.
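A deliberately simplified Python sketch of the matrixing step (it omits the 90-degree phase shifts and band-limiting that a real Dolby Surround encoder applies to the surround channel, so treat it only as an outline of the idea):

    import math

    def matrix_encode(left, center, right, surround):
        """Fold four channels into two (Lt/Rt). Simplified: no phase shift or band-limiting."""
        k = 1 / math.sqrt(2)   # about -3 dB for the center and surround contributions
        lt = left + k * center + k * surround
        rt = right + k * center - k * surround  # opposite polarity lets a decoder recover surround as Lt - Rt
        return lt, rt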
downmix - To convert a multichannel audio track into a two-channel stereo track by combining the channels with the Dolby Surround process. All DVD players are required to provide downmixed audio output from Dolby Digital audio tracks.
downsampling - See subsampling.
DRC - See dynamic range compression.
driver - A software component that enables an application to communicate with a hardware device.
DSD - Direct Stream Digital. An uncompressed audio bitstream coding method developed by Sony. An alternative to PCM. Used by SACD.
DSI - Data search information. Navigation and search information contained in the DVD-Video data stream. DSI and PCI together make up an overhead of about 1 Mbps.
DSP - Digital signal processor (or processing).
DSVCD - Double Super Video Compact Disc. Long-playing (100-minute) variation of SVCD.
DTS - Digital Theater Sound. A perceptual audio-coding system developed for theaters. A competitor to Dolby Digital and an optional audio track format for DVD-Video and DVD-Audio.
DTS-ES - A version of DTS decoding that is compatible with 6.1-channel Dolby Surround EX. DTS-ES Discrete is a variation of DTS encoding and decoding that carries a discrete rear center channel instead of a matrixed channel.
DTV - Digital television. In general, any system that encodes video and audio in digital form. In specific, the Digital Television System proposed by the ATSC or the digital TV standard proposed by the Digital TV Team founded by Microsoft, Intel, and Compaq.
duplication - The reproduction of media. Generally refers to producing discs in small quantities, as opposed to large-scale replication.
DV - Digital Video. Usually refers to the digital videocassette standard developed by Sony and JVC.
DVB - Digital video broadcast. A European standard for broadcast, cable, and digital satellite video transmission.
DVC - Digital video cassette. Early name for DV.
DVCAM - Sony’s proprietary version of DV.
DVCD - Double Video Compact Disc. Long-playing (100-minute) variation of VCD.
DVCPro - Matsushita’s proprietary version of DV.
DVD - An acronym that officially stands for nothing, but is often expanded as Digital Video Disc or Digital Versatile Disc. The audio/video/data storage system based on 12- and 8-cm optical discs.
DVD-Audio (DVD-A) - The audio-only format of DVD. Primarily uses PCM audio with MLP encoding, along with an optional subset of DVD-Video features.
DVD-R - A version of DVD on which data can be recorded once. Uses dye sublimation recording technology.
DVD-RAM - A version of DVD on which data can be recorded more than once. Uses phase-change recording technology.
DVD-ROM - The base format of DVD. ROM stands for read-only memory, referring to the fact that standard DVD-ROM and DVD-Video discs can’t be recorded on. A DVD-ROM can store essentially any form of digital data.
DVD-Video (DVD-V) - A standard for storing and reproducing audio and video on DVD-ROM discs, based on MPEG video, Dolby Digital and MPEG audio, and other proprietary data formats.
DVI (Digital Visual Interface) - The digital video interface standard developed by the Digital Display Working Group (DDWG). A replacement for analog VGA monitor interface.
DVNR - See digital video noise reduction.
DVS - Descriptive video services. Descriptive narration of video for blind or sight-impaired viewers.
dye polymer - The chemical used in DVD-R and CD-R media that darkens when heated by a high-power laser.
dye-sublimation - Optical disc recording technology that uses a high-powered laser to burn readable marks into a layer of organic dye. Other recording formats include magneto-optical and phase-change.
dynamic range compression - A technique of reducing the range between loud and soft sounds in order to make dialogue more audible, especially when listening at low volume levels. Used in the downmix process of multichannel Dolby Digital sound tracks.
dynamic range - The difference between the loudest and softest sound in an audio signal. The dynamic range of digital audio is determined by the sample size. Increasing the sample size does not allow louder sounds; it increases the resolution of the signal, thus allowing softer sounds to be separated from the noise floor (and allowing more amplification with less distortion). Dynamic range refers to the difference between the maximum level of distortion-free signal and the minimum limit reproducible by the equipment.
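As a rule of thumb (an approximation, not a DVD requirement), each bit of sample size adds roughly 6 dB of dynamic range:

    # Approximate dynamic range of linear PCM: about 6.02 dB per bit.
    for bits in (16, 20, 24):
        print(bits, "bits ->", round(6.02 * bits, 1), "dB")
    # 16 bits -> 96.3 dB, 20 bits -> 120.4 dB, 24 bits -> 144.5 dB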
EBU - European Broadcasting Union. (See Appendix C.)
ECC - See Error correction code.
ECD - Error-detection and correction code. See error-correction code.
ECMA - European Computer Manufacturers Association. (See Appendix C.)
EDC - A short error-detection code applied at the end of a DVD sector.
edge enhancement - When films are transferred to video in preparation for DVD encoding, they are commonly run through digital processes that attempt to clean up the picture. These processes include noise reduction (DVNR) and image enhancement. Enhancement increases contrast (similar to the effect of the "sharpen" or "unsharp mask" filters in PhotoShop), but can tend to overdo areas of transition between light and dark or different colors, causing a "chiseled" look or a ringing effect like the haloes you see around streetlights when driving in the rain. Video noise reduction is a good thing, when done well, since it can remove scratches, spots, and other defects from the original film. Enhancement, which is rarely done well, is a bad thing. The video may look sharper and clearer to the casual observer, but fine tonal details of the original picture are altered and lost.
EDS - Enhanced data services. Additional information carried in NTSC line 21, such as a time signal.
EDTV - Enhanced-definition television. A system which uses existing transmission equipment to send an enhanced signal which looks the same on existing receivers but carries additional information to improve the picture quality on new enhanced receivers. PALPlus is an example of EDTV. (Contrast with HDTV and IDTV.)
EFM - Eight-to-fourteen modulation. A modulation method used by CD, where eight data bits are represented by 14 channel bits. The 8/16 modulation used by DVD is sometimes called EFM plus.
EIA - Electronics Industry Association. (See Appendix C.)
E-IDE - Enhanced Integrated Drive Electronics. Extensions to the IDE standard providing faster data transfer and allowing access to larger drives, including CD-ROM and tape drives, using ATAPI. E-IDE was adopted as a standard by ANSI in 1994. ANSI calls it Advanced Technology Attachment-2 (ATA-2) or Fast ATA.
elementary stream - A general term for a coded bitstream such as audio or video. Elementary streams are made up of packs of packets.
emulate - To test the function of a DVD disc on a computer after formatting a complete disc image.
encode - To transform data for storage or transmission, usually in such a way that redundancies are eliminated or complexity is reduced. Most compression is based on one or more encoding methods. Data such as audio or video is encoded for efficient storage or transmission and is decoded for access or display.
encoder - 1) A circuit or program that encodes (and thereby compresses) audio or video; 2) a circuit that converts component digital video to composite analog video. DVD players include TV encoders to generate standard television signals from decoded video and audio; 3) a circuit that converts multichannel audio to two-channel matrixed audio.
Enhanced CD - A general term for various techniques that add computer software to a music CD, producing a disc which can be played in a music player or read by a computer. Also called CD Extra, CD Plus, hybrid CD, interactive music CD, mixed-mode CD, pre-gap CD, or track-zero CD.
entropy coding - Variable-length, lossless coding of a digital signal to reduce redundancy. MPEG-2, DTS and Dolby Digital apply entropy coding after the quantization step. MLP also uses entropy coding.
EQ - Equalization of audio.
error-correction code - Additional information added to data to allow errors to be detected and possibly corrected. See Chapter 3.
ETSI - European Telecommunications Standards Institute. (See Appendix C.)
father - The metal master disc formed by electroplating the glass master. The father disc is used to make mother discs, from which multiple stampers (sons) can be made.
field - A set of alternating scan lines in an interlaced video picture. A frame is made of a top (odd) field and a bottom (even) field.
file system - A defined way of storing files, directories, and information about them on a data storage device.
file - A collection of data stored on a disc, usually in groups of sectors.
filter - (verb) To reduce the amount of information in a signal. (noun) A circuit or process that reduces the amount of information in a signal. Analog filtering usually removes certain frequencies. Digital filtering (when not emulating analog filtering) usually averages together multiple adjacent pixels, lines, or frames to create a single new pixel, line, or frame. This generally causes a loss of detail, especially with complex images or rapid motion. See letterbox filter. Compare to interpolate.
FireWire - A standard for transmission of digital data between external peripherals, including consumer audio and video devices. The official name is IEEE 1394, based on the original FireWire design by Apple Computer.
fixed rate - Information flow at a constant volume over time. See CBR.
forced display - A feature of DVD-Video allowing subpictures to be displayed even if the player’s subpicture display mode is turned off. Designed for showing subtitles in a scene where the language is different from the native language of the film.
formatting - 1) Creating a disc image. 2) Preparing storage media for recording.
fps - Frames per second. A measure of the rate at which pictures are shown for a motion video image. In NTSC and PAL video, each frame is made up of two interlaced fields.
fragile watermark - A watermark designed to be destroyed by any form of copying or encoding other than a bit-for-bit digital copy. Absence of the watermark indicates that a copy has been made.
frame doubler - A video processor that increases the frame rate (display rate) in order to create a smoother-looking video display. Compare to line doubler.
frame rate - The frequency of discrete images. Usually measured in frames per second (fps). Film has a rate of 24 frames per second, but usually must be adjusted to match the display rate of a video system.
frame - The piece of a video signal containing the spatial detail of one complete image; the entire set of scan lines. In an interlaced system, a frame contains two fields.
frequency - The number of repetitions of a phenomenon in a given amount of time. The number of complete cycles of a periodic process occurring per unit time.
G byte - One billion (10⁹) bytes. Not to be confused with gigabyte (2³⁰ bytes).
G - Giga. An SI prefix for denominations of 1 billion (10⁹).
Galaxy Group - The group of companies proposing the Galaxy watermarking format. (IBM/NEC, Hitachi/Pioneer/Sony.)
GB - Gigabyte.
Gbps - Gigabits/second. Billions (10⁹) of bits per second.
gigabyte - 1,073,741,824 (2³⁰) bytes. See the end of Chapter 1 (p. 12) for more information.
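The gap between the two usages is easy to quantify (simple arithmetic, shown only for illustration):

    print(10**9)    # 1000000000 -- a "G byte," as used for data rates and raw disc capacities
    print(2**30)    # 1073741824 -- a gigabyte, as used for computer memory
    print(round((2**30 - 10**9) / 10**9 * 100, 1))  # 7.4 -- the two differ by about 7.4 percent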
GOP - Group of pictures. In MPEG video, one or more I pictures followed by P and B pictures. A GOP is the atomic unit of MPEG video access. GOPs are limited in DVD-Video to 18 frames for 525/60 and 15 frames for 625/50.
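A typical (though not mandated) pattern is 15 frames with two B pictures between reference pictures; the following hypothetical helper prints it in display order:

    def gop_pattern(length=15, b_frames=2):
        """Build a GOP pattern in display order, e.g. IBBPBBPBBPBBPBB."""
        pattern = []
        for i in range(length):
            if i == 0:
                pattern.append("I")
            elif i % (b_frames + 1) == 0:
                pattern.append("P")
            else:
                pattern.append("B")
        return "".join(pattern)

    print(gop_pattern())  # IBBPBBPBBPBBPBB -- in the coded bitstream, B pictures follow the references they depend on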
gray market - Dealers and distributors who sell equipment without proper authorization from the manufacturer.
Green Book - The document developed in 1987 by Philips and Sony as an extension to CD-ROM XA for the CD-i system.
H/DTV - High-definition/digital television. A combination of acronyms that refers to both HDTV and DTV systems.
Half D1 - MPEG-2 picture resolution of 352 x 480 (NTSC) or 352 x 576 (PAL/SECAM). See HHR.
HAVi - A consumer electronics industry standard for interoperability between digital audio and video devices connected via a network in the consumer’s home.
HDCD - High-definition Compatible Digital. A proprietary method of enhancing audio on CDs.
HDTV - High-definition television. A video format with a resolution approximately twice that of conventional television in both the horizontal and vertical dimensions, and a picture aspect ratio of 16:9. Used loosely to refer to the U.S. DTV System. Contrast with EDTV and IDTV.
Hertz - See Hz.
hexadecimal - Representation of numbers using base 16.
HFS - Hierarchical file system. A file system used by Apple Computer’s Mac OS operating system.
HHR - Horizontal Half Resolution. MPEG-2 picture resolution of 352 x 480 (NTSC) or 352 x 576 (PAL/SECAM). Supported by the DVD-Video specification. Encoding video at HHR greatly reduces the bandwidth with a minor reduction in picture quality. Also called Half D1.
High Sierra - The original file system standard developed for CD-ROM, later modified and adopted as ISO 9660.
horizontal resolution - See lines of horizontal resolution.
HQ-VCD - High-quality Video Compact Disc. Developed by the Video CD Consortium (Philips, Sony, Matsushita and JVC) as a successor to VCD. Evolved into SVCD.
HRRA - Home Recording Rights Association.
HSF - See High Sierra.
HTML - Hypertext markup language. A tagging specification, based on SGML (standard generalized markup language), for formatting text to be transmitted over the Internet and displayed by client software.
hue - The color of light or of a pixel. The property of color determined by the dominant wavelength of light.
Huffman coding - A lossless compression technique of assigning variable-length codes to a known set of values. Values occurring most frequently are assigned the shortest codes. MPEG uses a variation of Huffman coding with fixed code tables, often called variable-length coding (VLC).
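A compact sketch of the generic algorithm (classic Huffman construction, not the fixed VLC tables MPEG actually uses):

    import heapq

    def huffman_codes(frequencies):
        """Build prefix codes; the most frequent symbols receive the shortest codes."""
        heap = [[weight, [symbol, ""]] for symbol, weight in frequencies.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
    # e.g. {'a': '0', ...} -- exact codes depend on tie-breaking, but "a" always gets the shortest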
Hz - Hertz. A unit of frequency measurement. The number of cycles (repetitions) per second.
I picture (or I frame) - In MPEG video, an intra picture that is encoded independent from other pictures (see intraframe). Transform coding (DCT, quantization, and VLC) is used with no motion compensation, resulting in only moderate compression. I pictures provide a reference point for dependent P pictures and B pictures and allow random access into the compressed video stream.
i.Link - Trademarked Sony name for IEEE 1394.
IDE - Integrated Drive Electronics. An internal bus, or standard electronic interface between a computer and internal block storage devices. IDE was adopted as a standard by ANSI in November 1990. ANSI calls it Advanced Technology Attachment (ATA). Also see E-IDE and ATAPI.
IDTV - Improved-definition television. A television receiver that improves the apparent quality of the picture from a standard video signal by using techniques such as frame doubling, line doubling, and digital signal processing.
IEC - International Electrotechnical Commission. (See Appendix C.)
IED - ID error correction. An error-detection code applied to each sector ID on a DVD disc.
IEEE 1394 - A standard for transmission of digital data between external peripherals, including consumer audio and video devices. Also known as FireWire.
IEEE - Institute of Electrical and Electronics Engineers. An electronics standards body.
IFE - In-flight entertainment.
I-MPEG - Intraframe MPEG. An unofficial variation of MPEG video encoding that uses only intraframe compression. I-MPEG is used by DV equipment.
interframe - Something that occurs between multiple frames of video. Interframe compression takes temporal redundancy into account. Contrast with intraframe.
interlace - A video scanning system in which alternating lines are transmitted, so that half a picture is displayed each time the scanning beam moves down the screen. An interlaced frame is made of two fields. (See Chapter 3.)
interleave - To arrange data in alternating chunks so that selected parts can be extracted while other parts are skipped over, or so that each chunk carries a piece of a different data stream.
interpolate - To increase the pixels, scan lines, or pictures when scaling an image or a video stream by averaging together adjacent pixels, lines, or frames to create additional inserted pixels or frames. This generally causes a softening of still images and a blurriness of motion images because no new information is created. Compare to filter.
intraframe - Something that occurs within a single frame of video. Intraframe compression does not reduce temporal redundancy, but allows each frame to be independently manipulated or accessed. (See I picture.) Compare to interframe.
inverse telecine - The reverse of 3:2 pulldown, where the frames which were duplicated to create 60-fields/second video from 24-frames/second film source are removed. MPEG-2 video encoders usually apply an inverse telecine process to convert 60-fields/second video into 24-frames/second encoded video. The encoder adds information enabling the decoder to recreate the 60-fields/second display rate.
ISO 9660 - The international standard for the file system used by CD-ROM. Allows filenames of only 8 characters plus a 3-character extension.
ISO - International Organization for Standardization. (See Appendix C.)
ISRC - International Standard Recording Code.
ITU - International Telecommunication Union. (See Appendix C.)
ITU-R BT.601 - The international standard specifying the format of digital component video. Currently at version 5 (identified as 601-5).
Java - A programming language with specific features designed for use with the Internet and HTML.
JCIC - Joint Committee on Intersociety Coordination.
JEC - Joint Engineering Committee of EIA and NCTA.
jewel box - The plastic clamshell case that holds a CD or DVD.
jitter - Temporal variation in a signal from an ideal reference clock. There are many kinds of jitter, including sample jitter, channel jitter, and interface jitter. See Chapter 3.
JPEG - Joint Photographic Experts Group. The international committee which created its namesake standard for compressing still images.
k byte - One thousand (10³) bytes. Not to be confused with KB or kilobyte (2¹⁰ bytes). Note the small “k.”
k - Kilo. An SI prefix for denominations of one thousand (10³). Also used, in capital form, for 1024 bytes of computer data (see kilobyte).
karaoke - Literally empty orchestra. The social sensation from Japan where sufficiently inebriated people embarrass themselves in public by singing along to a music track. Karaoke was largely responsible for the success of laserdisc in Japan, thus supporting it elsewhere.
KB - Kilobyte.
kbps - Kilobits/second. Thousands (10³) of bits per second.
key picture (or key frame) - A video picture containing the entire content of the image (intraframe encoding), rather than the difference between it and another image (interframe encoding). MPEG I pictures are key pictures. Contrast with delta picture.
kHz - Kilohertz. A unit of frequency measurement. One thousand cycles (repetitions) per second or 1000 hertz.
kilobyte - 1024 (2¹⁰) bytes. See p. 12 for more information.
land - The raised area of an optical disc.
laserdisc - A 12-inch (or 8-inch) optical disc that holds analog video (using an FM signal) and both analog and digital (PCM) audio. A precursor to DVD.
layer - The plane of a DVD disc on which information is recorded in a pattern of microscopic pits. Each substrate of a disc can contain one or two layers. The first layer, closest to the readout surface, is layer 0; the second is layer 1.
lead in - The physical area 1.2 mm or wider preceding the data area on a disc. The lead in contains sync sectors and control data including disc keys and other information.
lead out - On a single-layer disc or PTP dual-layer disc, the physical area 1.0 mm or wider toward the outside of the disc following the data area. On an OTP dual-layer disc, the physical area 1.2 mm or wider at the inside of the disc following the recorded data area (which is read from the outside toward the inside on the second layer).
legacy - A term used to describe a hybrid disc that can be played in both a DVD player and a CD player.
letterbox filter - Circuitry in a DVD player that reduces the vertical size of anamorphic widescreen video (combining every 4 lines into 3) and adds black mattes at the top and bottom. Also see filter.
letterbox - The process or form of video where black horizontal mattes are added to the top and bottom of the display area in order to create a frame in which to display video using an aspect ratio different than that of the display. The letterbox method preserves the entire video picture, as opposed to pan & scan. DVD-Video players can automatically letterbox a widescreen picture for display on a standard 4:3 TV.
level - In MPEG-2, levels specify parameters such as resolution, bit rate, and frame rate. Compare to profile.
line doubler - A video processor that doubles the number of lines in the scanning system in order to create a display with scan lines that are less visible. Some line doublers convert from interlaced to progressive scan.
linear PCM - A coded representation of digital data that is not compressed. Linear PCM spreads values evenly across the range from highest to lowest, as opposed to nonlinear (companded) PCM which allocates more values to more important frequency ranges.
lines of horizontal resolution - Sometimes abbreviated as TVL (TV lines) or LoHR. A common but subjective measurement of the visually resolvable horizontal detail of an analog video system, measured in half-cycles per picture height. Each cycle is a pair of vertical lines, one black and one white. The measurement is usually made by viewing a test pattern to determine where the black and white lines blur into gray. The resolution of VHS video is commonly gauged at 240 lines of horizontal resolution, broadcast video at 330, laserdisc at 425, and DVD at 500 to 540. Because the measurement is relative to picture height, the aspect ratio must be taken into account when determining the number of vertical units (roughly equivalent to pixels) that can be displayed across the width of the display. For example, an aspect ratio of 1.33 multiplied by 540 gives 720 pixels.
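The conversion mentioned at the end of this entry is simple arithmetic (illustrative only):

    def tvl_to_pixels(tvl, aspect_ratio=4 / 3):
        """Convert lines of horizontal resolution (per picture height) to pixels across the full width."""
        return round(tvl * aspect_ratio)

    print(tvl_to_pixels(540))  # 720 -- the horizontal pixel count of DVD
    print(tvl_to_pixels(240))  # 320 -- roughly VHS quality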
Lo/Ro - Left only/right only. Stereo signal (no matrixed surround information). Optional downmixing output in Dolby Digital decoders. Does not change phase, simply folds surround channels forward into Lf and Rf.
locale - See regional code.
logical unit - A physical or virtual peripheral device, such as a DVD-ROM drive.
logical - An artificial structure or organization of information created for convenience of access or reference, usually different from the physical structure or organization. For example, the application specifications of DVD (the way information is organized and stored) are logical formats.
lossless compression - Compression techniques that allow the original data to be recreated without loss. Contrast with lossy compression.
lossy compression - Compression techniques that achieve very high compression ratios by permanently removing data while preserving as much significant information as possible. Lossy compression includes perceptual coding techniques that attempt to limit the data loss to that which is least likely to be noticed by human perception.
LP - Long-playing record. An audio recording on a plastic platter turning at 33 1/3 rpm and read by a stylus.
LPCM - See linear PCM.
Lt/Rt - Left total/right total. Four surround channels matrixed into two channels. Mandatory downmixing output in Dolby Digital decoders.
luma (Y´) - The brightness component of a color video image (also called the grayscale, monochrome, or black-and-white component). Nonlinear luminance. The standard luma signal is computed from nonlinear RGB as Y´ = 0.299 R´ + 0.587 G´ + 0.114 B´.
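The formula translates directly into code (a sketch assuming nonlinear R´G´B´ values in the 0-255 range, not a full ITU-R BT.601 conversion with offsets and scaling):

    def luma(r, g, b):
        """Compute Y' from nonlinear R'G'B' using the Rec. 601 weights quoted above."""
        return 0.299 * r + 0.587 * g + 0.114 * b

    print(luma(255, 255, 255))  # approximately 255 -- white
    print(luma(0, 255, 0))      # approximately 149.7 -- green contributes most to perceived brightness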
luminance (Y) - Loosely, the sum of RGB tristimulus values corresponding to brightness. May refer to a linear signal or (incorrectly) a nonlinear signal.
M byte - One million (10⁶) bytes. Not to be confused with megabyte (2²⁰ bytes).
M - Mega. An SI prefix for denominations of one million (10⁶).
Mac OS - The operating system used by Apple Macintosh computers.
macroblock - In MPEG MP@ML, the four 8 x 8 blocks of luma information and two 8 x 8 blocks of chroma information form a 16 x 16 area of a video frame.
macroblocking - An MPEG artifact. See blocking.
Macrovision - An antitaping process that modifies a signal so that it appears unchanged on most televisions but is distorted and unwatchable when played back from a videotape recording. Macrovision takes advantage of characteristics of AGC circuits and burst decoder circuits in VCRs to interfere with the recording process.
magneto-optical - Recordable disc technology using a laser to heat spots that are altered by a magnetic field. Other formats include dye-sublimation and phase-change.
main level (ML) - A range of proscribed picture parameters defined by the MPEG-2 video standard, with maximum resolution equivalent to ITU-R BT.601 (720 x 576 x 30). (Also see level.)
main profile (MP) - A subset of the syntax of the MPEG-2 video standard designed to be supported over a large range of mainstream applications such as digital cable TV, DVD, and digital satellite transmission. (Also see profile.)
mark - The non-reflective area of a writable optical disc. Equivalent to a pit.
master - The metal disc used to stamp replicas of optical discs. The tape used to make additional recordings.
mastering - The process of replicating optical discs by injecting liquid plastic into a mold containing a master. Often used inaccurately to refer to premastering.
matrix encoding - The technique of combining additional surround-sound channels into a conventional stereo signal. Also see Dolby Surround.
matte - An area of a video display or motion picture that is covered (usually in black) or omitted in order to create a differently shaped area within the picture frame.
MB - Megabyte.
Mbps - Megabits/second. Millions (10^6) of bits per second.
megabyte - 1,048,576 (2^20) bytes. See p. 12 for more information.
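A small arithmetic check of the M byte/megabyte distinction drawn in the entries above (an illustrative sketch, not from the source):

```python
# 2**20 versus 10**6 bytes: the two meanings compared in the entries above.
MEGABYTE = 2 ** 20   # 1,048,576 bytes
M_BYTE = 10 ** 6     # 1,000,000 bytes

if __name__ == "__main__":
    # The binary megabyte is about 4.9% larger than the decimal M byte.
    print(MEGABYTE, M_BYTE, f"{(MEGABYTE - M_BYTE) / M_BYTE:.1%}")
```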
megapixel - A term referring to an image or display format with a resolution of approximately 1 million pixels.
memory - Data storage used by computers or other digital electronics systems. Read-only memory (ROM) permanently stores data or software program instructions. New data cannot be written to ROM. Random-access memory (RAM) temporarily stores data—including digital audio and video—while it is being manipulated, and holds software application programs while they are being executed. Data can be read from and written to RAM. Other long-term memory includes hard disks, floppy disks, digital CD formats (CD-ROM, CD-R, and CD-RW), and DVD formats (DVD-ROM, DVD-R, and DVD-RAM).
MHz - One million (10^6) Hz.
Microsoft Windows - The leading operating system for Intel CPU-based computers. Developed by Microsoft.
middle area - On a dual-layer OTP disc, the physical area 1.0 mm or wider on both layers, adjacent to the outside of the data area.
Millennium Group - The group of companies proposing the Millennium watermarking format. (Macrovision, Philips, Digimarc)
mini DVD - 1) Small size (8-cm) DVD. 2) DVD-Video content stored on a CD (or CD-R/RW). Less ambiguously called cDVD.
mixed mode - A type of CD containing both Red Book audio and Yellow Book computer data tracks.
MKB (Media Key Block) - Set of keys used in CPPM and CPRM for authenticating players.
MLP (Meridian Lossless Packing) - A lossless compression technique (used by DVD-Audio) that removes redundancy from PCM audio signals to achieve a compression ratio of about 2:1 while allowing the signal to be perfectly recreated by the MLP decoder.
MO - Magneto-optical rewritable discs.
modulation - Replacing patterns of bits with different (usually larger) patterns designed to control the characteristics of the data signal. DVD uses 8/16 modulation, where each set of 8 data bits is replaced by 16 channel bits before being written onto the disc.
mosquitoes - A term referring to the fuzzy dots that can appear around sharp edges (high spatial frequencies) after video compression. Also known as the Gibbs Effect.
mother - The metal discs produced from mirror images of the father disc in the replication process. Mothers are used to make stampers, often called sons.
motion compensation - In video decoding, the application of motion vectors to already-decoded blocks to construct a new picture.
motion estimation - In video encoding, the process of analyzing previous or future frames to identify blocks that have not changed or have only changed location. Motion vectors are then stored in place of the blocks. This is very computation-intensive and can cause visual artifacts when subject to errors.
motion vector - A two-dimensional spatial displacement vector used for MPEG motion compensation to provide an offset from the encoded position of a block in a reference (I or P) picture to the predicted position (in a P or B picture).
MP@ML - Main profile at main level. The common MPEG-2 format used by DVD (along with SP@ML).
MP3 - MPEG-1 Layer III audio. A perceptual audio coding algorithm. Not supported in DVD-Video or DVD-Audio formats.
MPEG audio - Audio compressed according to the MPEG perceptual encoding system. MPEG-1 audio provides two channels, which can be in Dolby Surround format. MPEG-2 audio adds data to provide discrete multichannel audio. Stereo MPEG audio is the mandatory audio compression system for 625/50 (PAL/SECAM) DVD-Video.
MPEG video - Video compressed according to the MPEG encoding system. MPEG-1 is typically used for low data rate video such as on a Video CD. MPEG-2 is used for higher-quality video, especially interlaced video, such as on DVD or HDTV. (See Table 3.5 for a comparison of MPEG-1 and MPEG-2.)
MPEG - Moving Pictures Expert Group. An international committee that developed the MPEG family of audio and video compression systems.
Mt. Fuji - See SFF 8090.
MTBF - Mean time between failure. A measure of reliability for electronic equipment, usually determined in benchmark testing. The higher the MTBF, the more reliable the hardware.
multiangle - A DVD-Video program containing multiple angles allowing different views of a scene to be selected during playback.
multichannel - Multiple channels of audio, usually containing different signals for different speakers in order to create a surround-sound effect.
multilanguage - A DVD-Video program containing sound tracks and subtitle tracks for more than one language.
multimedia - Information in more than one form, such as text, still images, sound, animation, and video. Usually implies that the information is presented by a computer.
multiplexing - Combining multiple signals or data streams into a single signal or stream. Usually achieved by interleaving at a low level.
MultiRead - A standard developed by the Yokohama group, a consortium of companies attempting to ensure that new CD and DVD hardware can read all CD formats (see “Innovations of CD” in Chapter 2 for a discussion of CD variations).
multisession - A technique in write-once recording technology that allows additional data to be appended after data written in an earlier session.
mux - Short for multiplex.
mux_rate - In MPEG, the combined rate of all packetized elementary streams (PES) of one program. The mux_rate of DVD is 10.08 Mbps.
NAB - National Association of Broadcasters.
NCTA - National Cable Television Association.
nighttime mode - Name for Dolby Digital dynamic range compression feature to allow low-volume nighttime listening without losing legibility of dialog.
noise floor - The level of background noise in a signal or the level of noise introduced by equipment or storage media below which the signal can’t be isolated from the noise.
noise - Irrelevant, meaningless, or erroneous information added to a signal by the recording or transmission medium or by an encoding/decoding process. An advantage of digital formats over analog formats is that noise can be completely eliminated (although new noise may be introduced by compression).
NRZI - Non-return to zero, inverted. A method of coding binary data as waveform pulses. Each transition represents a one, while lack of a transition represents a run of zeros.
NTSC - National Television Systems Committee. A committee organized by the Electronic Industries Association (EIA) that developed commercial television broadcast standards for the United States. The group first established black-and-white TV standards in 1941, using a scanning system of 525 lines at 60 fields per second. The second committee standardized color enhancements using 525 lines at 59.94 fields per second. NTSC refers to the composite color-encoding system. The 525/59.94 scanning system (with a 3.58-MHz color subcarrier) is identified by the letter M, and is often incorrectly referred to as NTSC. The NTSC standard is also used in Canada, Japan, and other parts of the world. NTSC is facetiously referred to as meaning never the same color because of the system’s difficulty in maintaining color consistency.
NTSC-4.43 - A variation of NTSC where a 525/59.94 signal is encoded using the PAL subcarrier frequency and chroma modulation. Also called 60-Hz PAL.
numerical aperture (NA) - A unitless measure of the ability of a lens to gather and focus light. NA = n sin θ, where θ is the angle of the light as it narrows to the focal point. A numerical aperture of 1 implies no change in parallel light beams. The higher the number, the greater the focusing power and the smaller the spot.
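A minimal sketch of the NA relation above; the refractive index and half-angle in the example are illustrative guesses, not DVD specification values.

```python
import math

def numerical_aperture(n: float, theta_degrees: float) -> float:
    """NA = n * sin(theta), with theta given in degrees."""
    return n * math.sin(math.radians(theta_degrees))

if __name__ == "__main__":
    # In air (n = 1.0), a half-angle of about 37 degrees gives NA close to 0.6.
    print(round(numerical_aperture(1.0, 37.0), 3))
```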
OEM - Original equipment manufacturer. Computer maker.
operating system - The primary software in a computer, containing general instructions for managing applications, communications, input/output, memory and other low-level tasks. DOS, Windows, Mac OS, and UNIX are examples of operating systems.
opposite path - See OTP.
Orange Book - The document begun in 1990 which specifies the format of recordable CD. Three parts define magneto-optical erasable (MO) and write-once (WO), dye-sublimation write-once (CD-R), and phase-change rewritable (CD-RW) discs. Orange Book added multisession capabilities to the CD-ROM XA format.
OS - Operating system.
OSTA - Optical Storage Technology Association. (See Appendix C.)
OTP - Opposite track path. A variation of DVD dual-layer disc layout where readout begins at the center of the disc on the first layer, travels to the outer edge of the disc, then switches to the second layer and travels back toward the center. Designed for long, continuous-play programs. Also called RSDL. Contrast with PTP.
out of band - In a place not normally accessible.
overscan - The area at the edges of a television tube that is covered to hide possible video distortion. Overscan typically covers about 4 or 5 percent at the edges of the picture but can cover as much as 10 percent.
P picture (or P frame) - In MPEG video, a "predicted" picture based on difference from previous pictures. P pictures (along with I pictures) provide a reference for following P pictures or B pictures.
pack - A group of MPEG packets in a DVD-Video program stream. Each DVD sector (2048 bytes) contains one pack.
packet - A low-level unit of DVD-Video (MPEG) data storage containing contiguous bytes of data belonging to a single elementary stream such as video, audio, control, and so forth. Packets are grouped into packs.
packetized elementary stream (PES) - The low-level stream of MPEG packets containing an elementary stream, such as audio or video.
PAL - Phase Alternate Line. A video standard used in Europe and other parts of the world for composite color encoding. Various versions of PAL use different scanning systems and color subcarrier frequencies (identified with letters B, D, G, H, I, M, and N), the most common being 625 lines at 50 fields per second, with a color subcarrier of 4.43 MHz. PAL is also said to mean “picture always lousy” or “perfect at last,” depending on which side of the ocean the speaker comes from.
palette - A table of colors that identifies a subset from a larger range of colors. The small number of colors in the palette allows fewer bits to be used for each pixel. Also called a color look-up table (CLUT).
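A toy example of the look-up idea described above; the 2-bit palette and its colors are invented for illustration.

```python
# Hypothetical 2-bit color look-up table: each pixel stores only an index.
PALETTE = {
    0: (0, 0, 0),        # black
    1: (255, 255, 255),  # white
    2: (255, 0, 0),      # red
    3: (0, 0, 255),      # blue
}

def expand(indexed_pixels):
    """Convert stored palette indices back into full RGB triples."""
    return [PALETTE[i] for i in indexed_pixels]

if __name__ == "__main__":
    print(expand([0, 1, 2, 3, 1]))
```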
pan & scan - The technique of reframing a picture to conform to a different aspect ratio by cropping parts of the picture. DVD-Video players can automatically create a 4:3 pan & scan version from widescreen video by using a horizontal offset encoded with the video.
parallel path - See PTP.
parental management - An optional feature of DVD-Video that prohibits programs from being viewed or substitutes different scenes within a program depending on the parental level set in the player. Parental control requires that parental levels and additional material (if necessary) be encoded on the disc.
part of title - In DVD-Video, a division of a title representing a scene. Also called a chapter. Parts of titles are numbered 1 to 99.
PCI - Presentation control information. A DVD-Video data stream containing details of the timing and presentation of a program (aspect ratio, angle change, menu highlight and selection information, and so on). PCI and DSI together make up an overhead of about 1 Mbps.
PCM - Pulse code modulation. An uncompressed, digitally coded representation of an analog signal. The waveform is sampled at regular intervals and a series of pulses in coded form (usually quantized) are generated to represent the amplitude.
PC-TV - The merger of television and computers. A personal computer capable of displaying video as a television.
pel - See pixel.
perceived resolution - The apparent resolution of a display from the observer’s point of view, based on viewing distance, viewing conditions, and physical resolution of the display.
perceptual coding - Lossy compression techniques based on the study of human perception. Perceptual coding systems identify and remove information that is least likely to be missed by the average human observer.
PES (packetized elementary stream) - A single video or audio stream in MPEG format.
PGCI - Program chain information. Data describing a chain of cells (grouped into programs) and their sector locations, thus composing a sequential program. PGCI data is contained in the PCI stream.
phase-change - A technology for rewritable optical discs using a physical effect in which a laser beam heats a recording material to reversibly change an area from an amorphous state to a crystalline state, or vice versa. Continuous heat just above the melting point creates the crystalline state (an erasure), while high heat followed by rapid cooling creates the amorphous state (a mark). (Other recording technologies include dye-sublimation and magneto-optical.)
physical format - The low-level characteristics of the DVD-ROM and DVD-Video standards, including pits on the disc, location of data, and organization of data according to physical position.
picture stop - A function of DVD-Video where a code indicates that video playback should stop and a still picture be displayed.
picture - In video terms, a single still image or a sequence of moving images. Picture generally refers to a frame, but for interlaced frames may refer instead to a field of the frame. In a more general sense, picture refers to the entire image shown on a video display.
PIP - Picture in picture. A feature of some televisions that shows another channel or video source in a small window superimposed in a corner of the screen.
pit - The depressed area of an optical disc; a microscopic depression in the recording layer. Pits are usually 1/4 of the laser wavelength deep so as to cause cancellation of the beam by diffraction.
pit art - A pattern of pits to be stamped onto a disc to provide visual art rather than data. A cheaper alternative to a printed label.
pixel aspect ratio - The ratio of width to height of a single pixel. Often means sample pitch aspect ratio (when referring to sampled digital video). Pixel aspect ratio for a given raster can be calculated as (y/x) × (w/h), where x and y are the raster horizontal and vertical pixel counts, and w and h are the width and height of the display aspect ratio. Pixel aspect ratios are also confusingly calculated as (x/y) × (w/h), giving a height-to-width ratio. (See Table 4.17.)
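A sketch of the calculation above using exact fractions; the 720 x 480 raster and the 4:3 and 16:9 display ratios are common DVD values used here only as examples.

```python
from fractions import Fraction

def pixel_aspect_ratio(x: int, y: int, w: int, h: int) -> Fraction:
    """PAR = (y / x) * (w / h), per the formula in the entry above."""
    return Fraction(y, x) * Fraction(w, h)

if __name__ == "__main__":
    print(pixel_aspect_ratio(720, 480, 4, 3))    # 8/9   (pixels narrower than tall)
    print(pixel_aspect_ratio(720, 480, 16, 9))   # 32/27 (pixels wider than tall)
```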
pixel depth - See color depth.
pixel - The smallest picture element of an image (one sample of each color component). A single dot of the array of dots that makes up a picture. Sometimes abbreviated to pel. The resolution of a digital display is typically specified in terms of pixels (width by height) and color depth (the number of bits required to represent each pixel).
PMMA - Polymethylmethacrylate. A clear acrylic compound used in laserdiscs and as an intermediary in the surface transfer process (STP) for dual-layer DVDs. PMMA is also sometimes used for DVD substrates.
POP - Picture outside picture. A feature of some widescreen displays that uses the unused area around a 4:3 picture to show additional pictures.
premastering - The process of preparing data in the final format to create a DVD disc image for mastering. Includes creating DVD control and navigation data, multiplexing data streams together, generating error-correction codes, and performing channel modulation. Often includes the process of encoding video, audio, and subpictures.
presentation data - DVD-Video information such as video, menus, and audio which is presented to the viewer. (See PCI.)
profile - In MPEG-2, profiles specify syntax and processes such as picture types, scalability, and extensions. Compare to level.
program chain - In DVD-Video, a collection of programs, or groups of cells, linked together to create a sequential presentation.
program - In a general sense, a sequence of audio or video. In a technical sense for DVD-Video, a group of cells within a program chain (PGC).
progressive scan - A video scanning system that displays all lines of a frame in one pass. Contrast with interlaced scan. See Chapter 3 for more information.
psychoacoustic - See perceptual coding.
PTP - Parallel track path. A variation of DVD dual-layer disc layout where readout begins at the center of the disc for both layers. Designed for separate programs (such as a widescreen and a pan & scan version on the same disc side) or programs with a variation on the second layer. Also most efficient for DVD-ROM random-access application. Contrast with OTP.
PUH - Pickup head. The assembly of optics and electronics that reads data from a disc.
QCIF - Quarter common intermediate format. Video resolution of 176 x 144.
quantization levels - The predetermined levels at which an analog signal can be sampled as determined by the resolution of the analog-to-digital converter (in bits per sample); or the number of bits stored for the sampled signal.
quantize - To convert a value or range of values into a smaller value or smaller range by integer division. Quantized values are converted back (by multiplying) to a value which is close to the original but may not be exactly the same. Quantization is a primary technique of lossy compression.
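A minimal sketch of the divide-then-multiply round trip described above; the step size of 16 is arbitrary, chosen only to make the loss visible.

```python
def quantize(value: int, step: int) -> int:
    """Store a coarser value by integer division."""
    return value // step

def dequantize(q: int, step: int) -> int:
    """Reconstruct an approximation by multiplying back."""
    return q * step

if __name__ == "__main__":
    original = 200
    q = quantize(original, 16)        # 12
    restored = dequantize(q, 16)      # 192: close to, but not exactly, 200
    print(q, restored)
```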
QuickTime - A digital video software standard developed by Apple Computer for Macintosh (Mac OS) and Windows operating systems. QuickTime is used to support audio and video from a DVD.
QXGA - A video graphics resolution of 2048 x 1536.
RAM - Random-access memory. Generally refers to solid-state chips. In the case of DVD-RAM, the term was borrowed to indicate ability to read and write at any point on the disc.
RAMbo drive - A DVD-RAM drive capable of reading and writing CD-R and CD-RW media. (A play on the word “combo.”)
random access - The ability to jump to a point on a storage medium.
raster - The pattern of parallel horizontal scan lines that makes up a video picture.
read-modify-write - An operation used in writing to DVD-RAM discs. Because data can be written by the host computer in blocks as small as 2 KB, but the DVD format uses ECC blocks of 32 KB, an entire ECC block is read from the data buffer or disc, modified to include the new data and new ECC data, then written back to the data buffer and disc.
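A rough sketch of the read-modify-write idea described above, scaled down so the 32 KB ECC block becomes a 32-byte stand-in; the helper and sizes are illustrative only.

```python
BLOCK_SIZE = 32  # stand-in for a 32 KB ECC block

def read_modify_write(block: bytearray, offset: int, new_data: bytes) -> bytearray:
    """Read the whole block, patch part of it, and return the block to write back."""
    assert len(block) == BLOCK_SIZE
    modified = bytearray(block)                          # "read" the full block
    modified[offset:offset + len(new_data)] = new_data   # modify the small portion
    return modified                                      # "write" the full block back

if __name__ == "__main__":
    block = bytearray(b"." * BLOCK_SIZE)
    print(read_modify_write(block, 8, b"NEW").decode())
```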
Red Book - The document first published in 1982 that specifies the original compact disc digital audio format developed by Philips and Sony.
Reed-Solomon - An error-correction encoding system that cycles data multiple times through a mathematical transformation in order to increase the effectiveness of the error correction, especially for burst errors (errors concentrated closely together, as from a scratch or physical defect). DVD uses rows and columns of Reed-Solomon encoding in a two-dimensional lattice, called Reed-Solomon product code (RS-PC).
reference picture (or reference frame) - An encoded frame that is used as a reference point from which to build dependent frames. In MPEG-2, I pictures and P pictures are used as references.
reference player - A DVD player that defines the ideal behavior as specified by the DVD-Video standard.
regional code - A code identifying one of the world regions for restricting DVD-Video playback. See Table A.21.
regional management - A mandatory feature of DVD-Video players and drives that restricts the playback of a disc to a specific geographical region. Each player and DVD-ROM drive includes a single regional code, and each disc side can specify in which regions it is allowed to be played. Regional coding of discs is optional; a disc without regional codes will play in all players in all regions.
replication - 1) The reproduction of media such as optical discs by stamping (contrast with duplication); 2) a process used to increase the size of an image by repeating pixels (to increase the horizontal size) and/or lines (to increase the vertical size) or to increase the display rate of a video stream by repeating frames. For example, a 360 x 240 pixel image can be displayed at 720 x 480 size by duplicating each pixel on each line and then duplicating each line. In this case the resulting image contains blocks of four identical pixels. Obviously, image replication can cause blockiness. A 24-fps video signal can be displayed at 72 fps by repeating each frame three times. Frame replication can cause jerkiness of motion. Contrast with decimation. Also see interpolate.
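A small sketch of spatial replication as described above, doubling a 2 x 2 "image" by repeating pixels and lines; purely illustrative.

```python
def replicate_2x(image):
    """Double width and height by repeating each pixel and each row."""
    doubled_rows = []
    for row in image:
        wide_row = []
        for pixel in row:
            wide_row.extend([pixel, pixel])   # repeat each pixel horizontally
        doubled_rows.append(wide_row)
        doubled_rows.append(list(wide_row))   # repeat each line vertically
    return doubled_rows

if __name__ == "__main__":
    tiny = [[1, 2],
            [3, 4]]
    for row in replicate_2x(tiny):
        print(row)   # note the blocks of four identical values
```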
resampling - The process of converting between different spatial resolutions or different temporal resolutions. This may be based on simple sampling of the source information at higher or lower resolution or may include interpolation to correct for differences in pixel aspect ratios or to adjust for differences in display rates.
resolution - 1) A measurement of relative detail of a digital display, typically given in pixels of width and height; 2) the ability of an imaging system to make clearly distinguishable or resolvable the details of an image. This includes spatial resolution (the clarity of a single image), temporal resolution (the clarity of a moving image or moving object), and perceived resolution (the apparent resolution of a display from the observer’s point of view). Analog video is often measured as a number of lines of horizontal resolution over the number of scan lines. Digital video is typically measured as a number of horizontal pixels by vertical pixels. Film is typically measured as a number of line pairs per millimeter; 3) the relative detail of any signal, such as an audio or video signal. Also see lines of horizontal resolution.
RGB - Video information in the form of red, green, and blue tristimulus values. The combination of three values representing the intensity of each of the three colors can represent the entire range of visible light.
ROM - Read-only memory.
rpm - Revolutions per minute. A measure of rotational speed.
RS - Reed-Solomon. See Reed-Solomon and RS-PC.
RS-CIRC - See CIRC.
RSDL - Reverse-spiral dual-layer. See OTP.
RS-PC - Reed-Solomon product code. An error-correction encoding system used by DVD employing rows and columns of Reed-Solomon encoding to increase error-correction effectiveness.
R-Y, B-Y - The general term for color-difference video signals carrying red and blue color information, where the brightness (Y) has been subtracted from the red and blue RGB signals to create R-Y and B-Y color-difference signals. (See Chapter 3.)
S/N - Signal-to-noise ratio. Also called SNR.
S/P DIF - Sony/Philips digital interface. A consumer version of the AES/EBU digital audio transmission standard. Most DVD players include S/P DIF coaxial digital audio connectors providing PCM and encoded digital audio output.
sample rate - The number of times a digital sample is taken, measured in samples per second, or Hertz. The more often samples are taken, the better a digital signal can represent the original analog signal. Sampling theory states that the sampling frequency must be more than twice the signal frequency in order to reproduce the signal without aliasing. DVD PCM audio allows sampling rates of 48 and 96 kHz.
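A trivial sketch of the sampling-theory limit mentioned above, using the DVD PCM rates quoted in the entry:

```python
def max_alias_free_frequency(sample_rate_hz: float) -> float:
    """Highest signal frequency that can be represented without aliasing."""
    return sample_rate_hz / 2.0

if __name__ == "__main__":
    for rate in (48_000, 96_000):   # DVD PCM sampling rates
        print(rate, "Hz sampling ->", max_alias_free_frequency(rate), "Hz maximum signal")
```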
sample size - The number of bits used to store a sample. Also called resolution. In general, the more bits allocated per sample, the better the reproduction of the original analog information. Audio sample size determines the dynamic range. DVD PCM audio uses sample sizes of 16, 20, or 24 bits.
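A sketch of the common rule of thumb (not stated in the entry above) that each bit of sample size adds roughly 6 dB of dynamic range, applied to the DVD PCM sample sizes:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range in decibels: 20 * log10(2 ** bits)."""
    return 20.0 * math.log10(2 ** bits)

if __name__ == "__main__":
    for bits in (16, 20, 24):   # DVD PCM sample sizes
        print(bits, "bits ->", round(dynamic_range_db(bits), 1), "dB")
```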
sample - A single digital measurement of analog information. A snapshot in time of a continuous analog waveform. See sampling.
sampling - Converting analog information into a digital representation by measuring the value of the analog signal at regular intervals, called samples, and encoding these numerical values in digital form. Sampling is often based on specified quantization levels. Sampling may also be used to adjust for differences between different digital systems (see resampling and subsampling).
saturation - The intensity or vividness of a color.
scaling - Altering the spatial resolution of a single image to increase or reduce the size; or altering the temporal resolution of an image sequence to increase or decrease the rate of display. Techniques include decimation, interpolation, motion compensation, replication, resampling, and subsampling. Most scaling methods introduce artifacts.
scan line - A single horizontal line traced out by the scanning system of a video display unit. 525/60 (NTSC) video has 525 scan lines, about 480 of which contain actual picture. 625/50 (PAL/SECAM) video has 625 scan lines, about 576 of which contain actual picture.
scanning velocity - The speed at which the laser pickup head travels along the spiral track of a disc.
SCMS - Serial copy management system. Used by DAT, MiniDisc, and other digital recording systems to control copying and limit the number of copies that can be made from copies.
SCSI - Small Computer Systems Interface. An electronic interface and command set for attaching and controlling internal or external peripherals, such as a DVD-ROM drive, to a computer. The command set of SCSI was extended for DVD-ROM devices by the SFF 8090 specification.
SDDI - Serial Digital Data Interface. A digital video interconnect designed for serial digital information to be carried over a standard SDI connection.
SDDS - Sony Dynamic Digital Sound. A perceptual audio-coding system developed by Sony for multichannel audio in theaters. A competitor to Dolby Digital and an optional audio track format for DVD.
SDI - See Serial Digital Interface. Also Strategic Defense Initiative, a.k.a. Star Wars, which as of 2000 was still not available on DVD other than as bootleg copies.
SDMI - Secure Digital Music Initiative. Efforts and specifications for protecting digital music.
SDTV - Standard-definition television. A term applied to traditional 4:3 television (in digital or analog form) with a resolution of about 700 x 480 (about 1/3 megapixel). Contrast with HDTV.
seamless playback - A feature of DVD-Video where a program can jump from place to place on the disc without any interruption of the video. Allows different versions of a program to be put on a single disc by sharing common parts.
SECAM - Séquentiel couleur avec mémoire/sequential color with memory. A composite color standard similar to PAL, but currently used only as a transmission standard in France and a few other countries. Video is produced using the 625/50 PAL standard and is then transcoded to SECAM by the player or transmitter.
sector - A logical or physical group of bytes recorded on the disc—the smallest addressable unit. A DVD sector contains 38,688 bits of channel data and 2048 bytes of user data.
seek time - The time it takes for the head in a drive to move to a data track.
Serial Digital Interface (SDI) - The professional digital video connection format using a 270 Mbps transfer rate. A 10-bit, scrambled, polarity-independent interface, with common scrambling for both component ITU-R 601 and composite digital video and four groups each of four channels of embedded digital audio. SDI uses standard 75-ohm BNC connectors and coax cable.
SFF 8090 - Specification number 8090 of the Small Form Factor Committee, an ad hoc group formed to promptly address disk industry needs and to develop recommendations to be passed on to standards organizations. SFF 8090 (also known as the Mt. Fuji specification), defines a command set for CD-ROM– and DVD-ROM–type devices, including implementation notes for ATAPI and SCSI.
SI - Système International (d’Unités)/International System (of Units). A complete system of standardized units and prefixes for fundamental quantities of length, time, volume, mass, and so on.
signal-to-noise ratio - The ratio of pure signal to extraneous noise, such as tape hiss or video interference. Signal-to-noise ratio is measured in decibels (dB). Analog recordings almost always have noise. Digital recordings, when properly prefiltered and not compressed, have no noise.
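A minimal sketch of expressing a signal-to-noise ratio in decibels; the 20 log10 convention applies to amplitude-like quantities, and the example numbers are arbitrary (neither is stated in the entry above).

```python
import math

def snr_db(signal_amplitude: float, noise_amplitude: float) -> float:
    """Signal-to-noise ratio in decibels for amplitude (voltage-like) signals."""
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)

if __name__ == "__main__":
    print(round(snr_db(1.0, 0.001), 1))   # a 1000:1 amplitude ratio is 60 dB
```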
simple profile (SP) - A subset of the syntax of the MPEG-2 video standard designed for simple and inexpensive applications such as software. SP does not allow B pictures. See profile.
simulate - To test the function of a DVD disc in the authoring system, without actually formatting an image.
SMPTE - The Society of Motion Picture and Television Engineers. An international research and standards organization. The SMPTE time code, used for marking the position of audio or video in time, was developed by this group. (See Appendix C.)
son - The metal discs produced from mother discs in the replication process. Fathers or sons are used in molds to stamp discs.
SP@ML - Simple profile at main level. The simplest MPEG-2 format used by DVD. Most discs use MP@ML. SP does not allow B pictures.
space - The reflective area of a writable optical disc. Equivalent to a land.
spatial resolution - The clarity of a single image or the measure of detail in an image. See resolution.
spatial - Relating to space, usually two-dimensional. Video can be defined by its spatial characteristics (information from the horizontal plane and vertical plane) and its temporal characteristics (information at different instances in time).
squeezed video - See anamorphic.
stamping - The process of replicating optical discs by injecting liquid plastic into a mold containing a stamper (father or son). Also (inaccurately) called mastering.
STP - Surface transfer process. A method of producing dual-layer DVDs that sputters the reflective (aluminum) layer onto a temporary substrate of PMMA, then transfers the metalized layer to the already-molded layer 0.
stream - A continuous flow of data, usually digitally encoded, designed to be processed sequentially. Also called a bitstream.
subpicture - Graphic bitmap overlays used in DVD-Video to create subtitles, captions, karaoke lyrics, menu highlighting effects, and so on.
subsampling - The process of reducing spatial resolution by taking samples that cover larger areas than the original samples or of reducing temporal resolutions by taking samples that cover more time than the original samples. See chroma subsampling. Also called downsampling.
substrate - The clear polycarbonate disc onto which data layers are stamped or deposited.
subtitle - A textual representation of the spoken audio in a video program. Subtitles are often used with foreign languages and do not serve the same purpose as captions for the hearing impaired. See subpicture.
surround sound - A multichannel audio system with speakers in front of and behind the listener to create a surrounding envelope of sound and to simulate directional audio sources.
SVCD - Super Video Compact Disc. MPEG-2 video on CD. Used primarily in Asia.
SVGA - A video graphics resolution of 800 x 600 pixels.
S-VHS - Super VHS (Video Home System). An enhancement of the VHS videotape standard using better recording techniques and Y/C signals. The term S-VHS is often used incorrectly to refer to s-video signals and connectors.
s-video - A video interface standard that carries separate luma and chroma signals, usually on a four-pin mini-DIN connector. Also called Y/C. The quality of s-video is significantly better than composite video since it does not require a comb filter to separate the signals, but it’s not quite as good as component video. Most high-end televisions have s-video inputs. S-video is often erroneously called S-VHS.
SXGA - A video graphics resolution of 1280 x 1024 pixels.
sync - A video signal (or component of a video signal) containing information necessary to synchronize the picture horizontally and vertically. Also, specially formatted data on disc which helps the readout system identify location and specific data structures.
syntax - The rules governing construction or formation of an orderly system of information. For example, the syntax of the MPEG video encoding specification defines how data and associated instructions are used by a decoder to create video pictures.
system menu - The main menu of a DVD-Video disc, from which titles are selected. Also called the title selection menu or disc menu.
T - Tera. An SI prefix for denominations of one trillion (10^12).
telecine artist - The operator of a telecine machine. Also called a colorist.
telecine - The process (and the equipment) used to transfer film to video. The telecine machine performs 3:2 pulldown by projecting film frames in the proper sequence to be captured by a video camera.
temporal resolution - The clarity of a moving image or moving object, or the measurement of the rate of information change in motion video. See resolution.
temporal - Relating to time. The temporal component of motion video is broken into individual still pictures. Because motion video can contain images (such as backgrounds) that do not change much over time, typical video has large amounts of temporal redundancy.
tilt - A mechanical measurement of the warp of a disc. Usually expressed in radial and tangential components: radial indicating dishing and tangential indicating ripples in the perpendicular direction.
time code - Information recorded with audio or video to indicate a position in time. Usually consists of values for hours, minutes, seconds, and frames. Also called SMPTE time code. Some DVD-Video material includes information to allow the player to search to a specific time code position.
title key - A value used to encrypt and decrypt (scramble) user data on DVD-Video discs.
title - The largest unit of a DVD-Video disc (other than the entire volume or side). Usually a movie, TV program, music album, or so on. A disc can hold up to 99 titles, which can be selected from the disc menu. Entire DVD volumes are also commonly called titles.
track buffer - Circuitry (including memory) in a DVD player that takes data coming from the disc at a constant rate of 11.08 Mbps (except for breaks when a different part of the disc is accessed) and supplies it to the system decoders as a variable stream of up to 10.08 Mbps.
track pitch - The distance (in the radial direction) between the centers of two adjacent tracks on a disc. The DVD-ROM standard track pitch is 0.74 micrometers (µm).
track - 1) A distinct element of audiovisual information, such as the picture, a sound track for a specific language, or the like. DVD-Video allows one track of video (with multiple angles), up to 8 tracks of audio, and up to 32 tracks of subpicture; 2) one revolution of the continuous spiral channel of information recorded on a disc.
transfer rate - The speed at which a certain volume of data is transferred from a device such as a DVD-ROM drive to a host such as a personal computer. Usually measured in bits per second or bytes per second. Sometimes confusingly used to refer to data rate, which is independent of the actual transfer system.
transform - The process or result of replacing a set of values with another set of values. A mapping of one information space onto another.
trim - See crop.
tristimulus - A three-valued signal that can match nearly all colors of visible light in human vision. This is possible because of the three types of photoreceptors in the eye. RGB, YCbCr, and similar signals are tristimulus, and can be interchanged by using mathematical transformations (subject to possible loss of information).
TVL - Television line. See lines of horizontal resolution.
TWG - Technical working group. A usually ad hoc group of industry representatives working together for a period of time to make recommendations or define standards. Specifically, the predecessor to the CPTWG.
UDF Bridge - A combination of UDF and ISO 9660 file system formats that provides backward-compatibility with ISO 9660 readers while allowing full use of the UDF standard.
UDF - Universal Disc Format. A standard developed by the Optical Storage Technology Association designed to create a practical and usable subset of the ISO/IEC 13346 recordable, random-access file system and volume structure format.
universal DVD - A DVD designed to play in DVD-Audio and DVD-Video players (by carrying a Dolby Digital audio track in the DVD-Video zone).
universal DVD player - A DVD player that can play both DVD-Video and DVD-Audio discs.
user data - The data recorded on a disc independent of formatting and error-correction overhead. Each DVD sector contains 2048 bytes of user data.
UXGA - A video graphics resolution of 1600 x 1200 pixels.
VBI - Vertical blanking interval. The scan lines in a television signal that do not contain picture information. These lines are present to allow the electron scanning beam to return to the top and are used to contain auxiliary information such as closed captions.
VBR - Variable bit rate. Data that can be read and processed at a volume that varies over time. A data compression technique that produces a data stream between a fixed minimum and maximum rate. A constant level of quality is generally maintained, with the required bandwidth increasing or decreasing depending on the complexity (the amount of spatial and temporal energy) of the data being encoded. In other words, quality is held constant while the data rate is allowed to vary. Compare to CBR.
VBV - Video buffering verifier. A hypothetical decoder that is conceptually connected to the output of an MPEG video encoder. Provides a constraint on the variability of the data rate that an encoder can produce.
VCAP - Video-capable audio player. An audio player that can read the limited subset of video features defined for the DVD-Audio format. (Contrast with universal DVD player.)
VCD - Video Compact Disc. Near-VHS-quality MPEG-1 video on CD. Used primarily in Asia.
VfW - See Video for Windows.
VGA (Video Graphics Array) - A standard analog monitor interface for computers. Also a video graphics resolution of 640 x 480 pixels.
VHS - Video Home System. The most popular system of videotape for home use. Developed by JVC.
Video CD - An extension of CD based on MPEG-1 video and audio. Allows playback of near-VHS-quality video on a Video CD player, CD-i player, or computer with MPEG decoding capability.
Video for Windows - The system software additions used for motion video playback in Microsoft Windows. Replaced in newer versions of Windows by DirectShow (formerly called ActiveMovie).
Video manager (VMG) - The disc menu. Also called the title selection menu.
Video title set (VTS) - A set of one to ten files holding the contents of a title.
videophile - Someone with an avid interest in watching videos or in making video recordings. Videophiles are often very particular about audio quality, picture quality, and aspect ratio to the point of snobbishness.
VLC - Variable length coding. See Huffman coding.
VOB - Video object. A small physical unit of DVD-Video data storage, usually a GOP.
volume - A logical unit representing all the data on one side of a disc.
VSDA - Video Software Dealers Association. (See Appendix C.)
WAEA - World Airline Entertainment Association. Discs produced for use in airplanes contain extra information in a WAEA directory. The in-flight entertainment working group of the WAEA petitioned the DVD Forum to assign region 8 to discs intended for in-flight use.
watermark - Information hidden as “invisible noise” or “inaudible noise” in a video or audio signal.
White Book - The document from Sony, Philips, and JVC, begun in 1993 that extended the Red Book compact disc format to include digital video in MPEG-1 format. Commonly called Video CD.
widescreen - A video image wider than the standard 1.33 (4:3) aspect ratio. When referring to DVD or HDTV, widescreen usually indicates a 1.78 (16:9) aspect ratio.
window - A usually rectangular section within an entire screen or picture.
Windows - See Microsoft Windows.
XA - See CD-ROM XA.
XDS - Extended data services. Auxiliary information, such as program name, content rating, and time of day, carried on line 21 of an NTSC signal along with closed captions.
XGA - A video graphics resolution of 1024 x 768 pixels.
XVCD - A non-standard variation of VCD.
Y - The luma or luminance component of video: brightness independent of color.
Y/C - A video signal in which the brightness (luma, Y) and color (chroma, C) signals are separated. Also called s-video.
YCbCr - A component digital video signal containing one luma and two chroma components. The chroma components are usually adjusted for digital transmission according to ITU-R BT.601. DVD-Video’s MPEG-2 encoding is based on 4:2:0 YCbCr signals. YCbCr applies only to digital video, but is often incorrectly used in reference to the YPbPr analog component outputs of DVD players.
Yellow Book - The document produced in 1985 by Sony and Philips that extended the Red Book compact disc format to include digital data for use by a computer. Commonly called CD-ROM.
YPbPr - A component analog video signal containing one luma and two chroma components. Often referred to loosely as YUV or Y, B-Y, R-Y.
YUV - In the general sense, any form of color-difference video signal containing one luma and two chroma components. Technically, YUV is applicable only to the process of encoding component video into composite video. See YCbCr and YPbPr.
ZCLV - Zoned constant linear velocity. Concentric rings on a disc within which all sectors are the same size. A combination of CLV and CAV.
Saturday, January 28, 2006
Firm recalls teethers due to bacteria risk
WASHINGTON -- A Massachusetts company recalled 500,000 liquid-filled baby teethers distributed in the U.S. and Canada yesterday.
Possible bacterial contamination could cause serious illness.
Six styles of teethers may be contaminated with the Pseudomonas aeruginosa or the Pseudomonas putida bacteria.
The Disney Days of Hunny Soft Cool Ring Teether, bearing style number Y1447 and the Disney Soft Cool Ring Teether, bearing style number Y1470 or Y1490, feature Winnie-the-Pooh characters. The Sesame Beginnings Chill and Chew Teether, style number Y3095, features Sesame Street characters.
The other teethers recalled are The First Years Cool Animal Teether (style number Y1473) and The First Years Floating Friends Teether (style number Y1474).
Thursday, January 26, 2006
Airborne Legionnaires' bacteria can travel miles
By Anne Harding
NEW YORK (Reuters Health) - The bacterium responsible for causing Legionnaires' disease can spread up to 6 kilometers from its source by airborne transmission, French researchers report.
Legionella pneumophila likes to live in hot water, such as in industrial cooling towers or the water systems of large buildings where it can then cause pneumonia-like infections. Now it seems that a wider area may be at risk.
Past studies found airborne legionella spread only a few hundred meters, lead author Dr. Tran Minh Nhu Nguyen, who is currently at the National Public Health Institute in Helsinki, told Reuters Health. If other investigators confirm the new findings, he added, "a number of regulations and guidelines related to this environmental health risk should be revised accordingly."
In the Journal of Infectious Diseases, Nguyen and his team report on their investigation of a 2004 outbreak of Legionnaires' disease that occurred in Pas-de-Calais in northern France.
They identify a contaminated cooling tower at a petrochemical plant as the source of the outbreak, which killed 21 of the 86 individuals with laboratory-confirmed infection. Most of the victims lived within 6 kilometers of the plant, although one lived 12 kilometers away.
The fatality rate is "striking" when compared with past community-acquired outbreaks, in which fatality rates ranged from 1 percent to 11 percent, the researchers note. They think the strain of legionella involved could have been unusually virulent.
The outbreak occurred in two peaks, the first ending after the cooling tower had been shut down and the second beginning during cleaning of the tower and peaking once it had reopened.
The pattern suggests that high-pressure cleaning methods used to decontaminate the towers contributed to the bacterium's spread. "There are measures and guidelines for managing cooling towers contaminated with legionella," Nguyen said. "However, how well they have been adopted and implemented depends on the individual country and setting."
SOURCE: Journal of Infectious Diseases, January 1, 2006.
Wednesday, January 25, 2006
Viable Group A Streptococci In Macrophages During Acute Soft Tissue Infection
Article Date: 17 Jan 2006 - 19:00pm (UK)
This study shows that group A streptococci survive intracellularly in macrophages during acute invasive infections; the streptococcal pyrogenic exotoxin SpeB may have a role in this survival. Citation: Thulin P, Johansson L, Low DE, Gan BS, Kotb M, et al. (2006) Viable group A streptococci in macrophages during acute soft tissue infection. PLoS Med 3(3): e53.
LINK TO THE PUBLISHED ARTICLE: dx.doi.org/10.1371/journal.pmed.0030053
All works published in PLoS Biology are open access. Everything is immediately available - to read, download, redistribute, include in databases, and otherwise use - without cost to anyone, anywhere, subject only to the condition that the original authorship and source are properly attributed. Copyright is retained by the authors. The Public Library of Science uses the Creative Commons Attribution License.
Tuesday, January 24, 2006
2 million in U.S. may carry staph; infections on the rise
The Dallas Morning NewsPublished on: 01/23/06
DALLAS — The first nationwide statistical snapshot of a worrisome infection estimates at least 2 million people in the country may be silently carrying a potentially dangerous bacterium. And a second study reports that the germ appears to be creeping into hospital patients at a steady pace.
The studies are the latest clues to the behavior of methicillin-resistant Staphylococcus aureus, or MRSA. Since its appearance in the general population in 1999, the infection has already become notorious for illness among inmates, children and professional athletes. Last September, MRSA spread among Katrina evacuees at a shelter in Dallas. The St. Louis Rams battled an outbreak that affected five players, who apparently passed the infection to members of the San Francisco 49ers.
On Friday, the Dallas County Department of Health and Human Services issued a warning about MRSA infections picked up in contaminated whirlpool footbaths at some area nail salons.
Despite high-profile attacks of MRSA, scientists had not been able to say exactly how widespread the bug might be. Most of the time, people carry staph bacteria in their noses without knowing it. Only when it slips through breaks in the skin does MRSA announce itself. It concerns doctors not only because of its potential to cause disease, but also because it resists many traditional treatments.
This month, in the Journal of Infectious Diseases, researchers from the U.S. Centers for Disease Control and Prevention report that about 32 percent of the American population harbors S. aureus. About 1 percent of that staph appears to be MRSA. That would mean between 1.2 million and 3.8 million people carry the more dangerous form.
The most likely staph carriers were children, though the MRSA form was most common among older people, especially women.
"It's become one of the dominant infections of childhood," said Dr. William Schaffner of Vanderbilt University School of Medicine, who wrote a commentary that appears with the study.
Dr. Schaffner also noted that the study was conducted in 2001, just as the organism was getting a foothold in the U.S. population, and later data point to an epidemic that has only grown. For example, about 10 percent of children who come to Vanderbilt are colonized with MRSA.
MRSA has been long known as a hospital menace. In 1999, however, MRSA infections began showing up in otherwise healthy people who had never been near a hospital. The bacterium even got its own acronym: CA-MRSA, for "community acquired" MRSA.
CA-MRSA bears some differences from its hospital-bound cousin. While it appears to be more susceptible to antibiotics, it has at times shown a greater knack for causing some of staph's more terrible consequences, such as a particularly deadly form of pneumonia. Most often, though, CA-MRSA causes skin infections.
The infection is making a renewed name for itself in hospitals. A second study of more than 1,200 intensive care units, to be published in the Feb. 1 issue of Clinical Infectious Diseases, notes that 64 percent of staph infections now appear to be caused by MRSA. In 1992 the number was 36 percent.
Doctors are intensifying efforts to control the spread of MRSA, which thrives in places where people crowd together and which takes advantage of lapses in hygiene.
"It's not fancy drugs that we need in terms of prevention," said Dr. Jane Siegel of Children's Medical Center.
Infectious Diseases Thriving as Human Population Grows
Contact: Jill Stoddard, 212-854-6465 or [email protected]
Humans have provoked a lot of wobbling in the global food web, and one result is the explosion of infectious diseases.
“All of our infectious diseases are other species making a living off of us,” says Joel Cohen, a populations expert at both Columbia and Rockefeller Universities. “Think of the thousands of bacteria in our gut, the fungi on our skin, the insects that suck our blood, and the diseases those insects inject.”
As a result, new microbes and viruses that prey on humans, such as Ebola and HIV, are burgeoning around the world, and old ones continue to thrive.
“Over the last 10,000 years, the number of humans has increased about 1,000 fold, creating a lot more demand on other species, and providing more available material,” says Cohen.
Of particular interest to Cohen is Chagas’ disease, caused by an insect-borne parasite similar to the one responsible for African sleeping sickness. Cohen’s mathematical model of how the disease spreads has had public health implications for millions of poor Latin Americans.
“The network of infectious disease is incredibly dynamic,” Cohen says.
Because most species rely on other species for their energy, or are consumed by other species in search of energy, the species are all interconnected, forming a network known as the food web.
Cohen himself has kept logs of the species of food he eats, and over time it comes to about 150 different species of plants and animals. Humans collectively consume tens of thousands of other species. “That represents a lot of energy, and a lot of diversity, coming in,” he says.
Cohen's life work is to figure out the dynamics and interactions between the 100 million or so interconnected species on this planet.
The broad reach of his research has earned Cohen not only entrance into expected societies, such as the U.S. National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society, but also onto the worldwide Board of Governors of The Nature Conservancy.
Cohen is Abby Rockefeller Mauzé Professor at Rockefeller University and Professor of Populations at the Earth Institute at Columbia University.
In 2002, New York Mayor Michael R. Bloomberg gave Cohen his Award for Excellence in Science and Technology.
This is an edited version of a Rockefeller University article written by Renee Twombly. View full story.
Earth Institute News
Sunday, January 22, 2006
Rare bacteria species found in wounds of tsunami patients. Predominance of gram-negative rods, increased antibiotic resistance
Article in Swedish
[PubMed - indexed for MEDLINE]
Saturday, January 21, 2006
Bacteria in dirt may be "born" resistant to drugs
Thursday, January 19, 2006
By Maggie Fox
WASHINGTON (Reuters) - Bacteria in dirt may be "born" with a resistance to antibiotics, which could help shed light on the problem of drug-defying "superbugs," Canadian researchers said on Thursday.
They tested 480 different bacteria found in soil and discovered that every single one had some resistance to antibiotics, meaning they had evolved a mechanism for evading the effects of the drugs.
The findings, published in the journal Science, could help explain why bacteria so quickly develop resistance to antibiotics, and why drug companies must constantly develop new ones.
"It explains where these things come from in the first place," Gerry Wright, chair of Biochemistry and Biomedical Sciences at Ontario's McMaster University, said in a telephone interview. "This work could prove to be extremely valuable to the drug development process."
Wright's team dug up 480 strains of Streptomyces bacteria and tested them for resistance to various antibiotics.
"Without exception, every strain ... was found to be multi-drug resistant to seven or eight antibiotics on average, with two strains being resistant to 15 of 21 drugs," they wrote in their report.
'A LOGICAL PLACE TO START'
These particular bacteria do not infect people, but Wright believes the findings almost certainly apply to other species of microbes.
"It turns out that Streptomyces make lots of antibiotics," Wright said. "Anything that ends in 'mycin' comes from streptomycin - vancomycin, streptomycin."
That was why they chose this group of bacteria.
"We were curious to see where these things might come from in the first place, so it seemed that was a logical place to start. I expect lots of these (drug-resistant) genes are peppered all over the microbial community," Wright said.
They exposed the bacteria to known antibiotics and then searched for genes that were activated when the microbes survived.
"We found old mechanisms and new mechanisms," Wright said.
"We found a brand-new resistance mechanism to an antibiotic called telithromycin," he said, referring to Aventis' drug Ketek, only approved in 2004.
Ketek was designed to overcome resistance to antibiotics, but one of the bacteria Wright tested evolved a way to prevent it from working.
Almost as soon as penicillin was introduced in the 1940s, bacteria began to develop resistance to its effects, prompting researchers to develop many new generations of antibiotics.
But their overuse and misuse have helped fuel the rise of drug-resistant "superbugs." The U.S. Centers for Disease Control and Prevention says 70 percent of infections that people get while in the hospital are resistant to at least one antibiotic.
Wright said his findings do not get doctors off the hook. He said they still must prescribe antibiotics only when they are needed, and stress to patients the need to use them properly.
Soil bacteria live in a constant kind of arms race, making antibiotics to protect themselves against other bacteria, and then evolving antibiotic resistance to evade the antibiotics made by other bacteria.
"Their coping tactics may be able to give us a glimpse into the future of clinical resistance to antibiotics," Wright said.
Monday, January 16, 2006
A New Way to Stop Food Bacteria??
Researchers believe they may have found a novel way to disrupt bacteria that cause food poisoning.
The US and UK team have uncovered a previously unrecognised mechanism which bacteria use to escape the body's natural defence responses.
Using this mechanism, the pathogens detect a toxic gas produced by the body and turn it into something that is harmless to evade the onslaught.
Interrupting this might be a way to beat these bacteria, they told Nature.
The team, from the Georgia Institute of Technology in the US and the John Innes Centre in the UK, looked at harmless strains of the bacterium Escherichia coli.
However, they believe their findings will apply to the more harmful strains of E. coli and its close relation salmonella that cause outbreaks of food poisoning around the world.
These bacteria are usually transmitted to humans through undercooked meat, unwashed vegetables and poor food hygiene and can cause diarrhoea and cramps, which usually get better without help.
However, in those who are particularly vulnerable, such as people with weakened immune systems, the consequences can be particularly serious and may require hospital treatment.
Food poisoning bacteria
About six million people in the UK - 10% of the population - have a case of food poisoning each year. More than half of these are caused by bacteria such as E. coli and salmonella.
There are drugs available to treat complicated infections but the bacteria are learning how to dodge these and are becoming resistant.
Ultimately, Professor Ray Dixon and colleagues hope their discovery will help scientists find new ways to treat such infections.
They found E. coli was able to recognise and rid itself of the poisonous nitric oxide that the body produced to fight infection.
The bacterium has a protein called NorR that, once activated, controls the expression of genes. These genes hold the code for an enzyme that removes the nitric oxide, allowing the bug to fend off the body's defences.
Colleague Professor Stephen Spiro explained: "It turns out that the protein NorR contains a single molecule of iron. Our study found that the nitric oxide binds to the iron, which activates the protein.
"If we can interfere with the mechanism, it could lead to better antibiotics and treatments," he said.
Professor Dixon stressed that this would be some years away.
Professor Jay Hinton, head of molecular microbiology at the Institute of Food Research, said:
"Antibiotic resistance is increasing. We do need alternatives.
"What they have found is interesting and unexpected. This, coupled with other work, could lead to new treatments in the future."
He recommended more studies to determine whether the same mechanism was apparent in pathogenic strains of Salmonella, E. coli and other food poisoning bugs.
BBC Health News
Sunday, January 15, 2006
Stomach bacteria linked to iron deficiency
Thursday, January 12, 2006
By Anthony J. Brown, MD
NEW YORK (Reuters Health) - Helicobacter pylori infection, which affects about one third of adults in the US, is associated with an increased risk of iron deficiency and related anemia, according to the results of a new study.
Moreover, this relationship holds true even in the absence of peptic ulcer disease, which can cause iron-deficiency anemia through hemorrhage, the researchers report in the American Journal of Epidemiology.
"For the first time in a national sample of the US population, we found an apparent link between H. pylori infection and iron deficiency" and iron-deficiency anemia, lead author Dr. Victor M. Cardenas, from the University of Texas at Houston, told Reuters Health.
H. pylori infection has previously been found to cause stomach inflammation and most ulcers. The bacterium also increases the risk of stomach cancer.
The researchers identified this new relationship based on an analysis of data from the current National Health and Nutrition Examination Survey (1999-2000). Data on 7,462 subjects who were at least three years of age were included in the analysis.
The presence of H. pylori infection raised the risk of iron deficiency and iron-deficiency anemia by 1.4- and 2.6-fold, respectively. H. pylori infection was also tied to other types of anemia, but to a much lesser extent.
How might H. pylori infection promote iron deficiency short of causing a bleeding ulcer? "The rapid turnover of H. pylori, which seems to sequester iron, is one possible mechanism," Cardenas said.
He added that his group is now seeking funding for a randomized trial to see if eradication of H. pylori can improve iron deficiency in children.
SOURCE: American Journal of Epidemiology, January 15, 2006.
Saturday, January 14, 2006
Lyme Disease Prevention and Control
Reducing exposure to ticks is the best defense against Lyme disease and other tick-borne infections. There are several approaches you and your family can use to prevent and control Lyme disease.
Use repellent, tick checks, and other simple measures to prevent tick bites
Control ticks around your home and in your community
Ask your doctor if taking antibiotics after tick bite is right for you
Learn the early signs of tick-borne illness
A Lyme disease vaccine is no longer available. The vaccine manufacturer discontinued production in 2002, citing insufficient consumer demand. Protection provided by this vaccine diminishes over time. Therefore, if you received the Lyme disease vaccine before 2002, you are probably no longer protected against Lyme disease.
Ticks Transmit Lyme Disease
The Lyme disease bacterium, Borrelia burgdorferi, normally lives in mice, squirrels and other small animals. It is transmitted among these animals – and to humans -- through the bites of certain species of ticks.
In the northeastern and north-central United States, the blacklegged tick (or deer tick, Ixodes scapularis) transmits Lyme disease. In the Pacific coastal United States, the disease is spread by the western blacklegged tick (Ixodes pacificus). Other tick species found in the United States have not been shown to transmit Borrelia burgdorferi.
Blacklegged ticks live for two years and have three feeding stages: larvae, nymph, and adult. When a young tick feeds on an infected animal, the tick takes the bacterium into its body along with the blood meal. The bacterium then lives in the gut of the tick. If the tick feeds again, it can transmit the bacterium to its new host. Usually the new host is another small rodent, but sometimes the new host is a human. Most cases of human illness occur in the late spring and summer when the tiny nymphs are most active and human outdoor activity is greatest.
Although adult ticks often feed on deer, these animals do not become infected. Deer are nevertheless important in transporting ticks and maintaining tick populations.
Other Modes of Transmission
There is no evidence that Lyme disease is transmitted from person-to-person. For example, a person cannot get infected from touching, kissing or having sex with a person who has Lyme disease.
During Pregnancy & While Breastfeeding
Lyme disease acquired during pregnancy may lead to infection of the placenta and possible stillbirth; however, no negative effects on the fetus have been found when the mother receives appropriate antibiotic treatment. There are no reports of Lyme disease transmission from breast milk.
Although no cases of Lyme disease have been linked to blood transfusion, scientists have found that the Lyme disease bacteria can live in blood that is stored for donation. As a precaution, the American Red Cross and the US Food and Drug Administration ask that persons with chronic illness due to Lyme disease do not donate blood. Lyme disease patients who have been treated with antibiotics and have recovered can donate blood beginning 12 months after the last dose of antibiotics was taken.
Although dogs and cats can get Lyme disease, there is no evidence that they spread the disease directly to their owners. However, pets can bring infected ticks into your home or yard. Consider protecting your pet, and possibly yourself, through the use of tick control products for animals.
You will not get Lyme disease from eating venison or squirrel meat, but in keeping with general food safety principles meat should always be cooked thoroughly. Note that hunting and dressing deer or squirrels may bring you into close contact with infected ticks.
There is no credible evidence that Lyme disease can be transmitted through air, food, water, or from the bites of mosquitoes, flies, fleas, or lice.
Lyme Disease Symptoms
The Lyme disease bacterium can infect several parts of the body, producing different symptoms at different times. Not all patients with Lyme disease will have all symptoms, and many of the symptoms can occur with other diseases as well. If you believe you may have Lyme disease, it is important that you consult your health care provider for proper diagnosis.
The first sign of infection is usually a circular rash called erythema migrans or EM. This rash occurs in approximately 70-80% of infected persons and begins at the site of a tick bite after a delay of 3-30 days. A distinctive feature of the rash is that it gradually expands over a period of several days, reaching up to 12 inches (30 cm) across. The center of the rash may clear as it enlarges, resulting in a bull’s-eye appearance. It may be warm but is not usually painful. Some patients develop additional EM lesions in other areas of the body after several days. Patients also experience symptoms of fatigue, chills, fever, headache, muscle and joint aches, and swollen lymph nodes. In some cases, these may be the only symptoms of infection.
Untreated, the infection may spread to other parts of the body within a few days to weeks, producing an array of discrete symptoms. These include loss of muscle tone on one or both sides of the face (called facial or “Bell’s” palsy), severe headaches and neck stiffness due to meningitis, shooting pains that may interfere with sleep, heart palpitations and dizziness due to changes in heartbeat, and pain that moves from joint to joint. Many of these symptoms will resolve, even without treatment.
After several months, approximately 60% of patients with untreated infection will begin to have intermittent bouts of arthritis, with severe joint pain and swelling. Large joints are most often affected, particularly the knees. In addition, up to 5% of untreated patients may develop chronic neurological complaints months to years after infection. These include shooting pains, numbness or tingling in the hands or feet, and problems with concentration and short-term memory.
Most cases of Lyme disease can be cured with antibiotics, especially if treatment is begun early in the course of illness. However, a small percentage of patients with Lyme disease have symptoms that last months to years after treatment with antibiotics. These symptoms can include muscle and joint pains, arthritis, cognitive defects, sleep disturbance, or fatigue. The cause of these symptoms is not known. There is some evidence that they result from an autoimmune response, in which a person’s immune system continues to respond even after the infection has been cleared.
Lyme Disease Diagnosis
Lyme disease is diagnosed based on symptoms, objective physical findings (such as erythema migrans, facial palsy, or arthritis), and a history of possible exposure to infected ticks. Validated laboratory tests can be very helpful but are not generally recommended when a patient has erythema migrans.
When making a diagnosis of Lyme disease, health care providers should consider other diseases that may cause similar illness. Not all patients with Lyme disease will develop the characteristic bull’s-eye rash, and many may not recall a tick bite. Laboratory testing is not recommended for persons who do not have symptoms of Lyme disease.
Laboratory Testing
Several forms of laboratory testing for Lyme disease are available, some of which have not been adequately validated. Most recommended tests are blood tests that measure antibodies made in response to the infection. These tests may be falsely negative in patients with early disease, but they are quite reliable for diagnosing later stages of disease.
CDC recommends a two-step process when testing blood for evidence of Lyme disease. Both steps can be done using the same blood sample.
1) The first step uses an ELISA or IFA test. These tests are designed to be very “sensitive,” meaning that almost everyone with Lyme disease, and some people who don’t have Lyme disease, will test positive. If the ELISA or IFA is negative, it is highly unlikely that the person has Lyme disease, and no further testing is recommended. If the ELISA or IFA is positive or indeterminate (sometimes called "equivocal"), a second step should be performed to confirm the results.
2) The second step uses a Western blot test. Used appropriately, this test is designed to be “specific,” meaning that it will usually be positive only if a person has been truly infected. If the Western blot is negative, it suggests that the first test was a false positive, which can occur for several reasons. Sometimes two types of Western blot are performed, “IgM” and “IgG.” Patients who are positive by IgM but not IgG should have the test repeated a few weeks later if they remain ill. If they are still positive only by IgM and have been ill longer than one month, this is likely a false positive.
CDC does not recommend testing blood by Western blot without first testing it by ELISA or IFA. Doing so increases the potential for false positive results. Such results may lead to patients being treated for Lyme disease when they don’t have it and not getting appropriate treatment for the true cause of their illness.
Other Types of Laboratory Testing
Some laboratories offer Lyme disease testing using assays whose accuracy and clinical usefulness have not been adequately established. These tests include urine antigen tests, immunofluorescent staining for cell wall-deficient forms of Borrelia burgdorferi, and lymphocyte transformation tests. In general, CDC does not recommend these tests. Patients are encouraged to ask their physicians whether their testing for Lyme disease was performed using validated methods and whether results were interpreted using appropriate guidelines.
Patients who have removed a tick often wonder if they should have it tested. In general, the identification and testing of individual ticks is not useful for deciding if a person should get antibiotics following a tick bite. Nevertheless, some state or local health departments offer tick identification and testing as a community service or for research purposes. Check with your health department; the phone number is usually found in the government pages of the telephone book.
Lyme Disease Treatment and Prognosis
The National Institutes of Health (NIH) has funded several studies on the treatment of Lyme disease. These studies have shown that most patients can be cured with a few weeks of antibiotics taken by mouth. Antibiotics commonly used for oral treatment include doxycycline, amoxicillin, or cefuroxime axetil. Patients with certain neurological or cardiac forms of illness may require intravenous treatment with drugs such as ceftriaxone or penicillin.
Patients treated with antibiotics in the early stages of the infection usually recover rapidly and completely. A few patients, particularly those diagnosed with later stages of disease, may have persistent or recurrent symptoms. These patients may benefit from a second 4-week course of therapy. Longer courses of antibiotic treatment have not been shown to be beneficial and have been linked to serious complications, including death.
Studies of women infected during pregnancy have found that there are no negative effects on the fetus if the mother receives appropriate antibiotic treatment for her Lyme disease. In general, treatment for pregnant women is similar to that for non-pregnant persons, although certain antibiotics are not used because they may affect the fetus. If in doubt, discuss treatment options with your health care provider.
For details on long term treatment trials sponsored by NIH visit the NIH Lyme Disease web site.
Treatment guidelines developed by the Infectious Diseases Society of America are available as a PDF (IDSA Guidelines for Treatment of Lyme Disease, 120KB, 114 pages).
Tuesday, January 10, 2006
New warning signs for child meningitis
• Research has highlighted three new earlier symptoms of meningitis
• New signs include cold hands and feet, mottled skin colour and leg pain
• Meningitis Trust estimates 3,000 people a year in the UK become infected
"Early diagnosis and treatment is crucial to increase the likelihood of patient survival," - Harry Burns, the chief medical officer for Scotland
Thousands of children's lives will be saved after meningitis researchers identified new early-warning signs for parents.
Until now, parents have been warned to look out for their child having a headache, stiff neck, sensitivity to light and a pinprick rash as signs of meningitis. But these symptoms can occur as little as two hours from the child becoming critically ill or even dying, leaving little time for treatment.
Now research has highlighted three new earlier symptoms of the infection - leg pain, cold hands and feet, and an abnormally pale, mottled skin colour - which together, or separately with other signs such as fever, can be indicators of the condition.
Doctors said the findings could speed up diagnosis and treatment of the disease, which the Meningitis Trust estimates infects 3,000 people a year in the UK, killing 300 - mostly children. Worldwide, the figure runs into thousands.
Dr Matthew Thompson, from Oxford University, who led the research, agreed that spotting the signs of the disease earlier could save thousands of lives.
"This disease develops so quickly in children - from the child becoming ill to being dead within 24 hours," he said. "The sooner a child can be spotted and admitted to hospital, the more likely they are to survive and do well."
Dr Thompson led a team investigating children who contracted the most dangerous, bacterial form of meningitis.
Most had only non-specific symptoms in the first four to six hours, but were close to death 24 hours after infection. Classic symptoms developed late, after an average of 13 to 22 hours. However, 72 per cent of the children developed identifiable early sepsis (infection) symptoms in just eight hours on average.
Almost three out of four parents noticed the onset of symptoms such as cold hands and feet, leg pain, and abnormal pallor up to 19 hours before their children were admitted to hospital.
In an online edition of the medical journal The Lancet, published today, the researchers wrote: "Although we must avoid undermining the importance of classic symptoms, we could substantially speed up diagnosis if the emphasis was shifted to early recognition of sepsis."
The researchers analysed patient questionnaires and scoured medical records.
Of the 448 children surveyed, all aged 16 or younger, 103 died and 345 survived. Only half the children were sent to hospital the first time they saw a doctor. In many cases, children were admitted to hospital only after an initial misdiagnosis, the research found. Generally, doctors look for the classic symptoms of rash, headaches, stiff neck, light sensitivity and impaired consciousness.
"We believe that primary-care clinicians are over-reliant on using these three symptoms to diagnose meningococcal disease in children, and that parents may be influenced by doctors or public health campaigns to seek medical advice only on the appearance of features such as a rapidly evolving rash," said Dr Thompson's team. "Moreover, clinicians and parents may be falsely reassured by the absence of these features."
Often children were seen by a local GP who had never encountered a case of meningitis outside hospital.
Dr Thompson warned that the research was in the early stages, but recommended that all parents be informed of the new warning signs.
The new warnings relate to the early signs of meningococcal disease, which can lead to meningitis as well as septicaemia and blood poisoning.
Vaccination can protect children against meningitis C, but other strains, most commonly meningitis B, kill children and adults indiscriminately.
In developed countries, meningitis and its associated illnesses are the leading infectious causes of death in children. At least four in 100,000 British children will at some time become ill with meningococcal disease.
Harry Burns, the chief medical officer for Scotland, promised to examine the research. "Early diagnosis and treatment is crucial to increase the likelihood of patient survival," he said. "We will look closely at the findings and consider carefully our advice to parents and doctors."
Olivia Giles, an Edinburgh lawyer who lost her limbs to meningococcal septicaemia in 2002, said the symptoms of blood poisoning, such as the pallor caused by blood rushing to protect vital organs, were well known.
But she said the fact it occurs earlier than classic symptoms should be stressed to all parents.
"You should be on the alert and have the information in the house or your purse so if you feel something is not normal you can look at the information and monitor the symptoms."
Ms Giles suffered from the early symptoms. "My hands and feet felt like blocks of ice and I had a horrible pallor from very early on," she said.
But it was not until 24 hours later, when the classic symptoms of meningococcal septicaemia emerged, that she was rushed to hospital. Doctors were left with no option but to amputate her hands and feet.
"Every second counts," says Ms Giles. "The minute it gets into your blood, it spreads rapidly. The sooner they give you the antibiotics, the less damage it will do."
Ms Giles, 40, who married this summer, added: "Listen to your instincts, be armed and ready to act quickly. You do not wait for a rash. The cold hands and feet is quite a warning."
Beverley Corbett, of the Meningitis Trust, which funded the research, also welcomed the research. "Diagnosis of meningococcal disease is extremely difficult in the early stages, especially when classic symptoms are not present," she said. "This is why we emphasise the importance of early symptoms and remaining vigilant."
How the disease strikes
‘Classic’ symptoms: red rash, headache, stiff neck, sensitivity to light, impaired consciousness
‘New’ symptoms: leg pain, cold hands and feet, abnormally pale or mottled skin colour
Monday, January 09, 2006
New Angiotech product designed to fight infections
By LEONARD ZEHR
Thursday, January 5, 2006 Page B2
Moving to shed its image of a one-product company, Angiotech Pharmaceuticals Inc. is expanding its line of biomaterials with a drug-coated catheter designed to reduce hospital infections.
Best known for its drug-coated stents, which prop open blocked coronary arteries, Angiotech is expected to announce today that it has begun a pivotal clinical trial in the United States of a central venous catheter (CVC) coated with the chemotherapy drug 5-Fluorouracil (5-FU).
CVCs are inserted into very ill patients to administer fluids, drugs and nutrients and withdraw blood, but they pose a risk of infection by bacteria contaminating the surface of the catheter.
Infections that reach the bloodstream can become life threatening.
According to industry estimates, there are about 3.5 million CVC procedures in the United States each year, resulting in up to 250,000 related infections and 40,000 deaths.
"This is a really big market," said Rui Avelar, a doctor and Angiotech's chief medical officer. "Infections can be devastating if they happen with hip and knee replacement surgery and implanting a pacemaker."
The company figures the annual cost of caring for patients with CVC-associated infections in the United States is about $2.3-billion (U.S.).
He said minute quantities of 5-FU can be an effective anti-bacterial agent.
"We think [this product] kills a wide variety of bugs. Because it isn't an antibiotic, we are not contributing to an increase of antibiotic resistant bacteria and we have shown that it blocks creation of biofilm."
Biofilms are an important survival tool for bacteria and are associated with antibiotic resistance in some bacterial infections.
Dr. Avelar said some CVCs are now coated with antibiotics. "If you can come up with an anti-infective that isn't an antibiotic, you are head and shoulders above everybody else."
Angiotech's one-year CVC study will enroll 600 patients at 20 clinics in the United States.
"The use of 5-FU as an anti-infective coating to prevent catheter-related bloodstream infections is innovative and unique," said Dr. Stephen Heard, lead investigator for the trial and chairman of the department of anesthesiology at the University of Massachusetts Memorial Medical Center.
For Angiotech, anti-infective products represent another step in diversifying the company's revenue base.
The company derives about 90 per cent of its revenue -- estimated at $200-million in 2005 -- from royalties on the sale of the Taxus drug-coated stent, which shaped a $5-billion-a-year industry.
"People can't see beyond the Taxus stent, even though we're doing a lot of other things," Dr. Avelar said.
It has commercialized surgical sealants to control bleeding and adhesive gels to prevent tissue adhesion after surgery.
Besides biomaterials, such as stents, to prevent scarring and a vascular wrap in clinical testing to prevent the narrowing of blood vessels, the company is developing products to accelerate healing and to keep localized tumours from recurring after surgery with a drug-coated biomaterial.
"New competitors in drug-coated stents are not likely before 2008 and our initiatives will hit in 2007 and 2008," he said.
Friday, January 06, 2006
Scientists in move over MRSA
The bug not only costs lives - the health service spends thousands of pounds on trying to keep it out of hospitals.
Pharmacists at Queen's University in Belfast say they have developed a new way of killing MRSA.
It is due to be tried out on patients as early as next year.
For many years antibiotics have been used to kill bacteria, but bugs like MRSA are resistant to antibiotics, so now scientists are turning the clock back.
Dr Ryan Donnelly, of Queen's School of Pharmacy, said: "The ability of light to kill bacteria was first discovered about 100 years ago, but because of the antibiotic era it was largely forgotten.
"It is only recently with the emergence of antibiotic-resistant bacteria that this has come to the fore again and many different groups involved in treating the likes of MRSA are trying to use this technology now."
A new gel is used to put a drug where it is needed.
Dr Paul McCarron, also of Queen's, said: "I saw my son, Niall, who was playing with kiddies' slime and I was just looking at the way it flowed between his fingers.
"I thought it had the correct flow properties, to press into a leg ulcer for example. In other words, it can be pressed in and it will slowly flow to fill the cavity.
"More importantly, whenever you remove it, it can be removed all in one go."
The gel deposits a drug into the wound or ulcer and then it is lifted out, leaving behind the drug.
The drug makes MRSA and other bugs sensitive to light - much more so than the human cells, so when a powerful light is shone on the wound, it is the bugs like MRSA that will be killed.
Dr Donnelly said: "Certainly, from the work we have done so far, we would like to think that this technology could be successful in eradicating MRSA from wounds and burns in patients in the clinical situation."
BBC Northern Ireland health correspondent Dot Kirby said tests were due to begin on patients in Belfast City Hospital in the next 12 to 18 weeks.
"If this technique does work, its cost is likely to be small," she said.
"The drugs are cheap and the light units are expected to cost around £15,000. Each light unit could serve a whole hospital."
BBC Health News
Wednesday, January 04, 2006
Mycobacterium Avium Complex (MAC)
Mycobacterium Avium Complex (MAC) is a serious illness caused by common bacteria. MAC is also known as MAI (Mycobacterium Avium Intracellulare). MAC infection can be localized (limited to one part of your body) or disseminated (spread through your whole body, sometimes called DMAC). MAC infection often occurs in the lungs, intestines, bone marrow, liver and spleen.
The bacteria that cause MAC are very common. They are found in water, soil, dust and food. Almost everyone has them in their body. A healthy immune system will control MAC, but people with weakened immune systems can develop MAC disease. Up to 50% of people with AIDS may develop MAC, especially if their CD4 cell count is below 50. MAC almost never causes disease in people with more than 100 CD4 cells.
HOW DO I KNOW IF I HAVE MAC?
The symptoms of MAC can include high fevers, chills, diarrhea, weight loss, stomach aches, fatigue, and anemia (low numbers of red blood cells). When MAC spreads in the body, it can cause blood infections, hepatitis, pneumonia, and other serious problems.
Many different opportunistic infections can cause these symptoms. Therefore, your doctor will probably check your blood, urine, or saliva to look for the bacteria that causes MAC. The sample will be tested to see what bacteria are growing in it. This process, called culturing, can take several weeks. Even if you are infected with MAC, it can be hard to find the MAC bacteria. If your CD4 cell count is less than 50, your doctor might treat you for MAC, even without a definite diagnosis. This is because MAC infection is very common but can be difficult to diagnose.
HOW IS MAC TREATED?
The MAC bacteria can mutate and develop resistance to some of the drugs used to fight it. Doctors use a combination of antibacterial drugs (antibiotics) to treat MAC. At least two drugs are used: usually azithromycin or clarithromycin plus up to three other drugs. MAC treatment must continue for life, or else the disease will return.
People react differently to anti-MAC drugs. You and your doctor may have to try different combinations before you find one that works for you with the fewest side effects.
The most common MAC drugs and their side effects are:
Amikacin (Amkin®): kidney and ear problems; taken as an injection.
Azithromycin (Zithromax®): nausea, headaches, vomiting, diarrhea; taken as capsules or intravenously.
Ciprofloxacin (Cipro® or Ciloxan®): nausea, vomiting, diarrhea; taken as tablets or intravenously.
Clarithromycin (Biaxin®): nausea, headaches, vomiting, diarrhea; taken as capsules or intravenously. Note: The maximum dose of Biaxin is 500 milligrams twice a day.
Ethambutol (Myambutol®): nausea, vomiting, vision problems.
Rifabutin (Mycobutin®): rashes, nausea, anemia. Many drug interactions.
Rifampin (Rifampicin®, Rifadin®, Rimactane®): fever, chills, muscle or bone pain; can turn urine, sweat, and saliva red-orange (may stain contact lenses); can interfere with birth control pills. Many drug interactions.
CAN MAC BE PREVENTED?
The bacteria that cause MAC are very common. It is not possible to avoid being exposed. The best way to prevent MAC is to take strong anti-HIV medications. Even if your CD4 cell count drops very low, there are drugs that can stop MAC disease from developing in up to 50% of people.
The antibiotic drugs azithromycin and clarithromycin have been used to prevent MAC. These drugs are usually prescribed for people with less than 75 CD4 cells. Combination antiretroviral therapy can make your CD4 cell count go up. If it goes over 100 and stays there for 3 months, it may be safe to stop taking medications to prevent MAC. Be sure to talk with your doctor before you stop taking any of your prescribed medications.
DRUG INTERACTION PROBLEMS
Several of the drugs used to treat MAC interact with many other drugs, including antiretroviral drugs, antifungal drugs, and birth control pills. This is especially true for rifampin, rifabutin and rifapentine. Be sure your doctor knows about all the medications that you are taking so that all possible interactions can be considered.
THE BOTTOM LINE
MAC is a serious disease caused by common bacteria. MAC can cause serious weight loss, diarrhea, and other symptoms.
If you develop MAC, you will probably be treated with azithromycin or clarithromycin plus one to three other antibiotics. You will have to continue taking these drugs for life to avoid a recurrence of MAC. People with 75 CD4 cells or less should talk with their doctors about taking drugs to prevent MAC.
Mycobacterium avium complex
Mycobacterium avium complex, or MAC, is a serious bacterial infection that HIV+ people can get. MAC is related to tuberculosis. MAC is also sometimes called MAI, which stands for Mycobacterium avium intracellulare.
MAC infection is usually found only in people with under 50 T4 cells. The symptoms of MAC can include weight loss, fevers, chills, night sweats, swollen glands, abdominal pains, diarrhea and overall weakness. MAC usually affects the intestines and inner organs first, causing liver tests to be high. Swelling and inflammation also occur.
Preventing MAC: A multi-center trial has shown that rifabutin, or Mycobutin, can nearly cut in half the rate at which people develop MAC. The drug is approved for prevention of MAC. Recent information from studies of rifabutin show that the drug may also help people live longer. Taking the drug for MAC prevention reduced the risk of dying by 14% in these studies. The most serious side effects of rifabutin are low white blood-cell counts and elevated liver enzymes. Very few people in trials had to discontinue the drug because of toxicity.
Clarithromycin (Biaxin) is the second drug to be approved for the prevention of MAC. In studies, it reduced the number of MAC infections by 69%, or over two-thirds. In a recent study people taking this drug to prevent MAC lived longer on average than those receiving placebo (a fake or dummy pill used in clinical trials to see if a treatment really works).
A third drug called azithromycin has now also been approved for preventing MAC. This drug can be taken once a week. A recent study found that azithromycin was better at preventing MAC than rifabutin. Azithromycin has not been directly compared to clarithromycin for preventing MAC.
A recent study comparing rifabutin, clarithromycin and a combination of the two drugs found clarithromycin to be clearly superior to rifabutin for the prevention of MAC.
However, clarithromycin is also thought to be the most effective treatment for MAC. Some doctors are concerned that if a person develops MAC while taking clarithromycin, the MAC infection will be resistant to the effects of the drug. This would make the infection much harder to treat. In studies, half the people that developed MAC while taking clarithromycin turned out to have MAC infections that were resistant to the drug. This might have been due to their having an undetected active MAC infection before starting preventive treatment. It is very important that you are properly tested for both active MAC and tuberculosis (TB) infection before starting any preventive treatment.
Treating MAC: The recommendations of the US Public Health Task Force on MAC are that treatment for disseminated MAC should include at least 2 drugs, one of which should be clarithromycin or azithromycin. Effective treatment should continue for life.
The Task Force also noted that many doctors use ethambutol as the second drug, and that other second, third or fourth drug(s) include: rifabutin, rifampin, ciprofloxacin and amikacin. Due to a recent study, clofazimine (trade name Lamprene) is no longer recommended as a part of MAC treatment. The study found that poor survival was associated with adding clofazimine to MAC treatment. The recommendations do not support the use of isoniazid (INH) or pyrazinamide for MAC therapy.
A recent alert from the National Institutes of Health also notes that the drug clarithromycin (Biaxin) should never be used at a dose higher than the approved dose of 500 mg twice a day.
Some cautions: If you're taking AZT, rifabutin can reduce the amount of AZT in your blood. Lower amounts of AZT would make the AZT less effective against HIV. Rifabutin also lowers the amount of clarithromycin in the blood.
The anti-fungal drug fluconazole (Diflucan) can increase the amount of rifabutin in the blood by up to 80%. Increased levels of drug in the blood may lead to greater risk of side effects.
Side effects of rifabutin can be kidney and liver damage, bone marrow suppression, rash, fever, gastrointestinal distress, and uveitis (a swelling of the eye). Early warning signs of kidney problems are decreased urination, increased thirst, or light-headedness after you stand up. Uveitis can cause eye pain, light sensitivity, redness and blurred vision. A harmless side effect of rifabutin can be an orange color that appears in the urine and other body fluids, and sometimes on the skin, too. Soft contact lenses can become permanently discolored. Side effects of clarithromycin can be diarrhea, nausea, and abnormal or metallic taste. Clarithromycin may cause severe abdominal pain at high doses. Side effects of azithromycin include mild GI symptoms such as nausea and diarrhea, dizziness, sensitivity to sunlight, and rare cases of hearing loss.
Monday, January 02, 2006
Bacterial Protein Mimics Host to Cripple Defenses
Like a wolf in sheep’s clothing, a protein from a disease-causing bacterium slips into plant cells and imitates a key host protein in order to cripple the plant’s defenses. This discovery, reported in this week’s Science Express by researchers at the Boyce Thompson Institute (BTI) for Plant Research, advances the understanding of a disease mechanism common to plants, animals, and people.
That mechanism, called programmed cell death (PCD), causes a cell to commit suicide. PCD helps organisms contain infections, nip potential cancers in the bud, and get rid of old or unneeded cells. However, runaway PCD leads to everything from unseemly spots on tomatoes to Parkinson’s and Alzheimer’s diseases.
BTI Scientist and Cornell University Professor of Plant Pathology Gregory Martin studies the interaction of Pseudomonas syringae bacteria with plants to find what determines whether a host succumbs to disease. Martin and graduate student Robert Abramovitch previously found that AvrPtoB, a protein Pseudomonas injects into plants, disables PCD in a variety of susceptible plants and in yeast (a single-celled ancestor of both plants and animals). Abramovitch and Martin compared AvrPtoB’s amino acid sequence to known proteins in other microbes and in higher organisms, but found no matches that might hint at how the protein works at the molecular level.
“We had some biochemical clues to what AvrPtoB was doing, but getting the three-dimensional crystal structure was really key,” Martin explained. To find that structure, Martin and Abramovitch worked with collaborators at Rockefeller University. The structure of AvrPtoB revealed that the protein looks very much like a ubiquitin ligase, an enzyme plant and animal cells use to attach the small protein ubiquitin to unneeded or defective proteins. Other enzymes then chew up and “recycle” the ubiquitin-tagged proteins.
To confirm that AvrPtoB was a molecular mimic, Martin and Abramovitch altered parts of the protein that correspond to crucial sites on ubiquitin ligase. These changes rendered Pseudomonas harmless to susceptible tomato plants, and made the purified protein inactive. AvrPtoB’s function is remarkable not only because its amino acid sequence is so different from other ubiquitin ligases, but also because bacteria don’t use ubiquitin to recycle their own proteins.
“An interesting question is where this protein came from,” Martin noted. “Did the bacteria steal it from a host and modify it over time, or did it evolve independently? We don’t know.”
Regardless, the discovery “helps us understand how organisms regulate cell death on a fundamental level,” Martin said. AvrPtoB provides a sophisticated tool researchers can use to knock out PCD brought on by a variety of conditions, shedding light on immunity. The protein itself or a derivative might one day be applied to control disease in crops or in people. For now, Martin and Abramovitch are working to find which proteins AvrPtoB acts on, and what role those proteins play in host PCD.
Sunday, January 01, 2006
High Level Of Antibiotic Resistance In Bacteria That Cause Food Poisoning
The bacteria, Campylobacter, causes between 5 and 14 percent of all diarrhoeal illness worldwide. The most common sources of infection are inadequately cooked meat, particularly poultry, unpasteurised milk and contaminated drinking water. The illness normally clears up after a week, without treatment. But small children and people with a weakened immune system often take antibiotics to prevent the infection from spreading to the bloodstream – and causing life threatening septicaemia.
Researchers from the Swiss Federal Veterinary Office collected raw poultry meat samples from 122 retail outlets across Switzerland and Liechtenstein, and tested their antibiotic resistance. From 415 meat samples, they isolated 91 strains of Campylobacter, 59% of which were sensitive to all the antibiotics tested.
19 strains (22%) were resistant to one antibiotic, 9 strains (10%) to two antibiotics, and 8 strains (9%) were resistant to at least three antibiotics. Two strains were resistant to five antibiotics. One of these showed resistance to ciprofloxacin, tetracycline and erythromycin – the most important antibiotics for treating Campylobacter infection in humans.
Meat was more likely to be infected with Campylobacter if it was kept chilled, rather than frozen. However, the storage conditions did not affect the frequency of antibiotic resistance in the bacteria.
Although the frequency of antibiotic resistance in Switzerland may seem high, meat produced in the country was, in fact, less likely to be infected with antibiotic resistant Campylobacter than meat produced elsewhere. Jürg Danuser commented: "The level of antibiotic resistance in Campylobacter depends on the amount of antibiotics that the chickens received. Maybe in Switzerland antibiotics were used less, so there is less resistance."
Initially, the researchers thought that poultry was more likely to be infected with antibiotic resistant bacteria if it was raised using conventional indoor farming methods rather than in an animal-friendly way. However, the majority of meat produced in an animal friendly way came from Switzerland, and this skewed the results. The researchers therefore concluded that only the country of origin and not the farming methods were likely to influence the level of antibiotic resistance in the bacteria.
Jürg Danuser discussed this: "It's possible that chickens raised in an animal-friendly way are more healthy, so they need less treatment with antibiotics and so their Campylobacter are less resistant to antibiotics. But the other side of the story is that these chickens go outside more often, so they are in more contact with wild birds, which is the reservoir of Campylobacter."
These findings are of concern for Swiss consumers, but, as mentioned above, the picture for other countries is even bleaker. The researchers wrote: "The high prevalence of Campylobacter in raw poultry meat samples found in this study agrees with data from other studies." In the USA, 90% of Campylobacter strains isolated from poultry meat had resistance to at least one, and 45% to at least two antibiotics.
Worries over antibiotic-resistant bacteria led the EU to ban the use of four antibiotics as growth promoters in chickens, in 1999. The US Food and Drug Administration (FDA) followed their lead in late 2000, by banning the use of a particular class of antibiotics called fluoroquinolones in poultry farming.
Food poisoning caused by eating Campylobacter infected poultry is on the increase. In Switzerland, 1 in 1,086 people suffer from Campylobacter infection every year; the number is approximately ten times higher in the US.
History of Risks & Threat Events to CAs and PKI
In Risk Management terms, History refers to the series of attack events that are documented and examinable, for the purpose of validating threat attack models.
This is an ongoing effort to document those events that have been reasonably seen as attacks and threats relevant to CAs and the usage of certificates. The purpose of this page is to help risk assessments validate their threat models against recorded events.
Only attacks whose existence is established by sufficiently reliable reporting are listed here. Consequences need to be identifiable, but they do not need to be against any specific party. To some extent, where we set the bar is difficult to justify because we lack a clear history of user damages, and those that do the damage are not talking. However, some history is better than none.
The index above orders events by first known deployment, which is a very uncertain measure in secret affairs. In a history, however, the date comes first, so the timeline above is updated as new information comes in.
1995 Wikipedia writes: Early versions of Netscape's SSL encryption protocol used pseudo-random quantities derived from a PRNG seeded with three variable values: the time of day, the process ID, and the parent process ID. These quantities are often relatively predictable, and so have little entropy and are less than random, and so that version of SSL was found to be insecure as a result. The problem was reported to Netscape in 1994 by Phillip Hallam-Baker, then a researcher in the CERN Web team, but was not fixed prior to release. The problem in the running code was discovered in 1995 by Ian Goldberg and David Wagner who had to reverse engineer the object code because Netscape refused to reveal the details of its random number generation (security through obscurity). That RNG was fixed in later releases (version 2 and higher) by more robust (i.e., more random and so higher entropy from an attacker's perspective) seeding. Consequences. None reported beyond media and academic embarrassment.
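To illustrate the scale of that weakness, the following sketch (Python, purely illustrative; the one-hour clock window and the 15-bit PID ranges are assumptions for illustration, not measurements of Netscape's actual code) counts how many candidate seeds an attacker would have to try at most, and why the number collapses to a searchable space:

    # Rough upper bound on the seed space of a PRNG seeded only with
    # time-of-day and two process IDs. All bounds are assumptions for
    # illustration, not measurements of the original Netscape code.
    MICROSECONDS_PER_HOUR = 3600 * 1_000_000   # attacker bounds the clock to one hour
    PID_SPACE = 2 ** 15                        # classic Unix PIDs run 1..32767
    PPID_SPACE = 2 ** 15

    candidates = MICROSECONDS_PER_HOUR * PID_SPACE * PPID_SPACE
    print(f"worst-case seed candidates: about 2^{candidates.bit_length() - 1}")

    # Goldberg and Wagner observed that the seconds are known, the microseconds
    # are partly guessable and the PIDs are often visible or predictable, so the
    # effective search was far smaller still -- nowhere near the strength the
    # session keys were supposed to have.

Even the worst case of roughly 2^61 is far below the intended key strength, and in practice the search was much smaller.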
2001. False certs. An unknown party used weaknesses in validation to get two certificates issued in the name of Microsoft.com (Guerin). The attacker was thought to be of the reputational variety: interested in embarrassing the CA, not in exploitation.
2003. Phishing. This attack bypasses the security afforded by certificates due to weaknesses in the secure browsing model (Grigg1). The existence of an unsecured mode of communication (HTTP) alongside a secure mode (HTTPS) provides an easy borders-of-the-map or downgrade attack, which user interfaces offer little resistance against. Consequences: Best guesstimate runs at around $100m per annum (FC 1343).
2006. Dual_EC. The NSA caused bad random number generators to be supplied to industry (Anatomy of a NSA intervention), possibly impacting the signing of certificates. Short story: in the early 2000s, NIST standardised the approach for generating random numbers as a Special Publication 800-90 (SP800-90). This approach included a number of standard stretchers as the third phase in a collector/mixer/stretcher design. NSA designed and pushed a particular approach based on 2 elliptic curves, which was accepted as Dual_EC within SP800-90 in 2006. ISO (International Standards Organisation) followed suit (iso18031). NSA then coordinated and/or directly influenced at least one major supplier in the USA to make Dual_EC the default for all products shipped by that supplier. In 2007, Dual_EC was shown to be suspicious. In 2013, Snowden's revelations pointed the finger at a NIST 2006 product, and within a month, NIST withdrew endorsement over Dual_EC. The supplier immediately followed. Consequences: no evidence of direct breaches as yet, only indirect reputation effects. The supplier's credibility is ruined because it did not act when the warnings were clear, and instead followed NIST's lead without question (and/or under influence of government contracts). This supplier was a major player in the CA industry. Broader questions are raised about the entire crypto supply industry of the USA (Where do we stand?), NIST's role in crypto standards, and all FIPS-certified cryptographic products, as they were typically required to use SP800-90 (Greene). That includes most HSMs used to generate CA keys and sign certificates. This is no single event; consequences are spread from as early as 2006 (shipments) to 2013 (confirmation), and probably later, as default users will take a long time to switch away from Dual_EC.
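The collector/mixer/stretcher structure mentioned above can be illustrated with a minimal hash-based toy. This is not the SP800-90 Hash_DRBG and certainly not Dual_EC; the function names and the choice of SHA-256 are assumptions for illustration only.

    import hashlib, os

    def collect() -> bytes:
        # Collector: gather raw entropy (here simply from the OS).
        return os.urandom(32)

    def mix(pool: bytes, fresh: bytes) -> bytes:
        # Mixer: fold new entropy into the pool with a hash.
        return hashlib.sha256(pool + fresh).digest()

    def stretch(pool: bytes, n_bytes: int) -> bytes:
        # Stretcher: expand the pool into arbitrarily many output bytes
        # by hashing it together with a counter.
        out, counter = b"", 0
        while len(out) < n_bytes:
            out += hashlib.sha256(pool + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n_bytes]

    pool = mix(b"\x00" * 32, collect())
    print(stretch(pool, 48).hex())

Dual_EC sat in the stretcher slot: its expansion step was built from two elliptic-curve points whose hidden relationship, if known, allows the holder to predict future output from a small amount of observed output, which is exactly why a back-doored default in that position is so valuable.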
Debian RNG. A change made to OpenSSL RNG code in 2006 dramatically reduced the entropy used to generate keys in Debian-based distributions of Linux (including Ubuntu), which were used on some desktops and many small business servers (Wikipedia). Consequences. When discovered in May 2008, rework included a massive regeneration of keys, including X.509 certificate keys, and subsequent re-issuance of certs. No resulting compromises are publicly known to date.
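A minimal sketch of why the bug was so damaging: with the entropy mixing removed, key generation was effectively seeded by the process ID alone, so an attacker can simply regenerate every possible key. The derive_key function below is a stand-in for illustration, not OpenSSL's real code; the 15-bit PID range is the classic default.

    import hashlib

    def derive_key(pid: int) -> bytes:
        # Stand-in for key generation whose only varying input is the PID,
        # mirroring the effect of the Debian patch (not OpenSSL's actual code).
        return hashlib.sha256(pid.to_bytes(2, "big")).digest()

    victim_key = derive_key(12345)          # generated on some unknown PID

    # The attacker enumerates the whole PID space: at most 32767 candidates.
    recovered_pid = next(p for p in range(1, 32768)
                         if derive_key(p) == victim_key)
    print("recovered seed PID:", recovered_pid)

This is why the practical response was blacklists of every possible weak key plus wholesale regeneration, rather than any attempt to repair affected keys.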
2007.1. Flame. Malware called Flame was signed via a Microsoft sub-CA that was subverted by means of the older MD5 algorithm (arstechnica). The sub-CA was also wrongly approved for code-signing. The MD5 signature was attacked and a forged signature placed onto a new certificate that was used to sign the malware (wikipedia1, Stevens).
"Using a technique from dubbed counter- cryptanalysis, it was found that the certificate was generated by a chosen-prefix collision attack on MD5, i.e., an attack that extends two prefixes P and P′ with suffixes S and S′ such that P∥S and P′∥S′ have the same hash value: a collision. Although the attack seems to be based on the same principles as published collision attacks, such as the attack by Wang et al. from and the attack by Stevens et al. from , it has not been published before." Fillinger)
The malware was produced by Operation OlympicGames (NSA, CIA, Israel) against Iran's nuclear project (wikipedia2, wapo), see also Stuxnet. The certificate was apparently attacked in 2009 but the malware was in circulation as early as 2007 (skywiper). Consequences: Damages to Iran are unknown as yet. As it was intelligence-gathering malware, it is hard to attribute damages directly. Microsoft revoked 3 sub-CAs in a security update affecting all distributions.
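Flame worked because a trusted chain still carried MD5 signatures long after collisions became practical. A quick way to audit for that today is to inspect the signature hash algorithm of every certificate in a chain; the sketch below uses the third-party Python cryptography package (a recent version) and a hypothetical chain.pem file.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    pem_data = open("chain.pem", "rb").read()            # hypothetical input file
    for cert in x509.load_pem_x509_certificates(pem_data):
        algo = cert.signature_hash_algorithm              # may be None (e.g. Ed25519)
        if algo is None:
            print(cert.subject.rfc4514_string(), "no separate hash algorithm")
            continue
        weak = isinstance(algo, (hashes.MD5, hashes.SHA1))
        print(cert.subject.rfc4514_string(), algo.name,
              "WEAK: collision attacks are practical" if weak else "ok")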
2007.2. Stuxnet. Two code-signing certificates, stolen from two separate chip manufacturers in Taiwan, were used to sign drivers that were installed as part of a rootkit to infect Windows machines (Krebs), (Wikipedia1). The overall goal was a highly targeted sabotage of Iranian centrifuges engaged in production of high-grade nuclear material. Stuxnet was actually two attacks with the same goal, but different methods (Langner), the first in 2007 or before, the second in 2009. Consequences: Various estimates suggested that Stuxnet succeeded in knocking out and perhaps destroying some 1000 centrifuges, estimated at 10% of Iran's centrifuge capacity (ISIS) and delaying Iran's weapon-building program by 1.5-2 years (NYT20120601.2, Langner). DEBKA suggests the damage is far more severe and sweeping than first reported, affecting and targeting thousands or even millions of significant computers (DEBKA1), and carrying on into 2012 (DEBKA2). Claims have been made that collateral damage affected other similar plants in Russia (kaspersky). The attack was part of Operation OlympicGames (NSA, CIA, Israel) (NYT, wapo, Wired, IBT/DerSpiegel, FP), see also Flame and Regin1, Regin2.
2008.1. Interface breach. One CA created a false certificate for a vendor by probing the RA of a competitor for weaknesses (Leyden). Consequences: limited to lowered reputations for all of those involved.
2008.2. Weak root. An academic group succeeded in attacking a CA with weak cryptographic protections in its certificates (Sotirov et al). This resulted in the attackers acquiring a signed certificate over two keys, one normal and one that acted as a sub-root. This gave them the ability to sign new certificates that would be accepted by major vendors. Consequences: as the root that was attacked was slated to be removed within the month, consequences were limited. Faster rollout of the new root, perhaps a few certificate re-issuances and reputation damage.
2009 Etisalat's mass surveillance attack. A CA/telco signed a false certificate for a mobile network operator, signed a firmware update, and delivered it to all mobile subscribers in its network (pcworld). The attack worked because the mobile's software accepted any update from any channel signed by any CA in the rootlist of the device (post-PRISM). The firmware update contained spyware that registered phone details (including the PIN) and forwarded all emails on demand to Etisalat (Blackberrycool-1). It was spotted within a week because the spyware was delivered through unexpected channels, and it drained the battery of the mobile. The spyware was supplied by SS8 (Blackberrycool-2), an American company specialising in legal intercepts. Consequences: 140,000 subscribers were annoyed by battery draining and having to install / run anti-virus. Compromise of secret emails, and secret PINs. Damage to reputation for Etisalat (spying on customers), SS8 (crappy code) and RIM (poor security).
2009 Duqu. Malware signed with a valid but abused certificate, from the same family as Flame and Stuxnet. Its purpose is "to be used for espionage and targeted attacks against sites such as Certificate Authorities (CAs)" (mcafee) and "one of Duqu's actions is to steal digital certificates (and corresponding private keys, as used in public-key cryptography) from attacked computers to help future viruses appear as secure software." (wikipedia1). Duqu was fingered against a Hungarian CA (The//Intercept) and operated from 2009 to 2011, when it was unearthed in a hack on a security firm in Hungary. Duqu is thought to be operated by Israel (Wired). Consequences: unknown, and difficult to quantify as the damage appears to be limited and the malware was self-cleaning.
Critical cert. A developer's laptop used to sign HP distros in 2010 was breached; malware inserted itself into the signing process, got signed, then mailed itself back home (Krebs). The malware wasn't used against HP; instead, it was discovered 4 years later by Symantec. Meanwhile the certificate expired, but the cert holder still plans to revoke it, and is expecting support issues as the revoked certificate blocks many packages. The base plan is to re-sign, but this does not apply to recovery partitions, which can reset software back to factory config. Consequences: No direct damages reported. Indirectly, it could cause chaos if packages actually take the revocation seriously.
Playstation. The ECDSA private key used to sign PlayStation games was recovered because the signatures over games reused the same nonce instead of a fresh random number for each signature (Wikipedia). Consequences. In theory, the crack means that homebrew developers can sign their own games and bypass the control monopoly over games distribution, with a consequent lowering of revenues to Sony and insider game developers. Beyond that, no direct damages are known.
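The break illustrates a general property of (EC)DSA: if the per-signature nonce k is ever repeated, two signatures are enough to leak the private key. The sketch below works the algebra with toy values (the private key, nonce, r and message hashes are all made-up numbers) and manipulates the signature equations directly rather than doing real curve arithmetic, which is all that is needed to see the failure.

    # ECDSA signing over a group of order n:  s = k^-1 * (z + r*d)  mod n
    # If two signatures share the same nonce k (and hence the same r),
    # simple modular algebra recovers first k and then the private key d.
    # Needs Python 3.8+ for pow(x, -1, n).
    n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order

    d = 0x1CD555EDBEEF1234      # toy private key (made up)
    k = 0x0BAD5EED              # the reused nonce
    r = 0x5A1777E30C0FFEE       # real ECDSA derives r from k*G; any nonzero value fits the algebra
    z1, z2 = 0x1111, 0x2222     # hashes of two different signed messages (made up)

    s1 = pow(k, -1, n) * (z1 + r * d) % n
    s2 = pow(k, -1, n) * (z2 + r * d) % n

    # Recovery using only (r, s1, z1) and (r, s2, z2):
    k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
    d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n

    assert (k_rec, d_rec) == (k, d)
    print("recovered private key:", hex(d_rec))

Sony's signing code reportedly used a constant in place of a fresh random k for every game, so any two signatures sufficed.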
2010 Regin. GCHQ attacked Belgacom with a spearphishing QuantumInsert attack to insert the Regin malware (TheIntercept, f-secure). The malware was signed, but the certs were merely pretending to be Microsoft code-signing certs; presumably people would be tricked into thinking these were real certs and that Microsoft protection was just buggy. Regin was fingered as part of the 5eyes hack-tool kit qwerty. Consequences. Internal systems were breached and customers' private communications were grabbed. "Belgacom invested several million dollars in its efforts to clean-up its systems and beef-up its security after the attack. However, [some] believe parts of the GCHQ malware were never fully removed."
2010 APT RSA-RI provided a case study of a multiple-APT (Advanced Persistent Threat) attack on a company that traced back to 2010. Two trojans were found to be validly digitally signed with X.509 certificates (Case Study). "Digitally signed malware is rare, and implies a higher level of sophistication from an adversary." Consequences. The case study revealed no consequences, which weakened the effect of the report.
2011.1. False certs. A claimed lone Iranian attacker, ichsunx2, breached approximately 4 CAs. His best success was to use weaknesses in a Registration Authority to acquire 9 certificates for several high-profile communications sites (Zetter). It was claimed that the attacker operated under the umbrella of the Iranian state, but no evidence for that was forthcoming. Consequences. No known user damages. Browser vendors revoked the certificates by patch (ioerror).
The same CA also suffered a "compromise of its ‘UTN-USERFirst-Hardware’ certificate", which directly signed 85k certificates including 50 intermediate CAs, bringing it up to a total market impact of 120k domains (WEIS 2013 Asghari et al). Consequences: unknown; the total market impact is interesting but not germane as yet.
2011.2. DigiNotar. The same attacker, ichsunx2, breached a Dutch CA and issued 531 certificates (wikipedia). The CA’s false certs were first discovered in an attack on Google’s gmail service, suggested to be directed against political activists opposed to the Iranian government. Controls within the CA were shown to be grossly weak in a report by an independent security auditor (FOX-IT1, FOX-IT3, also see enisa report), and the CA filed for bankruptcy protection (perhaps for that reason). Vendors discovered that revocation was not an option, and issued new browsers that blocked the CA in code. Consequences: Rework by google, and vendor-coordinated re-issuance of software to all browser users. Potential for loss of confidentiality of activists opposed to the Iranian government. Many Netherlands government agencies had to replace their certificates. A tantalising hint from the Brazil case suggests that the CA may have been hacked by the NSA. GCHQ reported MITMs against google (DerSpiegel-GCHQ).
2011.3. Certificate Stealing. Three separate incidents indicate that certificates are now worth stealing. Infostealer.Nimkey is malware distributed through traditional spam/phishing channels (Yahoo). Once it infects, it searches the victim computer for keys and sends them to a server in China. Duqu is a variant of Stuxnet that used a stolen code-signing cert to install drivers (Wikipedia2). From inspection of the malware, the attack was variously quoted as IP/data collection/espionage, stealing keys, or attacking CAs (McAfee). Identity fraud of some form was used to get a valid certificate issued in the name of a company by intercepting the verification communications to that company's employee (F-secure). Consequences: Re-issuance of certificates and reviews of security. In none of these 3 cases were any direct damages assessed.
2011.4. Spear Phishing. A group of 9 certificates were identified in targetted malware injection attacks (FOX-IT2). As the certificates were all alleged to be only 512 bits, the conjecture is that new private keys were crunched for them. Consequences: One public-facing sub-CA in Malaysia was dropped, 3 other CAs re-issued some certs and reviewed controls. No known customer breaches, but probably replacement certs for the holders (minor).
2011.5. Website hack. A captive CA for a telecom had its website hacked, and subscriber information and private IP compromised (Goodin). Attacker was listed as a hacker who tipped off the media, claiming not to be the first. Parent telecom shut down the website.
2012.1. Weak Key scan. Two academic groups independently scanned the net for all published certificates (6-11 million examples) and analysed them (Heninger et al) and (Lenstra et al). They found that 1% of certificates shared keys with other certificates, and 0.4% were constructed with such poor parameters that the secret keys could be revealed. The keys were traced to 3 popular hardware devices with one popular software package at their core that mishandled the random numbers at key generation (Wikipedia). Consequences: Damages have not been assessed but would involve some rework and reputational loss by the suppliers of these devices. Gain in reputation for the academic groups.
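To make the arithmetic behind these scans concrete, here is a minimal sketch (toy primes, Python) of the shared-factor weakness: if two published moduli were built from a repeated prime, a plain GCD over public data reveals both private keys. The real scans used batched GCD algorithms over millions of moduli; nothing below comes from the papers themselves.

    # Minimal sketch of the shared-factor weakness behind the 2012 key scans.
    # Toy primes only; real scans apply a batched GCD over millions of moduli.
    from math import gcd

    p, q1, q2 = 61, 53, 67          # two devices reuse the prime p from a poor RNG
    n1, n2 = p * q1, p * q2         # their published RSA moduli

    shared = gcd(n1, n2)            # anyone can compute this from public data
    if shared > 1:
        # both private keys fall out immediately
        print("shared prime:", shared)
        print("factors of n1:", shared, n1 // shared)
        print("factors of n2:", shared, n2 // shared)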
2012.2. CA breached contract against MITMs. A CA announced that it had issued a subroot to a company for the purposes of intercepting the secure communications of its employees (SpiderLabs). This is contrary to contract with vendors and industry compact. At some moment of clarity, the CA decided to withdraw the subroot. Consequences: loss or damage to that customer due to contract withdrawal. Such contracts have been estimated to cost $50k. Destruction of the equipment concerned, maybe $10k. Loss of reputation to that CA, which specialises in providing services to US government agencies. Potential for delisting the CA concerned in vendors' trust lists which could be a bankruptcy event (TheRegister). Loss of time at vendors which debated the appropriate response.
2012.4 In the vendor's words: "We recently received two malicious utilities that appeared to be digitally signed using a valid [Vendor] code signing certificate. The discovery of these utilities was isolated to a single source. As soon as we verified the signatures, we immediately decommissioned the existing [Vendor] code signing infrastructure and initiated a forensics investigation to determine how these signatures were created. We have identified a compromised build server with access to the [vendor] code signing infrastructure. We are proceeding with plans to revoke the certificate and publish updates for existing [vendor] software signed using the impacted certificate. ...." If nothing else, kudos for a model disclosure!
2012.5 A CA issued 2 intermediate roots to two separate customers on 8th August 2011 (Mozilla mail/Mert Özarar). The process that allowed this to happen was discovered later on and fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com (DarkReading). These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" (wired). Actions. Vendors revoked the intermediates (microsoft, google, Mozilla). Damages. Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.
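For illustration, a rough sketch of the kind of pin check that caught these certs: compare what the server actually presents against a fingerprint recorded earlier. The hostname and expected hash below are placeholders, and production pinning (as in Chrome) pins SPKI hashes within the chain rather than hashing the leaf certificate.

    # Rough sketch of a certificate pin check (placeholder host and pin value).
    # Real pinning, as in Chrome, pins SPKI hashes in the chain, not the leaf PEM.
    import hashlib, ssl

    HOST, PORT = "www.example.com", 443               # placeholder target
    EXPECTED_SHA256 = "<sha256 hex recorded on an earlier, trusted connection>"

    pem = ssl.get_server_certificate((HOST, PORT))    # fetch the served leaf cert
    der = ssl.PEM_cert_to_DER_cert(pem)
    seen = hashlib.sha256(der).hexdigest()

    if seen != EXPECTED_SHA256:
        print("WARNING: served certificate does not match the recorded pin:", seen)
    else:
        print("pin OK")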
2012.6 Symantec writes: "the VOHO attack campaign of June, 2012. What was particularly interesting about this attack was the use of the watering hole attack technique and the compromise of B9's trusted file signing infrastructure. The VOHO campaign was ultimately targeting US defense contractors whose systems were protected by B9's trust-based protection software but when the Hidden Lynx attackers' progress was blocked by this obstacle, they reconsidered their options and found that the best way around the protection was to compromise the heart of the protection system itself and subvert it for their own purpose. This is exactly what they did when they diverted their attention to B9 and breached their systems. Once breached, the attackers quickly found their way into the file signing infrastructure that was the foundation of the B9 protection model, they then used this system to sign a number of malware files and then these files were used in turn to compromise the true intended targets."
2012.7 A security provider's code-signing cert was compromised through a misconfigured VM, and the cert was used to sign dozens of malware samples. This was part of a targeted attack on a particular unnamed segment of industry, presumably one served by the provider. Consequences. "compromised specific Websites (a watering hole style attack...). We believe the attackers inserted a malicious Java applet onto those sites that used a vulnerability in Java to deliver additional malicious files, including files signed by the compromised certificate."
2013.1 Brazil. The Ministry of Mines and Energy was attacked by the 5E group of intelligence agencies, led by Canada's CSEC, in what seems to be a state-industrial espionage campaign (globo).
...the author of the presentation makes the next steps very clear: among the actions suggested is a joint operation with a section of the American NSA, TAO, which is the special cyberspy taskforce, for an invasion known as “Man on the Side”. All incoming and outgoing communications in the network can be copied, but not altered. It’s like working on a computer with someone looking over your shoulder.
A vague accusation was previously made on Brazilian TV that certificate-based MITM attacks may have been made against many overseas corporations by the NSA:
Now, documents published by Fantastico appear to show that, far from “cracking” SSL encryption—a commonly used protocol that shows up in your browser as HTTPS—the spy agencies have been forced to resort to so-called “man-in-the-middle” attacks to circumvent the encryption by impersonating security certificates in order to intercept data. ... However, in some cases GCHQ and the NSA appear to have taken a more aggressive and controversial route—on at least one occasion bypassing the need to approach Google directly by performing a man-in-the-middle attack to impersonate Google security certificates. ... One document published by Fantastico, apparently taken from an NSA presentation that also contains some GCHQ slides, describes “how the attack was done” to apparently snoop on SSL traffic. The document illustrates with a diagram how one of the agencies appears to have hacked into a target’s Internet router and covertly redirected targeted Google traffic using a fake security certificate so it could intercept the information in unencrypted format.
The attack happened, but the role of certificates is obscured by the fog of journalism (ElReg). Consequences: uncertain; the purpose is economic or industrial espionage. The aim of the Canadian agency was to "Discover contacts of my target", the target being the Ministry of Mines and Energy of Brazil.
2013.2 Android's SecureRandom. The default Java random number generator for all Android was found to be weak. This led to breaches of ECDSA keys, as signatures were made without sufficient randomness (ElReg). Likely, this would also impact any client certificates or similar cert-protected operations on Androids. Consequences. At least one Bitcoin theft was rumoured, but more details are needed here... No evidence of PKI breaches as yet, probably because Android is more client-side and PKI has concentrated on server-side keys.
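The textbook consequence of a repeated nonce is worth spelling out, since it is exactly what a weak SecureRandom enables: two ECDSA signatures made with the same nonce leak the private key by simple algebra. A sketch of that algebra follows, with a toy prime standing in for the curve order and r simply treated as a given value (on a real curve it is the x-coordinate of the nonce point); the numbers are arbitrary.

    # Textbook ECDSA nonce-reuse recovery, with a toy prime standing in for the
    # group order.  r is normally derived from k*G on the curve; here it is just
    # a value the algebra treats as given, which is all the attack needs.
    n = 2**127 - 1                          # toy prime modulus (stand-in for curve order)
    d, k, r = 123456789, 987654321, 55555   # secret key, repeated nonce, shared r
    z1, z2 = 111111, 222222                 # two message hashes signed with the same k

    inv = lambda x: pow(x, -1, n)           # modular inverse (Python 3.8+)
    s1 = inv(k) * (z1 + r * d) % n          # the two observed signatures
    s2 = inv(k) * (z2 + r * d) % n

    k_rec = (z1 - z2) * inv(s1 - s2) % n    # nonce falls out of the difference
    d_rec = (s1 * k_rec - z1) * inv(r) % n  # then the private key follows
    assert (k_rec, d_rec) == (k, d)
    print("recovered nonce and private key:", k_rec, d_rec)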
2013.3 Lavabit. The FBI subpoenaed the SSL encryption key of a small email provider (Register). While stating they were only interested in tracking one customer (Snowden), it gave them access to all customers, and was probably an illegally broad request, not particularised. "On Aug. 5, Judge Claude M. Hilton ordered a $5,000-a-day fine until Mr. Levison produced the keys in electronic form. Mr. Levison's lawyer, Jesse R. Binnall, appealed both the order to turn over the keys and the fine. After two days, Mr. Levison gave in, turning over the digital keys — and simultaneously closing his e-mail service, apologizing to customers on his site. That double maneuver, a prosecutor later told his lawyer, fell just short of a criminal act" (NYT). Consequences: loss of an entire business. Compromise of the entire customer base's secret communications, as the key has probably now gone to the NSA, and we know the NSA escrows encrypted traffic for future decryption. Indirect damage to the reputation of all SSL sites, as it is clear that the US courts will overreach to demand keys (something that the UK's RIP Act permitted but was apparently never used).
2013.4 Signed Trojans. In two separate incidents, trojans were discovered to be signed by valid certificates issued by the same CA (1, 2). In both cases, the trojans seemed to be attacks on online banking, and one cert had signed 70b variants of trojans. The claimed companies for the certificates, one in Brazil, the other in France, did not exist, although it looks like the Brazilian name was registered as a company (whatever that means). Also see #WildNeutron below. Consequences: revocation and press reports (embarrassment).
2013.5 Fibre Tapping. Over the last several years, a major public email and phone supplier put SSL protection on by default for all users of its email and other services. The NSA bypassed the protections of SSL by tapping unencrypted links between data centers (WaPo, FC1). The graphic reveals the story better than words. Consequences: potential breach of all and any services that might have been exposed over the unencrypted links, including access capabilities, intellectual property, financial data. Reports of entire databases, etc., being compromised in copying make this breach far bigger than the credit card hacking breaches, possibly the largest corporate breach to date. For the future, encrypted links seem more likely, and more end-to-end security models will likely be used. Reputation for security has taken a big hit, as the encryption of offsite data and the tapping of fibre is a widely known threat (FC2).
2013.6 ANSSI. ANSSI, the French cyberdefense agency (their description), runs a national government CA which issued an intermediate CA cert to the French Ministry of Finance, which went on to issue several fraudulent certificates for Google domains (google, SSI). The usage was apparently to decrypt SSL traffic within the ministry. The intermediates were revoked by the CA. Consequences. This should result in revocation of the top-level CA by browsers, as several warnings have been shot in this direction. However, it is unlikely that they will do so; the CAs exercise considerable pressure in secret over the vendors. As this is a top-level western-powers government CA, likely a compromise will be found (ElReg). Damages likely reduced to embarrassment and annoyance (bugzilla).
2014.1 Heartbleed. Researchers discovered and announced a flaw in the OpenSSL implementation of the TLS protocol which, in some recent versions, allowed an attacker to access private data including keys from affected clients and servers. This in effect compromised (made uncertain) all keys in webservers running the buggy versions, as well as opening up client certificates to compromise. The attack did not produce any diagnostics that would differentiate it, therefore detection was difficult. The only action is to upgrade OpenSSL and regenerate keys and certs where affected. After three weeks, 73% of certs remained to be reissued and 87% were yet to be revoked (Dumitras). Consequences: Massive re-issuance and re-install exercise for all OpenSSL sites. CRA reported a credible exploit against over 900 customers but no damages as yet. Schneier claimed 6 weeks after: "In the end, the actual damage was also minimal, although the expense of restoring security was great." Costs in rework have been suggested as high as $500m (FC). CHS lost 4.5m records. Research shows revocation is unreliable (SecurePKI), which has been theoretically observed countless times.
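As a rough first-pass check only (distributions backport fixes without changing version strings, so this is not conclusive evidence either way), one can at least see which OpenSSL build a runtime links against before deciding on the upgrade-and-reissue work:

    # First-pass check of the OpenSSL build this Python runtime links against.
    # Not conclusive: vendors backport fixes, so a version string alone proves
    # little.  The real remediation was patch, regenerate keys, reissue/revoke.
    import ssl

    print(ssl.OPENSSL_VERSION)        # e.g. builds in the 1.0.1 .. 1.0.1f range were affected
    print(ssl.OPENSSL_VERSION_INFO)   # numeric tuple, easier to compare in scripts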
2014.2 Review. The discovery of Heartbleed above triggered a widespread review of the common cryptographic libraries and processes employed for TLS/SSL; other suppliers reported similar finds (goto fail), as well as more for OpenSSL (CVE-2014-0224). Good history of SSL/TLS. Although not an attack on CAs nor PKI, it does break open the customer by attacking near to the certs. Consequences: No damages reported as yet. Gotofail and Poodle may have been implicated by review. Breaches like this (Heartbleed, Lucky13, gotofail) are setting an overall ceiling on expectations of security using the secure browsing stack of HTTPS and TLS, and causing rethinks at all levels.
2014.3 Indian CA. An intermediate CA was compromised in India and several false certs were issued for google sites and also Yahoo. Google took the unusual step of restricting the certs under that CA to Indian domains. Microsoft's auto-update system revoked the certs. No damages reported.
2014.4 Facebook analysis "we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attack on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware."
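A sketch in the spirit of that study: open a TLS connection and look at who issued the certificate actually presented to this client; an unexpected issuer (antivirus product, corporate filter, unknown CA) is the tell-tale of interception. The host below is a placeholder, and the real study of course gathered this data at scale from within the site's own pages.

    # Look at who issued the certificate presented to this client.  An issuer
    # that is not the site's expected CA suggests a forged cert / interception.
    import socket, ssl

    HOST = "www.example.com"                          # placeholder site to test
    ctx = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()                  # parsed leaf certificate
            issuer = dict(x[0] for x in cert["issuer"])
            print("issuer:", issuer.get("organizationName"), "/", issuer.get("commonName"))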
2014.5 Poodle. The POODLE attack is a downgrade attack from TLS to SSL 3.0 which then breaks open the packet using an attack on the weak padding. This allows older servers to be broken. Cloudflare reports low levels of SSL 3.0 usage -- 0.65% of all HTTPS. Mozilla and Chrome both announced intent to drop SSL 3.0 entirely in the short term (within 2 months), which may disrupt some laggards. Later, it was discovered that TLS 1.0 and 1.1 were also susceptible to POODLE if the padding wasn't checked correctly, which was detected on around 3-4% of scanned servers. Consequences: No damages reported as yet.
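On the client side the mitigation is simply to refuse to negotiate SSL 3.0. A minimal sketch in Python is below; recent ssl.create_default_context() already disables SSLv2/v3, the explicit flags are shown for older code paths, and servers need the analogous setting in their own configuration.

    # Minimal client-side POODLE mitigation: refuse to negotiate SSLv2/SSLv3.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)    # "negotiate the best available"
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

    safe_ctx = ssl.create_default_context()      # preferred: sane defaults, cert checks on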
2014.6 Emmental. The Emmental attack consists of a man-in-the-browser trojan introduced onto the user's platform via phishing, which corrupts both the DNS resolver and the platform's CA root list. It then proceeds to pop up warnings to trick the user into installing matching malware on the user's mobile phone, which is listed as the second channel. It targeted 34 banks in Europe. Consequences. None reported in the paper.
2014.7 DarkHotel. Kaspersky published details of a 4-year operation called DarkHotel that attacked high-profile guests at hotels. By tricking the user and/or laptop into doing an upgrade, trojans were inserted. The updates were signed by somewhat-valid RSA keys, and Kaspersky strongly suggests that the majority of keys were factored / forged 512-bit keys, whereas some longer ones were stolen. Consequences. The somewhat vague description suggests that various executives had their corporate intellectual property stripped. As it was highly targeted, and a very expensive attack, this suggests defence companies or state secrets.
2014.8 Guardians of Peace. Over the year, Sony Pictures Entertainment got hacked by "Guardians of Peace", originally thought to be North Korean interests (meaning, probably state-endorsed cyberwarfare units) upset at the release of a politically sensitive comedy, but later indicated as a possible inside job. Entry may have been by spear phishing (ArsTechnica). Within the month, malware appeared signed by Sony certs. GoP released file dumps with a selection of business certs (banking, infra, servers) and "a Sony Corp. CA 2 'root' certificate - a digital certificate issued by Sony's corporate certificate authority to Sony Pictures to be used in creating server certificates for Sony's Information Systems Service (ISS) infrastructure. This may have been used to create the Sony Pictures certificate that was used to sign a later version of the malware that took the company's computers offline."
Consequences. SPE itself filed: "The current quarter is expected to include approximately $15m in investigation and remediation costs" and "the grand total could be $35 million for the fiscal year ending March 31,... 'The figure primarily covers costs such as those associated with restoring our financial and IT systems.'" (RT). Damages appear to have been mitigated (transferred): "We had insurance against cyber-attacks and will be able to recover a significant portion of the costs." Early estimates to SPE included an estimated $90m against fully pulling the movie, although it made $15m when released on the net, perhaps evidencing an unexpected positive consequence (negative damage) of the hack. Several other unreleased films such as Fury were pushed out onto filesharing networks, dampening their revenue prospects.
2014.10 flyingPig. GCHQ runs a scanning service called Flying Pig that analyses SSL attacks (DerSpiegel-GCHQ):
- Multiple examples of FIS (foreign intelligence service?) data exfiltration using SSL have been found using FLYING PIG
- In particular, certificates related to LEGION JADE, LEGION RUBY, and MAKERSMARK activity were found on FLYING PIG using known signatures
- These were then used to find previously unknown servers involved in exfiltration from US companies.
- FLYING PIG has also been used to identify events involving a mail server used by Russian intelligence.
2014? Steel Mill. The BSI in Germany reported (BSI Report) that attackers gained access to the steel mill through the plant's business network, then successively worked their way into production networks to access systems controlling plant equipment. The attackers infiltrated the corporate network using a spear-phishing attack—sending targeted email that appears to come from a trusted source in order to trick the recipient into opening a malicious attachment or visiting a malicious web site where malware is downloaded to their computer. Once the attackers got a foothold on one system, they were able to explore the company's networks, eventually compromising a "multitude" of systems, including industrial components on the production network. "Failures accumulated in individual control components or entire systems," the report notes. As a result, the plant was "unable to shut down a blast furnace in a regulated manner", which resulted in "massive damage to the system." According to the report, the attackers appeared to possess advanced knowledge of industrial control systems. "The know-how of the attacker was very pronounced not only in conventional IT security but extended to detailed knowledge of applied industrial controls and production processes," the report says. (wired) Date unknown.
2014. Superfish. Superfish was a program (and company) and root certificate installed on Lenovo laptops shipped September to December 2014 (forbes). The root certificate was a single key/cert pair for all installs, and was inserted into the system's root list. The Superfish program then MITM'd all the user's traffic and injected 'applicable' adverts into the browser's Google search results. The root key was extracted, and because it was the same on all installs, holders of the root could now MITM any Lenovos that have not been cleansed of the malware. Worse, Superfish also rewrote any certificate that appeared bad to appear good to the client, thus making any such system MITMable by any outside agent (Filippo). Epic fail. Consequences. Remedial work includes changes in procedure at Lenovo, and cleansing of an unknown number of users: 16m laptops shipped over a 4-month period, or 40 million users as claimed by Pinhas of Superfish (Superfish).
2015. Duqu 2.0. Kaspersky found a highly sophisticated penetration of its own systems, which also penetrated various international events of diplomatic significance (ArsTechnica). The malware lived in memory only and was self-healing; it relied on a zero-day to install code into kernels that bypassed the certificate-checking mechanisms of Windows. The point of entry was suspected to be a spear-phishing attack on a regional-office non-technical staff member using a zero-day (Wired, Kaspersky). It was capable of bypassing more than a dozen anti-virus products. It was fingered as being an update of Duqu above, and signs pointed at it being from Israel. Kudos to Kaspersky for coming clean on this as per normal as soon as the zero-day was patched. Consequences. A lot of watching and cleaning by the company, and possibly loss of secrets. Kaspersky estimated that the budget for the attack operation was $10m, and the entire framework or platform cost $50m (FAQ).
2015. CNNIC. The national CA in China issued an intermediate root cert to a company, MSC Holdings, under contract for storage in an HSM and only for the company's own domains. The company instead installed it in an SSL-MITM proxy that MITMed all users over several Google domains. Google became aware via Certificate Transparency and raised the alarm (google). Google and Mozilla determined that CNNIC had been negligent because it had "delegated their substantial authority to an organization not fit to hold it." Consequences. No user damage has been claimed as yet. The intermediate was revoked at browser level. CNNIC will be de-listed from the root lists for Mozilla (mozilla) and Chrome, but not Apple nor Microsoft. CNNIC is invited to do remedial work and then re-apply.
20xx. Intelligence Community. (More a threat actor than a single event.) Ross Anderson published a good summary of everything we can conclude from the Snowden revelations about the NSA and friends (up to 65) attacking industry and people (Anderson). Primary threats to the CA business would be: key theft, implants, bad RNGs, supply chain, insiders. Primary threats to users would be mass surveillance, leakage to police, parallel construction, poor usability of cryptographic tools. Breaching the cryptography directly remains a theoretical threat at best.
2015. Wild Neutron used a stolen cert to sign code for installation on victim platforms, as well as a Flash zero-day. This malware was active in 2011 and then 2013, during which it attacked the big user-facing IT corps (SecureList). Consequences. Unknown at this stage, but the target list suggests high-level economic attack motivations (law, investment, bitcoin, M&A, IT, healthcare, real estate).
2015 CIN - Corruptor-Injector Networks. CINs appear to be pwned routers that are capable of presenting entire sites in facade, including deep interception and rewriting of certificate-based security (cryptostorm). Cryptostorm claims to have intercepted the trace of activity of a CIN and is reverse-engineering it. So far the complexity of the attack is beating them, but it seems to involve manipulation of all aspects of the SSL/certificate workflow on a massive scale. The cost of mounting such infrastructure must be high - in the many millions - and indicates state-level long-term support - APTs. Consequences. If carried out at scale, users of corrupted routers are likely to be completely pwned. It is likely that the attacks will be targeted and, if state-level, may be limited to those with state enemies, although given recent posturing, that may be cold comfort.
2015(?) Weaponised CRLs. Cryptostorm reports that CRLs are no longer being used seriously by many players, and they are pursuing evidence that a CRL may have been used in the Stux/Flame/Duqu attacks by USA/Israel against Iranian nuclear plants (Cryptohaven). Another group had discovered DOS attacks using CRLs. As a result, cryptostorm are now blocking CRLs internally. Note that these reports are preliminary, with little confirming detail. Consequences. Unravelling the utility of the CRL system is problematic for the integrity of the CA infrastructure. Without an after-the-event ability to reach the users, the CA loses one leg of its business case. Sites are switching to OCSP and similar but if the site is now delivering a liveness record, the CA is no longer in its dominating position in the marketplace.
2015 Ashley Madison. A website business holding 30 million accounts for people searching for extra-marital adventure was hacked completely. No known cause of the breach as yet. The hackers were on either an extortion mission or a judgmental mission; either way their demands were not met, so the entire dataset was posted onto Tor. Consequences. The business is almost certainly dead. Also there is a large externality as people are discovered to have been on the site. In few cases is precise detail easily available as yet, but all are tarred with the same brush. Mass anguish (Hunt) and an expectation of many divorces, professional repercussions, etc.
2015 Office of Personnel Management. The OPM, a USA federal agency responsible for most background checks for government and military (not spooks), was hacked. Security was lax. Approximately all background checks and personnel files are compromised (breached). The USA government (MSM) points the finger at China, but no evidence was provided. NB, as with AshMad above, there is no known evidence that this was a CA-related breach, but it is included for its headline nature. Damages. Unbelievably huge. This trove lays open the entire personnel of the USA government and military to aggressive enemy spying operations; it is literally the largest known spying coup ever. In direct cost terms, the USA Department of Defense costed it at $132m (DefenseOne, Appropriations, Threatpost), but this does not cover non-DoD agencies (e.g., VetAffairs will cost $5m) and only covers the industry-standard "compliance" cost of monitoring for credit abuse, etc. In other words, a joke.
2015 Accidental Issuance. A major CA accidentally issued and released a cert for a major website, which was spotted via Certificate Transparency logs within Chrome (CA blog, website blog). Immediately spotted and revoked, the cert never left their control. Damages. Incident report. Embarrassment of negotiating with browsers for a pass, with the website concerned, and some employees fired. An updated report indicates 23 certs mis-issued and 164 instances of dodgy issuance in testing, covering over 76 domain owners.
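The monitoring that Certificate Transparency enables can be sketched with a query to a public CT search front-end. crt.sh is assumed here and its query interface may change, so treat the URL format as an assumption; the sketch only prints what comes back rather than relying on particular field names.

    # Ask a public CT search front-end what has been logged for a domain, and
    # eyeball anything unexpected.  URL format is an assumption (crt.sh, as
    # observed at time of writing) and may change.
    import json, urllib.request

    domain = "example.com"                                   # placeholder domain
    url = f"https://crt.sh/?q={domain}&output=json"

    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)

    print(len(entries), "logged certificates found")
    if entries:
        print("fields available per entry:", sorted(entries[0].keys()))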
2015 Misuse under contract. From a vague article by Forbes: "Alibaba's 25pp marketplace doesn't need the phone to be unlocked to install on iOS. It flouts Apple security rules in other ways. FORBES has learned the store breaks Apple policy by using an Enterprise Certificate to install itself on users' phones. These certificates are supposed to be used by businesses to disseminate bespoke apps within the confines of the corporate network and are strictly not for commercial use. Apple could simply revoke the certificate, but it would be easy for Alibaba's subsidiary to obtain a new one and start breaking the rules all over again" (Forbes). Such enterprise certificates were complicit in the spread of viruses delivering malware. The article also talks about jailbreaking, which involves replacing the Apple root control chain.
2015 Dell notebooks with rogue root CA. Dell delivered new XPS 15 notebooks with a root CA certificate, including its private key, installed in the Windows trust store (found by rotorcowboy). "[A] network attacker could use this CA do sign his or her own fake certificates for use on real websites and an affected Dell user would be none the wiser unless they happened to check the website's certificate chain. This CA could also be used to sign code to run on people's machines." On deletion, the root CA is re-installed in the Windows trust store after every reboot. Just one day later a second bad root CA (DSDTestProvider) was found, delivered with the Dell System Detect Tool, also including its private key. Damages. Besides Dell's reputational damage, malware using these CAs was spotted in the wild, as Symantec reports.
2016 Supply Chain. Efforts to deprecate old algorithms have shown how hard it is to deal with the supply chain problem. SHA1 has been in deprecation mode since 2000 or so due to SHA2 being standardised, yet the cryptography supply chain continues to fight back. Mozilla backed away from blocking SHA1-signed certs because corporate MITM boxes were not updated with newer root certificates (MozoMITM). CAcert itself has had trouble getting its own roots re-signed with SHA2 due to the costs of bringing people together and underlying internal strife blocking new works (proof?).
2016 KeRanger. Apple Mac computers were targeted with ransomware apparently signed by a developer's certificate, which was then revoked by Apple (Reuters). The malware was delivered over a popular download application, Transmission. Damages. None as yet, but a suggestion that lock-downs don't start until Monday...
2016 DROWN. Older servers and routers supporting SSLv2 can be attacked with an oracle attack which reveals keys (DROWN), which can then be used to attack all recorded sessions that aren't protected by forward secrecy (PFS). An upgrade for all users is suggested. Damages. Widespread upgrade costs predicted.
2016 Android apps accept any cert. Many apps on Android will accept any cert presented, thus making the system vulnerable to MITM. No warning is provided to the user that they are being MITM'd (pgut001). This potentially explains why Androids work with public WiFis that routinely MITM, whereas laptops will reject them. Worse: if the user attempts to install their own certs, Android warns the user eternally, whereas if an app installs its own certs, no permission or warning is forthcoming (HowToGeek, Google groups).
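The underlying mistake is application code that switches certificate verification off. A Python analogue of the same pattern is sketched below for illustration only (the Android flaw lives in apps' Java TrustManager code, not in Python), set against the correct default.

    # Python analogue of the "accept any certificate" mistake described above:
    # turning off verification silently accepts a MITM's forged certificate.
    import ssl

    # The anti-pattern (what the broken apps effectively do):
    insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE      # any cert, from anyone, is accepted

    # The correct default: verify the chain and the hostname.
    secure = ssl.create_default_context()     # CERT_REQUIRED + hostname checking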
2016 Carbanak. A criminal APT group launched spear-phishing attacks on employees of banks with attachments, some signed by valid and revoked certificates of major CAs, which loaded malware onto users' computers. From there, the malware hopped into banks and other institutions to raid monetary accounts (Kaspersky). Damages. According to victims and the law enforcement agencies (LEAs) involved in the investigation, this could result in cumulative losses of up to 1 billion USD.
2016 Branding. One CA attacked another by attempting to trademark the brands used by the second CA (Let'sEncrypt). Damages. Waste of legal resources (fees!) by both. Loss of credibility (Brand!) for the attacker. What were they thinking?
2016 Startcom/WoSign. A CA permitted base-domain certs to be issued if the requestor proved control over a subdomain (github cert, mozilla). In particular, a person got a cert for github.com by proving they were in control of x.github.com. The CA did not report the incident, nor did the next audit cover the incidents (no action). The CA had a further incident reported over accepting large port numbers, although this is bemusing because ports are not privileged except by convention (Incident report). In another incident, the CA backdated certs to before a deadline for dropping SHA1 (Mozilla).
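The control that failed here can be stated as a one-line rule: an applicant who proved control of one name may only be issued that name or names beneath it, never a parent domain. A hypothetical helper, for illustration only:

    # The control that failed, as a one-line rule: issuance is allowed only for
    # the validated name or names beneath it, never a parent.  Hypothetical helper.
    def issuance_allowed(requested: str, validated: str) -> bool:
        requested, validated = requested.lower().rstrip("."), validated.lower().rstrip(".")
        return requested == validated or requested.endswith("." + validated)

    assert issuance_allowed("x.github.com", "x.github.com")
    assert not issuance_allowed("github.com", "x.github.com")    # the failure mode above
    assert issuance_allowed("deeper.x.github.com", "x.github.com")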
The CA was cross-validated by another CA sparking calls to check that CA as well. It was discovered that the superior CA had been secretly sold to the same interests that owned the sub-CA (who owns who). More scurrilous suggestions that one country is safe and another is not, etc (more). Damages. Three browser vendors (Apple, Mozilla and google) started the process of dropping the CAs. Mozilla report.
2016 Easy Tool. Perhaps related to the previous, researchers discovered that an easy certificate tool provided by a CA had a number of weaknesses (StartEncrypt). The tool was fixed within a week or so. Damages. No breaches evident, so mostly embarrassment and continuing pain at the revelation that if one CA fails, all of them fail.
2016 C0m0d0. A CA used image recognition to scan the email address in its validation process (bug report). Austrians tricked it by changing the 1 (one) to an l ('ell') and got themselves a cert in the name of a telecom. Damages. CA reported to uber-CA, lots of explaining to do...
2017 Brazilian Bank. Hackers took control of a bank's domain / DNS account and rerouted the entire property to a cloud copy that had been HTTPS-protected with Let's Encrypt certs issued 6 months prior (Hijacked). They owned the bank for 6 hours, during which they milked any customers logging in. Damages. Customers redirected to phishing sites, malware injected into customers, ATMs/PoS might have been taken over as well. Massive rework, massive loss of banking credentials, but no report of money thefts at this stage.
Help in improving the facts gratefully accepted. Be careful with speculation, we need facts for this exercise. Embarrassing the victims does not help the mission of this page, so names of CAs and vendors are typically dropped.
Commentary & References
Discussed in this mozilla thread and comments incorporated 20120411.
SSL/TLS in a post-PRISM world is another list of breaches, includes "a video parody to explain the problem to non-technical people."
Recent Hacks is a list of data breaches with details in graphical and summary form.
Sunday, September 9, 2012
Ethics of Accumulated Earnings tax
Accumulated earnings tax

This paper is a study of the accumulated earnings tax. The study begins with the definition of the accumulated earnings tax (AET) and its case law, followed by a study of Apple, cash reserves, accumulated earnings, dividends and the avoidance of the accumulated earnings tax.

The accumulated earnings tax is a tax imposed by the United States government on companies with retained earnings that are deemed to be unreasonable and in excess of what is considered ordinary. Ordinary is $250,000 plus taxation payments. The federal government uses this tax to deter investors from negatively influencing a company's decision to pay dividends. Essentially, this tax persuades companies to issue dividends rather than retaining the earnings. The premise behind this is that companies that retain earnings experience higher stock price appreciation. The Internal Revenue Service is not in business to increase stock prices; however, the IRS may indirectly benefit from the increase in stock price. The government has set an extra tax on retained earnings when excess accumulated earnings occur. If a dividend is paid, the IRS will collect a tax from the stockholders. Plainly speaking, the accumulated earnings tax threat is intended to encourage C corporations to make timely payments of dividends, thus triggering the double taxation of C corporation earnings.

The accumulated earnings tax rate is tied to the dividend rate. The dividend tax rate is 15% through 2012. Historically, the AET rate was much higher than the dividend rate. This is not the case now, nor will it be in the near future, as it is an election year. A third definition of the accumulated earnings tax is a penalty tax that is imposed on C corporations that the IRS perceives as trying to avoid or defer shareholder income tax through an unnecessary accumulation of earnings. There is no bright-line test to define when a C corporation is purposely avoiding income tax or what is an impermissible accumulation of earnings.

Historically, the accumulated earnings tax applied to shareholders on their share of any profits that were not paid out by the corporation, if the failure to pay dividends was motivated by a desire to prevent the surtax from coming into play. This created an impossible evidentiary standard for the Secretary of the Treasury, who was to determine whether the accumulation was "unreasonable for the purposes of the business." Corporations could invest their profits to increase future income. Partnerships were taxed on their share of the profits, with no differentiation as to whether or not the profit was actually received. The accumulated earnings tax was later replaced by a ten percent tax on undistributed profit, and it underwent numerous changes throughout the 1910s and 1920s. In 1920, the Supreme Court ruled in Eisner v. Macomber: "We are clear that not only does this stock dividend really take nothing from the property of the corporation and add nothing to that of the shareholder, but that the antecedent accumulation of profits evidenced thereby, while indicating that the shareholder is richer because of an increase of his capital, at the same time shows he has not realized or received any income in the transaction." The Court affirmed the right of Congress to tax under the Sixteenth Amendment; however, Congress did not have the power to redefine income.
In other words, the holding from this case was: "A pro rata stock dividend where a shareholder received no actual cash or other property, and retained the same proportionate share of ownership of the corporation as was held prior to the dividend, was not taxable income to the shareholder within the meaning of the Sixteenth Amendment, and that an income tax imposed by the Revenue Act of 1916 on such dividend was unconstitutional, even where the dividend indirectly represented accrued earnings of the corporation." After Macomber, Congress dropped attempts to tax the shareholder and pursued corporations for the accumulated earnings tax. In most cases, the corporation has the burden to prove that its accumulated earnings, also known as retained earnings, are reasonable.

In Doug-Long Inc. v. Commissioner, the court held that the taxpayer had the burden of proving whether its earnings and profits were permitted to accumulate beyond the reasonable needs of its business. The court held that the taxpayer failed to carry its burden of disproving the presumption that its earnings and profits accumulated beyond its reasonable needs, and that the taxpayer was liable for the tax imposed under § 531. Doug Long operated a truck stop and service station and rented out space to a United States Post Office and a diner. He argued that he required the accumulated earnings due to: (a) development and expansion of the business; (b) supply and fuel inventory problems; (c) a vapor emission recovery system; (d) increased parking space; (e) construction of a truck service-repair facility; and (f) an outstanding debt owed to the taxpayer. At the time of the case, the 1970s Arab Oil Embargo was taking place, and the taxpayer worked very hard to maintain gasoline in his truck stop and service station. However, neither the taxpayer nor the opinion considered the politics of the Arab Oil Embargo. Historically, the Arab Oil Embargo was a very difficult time for Americans and gasoline companies, and it was very difficult for many business owners, such as Doug Long, to maintain gasoline. In hindsight, if more evidence about the challenging gasoline market had been presented, the outcome might have been different. The Court held that the issues raised by the taxpayer were too distant and not concrete, with the future needs of the business uncertain or vague.

Shortly after the Doug Long matter, the Bardahl formula was used to determine a taxpayer corporation's working capital needs. A corporation's working capital needs were the amount of working capital necessary for one operating cycle. An operating cycle consists of: (a) an inventory cycle, (b) an accounts receivable cycle, and (c) a credit cycle. The operating cycle is (a) plus (b) minus (c). The Bardahl formula provides the mechanism for determining each component cycle. Both parties agreed the Bardahl formula should be used in this case; however, the numbers each side presented were different. The Bardahl formula originated in Bardahl Manufacturing Corporation v. Commissioner, T.C. Memo 1965-200; 1965 Tax Ct. Memo LEXIS 128; 24 T.C.M. (CCH) 1030; T.C.M. (RIA) 65200 (1965). Bardahl Manufacturing Corporation was a multinational corporation. The petitioner (Bardahl) required working capital at the end of each year at least in the amount sufficient to cover its reasonably anticipated costs of operation for a single operating cycle.
Since its operating cycle during the period 1956 through 1959 averaged approximately 35 percent of a year, its working capital requirements for the continuation of its normal operations amounted to approximately 35 percent of its reasonably anticipated (and steadily increasing) total annual operating costs and cost of goods sold as of the end of each of the years in issue. In modern times, some question whether or not large corporations, such as Microsoft, Apple, Google, or Facebook, are similar to Bardahl Manufacturing in the 1970s. Using the Bardahl company as a model, the tax court was able to determine the amount of working capital necessary for a year in the manufacturing industry. As a result, the courts found that Bardahl did not have accumulated earnings in the year 1957; however, the company did in the years 1956, 1958, and 1959, due to Ole Bardahl's drawing account.

Ole Bardahl was the major stockholder in Bardahl Manufacturing and its related companies, which could be compared to Microsoft, Apple, Google or Facebook today. However, Ole engaged in loans and other transactions that would not be tolerated in today's corporations. These practices included the use of a drawing account for Ole Bardahl and real estate investments unassociated with the business. The purpose of the drawing account was to make cash advances and to pay certain personal bills during Ole Bardahl's absences from the country. From 1956 to 1959, Ole Bardahl traveled extensively in connection with the business operations, and his federal taxes were paid from this account every January by the petitioner. Additionally, Ole Bardahl made substantial cash payments to the company in reduction of his temporary loan balance several times a year and paid off the remainder of his account every December. At the same time, the petitioner also engaged in real estate investments unrelated to the business, or, stated in other words, having no reasonable connection with the business. The Court held these were investments he wanted to make with his friends, and such transactions are listed as numbers one and two in Treasury Reg. 1.533-1(a)(2).

As a result of the Bardahl case, several new standards helped to clarify when the accumulated earnings tax penalty could be applied to a business. The Court stated: "given the nature of petitioner's business, the amounts of its inventories and the rate of inventory turnover, the amount of its accounts receivable, and the collection rate thereof, we would be unable to find on the record before us that a cash reserve sufficient to cover one year's operating costs would be justified. The parties appear to agree on brief that the most appropriate basis for determining petitioner's need for operating capital is to compute the amount of cash reasonably expected as being sufficient to cover its operating costs for a single operating cycle. The evidence discloses that the corporation's operating cycle, consisting of the period of time required to convert cash into raw materials, raw materials into an inventory of marketable Bardahl products, the inventory into sales and accounts receivable, and the period of time required to collect its outstanding accounts, (which averaged approximately 4.2 months during the 4 years here involved). Thus the length of Manufacturing's operating cycle during the period 1956 through 1959 averaged approximately 35 percent of a year."
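The arithmetic of the Bardahl formula can be illustrated with a small worked sketch using hypothetical figures; the 4.2 months and 35 percent below simply mirror the proportions in the opinion, and the cost figure is invented for illustration.

    # Worked example of the Bardahl arithmetic described above (hypothetical figures).
    # Operating cycle = inventory cycle + accounts-receivable cycle - credit cycle;
    # the working-capital allowance is that fraction of a year applied to annual
    # operating costs (the opinion's 4.2 months is roughly 35% of a year).
    inventory_cycle_months  = 2.5
    receivable_cycle_months = 2.2
    credit_cycle_months     = 0.5

    operating_cycle = inventory_cycle_months + receivable_cycle_months - credit_cycle_months
    fraction_of_year = operating_cycle / 12          # 4.2 months -> 0.35

    annual_operating_costs = 10_000_000              # hypothetical
    working_capital_need = fraction_of_year * annual_operating_costs
    print(f"{operating_cycle:.1f} months -> {fraction_of_year:.0%} of a year "
          f"-> ${working_capital_need:,.0f} reasonable working capital")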
In the Court's opinion, at the beginning of each year the petitioner would need liquid assets to meet operating expenses for a maximum of only 4.2 months. This was based on the assumption that no operating revenue would be received until the expiration of its operating cycle. Additional expenditures for operating costs beyond the first 4.2 months of each year would not be incurred without the production of operating revenue.

The corporate purpose or intent is a subjective question, and therefore difficult to prove. However, Treas. Reg. § 1.533-1(a)(2) provides three indicia of the purpose to avoid income tax with respect to a corporation's shareholders: (i) dealings between the corporation and its shareholders, such as withdrawals by the shareholders as personal loans or the expenditure of funds by the corporation for the personal benefit of the shareholders; (ii) the investment by the corporation of undistributed earnings in assets having no reasonable connection with the business of the corporation (§ 1.537-3); and (iii) the extent to which the corporation has distributed its earnings and profits.

The Court was unable to find from the record that the above-mentioned loans and advances to Ole Bardahl were of indefinite duration and should be regarded as a substitute for the distribution of dividends to the corporation's principal stockholder. The corporate meeting minutes revealed what Ole Bardahl envisioned was needed for capital for Bardahl Manufacturing and its subsidiaries. The Secretary was reported in the corporate minutes to have stated that, "in his opinion, it might be good financial policy to hold a minimum end dividend payment in 1958, in order to retain sufficient liquid capital to meet not only current working capital needs, but also to provide necessary funds to complete the expansion proposals." The Court in the final paragraph of the opinion agreed with the reasoning of the petitioner on its needs and its practice of not incurring loans. The Court stated: "Although the evidence before us discloses that petitioner had followed a consistent policy of financing its operations and capital improvements exclusively with cash expenditures and despite the further fact that it has demonstrated a favorable dividend record which shows that throughout the 4-year period here involved it had distributed an average of approximately 18 percent of its net income (after taxes) as cash dividends to its shareholders, these factors are overridden by its substantial investment in unrelated real estate ventures and its loans made for purposes unrelated to its business throughout each of those years. Accordingly, in addition to the statutory presumption that at the end of 1956, 1958, and 1959 was availed of for the purpose of avoiding taxes on its shareholders by reason of the fact that it allowed its earnings and profits to accumulate beyond its reasonable needs, we are of the opinion that the record in its entirety, independently demonstrates that such accumulations were in fact due to the purpose of avoiding taxes on its shareholders and we so hold."

Presently, several large corporations share similarities with Bardahl Manufacturing of the 1950s, such as Microsoft, Apple, Google, and Facebook, but these modern companies have not been penalized with the accumulated earnings tax. In July 2004, Microsoft had accumulated earnings of 60 billion dollars and paid a dividend of $0.16. Later, in December 2004, Microsoft paid 32 billion dollars in a one-time dividend payout, and the dividend was doubled to $0.32.
Bill Gates, the founder of Microsoft, had persuaded the Board of Directors not to pay dividends until 2004 while he was creating a charitable trust. Once the trust was created, the dividends were then paid out, and Gates received $3.2 billion, all of which was donated to charity. Although Gates is a very charitable man, donating millions of dollars and countless hours to help those in need, the decision to withhold dividend payouts until 2004 can be seen as personal gain. Gates did not have to pay taxes because of his large charitable donation, and his persuasion of the Board of Directors could arguably be seen as unrelated to Microsoft.

Another major company that seemingly avoided the accumulated earnings tax is Apple, which in the past few years has gained incredible profitability. Within the first quarter of 2012, Apple profits reached 13.1 billion dollars, making the company one of the most successful businesses to date. This was not the first time Apple had seen profits of 13.1 billion dollars, though; its revenue in the fourth quarter of 2010 was also 13.1 billion dollars. Last year alone, Apple added 38 billion dollars to its cash reserves, and reports suggest the company is worth 97 billion dollars in cash and equivalents. Of the 97 billion dollar net worth, about 64 billion dollars is offshore, which is not subject to United States tax laws.

In the past, Microsoft and Apple have been competitors in the business world, but in 2010, Apple surpassed Microsoft in terms of revenue for the first time ever. Their revenues were a record-setting 20.34 billion dollars. Since then, Apple has continually come out on top, with Microsoft only reporting 20.9 billion dollars in revenue at the end of 2011, while Apple reported 46.33 billion dollars in revenue for the same period. Not surprisingly, Apple's profits exceeded those of Microsoft in April 2011 for the first time ever. This past quarter alone, Apple's net income of 13.06 billion dollars was almost double Microsoft's net income of 6.62 billion dollars. Peter Oppenheimer, Apple's chief financial officer, told analysts that the company and its board of directors were "actively discussing" uses of the cash, including potential acquisitions and further investments in the company's supply chain. "We're not letting it burn a hole in our pockets," he said. Due to the large amount of publicity over their net revenue, it is not surprising that Apple announced it would pay dividends on first quarter earnings and buy back stock.

With so much net capital recorded, many wonder how both Microsoft and Apple have avoided the accumulated earnings tax. Clearly, both companies are in the public spotlight, and their huge cash reserves are very well documented worldwide. In order for the two companies to avoid the AET, both would not only have to justify their cash reserves but also show that their decision to withhold dividends was reasonable. Apple alone is worth more than the government of Greece, and many question how it avoided the accumulated earnings tax. The answer comes in the form of the economy. After the crash of 2008, most companies had increased accumulated earnings because it became substantially more difficult to obtain credit or loans. As a result, corporations had to have large cash reserves to cover their expenses and expand their business ventures.
The previous $250,000 limitation seems unreasonable given the economic downturn, where cash flow was vital to profitability, and the significant increase in companies' research and development needs. Apple's announcement that it would pay dividends highlights the cash it has been holding all this time, and it is unclear why the IRS has not charged Apple with the accumulated earnings tax for previous years. One could hardly argue that the retained earnings were needed for research and development, nor did the accumulated retained earnings represent a "reasonable" amount of working capital. For a company the size of Apple, spending 20 billion dollars on research is not an unreasonable number, but within the past year, its cash reserves increased by 37% to 33.6 billion. The extra 13 billion could certainly pose a problem because it is well over the $250k retained limit, but Apple had a plan. Even though Apple is reportedly worth 95 billion dollars, only the 33.6 billion is in the United States. Most of the company's money was overseas, and the 37% increase was new this year, so Apple only had to worry about planning for the extra 13 billion dollars. The corporation was able to create a plan, including a buyback and dividends, for the extra funds, and the IRS is only interested in those who store cash without a plan. Additionally, Apple uses the Double Irish and Dutch Sandwich to gain overseas profit, like most other companies, but that money is untouchable to the IRS. Since the profit is out of their tax jurisdiction, Apple cannot be penalized with the accumulated earnings tax.

Comments from a blog discussion on Apple and the accumulated earnings tax illustrate the popular view:

"They really didn't have that much cash on hand, a company the size of Apple can make plans for 20 billion pretty easily. Over the last year the cash balance increased 37% to $33.6 billion. That extra 13 billion they made last year is a little more difficult to make a reasonable plan for."

"A bit more than the $250k where the AET (allegedly) becomes a concern."

"Isn't a large portion of their cash overseas? Might that have something to do with it?"

"They have 70B offshore, and it's staying offshore."

"The AET only becomes a concern when you don't have a plan, Apple had a plan. They simply did not have that much money on hand before this year, they had 20 billion. Ignore what the press is saying about 95 billion, most of that is in the Bahamas. The real number is 33.6 billion, that is what is taxable, that is what they had to figure out a plan for, and the new plan included a buyback and dividend."

"Just like every other company they used the double Irish and Dutch sandwich to get overseas profits to their tax haven of choice, the Bahamas. The IRS can't touch it, so it doesn't matter as far as the AET is concerned."

The Double Irish and Dutch Sandwich refers to the international tax practice of moving the licensing of intellectual property to low-tax havens through several different countries. This is outside the topic of this paper; however, an article from the New York Times is hereby attached. Oracle, IBM, Google, Microsoft and Facebook use the Double Irish and Dutch Sandwich in addition to Apple. The New York Times credited Apple with the creation of the Double Irish with Dutch Sandwich: a tax-avoidance strategy of routing profits through the Netherlands to Ireland and then through the Caribbean.
The Times reports that "Apple has created subsidiaries in low-tax places like Ireland, the Netherlands, Luxembourg and the British Virgin Islands — some little more than a letterbox or an anonymous office — that help cut the taxes it pays around the world."

Apple recently advertised for an iOS software engineer to "strengthen its multi-view stereo research group" — in other words, 3D. The engineer would research and create patents for 3D usage in the iPhone, iPad, Mac and iTV for cameras and display. Microsoft is researching 3D holograms. One commenter (drivera1) replied that Microsoft spent about $9 billion on R&D, totaling almost $69 billion over the last decade, and was granted the third most U.S. patents of all companies in 2010: IBM was granted 5,896 patents in 2010, Samsung was second with 4,551, Microsoft was granted 3,094, and Apple was granted 563. In the past, Apple under Steve Jobs did not spend money on research and development, as Steve Jobs was happy to announce. However, in 2006 Apple spent $500 million and in 2007 it spent $800 million; the results for Apple were the iPhone and iPad, and both products generated record-breaking revenue. In 2007, Microsoft spent over $7 billion. This year Apple devoted $2.4 billion to R&D. Apple, post Steve Jobs, will need to do R&D to continue its dominance in technology.

The New York Times published an article stating that Apple paid only a 9.8% tax rate. The 9.8% tax rate was based upon quarterly taxes paid in 2011. Taxes paid in the first two quarters were based upon Apple paying either 90% of Apple's expected tax for 2010 or 100% of the total tax on last year's taxes. In other words, the NYTimes was comparing apples and oranges, two different tax bills with different total tax in one figure. The article also stated that Apple relocated its investment headquarters to Nevada. The state of Nevada does not have a state income tax, thereby saving employees and the company money. The New York Times published a similar article on General Electric (GE) and that company's tax avoidance practices. General Electric's reputation score dropped from the low 30s to the high teens. Apple's reputation score began in the 50s, increased, and returned to its normal 50 score. Apple responded to the allegations from The New York Times by stating, "Apple pays an enormous amount of taxes, which help our local, state and federal governments. In the first half of fiscal year 2012, our U.S. operations have generated almost $5 billion in federal and state income taxes, including income taxes withheld on employee stock gains, making us among the top payers of U.S. income tax." Apple's last annual report stated that it paid $8.3 billion in worldwide taxes.

The blog discussion on Apple and the accumulated earnings tax didn't understand that the accumulated earnings tax only applies to United States earnings, not international intellectual property. Local United States and small corporations are the only corporations that need to be concerned with the accumulated earnings tax. As business changes, the internal revenue tax code must change. On one hand, the accumulated earnings tax originated in the early 1910s and is obsolete. However, both Apple and Microsoft may have paid dividends because of their excessive earnings and the threat of the accumulated earnings tax. Ethics in the accumulated earnings tax appears to be based on the company and the public view of the company.
By Amanda Diedrick // Little House by the Ferry Blog
“The Fears That Your Elders Grew By”
There’s a line in a ‘Crosby, Stills and Nash’ song that says, “And you, of tender years, can’t know the fears that your elders grew by.” I heard that song as I was writing this, and I realized how true it is. We can listen to or read the accounts, but I’m not sure we can really, truly appreciate the suffering or fears faced by so many of our ancestors.
On my mother’s side, our forefathers (and mothers) were Eleutheran Adventurers and British Loyalists, all of whom, fleeing persecution, settled in the Bahamas.
Perhaps my best-known ancestor is Wyannie Malone, my 8x great-grandmother. She and her husband, a cooper (barrel maker) possibly named Benjamin, lived near Charleston, South Carolina.
With family roots in the UK, they sided with the British Loyalists during the American Revolution. Their son, Ephraim (and possibly two others, Walter and Benjamin Jr.) fought in the local Loyalist militia.
Years later, Ephraim Malone recalled that as the Americans were coming up the river to take Charleston, his family was forced to hide their livestock on an island in the river. Ephraim’s father hid his tools and the family’s meager savings beneath a tree.
Following the Battle of Charleston and the end of the Revolutionary War, Loyalists like the Malone family faced unimaginable violence and cruelty. Their property and assets were confiscated. They were raped, branded, whipped, tarred and feathered. Some had their ears “cropped.” Others were imprisoned and sentenced to hard labour or even death. At least one group of Loyalists was confined to a dark, damp copper mine, where they ultimately died.
Their “crimes?” In some cases they were acts as innocuous as drinking tea, or toasting the King’s health.
Losing their land in the siege at Charleston was only the beginning of Wyannie Malone’s heartbreak. Soon after the siege, her husband took ill and died. One of their sons was killed during the war (some reports say three sons died in battle, but only one can be confirmed.) A daughter left to marry a sea captain, but was never heard from again. It was said that during a disagreement, her husband tossed her overboard.
On his deathbed, Wyannie’s husband begged her to take their remaining children and go to The Bahamas. At the time, there were numerous advertisements published in local newspapers encouraging Loyalists to escape persecution by the patriots and emigrate to the islands. The ads boasted of mild weather, tropical splendor and miles of lush land well-suited for agriculture.
And so, the recently widowed Wyannie and at least three of her children — Ephraim, David and young Wyannie – boarded a southbound schooner either in Charleston or St. Augustine, Florida, with no idea what lay ahead.
Many believe that the Malones landed first at Cherokee or Little Harbour and spent some time there. What we know for sure is that in September 1785, they rowed into a large harbour on Elbow Cay, north of the Abaco mainland. Accompanying them was a young man named Jacob Adams who had served as a Loyalist soldier alongside Ephraim Malone.
There, on the sheltered fringes of the harbour, along with other British Loyalists, they founded Hope Town. (It’s said that the vessel on which they originally left the U.S. was named The Hope.)
It’s difficult to imagine today, with Hope Town’s candy-striped lighthouse standing tall and its glittering harbour dotted with bobbing sailboats and encircled by pastel-painted cottages, what it was like to be its first permanent residents. To settle on a strange, completely uninhabited island and to endure violent summer storms, bugs and insects, wild animals and disease, while struggling to build shelter, locate fresh water and cultivate food before supplies ran out.
And, ultimately, to discover that the islands weren’t necessarily as advertised. The soil was, in fact, fertile – but far too sparse to support extended agriculture. In less than a decade, most of the Loyalists’ cotton and other crops failed. Many who could afford to do so returned to the U.S., where by then conditions had improved, and where they helped found Key West.
For others, like Wyannie and her children, lacking the resources to leave and having nothing to which to return, there was no choice but to stay. Through blind determination and amazing resilience, they forged a life. The land having proved unable to sustain them, they turned to the sea, earning livelihoods through fishing, turtling, wrecking and boat building.
In the early 1800s, as a reward for their loyalty to the King, Jacob Adams and Ephraim Malone received significant land grants in Hope Town.
Ephraim married Elizabeth Tedder, of Harbour Island.
Young Wyannie married Jacob Adams. Both my great-grandmother, Margaret Eunice Key and my great-grandfather, Herman Thomas Curry were descendants of this couple.
David Malone married Patience Beek, of Harbour Island. My great-grandfather, Leon Albury (husband of Margaret Eunice Key) was their 3x great-grandson.
Today, there are more than 20,000 descendants of Wyannie Malone worldwide.
In the early 1990s, Virginia “Jinny” Whittemore McAleer and her husband Mac compiled a listing of the first six generations of Wyannie descendants for Hope Town’s Wyannie Malone Historical Museum. In 1998, Jinny and Mac updated the publication. ‘The Genealogy of Wyannie Malone’ is a large, impressive and informative undertaking, though extremely difficult to come by — it quite literally took me years to locate a copy.
But there’s great news. The Wyannie Malone Museum is currently undertaking the monumental task of updating the book to make corrections and to incorporate all the Wyannie descendants born since the last edition was published.
The museum asks that all known descendants of Wyannie Malone submit information about their family trees for the book. Even if you’ve submitted corrections and additions in the recent past, they ask that you send them again to this dedicated email address: [email protected].
Bonnie Hall, who’s spearheading the project, says they’ll take as much information as you can provide — names, birth, death and marriage dates and locations, spouse names, burial places, military records, even stories about your ancestors’ lives.
If you can assist with this project, please contact the museum at [email protected]. And if your ancestors hail from Abaco, but you’re not sure if you’re a Wyannie descendant, get in touch, and I’ll check the Index of my ‘The Genealogy of Wyannie Malone’ to see if they’re listed.
Recently there’s been a lot of news about OpenSolaris, more specifically in reference to the great progress made by virtualization technologies in it. In this article, I will examine some of these technologies, and compare them with the state of the art on other platforms.
OpenSolaris’ Zones is a mechanism that provides isolated environments with a subset of the host operating system’s privileges, allowing applications to run within the zone without any modifications (Xen is also capable of this). This makes zones useful for server consolidation, load balancing and much more.
Each zone has a numeric ID and a unique name; the global zone has ID 0, is always running and cannot be halted. There are two user space tools for zone configuration, creation and management: zonecfg and zoneadm. These tools use a lightweight IPC (Inter Process Communication) mechanism called doors, implemented as a virtual file system (doorfs), to communicate with the kernel. When using doors, context switches are executed using a unique synchronization mechanism called shuttle, instead of through the kernel dispatcher; this allows faster transfer of control between kernel threads.
I should mention that Linux does not have a doors IPC system, though there was an attempt to write one by Nikita Danilov in 2001; this project can be found on sourceforge.net (Doors for Linux).
Some operations are not allowed in a zone: mknod from inside a zone, for example, will return mknod: Not owner; the creation of raw sockets is also prohibited, with the one exception of socket(AF_INET,SOCK_RAW,IPPROTO_ICMP) (which is permitted in order to allow zones to perform ping). It's worth noting that zones can modify the attributes of a device (such as its permissions) but can not rename it.
All zoneadmd daemons run in the global zone, and each zone has a zoneadmd process (used for state transitions) assigned to it. When dealing with zones other than the global zone, processes running in one zone cannot affect or see processes in other zones: they can affect or see only processes within their own zone.
A zone can be in one of the following states: configured, installed, ready, running, shutting down or down.
- Configured: configuration was completed and committed
- Installed: the packages have been successfully installed
- Ready: the virtual platform has been established
- Running: the zone booted successfully and is now running
- Shutting down: the zone is in the process of shutting down
- Down: the zone has completed the shut down process and is down
Another interesting feature of zones is that they can be bound to a resource pool; Solaris Containers is the name for zones which use resource management.
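To make this concrete, here is a minimal sketch of how a zone is typically created and started with these two tools; the zone name and zonepath below are only illustrative examples, not taken from the article:

zonecfg -z myzone
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/zones/myzone
zonecfg:myzone> set autoboot=true
zonecfg:myzone> commit
zonecfg:myzone> exit
zoneadm -z myzone install
zoneadm -z myzone boot
zoneadm list -cv

After the install step the zone shows up in the installed state, and after booting it moves to running, matching the state list above.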
Branded Zones enable you to create non-global zones which contain foreign operating environments. The lx brand provides a Linux environment under Solaris; such a zone can be created by using the set brand=lx option when configuring the zone with the zonecfg command.
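As a rough sketch (the zone name and path here are illustrative, and the exact zonecfg dialogue may differ slightly between OpenSolaris builds), an lx zone is configured by starting from a blank configuration and setting the brand explicitly:

zonecfg -z lxzone
zonecfg:lxzone> create -b
zonecfg:lxzone> set brand=lx
zonecfg:lxzone> set zonepath=/zones/lxzone
zonecfg:lxzone> commit
zonecfg:lxzone> exit

The zone is then installed and booted with zoneadm in the usual way, after which Linux user space binaries can be run inside it.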
The lx zone only supports user level applications; therefore, you cannot use Linux device drivers or kernel modules (including file systems) in an lx zone. Implementing lx zones required a lot of additions and modifications: for example, executing an ELF binary in an lx zone is performed by the lx brand ELF handler. In Linux, system calls are made by calling interrupt 0x80, whereas Solaris usually uses syscall instructions for a system call on x86, while in earlier versions it was done with lcall instructions (on SPARC, system calls are initiated by traps). Since Solaris did not have a handler for interrupt 0x80, the Brandz project was started to add such a handler; this handler, in fact, simply delegates the call to the handler in the brand module, where it is eventually executed. The lx brand is available only for i386/x86_64 systems: you cannot run Linux applications on SPARC using the lx brand. You will often encounter the term "Solaris Containers for Linux Applications" or the acronym "SCLA" as a synonym for branded lx zones.
The branded zone was integrated into the mainline Solaris tree in December 2006 (OpenSolaris brandZ project.)
CrossBow and IP Instances
CrossBow is a new OpenSolaris network virtualization project that allows you to create multiple virtual NICs (VNICs) from a single physical NIC. It also enables you to control QoS parameters, making it possible to assign specific bandwidth allocations and provide different priorities to each virtual NIC, protocol, or service. This can be done by a system administrator (with the flowadm commands) or by an application using setsockopt(). CrossBow is ideal for server consolidation, the isolation of Solaris Zones, tuning a system's network resources, enhancing security (in the case of a distributed denial of service attack, for example, only the attacked VNIC will be impacted instead of the entire system), and much more.
Here is an example of setting VNIC bandwidth:
dladm create-vnic -d bge0 -m 00:01:02:03:04:05 -b 10000
dladm is a utility which administers data links.
The network virtualization layer in CrossBow was implemented by changes made to the MAC layer, and by adding a new VNIC pseudo driver. The VNIC pseudo driver appears in the system as if it were a regular network driver, allowing you to run the usual commands (e.g. snoop). The VNIC pseudo driver was implemented as a Nemo/GLDv3 MAC driver and it relies on hardware-based flow classification.
IP instances are part of the CrossBow project that uses the flow classification feature of NICs, but also has a solution for NICs without this feature; in the future, almost all 1GB and 10GB NICs will support flow classification. With IP instances, each zone can have its own instance of the kernel TCP/IP stack: each zone will also have its own ARP table and its own IP routing table, IP filter rules table and pfhooks (pfhooks is the OpenSolaris equivalent of Linux's nfhooks, the Netfilter hooks). IP instances also enable zones to use DHCP, IPMP and IPSec (the IP Security protocol, which is used in VPNs), with each zone having its own IPSec Security Policy Database (SPD) and Security Association (SA). In order to implement IP instances, all global data in the kernel TCP/IP stack which might be modified during runtime was made non-global. For example, a new structure named ip_stack was created for the IP kernel layer (layer 3 in the 7 layer model, the network layer); a new structure named udp_stack was created for the UDP kernel layer (layer 4 in the 7 layer model, the transport layer), and so on. Using IP instances, non-global zones can apply IP Filter rules (IP Filter is the OpenSolaris equivalent of iptables in Linux); prior to the CrossBow and IP instances project, this was impossible. IP instances are enabled with set ip-type=exclusive when creating a zone with zonecfg. A non-global zone created without this option will, of course, share its IP instance with the global zone (as was the case before the integration of the IP Instances project). See the OpenSolaris Crossbow project for more information.
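A minimal example of such a configuration (the zone name and the data link are assumptions used for illustration) dedicates one data link to the zone:

zonecfg -z webzone
zonecfg:webzone> set ip-type=exclusive
zonecfg:webzone> add net
zonecfg:webzone:net> set physical=bge1
zonecfg:webzone:net> end
zonecfg:webzone> commit

The link given in physical= can be a real NIC or a CrossBow VNIC created with dladm, which is what makes it practical to give many zones their own IP instance on top of a single physical interface.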
Xen in OpenSolaris is a port of the Linux Xen project. The last update to the Xen project, as of today, was in July 2007. There is HVM support in OpenSolaris Xen; this means that if you have processors with virtualization extensions, you can run unmodified operating systems as guests.
The Xen project uses virtual NICs from the CrossBow project, which was discussed in the previous section. There is also support for management tools (virsh). For more information, see the OpenSolaris Xen project.
A new platform called i86xpv was prepared for Xen; you can verify that you booted into Xen by running uname -i (you should get i86xpv).
New features include PAE for 32 bit Solaris, Xen crash dumps for dom0, better integration with other Solaris network virtualization projects, and more.
In this article, I showed the current state of the art of some interesting virtualization techniques in OpenSolaris, many of which enable you to use your hardware more efficiently. It seems that OpenSolaris has made a great effort in this field, and now offers the same abilities as other modern OSes, along with some nice extras.
From my point of view one of the best ways to get started in electronics is to build your own laboratory power supply. In this instructable I have tried to collect all the necessary steps so that anyone can construct his own.
All the parts of the assembly can be ordered directly from Digikey, eBay, Amazon or AliExpress except the meter circuit. I made a custom meter circuit shield for Arduino able to measure up to 36V - 4A, with a resolution of 10mV - 1mA, that can be used for other projects also.
The power supply has the following features:
- Nominal Voltage: 24V.
- Nominal Current: 3A.
- Output Voltage Ripple: 0.01% (According to the specs of the power supply circuit kit).
- Voltage measurement resolution: 10mV.
- Current measurement resolution: 1mA.
- CV and CC modes.
- Over current protection.
- Over voltage protection.
Step 1: Parts and Wiring Diagram
Apart from the Image, I have attached the file WiringAndParts.pdf to this step. The document describes all the functional parts, icluding the ordering link, of the bench power supply and how to connect them.
The mains voltage comes in through an IEC panel connector (10) that has a built-in fuse holder; there is a power switch on the front panel (11) that breaks the circuit from the IEC connector to the transformer (9).
The transformer (9) outputs 21VAC. The 21 VAC go directly to the power supply circuit (8). The output of the power supply circuit (8) goes directly to the IN terminal of the meter circuit (5).
The OUT terminal of the meter circuit (5) is connected directly to the positive and negative binding posts (4) of the power supply. The meter circuit measures both voltage and current (high side), and can enable or disable the connection between in and out.
Cables: in general, use scrap cables you have in the house. You can check the internet for the appropriate AWG gauge for 3A, but in general the rule of thumb of 4A/mm² works, especially for short cables. For the mains voltage wiring (120V or 230V) use appropriately insulated cables, 600V in the USA, 750V in Europe.
The series pass transistor of the power supply circuit (Q4) (12) has been wired instead of been soldered to allow an easy installation of the heatsink (13).
The original 10K potentiometers of the power supply circuit has been replaced with multiturn models (7), this makes possible a precise adjustment of the output voltage and current.
The arduino board of the meter circuit is powered using a power jack cable (6) that comes from the power supply circuit (8). The power supply board has been modified to obtain 12V instead of 24V.
The positive pin of the CC LED from the power supply circuit is wired to the mode connector of the Meter Circuit. This allow it to know when to display CC or CV mode.
There are two buttons wired to the meter circuit (3). The Off button “red” disconnects the output voltage. The On button “black” connects the output voltage and resets OV or OC errors.
There are two potentiometers wired to the meter circuit (2). One sets the OV threshold and the other sets the OC threshold. These potentiometers do not need to be multiturn, I have used the original potentiometers from the power supply circuit.
The 20x4 I2C alphanumeric LCD (1) is wired to the meter circuit. It shows the present information about output voltage, output current, OV setpoint, OC setpoint and status.
Step 2: Power Supply Circuit Kit
I bought this kit that is rated as 30V, 3A:
I am attaching an assembly guide I found in the Internet and an image of the Schematic. Briefly:
The circuit is a linear power supply.
Q4 and Q2 are a Darlington array and form the series pass transistor, it is controlled by the operational amplifiers to maintain the voltage and the current at the desired value.
The current is measured by R7, adding this resistance in the low side makes the ground of the power supply circuit and the output ground different.
The circuit drives a LED that turns on when the constant current mode is on.
The circuit incorporates a Graetz (full-wave) bridge to rectify the AC input. The AC input is also used to generate a negative biasing voltage to reach 0V.
There is no thermal protection in this circuit, so appropriate dimensioning of the heatsink is very important.
The circuit has a 24V output for an “optional” fan. I have substituted the 7824 regulator with a 7812 regulator to get 12V for the Arduino board of the meter circuit.
I have not assembled the LED, instead I have used this signal to indicate the meter circuit if the power supply is in CC or CV.
Step 3: Power Supply Circuit Kit Assembling
In this circuit all parts are through hole. In general you must start with the smallest ones.
- Solder all the resistors.
- Solder the rest of the components.
- Use pliers when bending diodes leads to avoid breaking them.
- Bend the leads of the DIP8 TL081 op amps.
Use heatsink compound when assembling heatsinks.
Step 4: Meter Circuit Design and Schematic
The circuit is a shield for Arduino UNO compatible with R3 versions. I have designed it with parts available at digikey.com.
The output of the vkmaker power supply circuit kit is connected to the IN terminal block and the OUT terminal block goes directly to the binding posts of the power supply.
R4 is a 0.01 ohm shunt resistor in the positive rail; it has a voltage drop proportional to the output current. The differential voltage across R4 is wired directly to the RS+ and RS- pins of IC1. The maximum voltage drop at maximum current output is 4A*0.01ohm = 40mV.
R2, R3 and C2 form a ~15Hz filter to avoid noise.
IC1 is a high side current amplifier: MAX44284F. It is based in a chopped operational amplifier that makes it able to get a very low input offset voltage, 10uV at maximum at 25ºC. At 1mA the voltage drop in R4 is 10uV, equal the maximum input offset voltage.
The MAX44284F has a voltage gain of 50V/V so the output voltage, SI signal, at the maximum current of 4A, will value 2V.
The maximum common mode input voltage of MAX44284F is 36V, this limits the input voltage range to 36V.
R1 and C1 form a filter to suppress 10KHz and 20KHz unwanted signals that can appear due to the architecture of device, it is recommended in page 12 the of datasheet.
R5, R6 and R7 are a high impedance voltage divider of 0.05V/V. R7 with C4 form a ~5Hz filter to avoid noise. The voltage divider is placed after R4 to measure the real output voltage after the voltage drop.
IC3 is an MCP6061T operational amplifier; it forms a voltage follower to isolate the high impedance voltage divider. The maximum input bias current is 100pA at room temperature; this current is negligible relative to the impedance of the voltage divider. At 10mV input the voltage at the input of IC3 is 0.5mV, much bigger than its input offset voltage of 150uV maximum.
The output of IC3, SV signal, has a voltage of 2V at 40V input voltage (the maximum possible is 36V due to IC1). SI and SV signals are wired to IC2. IC2 is an MCP3422A0, a dual channel I2C sigma delta ADC. It has an internal voltage reference of 2.048V, selectable voltage gain of 1, 2, 4, or 8V/V and selectable number of 12, 14, 16 or 18bits.
For this circuit I am using a fixed gain of 1V/V and a fixed resolution of 14 bits. The SV and SI signals are not differential, so the negative pin of each input must be grounded. That means that only half of the available LSBs can be used.
As the internal voltage reference is 2.048V and the effective number of LSBs is 2^13, the ADC values will be 2 LSB per 1mA in the case of current and 1 LSB per 5mV in the case of voltage.
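As a small illustration of how these scale factors might be applied in the firmware, the conversion from raw ADC counts to display units can be done with integer math only; the function names below are illustrative and not taken from DCmeter.ino:

// Assumed scale factors from the text above: 1 LSB = 5 mV, 2 LSB = 1 mA (14 bit, gain 1)
long countsToMillivolts(long counts) {
  return counts * 5;   // 1 LSB per 5 mV
}
long countsToMilliamps(long counts) {
  return counts / 2;   // 2 LSB per 1 mA
}

Integer arithmetic is enough here because the display resolution targets are 10mV and 1mA.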
X2 is the connector for the ON push button. R11 prevents the Arduino pin input from static discharges and R12 is a pull-up resistor that makes 5V when unpressed and ~0V when pressed. I_ON signal.
X3 is the connector for the OFF push button. R13 prevents the Arduino pin input from static discharges and R14 is a pull-up resistor that makes 5V when unpressed and ~0V when pressed. I_OFF signal.
X5 is the connector for the overcurrent protection set point potentiometer. R15 prevents the Arduino input pin from static discharges and R16 prevents the +5V rail from a short circuit. A_OC signal.
X6 is the connector for the overvoltage protection set point potentiometer. R17 prevents the Arduino input pin from static discharges and R18 prevents the +5V rail from a short circuit. A_OV signal.
X7 ins an external input that is used to get the constant current or constant voltage mode of the power supply. As it can have many input voltages it is made using Q2, R19, and R20 as a voltage level shifter. I_MOD signal.
X4 is the connector of the external LCD, it is just a connection of the 5V rail, GND and I2C SCL-SDA lines.
I2C lines, SCL and SDA, are shared by IC2(the ADC) and the external LCD, they are pulled up by R9 and R10.
R8 and Q1 form the driver of K1 relay. K1 connects the output voltage when powered. With 0V in -CUT the relay is unpowered, and with 5V in -CUT the relay is powered. D3 is the free wheeling diode to suppress negative voltages when cutting the voltage of relay coil.
Z1 is a Transient Voltage Suppressor with a nominal voltage of 36V.
Step 5: Meter Circuit PCB
I have used the free version of Eagle for both the schematic and the PCB. The PCB is a 1.6mm thick double sided design that has a separate ground plane for the analog circuit and the digital circuit. The design is pretty simple. I got a dxf file from the Internet with the outline dimensions and the position of the Arduino pin header connectors.
I am posting the following files:
- Original eagle files: 00002A.brd and 00002A.sch.
- Gerber files: 00002A.zip.
- And the BOM(Bill Of Materials) + assembly guide: BOM_Assemby.pdf.
I ordered the PCB to PCBWay (www.pcbway.com). The price was amazingly low: $33, including shipping, for 10 boards that arrived in less than a week. I can share the remaining boards with my friends or use them in other projects.
There is a mistake in the design, I put a via touching the silkscreen in the 36V legend.
Step 6: Meter Circuit Assembling
Although most of parts are SMT in this board, it can be assembled with a regular soldering iron. I have used a Hakko FX888D-23BY, fine tip tweezers, some solder wick, and a 0.02 solder.
- After receiving the parts the best idea is to sort them, I have sorted capacitors and resistors and stapled the bags.
- First assemble the small parts, starting with resistors and capacitors.
- Assemble R4 (0R1) starting with one of the four leads.
- Solder the rest of parts, in general for SOT23, SOIC8, etc. the best way is to apply solder in one pad first, solder the part in its place and then solder the rest of the leads. Sometimes solder can join many pads together, in this case you can use flux and solder wick to remove the solder and clean the gaps.
- Assemble the rest of through hole components.
Step 7: Arduino Code
I have attached the file DCmeter.ino. All the program is included in this file apart from the LCD library “LiquidCrystal_I2C”. The code is highly customizable, especially the shape of progress bars and the messages displayed.
As all arduino codes it has the setup() function executed first time and the loop() function executed continuously.
The setup function configures the display, including the special chars for the progress bar, inits the MCP3422 state machine and sets up the relay and the LCD backlight for the first time.
There are no interrupts; in each iteration the loop function does the following steps:
Get the value of all the input signals I_ON, I_OFF, A_OC, A_OV and I_MOD. I_ON and I_OFF are debounced. A_OC and A_OV are read directly from the Arduino's ADC and filtered using the median of the last three measurements (see the sketch after this list). I_MOD is read directly without debouncing.
Control the turn on time of the backlight.
Execute the MCP3422 state machine. Each 5ms it polls the MCP3422 to see if the last conversion finished and if so it start the next, successively gets the value of voltage and current present at the output.
If there are fresh values of output voltage and current from the MCP3422 state machine, updates the status of the power supply based on the measurements and updates the display.
There is a double buffer implementation for faster updating the display.
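A median-of-three filter like the one used for A_OC and A_OV only takes a few lines; this is an illustrative sketch rather than the exact code in DCmeter.ino:

// Median of the last three readings: rejects a single noise spike on A_OC / A_OV
int medianOfThree(int a, int b, int c) {
  if (a > b) { int t = a; a = b; b = t; }  // ensure a <= b
  if (b > c) { b = c; }                    // b = min(max(a, b), c)
  return (a > b) ? a : b;                  // median = max(a, b)
}

Keeping the last three raw readings of each potentiometer in a small buffer and passing them through this function on every iteration gives the filtered set points.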
The following macros can be adjusted for other projects:
MAXVP: Maximum OV in 1/100V units.
MAXCP: Maximum OC in 1/1000A units.
DEBOUNCEHARDNESS: Number of iterations with a consecutive value to guess it is correct for I_ON and I_OFF.
LCD4x20 or LCD2x16: Compilation for 4x20 or 2x16 display, the 2x16 option is not implemented yet.
The 4x20 implementation shows the following information: in the first row, the output voltage and the output current. In the second row, a progress bar representing the output value relative to the protection set point for both voltage and current. In the third row, the current setpoint for overvoltage protection and overcurrent protection. In the fourth row, the current status of the power supply: CC ON (On in constant current mode), CV ON (On in constant voltage mode), OFF, OV OFF (Off showing that the power supply went off because of an OV), OC OFF (Off showing that the power supply went off because of an OC).
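As an example of how one display row can be formatted with the LiquidCrystal_I2C library, the sketch below prints the voltage and current line; the I2C address, the constructor arguments and the function name are assumptions that depend on your particular LCD module and library variant, and are not the actual DCmeter.ino code:

#include <stdio.h>
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x27, 20, 4);        // 0x27 is a typical backpack address

// First row, e.g. "12.34V  1.234A", from millivolt/milliamp values
void showFirstRow(long mv, long ma) {
  char buf[21];                            // 20 visible characters plus terminator
  snprintf(buf, sizeof(buf), "%2ld.%02ldV  %ld.%03ldA",
           mv / 1000, (mv % 1000) / 10, ma / 1000, ma % 1000);
  lcd.setCursor(0, 0);
  lcd.print(buf);
}

Writing the whole row into a buffer first fits the double buffer approach mentioned above, since the text can be compared against the previous contents before it is sent to the display.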
I have made this file for designing the chars of the progress bars: https://drive.google.com/open?id=1ych5bmo9lfsu44W...
Step 8: Thermal Issues
Using the right heatsink is very important in this assembly because the power supply circuit is not self protected against overheat.
According to datasheet the 2SD1047 transistor has a junction to case thermal resistance of Rth-j,c = 1.25ºC/W.
According to this web calculator: http://www.myheatsinks.com/calculate/thermal-resi... the thermal resistance of the heatsink I have purchased is Rth-hs,air = 0.61ºC/W. I will assume that the actual value is lower because the heatsink is attached to the case and the heat can be dissipated that way too.
According to the ebay seller, the thermal conductivity of the isolator sheet I have purchased is K = 20.9W/(mK). With this, with a thickness of 0.6mm, the thermal resistance is: R = L/K = 2.87e-5(Km2)/W. So, the thermal resistance case to heatsink of the isolator for the 15mm x 15mm surface of the 2SD1047 is: Rth-c,hs = 0.127ºC/W. You can find a guide for these calculations here: http://www.myheatsinks.com/calculate/thermal-resi...
The maximum allowable power for 150ºC in the junction and 25ºC in the air is: P = (Tj - Ta) / (Rth-j,c + Rth-hs,air + Rth-c,hs) = (150 - 25) / (1.25 + 0.61 + 0.127) = 63W.
The output voltage of the transformer is 21VAC at full load, that makes an average of 24VDC after diodes and filtering. So the maximum dissipation will be P = 24V * 3A = 72W. Taking into account that the thermal resistance of the heatsink is a little bit lower due to the metal enclosure dissipation, I have assumed it is enough.
Step 9: Enclosure
The enclosure, including shipping, is the most expensive part of the power supply. I found this model on eBay, from Cheval, a Thai manufacturer: http://www.chevalgrp.com/standalone2.php. In fact, the eBay seller was from Thailand.
This box has a very good value for money and arrived pretty well packaged.
Step 10: Mechanizing Front Panel
The best option for mechanizing and engraving the front panel is using a router like this https://shop.carbide3d.com/products/shapeoko-xl-k... or making a custom plastic cover with PONOKO, for example. But as I do not have the router and I did not want to spend much money, I decided to do it the old way: cutting, trimming with a file and using transfer letters for the text.
I have attached an Inkscape file with the stencil: frontPanel.svg.
- Cut the stencil.
- Cover the panel with painter tape.
- Glue the stencil to the painter tape. I have used a glue stick.
- Mark the position of drills.
- Drill holes to allow the fret saw or coping saw blade get into the internal cuts.
- Cut all the shapes.
Trim with a file. In the case of round holes for potentiometers and binding posts it is not necessary to use the saw before filing. In the case of the display hole the file trimming must be the best possible because these edges are going to be seen.
- Remove the stencil and the painter tape.
- Mark the position of the texts with a pencil.
- Transfer the letters.
- Remove the pencil markings with an eraser.
Step 11: Mechanizing Back Panel
- Mark the position of the heatsink, including the hole for the power transistor and the position of the holding screws.
- Mark the hole for accessing the heatsink from the interior of the power supply enclosure, I have used the insulator as a reference.
- Mark the hole for the IEC connector.
- Drill the contour of the shapes.
- Drill the holes for the screws.
- Cut the shapes with cutting pliers.
- Trim the shapes with a file.
Step 12: Assembling Front Panel
- Strip out a multiconductor cable from scrap to get cables.
- Build the LCD assembly soldering the I2C to parallel interface.
- Build the “molex connector”, wire and shrinkable tube assembly for: potentiometers, pushbuttons and LCD. Remove any protuberance in potentiometers.
- Remove the pointer ring of knobs.
- Cut the rod of potentiometers to the size of the knob. I have used a piece of cardboard as a gauge.
- Attach the push buttons and power button.
- Assemble the potentiometers and install the knobs, the multiturn potentiometers I have bought have a ¼ inch shaft and the one turn models have a 6mm shaft. I have used washers as spacers to trim the distance of potentiometers.
- Screw the binding posts.
- Put double sided tape in the LCD and stick it to the panel.
- Solder the positive and negative wires to the binding posts.
- Assemble the GND terminal lug in the green binding post.
Step 13: Assembling Back Panel
Screw the heatsink to the back panel. Although paint is a thermal insulator, I have put heatsink grease on to increase the heat transfer from the heatsink to the enclosure.
- Assemble the IEC connector.
- Position the adhesive spacers using the power supply kit circuit.
- Screw the power transistor and the insulator, there must be thermal grease in each surface.
- Assemble the 7812 for powering the arduino, it is facing the case to allow heat dissipation, using one of the screws that hold the heatsink. I should have used a plastic washer like this http://www.ebay.com/itm/100PCS-TO-220-Transistor-... but I ended up using the same insulator as the power transistor and a bent piece of the case.
- Wire the power transistor and the 7812 to the power supply circuit.
Step 14: Final Assembly and Wiring
- Mark and drill the holes for the transformer.
- Assemble the transformer.
- Stick the adhesive legs of the enclosure.
- Stick the DC meter circuit using adhesive spacers.
- Scrape the paint to screw the GND lug.
- Build the mains voltage wire assemblies, all the terminations are 3/16” Faston. I have used shrinkable tube to isolate the terminations.
- Cut the front part of the holder of the enclosure in the right side to get space for the power pushbutton.
- Connect all wires according to assembly guide.
Install the fuse (1A).
Turn the output voltage potentiometer (the VO potentiometer) to its minimum (fully CCW) and adjust the output voltage as close as possible to zero volts using the multiturn fine adjustment potentiometer of the vkmaker power supply circuit.
- Assemble the enclosure.
Step 15: Improvements and Further Working
Use spring (grower-style) lock washers to keep screws from getting loose with vibration, especially the vibration from the transformer.
Paint the front panel with transparent varnish to prevent the letters from being wiped off.
Add a USB connector like this: http://www.ebay.com/itm/Switchcraft-EHUSBBABX-USB-... in the back panel. Useful for upgrading code without disassembly, or for making a small ATE that controls the on/off functions, gets status and takes measurements using a PC.
- Make the 2x16 LCD compilation of code.
- Make a new power supply circuit, instead of using the vkmaker kit, with digital control of the output voltage and current.
Perform the appropriate tests to characterize the power supply.
Note: this 'web page' prints out as about 27 printer pages
The following notes are intended to show you the range of different fruit and nuts that can be grown in warm temperate areas, and how they might fit into a strategy of growing some food in either a suburban or peri-urban country garden.
Detailed notes and illustrations on pruning, culture, and local pests and diseases affecting the plants you have sorted out from this list as possibly worth growing can be found in some of the excellent books on fruit and nut growing in your local bookstore or library.
Warm temperate areas are areas that are generally cold in winter, but while there are usually air frosts, it never snows. In the more oceanic influenced variations of this zone, citrus will fruit, but some of the most heat demanding citrus, such as the true grapefruit, will only be successful in the high heat, almost mediterranean variation of this broad climatic zone. Elevated, or seaside sites, may have only a few ground frosts in cold years, and no air frosts. In these microclimates some deciduous fruit cultivars will not have enough winter chilling, and selecting low chill cultivars is essential.
There is a complex interplay between accumulated heat, wind effects, chilling, length of season, presence or absence of late frosts, and varietal differences that determines what can be grown in any one part of this broad zone. Local experience-seeing what your neighbours grow-is particularly valuable.
Indicator plants for warm temperate areas are-peaches, citrus, low chill stonefruit, feijoa, kiwifruit, casimiroa; tamarillo, avocado and banana in favored microclimates
Our choice of type of fruit tree, or even variety of apple or orange or whatever, is not influenced only by our particular local climatic conditions. Soil, and overwhelmingly, soil drainage, is a vital factor. In general, stonefruit are least tolerant of clay soils (especially where there is a high water table), except that plums are much more tolerant than other stonefruit. Apples are more tolerant still of wet soils, and pears are the most tolerant. Paradoxically, clay soils need heavy mulching or irrigating in hot summers. Lack of water is one of the most important factors in reduced fruit yield. Luckily, the home fruit gardener can overcome problems of both poor drainage and dry, sandy soil, by the same methods-using lots of organic soil amendments such as peat or compost, using raised beds, and selecting dwarf trees. The ultimate workaround for poor soils is to grow dwarf trees in large containers.
When we choose which fruit trees to plant, we have to take into account our personal circumstances and preferences. How much space is available for fruit trees? Is it sunny or rather shady? Is my lifestyle too busy to put a lot of time into regular spraying and pruning? Do I take pride in doing the whole cultural programme well? Will this tree grow very big and shade views or damage paved areas or drains? What does it take to keep assorted varmints-opossums, crows, blackbirds, bullfinches, rats, voles, rabbits, wandering children, etc.-away from the fruit (and bark), and realistically, am I likely to do what it takes? Will the tree start fruiting before I am likely to leave this address? What landscape values (form, blossom, fragrance, foliage, fruit color) does the tree have, and how important is that to me and my 'significant other'? Am I looking for particular health benefits in growing some of my own fruit, and if so, which fruits will deliver those benefits? Am I looking for particular connoisseur taste experiences in growing some of my own fruit, and am I willing to give up productivity if the best variety is poorly productive? 'Growing all my own fruit' is a dream, but an impractical dream even on the basis of there not being enough daylight hours in a week to accomplish such a task, so what are the best strategies-very early and very late varieties when market prices are high? Grow only the species such as Mayhaw or Casimiroa that never appear in the market? Grow a lot of one fruit very well and can/bottle it? A mixed strategy?
The answers to many of these questions are found in dwarf fruiting trees and in varieties that cannot (for a variety of reasons) be grown commercially. It's a delicious challenge, and a very personal one, because everyone's situation and motivation is different.
These notes are intended to help you decide how much of your food you would like to grow, now, or in the future.
Plant Hardiness Zones JJJJ This Agriculture Research Service map not only tells you which hardiness zone you are in, you can zoom in on any part of the map, or go to your individual state. State or zoom in maps also give you typical cold hardy plants, and align the cold hardiness information to a typical
ACTINIDIA-See 'Hardy Kiwifruit' and 'Kiwifruit'
ALMOND-See 'Nut, Almond'
Growing Wild Annona species JJJJ from the Center for New Crops & Plant Products, at Purdue University Site, an extract from Julia Morton's Book 'Fruits of warm climates'. Discusses and describes Annona senegalensis, with a little on Annona montana. Also covers origin and distribution, uses. Concise, informative. 1 good photo of A. montana fruit
APPLE- The undisputed King of all fruit for the Urban food garden. Apples are reliable and heavy croppers (usually), and are a fruit that everyone likes. Most importantly, they start bearing very quickly-within 2-3 years of planting for the most dwarf apples, and within 4-5 years from planting for the semi-dwarfs (They will bear earlier than this, but it is best to pull the fruit off and encourage growth at first). The range of flavors is the most extensive and complex of any fruit, encompassing perfumed, anise, honeyed, spicy, and with a wide range and combination of sugar levels and acids. Many superbly flavored cultivars, such as 'Telstar' or 'Freyburg', won't stand shipping, or become too easily damaged if they are properly tree ripened, and so only the home gardener is able to enjoy these taste sensations. Espaliered trees should be on a semi-dwarfing rootstock such as MM106. Small free standing bushes can be created by buying a tree grafted to a very dwarfing rootstock such as MM9. These mini trees definitely need staking, with the stake driven well into the ground at the time of planting. Dwarf trees, either espaliered against a wall or fence, or as small bushes, are the only game in town for the small garden of the urban Hominid. Varieties that bear on short 'spurs' are also desirable, as they are naturally smaller. Cordoning apples is not worth the effort unless they are varieties that spur freely, and are on a slightly more vigorous rootstock (such as MM106).
Apple blossom is a lovely sight, and the natural columnar spurring types such as 'Polka®' have a particularly valuable form for use in landscaping.
The two major problems are codling moth and bird damage. Moth can be confused by placing pheromone lures around, and birds can be netted out of the tree, or a variety of cunning and reasonably priced commercial bird scare devices can be tried. Some apples are subject to some quite damaging fungus diseases unless they are sprayed; however, there are disease resistant varieties, and most varieties will get by with indifferent attention to copper sprays so long as the trees get fertilised and mulched and watered in hot dry weather. Most of us move house so frequently that by the time a tree is perhaps badly affected, we will have moved anyway.
Conversely, remove badly diseased trees you may find in a property you move to and start with healthy new stock-but don't plant them in the same place as the old trees were removed from.
The kind of apple or apples should be decided by the purpose you have in mind-cooking or fresh eating-and what you like. Some like complex apples with high acid and high sugars, such as 'Cox's Orange', others like perfumed sweet apples with low acid, such as 'Gala'. In the flush of the season, apples are relatively cheap, so a good strategy is to grow an apple that is simply not available, and that has superb eating qualities. Paradoxically, even common commercial varieties can reveal extra sweetness and depth of flavour when they are allowed to hang on the tree longer than would be commercially feasible, and when their soil is amended with lots of organic material and flavor promoting materials such as seaweed and fish manure leaf sprays.
Virtually any soil will grow apples, but light or sandy soils need to be mulched and watered in summer, especially if the weak-rooted MM9 rootstock is being used. The trees need to be kept healthy with good nutrition, adequate sunshine, mulching to suppress weed competition, and summer watering - an apple tree is said to need at least 20 healthy leaves to mature one fruit. It is advisable to keep pruning to a minimum, but any pruning that needs doing should be done in summer, even if you have to sacrifice a few fruit. Prune the newly grown summer laterals back to 3 or 4 leaves, cut vigorous shoots right back, and when necessary, shorten main branches to a downward pointing bud or spur. Take out the occasional larger branch when necessary to keep the tree open and uncrowded, and prune back some excessively long spurs. Some apples are 'tip bearers', and for these kinds, pruning all the laterals means few fruit next year! Prune them in winter. Only the strongest laterals should be pruned - to about 6 buds. The leaders should also be cut back by about a third. All in all, 'tip bearers' are not as well suited to the small garden. Spray with copper when half the leaves have fallen and in spring at bud burst. Winter pruned trees are much more likely to get a fairly serious disease called 'silverleaf' unless each cut is treated with a top quality wound sealing paste, or unless the tree had been vaccinated against the disease early in its life. Some apples get into a pattern of bearing heavily every second year, with little or nothing in the in-between years. This 'biennial bearing' is difficult to correct. Sometimes hand thinning the fruit when it is newly set will restore a more regular annual pattern. Thinning gives better sized apples anyway. There is often a natural drop of small fruitlets, and once this has passed, it is a good idea to thin the apples to about 4 inches/100mm apart.
Apples for cool summers and mild winters-Gravenstein, Akane, Chehalis, Liberty, Jonagold.
Disease resistant varieties-Belmac, Prima, Primevere, Priscella, Redfree, Jonafree, Liberty.
General apple culture.
Alphabetical list and description of apple cultivars.
APRICOT Prunus armeniaca- Home grown apricots can be so sweet and flavorsome they find every unfilled cavity in your teeth! Tree ripened fruit of the most flavorsome cultivars are a connoisseur delight of the highest order. The main challenges are to keep birds away from them, and in warmer areas, to get good fruit set. They require less winter chilling than most peaches, but, paradoxically, often drop their buds following a warm winter and early spring. Equally, because they flower very early in Spring, the blossoms can be damaged in locations that tend to trap frost in pockets. Apricots really need reasonably free draining soil, unless they are grafted onto plum rootstock. Many varieties of apricot are self fertile. However, a pollenizer will increase fruit set.
They are reasonably attractive in bloom, altho' not quite as showy as most peaches. As they bear fruit on short spurs, they don't need the regular fairly drastic yearly pruning that peaches and nectarines do. Most pruning can be done in summer, after fruiting, and is aimed at controlling size and form, removing old played out spurs and encouraging some new growth for future spurring.
Birds love apricots, and netting the tree is difficult, given its size. This makes dwarf cultivars an interesting proposition. In addition, like all stone fruit, apricots are subject to 'silverleaf' fungus disease, and 'brown rot' of the fruit. Drier climates have far fewer problems with fungus than wetter areas, and there apricots are regarded as almost trouble free trees.
All in all, apricots are immensely rewarding, but because selecting the right variety for your local climatic conditions is of the highest importance, and the fruits have to be protected from varmints, apricots are best regarded as a must for those drier and cold enough but not too cold areas where apricots fruit well, but an uncertain bet in late frost prone, or humid, or very warm areas.
Blenheim-'Royal'. the medium large fruit are sweet but with good acid balance, and firm fleshed. Highly productive tree, and the fruit hold their shape well when canned/bottled.(US, UK, NZ) Blenheim is a moderate chill variety. A lovely photo of the fruit is on the Sierra Gold Nursery web site.
Jordanne-is a very large, high-colored apricot with very good flavor, but it needs a pollenizer (US)
Newcastle-Small, round yellow skinned fruit with soft texture. The tree is large and vigorous, but is subject to disease, especially in humid areas. Newcastle is a low chill variety.(US, NZ)
Newcastle Early Seedling- said to be an improved 'Newcastle'-earlier, better adapted to warm, low chill areas.(NZ)
Sundrop- main commercial variety. Fantastic looking fruit, but not exactly tops in sweetness & flavor. (NZ)
Golden Amber-the fruit are large, fine grained, yellow fleshed, firm, with excellent flavor. The late season fruit have the advantage of ripening over an extended period.The trees are upright, vigorous, and highly productive.(US)
'Goldrich'(US), 'Perfection'(US), and 'Rival'(US) need another variety to act as pollenizer. 'Rival' will pollenize all the others.
Goldstrike-exceptionally high colored flesh, very firm, and is acidic unless fully tree ripened. Needs a pollenizer.(US)
'Puget Gold'- the cv. best adapted to areas with cool summers and mild winters where apricots are not generally successful (US)
Dwarf apricots- such as 'Moonglow'(US) and 'Sungold'(US), are both sweet, if not as richly flavored as standard cultivars, but both require a pollenizer. Which happens to be each other.
Lower chill apricots-in the very warmest parts of the warm temperate zone, even these may not suceed, or only in some years-'Blenheim'(USA, NZ), 'Katy Kot'(USA, NZ), 'Gold Kist' (USA), 'Newcastle'(USA, NZ), 'Newcastle Early Seedling'(NZ), 'Trevatt'(NZ).
APRICOT-PLUM HYBRIDS These very exciting hybrids between the two species are mainly the work of Zaiger Genetics in the USA. Pluot® is a trademark name for varieties derived from complex interspecific hybrids between plums and apricots. Generally, a 'pluot®' is a cross between a plumcot (P. armeniaca x P. domestica) and a plum. Thus it usually has 75% plum genes and 25% apricot genes. Reflecting this, Pluots have smooth skin like a plum. As already mentioned, plumcots are a straight plum/apricot hybrid. An aprium® is also a trademark name for varieties derived from crosses between plumcots (P. armeniaca x P. domestica) and apricots (P. armeniaca). Because this results in 75% apricot genes and only 25% plum genes, the fruits are scantly covered in a very fine fuzz, as are apricots.
One of the features of these hybrids is that they are very sweet, and have complex and excellent flavor.
Plants grafted on 'citation' rootstock are semi dwarfed. The only real drawback has been sorting out pollenizer for these very new fruits. 'Dapple Dandy' has been suggested as a pollenizer for some of them, and the ubiquitous 'Santa Rosa' for Dapple Dandy itself.
Dapple Dandy (Plumcot)-pale greenish yellow skin with distinctive red dots. The firm flesh is creamy white streaked with crimson, and is sweet and highly flavored. It is a very useful pollenizer for other apricot-plum hybrids.(US)
Flavorella (Plumcot) Early season.Flavorella is a medium sized, translucent golden yellow skinned fruit, with a slight red blush and very slight fuzz.It is firm, juicy, and with a very good flavor. The tree is spreading and a pollenizer is required.(US)
Flavor Delight (Aprium®)
Flavor King (Pluot®)-Late season.F.K. has large attractive fruit, with yellowy red sweet, perfumed flesh. The moderately spreading tree is mid to late season blooming, an advantage in areas prone to late spring frosts. A pollenizer is required.(US, NZ)
Flavor Queen (Pluot®)-Mid late season. F.Q. is medium to large sized, has yellow skin and sweet, juicy, yellow flesh of excellent flavor.The fruit hold well on the tree, a useful advantage for extending the season. F.Q. blooms early, so needs a pollenizer that also blooms early. (US)
Flavorich (Pluot®) Late season.The black fruit are large, with orange, sweet flesh of excellent flavor.The moderately spreading tree is mid to late season blooming, an advantage in areas prone to late spring frosts. A pollenizer is required.
Flavor Supreme (Pluot®)-red fleshed, early, and with better flavor than early red fleshed plums.(US)
Flor Ziran 'Black Apricot'-(Plumcot)-dark purple skin, tender, juicy, fine grained orange flesh somewhat suffused with red. The tree is vigorous.(US)
Plum Parfait (Plumcot)-Early season. The medium sized fruit are dark yellow heavily blushed with red, the flesh is dark yellow, streaked red at the freestone pit, and with very good flavor.The tree is naturally relatively small (3M/10 feet) and spreading. It has the twin advantages of being self fertile and low chill.(US)
ASIAN PEAR-Pyrus serotina 'Nashi', 'Misunashi', 'Apple Pear', 'Sand Pear', 'Water Pear'. These are fruit that look more or less like apples, but have somewhat pearlike flesh, are extremely juicy, with little acidity and moderate to high (depending on the variety) sweetness. Some cultivars have rather coarse and gritty flesh, hence the name 'Sand Pear'. These cultivars are now not much grown, for obvious reasons. They can be grown anywhere apples succeed and where there are no late spring frosts to damage the blossom. Like the European pear, they are susceptible to fireblight. Commercial Asian pears can be pretty tasteless. They flower a little later than stone fruit, and just before most European pears, altho' European pears whose flowering period overlaps will pollenize Asian pears.
Shinseiki (US, NZ) is usually recommended as the pollenizer for most cultivars. Early season fruit ripen in early to mid summer, mid season are mid summer to late summer, and late season ripen late summer to early autumn.
Shinsui (US, NZ) is early season, small to medium sized, russet brown, juicy, very sweet (often over 15% brix) and moderately gritty. The fruit only keep about 4 days at room temperature, and around 8 days in the fridge. Its best pollenizer is 'Nijisseiki', then 'Shinseiki' or 'Hosui'. The tree is extremely vigorous, and doesn't crop as heavily as some of the other varieties. It's virtue is it's earliness.
Kosui (US, NZ) is early, with greenish gold skin, medium sized, crisp, very sweet, very juicy and tender fleshed. Kosui seems to maintain its sweetness over a wide range of growing conditions. Kosui can be cross pollenized by, and will pollenize, 'Nijisseiki' and 'Hosui', but it is poorly compatible with 'Shinsui' and vice versa. 'Shinseiki' is also an effective pollenizer. It usually sets very heavy crops. Kosui has rather brittle branches, so it should not be planted in a very windy position. The tree is not too vigorous. Kosui is relatively susceptible to disease, and in humid areas it is inclined to have some degree of branch die back.
Hosui (US, NZ) is rather a medium to large golden brown mid season variety with prominent lenticels on the skin. It is highly flavored, sweet and juicy, except in areas with cool summers, when it tends to be acidic and with low sugars. The tree is vigorous, medium to large sized with willowy, drooping branches. It flowers heavily. It may need more winter chill than some parts of the warm temperate areas may provide. Hosui will store for months in the fridge. It has limited self fertility, but sets well with 'Nijisseikeiki', 'Shinseiki', and 'Shinsui'.
Shinseiki ('New Century') (US, NZ) is mature mid season, and is a medium sized yellow-green medium to large smooth skinned fruit.It is firm fleshed, crisp and juicy, but fairly mediocre flavored. The tree is upright and moderately vigorous. Pollenizer are 'Shinsui' and 'Kosui'. Shinseiki is a good pollenizer for other cultivars.
Nijisseiki ('Twentieth Century') (US, NZ) is a late season variety. It is medium sized, yellow-green skinned, just sweet but rather flavorless. 'Kosui', 'Hosui', 'Shinseiki' and 'Shinsui' will pollinize it. It is one of the most productive varieties of Asian pear. Like 'Hosui', it may need more winter chill than other varieties. The fruit can store for months in the fridge. The tree spurs well, and is easy to manage.
ASIMINA Asimina triloba 'Papaw', 'Pawpaw', 'Asimoyer'. This relatively small (to about 6 metres/20 feet) deciduous North American tree is the solitary temperate climate member of a family of tropical and subtropical fruiting trees, the best known of which is the 'cherimoya' or 'custard apple'. The British, Australians, and New Zealanders call the tropical papaya fruit 'pawpaw'. The papaya is no relation whatever of Asimina. To avoid this cultural misunderstanding it is best to simply call this fruit 'Asimina'. The fruit are 75mm-125mm/3-5 inches long, green skinned, and carried in clusters of two to three vaguely stumpy banana shaped fruit. The smooth pulp is browny yellow to almost orange, depending on the variety, with a double row of smooth dark brown roughly lima bean sized seeds. The flavor is variable, according to the seed source, but in the best types it is tropical, intense, and sweet. The fruit are an excellent source of vitamin A and C, and its mineral content is as good or better than many common fruits such as apple, peach or grape. The fruit ripen in autumn, and the tree is highly productive if the right pollinating insects are present. This is definitely a tree to consider, but it does come with some difficulties. The fruit is highly desirable, it is unlikely to be commercially available because of its short shelf life once ripe, the leaves are long, drooping, and elliptical, giving an almost tropical look, the tree is hardy once established, it does well in shade and tolerates sun; but it tends to send out numerous suckers, which while not vigorous-the tree is slow growing-are annoying. The tree must have some shade for the first 3 or four years of its life. Unless you have one of the few self fertile cultivars, you will need to plant two for cross pollination. In some areas, and in some countries, such as New Zealand, there seems to be an absence of the correct pollinating insect-the trees flower well, but set few or no fruit. The very warmest parts of the warm temperate zone, where it starts to tip into almost subtropical, may not have enough winter cold to trigger flowering and subsequent fruiting. Planting grafted plants, or suckers from known varieties, is a good idea, as the quality of the fruit is guaranteed. The many different cultivars include 'Davis'-excellent flavor, large fruit, productive; 'Sunflower'-good flavor and size, partly self fertile; 'Well's Delight'-very large, excellent flavor.
Pawpaw - JJJJ An article from Purdue University's New Crop Proceedings (USA). The information is slanted to commercial potential, but it is rich in information on the botany, distribution, nutritional content, propagation, varieties, and growing conditions for this fruit.
AURORABERRY- Looks like a blackberry, it has large, firm black shiny fruit. Flavour is very good, 'perfumy', clean taste, with none of the sulfur and bitter notes that boysenberries, for example have. It is blander than an olallieberry, and can be acidic if it isn't fully ripe. This is a fairly early bramble, as it ripens in early summer. It is a weaker plant than other brambles, which is an advantage in all areas except wet and humid areas where brambles are subject to disease. All brambles need to be tied up on wires, free standing, against a fence or a wall. This doesn't suit every situation, especially as they really need good sun to ripen the fruit and minimise disease. Not unnaturally, thorned brambles such as this can be a nuisance in small spaces. Otherwise recommended.
AVOCADO-Persea americana A little
more frost tender than citrus, and must have either very free
draining soil, or on slow draining soils, large raised beds on
a raised slope or hill with massive amounts of permanent
organic compost mulch (at least 60cm/2 feet deep, but not
piled against the trunk); must also have plenty of sun.
Avocado need shelter from the worst wind. The trees are
handsome, altho' in cool and wet winters they may get a bit of
root rot and look a bit threadbare until warmer drier weather
arrives. A deep organic mulch speeds their recovery. The young
trees need to be covered against frost in the more frost prone
parts of the warm temperate zone, but once they get a bit of
size on they recover well from frost damage as long as the
trees were healthy in the first place. Avocadoes don't
need spraying, and apart from providing vast organic mulch in
poorer drained areas, only require regular fertilising and
judicious pruning to regulate size. The only caveat is that a
nasty fungal disease, 'Dothiorella canker', affects the trunks
and/or fruits of avocadoes in the wetter coastal parts of
California, and there is little that can be done about it.
The avocado is a large tree, and there are no truly dwarfing rootstocks at this time, altho' there is one dwarf variety. Heavy cropping on trees such as 'Reed' and 'Fuerte', plus pruning, can keep the trees relatively small. But even then, you should allow for a 'footprint' of a circle of about 4M/13 feet in diameter. The avocado is the premier hominid food, and home grown fruit can be richer in flavor than shop bought. A grafted tree in good conditions will commence fruiting in about the third year from planting out. In very warm areas the ripening dates may be a month before those listed below. In fact, avocadoes can often be picked earlier than the dates listed, and they will ripen satisfactorily, but they will be insipid, tasteless, watery, and lacking richness.
Bacon-excellent pollenizer variety for Hass & Reed, relatively cold hardy, good cropper, mid winter to spring fruiter, but mediocre to poor taste, and very vigorous and upright.
Fuerte-fruits in winter and carries through to the end of spring, very high quality fruit, without peer for its season. Small spreading tree (for an avocado), thin skin, can get splits and rots at the base, fruit set without a pollinator is very poor indeed. Hass will pollenize it and vice versa.
Hayes-Fruits from spring to mid summer, a bit earlier than Hass. Very high quality, slightly larger than Hass; the thick skin makes it a bit harder to tell when it's ripe. Skin colour change is the best guide.
Hass-Excellent quality, ripe from around mid spring to autumn. They are at their home grown best in summer, but commercially, large fruit are picked in winter and early spring and artificially ripened. The skin is pebbled, green turning black, and fairly thick. Starts cropping at an early age. Upright tree.
Reed-ready before Fuerte, from summer (best quality in late summer onward) to early winter, but will store on the tree right through winter in some areas. Large round fruit, very high quality. Thick skinned, so it is a bit hard to decide when it is ready-the stem end flicking off cleanly is the best test. Reed kicks into fruiting at a fairly young age, bears very heavily, and like Hass, is fairly upright. It is late flowering, so the flowers are unlikely to be damaged by spring frosts.
Wurtz-A good quality summer avocado, Wurtz's main feature is that it only grows to about half the height of most avocados. It has weeping foliage, low vigor, and is sometimes promoted as a dwarf avocado.
Zutano-Ready from mid winter onward, poor quality fruit, and relatively thin skinned. Its virtues are that it is an upright tree, and it is relatively cold and wind tolerant.
Reed, Hass and Fuerte are probably the top selections for home garden avocados in the warm temperate zone. In very hot and
humid areas, it is best to go for thicker skinned varieties to avoid fungal diseases affecting the fruit.
Avocado fact sheet. An excellent fact sheet (prints out to about 6 printer pages) at the California Rare Fruit Growers site, covering all aspects of growing avocados, plus notes on varieties. Written for USA conditions, but widely applicable.
Avocado Pollination Notes - a first rate overview of avocado pollination, written for New Zealand conditions, broadly applicable.
BABACO Carica x heilbornii var. pentagona The babaco is supposed to be a sterile hybrid between two 'mountain papaya' species, Carica pubescens and C. stipulata. The fruits are lemony, acid tasting, very soft, and extremely juicy. They have no sweetness whatever until about mid summer. At that stage, it is usually only the smaller fruits from the very top of the tree that are left. They are then fairly sweet, fragrant, and pleasant to eat. In the case of babaco, 'smaller' is a relative term. The smaller fruit are about the size of a tropical papaya, but fruit can be 30cm/12 inches or more long. The plants come into bearing the first year after planting, and have quite a tropical aspect with their head of lobed leaves atop 2-3 tall (2-3M/yds) trunks. The tree itself looks very dramatic when it is packed with the large green and yellow fruit. Like all mountain papayas, it is damaged by air frost, and in severe air frosts will be killed. Their shape makes them an ideal candidate for growing in the frost protected areas under the eaves of the house. They are relatively drought tolerant.
BANANA Musa acuminata and hybrids of M.acuminata x M.balbisiana
[= 'M.paradisiaca']. Bananas are a tropical herb, and
it is stretching the limits of their range to fruit them in
the warm temperate areas. But fruit they do, as long as their
needs are met. But the plants are slower to produce, less
robust, the flowers smaller, less bananas are set, and the
most 'tropical blooded' (those with purplish or pinkish
blushes to the leaf petioles) are either slow or unsuccessful.
Variety selection is particularly important. The banana
deserves to be popular for its productivity in a small space, its pleasing landscape qualities, and, of course, its delicious fruit. The fact is that the banana is a warm weather
plant. When the cold of winter comes on, it tends to yellow
somewhat, and the leaves get pretty tatty. In a warm winter it
looks pretty good, and ripens any green bunches that had
developed over summer. In a cold winter a bad frost will
severely injure the plant, but it will resprout from the
ground when warm weather returns. Bananas only really succeed in the warmest part of the warm temperate zone, but if they are tucked under the eaves of the house, their range can be extended.
It is the ideal crop for the small space gardener, as it makes best use of vertical space, is not too large, crops quickly, and the fruit are concentrated in one place-making for easy bagging against pests.
There is a species, Musa basjoo, the Japanese Fibre Banana, being touted as "the world's cold hardiest banana. It is hardy planted in ground to -3 degrees F. and with protective mulching, down to -20 degrees F". It is from Southern Japan, and is usually grown for the fibre in the leaves, rather than the fruit. The fruit are small and seedy, but edible.
The banana is a
water loving plant, and thrives with plentiful water in dry
spells and regular fertilising. However, as long as it is
fairly well mulched, it will still fruit with less than
adequate water, albeit the fruit may be smaller and less well
filled. Bananas are also greedy feeders-they have to be,
considering the weight of fruit that is regularly removed from
the clump. Spring growth is crucial. Good growth in the early
months makes for larger and better bunches. The point is to
keep the clump well watered and fertilised at this time, using
a complete garden fertiliser that has a bit of extra potash/potassium in it, as bananas need quite a bit of this element for their fruit. Regular light liming may be needed on
acid soils. In order to keep the resources of the clump
concentrated on fruiting plants, it is best to allow two
plants to fruit and have two replacements coming on. Remove
all other suckers that develop.
The naming and identification of banana varieties can be challenging.
The Bluefield/Gros Michel bananas are the bananas of commerce grown in South America and the Philippines, and grow very tall-up to 18 ft/5.5m. Being so tall, they are subject to blowing over when they are carrying their very heavy (to 100lb/45kg) bunches, unless propped up. From planting to harvest is about 15 months for this cultivar. Poorly adapted to the warm temperate zone, not recommended.
Williams/Mons Mari/Giant Cavendish is a giant mutation of the cultivar 'Dwarf Cavendish/Chinese'. It is 6½-13ft/2-4m high, the fruit are similar to 'Gros Michel', and they are ready about 12 months from planting. Both 'Bluefield' and 'Williams' are susceptible to the very damaging 'Panama disease' (Fusarium wilt). Fruits as well as any, but its height makes it susceptible to wind damage, and it is one of the poorer performing cultivars in warm temperate areas. Not recommended.
Dwarf Cavendish/Dwarf Chinese/Chinese is a common variety in home gardens because of its relatively small size (8ft/2.5m) and tolerance to a wide range of conditions, including cool. The bananas are essentially the same as 'Williams'. Susceptible to Panama disease. Needs warmer temperatures than the warm temperate zone can provide. Not recommended.
Dwarf Orinoco-Relatively cold tolerant, fairly reliable bearer with quite large (6 inch/150mm), very sweet, angular, bright yellow, astringency free, soft fruit with a rather distinct tough central 'core'. In cooler years the fruit can be rather thin, with dense flesh and moderate sweetness, but they are never astringent. Worth a place in a collection.
Sucrier/Pisang Mas/Honey, as its name suggests, is a very sweet banana; it has small fruit, thin skin, yellowy flesh, and small bunches (up to 28½lb/13kg). The plants are 8-11½ft/2.5-3.5m high, and prefer light shade. Planting to harvest is about 11 months under subtropical conditions. Unfortunately, this cultivar is not well adapted to cooler temperatures. Not recommended.
Lady Finger/Pome/Brazilian is relatively drought hardy, wind resistant, fast growing, up to 16ft/5m high, and has short, slightly angular (not plump) fruit which (because it has a little acidity as well as sugar) has a rich, true banana flavour, in bunches up to 66lbs/30kg. It has a tendency to have some undeveloped fruit in the bunch. It is susceptible to Panama disease. Planting to harvest is about 14 months under subtropical conditions-longer in warm temperate conditions. Because this variety is both tall and slow to come into fruit when grown in warm temperate areas, it must be regarded as a 'maybe', in spite of its exceptionally good flavor.
Sugar/Silk/Apple/Hua Moa-10 to 15 feet/3-4.5m high; the bananas are short and plump, very thin skinned, inclined to split and to tear off and fall when very ripe, very white fleshed, dense, sweet, without flouriness or sliminess, but astringent when not fully ripe. It is highly susceptible to Panama disease. It bears fairly reliably in warm temperate areas, and in spite of the splitting, its superior flavor and reliable productivity make it a recommendation.
Mysore/Misi Luki is up to 15ft/4.5m high, a vigorous plant with purply pink midribs, somewhat tolerant of drought and poor soils, with very tightly packed cylindrical bunches up to 77lb/35kg of slightly yellowish fleshed, pleasantly sweet/acid balanced, short and fat, attractive bright yellow 'bottle necked' fruit. The fruit are known to hold well on the bunch, even at full ripeness. This cultivar is the main commercial banana of India. It is susceptible to Panama disease.
Red Dacca is interesting because the tall (to 18ft/5.5m) plants bear average sized bunches of large, plump bananas that are washed purply pink when ripe. Planting to harvest is about 18 months for this cultivar under subtropical conditions. It is susceptible to Panama disease. Not recommended.
Pisang Rajah is an important variety in Malaysia and Indonesia. It grows up to 15ft/4.5m, and takes about 16 months from planting to harvesting the up to 55lb/25kg bunches of medium sized sweet bananas. It is relatively tolerant of wind and cooler conditions.
Blue Java is so called because the bunches of immature fruit are covered in a waxy bloom which gives them a blue-green cast. The plants grow to 13ft/4m; planting to harvest is about 14 months under subtropical conditions. The fruit have particularly long stalks, are slightly angular, and have white flesh. Susceptible to Panama disease. Fruits poorly in warm temperate areas, not recommended.
Ducasse/Pisang Awak is a particularly vigorous and hardy banana. It grows up to 16½ft/5m high, and has up to 77lb/35kg bunches of tightly packed, small bananas with a light wax bloom. Harvest is about 17 months after planting in subtropical conditions. This is the most important banana of Thailand. Susceptible to Panama disease. (Note: it is somewhat fertile, and if it is pollinated it may have hard, black seeds inside.) In spite of the seeds, worth trying.
Goldfinger-released in 1989, this banana was bred in Honduras specifically for the less favorable conditions of subtropical areas, so is definitely worth a try.
Banana varieties and planting instructions JJJJ About 26 edible varieties are described in tabular form, with a photo of the fruit, in the 'Aloha
The best one to grow may simply be your friend's or neighbour's. If you come across a banana you like, or its owner recommends, simply get a spade and dig out a sucker. With plenty of water in the hot weather, applying fertiliser regularly, and starting with big healthy suckers, it is possible to cut your first bunch within two years of planting. Once a clump is established, there will virtually always be one or two stems fruiting. Once fruited, the stem never flowers again, and needs to be cut down. It makes good mulch for the clump.
Banana sap dripping from a freshly cut stem or fruit stalk will stain clothes, so be careful. Cut the bunch when the first few fruits show the first sign of color (bunches can be cut when the fruit are green but the fruit must be 'plump' to have good flavor when they ripen up). They will ripen up very quickly once hung up inside in a warm, light place, and have very good flavor. Winter maturing bunches - fairly typical for bananas in the warm temperate zone - take as much as three weeks longer to ripen if they are stored in a cool dark place, and their flavor is often very poor.
BLACK CURRANT- see 'Currants'
BLACK SAPOTE Diospyros digyna- 'Chocolate
Pudding Tree', 'Black Persimmon'. A handsome tree with dark
green leathery leaves against black barked branchlets. Plants
will flower and set fruit if they are grown in warm, totally
frost protected situations, or if the winter has been
unseasonably warm. The fruit are supposed to be about the size
of a very large apple, but under warm temperate conditions
they are much smaller, possibly due to poor pollination, and
may only be the size of a plum. In the warm temperate area
they remain a collectors item, rather than a useful cropping
plant. Black sapotes are a relative of the persimmon, and the
flesh is similar in texture to a soft ripe persimmon fruit-
rather jelly like and soft. The flesh is chocolate colored,
and some claim it has the appearance and the texture of
chocolate pudding. The taste is moderately sweet, with no
great depth of flavor. The fruit retain their green color, but
soften when ripe, and should then be picked and left to become
very soft before eating. Trees can bear as early as three
years from planting under the most ideal conditions. Flowering is in autumn, and the fruit size up and mature in late winter/early spring. Given a large pot, these plants make a handsome patio plant.
There is a picture of the fruit at the 'Garden of delight' web site.
BLACKBERRIES Rubus ursinus-The thorny
wild blackberry has the most exquisite sweetness and floral
flavor. It is invasive, spreading, trailing, painfully thorny
and unattractive. The cultivated blackberry usually has stout, semi erect, easily managed canes that can be trained
to a fence or wall, very attractive large flowers, is
non-invasive, and nearly all are mainly or entirely thornless;
but the fruit, while much larger than its wild progenitor, very often lack sweetness and flavor. Blackberries start into
bearing virtually the year after they are planted. Like most
brambles, they are bird magnets, and realistically, have to be
netted. One of the advantages of the blackberry is that it tolerates partial shade. They are reasonably easy to grow, tolerating most soils, altho' sandy soils will have to be heavily mulched to keep them moist. In wet and humid areas it
can be subject to fungal diseases. Erect growing varieties
have the best disease resistance. Pruning is easy: immediately
after harvest simply remove the canes that have just fruited
and cut out any new canes that seem weak. Keep only about 8
new canes a plant. They can then be tied in tiers along your
wires or tied against a wall in a fan shape. In the summer the
new canes do need to have their ends cut off at about 2.4M/8
feet, to promote flowering laterals for the following spring.
These laterals can have excessive length pruned off (down to
about 30cm/12inches) in winter to make them easier to net, if
you want. With many brambles-especially vigorous trailing
types like boysenberry-it is a good idea to pick up the new
canes as they grow over spring and early summer and
temporarily tie them to a wire to keep them off the ground
and stop tip rooting. With erect and stout caned blackberries
this is not really necessary. Blackberries need little
fertiliser beyond some nitrogen.
Waldo-is very early, crops reasonably, has very good flavor, and is not too vigorous, but is thorny.(UK)
Ashton cross-is mid season, heavy cropping, very good flavor, but thorny.
Loch Ness-early to mid season, heavy cropping, with a desirable semi-erect, thornless habit; flavor good (for a thornless).(UK)
Thornfree-late fruiting, very productive, poor tasting fruit, subject to fungal disease in wet and humid areas. (US, NZ)
Other erect blackberries include Darrow, Cherokee, Cheyenne, Comanche, and Shawnee (US)
BLUEBERRY Vaccinium ashei, V. australe,
V.corymbosum-Fresh blueberries of the most flavorsome
varieties are a delightful experience; run of the mill
varieties are not worth bothering with. But-birds love
blueberries-they must be netted, or you will get very little.
In addition, they are rigorously demanding in soil type-either
it is a naturally highly acidic soil, or the soil will have to
be extensively amended with peat, acidifying agents such as
sulfur, and/or acidifying plant material such as pine needles
added as a mulch. Alternatively, container mixes for acid
loving plants can be used. Blueberries have a fibrous root
system, and will not tolerate the soil drying out. Conversely,
the soil needs to be reasonably well drained. Heavy
incorporations of peat to either sandy soil or to heavy soil
will help fix drying out in the one case, and poor aeration
and drainage in the other.
There are two main types of blueberry-'highbush', V.australe and V.corymbosum; and 'rabbiteye', V.ashei.
The highbush types grow to about 1.8M/6 feet, and are entirely self fertile. They need some winter chill, and fruit poorly in the warmest parts of the warm temperate zone. The fruit mature from early to mid summer.
Rabbiteye types are taller plants, are more tolerant of heavier and less acid soils, need less winter chill to flower well, and tolerate heat and drought better than the highbush types. Their fruit follows on the highbush types, maturing from around mid through to late summer. These are the types best adapted to the warmer parts of warm temperate areas. On the minus side, they are self infertile, so two varieties are needed for cross pollination, the berries are a little smaller, and the flesh texture perhaps a little grainy.
Providing its somewhat exacting requirements are met, you can expect light crops from your bush in the first few years, building to around 2.25kg/5lbs by the fifth year, and 4 or 5 kgs/approx. 10lb when the bush is mature. Pruning is not needed for the first 3 or 4 years, and is simple, a matter of removing about a quarter of the very oldest stems every year. Blueberries have variable autumn colors, depending on the cultivar. Some are yellow, some orange, and some red. Those with the strongest autumn colors have strong landscape value. Blueberries flower early in spring (don't plant them in a frost pocket or you won't get fruit), and the pendant white tubular flowers are very pretty.
Highbush Blueberry Varieties-
Earliblue-Early season. Large berries and good autumn color, rather low yields.(US, UK, NZ)
Bluecrop-Early season. Large berries, highly productive, orange and red autumn colors.(US, UK, NZ)
Nui-Early season. Large berries, moderately productive, very large fruit, good flavor, sometimes has a bonus light autumn crop.(NZ)
Stanley-Early to mid season. Medium sized berry, moderate yields, excellent flavor.(US, UK, NZ)
Berkeley-Mid season. Open and spreading bush. Very productive of very large berries. Relatively high chill requirement.(US, UK, NZ)
Herbert-Late season. Smaller bush, heavy cropper, very large fruit, one of the best tasting blueberries, unremarkable autumn colors.(US, UK, NZ)
Colville-Late season. Large fruit on a productive, vigorous bush. Holds its fruit well without dropping them near maturity.(US, UK, NZ)
Rabbiteye Blueberry Varieties-
Climax-Performs well in warm areas, producing heavy yields of good sized fruit.(US, NZ)
Delite-Mid season. Very vigorous (more than 2M/6ft 6inches), high yielding and very good flavor.(US, NZ)
Walker-Mid season. In good years it is a particularly sweet blueberry.(US, NZ)
Woodard-Mid to late season. The medium sized rather spreading bushes are particularly well adapted to the warmer areas. Woodard is large (for a rabbiteye, anyway), light blue, and has good flavor. (US, NZ)
Blueberry nutrient requirements JJJ Written by The Hort and Food Research Institute of New Zealand Ltd, this useful page covers the nutrient requirements, what sort of fertilisers are useful, nutrient disorders, and how to correct them. Commercially oriented, but still good for the home gardener.
BOYSENBERRY The boysenberry is a raspberry-blackberry hybrid, with 'Himalayan Giant' blackberry being one parent. The berries are acid, but sweeten if left to darken and become plump and turgid, at which point they fall off the vine at a gentle pull. However, boysenberries still have a very slight bitter and sulfurous note even when fully ripe. They start fruiting in very early summer and have a short picking season. The thornless variety is the best one to grow-altho' it should properly be described as 'semi-thornless'. One of the virtues of the boysenberry is that it is drought tolerant, relative to other berry fruit, and thrives on lighter free draining soils, where others fail. The boysenberry tolerates a wide range of soils. Boysenberries are not usually found in the marketplace as they are very soft when ripe, so if you want to eat fresh fruit you will have to grow them yourself. Boysenberries need a wire or fence to grow on, they need to be sprayed against fungus diseases unless you have a fairly dry climate, and they must be netted against birds if you are to harvest fully vine ripened fruit. Pruning is as for blackberry.
CARISSA Carissa macrocarpa 'Natal Plum' A very useful plant for the home food garden, because the small bushy and thorny shrub has attractive fragrant white flowers, won't form massive roots that can damage paved areas, and because it will remain fruitful even when trimmed to fit into a narrow space, such as a border. The small roundish fruit are about an inch/2.5cm wide and a bit longer. They are bright red, streaked with a darker red ground color. The fruit are variable, but most are mild, somewhat sweet, sometimes slightly astringent, with small seeds in the centre, and exude a harmless latex when cut. They have about the same vitamin C content as an orange.
CASANA Solanum (Cyphomandra) casana This plant is straight out of the wilds of the Andes and has never been selected or improved in any way. Casana is a single stemmed tree (a close relative of the tamarillo) with a small canopy of very large, hairy, heart shaped leaves at about 2.4M/8 feet. Large numbers of pointed oval 75mm/3 inch dull yellow fruit are carried in small bunches along the branches. The fruit are variable, according to the seed source; some are seedy, with strong 'off flavors' and rather dry pulp, others are moderately sweet, delicately perfumed, but with a slightly 'tinny' backtaste, and with juicy pulp. The best are pleasant to eat as a fresh fruit. The plants are dramatic looking when they have conditions they like. The soil must be well drained, as they are very intolerant of poor drainage. The plants are damaged by frost. It is only suited to the warmer parts of the warm temperate zone. It has the unique distinction of not just growing well in moderate shade, but of growing best in moderate shade, such as the shady side of the house. It is a greedy feeder on organic matter, and requires constant, even moisture. The plants will fruit in the second year if grown well, but are short lived-about 6 years at best. Casana will grow well in cold conditions but not frosty conditions. It is unlikely to do well in areas with very high summer temperatures (it is from the high Andean mist forests).
CASIMIROA Casimiroa edulis- 'Ice
cream fruit'. Related to citrus, but the fruit flesh is smooth
and fibreless and more akin to avocado flesh without the
oiliness. The fruit are variable, from about apple size
upwards, very sweet, and with very large citrus-like 'pips'
inside. There is anything from one to five of these very large
seeds in the fruit. Some cultivars are slightly bitter just
under the skin, and some have a particularly rich almost
'butterscotch' flavor. The fruit are nutritious, with good
levels of vitamins A and C. The fruit are rarely available
commercially, because the fruit just don't keep. The skin is
very thin, and on a very ripe fruit it will virtually rub off.
The flesh is very easily bruised when it is ripe. This fruit
is quite unique in its combination of sweetness (15-20%
sugars), unusual texture, and good flavor. The deseeded fruit
freeze well, and make a most excellent smoothee milkshake.
Freezing is a useful device, because they fruit in autumn
(some extend into winter), and well grown trees produce
prodigious amounts of fruit, which can create a mess if you
can't eat or give them away fast enough. Less frost
hardy than citrus. Casimiroas must have adequate water in summer to prevent fruit drop. Any reasonably well drained
soil will grow casimiroas. The tree tends to make rather long
droopy lank growth, but this can be cut back closer to the
trunk to encourage branching, and tipping soft new growth
regularly makes a much more compact and branchy tree as well.
Prune them after fruiting. They make a rather large tree (some
will grow to 10M/33 feet or more across), and the strong roots
can lift pavers and block drains if they are planted too close
to the house. They are about the same size as an avocado tree.
The chief problem is bird damage, but this can largely be
avoided by picking the fruit when firm when birds don't
trouble them. Picking the right time to harvest the fruit
takes some experience. Sometimes there is a slight shift to a
yellowish tone to the normally green fruit. Picked too soon,
and the fruit take several weeks to soften, and are rubbery
and inedible. Picked at the correct time and the fruit should
soften in 2-5 days and be fantastic. Some varieties of
casimiroa are smaller than others, but no attention has been
paid to selecting dwarfing rootstocks for these trees, altho'
it would almost certainly be possible to do so.
Pike-a small, well branched, almost weeping tree, Pike is well suited to the home garden because of its compact size.
Fernie-another naturally small tree (around 3M/10 feet after 10 years) with good flavored fruit and often only 1 seed.
Lomita-quite large fruit, the tree remains relatively small, the fruit have good flavor, and, unusually, will store for up to 2 weeks off the tree.
Mac's Golden-the fruit are large, the flesh yellow and with a particularly rich flavor.
Reinikie Commercial-particularly good sweetness and flavor, R.C. has yellow flesh and yellow skin when ripe, so it is easier to judge when to pick it, apart from anything else. It may need warmer temperatures at flowering than other cultivars.
CHERIMOYA- Annona cherimola A South
American small tree that bears medium to very large
bluntly heart shaped green fruit from mid winter to spring
(depending on variety). The flesh is soft, cream or white,
juicy, very sweet and complexly flavored. It is without a
doubt one of the most delicious fruits there are. It has
numerous bean sized smooth shiny black/brown seeds embedded in
the flesh. Trees and fruit are damaged by air frosts, but not
ground frosts. More tender than the casimiroa. The tree is
small, amenable to severe pruning, and can be relatively
easily espaliered. The trees are happy in light shade. The
trees can also be grown as a large bush by repeatedly cutting
back the vigorous summer shoots and stripping the tops of the
pruned branches of their leaves (the leaf buds are unusual in
that they are hidden underneath the leaf stalk, which has to
be removed to allow the bud to grow out). However, this may
have to be done regularly over summer, as the trees are
vigorous growers-and some cultivars, such as 'Bronceada', are
very vigorous. Cherimoyas are attractive trees in full growth
over summer, with quite large leaves. However, they lose
their leaves progressively over springtime, at which time they
look quite tatty. If the trees are pruned, they become quite
spreading, and as the wood is brittle, subject to branch
breakage. The worst pests are thrips, which cause an unsightly silvering on the leaves, and wood boring/girdling insects, which seem to be attracted to cherimoyas in
particular. They need good drainage, and like avocado, are
subject to rootrot. A thick organic mulch helps in marginal
soils. The fruit are easily damaged by frost, the skin
becoming blackened and splitting. They also sunburn easily.
The trees are self fruitful, but often set poorly due to the
lack of the correct pollinating insect. Fruit set and size is
increased dramatically if you can be bothered hand pollinating
the rather insignificant greeny bronze flowers. Most people
use a child's paintbrush to do the work.
A grafted tree should start fruiting within 2 or 3 years of planting out. Any grafted tree will have lovely fruit. Some cultivars have smoother flesh than others, or have a slightly resinous taste, or the flesh is whiter-but the difference is between 'delightful' and 'fantastic', so it doesn't matter. Cherimoyas are picked while still firm-usually when the green skin takes on a very slight yellowish tinge. They will ripen in the fruit bowl about 4 days after picking.
Bronceada-extremely vigorous trees that must be pruned or their branches tend to break. The fruit are very large, and of fine flavor.(NZ)
Burton's Favorite-a medium sized fruit with pure white very smooth flesh and superb flavor (NZ)
Pierce-vigorous tree, sets very good quality fruit without hand pollination.(US)
Cherimoya fact sheet. A review of everything you need to know about growing cherimoya in the home garden, at the California Rare Fruit Growers site. Covers botany, culture, brief notes on 16 varieties, further reading, and more. Written for Californian conditions, but widely applicable. Highly recommended.
Growing Cherimoya - from the Center for New Crops & Plant Products, at Purdue University Site, an extract from Julia Morton's Book 'Fruits of warm climates'. Covers Description, Origin and Distribution, varieties, suitable climates and soils, propagation, culture, harvesting, pests and diseases and more. Concise, informative. 3 good photos of fruit.
CHERRY Prunus avium- Bird theft is problematical, but cherries are easy care and can be very productive of premium taste treats. Large trees. Usually need two to tango. In some areas they are susceptible to brown rot, which badly damages the fruit. In humid, maritime areas, cracking after rain can be a big problem, most particularly in the firmer varieties, rather than the softer types. Because cherries mature early in the fruit season, they can also be damaged by hail. Sweet cherries need about 1000 hours of winter chilling. 'Bing', 'Lambert', and 'Napoleon' have the longest chilling requirement, and are not suited to the wtz as a consequence. Stella is somewhat self fertile, and probably better adapted to warmer areas than most temperate cultivars. Birds are a real problem, and until a reliable dwarfing rootstock is found, the best the home gardener can do is to grow cherries trained as a fan against a wall, and then net them. This requires a high degree of skill, effort, and dedication. So most of us will either choose another fruit, or enjoy the blossoms without high expectations of beating the birds to the fruit. Tangshe is self fertile and fruits very well in warm areas; the fruit are pleasant but not as good as temperate cultivars. With the exception of 'Stella' and 'Compact Stella', all sweet cherries need a pollinator to bear well. The 'Stellas' seem to flower quite well in the warmer parts of the wtz. Generally, dark colored varieties will pollenize dark varieties, and light colored varieties pollenize light varieties. Sour (pie) cherries bloom later than sweet cherries and bear heavily without a pollinator. For cool summers and mild winter areas, try Van, Angela, Hardy Giant, Emperor Francis.
CHESTNUT- see 'Nut, Chestnut'
CHILEAN CRANBERRY- (Myrtus ugni)- highly recommended - knee high little shrub that bears heaps of sweet, resinous, aromatic fruit, about blueberry size or less. Nothing quite like it, a late summer treat. Frost hardy, easy to grow, productive. It is never found in the markets and is probably chock full of health promoting substances.
Culture JJJ A brief fact sheet
on the Chilean cranberry, which this nursery insists on calling
the Chilean 'guava'
CHINESE PEAR- see 'Asian Pear'
CITRUS- listed under their fruit type, e.g.'grapefruit', 'kumquat', 'lemon', 'lemonade', 'lime', 'orange', 'mandarin', 'tangelo', 'tangor' etc.
CHOKEBERRY-Aronia sp. A native of northeast USA, this small deciduous shrub is grown commercially in Northern Europe for the (supposedly) health giving properties of the mild and pleasant, somewhat blueberry like berries. The foliage is very ornamental in autumn. Unusual and hard to find; if you are a health freak, this is an easy to grow plant. Requires two for cross pollination and berry set.
CRANBERRY- Vaccinium macrocarpon These small wiry stemmed bog plants live in an acid peaty soil and produce oval, approximately grape sized, sour red fruit. The soil should be prepared as for blueberries but even more acidic, organic, and wetter. This can be arranged by digging a hole and lining it with plastic to create an artificial bog. Fill the lined hole with peat or a mixture of peat and lime free soil, and plant your cranberry in that. Mulch heavily with peat. You should obtain a yield from a well grown bed of about 0.5kg per square metre/1 lb per square yard. Cranberries don't need pruning, but their rambling wiry stems may need cutting back every now and then. Cranberries keep very well in the refrigerator-up to two months-so the fruit can be progressively stored as they ripen over summer. Cranberries form a low mat, and so can be incorporated in borders or raised gardens, and to that extent are well suited to small space gardening. Their delicate little pink spring flowers are charming, and the fruit attractive; they require no pollinator, seem to fruit satisfactorily in warm temperate areas (although there may be cultivar differences), and seem to be unaffected by pests and diseases. The only question that remains is why grow the acid little devils, when you can buy canned cranberries and cranberry juice quite cheaply?
CURRANTS- Easy to grow, packed full of
vitamins, don't take up much space- as long as pollination is
good and you throw a net over to keep the birds off, you'll be well rewarded.
Black currant (Ribes nigrum)-There is quite a lot going for the black currant. It is a 'natural tonic in a berry' due to its high vitamin content, it is more tolerant of wet soil than most other berry fruit, it is more adaptable to soil acidity, the bushes are small, they bear heavily in suitable climates (4.5kg/10lb is normal for a healthy well grown bush), they come into bearing within two years of planting, they are not as attractive to birds as other berry fruit such as raspberries, and they are easy to prune (cut off a third of the shoots every winter at about 50mm/2 inches from the soil level-the oldest shoots). On the down side, they are early bloomers, and therefore subject to damage in frost pockets, they are not particularly attractive looking plants, and the fruit are only sweet enough to eat as a fresh fruit if the plants are in full sun. They can be affected by a serious disease called 'reversion disease', but this is just bad luck. In the warmest part of the warm temperate areas black currants will usually fail to fruit through lack of winter chill.
White currant (R.sylvestre)-uncommon, similar to the black, but not! (black, that is). The comments under 'Red Currants' apply equally to white currants.
Red currant (R.rubrum)-the best selection for warmer areas, with cvs. such as 'Amgot' producing mightily. Red currants produce a lot of fruit (4.5kg/10lb is normal for a healthy well grown bush), and unlike blackcurrants, can be pruned into particular shapes, such as cordons (yielding around 0.5-1kg/1-2lb) or fans. Red currants are not subject to reversion disease. Red currants are easy to prune-in winter cut laterals back to one bud to encourage fruiting spurs, and cut out branches that have been fruiting for three years or so to allow a continuing growth of younger branches. The long 'strigs' of bright red shiny little fruit are attractive in themselves, and fan or cordoned bushes have architectural landscape value.
DATE - A Mediterranean climate palm, altho' it will grow well in the warmer parts of the warm temperate zone - without fruiting. Takes two to tango; not really a proposition given the size of the plant and the length of time to fruiting.
Dates - JJJJ from
Center for New Crops & Plant Products, at Purdue University
Site, an extract from Julia Morton's Book 'Fruits of warm
climates'. Covers Description, Origin and Distribution,
varieties, suitable climates and soils, propagation, culture,
harvesting, pests and diseases and more. Concise, informative. 4
good photos of fruit and the palm.
ELDERBERRY Sambucus canadensis- These stemmy bushes produce heaps of small black berries with a slightly soapy taste, whose main use seems to be to feed the birds. The big panicles of creamy ethereal flowers are very attractive in spring. The shrub/bushes have a habit of sending up suckers further out from the base of the plant, especially if the roots are cut at any time.
FEIJOA Acca sellowiana- At least as frost hardy as
citrus, perhaps more so. Clippable into a hedge or standard,
excellent grey backdrop plant, superb fruit in Autumn, fairly
frost tolerant. Every garden should have two. Two, because
apart from 'Unique', they require cross pollination. Feijoas
are harvested in late autumn and early winter - a time when
fruit buyers don't have a lot of choice, as stonefruit is
finished and local citrus hasn't really started. The fruit are
juicy sweet, excellent flavor, great eating fresh, and can be
canned/bottled. They don't travel or store well, so home
garden fruit are far superior. Some fruit sold in stores lack
sufficient pulp cavity, and have very thick skin. Such
varieties can be avoided by growing your own. Grafted or
cutting grown plants will bear within three years, given good
care. Seedlings take 5 years or more. Feijoas are useful
because they will bear well even in partial shade.
Unique- Early, self fertile, and productive, this is a feijoa of choice for the small space garden, even altho' the flavor is unexceptional. NZ
Coolidge-said to be self fertile, small fruit. US
Andre-said to be self fertile US
Gemini-Very nice flavor. It is sweet, with little acid. The overall rating is very good. The fruit are mostly very odd looking-longish, with a funny bulbous protuberance at the blossom end. NZ
Apollo-Very good flavor, a nice sugar acid balance; this variety rates as one of the best. The fruit are longish, somewhat torpedo shaped, as if they are not properly filled out at the stem end. Some, presumably better pollinated, are well filled out and oval. NZ
FIG Ficus carica- The perfect fig- soft, sweet,
sticky, flavorsome- comes from fruit almost fully tree ripened
(picked a day or two before perfection & allowed to fully
ripen indoors). Some fig trees can be pruned hard to keep them
nettable and very small. Birds are a major problem, so the
tree must be netted, or individual fruits bagged, if you are
to get any fruit. Many varieties of fig have been introduced
into the United States and Australasia over the years since
colonisation. Most were inferior, a few are stunning. Because
figs don't handle or store well, they are difficult to market
commercially. Therefore the home gardener has the advantage of
choosing any variety, no matter how soft, and maturing it on
the tree to the point of perfection. Figs ripen in late
summer/autumn. Some varieties have an early ('Breba') crop,
followed by a crop in late summer. In mild summer areas the
breba crop may be all that matures. Pruning to keep the tree
small often cuts off the breba crop anyway. Apart from the
birds, the biggest challenge with figs is pruning them hard
enough to keep the size down without losing too much fruiting
wood, and dealing to the inevitable basal suckering. Figs
won't tolerate waterlogging, and lengthy drying out of the
soil causes the fruit to drop or become dry.
Nomenclature of figs is muddled. Some cultivars have been mis-named, or re-named. Rely on a knowledgeable nurseryperson to sell you a fig adapted to your area, or take a cutting from a local high quality tree. The easiest care figs are the common fig varieties. One group of figs-'Smyrna' figs-only fruits if it is pollinated by a tiny wasp carrying pollen from another special kind of fig, the inedible 'Caprifig'. This makes fruiting for this type uncertain in a home garden situation, so cultivars from the Smyrna group are best avoided. In wet and humid areas it is common for figs to ferment on the tree because water gets in the 'eye' at the base of the fruit. In these areas it is wise to seek out a variety with a closed eye.
Brown Turkey-large, squat, translucent amber flesh, greenish brown with a basal purple blush. Not a great deal of flavor in cool conditions, but very good when the season is warm. Has the important advantage of being able to be pruned very hard. (US NZ AU)
Celeste-'Malta', 'Celestial'. One of the earliest figs, ready about mid summer onward, celeste is small, purplish brown, covered in a heavy bloom, has a closed eye, and is very sweet. Well adapted to moderate summer areas.(US NZ)
Excel-'Kadota hybrid'. A roundish, medium sized, yellowish green skinned fig with amber flesh and a rich, sweet flavor. Needs summer heat.(US)
Black mission-a purplish black fig with pink flesh, B.M. is medium to large, pear shaped, and has a breba crop in early summer followed by an early autumn crop if there is enough heat.(US)
Figs - an
outstanding database of fig cultivars, fruit description,
synonyms, and more.
GOOSEBERRY Ribes uva-crispa. Gooseberries are usually an acid fruit (although when fully bush ripened some are very mild and good eating out of hand), and are usually used for pies (originally they were used in sauces served with goose-the acidity was a counterpoint to the fattiness of the goose). The berries can be green, greenish yellow, yellow, pink, or red, smooth or with fine hairs. Gooseberries don't fruit very well in warm temperate areas, as there is often not enough cold to fulfill their winter chilling needs. Some varieties need less chilling than others, so fruiting is possible, especially at the cooler end of the warm temperate zone spectrum. You are also dealing with a very thorny plant (there are a few varieties with greatly reduced thorniness). Grown as a bush (preferably on a single stem), the plant is about 5 feet/1.5m high and wide. Gooseberries will grow well on most soils, provided they are not too wet, and there is plenty of organic matter incorporated in the soil. Gooseberries need a lot of potassium, so the fertiliser you use should be high in 'potash', or give additional potassium in winter (about 1oz/square yard; 34gms/square metre). Fruit laden branches can break if grown in a windy situation, so they either need a bit of shelter, or grow them as cordons. Single cordons can be grown 12 inches/30cm apart. The birds will eat your gooseberries unless you drape a net over the plants as they ripen. In temperate areas, bushes yield about 8lb/3.5kg and will keep fruiting for 20 years or more; a single cordon yields about 1-2lb/0.5-1kg. Expect less, and some poor fruiting years, in the warm temperate zone. In late summer prune all the laterals back to about 5 leaves, but don't prune the leaders. In winter cut the main leaders in half at an inward pointing bud or lateral (this helps overcome the gooseberry's tendency to droop). 'Glendale', a vigorous red fruited form, is better adapted to warmer areas than most.
Grape Cultivars JJJJ A list of 134 grape varieties in tabular form, organised to inform on specific attributes such as disease resistance, primary use, and other facets of interest to the home gardener.
GRAPEFRUIT Citrus paradisi- Grapefruit need more heat than oranges, and they generally don't perform well in the warm temperate zone except in the very hot, long summer areas. Grapefruit are available almost year round from the supermarkets, so there seems little point in trying to grow grapefruit outside the hot climatic areas that they do so well in. The rootstock that the grapefruit is grafted onto has an influence on the tree's resistance to virus diseases, root damaging nematodes, overthick skin, and poor soil conditions such as high calcium levels, or poor drainage. Your nurseryperson should be able to guide you to select the best rootstock for your local area. Provide adequate water in dry spells, feed them a little and regularly, and you will harvest very good fruit.
GUAVA Psidium guajava 'Tropical Guava' - The small tree
comes into bearing within a few years of planting out, it has
an attractive trunk and leaves, there are purple leafed forms,
it is trimmable, it makes a good hedge, and the flowers are
quite attractive. It is hardy, and undemanding as to
soil. There is a wide variety of fruit shapes and sizes to
chose from when selecting a guava variety. The best are the
large, yellow skinned, pink fleshed fruit. They are all an
excellent source of vitamin C, with a minimum of 40mg/100grams
of fruit, and a lot of variation up from this baseline
according to the variety. Guavas must have heat, and a fairly
mild, if not hot, winter. This makes them a worthwhile
fruiting propostion only in the very warmest and most frost
protected parts of the warm temperate zone. Fruit in the
merely warm parts of the wtz are resinous, never color well,
and lack sugar. Varieties available include Hong Kong Pink, Philippine White, Pear, Mexican Cream, Ruby, Indian Red, and Thai Maroon.
Philippine-yellow skin, white, soft flesh, sweet. Medium/large fruit.
Mexican Cream-bright yellow skin, cream, soft flesh. Large pear shaped fruit.
Ruby-X -Green skin, with pink, soft, flesh. Medium sized fruit.
Thai Maroon-Deep maroon skin, deep maroon flesh. The tree has purple leaves. Medium/large fruit.
GUAVA, CATTLEY, RED Psidium cattleianum 'Red guava', 'Strawberry guava', 'Purple guava'- A very useful plant for the home food garden, because it is a small bushy tree and won't form massive roots that can damage paved areas, and because it will remain fruitful even when trimmed to fit into a narrow space, such as a border. The trees are self fruitful, the small creamy flowers, while not showy, are not unattractive, and it is cold hardy and relatively drought tolerant. Cattley guavas will start fruiting about the second year after planting out. Each roughly 8 gram berry contains more than 3.2 mg vitamin C. The fruit are about grape sized, sweet, slightly resinous and aromatic. Fully ripe fruit turn deep purple, and soon drop from the bush. The bushes are exceedingly productive, and become handsome upright small trees. They require little pruning, and can be shaped for convenience. The fruit are usually ripe in autumn.
GUAVA, CATTLEY, YELLOW Psidium cattleianum var. lucidum 'Yellow guava'- A shrubby tree, often smaller than the red cattley guava, with similar, but deep yellow, fruit. Like the red cattley guava, a very useful plant for the home food garden, because it is a small bushy tree and won't form massive roots that can damage paved areas, and because it will remain fruitful even when trimmed to fit into a narrow space, such as a border. And like the red cattley, as rich a source of vitamin C. The flavor is similar, altho' perhaps not as complex. Fruiting is as for the red cattley guava.
GUAVA, COSTA RICAN Psidium friedrichsthalianum 'Cos guava'- A rather frost and cold tender species of guava with small acid fruit that performs very poorly in even the warmest parts of warm temperate areas. Strictly for collectors.
HARDY KIWIFRUIT-Actinidia arguta,
A.kolomikta, A.melanadra, A.purpurea, A.eriantha and
others. 'Tara berries', 'Baby kiwifruit'. There have
been many different 'wild', unimproved but still edible,
species of kiwifruit introduced to the West from China and
Russia over the last fifteen years or so, altho' surprisingly,
very few are available. They vary in edibility from
'famine-only food' to very nice, with most species being very
nice-sweet, sometimes fragrant, usually soft green
fleshed, and pleasant. A.eriantha has astounding
levels of vitamin C, but unfortunately is unpalatable, being
peppery tasting. However, most species have very good levels
of vitamin C. Some species are very cold hardy and thus
recommended for temperate areas, but paradoxically, some
(especially A.arguta) have exceptionally good bud
break in spring-better, in fact, than their much larger warm
temperate cousin the 'kiwi', and so are very successful in
this climatic area. The vines are remarkably free of disease,
and the green fruit seem to be ignored by birds-presumably on
the basis that they look unripe. Their fruit is generally from
cherry to about large grape size, depending on species,
variety, and how well pollinated the flower was. The fruit are
completely smooth, and the skin is edible, unlike the
commercial 'kiwi'. The fruit of A.arguta is sometimes
marketed, but is still not readily available. These vines need
reasonable drainage and wires to grow along or a pergola to
grow over. They do need to be pruned every year, and A.arguta,
in particular, becomes a dense mass if it isn't dealt to.
Pruning is easy, pruning back to two buds at the base of the
current seasons growth when the plant is dormant. A few
cultivars are self fertile, but others must have a male plant
for pollination (the sexes are on different plants). The fruit of self fertile varieties are larger in the presence of a male plant.
A.kolomikta-'Kishmish'. Does best in light shade, which makes it a particularly valuable plant. After about 4 years, the leaves of some plants may develop a natural purple and cream leaf variegation, which is quite attractive. The A.kolomikta cultivar 'Ananasaya' ('pineapple') comes into bearing early and bears very well.
Actinidia arguta-'Bowerberry', sometimes called the 'Tara berry'; this latter name may well end up as the generic name for all the small fruited hardy kiwifruit species. The fruit are among the largest of the Tara berries. The vines are vigorous and prefer full sun, altho' they will tolerate some shade, and the species is very widely adapted, altho' it is not regarded as being as freeze tolerant as A.kolomikta. Allow about 3-5M/10-16 feet for the vines to run on. The vine can be tipped and summer pruned to keep it in bounds. 'Issai' (US CAN) is said to be self fertile, precocious, and late ripening; 'Noel' is said to be particularly large and productive (NZ); 'Geneva' (CAN) is early maturing.
Actinidia arguta x actinidia species- 'Red Princess' (CAN) is a delicate looking, highly ornamental vine which bears green fruit with a reddish blush and reddish tinge to the flesh. The fruit drop readily as they approach maturity, which is a useful attribute for the home gardener. 'MSU' (CAN) has exceptionally large fruit (2-3 inches/50-75mm long) and is slower to come into bearing than most and not as productive. 'Ken's Red' (NZ CAN) is very similar to an arguta fruit, but with a red blush and dull reddish flesh.
? A. chinensis - 'Jia' This is from seed from China, grown at the Pacific Agri-Research Centre in British Columbia, Canada. It appears to be A. chinensis; but in New Zealand A. chinensis is considered prone to late frost damage, so this variety may be a breakthrough for areas prone to occasional late frost. More information is needed.
Hardy Kiwifruit varieties JJJJ A page with general information on adaptation and culture, then brief to good notes on four species and 19 cultivars of hardy kiwi. From Tripplebrook Farm in the USA, which sells plants of the varieties described. Particularly useful for cultivars of the Russian A. kolomikta.
Actinidia deliciosa history and culture JJJJ Tremendous amounts of information on the introduction of the kiwifruit to the West. It includes sound climatic, cultural, soil, propagation, and pollination details. The notes on Chinese cultivars are predominantly for the yellow fleshed, smooth skinned, closely related species Actinidia chinensis.
HILDABERRY A cross between the tayberry and the boysenberry. Early season. The berry is very large, red, and the flavor has been described as 'good', whatever that means. The plants are thorny and vigorous. We have found no other details on this bramble, but suppose it is grown the same way as a blackberry.
JABOTICABA Myrciaria cauliflora- This
is a small tree which bears grape sized purplish black fruit
directly on the trunk and large branches. The fruit are juicy
and similar to grapes in taste. The tree is very slow growing
indeed, and may take many years to start bearing. In the warm
temperate areas it normally has one heavy crop a year, in late
autumn/early winter. In warmer conditions, the jaboticaba may
fruit twice a year. The small leafed trees are not
unattractive-altho' the foliage has a tendency to yellowing if
nutrient status is wrong or the tree stressed-and it takes up
very little room. Set against this is the very long time to
bearing (8-25 years in the case of seedlings) and the fact
that even when it does flower, if conditions are cool, humid
and wet, the tree may fail to set any fruit. Better to buy
grapes. A fruit for collectors only.
More detailed information can be found in the California Rare Fruit Growers (Inc) very good fact sheet at: http://www.crfg.org/pubs/ff/jaboticaba.html
JAPANESE RAISIN TREE Hovenia dulcis- A fast growing, handsome and graceful small to medium tree; it bears strange nibblie fruiting bodies on the tips of the branches, which when partly dry taste for all the world like raisins! Weird. Quite good autumn colours. Quite a good landscape tree, but the fruit really have novelty value only. Most people taste them, find them acceptable, but don't bother with them again.
JUJUBE Zizyphus jujuba - 'Chinese Date', 'Red Date'
This small open, spiny, rather gnarled looking deciduous shrub or small tree produces 30mm/1¼" long fleshy oblong to almost round fruit that can be eaten fresh, when they are crisp and slightly sweet (altho' fruit have 20% sugars, 16% are reducing sugars), with no perceptible acidity (acidity levels are around 4-5%, not enough to give a marked acid note) or marked flavor, but they are usually boiled in sugar and dried. The green fruit turn a mahogany brown when ripe. It does well in hot dry areas, and fruits poorly if at all in cool summer areas. The trees are very cold tolerant, and the insignificant yellowish green flowers appear in late spring, and so are not troubled by frost. They must have free draining soil, altho' they have the virtue of tolerating some salinity and alkalinity. The trees are self fertile and highly productive in climates that suit them. The fruits ripen in autumn. Perhaps their greatest claim to fame is that they are an exceptional source of vitamin C - tree ripened fruit have analysed out at from 500 - 560 mg of vitamin C per 100 gram of flesh. This is one of the most outstanding amounts of any fruit. No wonder the Chinese value this fruit so highly!
Li- Large fruit. Small tree- around 4.5M/15'.
Lang- Large fruit, a little smaller than Li and ripens a month later.
Unless you are keen to have a 'health fruit' in your yard, the lack of marked flavor may not appeal. Try to find some fresh fruit to taste - if you find the fairly neutral flavor appealing, they are well worth growing.
More detailed information can be found in the California Rare Fruit Growers' (Inc) very good fact sheet at: http://www.crfg.org/pubs/ff/jujube.html
Jujubes in California, USA J A mention in an article (1995) by a grower of jujubes, primarily covering some of the harvest details.
KAFFIR PLUM Harpephyllum caffrum- Male and female trees are needed. An attractive but frost tender evergreen with glossy leaves, it grows into a quite large, upright tree. The fruits are small, with thin, acid but pleasant flesh over a relatively large stone. It has high landscape values, but the fact you need two trees for fruit, plus the small amount of flesh per fruit, really mean it is suitable for collectors only.
KEI APPLE Dovyalis caffra - 'Umbolo', 'Umokololo', 'Kaffir apple'. Kei apples are very spiny shrubs that make an excellent everything proof hedge. They have a major drawback-the hard, extremely sharp 50mm/2 inch spines are very painful, and the prunings take forever to rot, thus posing a threat to feet for many years unless every last piece has been picked up. The deep yellow small plum sized fruit fall from the female bushes (the sexes are on different plants) in late summer/autumn. They are acid, densely fleshy, with several slim fuzz covered seeds. They are not suitable for eating as a fresh fruit, but make good jam/jelly. There is said to be a thornless selection, and if it were available, this plant would be very useful for dual purpose hedging. But the normal spined plant is too dangerous to consider.
KIWIFRUIT Actinidia deliciosa,
A.chinensis, and hybrids -'Kiwi', 'Chinese Gooseberry'.
[ see also 'Hardy
Kiwifruit'] The fruit synonymous with kiwifruit is the
beautiful green fleshed Actinidia deliciosa cultivar
'Hayward'. Older A. deliciosa cultivars had pale
green or yellowy green flesh, weren't as highly flavored, and
have all but disappeared.
The species A.chinensis has yellow, gold, or green flesh. The best known cultivar is gold fleshed and patented - NZ plant variety right #1056 - as 'Hort 16A'. It is popularly known by its trademarked brand name 'Zespri Gold™'. No doubt they will be popularly referred to as 'the yellow kiwi'. There are, however, many other cultivars available, mainly imported from China via Japan or developed by amateur gardeners, in a range of sizes and flesh colors and shapes (in New Zealand, yellow varieties are not as yet available to the home gardener; and, curiously for the 'home' of the kiwifruit, few undeveloped kiwifruit species have ever been released ).
The size and shape of chinensis cultivars are variable, but the largest are similar to the existing kiwi. 'Hort16A' is atypical in that the stem end has sloping shoulders that are finally drawn into a small pinched 'beak', making the fruit look slightly testicular. Chinensis varieties are somewhat similar in taste to the standard green 'kiwi', but don't have the slightly aggressive acidity of the kiwi, and are much sweeter when well grown. When fully ripe, its flavor is slightly honeyed, with melon tones, and with 'spicy', almost cinnamony undertones, quite complex but muted. It is substantially different in flavor to the green kiwifruit, and many prefer it. Underripe fruit are pleasant but unremarkable. The flesh is soft with no 'stringiness', there are far fewer seeds, and the central 'core' is very small. It doesn't have the wonderful emerald green color-most have muted mid yellow to greenish colored flesh-but is nevertheless attractive. Other seedling selections have orange, or even red flecked flesh.
The vine, however, is even more rampant in growth than kiwi, and sensitive to late frosts. In addition, a male of the same species (several male pollenizing cultivars have been patented in France and New Zealand, and may therefore be unavailable) is also required-the 'normal' green kiwifruit male pollenizing plant won't 'do' for the yellow fleshed species. The male is as rampant growing as the female.
Plants of 'Hort16A', i.e. 'Zespri Gold™', are licensed to commercial growers only, and will never be available to home gardeners (NZ). While present in North America (and France and Italy), it will again only ever be released to licensed commercial growers. There are other patented varieties, the fruit of the Skelton series being most notable, but it is uncertain if they are available to the home gardener. The cultivar 'Jia' is available in USA and Canada, but again only to commercial growers under contract. France has at least one yellow fleshed kiwifruit, 'ChinaBelle®', released at the end of the year 2000, but, like the rest, exclusively to commercial growers.
The news is not all bad. There are quite a few yellow kiwifruit cultivars easily available to home gardeners (not New Zealand), and many more are likely to follow. Chinese cultivars have been imported into Europe and USA via Japan, and have been renamed - sometimes numerous times - along the way. These are 'Lushanxiang' (syn. 'First Emperor'), 'Jiangxi 79-1' (syn. 'Red Princess'), 'Kuimi' (syn. 'Turandot', 'Apple Sensation' etc) and 'Jinfeng' (syn. 'Golden Yellow'). Flesh colors range from yellow through orange to partially red. These plants cannot (legally!) be patented, and some are readily available to home gardeners in USA and parts of Europe. As European and American home gardeners grow seedlings of these species, many more interesting types will doubtless become available there.
'Lushanxiang' ('First Emperor') is the most commonly available at this time.
Overall, the yellow kiwifruit is recommended only for large urban gardens and farmlets-it is far too vigorous for a small space garden-unless you are particularly interested in its extraordinary vitamin C content.
Actinidia chinensis JJJJ Julia Morton's extensive notes on the history of kiwifruit and the industry, plus notes on cultivars, include a listing of yellow fleshed Chinese varieties and their characteristics. While we don't know which species they are, we can reasonably assume most of the yellow fleshed varieties are A. chinensis.
Actinidia chinensis ('the yellow fleshed kiwifruit') JJJJ A first class overview of the history and descriptions of the various new cultivars of yellow kiwifruit grown in China, USA, Japan, Italy, France and New Zealand, including 'First Emperor', 'Red Princess', 'Turandot', 'Golden Yellow', 'Hort16A', and ChinaBelle®. At the Purdue University New Crop site. Also description of Actinidia in general and green kiwi.
Color pictures of kiwifruit cultivars - green, yellow, and red flesh. Click on the cultivar names to view. From the Department of Food and Nutrition at Komazawa Women’s Junior College.
The species A.deliciosa has green or yellow flesh. The best known are green fleshed cultivars. All Actinidias are rampant vines, and all require a non-fruiting male pollinator. If you want to grow kiwi, you will have to be prepared to prune regularly, or be taken over by the vine. That said, there is no doubt that home grown vine ripened kiwis have much better flavor than store bought fruit. The biggest problems are controlling the rampant growth, and keeping birds from eating them. Let your new plants (you need a male as well, remember?) grow along the very strongly secured wire on your fence or deck railing or wherever you are growing it, and just cut the tip off when it has gone far enough. This is the main fruiting arm. Branches that grow out from the main fruiting arm over the growing season don't have fruit, but the next year's side branches that grow out from these now one-year-old branches will flower and fruit. The ends of these long side branches could be trimmed and tied up against the wall, or to your arbor, or left to dangle, but that takes up too much space. The best idea for the urban gardener is to shorten these long shoots back in the winter to a stub containing only 3-4 buds. You get fewer fruit, but better control of the plant. The buds on the stub will grow out into fruiting lateral branches the following spring, and have flowers and fruit.
In the winter, stub this just-fruited wood back to 3-4 buds just beyond the fruiting sprigs (thus the new, rather longer, stub is sitting atop last year's stub). Let the buds on this spur grow out in spring and fruit for a final year, then in winter cut the spur right back to the main fruiting arm, leaving only one bud to grow out and start the whole process over again. The objective is a spur about every foot/30cm along the length of the main fruiting arm.
In summer, prune to a stub any stout, upright watershoots, and prune back the ends of the fruiting laterals which are becoming tangled or in the way.
The male pollenizing vine is handled in an identical manner, except that, when flowering is finished, the flowering laterals can be pruned back more heavily. The male vine can have a much shorter main 'fruiting' (=flowering) arm than the female.
In the warmest parts of the warm temperate zone, there may be problems with poor bud burst in spring due to lack of winter chill. In this case, consider 'Vincent' or an A.chinensis cultivar, as both need less winter chill; or go for the tara berry, A.arguta. Conversely, areas that have occasional late spring frosts may have a crop failure due to frosting of the flowers. The best known cultivar is the commercial 'Hayward' cultivar. 'Skelton' is an early flowering plant with long, torpedo shaped fruit that ripen about 2 months earlier than 'Hayward', is sweeter, and has a higher ascorbic acid content. It requires an early flowering pollenizer, such as 'Derek'. B114 is a prodigious cropper, with fruit hanging almost in bunches.
A collection of Actinidia species fruits JJJJ Covers the fruit of 16 species of kiwifruit, at the Purdue University website, in the on-line version of the book 'Perspectives on New Crops and New Uses: Proceedings of the Fourth National Symposium New Crops and New Uses: Biodiversity and Agricultural Sustainability', in the contribution 'New Temperate Fruits: Actinidia chinensis and Actinidia deliciosa' by A.R. Ferguson, edited by Jules Janick of Purdue University, 1999. ASHS Press, Alexandria, VA.
Actinidia citations J Extremely detailed notes of who described the species, when, in what publication, the natural range, and previous names - 42 odd species and hybrids at the Germplasm Resources Information Network (GRIN) database. For the extreme enthusiast, not 'garden useful' for most of us.
Kiwifruit JJJJ Julia Morton's extensive notes on the history of kiwifruit and the industry, plus notes on cultivars (18, mainly Chinese varieties), make this a mini 'classic'. Mainly Actinidia deliciosa, some discussion of hardy kiwis.
Kiwifruit in Carolina, USA - JJJJ A very good fact sheet on all aspects of growing kiwi in Carolina, including information on winter damage for A. deliciosa.
Kiwifruit in California, USA JJ A brief article (1995) by a grower of unusual kiwifruit covering some of the A. chinensis, A. melanandra and other species introduced to USA.
Actinidia deliciosa ('the kiwifruit') JJJJ A first class overview of the history of the green kiwifruit (Hayward) and descriptions of the various new cultivars of yellow kiwifruit grown in USA, Japan, Italy, France and New Zealand, including 'First Emperor', 'Red Princess', 'Turandot', 'Golden Yellow', 'Hort16A', and ChinaBelle®. At the Purdue University New Crop site.
Actinidia deliciosa and chinensis J 'Skelton cultivars' - brief notes on the 'Skelton' green kiwifruit, and pictures of a variety of 'Skelton' gold types
The fruit with no name J A whimsical piece on the name 'kiwi' fruit, sparked by the development of a New Zealand selection of the yellow kiwifruit grown in China, Japan, USA and France, and carrying the unlovely variety name 'Hort16A'. By a local food and wine writer.
KUMQUAT Fortunella sp. A small citrus tree never exceeding 10 feet/3 metres (on dwarfing rootstock) that grows and fruits well in warm temperate areas. Ideal for pot culture, where it can be held as a small bushy tree. The fruit are round or oblong, and about the size of a large grape. The peel is sweet, but the flesh is acid. Meiwa is the cultivar most usually used for fresh eating. It has good landscape value, especially as a potted specimen, but very few of us will actually get around to preserving them.
LEMON Citrus limon A required plant for any household. If your
soil allows you to grow citrus, lemons are a must. The white
flowers are attractive, they have a pleasant scent, and they
look great hanging with fruit. The drawbacks are the need for
free draining soil, and in wet and humid areas the fruit can
be affected by a fungus called verrucosis which makes
the fruit look scurfy. In the warmest areas lemons tend to
flower and fruit almost continuously, but with the main crop
being winter and early spring. Lemon trees grow to be
large trees, producing far more lemons than the average
household could ever want. Espaliering, hedging, container
growing, and using small varieties take care of this 'good problem'.
Meyer-Not a 'true' lemon, but a hybrid with an unknown citrus species, Meyer produces a prodigious amount of very juicy, medium sized fruit. Its landscape values are high, in that the deep yellow fruit festooning the tree are wonderfully attractive in themselves. Meyer grows in a fairly open fashion, with long branches that droop under the weight of fruit. This makes it a good candidate for espaliering and informal hedging. It bears fruit in the first year of planting out. (US, NZ, AU)
Eureka-yellow fruit, highly acid, medium sized, very similar to Lisbon. The tree is moderately vigorous, and nearly thornless. It normally starts into fruiting at a younger age than Lisbon. As a generalisation, there is more chance of getting a fair proportion of fruit in summer with Eureka compared to Lisbon.(US, NZ, AU)
Villa Franca-very similar to Eureka, same comments apply (US, NZ, AU)
Genoa-also similar to Eureka, but the fruit are slightly smaller, and again, the same comments apply.(US, NZ, AU)
Lisbon-yellow fruit, highly acid, medium sized, very similar to Eureka. The tree is large, dense foliaged and vigorous, with numerous long thorns, and the fruit tend to be carried within the canopy. It is more tolerant of adverse environmental conditions such as wind and cold than the other 'true' lemons. (US, NZ, AU)
Ponderosa-Like Meyer, not a true lemon, but a hybrid, probably with the citron. The fruit are very large, have a thick to very thick skin, and are seedy and sometimes rather dry. The tree is small, large leafed, and thorny. It tends to bear year round.(US, NZ, AU)
LEMONADE- similar in appearance to a lemon, but much smaller, the fruit are a combination of acid and sweet. Ripe in winter. The tree is fairly weak growing, with drooping branches, and well suited to espaliering. Lemonade needs to be planted in full sun to develop good flavor. A well grown fruit can be eaten skin and all. This is an unusual citrus, and one not found in the markets, and is definitely worth growing.
LIME There are two main varieties of lime you can grow-the small fruited, sometimes quite seedy, highly aromatic 'Mexican' lime that can be picked green or yellow; and the small lemon sized, generally seedless, pale yellow 'Bearss' lime.
Mexican is also known as the 'bartender's lime', or the 'key' lime, and has that delightful aromatic lime smell. The tree has light green leaves, is fairly thorny, and when grafted onto a dwarfing rootstock it makes a neat shrubby tree, which is convenient, because it really needs to be container grown and pampered, as it is a heat demanding variety, and not really suited to warm temperate areas.
Bearss, also known in some areas as 'Tahitian' or 'Persian' lime, in contrast, is about as hardy as a lemon. It is a much more vigorous and spreading tree, less thorny than Mexican, with fragrant flowers, and the fruit holds on the tree for a while when ripe, but has less flavor than 'Mexican'. Nevertheless, the flavor is still good, and it usually flowers and fruits virtually year round, like most lemons.
LOGANBERRY A raspberry/blackberry hybrid. A large dusty maroon red berry that ripens about 10 days before Boysenberry. It bears heavily, and is quite well adapted to cool summer areas. It is quite acid in flavor, and not something you would eat a lot of as a fresh fruit. Trailing and thorny, it is best as a canning/bottling proposition, but even then you have to add a lot of sugar, which defeats the purpose.
The selection LY 654 is thornless.
Grow as for Blackberry
LONGAN Euphoria longana Closely related to the lychee, the
longan forms a small, compact headed tree, often with
attractive red new growth. It is really a subtropical tree
fruit, so it is difficult to fruit this tree in warm temperate
areas, even altho' it grows fairly well in the warmest parts.
The trees set the fruit so late that they rarely reach more
than small grape size before cool weather causes them to fall
off. In an occasional warm year fruit will mature and be
sweet, but the size is often small. It is amenable to pruning,
and so is well suited to urban food gardening. The fruit,
carried in terminal clusters, are small (about an inch/25mm
wide), round, and a dull brown color. The skin is thin and
brittle, and peels to reveal a translucent pulp enclosing a
single round, black shiny seed. The taste is much less
perfumed than the lychee, stronger, with a greater depth of
flavor. They have a tendency to biennial bearing. The trees
withstand some wind, and are more adaptable to soil and
temperature range than the lychee. The fruit would mature in
early winter (late June, Southern Hemisphere). They fruit
readily in large containers in glasshouses or other protected
areas. Strictly for the collector.
Longan in Australia JJJ A general overview and description of the longan in Australia, mainly from the commercial point of view, but still a good introductory fact sheet on its requirements.
LOQUAT Eriobotrya chinensis The loquat is a handsome round headed tree, with large, dark green, serrated leaves. The small white flowers are borne in terminal clusters in late autumn, and are strongly and delightfully fragrant. The fruit are variable, from about large grape size to golf ball size, depending on cultivar. Some are more or less round, and others rather pear shaped, again, depending on variety. The yellow or near orange fruit are very juicy, soft fleshed, with 4 or 5 large brown seeds taking up the centre of the fruit. The flavor is variable-some are quite acid with little sweetness, others are very sweet with good acid balance, and some are predominantly sweet. The fruit are very rare in the market, and when they do appear, they are usually very expensive. They cannot really be shipped because they bruise easily when handled, with the bruised area turning brown. Some trees are dense, vigorous, and grow to 6M/20 feet or so, others (some of the Japanese varieties) remain about 1.5M/5 feet. Loquats will tolerate some shade, when the already large leaves become even larger. Loquats are hardy, altho' bad frosts at flowering time will destroy the flowers. They will grow on most soils, and can be grafted to quince rootstock, which also tolerates heavy soils. Quince rootstocks do send up suckers from the base, whose constant need for removal soon becomes tiresome. In humid climates, the foliage and fruit are subject to a 'black spot' fungus which makes the foliage unattractive and ruins the fruit. The main problem is bird damage. To be their best, loquats need to fully tree ripen, but birds peck them as soon as they have colored. The best strategies are to prune the trees low and net the tree; bag choice racemes individually; or grow small cultivars and net the tree. Hot temperatures at fruiting can cause sunburn. All in all, the loquat is a very worthwhile tree so long as you select a sweet, large fruited variety, you protect the fruit from birds, and in humid areas, you are prepared to spray against fungus disease.
LUCUMA Pouteria obovata- A handsome upright tree that can be pruned for size control, the lucuma has a green skinned, about orange sized and shaped fruit (variable), with strange 'dry' flesh in which are embedded 3-5 very large shiny seeds. The flesh is butterscotch flavored, but too dry to eat other than in cooking. The fruit mature in winter, but it is very difficult to tell exactly when they are ripe. Picked too soon they never ripen, too late and they split open on the tree. It is uncertain whether or not they need cross pollination. Most plants are seedlings, and are somewhat variable. Rarely available. Collectors' item.
LYCHEE Litchi chinensis This is a most attractive landscape tree, but is only able to be grown in the warmest parts of warm temperate areas. The tree forms a dense head, the flushes of new growth are an attractive bronzy pink, and when it is in fruit the clusters of round pink/red fruit are highly decorative against the foliage. The fruit are small, about 1½ inches/38mm wide, with an easily peeled brittle skin overlaying translucent, juicy flesh. There is a single, shiny brown seed. The flavor is sweet and perfumed, although there are varietal differences. Young trees are very sensitive to fertiliser damage, and to cold wind. Once the trees are older, they will stand some frost. Lychees grow very well but fruit poorly in the warm temperate areas, as they need a period of (preferably dry) cool to initiate flowers, and rainfall can cause flower buds to be supplanted by a burst of vegetative growth instead. Brewster, Mauritius (Tai So), and Hak Ip are the cultivars with good to very good flavor and with resistance to anthracnose disease which damages the fruit. (Except Mauritius, which is susceptible). For the more mild parts of the wtz, lychees fruit well in large containers in glasshouses and conservatories, as long as they are kept slightly dry over autumn. Lychees in pots are fairly demanding, and not for the amateur.
MAGNOLIA VINE Schizandra chinensis - a hardy deciduous vine (a relative of the magnolia) growing to about 6M/20 feet that produces very attractive red berries which are tart but aromatic. The pink flowers are pleasantly fragrant. Sweetened, the berries are used for juice and preserves. The berries are said to be high in vitamin C, and schizandrin, a stimulating and supposedly healthful compound.
MACADAMIA- see 'Nut, Macadamia'
MANDARIN- Firstly, the name 'tangerine' has
been applied to very orange-red colored mandarin
cultivars-presumably as a description of the color, as much as
anything else. However, to avoid confusion, it is best to
stick with the correct name-'mandarin'. Without a doubt, the
mandarin is one of the most valuable fruit for the small space
home fruit gardener in the warm temperate areas. The trees are
small to very small if grafted onto dwarfing or ultra dwarfing
(flying dragon) rootstock, they start bearing within three
years of planting out, the flowers are attractive, the tree in
fruit is attractive, they don't need pruning, almost none need
a pollinator, the range of flavors in the mandarins is
reasonably diverse, and there are early, mid, and late season
varieties to give a long fruiting season. The 'Satsuma' type
mandarins from Japan comprise an early ('wase') group and a
late ('unshiu') group and are probably the most cold tolerant,
and suit cool summer, frost prone, and somewhat mandarin
marginal areas. The earliest ripening varieties are all
satsuma types. They tend to be small trees, early to come into
fruiting, and prodigious croppers. The fruit colour 3 or 4
weeks before they are of good eating quality. There are
a large number of types of common mandarin, with varying
ripening times, peelability, fruit size, seediness, flavor,
cold hardiness and regularity of bearing. Fruiting starts in
early winter, with winter/early spring the main season; altho'
a few late varieties such as 'Encore', 'Kara', and 'Pixie'
carry the season into early summer. Go for an early, mid
season and late variety that is adapted to your area. Any competent nurseryperson will advise you.
mandarin cultivars in New Zealand
MANGO Mangifera indica The mango is usually a very large, spreading tree. Grafted trees are, however, smaller, and mangoes don't grow as fast or large in the warm temperate areas. In fact, they are restricted to the most extremely favorable parts of the warm temperate zone-hot summers, no air frosts, long seasons. The trees are very attractive-the leaves are shiny green and contrast with the bright red new growth. When the tree flowers it is covered in light yellow panicles, and when the fruit is ripening it is hung with bunches of green/red/yellow fruit. The mango is adaptable as to soil, and as long as the growing young tree is fed regularly and watered if necessary in a dry spell, it will thrive. A poor type of mango will be fibrous, acid, and 'turpentiney'. Selected types effectively have no fibre, are intensely sweet, and with stunning depth of delicious flavor. The fruit are too well known to need description. Grafted trees will begin to fruit 3 to 5 years after planting. Fruits of most varieties mature in autumn or winter. From flower to fruit maturity takes about 100 to 130 days. Rain when the mango is flowering can cause poor fruit set. The fungus disease 'Anthracnose' attacks the flowers, the fruitlets and soft growth. Not only can it prevent adequate fruit set by damaging flowers, fruit that do mature may rot.
MARIONBERRY- This bramble is a cross between
a blackberry and the Olallie berry
from Marion County, Oregon, USA. It is a bright black, medium
to large sized fruit. It fruits at the same time as
boysenberry. Its advantages over the boysenberry are that it
is more attractive looking, it has better flavor, the seeds
are much smaller than the boysenberry's slightly intrusive seeds,
and the plants are probably a bit hardier.
The plant itself is very vigorous and very thorny, and the strong canes seem relatively disease resistant. Marionberries need a wire or fence to grow on, they need to be sprayed against fungus diseases unless you have a fairly dry climate, and they must be netted against birds if you are to harvest fully vine ripened fruit. Pruning is as for blackberry.
MAYHAW Crataegus aestivalis 'Applehaw'. These hardy trees
produce fruit in spring. The trees are extremely adaptable to
soil type, and can stand both occasional flooding and
drought. They are also relatively disease resistant. While
they tolerate freezes to minus 40F, they flower very early and
the flowers are liable to be frosted in the coolest parts of
warm temperate areas. The fruit are usually red, carried in
clusters, and about an inch/25mm in diameter. The flavor is
politely described as 'wild', but they are palatable.
'Super Spur' produces prodigious quantities of fruit on a heavily spurring tree-a well established tree may produce as much as 80 gallons!
'Texas Star' has intense red berries and is a late blooming variety.
'Royalty' is also late blooming, and its showy white flowers are over an inch/25mm in diameter.
'Gem' is late blooming and has a concentrated fruit ripening.
MEDLAR Mespilus germanica- This unusual fruit is the size of a small apple. It has dry brown skin and contains firm flesh and some furry pips. The fruit are inedible straight off the tree-they have to be picked and left to become soft-a process known as 'bletting'. When the flesh has become soft, it is a mid brownish color, and tastes exactly of compote of apples/stewed apples. If you blet them for too long, they rot. As the fruit are ripe about the same time as apples, there seems little point in growing it, except that the tree is austere, slow growing, deciduous, with attractive flowers, and it will puzzle all who see it. It is relatively indifferent to soil and position in the garden, and seems almost unaffected by pests and diseases.
MOUNTAIN PAPAYA Carica pubescens-'Ababai',
'Chamburro'. There are several species of 'mountain papaya',
as the name is really a 'catch all' to distinguish Andean
papaya species from the tropical papaya of commerce.
Certainly, the most common mountain papaya in USA and
Australasia is C.pubescens, which has, by default, come
to be regarded as 'the' mountain papaya. This papaya species
is adapted to the cold, but not frosty cloud forests of the
Andes. It will recover from some frost, but heavy frost will
kill this succulent herbaceous plant. The 'trees' are
striking, having one or more 'trunks' topped with large, lobed
leaves that are pubescent underneath. Plants may be male,
female, or hermaphrodite. They can also change sex. The dumpy
75mm/3 inch fruit have 5 fleshy ridges and are a dull yellow
when ripe. In the tropical Papaya/Pawpaw of commerce, the
fleshy fruit wall is eaten, and the seeds in the cavity
discarded. The opposite is true for the mountain papaya. The
fruit wall is too dense and tough to be eaten fresh, and while
juicy, has no sweetness. The seed cavity, in contrast, has
its numerous seeds embedded in a very sweet and aromatic
pulp, and it is this part that is eaten. The mountain papaya
has high landscape values where it can be protected from heavy
frost, it produces well, the inconspicuous greeny-yellow
flowers are fragrant at night, and the fruit are aromatic and
very pleasantly flavored; on the other hand, the large numbers
of seeds are intrusive, and the pulp has to be swallowed
whole, seeds and all, with minimum chewing to avoid crunching
seeds. The fruit walls can be used if they are cooked in a
heavy sugar syrup, but who could be bothered?
Chamburro, C.stipulata, is another Andean mountain species, but is rarely encountered in the West. It is similar to C.pubescens, but the trunk of the 'trees' is covered in short stout 'thorns', the flowers are deep yellow, the fruit is larger, at about 100mm long, it does not have the fleshy ridges on the fruit, it is not sweet, has a relatively soft fruit wall, and its very high papain content precludes it from being eaten fresh, even if you wanted to. Like C.pubescens, it is cooked in sugar syrup in South America, and it is very acceptable prepared this way. But again, why bother?
Other mountain papaya species include-C.parviflora, a knee high plant with tiny bright orange fruit and stunning purple flowers, but not enough fruit substance to be edible; C.quercifolia-large and vigorous, with approximately oak-leaf shaped leaves and narrow 50mm/2 inch torpedo shaped orange fruit with extremely thin and tender skin that can be eaten whole and are rather pleasant, if variable; C.goudotiana, a very tropical single stemmed handsome purplish plant with fruit similar to C.pubescens, but rather drier and without any real sweetness or flavor. There are also hybrids of these species to be found in arboreta and in the few tenuously remaining amateur rare fruit collections left in the world.
MULBERRY- White Mulberry (Morus alba),
Black Mulberry (M.nigra), Red mulberry (M.rubra)
White Mulberry - The berries are white, pinkish, or blackish purple, 25-50mm/1-2 inches long. Some varieties are sweet, others are insipid. The tree is fast growing, with large, light green, smooth and shiny leaves. The fruit of the best cultivars are OK, especially if cooked, but they will have to be netted from the birds, which love them. They have to be fully ripened on the tree, otherwise they are rather dry, and certainly tasteless. To be nettable, the trees need to be heavily pruned each year, which doesn't faze them, as fruit are carried on new growth.
Black Mulberry-The fruits are very juicy, sweet, and stain when they fall from the tree. Paradoxically, while it is by far the best mulberry, it is also a nuisance from the point of view of the staining fruit. A very large deciduous tree with dark green, lobed leaves that are downy underneath. Because it is large and vigorous, it is hard to contain.
Red Mulberry - The native American mulberry, it is most often used as a rootstock for the black mulberry (the black mulberry is difficult to propagate from cuttings and may be incompatible with the white mulberry). The fruit is edible.
NECTARINE Prunus persica- Nectarine flowers are a bit more susceptible to frost injury than peaches; otherwise the comments that apply to peaches apply to nectarines-the nectarine is a smooth skinned, fuzzless peach. There are, of course, connoisseur nectarine varieties, as there are connoisseur peaches, just not so many.
NUT, ALMOND Prunus amygdalus- almonds
are the first spring blossom, and make a wonderful spring
display. However, you need to plant two trees of different
varieties to get fruit set. The nut is enclosed in a fleshy
fruit (a good photo
of the fleshy husk is at the Sierra
Nurseries web site) that looks a bit like an unripe
peach. This is tedious to remove, and the nut crop is rarely
so large as to justify the effort involved in harvest and drying.
There is no particular advantage to home grown almonds over
fresh commercial ones, so almonds have no place in the urban
hominid's food garden, unless as an ornamental. Where there is
the luxury of space-and time to deal with the crop-they are a
magnificent landscape blossom tree, and the nuts are a bonus.
'Paper shell' almonds have a nut that is so soft it can be
removed by hand. 'Soft shells' are easily opened with a
kitchen nut cracker, and 'hard shells' have a shell as hard as
a peach stone, or harder. 'Paper shells' are desirable from
the user friendliness point of view, but are more likely to be
damaged by insects, or even birds.
402 (NZ)- A softshell locally selected variety. The nut is ready about mid autumn. In humid climates, the fleshy fruit tends to become diseased, and shrink onto the nut shell making it a bit difficult to remove. The kernel is large, acutely pointed, somewhat flat. Neither bitter nor sweet, its flavor is unremarkable. Not a particularly productive tree.
IXL (NZ, US)- Ready about mid autumn. The fleshy fruit is big and fat and easy to remove from the shell. The shell is thick and hard, and difficult to crack. The kernel is medium sized, somewhat 'bitter'.
Monovale (NZ)-A local hardshell selection. A prodigious producer of quite bitter hard shelled nuts.
All-in-one (US, NZ)-A small tree, it produces particularly fat, large kernels and nuts. The kernel is sweet and flavorful. Production is very low in humid areas due to a disease shrivelling the kernel.
NARANJILLA Solanum quitoense Literally 'little orange', this plant is a spectacular ornamental low sprawling weak shrub. The velvety leaves have spines in the ribs and veins, but in one selection are spine free. It demands shade and perfect drainage and organic soil. It is short lived and very prone to root rot. The fruit are produced in abundance, if your plant survives and thrives. They are acidy, slightly sweet, odd. Usually you squeeze the pulp into sugary water and it turns an astonishing shade of green. A novelty to annoy the neighbours with, but not a serious crop.
NUT, CHESTNUT Castanea sativa, C.crenata,
C.x sativa Chestnuts fruit in early-mid autumn, and are
usually regarded as too large for the small garden. Grafted
trees start to bear nuts when less than head high, so it may
be possible to keep them small with severe pruning. That said,
the flavor of chestnuts is so close to the sweet potato (Ipomoea
batatas), that it is probably better to use the space
for another food bearing tree and simply buy sweet potato,
which are easier to prepare, and much cheaper.
C.sativa-sweet or Spanish chestnut.
NUT, GEVUINA Gevuina avellana A small tree that has nuts similar to a macadamia. They are very subject to root rot caused by the soil fungus Phytophthora, and seem to need acid soil conditions. They are fast growers in the right conditions, but the right conditions are often difficult to determine, let alone achieve. Very little is known about growing this nut tree. It is worth attempting as a challenge, especially as it is a very small tree and accepts some shade, and is therefore suited to the small space garden, but don't have high expectations of bowls full of nuts. Try growing it amongst your rhododendrons.
NUT, HAZEL Corylus avellana 'Filbert', 'Fillbasket'. The hazel is a superb tasting nut, an ideal hominid food, and a graceful small bushy tree (it can be trained as a standard) that tolerates light shade - a generally ideal home garden food source, except that it fruits erratically or not at all in warm temperate areas, and suckers like crazy from the base of the tree. Hazels need a lot of winter chill, altho', paradoxically, because they flower in winter they can be damaged by severe frost. The only cultivar recommended for warm temperate areas is 'Merveille de Bowiller', but even then, you will need another variety for pollination.
NUT, MACADAMIA Macadamia integrifolia,
M.tetraphylla Macadamia nuts are an excellent
tree for the home food garden. The nuts are particularly
nutritious. The commercial growers go for nuts with high oil
content and low sugar content-low sugar so the nuts don't
caramelise when they are toasted. The urban hominid should go
for nuts with a high sugar content, then dry them rather than
toast or roast them. Dried, they keep for about a year before
there is any rancidity. Grafted trees are better than cutting
grown trees, as cutting grown trees sometimes are blown over
once they have become fairly tall. Macadamias can be pruned
for convenience, and if left alone, some varieties can become
very large and spreading. Cultivars derived from M.tetraphylla
are the sweetest, and have the particular advantage of having
a husk which splits well, releasing the nut. The leaves of tetraphylla
cultivars have a slightly 'prickly' margin. Cultivars
of M.integrifolia have lower sugar, smooth leaves, are
slower to come into bearing in more marginal parts of the warm
temperate area, and tend not to release the nut from the husk,
meaning they have to be hand picked. The long racemes of pale
purplish pink or white flowers are wonderfully fragrant and
abundant. Some cultivars have attractive reddish or bronze new growth.
Macadamias will be damaged by air frost, especially when young, but soon recover. Any other than a poorly drained soil will do. Cross pollination is essential, or nut numbers will be in the ones or twos per raceme, instead of hanging in bunches. Macadamias are loved by rats, and immature fruit can be damaged by piercing and sucking bugs. Other than that they are pretty care free.
Macadamias in USA The California Rare Fruit Growers have produced this very good fact sheet on all aspects of growing macadamias in the dooryard orchard in the United States.
Macadamias in New Zealand JJJJ An article from the Journal of the New Zealand Tree Crops Association covering most aspects of growing macadamias, albeit in a commercial setting. The principles are, however, applicable to the home gardener.
NUT, PECAN Carya illinoisensis One of
the premier hominid foods. Unfortunately, it grows on a tree
that ultimately grows enormous, is prone to branch break in
windy areas, requires a pollinating variety of the right type,
and requires a long hot summer to mature the nuts, plus a
fairly cold winter to initiate flowers. In the very hot
mediterranean-like parts of the warm temperate zone, they do
make a good shade tree, as the foliage is quite open and
delicate, and cropping is reliable. A grafted tree will start
giving better than token amounts after five or six years. The
chief problem is rodents stealing the nuts-and to a lesser
extent damage from a variety of caterpillars and bugs. Pecans
are fairly adaptable to soil type, but are intolerant of
salinity. For most parts of the warm temperate area, it may be
better to rely on buying commercial nuts from the areas well
suited to pecans rather than try to grow your own.
Pecan growing in USA, North Carolina JJJJ A very good page on varieties, culture, and insect pests of pecan in North Carolina. As North Carolina is regarded as being rather at the northern limit for pecans, the information may have relevance to other cool climate or short season areas. Produced by the North Carolina State University Co-operative Extension.
Pecan cultivars JJJJ Cultivar descriptions, from the Agricultural Research Service, U.S. Dept. of Agriculture, in Texas, USA.
NUT, PISTACHIO Pistacia vera is a Mediterranean nut needing both summer heat and winter cold. It performs poorly or not at all in humid climates. The tree tends to be weak and straggly. Unless you have a continental
type climate, hot in summer, cold but not snowy in winter, don't bother.
More detailed information can be found in the California Rare Fruit Growers' (Inc) very good fact sheet at:
Pistachios in New Zealand A very good article from the Journal of the New Zealand Tree Crops Association covering most aspects of growing pistachios (except cultivar notes), albeit in a commercial setting. The principles are, however, applicable to the home gardener.
NUT, WALNUT Juglans regia Along with
the pecan, this is one of the nicest nuts there are. There is
quite a bit of variation in taste between the cultivars, with
some having a slight astringency and some not. The oil content
also varies, as does nut size and ease of cracking. Walnuts
are well adapted throughout the warm temperate zone, but the
amount of winter chill needed varies greatly with cultivar.
Plant a walnut cultivar requiring high winter chill in the
warmest part of warm temperate areas and you get very poor
leafing out in spring and poor yields. In mild winter areas
you will need to consult a knowledgeable nurseryperson about
which varieties are adapted to the area. In humid summer
areas, there is a major problem with a bacterial disease (Xanthomonas
juglandii) which blackens and destroys the nut. Some
cultivars are resistant. If you live in an area prone to late
spring frosts you will need to avoid cultivars that leaf out
early. Walnuts need well drained soil, and adequate soil
moisture in summer. Walnuts are very large trees, and should
be planted at least 7.5M/25 feet from the house to avoid
leaves in the guttering, excessive shading, and damage to
paving from roots. To get the maximum number of nuts you usually need two different cultivars, but most single trees will bear well in the home garden. A grafted tree
will start bearing nuts in about the fifth year.
Early leafing-Serr, Payne, Placentia, Chico. Cultivars
NUT, WALNUT, ANDEAN Juglans honoreii This fast growing evergreen Juglans species is from the relatively calm and frost free sub tropical Andes. It is frost tender, and, like the Pecan, susceptible to branches and the growing tip being broken in wind. Under warm temperate Southern Hemisphere conditions it produces its nuts in winter, in June and July.
The advantages of the Andean walnut are that it fruits well; it is self fertile; it comes into bearing from seed within about five or six years; and it has large nuts that are moderately well filled. The biggest disadvantage is that the nut does not fall free of the husk; the husk 'clings' to the nut. This means the almost tennis ball sized 'fruit' (fleshy husk plus the 'nut' in the middle) have to be collected and piled up for the husk to rot off. The olivey green to brown fruits turn dark brown as the husk breaks down, and the fleshy part becomes black and soft and spongy. Nuts falling and rotting on paved areas would be unattractive, although the decomposing husks don't seem to stain the hands, at least.
Once cleaned, the round golf-ball sized nuts can be dried. Their shell is very thick and heavy, and they are not easy to open. Once open, the kernel is also difficult to remove from the shell. The kernel itself is blandly pleasant. This is a tree for the collector in a low frost area. The common walnut is the tree of choice for reliable fruiting and easy harvesting and storage, easy cracking and kernel extraction.
OLIVE Olea europaea- A truly marvellous landscape tree, the olive. But the fruit have to be leached of their bitter chemicals and pickled, which involves a degree of fiddling about beyond most of our patience. They are produced commercially far better, more cheaply, and more certainly. Buy them, don't grow them-or at least, don't seriously grow them as a home orchard tree.
OLALLIE BERRY This bramble is a cross between a black Loganberry and a Youngberry. The berries are black, long and narrow, firm and sweet with wild blackberry overtones at full maturity. The plants are highly productive, vigorous and thorny. Culture is as for blackberry.
ORANGE Citrus sinensis Oranges are cheap in the supermarkets,
nevertheless the orange is an excellent landscape tree-
attractive form, small size, scented flowers, decorative
fruit, trimmable. In addition, if you use orange peel in
recipes, you can be sure your own oranges will be free of
waxes, colouring, and fungicides. So long as the trees are
watered and/or mulched in summer, given regular small doses of
complete fertiliser throughout the year, and the surface
feeder roots are kept from damage, productivity with minimum
effort is assured. Citrus need a little proprietary
complete citrus fertiliser regularly. The best prevention for
various trace element deficiencies which citrus seem prone to
is to use composted animal manures such as pelletised chicken
manure under the trees-and a good organic mulch.
Dwarf citrus (citrus grafted to dwarfing rootstocks such as 'trifoliata') are the only form to consider for the small space garden; a Valencia orange that would normally grow to 20' on a standard stock will be a much more sensible 10 feet on a dwarfing rootstock. And the most dwarfing of the trifoliate rootstocks (trifoliata 'flying dragon') will keep them even smaller still.
For practical purposes there are three main groups of oranges-the common orange, the navel orange, and the blood orange. The navel is the richest flavored of these.
Navel-ripe in mid to late winter, navels have an unparalleled richness and sweetness when well grown. They are relatively easy to peel, with their skin generally being thicker than common oranges. They are also easier to pull apart into segments.
Marrs-a medium to large orange, often seedy. It is sweet and juicy, but lacks the acidity essential for depth of flavor unless it is left to hang late on the tree. It has the advantage of being a small tree, and starting into fruit at an early age.
Parson Brown-a medium sized, juicy, sweet orange on an upright, vigorous tree.
Pineapple-medium sized fruit with very good flavor, but they don't 'hold' on the tree, and have a tendency to alternate bearing.
Valencia-medium to large juicy, sweet fruit, bearing heavily on a large upright tree. It tends to alternate bearing.
Seville-a medium sized tree bearing prodigious quantities of attractive but very sour oranges whose sole purpose is to make the superb, slightly bitter, seville orange marmalade.
ORANGE BERRY Rubus calcinoides (pentalobus) a low, rather compact foliaged evergreen rubus with small dark green leaves, reaching about a metre or so wide. The small white flowers appear in early summer and fruit ripen thereafter. The small bright orange fruit are acid/tangy. Prefers well drained moist soils and a sunny aspect. Self fertile.
OYSTER NUT Telfaria pedata More a large edible gourd seed than a nut, this is a rampaging climber, going to 50 feet or more. The trees they grow up are eventually smothered... The sexes are on separate plants, so at least three plants are needed to get a better than even chance of one at least being female, but you won't know for 2 years because it takes that long before they flower. The females produce large gourd like fruit up to 50cms long and containing as many as 150 edible seeds ('nuts'). The seeds are excellent, with a high oil content and a taste similar to hazels. Not a practical proposition for most urban hominids, even if they are the kind of food our distant African ancestors would have eaten.
PAPAYA Carica papaya 'Pawpaw'. The papaya is a relatively short-lived-it is actually classified as a herbaceous plant, not a shrub or tree-but fast-growing plant about 10ft/3m high, usually with a single stem. The plants take up very little space, are handsome, and are wonderfully productive in suitable climates. The warm temperate zone is not suitable for tropical papayas. In the very hottest parts of the wtz they will survive and fruit, but they need to have protection from wind and cold. Even then, the fruit don't become very sweet. The plants themselves are relatively cold hardy, and will even recover from some frost damage, but the real damage is done when leaf stalks are broken in windy conditions in winter. Fungi enter the wound and infect the stem, and soon the plant turns to slush. There are separate male and female plants, and you won't know which is which until your seedlings start to flower- which is why it is best to grow three plants close together and hope to get a plant of each sex. Female flowers have short stalks and a swollen, fleshy base within the petals. Male flowers are in panicles of many small flowers on the end of a long stem. Some cultivars, however, have a tendency to have both male and female flowers on the same plant-the 'Solo' strain is well known for this. Papaya must have excellent drainage, or they may get root rot and collapse. Strains of the variety 'Matsumoto' are said to be more tolerant of wetter conditions. In the wettest areas, the fungal disease 'anthracnose' can be a problem. It causes sunken circular spots on the ripening fruit. It can be largely prevented by spraying, but it is not really worth the effort. One for collectors in ideal microclimates only.
PASSIONFRUIT, BANANA Passiflora
antioquensis, P.mollissima and P.mixta. The name
'banana passionfruit' is most often given to either P.mollissima
or P.mixta. All three have torpedo shaped - in some
people's minds 'banana' shaped - yellowish fruit. P.mollissima
and P.mixta are exceptionally vigorous, and the fruit
quality is not particularly good-both lack sugar. Because of
their rampaging nature P.mollissima and P.mixta
can smother other plants, and consequently can't be
recommended for the urban garden.
P.antioquensis, in complete contrast, has very low vigor, and often dies out for no discernable reason. It may prefer at least some shade-indeed, it is said to be suitable as an indoor plant. The flowers are very attractive, and the fruit is one of the very nicest of all the passionfruit. The pulp is sweet, perfumed and opaque creamy white. Although it can be difficult to grow, it is worth the effort.
OBSCURE & RARE SPECIES Of the 400 wild species, only
a few are in cultivation as fruit, and effectively only one
commercially. And then in very small amounts. Many species
have edible fruit, of greater or lesser worth. Details of a few of the edible species are at this commercial site.
PASSIONFRUIT, YELLOW Passiflora edulis var.flavicarpa- 'Golden passionfruit', 'Hawaiian passionfruit' . The yellow form is identical in all respects to the purple plant, except that the fruit are a mid yellow color, and often slightly smaller. They withstand some less than ideal soil conditions better than the purple form. The yellow passionfruit grown in many tropical areas may be different from the true P.edulis var. flavicarpa because it is larger than even the purple form, has a thicker fruit wall, and a slightly more acid flavor. The foliage is lighter, and larger. In addition, it is self infertile, requiring two plants to be present for cross pollination, whereas the purple passionfruit is self fertile.
PASSIONFRUIT, SWEET GRANADILLA Passiflora ligularis -This very vigorous vine has somewhat heart shaped leaves and very attractive large white and purple fringed flowers. It requires something fairly strong to climb up, and may reward you with orange or browny orange almost round fruit, sometimes blushed purple, about half way between golf ball and tennis ball sized, with a brittle fruit wall enclosing opaque white pulp that is sweet, perfumed and aromatic. The plant is damaged by frost, and in warm temperate areas, it fruits unreliably.
PASSIONFRUIT, HARD SHELL PASSIONFRUIT Passiflora maliformis 'Sweet Calabash'. This is a small vine, reaching only 20ft/6m. It is very frost tender indeed, and can only be grown in the most favorable microclimates. The flowers are very pretty, white and purple, and fringed. The fruit are small-about, or a bit less than, golf ball size. They are dusky yellow when ripe. The fruit are amazingly hard-it takes a hammer to break them open. The reward is a slightly musky, perfumed and aromatic delicious sweet opaque pulp. The seed is hard to find, but worth growing if you have the right climate or space in your greenhouse for its restraint, flowers, connoisseur flavor, and bizarre impenetrability.
PASSIONFRUIT, GIANT GRANADILLA Passiflora quadrangularis This is the queen and king of all passionfruit-at least in terms of size. The fruit can be as big as a melon! They fruit virtually year round, and in the subtropics, a single vine can produce upward of a hundred fruit. The plants are extensive growers in the very warmest parts of warm temperate areas, reaching 50ft/15m, and they set fruit readily. The quality of the fruit is very indifferent in the wtz, and the fruit take a long time to mature. The flowers are very large, spectacular with purple and white filaments against the red sepals. The fruit are up to 12in/30cm long, oval/oblong, turning greeny orange when ripe. The pulp is purple, sweet/acid, pleasant but not outstanding. Unless you have lots of space, or a strong hobby interest, it is better to grow a smaller species such as the purple passionfruit.
PASSIONFRUIT, PURPLE Passiflora edulis This fast growing vine is vigorous, very easy care, and quite ornamental with its dark green, glossy leaves and interesting purple and white fringed flowers. The vine needs something to climb on, a trellis, wires, a shed-all will do. The fruit are a bit bigger than golf ball size, purple skinned, and produced in profusion. They are ready when they fall from the vine. The fruit are excellent at this stage, but become even sweeter and more flavored if they are collected and allowed to shrivel slightly. Fruit have to be collected from the ground regularly, because they can sunburn. Root rot is the main problem, and the only cure is prevention. Grow passionfruit in well drained soil. The plants aren't long lived, and can be replaced after 5 or 6 years. Give the plants a dressing of a balanced fertiliser several times a year.
Passionfruit growing Some brief cultural notes on the purple passionfruit, and brief notes on other species.
Passionfruit in New Zealand Commercially oriented, but very good notes by the former MAF Horticulture group. Especially good on pruning and care. Note: the recommendations on P. mollissima growing should not be implemented in New Zealand, as this species is now regarded as a weed.
PEACH Prunus persica-The peach does best where there is a hot and dry
summer climate. In humid coastal areas they are subject to
fungal diseases, chiefly leaf curl, which causes defoliation,
and brown rot, which rots the fruit just at or before
maturity. A single copper spray at leaf drop largely takes
care of leaf curl, but preventing brown rot requires some
fairly staunch fungicides applied every few weeks of the
season, and applied thoroughly. Peaches really need reasonably
free draining soil. The best strategy for the urban food
gardener in the humid parts of the wtz is to keep the trees
healthy with excellent nutrition, grow less susceptible
varieties, and hope for a dryish spring and summer. Removing
infected fruit also helps keep the infective spore load down.
Most peach varieties are self fruitful. However, if you are
planting 'J. H. Hale', 'Stark Honeydew Hale', or 'Stark Hale
Berta Giant', you need to plant another variety to assure
adequate pollination. The dwarf peaches make spraying more
feasible, but the fruit quality doesn't really match the
mainstream cultivars. There are definite strong landscape
values from the highly ornamental pink spring blossoms, and
there are some cultivars that have exceptional connoisseur
eating quality, which, because they are too soft, or too small
etc, will never appear in the supermarkets. Peaches come into
bearing quickly, within 3 years of planting, and if the
variety is matched correctly to your local climatic
conditions, are reliably productive. Peaches do, however, need
extensive pruning every year. They do best in dry summer
areas, and are relatively short lived in cooler and wet or
humid summer areas. If you have the right climate, a free
draining soil, and are prepared to prune, then peaches can be
immensely rewarding of exceptional tree ripened fruit and
connoisseur fruit not commercially available. Peaches don't
ripen well in storage, and commercial peaches are picked just
prior to softening to enable shipping, and many modern
varieties achieve high color well before they are mature, so
even a good looking peach at the market won't necessarily have
the accumulation of sugars you can achieve by letting your
crop of the same variety hang on the tree to softening. If you
live in a cool or wet summer area, then you will have to be
dedicated, and expect some disappointments, especially in cool seasons.
The peach fruits quickly from seed, and there have been vast numbers of varieties developed over the years. It is a relatively short lived tree, for a variety of reasons, except in dry climates. Therefore a vast number of cultivars have also been abandoned or superseded over the years. Seek out a knowledgeable specialist nursery person or an authoritative book for advice on cultivars, or see the links below.
Peach & Nectarine growing in USA, North Carolina JJJJ A very good, detailed page on everything about peach
culture in North Carolina, with particular reference to cultivar
chilling requirements. Brief notes on 27 cultivars. Written for
commercial orcharding, but the principles remain the same for us
home gardeners. From the North Carolina Cooperative Extension
Service, NC State University.
PEAR Pyrus communis Pears do well even in drier, hot inland
climatic conditions. In some countries, particularly USA, dry
summer weather is essential to control the spread of
fireblight, a bacterial disease whose spread is enhanced by
humid weather. Oregon 18 and Old Home are highly resistant to
fireblight. In contrast, most of the common dessert pear
cultivars (Bartlett, Beurré Bosc, Beurré
d'Anjou, Doyenné du Comice, Packham's Triumph and
Winter Nelis) and rootstocks (Quince A and C) are highly
susceptible. Fireblight is present in New Zealand, but is not
a problem, for reasons poorly understood. Fireblight is
effectively not present in Australia. Pears grafted on
dwarfing rootstocks such as quince rootstock reach only
2-3m/6-10ft. Grafted onto pear seedlings they can grow
anything from 4-8m/13-26ft. Unlike
apples, which are ripe when they look ripe, pears are
difficult to pick at exactly the ripe stage: picked too soon
they are poor quality, picked too late and they go soft in the
middle. Most high quality cultivars are available commercially
at the supermarkets, and given the need to spray, the space
could probably be used more profitably by an apple tree.
The pear is very amenable to training into cordons and espaliers and other such architectural landscape forms, and when well done makes a magnificent spring show of white blossom.
Pears are self infertile, and must have another suitable variety as a pollinator. Plant pears in pairs, you might say.
Beurre Bosc is pollenized by William Bon Chretien and Winter Nelis. It has excellent connoisseur quality.
Doyenné du Comice is pollenized by William Bon Chretien and Winter Nelis, plus Beurre Bosc. A good cultivar for areas with cool summers and mild (low chill) winters. A premier connoisseur pear when grown in conditions that suit it.
Louise Bonne de Jersey is pollenized by Conference.
Packham's Triumph is pollenized by William Bon Chretien.
Bartlett/William Bon Chretien is pollenized by Beurre Bosc, Clapp's Favorite, and Winter Nelis. A good cultivar for areas with cool summers and mild (low chill) winters.
Winter Nelis is a small late season pear, and it will store for several months without refrigeration without breaking down. Winter Nelis is pollenized by Beurre Bosc and William Bon Chretien.
Pears in New Zealand JJ
Brief notes on the fruit and pollenizer requirements
of 14 cultivars of pears for New Zealand home gardeners. A
Hub fact sheet.
Pictures of Pear Cultivars JJJJ
80 Color plates from the book 'The Pears of New York' by
U. P. Hedrick, published by the New York Agricultural Experiment
Station in 1921, and scanned in by the US Department of
Agriculture National Clonal Germplasm Repository at Corvallis,
Oregon. Older varieties only illustrated, but done superbly.
Growing Pejibaye - from the Center for New Crops & Plant Products, at Purdue University Site, an extract from Julia Morton's Book 'Fruits of warm climates'. Covers Description, Origin and Distribution, varieties, suitable climates and soils, propagation, culture, harvesting, pests and diseases and more. Concise, informative. 3 good photos of fruit and the palm
PERSIMMON Diospyros kaki - The
oriental persimmon fruit speak of great possibilities-at their
best they are the nectar of the Gods, and more often they are
disappointing or good but with an unpleasant edge. The fruit
are very variable in all respects-size, shape, seeded or not,
sweetness, texture, tree form, vigor, and autumn coloring.
Persimmons need a fairly warm, long growing season to pump up
the sugars and to eliminate the major bugbear of
persimmons-the tannins in the flesh. All persimmon cultivars
have tannins, it's just that some have naturally much lower
levels. Some have such low levels that the fruit can be eaten
while it is still firm. These 'firm ripe' cultivars include
most of the commercial supermarket varieties, such as 'Fuyu'.
While sweet, the fruit have little real flavor at this stage,
tasting more or less like a sweet carrot. These low tannin
types are referred to as 'non-astringent' persimmons. If these
fruit are left on the tree to mature fully, they become full
of rich flavor once picked and left to soften. The other group
of persimmons (altho' the amount of tannin in various
cultivars is really a gradation from one extreme to the other,
rather than fitting into two groups) have so much tannin that
they cannot be eaten when they are colored but still hard.
They have to be left as long as possible on the tree, and then
picked and left to become very soft indoors. If your area is
not warm enough, or the season is cool, there is a tendency
for persimmon cultivars with the highest amount of tannin to
still have some residual astringency left even when soft ripe.
Adequate heat in the growing season is the prime factor in
assuring tannin free fruit for any persimmon. The best bet is
to go for fine flavored fruit, where they are obtainable, and
in cooler areas select from the non-astringent group which are
least likely to have residual astringency when fully soft
ripe. Persimmons need some shelter from wind, as the beautiful
new spring growth is quite tender. They will grow in a wide
variety of soils as long as it is not waterlogged. Persimmons
strictly don't need pruning, as, with a few notable
exceptions, they are relatively moderate growing trees. But,
as the fruit is borne on the outside of the canopy, the fruit
will end up further and further out of reach. And birds love
persimmons. The only way to be sure of harvesting tree ripened
fruit-vital for varieties with high levels of tannin-is to
individually bag each fruit, or net the tree. If you are going
to net the tree then you will need to prune after fruiting to
keep the size manageable. Persimmons fruit on current season's
growth. They will start bearing fruit about the third year in the ground.
Fuyu-low tannin variety, needs warmth, very good when tree ripened.(US, NZ, AU)
Izu-low tannin variety, small tree (US, NZ)
Jiro-low tannin variety, very large fruit, fairly small tree (US, NZ, AU)
Tanenashi-moderate tannin, must be eaten soft ripe, large, conic fruit, pasty textured flesh, heavy bearer, reliable, good autumn colors (NZ)
Hiratanenashi-moderate tannin, must be eaten soft ripe, medium sized flattened fruit, extremely vigorous and upright tree.(NZ)
Wrights favourite-moderate tannin, must be eaten soft ripe, very sweet superb flavor, reliable, productive (NZ)
Hachiya-moderate tannin, must be eaten soft ripe, large, conic, excellent flesh texture and flavor (US, NZ)
Persimmon in USA - North Carolina JJJ From the North Carolina Cooperative Extension Service, a good one page review covering pros and cons, brief variety notes, planting, fertilizing, harvesting. A good overview.
PLUM Prunus domestica, P.salicina, P.insititia - The 'common plum' of Europe (P.domestica)
includes some of the most excellent connoisseur varieties
there are; as well as many mediocre or worse. Certain European
plums are also used for drying into the dried plums we call
'prunes' (from the name of the genus, 'Prunus' ). The
winter chilling requirements (cold is needed to initiate flower
buds and promote spring leaf bud burst) for European plums is
about the same as for apples. The disease 'brown rot' can
damage flowers and fruit in humid areas. The 'Japanese' plums
(P.salicina) are not Japanese, they originate from
China. 'Japanese' plums need less chilling again than European
plums and bloom very early in spring, which makes them well
suited to the wtz- except for frost pocket areas where early
blossom may be damaged. Brown rot can also affect 'Japanese'
plums in humid maritime areas, but usually only the mature
fruit. Damsons (P.insititia) vary quite a bit in the
amount of chilling they need, so while some cultivars will be
extremely fruitful in the wtz, others will not. The fruit are
usually small to medium sized, often tart but the tartness
reducing the longer it hangs on the tree. Some varieties are
not tart at all, but sweet and pleasant. Damsons are noted for
their adaptability and extreme productivity. Greengage
(P.domestica) - there seem to be various forms of
greengage, some vigorous, some not, some freestone, some semi
freestone. This is one of the most exquisite plums that can be
grown, so it is important to buy a tree propagated from a
greengage that is actually fruiting in your region. Japanese
plums bloom earlier than European plums, and for this reason
the two types will not usually pollenize each other. Plums
generally need to be cross pollenized by another variety. If
you don't have space for two trees, try to get a double
grafted tree, or select a variety that is self fertile and
doesn't need a pollenizer. There are no fully dwarfing
rootstocks for plums, but plum trees can be naturally small.
Usually they are medium sized trees, altho they can be pruned
lower. They naturally need little pruning, and what pruning is
needed is done after cropping. Plums do best on a good soil,
but they are also relatively tolerant of less than ideal
drainage. They are affected by diseases, the importance and
severity of which depends on how wet and humid your climate
is, and whether you can be bothered spraying. But, as a
generalisation, you can get away with not spraying the tree in
most areas. The most important drawback is that birds will
cause a lot of damage unless you have a tree and crop big
enough for the birds and yourself. Small trees can be netted.
The other negative is that, like all stonefruit, the plum is
suceptible to a serious fungal disease called 'silverleaf'.
Silverleaf seriously damages the tree, and often weakens it
so much that it eventually dies. The trees can be vaccinated with a
biological control agent when they are young, and that more or
less solves the problem.
'Stanley'(UK,USA), the number one European type, is self fruitful. 'Bluefree'(USA) and 'Stanley' are the most common pollenizers for European plums.'Greengage'(USA,UK,NZ,AU) is pollenized by 'Coe's Golden Drop'(USA,UK,NZ,AU) or 'Diamond'(USA,UK,NZ,AU). 'Redheart'(USA) is one of the best pollenizers for Japanese plums. 'Santa Rosa'(USA,NZ,AU), one of the most widely planted Japanese plum is partly self fertile. 'Burgundy'(USA), 'Kelsey'(USA) 'Nubiana'(USA), 'Simka'(USA) 'Methley'(USA) are fully self fertile and don't need a pollenizing variety.
Plum/Prune for areas with cool summers and mild winters: try Methley, Beauty, Shiro, Early Italian, Seneca
Plum cultivars The Hub's brief notes on 59 plum cultivars (European, prune, Japanese, cold hardy), and links to plum sites.
Plums in New Zealand
POMEGRANATE Punica granatum This is a useful home garden small (about 4.5m/15 feet high and wide) shrubby tree for those drier and hotter areas where it matures fruit well. Pomegranates will grow and fruit in most parts of the warm temperate zone, but only in the hottest and most mediterranean like parts of the wtz will the trees bear regularly and bear fruit worth having. The trees are deciduous, and stand heavy frost, but late spring frosts will wipe out the flower buds. The red flowers are very attractive, as are the small apple sized pinky red fruit. The plant itself grows on most soils, needs little pruning, and will start bearing in about its fourth year in the ground. It is also amenable to espaliering and pruning to shape. The fruit are normally grown for their juice, which in the best varieties is a mix of sweet and tart. They are self fertile, and one tree would probably bear more than you would want to eat.
PUMMELO Citrus grandis Pummelos are similar to American grapefruits, only bigger. They are very popular in Asia, and there are a range of flavors, from sweet to sour, and a range of flesh colors, from pale yellow to red. Interestingly, they are slightly better adapted to the cooler parts of the warm temperate areas than grapefruit, which need high heat and a long growing season. The quality of the fruit is not as good as in the hotter areas, with a tendency to very thick flesh and high acidity. Nevertheless, the best microclimates can successfully mature these fruit. Probably best regarded as a collector's item, unless you have plenty of space to try other citrus. They require the usual citrus conditions of free draining soil, organic mulch and/or water in summer, regular feeding in the growing season, shelter, full sun.
QUINCE Cydonia oblonga The quince needs less chilling than apples or pears, and it seems adapted to both humid and hot dry areas. They are self fertile, adaptable as to soil, have beautiful quite large pink spring flowers, and bear heavily when well established. The fragrant yellow fruit are the size of a large lemon, but can't be eaten fresh. They are only useful for cooking. In addition, in humid areas they are subject to leaf spot diseases. And they can sucker from the base quite persistently, which can be annoying. Unless you want to cook with quinces, use the space for something else.
RARE FRUIT -
there are gazillions of species, ecotypes, and forms of fruit
plants that could be grown, but, for a wide variety of
reasons, rarely are. For further information, thrash around in
the sites listed below, or use the search facility on top of
the index (or any good search engine).
Rare fruit in New Zealand
A page from the Tauranga, New Zealand, Tree Crops Association listing and commenting on some of the rare fruit encountered on their 1999 field trip. Post the end of the new crop/rare fruit boom of the early eighties and the corporatisation of the DSIR, rare fruit are now extremely rare in New Zealand, so of interest.
RASPBERRY Rubus idaeus For
practical purposes, there are two main groups of
raspberries - summer fruiting, and autumn fruiting. Summer
fruiting black raspberries ('blackcaps') will only fruit in
the very coolest part of warm temperate areas-they are really
a temperate fruit. Some purple raspberry cultivars (derived
from crosses of red and black raspberries) fruit well in low
chill but cool summer parts of the warm temperate zone. Even
red raspberries must be carefully selected, as few are adapted
to the relatively low winter chill conditions of warm
temperate areas. European raspberries need substantial chill,
and it is usually hybrids derived from American native red
raspberries that do best in warmer areas. Raspberries
are very much worth growing. Well grown, they produce a great
deal of fruit. And the fully cane ripened fruit has the
highest connoisseur qualities. The flavor and aroma of
raspberries is intense and universally liked. A soft, fully
ripe raspberry is a fruit without compare.
But they require more work than a lot of other fruits. True, they are usually grown in rows, and can therefore be fitted into awkward spaces. And they will take a little shade. But the canes of vigorous varieties of summer raspberries flop all over the place and scratch you with their tiny little sharp stem prickles if you don't tie them up. So you need a wall with a wire, or a free standing wire to tie them to. Purple raspberries have particularly long canes, and if you don't tie them up, the tips will take root where they touch the ground. Red raspberries sucker like crazy. True, some suckers are needed for next year's crop, but many suckers appear at quite some distance from the plant. If they appear in the lawn, they can be mowed. But if they appear anywhere where you need to spray with herbicide, you can kiss your raspberries goodbye. The only way to prevent suckers spreading is to bury tin or some other barrier material 60cm/2 feet in the ground around the edge of the row. Some cultivars sucker a lot, others relatively little. The other caveat with raspberries is that they are prone to root rot, or rather, fungal infection of the roots-even on well drained soil. Again, some are more prone to root disease than others. The only thing you can do is plant in ground that hasn't had tomatoes, potatoes, eggplants, or peppers in it, and provide good drainage and a lot of organic material and mulch. Having edged, added organic material, fertilised regularly through the growing season, mulched, tied up the canes for this summer's crop, removed superfluous suckers, then you can expect heavy flowering and a good crop. So long as you net the row to keep the birds from stealing it. But it's all worth it.
Autumn raspberries are pruned to near ground level in winter, and the new season growth flowers and fruits in the following autumn. Heritage is the best autumn raspberry for warm areas.
Amethyst purple raspberry does well in warm temperate conditions. It has slightly more acid fruit than most raspberries, but is very vigorous-if stout prickled-and reliable.
Willamette red summer raspberry is also reasonably well adapted to parts of the wtz.
SERVICEBERRY Amelanchier species, 'Juneberry'. A hardy tall shrub that produces small pleasant
berries for fresh eating or use in pemmican or preserves. Self fertile.
STRAWBERRIES Fragaria x ananassa Strawberries
are an excellent choice for the home fruit gardener, so long
as the plants are replaced after two crops, they are covered
against birds, and a flavorsome variety is available to grow.
The highly colored fruit of the supermarket look fantastic,
but they often lack sweetness and flavor and are very
disappointing. Growing the same commercial varieties at home
brings little improvement in flavor or sweetness, if any. The
best strategy is to try to find a cultivar known for its
flavor, such as 'Captain Cook'. These are not always as
productive, and the fruit may be smaller, and in some cases
much softer, but the flavor and sweetness is a revelation.
Unfortunately, such varieties are now very difficult to find.
Strawberries need fertile soil, free drainage (they are very
subject to root disease), and constant evenly moist soil. Pull
the first flowers off to allow the plant to make good leaf
growth to sustain a good crop. Strawberries get leaf spotting
diseases, but as long as the plants are well fed, kept moist, and
replaced after several years, it is not worth spraying. If
there is a great deal of rain at fruiting, some or all of the
fruit will be affected with the grey mould fungus. You can do
preventative fungicide spraying, but most years the damage is
within acceptable limits, so you can usually live with it.
Everbearing strawberries are able to flower and fruit for as long as the temperatures are high enough, which is a relatively restricted part of the warm temperate zone.
Strawberries in the home garden A very good basic fact sheet on all aspects of strawberry growing at home -varieties, soils, weeding, mulching, fertiliser, and so on. Produced by the North Carolina State University Co-operative Extension, USA, and therefore reflecting local climatic conditions, it is nevertheless reasonably universally applicable.
SURINAM CHERRY Eugenia uniflora 'Pitanga', 'Brazilian cherry'. A very useful plant for the home food garden, because it is a small leafed, wiry stemmed bushy tree or a large shrub (with small creamy white flowers), and won't form massive roots that can damage paved areas, and because it will remain fruitful even when trimmed to fit into a narrow space, such as a border. It can also be clipped into a fruiting hedge. The shiny small leaves are very attractive, as is the bronzy red tender new growth, and it has quite good autumn foliage. That said, it is really only adapted to the very warmest parts of the warm temperate areas. The juicy fruit is small, thin skinned, about 1-1½ inches/3-4 cms wide, vaguely roundish, with 8 deep grooves running longitudinally, and with a fairly large stone. The fruit is very variable, most trees producing clusters of acid red fruit, and with some producing rather resinous, unpleasant fruit. The best types are mild, aromatic, subacid and sweet, with a melting quality, and very pleasant. 'Lorver' and 'Westree' are two very good flavored cultivars. Fruit color varies from red to almost black. Selected varieties can be hard to find. They are very slow to come into fruit in the wtz, and when they do it is in early summer. Very often they will flower again immediately after fruiting. Fruiting usually begins 8 or 9 years after planting. An attractive small shrubby tree, but not one most people will be prepared to wait for fruit from.
TAMARILLO Cyphomandra betacea - This small, short lived tree produces smooth,
oval, egg sized, red or yellow fruit with red or yellow sweet
and quite high acid pulp. Some cultivars are very mild, being
moderately sweet and low acid. Selected, well ripened
varieties are good eating fresh, some are only useful for
cooking. The pure yellow form is least useful, as it lacks
acidity, and the small orange form is sweetest with the
highest flavor. Red fleshed varieties need to be very ripe, as
they have high acidity. Improved varieties are now very hard
to locate, as less fruit is grown and no germplasm or cultivar
collections exist anywhere in the world for this fruit any more.
Little Sweet-small orange fruit with orange flesh, high sweetness, high flavor and moderate acidity. Extremely hard to find. (NZ)
Oratia Red-standard commercial red skinned and fleshed variety. Good when fully ripe.(US, NZ, AU)
Goldmine-Red skinned, yellow fleshed variety with very good sweetness. A tendency to be a bit gritty in the fruit wall.(NZ)
Cynthia-A red skinned and red fleshed type which has outstanding sweetness. Now probably extinct. (NZ)
Inca Gold-golden yellow skin and orange-yellow flesh, mild flavor.
TANGELO- A cross between a mandarin and
(usually) a grapefruit or (sometimes) a pummelo. They are
somewhere between an orange and a grapefruit in hardiness, and
in cooler areas the fruit can be quite acid. Tangeloes fruit
better when there is a mandarin (not another tangelo) nearby
to pollinate them. Tangeloes make a medium to large sized tree
in time, and will bear far more fruit than you would want to
eat, given that most tangeloes have quite a bit of acid in
them. The fruit tend to be seedy, and very juicy. They peel
fairly well. The bright orange red fruit are very ornamental,
and the white flowers, like most citrus, attractive. The best
quality fruit come from the very warmest and long season
areas. The fruit mature in late winter/spring. There is a good
argument for buying, rather than growing, this fruit.
Minneola-the common commercial tangelo. The fruit are highly colored, with a prominent neck, and are carried on a vigorous tree.
Orlando-is difficult to peel, seedy, juicy, sweet, and needs a lot of heat
Seminole-is moderately easy to peel, soft, extremely juicy (messy to eat), and has to change from orange-red to orange-yellow before it is ripe. Picked too soon it is very acid, when dead ripe it has very high sugars along with the acidity.
TANGOR - Tangors are a cross between a 'tangerine' (old name for
the mandarin, no longer used) and an orange. Some so called
mandarins are in fact natural mandarin-orange hybrids, for
example 'Clementine' mandarin. Tangors need a lot of heat, but
paradoxically, they are subject to sunburn in intense heat
inland areas. This rather restricts their range. Dweet will
fruit in the warmest range of the mild summer areas, but the
fruit quality is not as good as it should be.
Dweet-a medium to large, fairly thick skinned fruit that is somewhat difficult to peel. The fruit is very juicy, and the flavor moderately sweet with unusual grapefruity undertones. Left on the tree it tends to dry out and become puffy.
Temple-similar in appearance to Dweet, peels better, with the same complex flavor. It must have heat or the fruit are acid and dry. Poorly adapted to warm temperate areas.
TARA BERRY Actinidia arguta-See "HARDY KIWIFRUIT"
TAYBERRY- Early season. A cross between the
blackberry 'Aurora' and a raspberry. The fruit are long
conical, large and dark red with very good flavor. Some people
consider it the best of the raspberry-blackberry hybrids. The
canes are long, thorny, and moderately vigorous. Grow as for blackberry.
UGLI Possibly a hybrid of a grapefruit and a mandarin (and therefore strictly a type of tangelo), the Ugli forms a larger tree than most mandarins, and requires more heat. The fruit are large, with very thick, often deeply corrugated, pale orange skin, but easy to peel. It is sometimes a little difficult to pick exactly when they are ripe - they are acid when they are underripe, and they dry out quickly if they are overripe. Definitely worth a place in a collection, but not at the expense of a mandarin.
UVALHA Eugenia uvalha (Sp. lit 'little grape') A typical subtropical eugenia, the Uvalha is a slow growing, narrow leafed, Myrtaceous 'powder puff' creamy-white flowered small shrubby tree. As long as the previous winter has been particularly mild, in mid summer it bears (usually meagerly) yellow 2.5cm/1 inch diameter fruit that are pleasant and slightly acid. There is a single, pea sized seed. The tree is slightly frost hardy. However, it takes a long time to come into fruit from seed, maybe ten years, and so is best left to the very interested.
WINEBERRY Rubus phoenicolasius-
'Japanese Wineberry'. A species from eastern Asia that has
masses of very small shiny mid red berries. The berries have
little flavor, but are pleasant. Their main use is to annoy
visitors by saying "I bet you don't know what these are". They
pick very easily, but the 'plug' is large and the fruit small,
so they have a large central cavity when picked. The stems are
packed with soft spine like prickles, which are no real
problem. The vines are stout and vigorous, but easily trained.
The plant itself has reddish stems, giving it good winter
landscape values. Birds adore this fruit, so it has to be
netted. It is also easily spread by birds.
Pictures JJJJ of the plant and fruit, plus brief descriptive notes, from the College of Natural Resources at Virginia Tech, USA
YOUNGBERRY Rubus hybrid. Early to mid season. The
Youngberry is a cross between the Phenomenal berry (very
similar to the loganberry) and the dewberry. The fruit are
wine-red to black, very shiny, and smaller and rounder than an
Olallie. The flavor is sweet, mild, and is much more likely to
be acceptably edible even if it is picked a little immature,
unlike boysenberry and blackberry. The plants are
moderately vigorous. There is a thornless version. Culture is
as for blackberry.
AJ is your source on the internet for everything about apples. New countries and regions are being added continually to our Orchard Trail section. Find growers in your neighborhood and get to know them. Look up different apple varieties to learn more about them.
visit Apple Journal
Methane and Carbon Dioxide Sensing for Landfill Applications - Waste & Recycling - Landfill
Application: In March 1986 an explosion destroyed a bungalow adjacent to a landfill site in Loscoe, Derbyshire. Subsequent measurements showed that 150-200 cubic meters of gas per hour were being generated by landfill waste. This event triggered a change in the way the waste industry considered and regulated gas generated at landfill sites, resulting in the landfill regulations of 2002, and in particular LFTGN03: the Guidance on the Management of Landfill Gas.
The regulations, which run to 128 pages, describe the management and control of landfill sites, and define landfill gas as being all the gas generated from landfill waste. This includes gases generated by biodegradation of waste and gas arising from chemical reactions and the volatilisation of chemicals from the waste.
These regulations require that for every landfill site, a gas management plan is put in place to ensure:
Landfill gas must be collected from all landfills receiving biodegradable waste and the landfill gas must be treated and, to the extent possible, used.
The collection, treatment and use of landfill gas is required, and must be achieved in a manner which minimises damage to or deterioration of the environment, and risk to human health.
Mature landfill gas is a mixture predominantly made up of Methane (CH4), normally in the range of 40-60%, and Carbon Dioxide (CO2). It may also contain varying amounts of nitrogen and oxygen derived from air that has been drawn into the landfill.
Because Methane is a greenhouse gas which is 21 times more harmful to the environment than Carbon Dioxide, converting the Methane to Carbon Dioxide by burning it is considered beneficial to the environment, even though Carbon Dioxide is produced. In addition, by measuring the amount of Methane burnt, carbon credits are generated, which can then be traded, thus providing a useful revenue stream. Methane is also a useful fuel that can be used to power gas generators to produce electricity and heat.
Appendix F of the regulations describes infra-red measurement as the primary method of measuring Methane and Carbon Dioxide on landfill sites.
In 1956, the founder of Edinburgh Instruments published a paper on the design and fabrication of infrared band pass filters, which are a key component of modern IR bench sensors.
Edinburgh Sensors has actively commercialised this technology over the last 40 years, resulting in a reputation for reliable, accurate, stable and low-maintenance gas-sensing products, which have been used extensively over the last 10 years by many of the leading landfill gas analyser developers, leading to installations worldwide on landfill sites.
Gascard NG is an ideal OEM sensing solution for measuring either methane (CH4) or Carbon Dioxide (CO2), having been designed for ease of integration.
Available with a 0-100% range for both CO2 and CH4, the sensor features on-board Barometric Pressure Correction and extensive Temperature compensation, which allows installation worldwide in different climates.
The Gascard NG has a range of different interface options, including analogue 4-20mA/0-20mA/0-5v, true RS232 communication, optional on board LAN support, and a serial interface for interfacing relay alarms. The on-board firmware supports either a traditional 4-segment LCD or a modern graphical display.
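As a simple illustration of how a 4-20mA output is read, here is a minimal Python sketch. The 0-100% range and the 12mA sample value are assumptions chosen for the example, not figures taken from the Gascard NG documentation; the scaling itself is just the standard current-loop arithmetic.

def current_to_concentration(milliamps, full_scale=100.0):
    """Convert a 4-20 mA loop reading to a gas concentration (% volume)."""
    if milliamps < 4.0 or milliamps > 20.0:
        raise ValueError("reading is outside the 4-20 mA loop range")
    return full_scale * (milliamps - 4.0) / 16.0

# Example: on an assumed 0-100% CH4 range, a 12 mA reading corresponds to 50% methane.
print(current_to_concentration(12.0))   # 50.0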
For OEM development, Edinburgh Sensors, can provide an evaluation kit consisting of a Gascard NG sensor, an advanced graphical display interface, and relay board allowing easy evaluation of the Gascard NG functions.
Technical support is available from our engineers, who provide one-to-one customer service throughout the evaluation and system integration process.
In addition to OEM Gas sensors, Edinburgh Sensors have been providing Gas Monitors based upon our proprietary infrared sensor technology for many years; with tens of thousands of our monitors in operation worldwide. It is hardly surprising, therefore, that these monitors have been put to widespread use in the Landfill applications.
The NEW Guardian NG Gas Monitor is already being evaluated by a number of our volume customers working within landfill applications. Wall-mounted, in an IP54 enclosure, with an integral power supply and sample pump allowing samples to be taken remotely from over 30 meters away, this Gas Monitor can be provided to measure 0-100% of Methane (CH4) or 0-30% of Carbon Dioxide (CO2).
The Monitor features volt-free relay alarm outputs, controlled by programmable alarm levels; an accurate temperature and pressure compensated measurement of the gas concentration via 4-20mA (or 0-20mA) and RS232 interfaces; and a graphical user interface with password protection, allowing not only display of the compensated gas measurement, but also control of the Gas Monitor's calibration and alarm functions.
Overview, Statistics and WHOIS
1969 Introducing the Internet
C, a portable language; UNIX, a universal operating system
1983 a landmark year
Decentralized Routing and ISPs
World Wide Web
ARIN and ICANN
New Modems, Wireless Networks and Smartphones
Foreign Characters in Domain Names
Latest US Stats
Internet in Australia and the NBN
HTML - Hyper Text Markup Language
Other Top Languages
Firstly some statistics. In 2017, 3 billion individual users access some 904 million host computers via a global routing table of 660,000 networks on 57,000 Autonomous Systems (AS). An Autonomous System is a single network/group of networks typically governed by a large enterprise with multiple links to other Autonomous Systems. These are then serviced by the several hundred backbone Internet Service Providers (ISPs) that make up the core of the Internet, overseen by five Regional Internet Registries (RIRs). E-mail is sent, and web pages are found through the use of domain names. There are now 329 million domain names, with 128 million of them ending with those 3 letters .com. All of these names are overseen by registrars with Go Daddy currently the largest, having 63 million domain names under management. That's a lot. So for this to work, you as a user connect to a local ISP's network. You then have access to his Domain Name System Server - DNS Server for short - software on a computer that translates a host name you send it e.g. www.google.com into a corresponding IP address (220.127.116.11). This Internet Protocol address specifies first the network, and second the host computer (similar to the way a phone number works). If the DNS server doesn't know the host name, it endeavours to connect (within a second or so) to an authoritative DNS Server that does.
Ultimately, that server is one of the name servers registered for that domain with its top level domain registry.
At this point your DNS server caches ("stores") that name and IP address for subsequent requests, for perhaps 24 hours or so. After that, it empties the name & IP address from its cache, which means that the next time the name is requested, the ISP has to look it up again. This cache minimizes requests made on the authoritative DNS Servers, but also ensures it won't be out of date for more than 24 hours or so on any domain. And of course this only happens if the domain changes hosts. And to further reduce Internet traffic, Desktops and Mobiles also cache the host name IP address, and copies of the web page, and only download fresh data after the set time has elapsed. A proxy server similarly caches copies of pages for computers on its network. Note, manually pressing page refresh doesn't update the DNS cache — the IP address. Click here for how to manually clear a DNS cache on your desktop. On iPhones, switching to Airplane Mode, then switching back, clears the DNS. With Android phones, navigating to Settings -> Apps -> Chrome allows you to clear the cache. For more reading,
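To watch the name-to-address step happen from your own desktop, the short Python sketch below asks the operating system's resolver (which in turn consults your ISP's DNS server and its cache) for the addresses currently held for a host name. The host name is simply an example.

import socket

def lookup(host):
    """Return the IPv4 addresses the resolver currently reports for a host name."""
    results = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return sorted({entry[4][0] for entry in results})

# The answer depends on your own resolver and its cache at the time you ask.
print(lookup("www.google.com"))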
Now, if looking up details for one of the "open" .au domains, you, as an individual, can go to Ausregistry. This database provides the IP addresses of name servers for all the domains within the five "open" 2nd level domain (2LD) space i.e. ones ending in .com.au, .org.au, .net.au, .id.au or .asn.au (asn = association). Note - since 2002 when it was appointed, AusRegistry has never dealt directly with the public in registering domains. Commercial registrars carry out this task, thus preventing potential "conflict of interest" situations within AusRegistry. Then, with regard to "closed" government 2LDs,
Some Background: On Oct 25th 2001, auDA (a Government endorsed body) became the authorized Domain Administrator for the .au TLD. They began with appointing AusRegistry in July 2002 on 4 year terms and this was last renewed in December 2013 for a 4 year term 2014 - 2018. Prior to auDA and AusRegistry,
Click here to view a document with a breakdown of annual fees charged by AusRegistry to authorized registrars.
Example - Stephen Williamson Computing Services
So, by going to AusRegistry, we learn that the name servers for the domain swcs.com.au are operated by Quadra Hosting.
This means that swcs.com.au is currently hosted on the Quadra Hosting network, who provide a virtual Web hosting service capable of hosting numerous domains transparently. Thousands of different domains might in fact share the same processor (with pages being published in different folders). If the Internet traffic grows too heavy on this shared server, the swcs domain may in the future require its own, dedicated server. (This situation has not yet been reached).
Click here for a web page that will look up the IP address for a specific domain.
Click here to download a free program that will look up any IP address or Host, by accessing the WHOIS section of each of the five regional bodies responsible for IP address registration: ARIN, RIPE, APNIC, LACNIC, and AfriNIC.
Your computer uses this IP address to form the destination address in each packet of data it sends.
There are about 4 billion addresses available under IPv4, 2 to the 32nd power.
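A small worked example of how those 32 bits split into a network part and a host part, using Python's standard ipaddress module. The address and the /24 network are arbitrary examples.

import ipaddress

addr = ipaddress.ip_address("192.168.10.37")      # an example address only
print(int(addr))                                  # the same address as a single 32-bit number

net = ipaddress.ip_network("192.168.10.0/24")     # an example network of 256 addresses
print(addr in net)                                # True - the network part of the address matches
print(int(addr) & 0xFF)                           # 37 - the host part within this /24 network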
Computer routers store and forward small data packets between computer networks. Gateway routers repackage and convert packets going between homes/businesses and ISPs, or between ISPs. These connect with core routers which form the Internet backbone. So, how did it all come together? In a nutshell, it came as a joint open exercise between the U.S. Military and the research departments at a number of key universities.
It began in 1969, when the Defense Advanced Research Projects Agency (DARPA), working with the contractor Bolt Beranek and Newman (BBN) and research teams at UCLA, the Stanford Research Institute, UC Santa Barbara and the University of Utah, connected the first four nodes of the packet-switched ARPANET.
In 1963 the Frenchman Louis Pouzin, then working at MIT, had written RUNCOM, an ancestor to the command-line interface and the first "shell" script. Back in France in 1972 he designed the datagram for use in Cyclades, a robust, packet switching network that always "assumed the worst" i.e. that data "packets" being transferred over its network would always reach their final destination via unreliable / out of order delivery services. Drawing on these ideas, in 1973 Robert Kahn & Vinton Cerf started work on a new Internetwork Transmission Control Program using set port numbers for specific uses. Click here for an initial list in December 1972. It used the concept of a "socket interface" that combined functions (or ports) with source and destination network addresses, connecting user-hosts to server-hosts.
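That socket idea survives almost unchanged in today's programming interfaces. A minimal sketch in Python: open a TCP connection to port 80 (the well-known HTTP port) on an example host and send a request. The host name is an example only.

import socket

# The destination pair (host, port) identifies the service; the source address
# and port are chosen automatically by the operating system.
with socket.create_connection(("example.com", 80), timeout=10) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(200).decode("ascii", errors="replace"))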
In the late 1970s, DARPA decided to base their universal computing environment on BSD UNIX, with all development to be carried out at the University of California in Berkeley. UNIX had been greatly influenced by an earlier operating system Multics, a project that had been funded by DARPA at MIT in the 1960s.
With IPv4 in 1980, the National Science Foundation created a core network for institutions without access to the ARPANET. Three Computer Science depts — Wisconsin-Madison, Delaware, Purdue initially joined. Vinton Cerf came up with a plan for an inter-network connection between this CSNET and the ARPANET.
Meanwhile, at the hardware cabling level, Ethernet was rapidly becoming the standard for small and large computer networks over twisted pair copper wire. It identified the unique hardware address of the network interface card inside each computer, then regulated traffic through a variety of switches. This standard was patented in 1977 by Robert Metcalfe at the Xerox Corporation, operating with an initial data rate of 3 Mbps. Success attracted early attention and led in 1980 to the joint development of the 10-Mbps Ethernet Version 1.0 specification by the three-company consortium: Digital Equipment Corporation, Intel Corporation, and Xerox Corporation. Today, the IEEE administers these unique Ethernet addresses, sometimes referred to as a media access control (MAC) address. It is 48 bits long and is displayed as 12 hexadecimal digits (six groups of two digits) separated by colons, and thus allows for 280 trillion unique addresses. An example of an Ethernet address is 44:45:53:54:42:00 — note — IEEE designates the first three octets as vendor-specific. To learn the Ethernet address of your own computer in Windows, at a Command Line prompt type ipconfig /all and look for the physical address. To learn the Ethernet address of your ISP's computer, type ARP -a, then look for the physical address that applies to the default gateway.
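If you would rather read the hardware address programmatically than from ipconfig, Python's standard library will report one of the machine's MAC addresses as a 48-bit integer. A small sketch (note that the library falls back to a random number if no MAC can be found):

import uuid

mac = uuid.getnode()     # one of this machine's MAC addresses as a 48-bit integer
print(":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8)))
print(2 ** 48)           # 281,474,976,710,656 possible addresses in the 48-bit space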
Back to the Internet. On January 1st 1983, the Defense Communications Agency at Stanford split off the military network — MILNET — from their research based ARPANET network, and then mandated TCP/IP protocols on every host. In May, the
In 1984 in Europe, a consortium of several European UNIX systems manufacturers founded the X/Open Company, to promote a common, open applications environment based on UNIX.
But meanwhile on an academic level, the University of Wisconsin established the first Name Server — a directory service that looked up host names when sending email on the CSNET. In September 1984, taking this to the next logical step, DARPA replaced the HOSTS.TXT file with the Domain Name System, establishing the first of the Top Level Domains — .arpa .mil .gov .org .edu .net and .com. In 1985, with 100 autonomous networks now connected — click here to see a 1985 primary gateway diagram, registration within these TLDs commenced.
In 1986, there was major expansion when the National Science Foundation built a third network, the NSFnet, having high speed links to university networks right around the country. In 1987, the NSF awarded the contract to manage and upgrade the NSFnet backbone to the Merit Network, working in partnership with IBM and MCI.
In 1989, with 500 local networks now connected through regional network consortiums, the
Over in Europe back in February 1991,
But back in this year, Jean Polly now published the phrase 'Surfing the INTERNET'.
Meanwhile in Amsterdam, Holland, the RIPE NCC had been established in April 1992 as the first of the Regional Internet Registries, allocating IP addresses for Europe.
And in Australia, the AARNet who had linked all the universities in April-May 1990, now "applied to IANA for a large block of addresses on behalf of the Australian network community .... because allocations from the US were taking weeks .... The address space allocated in 1993 was large enough for over 4 million individual host addresses .... The Asia-Pacific Network Information Centre (APNIC) then started as an experimental project in late 1993 (in Tokyo), based on volunteer labour and donated facilities from a number of countries. It evolved into an independent IP address registry ... that operates out of Brisbane" - R.Clarke's Internet in Australia
Back in the U.S.
The Dept of Defense now ceased all funding of the Internet apart from the .mil domain. On January 1st 1993 the National Science Foundation set up the Internet Network Information Center - InterNIC, awarding Network Solutions the contract for ongoing registration services, working co-operatively with AT&T for directory, database & (later) information services.
This same year 1993, students and staff working at the NSF-supported National Center for Supercomputing Applications (NCSA) at the University of Illinois released Mosaic, the first widely popular graphical web browser.
Regarding these new dial-up home users, the plan was to be able to dial up an ISP's telephone number using a home phone modem, be automatically granted access to a modem from a pool of modems at the ISP's premises, and thus have a temporary IP address assigned to the home computer for the length of the phone call. Initial costs for these SLIP / PPP connections were $US175 per month. But competition between ISPs & new technology meant that over the next two years prices plummeted rapidly. So while Mosaic was a fairly basic browser by today's standards, its new features introduced huge numbers of "unskilled" users to the web. At the end of 1993 there were 20,000 separate networks, involving over 2 million host computers and 20 million individual users. Click here to see year by year growth.
In February 1994, the NSF awarded contracts to four NAPs (Network Access Points) or, as they are now known, Internet Exchange Points (IXPs).
On April 30 1995, the NSFnet was dissolved. The Internet Service Providers had now taken over — internetMCI, ANSnet (now owned by AOL), SprintLink, UUNET and PSINet. Click here to see a diagram. There was a massive surge in registrations for the .com domain space. In August, Microsoft released Windows 95, with its new Internet Explorer browser available in the accompanying Plus! pack.
At this time data encryption came to the fore via the Secure Socket Layer or SSL protocol, which changed all communication between user and server into a format that only user and server could understand. It encrypted this data using a shared session key, a key agreed during an initial handshaking exchange in which the user's browser sent a randomly generated secret to the server, protected by the server's public encryption key. Click here for further details.
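The same idea is easy to see from code today, with the standard library negotiating the handshake and session keys on your behalf (using TLS, the modern successor to SSL). A minimal sketch, with an example host name:

import socket
import ssl

context = ssl.create_default_context()                   # verifies the server's certificate
with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())                              # e.g. 'TLSv1.3'
        print(tls.cipher())                               # the negotiated cipher suite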
In December 1997, ARIN - American Registry of Internet Numbers - a nonprofit corporation - was given the task of registering the IP address allocations of all U.S. ISP's, a task previously handled by Jon Postel/InterNIC/Network Solutions. Meanwhile, since Sep 1995, there had been widespread dissatisfaction at the $50 per annum domain name fees for the five generic TLDs .com .net .org .gov .edu, and back in 1996 Jon Postel had proposed the creation of a number of new, competing TLDs. With this in mind, on January 28 1998, he authorized the switching over of 8 of the 12 root servers to a new IANA root zone file, thus, in effect, setting up two Internets. Within the day, a furious Ira Magaziner, Bill Clinton's senior science advisor, insisted it be switched back. Within the week, the US Govt had formally taken over responsibility for the DNS root zone file. On September 30 1998, ICANN - Internet Corporation for Assigned Names and Numbers - was formed to oversee InterNIC for names and IANA for numbers under a contract with the U.S. Department of Commerce.
In December 1998, the movie "You've Got Mail" was released with Tom Hanks and Meg Ryan and featuring AOL as their ISP. In June 1999, with ICANN's decision to allow multiple registrars of those generic domain names, .com .org and .net, Network Solutions lost its monopoly as sole domain name registrar. And with competition, registration costs for generic .com domain names dropped from $50 to $10 per annum. As mentioned previously, this .com domain name registry, by far the largest TLD with 118 million names, is now operated by Verisign who purchased Network Solutions in 2000. Around the same time, search engines became an essential part. Click Here for an article on How Search Engines Work.
Cable Modems: Firstly, some background regarding cable TV. In the US it goes back to 1948. It was introduced into Australia in 1994 by Optus, who implemented it with fibre-optic cable (i.e. transmitting via on/off light pulses). Fibre optic is more fragile than copper, and Optus (and Foxtel) employed FTTN Fibre (just) to the node, with coaxial copper wire for its final "last mile" connection. Regarding FTTP (Fibre to the Premises) OECD stats in 2009 showed that Japan had 87%, and South Korea had 67% of their households installed with it. However, the difficulties with fibre meant that FTTP installations in other countries was much lower.
Now in 1996 in the US, cable modems lifted download speeds on the Internet from 56Kbps to 1.5Mbps (i.e. over 25 fold) and more. Microsoft and NTT ran pure fibre-optic tests and saw speeds as high as 155Mbps.
ADSL Modems: In 1998, ADSL (Asymmetric Digital Subscriber Line) technology (deployed on the "downstream" exchange-to-customer side) and a small 53 byte ATM format (on the "upstream" exchange-to-ISP side) was retooled for Internet access, offering initial download speeds of 768Kbps. ATM packets had been originally developed to meet the needs of Broadband ISDN, first published in early 1988. Click here for more info.
As a sidenote, click here for an excellent article on how telephones actually work. First introduced into Melbourne in 1879. Click here for a short page re Aussie voltages, milliamps and signal strength on a typical phone line.
WiFi: In August 1999 the Wi-Fi™ (IEEE 802.11) alliance was formed to provide a high-speed wireless local area networking standard covering short distances, initially 30 metres inside of buildings and 100 metres outside, though a later standard 802.11n was able to more than double this range. Typical speeds are 4-5 Mbps using 802.11b, 11-22 Mbps using 802.11g, and over 100 Mbps using 802.11n. Click here for an article re WiFi and signal strength.
In 2001 the WiMAX™ (IEEE 802.16) Forum was launched, designed to cover distances up to 50 kms, though when hundreds of users came online simultaneously the quality of the service dropped dramatically.
Mobile Phones 1G, 2G, GPRS (2.5G), Edge (2.75G), 3G, 4G - What's the Difference:
Click here for an introduction (with photos) to each of these various mobile technologies.
Click here for the date when each was first introduced to Australia.
Click here for a current list of the largest mobile network operators worldwide.
Internet on Mobile Phones 2.5G: The packet switching technology called GPRS General Packet Radio Service running at 20-40 Kbps was commercially launched on a 2G GSM mobile phone network in the UK in June 2000, followed by Nokia in China in August 2000. With GPRS, SGSNs (Serving GPRS Support Nodes) and GGSNs (Gateway GPRS Support Nodes) route the data packets between the mobile handset and the Internet.
Internet on Mobile Phones 3G and 4G: On the 3G packet switching level, two competing standards were launched worldwide. First came the CDMA2000 EV-DO Evolution-Data Optimised high-speed system in 2000 for 2G CDMA networks. Next came W-CDMA Wideband CDMA in 2001 as the main member of the UMTS Universal Mobile Telecommunications System family. Both systems used more bandwidth than 2G CDMA, but W-CDMA was also able to complement existing GSM/GPRS/Edge networks on 2G TDMA. In Australia W-CDMA is used by all mobile carriers, with Telstra switching off CDMA EV-DO in Jan 2008. While it initially ran at 100-200 Kbps, W-CDMA has evolved to higher speeds of 1 to 4 Mbps by using HSPA High Speed Packet Access. Much higher speeds again, of at least 100Mbps, may be seen with the new IP-oriented LTE Long Term Evolution or 4G standard.
Smartphones: On the hardware front, we have had the Blackberry in 2003 with its push e-mail service, then the Apple iPhone in 2007, and the first Android handsets in 2008.
Smartphones' built-in scanning cameras, combined with their explosion in popularity, have meant that companies worldwide have standardised on designing applications that communicate with the user via QR codes that the phone's camera can read.
In recent statistics, 1.2 billion smartphones were shipped in 2014. Android ran 81% of them, 15% ran iOS (Apple), 3% ran Windows, and less than 1% were Blackberries. Click here for a recent article on the "cheap smartphone", built by companies unknown outside their own country.
Foreign Characters in Domain Names: The Domain Name System service had been originally designed to only support 37 ASCII characters i.e. the 26 letters "a - z", the 10 digits "0 - 9", and the "-" character. Although domain names could be generated in English using upper case or lower case characters, the system itself was case-insensitive — it always ignored the case used when resolving the IP address of the host. Then, in 2003 a system was released to allow domain names to contain foreign characters. A special syntax called Punycode was developed to employ the prefix "xn--" on an ASCII-encoded form of the international name.
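Python's standard "idna" codec shows the transformation; the label below, containing a non-ASCII character, is just an illustrative example:

label = "bücher"                        # an example label containing a non-ASCII character
print(label.encode("idna"))             # b'xn--bcher-kva' - the ASCII/Punycode form sent to the DNS
print(b"xn--bcher-kva".decode("idna"))  # 'bücher' - and decoded back again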
In 2006, Amazon Web Services launched the Simple Storage Service (S3) and the Elastic Compute Cloud (EC2), the beginnings of large-scale commercial cloud computing.
According to a report in the Weekend Australian January 29 2017 from MoffettNathanson, US broadband stats (100 million users) show Comcast in the lead on 25%, Charter second on 22%, AT&T third on 16% and Verizon on just 7%. Numerous others make up the remaining 30%. Click here for a list.
Latest US wireless stats (300 million users) show Verizon in the lead on 37%, AT&T second on 30%, T-Mobile third on 17% and Sprint fourth on 15%. By far these outweigh the rest, the balance making up just 2%.
Now, to summarize. IP addresses are used to deliver packets of data across a network and have what is termed end-to-end significance. This means that the source and destination IP address remains constant as the packet traverses a network. Each time a packet travels through a router, the router will reference its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question (note that a router does not necessarily know the complete path from source to destination — it just knows the MAC hardware address of the next hop router to go to). If a match is not found, one of two things happens. The packet is forwarded to the router defined as the default gateway, or the packet is dropped by the router. To see a diagram of a packet showing its application layer Email/FTP/HTTP overlaid with separate Transport (TCP/UDP), Internet (IP) and Ethernet layers, click here.
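A toy illustration of that routing decision, using Python's ipaddress module. The prefixes and next-hop names are invented for the example; a real router also holds the outgoing interface and the MAC address of each next hop, and its table is vastly larger.

import ipaddress

# A miniature routing table of (prefix, next hop) pairs.
table = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(destination):
    """Return the next hop for the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))     # router-B (the /16 is more specific than the /8)
print(next_hop("8.8.8.8"))      # default-gateway (only the catch-all 0.0.0.0/0 matches)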
Click here to see the latest BGP Border Gateway Protocol, the Internet's global routing table. Click here for an analysis of the headings.
Now that we have some background, let's learn more about IP address allocation in Australia.
The company, Stephen Williamson Computing Services, is currently hosted at IP address 18.104.22.168
By clicking on www.iana.org we learn that 22.214.171.124 - 126.96.36.199 i.e. 16 million addresses were allocated to APNIC Asia-Pacific Network Information Centre. And by clicking on APNIC we learn that IP Addresses 188.8.131.52 - 184.108.40.206 (which is 2000 addresses) were allocated to Net Quadrant Pty Ltd, trading as Quadra Hosting in Sydney.
APNIC is a nonprofit organization based in Brisbane, since 1998, having started as a pilot project in Tokyo in late 1993. Today the majority of its members are Internet Service Providers (ISPs) in the Asia-Pacific region. Naturally, China is involved. In Australia, Telstra (who had purchased the AARNet's commercial businesses in 1995) and Optus are two national ISPs.
In 1999, Optus (followed by Telstra) introduced Cable modems offering high speed connections transmitted over their HFC television networks, a Hybrid of Fibre-optic cable running to each street cabinet (node), then copper Coaxial cable into each house. Currently as of 2016, Australia has about one million HFC cable users.
With coaxial cable, used for carrying TV channels as well as broadband Internet, the accessible range of frequencies is 1,000 times higher than telephone cable, up to 1 gigahertz, but the Internet channel bandwidth for uploading and downloading data is then shared between about 200 houses per node.
In 2000, Telstra (followed by Optus and other service providers) introduced ADSL modems providing broadband (high-frequency) signals over copper (telephone) wire. It rapidly became the broadband standard for desktops, with about five million users as of 2016.
In using ADSL in Australia, with filters, the telephone line gets divided into three frequency or "information" bands, 0-4kHz carries the voice, 26-138kHz carries digital upload data, and 138-1100kHz carries the high frequency, high speed digital download data. One weakness with ADSL though lies in the fact that, without repeaters, the phone company was unable to transmit these high frequencies over a long distance. It meant in many cases that 4½ kilometres was the maximum limit between the modem and the telephone exchange. It also suffered where there was poor quality wiring.
With both cable and ADSL (and wireless), Telstra and Optus and the other service providers have a pool of IP addresses, and use them to allocate a single IP address to each customer's modem (or smartphone) while it stays switched on. For customers with slower
Click here for a list of Telstra telephone exchanges in Australia, including locations and 3rd party DSLAM presence.
Now some further statistics. ABS data shows Australia had 13.5 million active internet subscribers at the end of 2016. While the number of dial-up subscribers has all but disappeared, down from 1.3 million in 2008 to 90,000 in 2016, the faster types of connection increased from 6.6 million to 13.4 million over the same period. This growth has been predominantly in mobile wireless, which has more than quadrupled. The ABS figures show mobile subscriptions climbed from 1.37 million to 6 million over that eight-year period, giving mobile wireless 50 per cent of the broadband market compared with 20 per cent previously.
Click here for an interesting article on commercial peering in Australia, the establishment of the so called "Gang of Four" in 1998, Telstra, Optus, Ozemail (sold to iiNet in 2005) and Connect (in 1998 part of AAPT, with AAPT then sold to iiNet and TPG).
In January 2015, the top four retail ISPs for landlines were Telstra, Optus, iiNet and TPG. For mobiles, there are three — Telstra, Optus and Vodafone. In March 2015, TPG advised of its intent to take over iiNet. This was approved by shareholders on 27th July, and by the ACCC (Australian Competition and Consumer Commission) on 20th August.
The National Broadband Network is the planned "last mile" wholesale broadband network for all Australian ISPs, designed to provide fibre cable either to the node, or to the premises for 93% of Australian residents, and wireless or satellite for the final 7%. Rollout has been slower than anticipated. According to a report in March 2015, a total 899,000 homes and businesses had been passed, and 389,000 had signed up for active services. Eventually, everyone will have to switch across.
Click here for their current rollout map. Move the red pointer to the area you're interested in, and use the scroll wheel on your mouse or the +/- icons in the bottom right hand corner to zoom in and zoom out.
When pages have a .html or .htm extension, it means they are simple text files (that can be created in Notepad or Wordpad and then saved with a .htm extension). Hypertext comes from the Greek preposition hyper meaning over, above, beyond. It is text which does not form a single sequence and which may be read in various orders; especially text and graphics ... which are interconnected in such a way that a reader of the material (as displayed at a computer terminal, etc.) can discontinue reading one document at certain points in order to consult other related matter.
You specify markup commands in HTML by enclosing them within < and > characters, followed by text.
E.g. <a href="http://www.swcs.com.au/aboutus.htm" target="_blank"> Load SWCS Page</a>
<img src="steveandyve2.jpg" align=left> will load the jpg file (in this example it is stored in the same folder as the web page) and align it on the left so that the text that follows will flow around it (on the right). If the align command is omitted, the text will start underneath it (instead).
Note, only a few thousand characters are generally involved in each transfer packet of data. If many transfers are necessary to transfer all the information, the program on the sender's machine needs to ensure that each packet's arrival is successfully acknowledged. This is an important point: in packet switching, the sender, not the network, is responsible for each transfer. After an initial connection is established, packets can be simply resent if that acknowledgement is not received.
Most of these examples can be seen on this page that you are viewing. To see the text file that is the source of this page, right click on the mouse, then click View Source.
See below http://www.swcs.com.au/top10languages.htm for a brief summary of the current top 10 programming languages on the Internet.
Background information to this article came from here.
| Name | Year | Based On | Written by |
| 1. Java | 1995 | C and C++ | Sun Microsystems as a graphical language to run on any operating system (Windows, Mac, Unix) inside a Java "virtual machine". It is now one of the most in-demand programming languages, playing a major role within the Android operating system on smartphones. Sun was started by a team of programmers from Stanford University in California in 1982, building Sun workstations that ran on the Unix operating system. |
| 2. C | 1972 | B | AT&T Bell Inc. as a high-level structured language with which to write an operating system — Unix for Digital Equipment Corporation (DEC)'s PDP-11. |
| 3. C++ | 1983 | C | AT&T Bell Inc. to provide C with "classes" or graphical "object" extensions. Used in writing Adobe graphical software and the Netscape web browser. |
| 4. C# | 2000 | C and C++ | Microsoft to run on Windows operating systems. |
| 5. Objective-C | 1988 | C | Licensed by Steve Jobs to run his NeXT graphical workstations. Currently runs the OSX operating system on Apple iMacs and iOS on Apple iPads and iPhones. |
| 6. PHP | 1997 | C and C++ | University students as open source software running on web servers. Major community release by two programmers, Andi Gutmans and Zeev Suraski, in Israel in 2000. Used in Wordpress and Facebook. |
| 7. Python | 1991 | C and C++ | University and research students on web servers as open source software. Click here for sample instructions. First major community release in 2000. Used by Google, Yahoo, NASA. |
| 8. Ruby | 1995 | C and C++ | Japanese university students as open source software for websites and mobile apps. |
| 10. SQL | 1974 | | Initially designed by IBM as a structured query language, a special-purpose language for managing data in IBM's relational database management systems. It is most commonly used for its "Query" function, which searches informational databases. SQL was standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) in the 1980s. |
** End of List
1. Gilster, Paul (1995). The New Internet Navigator. (3rd ed.) John Wiley & Sons, Inc.
2. Roger Clarke's Brief History of the Internet in Australia - 2001, 2004
3. Goralski, Walter (2002). Juniper and Cisco Routing. John Wiley - The Internet and the Router - excerpt
4. History of Computing (with photo links) and the Internet - 2007
5. History of the Internet - Wikipedia - 2010
** End of Report
Gravity is “just a theory”, like evolution, you know.
Peter Higgs is Professor Emeritus of the school of physics and astronomy at the University of Edinburgh. In 1964 he wrote a paper in which he proposed a new particle which explained a missing part of the Standard Model of physics: the Higgs Boson. (A paper he wrote which presents the history of his idea for physicists: My Life As a Boson.)
Peter MacDonald interviewed him for the BBC in February this year:
First, here’s the problem: take an object – say Isaac Newton’s apocryphal apple. Here on Earth, it weighs something. But even if you put the apple in the weightlessness of space it will still have mass. Why? Where does mass come from?
Newton couldn’t explain that. Neither could Einstein. But in 1964 Peter Higgs did.
Wow. Combined results from CMS detector at CERN show detection of #Higgs boson to 5 sigma. That's 99.9999% certain!
— Phil Plait (@BadAstronomer) July 4, 2012
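To make the "5 sigma" figure concrete, here is a rough sketch of how a significance in sigmas converts to the kind of percentage quoted in that tweet, assuming the one-sided Gaussian tail convention that particle physicists use for discovery claims.

    # Rough sketch: convert a significance in sigmas to a one-sided Gaussian
    # tail probability. 5 sigma corresponds to roughly a 1-in-3.5-million
    # chance that the signal is a statistical fluke.
    import math

    def one_sided_p_value(sigma):
        return 0.5 * math.erfc(sigma / math.sqrt(2))

    p = one_sided_p_value(5)
    print(p)                  # about 2.9e-07
    print(100 * (1 - p))      # about 99.99997 percent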
You can listen to this morning’s webcast from CERN here. The announcement was also liveblogged from Australia. They have discovered a boson using the Large Hadron Collider which is likely to be the Higgs boson.
The second half of the presentation was given by Fabiola Gianotti, the head of the group of 3000 scientists who work on the LHC’s five-storey Atlas detector.
The appointment put her in the top ranks of a profession dominated by men. She came to physics from an education steeped in ancient Greek, philosophy and the history of art – she had also trained as a pianist at the Milan Conservatory. But she ultimately chose physics to answer the big question of why things are as they are. “Physics is, unfortunately, often seen as a male subject; sterile and without charm or emotion,” she told the Cern magazine. “But this is not true, because physics is art, aesthetics, beauty and symmetry.”
There is a great outline of the Standard Model here:
The standard model is the name given in the 1970s to a theory of fundamental particles and how they interact. It incorporated all that was known about subatomic particles at the time and predicted the existence of additional particles as well.
There are seventeen named particles in the standard model, organized into the chart shown below. The last particles discovered were the W and Z bosons in 1983, the top quark in 1995, the tauon neutrino in 2000, and the higgs boson in 2012.
If theories are correct, the Higgs boson existed only during the first millionth of a millionth of a second after the Big Bang some 13.6 billion years ago. As the universe cooled, all the Higgs bosons decayed into other particles. That means to find it, scientists have to make it themselves, recreating the high energies that existed when the universe was only a millionth of a millionth of a second old.
The Higgs boson could never have been discovered without the Large Hadron Collider (LHC) – which, indeed, was partly built so that it could find the Higgs:
The Higgs boson is the last subatomic elemental particle predicted by the Standard Model to be discovered experimentally.
The model is a fundamental part of quantum physics, which manages to incorporate three of the four known fundamental interactions – the electromagnetic, weak, and strong nuclear interactions – meaning only gravity is excluded. Since its formulation in the mid 20th century, the Standard Model has been considered increasingly credible as new discoveries have conformed to its predictions.
The LHC has been the source of more geeky jokes and misunderstandings than possibly any other large science project ever, even before it was first switched on on 10th September nearly four years ago. On Monday 8th September 2008, Charlie Stross found it necessary to point out:
We. Are. Not. Going. To. Die. On. Wednesday.
The maximum energy the particles generated by the LHC reach (7 TeV) is many orders of magnitude below the maximum energy of cosmic rays that hit the Earth's upper atmosphere from space every fricking day. None of them have created black holes and gobbled up the planet, or turned us all into strange matter. Nor have they done ditto to any cosmic bodies we can see, such as planets or stars. Therefore the world isn't going to end when they switch on the LHC on Wednesday. QED.
Joking is all very well, but please, can we not be spreading the FUD and scaring people needlessly? The current climate of superstitious dread with respect to the sciences is bad enough as it is …
And yes, the presentation was in Comic Sans. From now on, the Font of Knowledge.
Update, 8th October 2013:
Peter Higgs and François Englert (Professor emeritus at the Université Libre de Bruxelles) have been awarded the 2013 Nobel Prize for physics for their theoretical prediction of the Higgs boson. There was also discussion about whether CERN should be awarded the Nobel Prize for the work of the Large Hadron Collider project in finding the Higgs boson.
But the physics committee cannot win. Give the prize for the Higgs theory, in which the eponymous boson appears, and they face another problem. A Nobel prize can be shared by a maximum of three people, but at least six physicists wrote out the theory in 1964. One – Belgian physicist Robert Brout – died in 2011. But five into three still does not go.
The committee can contrive the wording of the prize to narrow the number downwards and this is likely to happen. The prize could go to François Englert, who published the idea first, and Peter Higgs, who was second, but crucially was first to flag up the new particle. But that would rebuff the trio of Gerald Guralnik, Carl Richard Hagen and Tom Kibble, who developed the theory separately and published just a month after Higgs. The possibility has already caused acrimony among the scientists. Guralnik and Hagen, two US researchers, believe European physicists have conspired to erase their contribution from history.
Humans use mnemonic codes to refer to machine code instructions. A more readable rendition of the machine language is called an assembly language; it consists of simple mnemonic words and numbers, whereas machine code is composed only of the two binary digits 0 and 1.
For example, on the Zilog Z80 processor, the machine code 00000101 causes the CPU to decrement the B processor register. In assembly language this would be written as DEC B.
The MIPS architecture provides a specific example of a machine code whose instructions are always 32 bits long. The general type of instruction is given by the op (operation) field, the highest 6 bits. J-type (jump) and I-type (immediate) instructions are fully specified by op. R-type (register) instructions include an additional field funct to determine the exact operation. The fields used in these types are:
       6     5     5     5     5     6     bits
    [  op |  rs |  rt |  rd |shamt|funct]  R-type
    [  op |  rs |  rt | address/immediate] I-type
    [  op |        target address        ] J-type
rs, rt, and rd indicate register operands; shamt gives a shift amount; and the address or immediate fields contain an operand directly.
For example adding the registers 1 and 2 and placing the result in register 6 is encoded:
    [  op |  rs |  rt |  rd |shamt|funct]
        0     1     2     6     0    32     decimal
     000000 00001 00010 00110 00000 100000  binary
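As an illustration of how those fields pack into one 32-bit word, here is a short Python sketch (not part of any MIPS assembler) that rebuilds the add example above by shifting each field into place.

    # Sketch: pack R-type fields into a 32-bit MIPS instruction word.
    def encode_r_type(op, rs, rt, rd, shamt, funct):
        return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

    # add: registers 1 + 2, result placed in register 6 (op 0, funct 32)
    word = encode_r_type(op=0, rs=1, rt=2, rd=6, shamt=0, funct=32)
    print(format(word, "032b"))
    # 00000000001000100011000000100000
    # i.e. 000000 00001 00010 00110 00000 100000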
Loading a value from the memory cell 68 cells after the one register 3 points to into register 8:
    [  op |  rs |  rt | address/immediate]
       35     3     8         68            decimal
     100011 00011 01000 00000 00001 000100  binary
Jumping to the address 1025:
    [  op |        target address        ]
        2               1025                decimal
     000010 00000 00000 00000 10000 000001  binary
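Going the other way, a similarly illustrative sketch can unpack a 32-bit word back into its fields; only the op field and the J-type target are shown in full here.

    # Sketch: pull the op field and, for J-type, the 26-bit target address
    # back out of a 32-bit instruction word.
    def decode(word):
        op = (word >> 26) & 0x3F
        if op in (2, 3):                   # j / jal are J-type
            return {"op": op, "target": word & 0x3FFFFFF}
        return {"op": op, "rs": (word >> 21) & 0x1F,
                "rt": (word >> 16) & 0x1F, "rest": word & 0xFFFF}

    jump = (2 << 26) | 1025                # the "jump to address 1025" example
    print(decode(jump))                    # {'op': 2, 'target': 1025}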
Point of sale (POS) or checkout is the location where a retail transaction is completed, using a combination of POS software, hardware and, increasingly, mobile devices.
The combined software, hardware, and peripheral devices at a POS station manage the selling process, typically driven by a sales associate or cashier. Modern POS systems now have stations created for the customer to check themselves out by scanning and bagging their own items, then paying with a debit or credit card. The POS is sometimes referred to as the Point of Purchase (POP) when discussing it from the retailer's perspective.
For SMB retailers, the POS will be customized by retail industry as different industries have different needs. For example, a grocery or candy store will need a scale at the point of sale, while bars and restaurants will need to customize the item sold when a customer has a special meal or drink request. The modern point of sale will also include advanced functionalities to cater to different verticals, such as inventory, CRM, financials, warehousing, and so on, all built into the POS software. Prior to the modern POS, all of these functions were done independently and required the manual re-keying of information, which resulted in a lot of errors.
Software prior to the 1990s
Early Electronic Cash Registers (ECR) were controlled with proprietary software and were very limited in function and communications capability. In August 1973 IBM announced the IBM 3650 and 3660 Store Systems that were, in essence, a mainframe computer used as a store controller that could control 128 IBM 3653/3663 point of sale registers. This system was the first commercial use of client-server technology; among its early adopters were Dillard's Department Stores.
One of the first microprocessor-controlled cash register systems was built by William Brobeck and Associates in 1974, for McDonald’s Restaurants. It used the Intel 8008, a very early microprocessor. Each station in the restaurant had its own device which displayed the entire order for a customer—for example: Vanilla Shake, Large Fries, BigMac—using numeric keys and a button for every menu item. By pressing the [Grill] button, a second or third order could be worked on while the first transaction was in progress. When the customer was ready to pay, the [Total] button would calculate the bill, including sales tax for almost any jurisdiction in the United States. This made it accurate for McDonald’s and very convenient for the servers and provided the restaurant owner with a check on the amount that should be in the cash drawers. Up to eight devices were connected to one of two interconnected computers so that printed reports, prices, and taxes could be handled from any desired device by putting it into Manager Mode. In addition to the error-correcting memory, accuracy was enhanced by having three copies of all important data with many numbers stored only as multiples of 3. Should one computer fail, the other could handle the entire store.
In 1986, Eugene Mosher demonstrated his ViewTouch point of sale system, running on Atari computers, in Las Vegas, Nevada, to large crowds visiting the Atari Computer booth. This was the first commercially available POS system with a widget-driven color graphic touch screen interface and was installed in several restaurants in the USA and Canada.
Modern software (post 1990s)
In 1992 Martin Goodwin and Bob Henry created IT Retail, the first point of sale software that could run on the Microsoft Windows platform. Since then a wide range of POS applications have been developed on platforms such as Windows and Unix. The availability of local processing power, local data storage, networking, and graphical user interface made it possible to develop flexible and highly functional POS systems. Cost of such systems has also declined, as all the components can now be purchased off-the-shelf.
The key requirements that must be met by modern POS systems include: high and consistent operating speed, reliability, ease of use, remote supportability, low cost, and rich functionality. Retailers can reasonably expect to acquire such systems (including hardware) for about $4000 US (2009) per checkout lane.
Hardware interface standardization (post 1990s)
Vendors and retailers are working to standardize development of computerized POS systems and simplify interconnecting POS devices. Two such initiatives are OPOS and JavaPOS, both backed by the National Retail Federation.
OPOS (OLE for POS) was the first commonly adopted device standard on the Windows platform; JavaPOS is for Java what OPOS is for Windows, and thus largely platform independent.
There are several communication protocols POS systems use to control peripherals:
- Epson Esc/POS
- UTC Standard
- UTC Enhanced
- ICD 2002
- CD 5220
- ADM 787/788
There are also nearly as many proprietary protocols as there are companies making POS peripherals. EMAX, used by EMAX International, was a combination of the AEDEX and IBM dumb-terminal protocols.
Most POS peripherals, such as displays and printers, support several of these command protocols in order to work with many different brands of POS terminals and computers.
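To give a flavour of what one of these command protocols looks like on the wire, here is a minimal sketch of an Epson ESC/POS byte stream for a receipt printer. The device path in the comment is hypothetical, and a real printer integration would also handle code pages, status polling and error recovery.

    # Minimal ESC/POS sketch: initialize the printer, print some lines,
    # feed the paper, then cut. How the bytes reach the printer (USB,
    # serial, network socket) depends on the hardware.
    ESC_INIT = b"\x1b\x40"       # ESC @  - initialize printer
    LINE_FEED = b"\x0a"
    FULL_CUT = b"\x1d\x56\x00"   # GS V 0 - cut the paper

    def simple_receipt(lines):
        data = ESC_INIT
        for line in lines:
            data += line.encode("ascii", "replace") + LINE_FEED
        return data + LINE_FEED * 3 + FULL_CUT

    payload = simple_receipt(["DEMO STORE", "1 x Coffee   3.50", "TOTAL        3.50"])
    # e.g. write payload to the printer device or socket:
    # with open("/dev/usb/lp0", "wb") as printer:   # hypothetical device path
    #     printer.write(payload)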
Cloud-based POS (post 2000s)
The advent of cloud computing and the internet browser made it possible to deliver POS systems as services accessed from a web browser. Using the previous advances in the communication protocols for POS's control of hardware, cloud-based POS systems are independent from platform and operating system limitations. Cloud-based POS systems are also created to be compatible with a wide range of POS hardware.
Cloud-based POS systems are different from traditional POS largely because user data, including sales and inventory, are not stored locally, but in a remote server. The POS system is also not run locally, so there is no installation required.
The advantages of a cloud-based POS are instant centralization of data, ability to access data from anywhere there is internet connection, and lower costs. Cloud-based POS also helped expand POS systems to mobile devices.
Apple Mac OS X/iOS Based Systems
In recent years, a number of companies have offered Apple-centric POS systems for hospitality and retail including LightSpeed. Some of these function similar to traditional POS systems using client-server models, while newer systems can run in the cloud on iOS based devices.
Retail industry
The retailing industry is one of the predominant users of POS terminals.
A retail point of sale system typically includes a computer, monitor and cash drawer, while the POS software handles features such as gift cards, gift registries, customer loyalty programs, BOGOF (buy one get one free) offers, quantity discounts and much more. POS software can also allow for functions such as pre-planned promotional sales, manufacturer coupon validation, foreign currency handling and multiple payment types.
The POS unit handles the sales to the consumer but it is only one part of the entire POS system used in a retail business. “Back-office” computers typically handle other functions of the POS system such as inventory control, purchasing, receiving and transferring of products to and from other locations. Other typical functions of a POS system are to store sales information for reporting purposes, sales trends and cost/price/profit analysis. Customer information may be stored for receivables management, marketing purposes and specific buying analysis. Many retail POS systems include an accounting interface that “feeds” sales and cost of goods information to independent accounting applications.
Recently, new applications have been introduced by start-ups and established enterprises that enable POS transactions to be conducted using mobile phones and tablets. New entrants include Square, Intuit's GoPayments, NCR Inc.'s Silver platform, ShopKeep POS, and GoPago.
Hospitality industry
Hospitality point of sales systems are computerized systems incorporating registers, computers and peripheral equipment, usually on a computer network. Like other point of sale systems, these systems keep track of sales, labor and payroll, and can generate records used in accounting and book keeping. They may be accessed remotely by restaurant corporate offices, troubleshooters and other authorized parties.
Point of sales systems have revolutionized the restaurant industry, particularly in the fast food sector. In the most recent technologies, registers are computers, sometimes with touch screens. The registers connect to a server, often referred to as a “store controller” or a “central control unit.” Printers and monitors are also found on the network. Additionally, remote servers can connect to store networks and monitor sales and other store data.
Newer, more sophisticated, systems are getting away from the central database “file server” type system and going to what is called a “cluster database”. This eliminates any crashing or system downtime that can be associated with the back office file server. This technology allows 100% of the information to be both stored on and pulled from the local terminal, eliminating the need to rely on a separate server for the system to operate.
Such systems have decreased service times and increased the efficiency of order handling.
Another innovation in technology for the restaurant industry is Wireless POS. Many restaurants with high volume use wireless handheld POS to collect orders which are sent to a server. The server sends required information to the kitchen in real time.
Hair and beauty industry
Point of sale systems in the hair and beauty industry have become very popular with the increased use of computers. To run a salon efficiently, it is essential to keep appointments, client records, employee rosters and the checkout in a single system that can also produce performance reports. The nature of salons and spas varies depending on the setup of the business and the products offered alongside the services. This is why POS functionality comes along with most salon software.
Restaurant business
Restaurant POS refers to point of sale (POS) software that runs on computers, usually touch screen displays. Restaurant POS systems assist businesses to track transactions in real time.
Typical restaurant POS software is able to create and print guest checks, print orders to kitchens and bars for preparation, process credit cards and other payment cards, and run reports. In addition, some systems implement wireless pagers and electronic signature capture devices.
In the fast food industry, displays may be at the front counter, or configured for drive through or walk through cashiering and order taking. Front counter registers take and serve orders at the same terminal, while drive through registers allow orders to be taken at one or more drive through windows, to be cashiered and served at another. In addition to registers, drive through and kitchen displays are used to view orders. Once orders appear they may be deleted or recalled by the touch interface or by bump bars. Drive through systems are often enhanced by the use of drive through wireless (or headset) intercoms.
POS systems are often designed for a variety of clients, and can be programmed by the end users to suit their needs. Some large clients write their own specifications for vendors to implement. In some cases, POS systems are sold and supported by third party distributors, while in other cases they are sold and supported directly by the vendor.
Wireless systems consist of drive though microphones and speakers (often one speaker will serve both purposes), which are wired to a “base station” or “center module.” This will, in turn broadcast to headsets. Headsets may be an all-in-one headset or one connected to a belt pack.
Hotel business
POS software allows for transfer of meal charges from dining room to guest room with a button or two. It may also need to be integrated with property management software.
Hardware stores and lumber yards
POS software for this industry is very specialized compared to other industries. POS software must be able to handle special orders, purchase orders, repair orders, service and rental programs as well as typical point of sale functions.
Ruggedized hardware is required for point-of-sale systems used in outdoor environments. Wireless devices, battery powered devices, all-in-one units, and Internet-ready machines are typical in this industry.
Checkout system
- General computer hardware
- General computer software
- Checkout hardware
- Checkout software
- Miscellaneous store hardware
POS systems are manufactured and serviced by a wide range of firms (see the point of sale companies category for a complete list).
Point of sales systems in restaurant environments operate on DOS, Windows or Unix environments. They can use a variety of physical layer protocols, though Ethernet is currently the preferred system.
Checkout hardware generally includes a PIN pad with integrated card swipe.
Accounting forensics
Tax fraud
POS systems record sales for business and tax purposes. Illegal software dubbed “zappers” is increasingly used on them to falsify these records with a view to evading the payment of taxes.
See also
- ISO 8583
- Point of sale companies category
- Point of sale display
- Self checkout
- Standard Interchange Language
- “William M. Brobeck, John S. Givins, Jr.., Philip F. Meads, Jr., Robert E. Thomas; United States Patent 3,946,220”. http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=3946220.PN.&OS=PN/3946220&RS=PN/3946220.
- “Eugene Mosher”. Enotes.com. http://www.enotes.com/topic/Eugene_Mosher. Retrieved 2012-06-12.
- The ViewTouch restaurant system by Giselle Bisson
- Kaplan, Karen. “Do-It-Yourself Solution: Small Grocery Chain Has Big Plans for Its Retailing Software”, “Los Angeles Times“, November 29, 1995, accessed December 10, 2010.
- The Benefits and Risks of Cloud Point-of-Sale via Tapas Technologies
- . Retrieved 9 November 2012.
- “Point of Sale (POS) Systems Buying Guide”. http://pages.ebay.com/buy/guides/point-of-sale-pos-system-buying-guide/. Retrieved 2009-07-23.
This article uses material from the Wikipedia article Point of Sale, which is released under the Creative Commons Attribution-Share-Alike License 3.0.
Indianapolis (IN) – The world of graphics technologies continues to change at a phenomenal rate. It is often difficult for the average consumer to keep up with all of the advancements. Buzzwords like DVI, HDMI, UDI and DisplayPort are all the rage. But, what is behind those phrases and how do they relate to one another? We have compiled the data into one comprehensive, easy to read overview.
If you have been following technology in recent months, then there is a good chance that you have been confronted with one of these display interface technologies – and there is a good chance that you have had no idea what these standards are about and how they compare with each other. A recent announcement of a DisplayPort display by Samsung prompted us to compare the standards feature by feature.
Let’s have a closer look.
Each of these buzzwords we hear is connected back to physical pieces of hardware. But it's not just about the hardware. There are standards behind those acronyms which basically correlate to a "means to an end." In this case that is: how do we get multimedia signals from point A to point B as fast as possible?
There are many mechanical qualities for interconnects which must be precisely defined in a standard. Wrapper names like DVI, HDMI, UDI and DisplayPort all have internal qualities which make them unique and distinct. The definitions of these specs are often hundreds of pages long. They include some things most of us would probably never consider, like the following:
Contact Resistance – How hard do the pins inside the connector physically make contact? How much electrical resistance is there in the connection?
Mating/Unmating force – How hard is it to insert/remove the interconnect?
Durability – After inserting and removing 100 times, how does it hold up? Not just the interconnect, but also the “male and female” components of the pins.
Thermal shock – Suppose you pull your monitor out of your car where it's been sitting all night in the icy cold. When you plug it in, will the interconnect work?
Cyclic Humidity – Over time as humidity levels rise/fall, what is the effect?
Vibration – If there is vibration from whatever source, will the interconnect disengage?
Mechanical Shock – Suppose the interconnect is dropped or dragged along the floor as the monitor was being carried from one room to another. Can it take it?
Electrostatic discharge – Can the device take a powerful jolt?
Other components are more along the lines of what we think of when we consider video interconnect standards. These are things like cables. How long can they be? How much bandwidth is supported? And from the purely end-user point of view, what does the standard do for my multimedia experience?
Each of the standards mentioned so far comprises a family of abilities. On the next page we'll look at some abilities which might be most important to you. You should also be able to see the evolution of progress made over time. DisplayPort, for example, is the newest technology. Naturally, it should be the most comprehensive. But is it? Let's look at the side-by-side comparison on the next page.
Read on the next page: Breakdown comparison chart of DVI, HDMI, UDI and DisplayPort
Breakdown Comparison Chart
Comparison of Video Interconnect Standards
| Description | DVI | HDMI | UDI | DisplayPort |
| Revision | 1.0 | 1.3a | 1.0a | 1.1 |
| Introduced | Apr 2, 1999 | Dec 9, 2002 | Jun 16, 2006 | May 2006 |
| Last Change | Apr 2, 1999 | Nov 10, 2006 | Jul 12, 2006 | Mar 19, 2007 |
| Impetus | Visual | Visual/Audio | Visual/Audio | High Speed, Flexible Wrapper for Visual/Audio + Data |
| Controlling Authority | Digital Display Working Group | Digital Display Working Group | UDI Promoters | VESA |
| Type | Proprietary, Free | Proprietary, Fee based | Proprietary, Free | Open, Free |
| Backward Compatibility | VGA | DVI | HDMI | HDMI |
| Digital | Yes | Primary | Primary | Primary |
| Analog | Optional | Optional | Optional | Indirect |
| Fiber Optics | Yes | Indirect | Indirect | Yes |
| RF | No | No | No | Yes |
| Video | Yes | Yes | Yes | Yes, Optional |
| Audio | No | 8-channel, 192 kHz, 24-bit uncompressed | 8-channel, 192 kHz, 24-bit uncompressed | 8-channel, 192 kHz, 24-bit uncompressed |
| Data | No | Limited | Limited | 1 MB/s dedicated + lane space |
| Security | 40-bit HDCP | 40-bit HDCP | 40-bit HDCP | 128-bit AES DPCP & 40-bit HDCP |
| Max bits/pixel | 24 (48 is allowed, but not officially defined) | 48 | 36 | 48 |
| Min bits/pixel | 12 | 24 | 18 | 18 |
| Max Resolution | 2560 x 1600 | 2560 x 1600 | 2560 x 1600 | 2560 x 1600 |
| Min Resolution | 640 x 480 | 640 x 480 | 640 x 480 | zero, video data is optional |
| Max Refresh Hz | 120 | 120 | 120 | Variable, 120 |
| Min Refresh Hz | 60 | 50 | 60 | zero |
| Max Pixel Clock | 340 MHz (in dual-link mode, 165 MHz in single-link mode) | 340 MHz (in dual-link mode, 165 MHz in single-link mode) | At least 414 MHz | At least 450 MHz |
| Min Pixel Clock | 25.175 MHz | 25.175 MHz | 25.175 MHz | zero |
| Max Bandwidth | 3.96 Gbps (10.2 Gbps in dual-link mode) | 3.96 Gbps (10.2 Gbps in dual-link mode) | 16 Gbps | 10.8 Gbps |
| Rigid Clock Signal | Yes | Yes | Yes | No |
| Hot Plug | Yes | Yes | Yes | Yes |
| Audio Included | No | IEC 61937, up to 6.144 Mbps | Indirectly (via HDMI) | IEC 60958, up to 6.144 Mbps |
| Signal Repeater Defined | No | Yes | Yes | Yes |
| 1080p | Yes | Yes | Yes | Yes |
| 1080i | Yes | Yes | Yes | Yes |
Notes:
- Data: DisplayPort does not require video or audio data.
- Security: HDCP is a fee-based encryption protocol. DPCP, by Philips, is free.
- Max Resolution: Higher custom resolutions may also be available.
- Min Resolution: Only computer video modes are displayed.
- Refresh rates: Interconnects used for TV signals can clock as low as 24 Hz.
- 1080p/1080i: All standards support video modes below 1080i.
It should also be noted that each specification is nearly identical in theoretical maximum limits to the others. The differences in the number of pins and defined implementation are the only real limits. DisplayPort is the most flexible definition because of its packaging system for data. It speaks to the future needs of variable payloads, and not just audio and video. The other standards could also be redefined to move data in this way, but are not currently.
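To see why those bandwidth ceilings matter in practice, a back-of-the-envelope calculation of the raw bit rate a video mode demands is enough; the sketch below deliberately ignores blanking intervals and 8b/10b-style link overhead, which real interconnects must also carry.

    # Back-of-the-envelope raw video bandwidth, ignoring blanking and
    # link-coding overhead that real interconnects also carry.
    def raw_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel):
        return width * height * refresh_hz * bits_per_pixel / 1e9

    print(raw_bandwidth_gbps(1920, 1080, 60, 24))   # ~3.0 Gbps  - fits a single link
    print(raw_bandwidth_gbps(2560, 1600, 60, 24))   # ~5.9 Gbps  - needs dual-link DVI
    print(raw_bandwidth_gbps(2560, 1600, 60, 48))   # ~11.8 Gbps - near or above current ceilings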
Supporters:
- DVI: Intel, Compaq, Fujitsu, Hewlett Packard, IBM, NEC and Silicon Image
- HDMI: Hitachi, Matsushita, Philips, Silicon Image, Sony, Thompson, Toshiba
- UDI: Apple, Intel, LG, National Semiconductor, Samsung, Silicon Image
- DisplayPort: Agilent, AMD, Apple, Dell, Hewlett Packard, Intel, Lenovo, Molex, NVIDIA, Philips, Samsung, several others. Officially supported by VESA as the new standard.
Read on next page: Background - Display interface considerations and standards
So far we've learned there have been many different video modes evolving over time. Each of those video modes adhered to not only a visible standard on-screen, but also to an electrical one. The electrical standard dictated how those video signals got from the video card to the display.
In the past we've seen evolutions in interconnect technologies as well. We started with RF signal cables and 9-pin forms for MDA, Hercules and CGA. Later, the VGA brought us a 15-pin standard form used for 20+ years. But video demands and abilities are increasing almost exponentially. We're moving away from analog signals to pure digital ones. This means new interconnects, new standards, new proposed solutions. And each one listed in this article has people wanting it to be the one adopted.
This decade has seen many advances in graphics technology. The big 3D push at the end of the 1990s fueled an entire industry toward performance and new abilities. It all means more visual data to process in real-time. To accommodate that explosive need today we have several interconnect options. DVI, HDMI, UDI and DisplayPort. It's a veritable alphabet soup! Let's look at some of the basic qualities of the video interconnect standard to see why they're desirable.
Lossy or Lossless
These are two basic types of video format. Lossy takes advantage of the eye's ability to be tricked in certain ways. It removes some information from images (such as colors, detail or contrast), hence its name: lossy. It creates a visual data item requiring less space to store, but one which can be viewed without apparent or significant visual loss by the user. This is how formats like JPEG gain their high compression ratios. MPEG works similarly for moving pictures (MP4).
The other form is called lossless. Lossless images are always conveyed exactly. This often comes at the expense of a lot of unnecessary visual data the human eye can't really see. But, when you're dealing with video transmission at scores of frames per second (60-100), lossless images are desirable. The only inexpensive and reliable way to convey lossless images today for video is digitally. So any future standard must include a solid digital component.
Independent of display
One of the advantages of a video transmission standard is that if it's defined and employed properly, it doesn't really matter what's generating or receiving it. Each piece of equipment simply does its part, knowing that if the other piece of equipment is also doing its part then the system will work. This allows video cards to drive capture devices, projectors, LCD monitors, CRT monitors, etc. It's independent because it's based on the standard. This also reflects the importance of the standard's underlying reliability and ease of use. Multiple things will need to be driven in the future. This means we need simple, easy to use adapters and cabling.
Support of VESA standards
The Video Electronics Standards Association, or VESA, has created some basic communication protocols which convey a type of “image meta data.” This data is transmitted back and forth from source to receiver. This is one way modern operating systems can automatically determine what monitor is connected to a machine. By following and utilizing these standard protocols, a wider product acceptance is had. All modern interconnect standards follow VESA.
VESA provides the DDC (Display Data Channel), EDID (Extended Display Identification Data), VSIS (Video Signal Standard) and DMT (Monitor Timing Specifications). These are all used to convey information about what both the source and receiver are capable doing and what they are doing.
Plug and Play
The PnP model is really quite a thing. It's much more complex than most people realize. Thanks to VESA support, there are automatic queries which are made when newly identified display adapters are plugged in. These queries instruct the video card to alter the data it generates (the video signals themselves). This alteration compensates for known limitations of whatever the display technology happens to be. The video card makers have often gone to extremely great lengths to ensure their products provide the richest possible colors for the end-user. Were it not for these built-in PnP abilities, many of our monitors would look far worse than they do when we plugged them in the first time. All modern interconnect standards support VESA, and therefore PnP.
Most gamers will view gamma correction as the ability to change brightness or contrast, allowing previously invisible game components to be visible. However, gamma correction in video technology is actually a science in and of itself.
Software developers use precise mathematical formulas to determine colors. This is extremely convenient because those formulas are perfectly linear in nature. This means a programmer sees a value of 5 as being exactly half the intensity of the value 10 (on a 1-10 scale). But the realities are that video and display technologies do not relate so perfectly to mechanical hardware. A signal for a given intensity typically needs to be more than the square of the signal's linear strength to achieve the correct brightness on the end device. The video standard employs gamma solutions which compensates for this electrically. The interconnect architecture must allow for these powerful signals at the fast pixel clock rates high-end users desire.
Update: A commenter named Kirmeo wrote in with a more detailed explanation of the gamma correction process and the reasons it exists. He also recommended a book called "Transmission and Display of Pictorial Information" by D. E. Pearson (ISBN-10: 0470675551, ISBN-13: 978-0470675557, about $65) for anyone wanting the full low-down on this kind of technology (only available in print). He describes the gamma process as the reduction of noise in lower-strength signals (darker colors). By allocating more of the signal's available bandwidth to the lower portions of the spectrum, less noise winds up being visible in those signals. The gamma correction logic relates to this process, as well as re-normalizing this signal on the receiver. In addition there are non-linear attribute characteristics for physical displays. These are typically handled completely transparently and entirely by the display device itself. Also, the human eye does not perceive light intensity in a linear manner and gamma accounts for that. All of these factors are employed when taking the programmer's ideal 1-10 scale and making it visibly appear as a 1-10 scale in brightness, as was desired.
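A small sketch of the idea, using a plain 2.2 power law rather than any particular standard's exact transfer curve (sRGB, for instance, adds a short linear segment near black), shows how the signal range gets redistributed toward the darker values.

    # Sketch of power-law gamma encoding/decoding. A plain 2.2 power is
    # close enough to show the idea behind the curves real standards use.
    GAMMA = 2.2

    def encode(linear):      # linear light (0..1) -> signal value (0..1)
        return linear ** (1 / GAMMA)

    def decode(signal):      # signal value (0..1) -> linear light (0..1)
        return signal ** GAMMA

    # A programmer's "half intensity" is not half the signal:
    print(encode(0.5))       # ~0.73 - more of the signal range is spent on
    print(decode(0.73))      # ~0.50   the darker half, where the eye (and the
                             #          noise argument above) care most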
There has been a huge push toward fully digital signals in recent years. They are the only true way to insure lossless video conveyance. The future is definitely headed toward being completely digital. All standards developed in the last decade have included that forward-looking reality. And any interconnect standard which is chosen must support digital multimedia without question.
If digital is the new thing, then why support analog at all? A lot of the modern specs we have today are primarily digital. But, they also allow pass-through analog signals for backward compatibility. They do this to save us money. Many of us probably have analog monitors. We probably also have video cards capable of emitting full digital signals. However, because the huge monitor base out there does not have full digital abilities, analog is still used even on brand new cards. So, we plug in our dongles.
Because there is such a huge base of analog monitors out there, any interconnect standard must support analog signals. It will not be widely accepted otherwise.
It might be surprising to learn that interconnect standards contain color depth limitations. Whereas both the video card and user might want to generate 48-bit color data, the physical copper wires can't always convey that much data over long distances. High color depths can be conveyed via any given interconnect, but it often requires additional interconnect pins. Those requirements must be written into the spec precisely. Every pin must be defined so that when we plug in our devices they just work. For us, it's completely transparent. But without the data in that spec, it wouldn't work at all.
Different standards allow for different levels of bandwidth. The generally accepted industry base for computers is the traditional 640x480 at 60 Hz video mode. For TVs we have all kinds of standards depending on what you're after. The pixel clock rates for all conceivable frequencies must be accommodated by the spec. These are not things the designers can hope will work. They must know they will work. All standards in place today have rigorous testing procedures by industry experts. These insure that everything which is spec'd out will work like it should.
Free or Open Standard
This is a growing concern in the industry. There are enough talented people working on video standards today that a uniform base is required. So the question becomes: is the specification in the public's domain? Can any corporation just sign up and use it without paying royalties? I was surprised to learn that most popular forward-thinking interconnects today contain some components which are still not free. DisplayPort is the only standard which offers truly free use. The HDCP encryption protocol, used by DVI, HDMI and UDI and recently added to DisplayPort as an option, requires royalty payments. Philips' DPCP encryption protocol, currently used only by DisplayPort, is not only stronger, but it is also free.
The push is definitely for open, free standards. The newest member of the club and the one recently accepted by VESA as the new standard, DisplayPort, addresses that fact throughout its entire design.
Read on the next page: Background - video standards
To understand why we are at a particular place, it's often very beneficial to look back at the steps which got us here. We didn't suddenly arrive at the point where 1080p was highly desirable for no reason. We learned over time that the human eye has a particular interest in seeing better quality images. And the tradeoff we have today is cost for quality. 1080p is a good, happy medium and a quality that's likely to remain near standard for at least several years.
To understand what has happened to get us here we must go back to the very beginning. Consider the baseline evolution of the processing computer. The earliest machines were not at all like the computers we use today. The early movies aside, displays back then were little more than indicator lights. They would signal things like: Was there an error? To find out the operator need only look for a flashing red light near a permanent label reading “failure” or “error”. For the original machine experts of that day, this was sufficient. They could do many jobs much faster with computers than without. And that was even without proper displays. But, the truth is we are visual creatures. And we wanted more.
MDA, Hercules and CGA
The first general purpose display adapter standards which received wide acceptance for PCs came just after the original 8088 CPU was produced. They were the MDA, CGA and Hercules Grahpics Adapters.
The MDA allowed text only, via a 9x14 pixel box character. It had 80 columns across and 25 lines per screen. It used 4 KB of display memory per page. It was sufficient for 256 uniquely defined characters (the ASCII character set) and limited attributes such as: invisible, underline, normal, bright, reverse display and blinking. This early text was seen primarily on green phosphorus monitors. It looked like this:
The Hercules Graphics Card was a notable monochrome advance. While still working with the MDA-compatible technology, it now allowed pictures. Comprised of 720 x 350 pixels per page, individual dots could be turned on and off as necessary allowing simple graphics. It required 32 KB of memory per page, and allowed for two pages of display memory with a typical 64 KB setup. Each byte of display memory represented 8 on-screen pixels in a horizontal line. The Hercules graphics card used an 8 KB stride for display memory. This meant the display image was not arranged internally in a contiguous set of bits. It also had a limiting factor in that bits were either on or off in graphics mode. There were no bright or dim attributes per bit. Only a black screen with green dots arranged in some fashion. While it would certainly be undesirable compared to what we use today, at that time it was very powerful. It looked like this (note the true monochrome image with no bits being brighter or dimmer than others. For those effects, dithering was commonly used):
The Color Graphics Adapter (CGA) came about the same time as the MDA. It allowed for the same basic abilities but with colors. Video modes of 160x200 with 16 basic colors were possible, comprised of the various Red, Green and Blue combinations. And a super video mode of 320x200 with four colors was possible. In text modes, each character on screen was tagged with an attribute byte indicating its foreground and background colors. This allowed for 8 different background colors, 15 foreground colors and either blinking or not blinking. The CGA adapters had a notable problem with the way they accessed memory during refresh cycles. As a result, special programming logic had to be used keep (what was called) “snow” from appearing during direct screen writes. The CGA allowed for simple graphics abilities and many, many early video games were created on the original 4.77 MHz 8088 microprocessor. The CGA looked identical to the MDA, except that it had RGB.
The Enhanced Graphics Adapter (EGA) came out shortly thereafter. It was really this card that began to take things to a new level. This was the first time where images could be displayed which began to look real. There were 16-color modes available in both 640x200 and 320x200, along with a very powerful 640x350 mode with 16 colors (chosen from a palette of 64 colors). Other EGA offshoots improved on the standard over time and gave us 640x400, 640x480 and even 720x540 video modes. These efforts revealed one thing clearly: people wanted better graphics.
Video Graphics Array (VGA)
Most of us would consider this VGA to be the true baseline. This standard was introduced and put graphics on the map. It set us on the path we have taken to be right here today. We're viewing this page over the Internet using technology which superseded the VGA.
When we install new versions of Windows, Linux or other graphical operating systems the default video mode used is often VGA. It's a 640x480 16 color display mode which is easy to program and resides within a single 64KB chunk of memory beginning at 0xa0000 on a PC. The VGA also introduced the highly enjoyable 320x200 256 color mode (13h) for games. This also fit in the same 64KB chunk and allowed for the popular early 2D games like Commander Keen.
The VGA standard provided a baseline which still needed to be extended. The technology was somewhat slow to catch up with desire. This was due primarily to cost. However, many different video card makers tried to get their standard out front.
Super VGA (SVGA)
The SVGA standard came in the late 1980s but wasn't widely adopted for a few years. It initially gave us the 800x600 16 color video mode but was later extended to allow the standards we see today: 1024x768, 1280x1024, etc. Based on the amount of installed memory in a video card, different levels of colors could be displayed. These were typically 256 colors. However, memory limitations often resulted in the highest video mode for a given card only supporting 16 colors. This was very common in the early 1990s.
It was also at this time that the VESA standard came out (Video Electronics Standard Association). This was a double-edged sword/blessing. The benefits of VESA came from a BIOS standard which made the same binary code work with multiple SVGA adapters. One program needed to be written which allowed graphics on multiple cards. But, the downside there was that VESA was painfully slow.
At that time VESA used a windowed "looking glass" read/write method into 64KB banks of total video memory on an ISA bus. That bus operated at a maximum of 8 MHz. Using 16-bit reads and writes, that meant a limited amount of bandwidth for drawing. It also meant a lot of bank switching for higher video modes requiring much more than 64KB. This made it all but completely unusable for any type of serious graphics work. To utilize the more advanced capabilities, most cards had their own specialized, faster graphics interfaces. But this also meant that custom drivers were required to use them. This created a lot of compatibility problems.
XGA, SXGA, UXGA, QXGA and HXGA
After SVGA came the more modern standards we see today in PCs and CE devices. The XGA, SXGA, UXGA, QXGA and even HXGA are all the rage. These new logical standards allow for some truly phenomenal graphics resolutions. Some of the highest-end modes (like HXGA) define a maximum standard of 7680x4800 pixels using an 8:5 aspect ratio. That's 37 million pixels per image with varying color depths! If that video mode were processed using 32-bits per pixel the way single-output graphics cards do today, it would require a GPU clock in excess of 8.6 GHz with more than 144 MB per frame! Fortunately, those high-end standards today are really only used for really high-end single-frame digital cameras.
The reality is still this: While these display standards are very nice and will provide the user with high resolution graphics, there still has to be an underlying mechanism which takes the data from point A to point B. And that's what this article is talking about. It's not enough to have the logical ability to do something. There must also be the physical interconnecting architecture which allows it to happen. And the standards explained in this article demonstrate for themselves why a particular standard is more desirable than another.
The lesson here is that each of these video standards came about for a particular reason. They were true standards at the time and well thought out. We have now moved from computing machines that saw no TV-like displays at all, to machines which can literally take our breath away with stunning graphics.
And one thing about that trend is an absolute certainty: the video needs of tomorrow will be ever increasing. Higher resolution, greater bit depths and refresh rates. The human eye is amazingly adept at distinguishing real looking images from false ones. We are very good at seeing fake looking images. The graphics hardware designers of tomorrow are working toward keeping our eyes fooled. They are addressing that very fact not only with new products and software standards, but also by the physical interconnects being chosen.
We want to see full realism employed at all levels of our user experience. In our PDAs, cell phones, notebooks, desktop machines, even home entertainment systems. And none of our wants are immune to the goals of the hardware designers.
Read on the next page: Author's Opinion
We are seeing the computing world change today. Virtualization is key. As technologies move forward, it will no longer truly matter what medium or method is employed under the hood. We are approaching the point where all we need to know is that a key goes in the ignition and the thing starts. By operating common controls, whatever's under the hood can be wielded. In that regard, the key would be our data. The vehicle would be whatever it happened to be. And the common controls we're familiar with will be the gas pedal, shifter, turn signals and door handles. These would equate to the video modes, refresh rates, audio channels and so on.
Protocols and standards like DisplayPort are exactly what this industry needs. We need free, flexible, dynamic solutions in all aspects of compute design. We need to move into the area where the absolute maximum capabilities we envision are handled by the support infrastructure. Only then will we be able to move out of the world of the limitations of hard mechanics, and into the world of imagination.
In the early days, the MDA, Hercules and CGA standards were limited by many hardware factors: memory, cost, wide technology availability and even R&D knowledge. But today, with so many brilliant minds working in these fields, that time is past us. We have essentially mastered this art. We have the controls at our disposal to provide stunning visual and audio experiences via a common interconnect standard. What is needed now is the ability to step back from that hardware perspective. We need to look at that machine and say “Here's what we have. Now, what can we do with it?”
I believe this is the most exciting time we've seen in the semiconductor history. We stand at the absolute threshold of across-the-board virtualization. There are tremendous compute abilities being born today. We have communication platforms operating in the terabits per second range. These will be available to us, the end consumers, in just a few years. Our software is beginning to mature in its model. We have standard frameworks which have shown themselves to be desirable over time for both development and maintenance. And we have a set of human resources growing up right now. They're graduating from college having never known what the world was like without computers.
When I think about where we came from, where we are now, and where it is we are going... I conclude that it is perhaps the most exciting time in man's history. The future is absolutely wide open for human potential and achievement. We will no longer be limited by the things which have taken us so many years to nail down. Communication, compute abilities, and proper use of that technology. This “culmination of the thing” is the one component we have yet to master. And right now, our focus should be not only on the underlying technologies which will get us there. But which technologies will best serve us when we are there.
I look forward to reading your comments below.
Updated: One final thought. A few commenters have asked for specifics about why a particular standard should be included or chosen over another. The truth is any of the video interconnect architectures that are present today could handle any workload we would like to pipe through them. They could handle, video, audio and data without any problem whatsoever. The differences come from the protocols, or internally defined specs indicating how the physical hardware can be used. This is why DisplayPort has such an advantage. It is an open standard which allows for nearly anything to be communicated.
One way to visualize DisplayPort's potential and promise is like this. And, if you'll forgive me, we'll use a "tube" reference. DisplayPort defines up to four "lanes" for data communication. Each lane operates at a given speed; for our example here we'll just say that means tubes of different sizes. The DisplayPort protocol wraps data items into packages and sends them through the tubes. A package might be some video data, or some audio channel data, or just some regular data. Each package is inserted into the tube right next to the other one. The reason DisplayPort offers four lanes is that with large payloads the packages get big and quickly fill a single tube. As a result, DisplayPort sends individual packages from A to B. B receives them and, based on known protocols defined in the spec, decodes the packages and then acts accordingly.
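To make that lane-and-package analogy concrete, here is a small Python sketch that chops a payload into packets, deals them across four lanes, and reassembles them on the receiving side. It is a toy model of the idea described above, not the actual micro-packet format defined in the DisplayPort specification; the chunk size and packet fields are invented for illustration.

```python
# Toy model of the "four lanes" idea: a payload is chopped into small
# packets and dealt out across lanes in round-robin order. Illustration
# of the analogy only, not the real DisplayPort micro-packet format.

from dataclasses import dataclass
from typing import List

@dataclass
class Packet:
    kind: str      # "video", "audio", or "data"
    seq: int       # sequence number so the receiver can reassemble
    payload: bytes

def pack_into_lanes(kind: str, payload: bytes, lanes: int = 4,
                    chunk_size: int = 8) -> List[List[Packet]]:
    """Split a payload into packets and distribute them across lanes."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    lane_queues: List[List[Packet]] = [[] for _ in range(lanes)]
    for seq, chunk in enumerate(chunks):
        lane_queues[seq % lanes].append(Packet(kind, seq, chunk))
    return lane_queues

def reassemble(lane_queues: List[List[Packet]]) -> bytes:
    """Receiver side: merge the lanes back into the original byte stream."""
    packets = sorted((p for lane in lane_queues for p in lane), key=lambda p: p.seq)
    return b"".join(p.payload for p in packets)

if __name__ == "__main__":
    frame = bytes(range(40))                 # pretend video data
    lanes = pack_into_lanes("video", frame)
    assert reassemble(lanes) == frame
    print([len(q) for q in lanes])           # packets per lane, e.g. [2, 1, 1, 1]
```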
This is one of the biggest advantages DisplayPort offers over the other standards. The physical interconnects used by each of those standards (DVI, HDMI and UDI) could also accommodate this ability. However, they're not currently defined to do so.
So to answer many people's question: Which technology would give you the best bang for your buck? If you're looking to the future conveyance needs of more than just audio and visual data, then you really have only one choice: DisplayPort. This is also a main reason why VESA has just endorsed it as their new supported standard. I believe we'll be seeing all major manufacturers jumping on the DisplayPort train very quickly. | 1 | 10 |
From patriotic magnetic flags and yellow ribbons on cars, mailboxes, and trees in our yards, to Old Glory flying high and proud, patriotism is on the rise and has become ever more important in our day-to-day lives. Annual Independence Day celebrations on the 4th of July, Memorial Day celebrations, Flag Day celebrations, Veteran's Day celebrations, and even Thanksgiving Day celebrations are opportunities to let our patriotism show in the colors we wear and the traditions and institutions we honor. Independence Day celebrates the birthday of America, July 4th, 1776, when the Declaration of Independence was signed. July 4th, 2006 was America's 230th birthday.
Memorial Day is a federal holiday observed on the last Monday in May to honor men and women who have died in military service to their country. It began as a celebration to honor Union soldiers who died in the American Civil War, but after World War I it was changed to include anyone who died in any military action.
In April of 1893 the first Flag Day was proposed and declared, to be held each year on June 14th in honor of the flag that represents America.
Veteran's Day began on the eleventh hour of the eleventh day of the eleventh month in 1918. Originally called Armistice Day, November 11 officially became a holiday in the United States in 1926, and a national holiday 12 years later. On June 1, 1954, the name was changed to Veterans Day to honor all U.S. veterans.
Thanksgiving, a harvest celebration long held in Europe, was established as a national holiday by President Andrew Johnson in 1867, and is observed the last Thursday in November.
Though these holidays all vary, and have separate traditions and elements specific to each holiday, they all hold our American traditions and beliefs dear. With patriotism running high, the holidays in which we honor our country, and those who fought, and are still fighting, for freedom across the globe, are especially dear to us all. Celebrations in which families and friends gather to honor our country, our country's soldiers, veterans, and loved ones both near and far have grown into celebrations of patriotism. Families gathered together for these celebrations still celebrate in the traditional ways, but there is a sense of pride, a sense of honor, a sense of the recognition that the things we hold dear, like freedom, like democracy, like the pursuit of happiness, are not a given, and we just seem to hold those intangible things that define life in America a bit more dear.
In the midst of Independence Day picnics and fireworks, Thanksgiving dinners with family gathered around, Veteran's Days, Memorial Days, and Flag Days, there are increasing moments of quiet reflection--time to give thanks for the bounty that is America. Consider adding to these elements quietly orchestrated opportunities to reflect on our good fortune to be able to live in this country where we are free, while the cost of freedom is, was, and continues to be high. Watching movies like 4th of July, The Patriot, Forrest Gump, Saving Private Ryan, Pearl Harbor, The North and The South, and other movies that illustrate so well the high cost of freedom is a good way to provoke discussion and promote patriotism on these patriotic holidays.
Mrs. Party... Gail Leino is the internet's leading authority on selecting the best possible party supplies (http://partysupplieshut.com), using proper etiquette and manners while also teaching organizational skills and fun facts. The Party Supplies Hut has a huge selection of free party games, coloring pages, word finds, word scrambles, and printable baby and bridal shower activities. Holiday Party Decorations (Holiday-Party-Decorations.com) offers free games, menus, recipes, coloring sheets, theme ideas, and activities to help complete your event.
America Day By Day.
Here is the ultimate American road book, one with a perspective unlike that of any other. In January 1947 Simone de Beauvoir landed at La Guardia Airport and began a four-month journey that took her from one coast of the United States to the other, and back again. Embraced by the Condé Nast set in a whirl of cocktail parties in New York, where she was hailed as the "prettiest existentialist" by Janet Flanner in The New Yorker, de Beauvoir traveled west by car, train, and Greyhound, immersing herself in the nation's culture, customs, people, and landscape. The detailed diary she kept of her trip became America Day by Day, published in France in 1948 and offered here in a completely new translation. It is one of the most intimate, warm, and compulsively readable texts from the great writer's pen. Fascinating passages are devoted to Hollywood, the Grand Canyon, New Orleans, Las Vegas, and San Antonio. We see de Beauvoir gambling in a Reno casino, smoking her first marijuana cigarette in the Plaza Hotel, donning raingear to view Niagara Falls, lecturing at Vassar College, and learning firsthand about the Chicago underworld of morphine addicts and petty thieves with her lover Nelson Algren as her guide. This fresh, faithful translation superbly captures the essence of Simone de Beauvoir's distinctive voice. It demonstrates once again why she is one of the most profound, original, and influential writers and thinkers of the twentieth century. On New York: "I move between the steep cliffs at the bottom of a canyon where no sun penetrates: it's permeated by a salt smell. Human history is not inscribed on these carefully calibrated buildings: they are closer to prehistoric caves than to the houses of Paris or Rome." On Los Angeles: "I watch the Mexican dances and eat chili con carne, which takes the roof off my mouth, I drink the tequila and I'm utterly dazed with pleasure."
Manufacturer: University of California Press
Geology And The Environment.
Pipkin and Trent's third edition of Geology and the Environment explores the relationship between humans and the geologic hazards, processes, and resources that surround us. A true market leader, the book has an accessible and entertaining writing style, superior pedagogy, an appropriate amount of detail and coverage, and tables that provide meaningful and relevant real data for readers. With an emphasis on geology that can improve the human endeavor, this new edition addresses the changing market as it consistently emphasizes student decision-making, careers, resources, and relevance. Medical geology, environmental law, land-use planning, and engineering geology are discussed within the context of each geologic situation rather than as separate chapters. The book has been significantly updated to address current events, such as recent earthquakes in Turkey and Taiwan and the Hector Mine earthquake in southern California. There are now more global references, reflecting the increased breadth of this field of study.
Manufacturer: Delmar Pub
After Dolly: The Uses And Misuses Of Human Cloning.
A brave, moral argument for cloning and its power to fight disease. A timely investigation into the ethics, history, and potential of human cloning from Professor Ian Wilmut, who shocked scientists, ethicists, and the public in 1997 when his team unveiled Dolly, that very special sheep who was cloned from a mammary cell. With award-winning science journalist Roger Highfield, Wilmut explains how Dolly launched a medical revolution in which cloning is now used to make stem cells that promise effective treatments for many major illnesses. Dolly's birth also unleashed an avalanche of speculation about the eventuality of cloning babies, which Wilmut strongly opposes. However, he does believe that scientists should one day be allowed to combine the cloning of human embryos with genetic modification to free families from serious hereditary disorders. In effect, he is proposing the creation of genetically altered humans. 20 illustrations.
Manufacturer: W. W. Norton
The Complete Student: Achieving Success In College And Beyond.
The transition to college can be tough, with academic, social, and financial pressures to cope with. While there are a number of successful books on the market that are aimed at easing this transition, The Complete Student is the first to really serve as a companion through the process. This visually dynamic, highly accessible, and profoundly sensible book explores everything a new college student needs to know--from how to find a book in the library to buying a used car; from breaking old habits of procrastination to understanding the dangers of binge drinking; from conquering test anxiety to writing a resume and a cover letter. There's so much to be found in this book, and half the pleasure is in finding it. Designed to be user-friendly and to instill positive feelings and attitudes in the new college student, The Complete Student feels truly different from everything that's out there, like it speaks to a contemporary audience in a contemporary way.
Manufacturer: Thomson Delmar Learning
The Progressive Era: Primary Documents on Events from 1890 to 1914 (Debating Historical Issues in the Media of the Time).
With the death of Southern Reconstruction, Americans looked first westward and then abroad to fulfill their manifest destiny. Meanwhile, robber barons built railroads and oil trusts, populism burned across the prairies, currency went off the gold standard, immigrants poured into urban areas, and the United States won imperial outposts in Cuba and the Philippines. Beginning with an extensive overview essay of the period, this book focuses on the issues of the Progressive Era through contemporary accounts of the people involved. Each issue is presented with an introductory essay and multiple primary documents from the newspapers of the day, which illustrate both sides of the debate. This is a perfect resource for students interested in the controversial and tumultuous changes America underwent during the industrial age and up to the start of World War I.
Manufacturer: Greenwood Press
Dhammapada: The Sayings of Buddha.
Of all the Buddhist writings, the Dhammapada, known for its accessibility, is perhaps the best primer of teachings on the Dhamma, or moral path of life. It is also one of the oldest and most beloved classics, cherished by Buddhists of all cultures for its vibrant and eloquent expression of basic precepts. Buddha's beautiful, concise, and accessible aphorisms profoundly illustrate the serenity and unalterable dignity of the Buddhist path of light, love, peace, and truth. Thomas Cleary provides an enlightening introduction that puts the work into historical, cultural, and religious perspective. In each section, he offers helpful and insightful commentary on the beliefs behind the wisdom of the Buddha's words, translated from the old, original Pali text. Its 423 practical sayings are grouped under eclectic and useful headings such as vigilance, evil, happiness, anger, craving, and pleasure. In its unique and lovely two-color Wisdom Editions design, these timeless sayings of Buddha will join the Tao Te Ching as a classic gift book and keepsake.
The Death Of Rhythm And Blues.
This passionate and provocative book tells the complete story of black music in the last fifty years, and in doing so outlines the dangerous position of black culture within white American society. In a fast-paced narrative, Nelson George's book chronicles the rise and fall of "race music" and its transformation into the R&B that eventually dominated the airwaves, only to find itself diluted and submerged as crossover music.
Suicide Gene Therapy: Methods And Reviews (methods In Molecular Medicine).
This is the first book dedicated entirely to the rapidly growing field of suicide gene therapy for the treatment of cancer, in both theoretical and practical terms.
Manufacturer: Humana Press
Kierkegaard: A Biography.
Kierkegaard: A Biography traces the evolution of a character who himself was made up of many characters of his own creation. Søren Kierkegaard's writings, published under various pseudonyms, were made in response to "collisions" with significant individuals (including his father, his brother, a fiancée whom he rejected, and a prominent Danish bishop). The unfolding of these pseudonymous characters reflects Kierkegaard's growing sense of self, and his discovery of that self as being essentially religious. With considerable mastery of the political, philosophical, and theological conflicts of 19th-century Europe, Alastair Hannay's biography also serves as an excellent introduction to Kierkegaard's principles and faith. From sentence to sentence, the book is full of small pleasures, particularly Hannay's judiciously employed, humanizing vernacular phrases. (As a young man, "Søren," like so many people, "blamed his father for messing up his life.") And like his subject, Hannay is a shrewd observer of the often-misleading relationship between appearance and reality. For instance, he suggests that "it does seem plausible to suppose that a main motivation behind the huge effort that writers put into their poetic products stems often from a sense of lacking in themselves the very substance that their works appear to convey." --Michael Joseph Gross
Manufacturer: Cambridge University Press
How to Say It: Choice Words, Phrases, Sentences, and Paragraphs for Every Situation.
The best-selling How to Say It is now better than ever. The second edition of this one-of-a-kind book has been updated with ten new chapters--that's fifty chapters in all--offering readers even more material for quickly and effortlessly constructing original, effective letters. How to Say It provides short lists of how to say, and sometimes more importantly, what not to say when writing business or personal letters. It begins with examples of why and when certain letters are appropriate, tips on writing the letter, and advice for specific situations. It then offers sample words and phrases for each type of correspondence, as well as examples of sentences and paragraphs that are best suited for the task. Finally, it provides full sample letters, giving readers a sense of what to look for in the final product. Includes appendices offering tips on etiquette, formatting, and grammar.
Manufacturer: Prentice Hall
Rapid Java Application Development with Sun ONE Studio 4.
This book introduces advanced Java programming with the tool Forte. Comprehensive and incremental, this book focuses on rapid Java application development. Representative examples, carefully chosen and presented in an easy-to-follow style, teach application development. Each example is described and includes the source code, a sample run, and an example review. Covers advanced Java programming on JavaBeans, the bean event model, model-view architecture, developing customized components, Swing components, creating custom layout managers, bean persistence, bound properties and constrained properties, bean introspection and customization, Java database programming, and distributed programming using Remote Method Invocation and Java servlets. The early chapters introduce JavaBeans, the basis of rapid Java application development, while the following chapters apply, step by step, rapid application development techniques to build comprehensive, robust and useful graphics applications, RMI, and Java servlets. For software engineers, graphical designers, and programmers interested in advanced Java programming or rapid Java application development.
Manufacturer: Prentice Hall
Black-Eyed Susans/Midnight Birds: Stories by and about Black Women.
Combining two critically acclaimed short fiction collections (for a total of 20 stories) published after 1960 by and about black women, this collection features the work of today's most celebrated black women writers.
Zen And The Art Of Anything.
If shelf and cerebral space allowed for only one book on personal spirituality, self-knowledge, or improvement, it could easily be Dr. Hal French's Zen and the Art of Anything. --The Star Reporter, Columbia, S.C. This is not just a book about Zen. This is Zen! Simply put, Zen is mindfulness: extracting the most from a given moment. You are invited, through this book, to understand Zen as something that is not exotic or difficult to attain. Rather, Zen is basic and available to anyone wishing to have a more fulfilling life. Think of everyday activities: breathing and speaking, waking and sleeping, moving and staying, eating and drinking, working and playing, caring and loving. If we are truly mindful in our daily living, thereby practicing Zen, we can elevate the most fundamental activity to an art form. Through Dr. Hal French's charming, mindful writing, you can actually find the key to a more authentic and meaningful life. The simple act of reading his thoughts and works, filled with so many polished and artful insights, enables Zen. An enabling book must also ennoble. And so this does. "[Zen and the Art of Anything] teaches, in just the way [Hal French] speaks, kindly, lovingly, humorously, chapter by chapter, how to breathe and speak, wake and sleep, move and stay, eat and drink, play and work, care and love, thrive and survive... There is a charmingly homey and homely feel to the way Dr. French does this." --The State, Columbia, SC
Windows for Mac Users.
You have a Mac; you love your Mac. But your work requires you to use a computer that runs Microsoft Windows 95/98 or Windows NT 4. This unfortunate circumstance means you need to acquire some special skills--not least among them being an ability to cope with Windows' usage that varies greatly from that of the Mac. Windows for Mac Users will show you how to make the transition and where you will need to look for the differences. The book uses Mac terminology to explain how Windows operates. If you're wondering what the system folder looks like on a Windows 98 machine or how to create an alias under Windows NT, Baron and Williams--unabashed Mac fans--have answers for you. They also have information on why closing a document and quitting an application are different operations under Windows. They even cover TweakUI--a downloadable utility--as a means of fixing some of Windows' most annoying traits. Unfortunately, the authors sometimes resort too quickly to the "that's beyond the scope of this book" excuse... Windows for Mac Users also provides information for Mac users who need to share files and other information with Windows users. While their information on color palettes and filename extensions is excellent, they leave out detailed information on networking Macs and PCs together. Nonetheless, Mac users who are out of their element will find this book helpful. --David Wall
Manufacturer: Peachpit Press
Betty Spaghetty's Super Cool Dress-up Book.
A novelty board book with reusable stickers. Meet Betty, Hannah, Zoe, Ally, and Adam, and give them all the accessories they want! This interactive board book includes each character's vital stats, and has enough stickers to complete each character's ensemble. With a combination of Betty's mix-and-match play concept and a story little girls are dying to know more about, this title strikes just the right pose!
Manufacturer: Random House Books for Young Readers
The Good Luck Pony (Magic Charm Book).
From the bestselling Magic Charms series, which has over 4 million copies in print, here is the story of a little girl who learns to confront her fears. It's not easy being brave. But it helps if you have something to hold on to--something special, something magical, something lucky. For the little girl whose Sheltie seems so large and spirited, finding the courage to ride can be difficult--until her wise mother gives her a tiny golden pony that radiates self-confidence. The Good Luck Pony is the gift with heart and spirit, for every little girl about to take her first ride or her hundredth. Selection of the Book-of-the-Month Club. Suitable for ages 4-8. 440,000 copies in print.
Manufacturer: Workman Publishing Company
Mind Wide Open: Your Brain And The Neuroscience Of Everyday Life.
Given the opportunity to watch the interior workings of his own brain, Steven Johnson jumps at the chance. He reveals the results in Mind Wide Open, an engaging and personal account of his foray into edgy brain science. In the 21st century, Johnson observes, we have become used to ideas such as "adrenaline rushes" and "serotonin levels," without really recognizing that complex neurobiology has become a commonplace thing to talk about. He sees recent laboratory revelations about the brain as crucial for understanding ourselves and our psyches in new, post-Freudian ways. Readers shy about slapping electrodes on their own temples can get a vicarious scientific thrill as Johnson tries out empathy tests, neurofeedback, and fMRI scans. The results paint a distinct picture of the author, and uncover general brain secrets at the same time. Memory, dread, love, alertness--all the multitude of states housed in our minds are shown to be the results of chemical and electrical interactions constantly fed and changed by input from our senses. Mind Wide Open both satisfies curiosity and provokes more questions, leaving readers wondering about their own gray matter. --Therese Littleton
MP: Van De Graaff Human Anatomy 6/e + OLC Password Card + ESP + Strete/Creek's Atlas to Human Anatomy.
Human Anatomy by Van De Graaff is designed for the one-semester human anatomy course. This course is usually offered at the freshman/sophomore level, is taught primarily in biology, physical education, or allied health departments, and is often a prerequisite for programs in occupational therapy, physical therapy, massage therapy, sports medicine, athletic training, chiropractic medicine, etc. This edition features new cadaver photos, expanded pedagogy, and more clinical coverage.
Manufacturer: Mcgraw-hill Science/engineering/math
British Politics: A Very Short Introduction (Very Short Introductions).
Tony Wright's Very Short Introduction to British politics is an interpretative essay on the British political system, rather than merely an abbreviated textbook on how it currently works. He identifies key characteristics and ideas of the British tradition, and investigates what makes British politics distinctive, while emphasizing throughout the book how these characteristics are reflected in the way the political system actually functions. Each chapter is organized around a key theme, such as the constitution or political accountability, which is first established and then explored with examples and illustrations. This in turn provides a perspective for a discussion of how the system is changing, looking in particular at devolution and Britain's place in Europe.
Manufacturer: Oxford University Press, Usa
The Monsters of Morley Manor: A Madcap Adventure.
What do you get when you mix together werewolves, vampires, mad scientists, wizards, aliens, alternate dimensions, tiny people, Transylvania, ancient curses, giant frogs, evil clones, ghosts, lawyers, shape-changers, fallen angels, journeys through hell, zombie warriors, body snatchers, and two clever kids in whose hands rests the fate of Earth? The latest madcap adventure-comedy-fantasy-mystery from bestselling novelist Bruce Coville, that's what.
Manufacturer: Harcourt Children's Books
ACSM's Health-Related Physical Fitness Assessment Manual.
This new text from the American College of Sports Medicine (ACSM) contains information necessary to develop skills for assessing an individual's health-related physical fitness. It provides the reader with a practical "how-to-do-it" approach for performing these assessment skills effectively, and an understanding of the theory behind, and the importance of, each skill or assessment. Reported errors associated with each test are also given, and step-by-step instruction of the skills is provided in order for the reader to gain proficiency through practice. Illustrations and tables supplement the text and enhance learning.
Manufacturer: Lippincott Williams & Wilkins
Introduction To C And C++ For Technical Students (2nd Edition).
Unlike many other C++ books, which focus almost entirely on syntax, this one provides explorations of the fundamental principles and logic behind the language. Throughout its coverage, linguistic elements are combined with object-oriented principles and practices, analysis and design. The author's practical, skill-building approach begins by developing a firm foundation for each topic and providing ample practice and reinforcement, then--when a thorough understanding has been established--progressing forward, focusing only on the information that is necessary to reach the next step. For individuals seeking an introduction to object-oriented programming using C++.
Manufacturer: Prentice Hall
Policing the Community: A Guide for Patrol Operations.
Policing the Community: A Guide for Patrol Operations covers the crucial topics of communication, professionalism, leadership, and many others that police officers must know for successful patrol operations. Its thoroughness of coverage makes this book a must-read for any future law enforcement practitioner.
Manufacturer: Prentice Hall
Re-reading The Constitution: New Narratives In The Political History Of England's Long Nineteenth Century.
Developing the insights of the new cultural history of politics, this book reexamines the debates over the meaning of the English constitution from the late eighteenth to the early twentieth centuries, and establishes clearly its centrality to our understanding of English politics, history and national identity. With contributions from some of the most innovative historians in the field, a challenging rereading is provided not only of nineteenth-century politics, but of the current state of English political and cultural history.
Manufacturer: Cambridge University Press
Sociology: Exploring the Architecture of Everyday Life, Readings.
The sixth edition of Sociology: Exploring the Architecture of Everyday Life, Readings continues to provide students with vivid, provocative, and eye-opening examples of the practice of sociology. The readings represent a variety of viewpoints and include articles written by psychologists, anthropologists, social commentators, and journalists, in addition to sociologists. Many of the readings are drawn from carefully conducted social research, while others are personal narratives that put human faces on matters of sociological relevance. Key features of the sixth edition: Includes new articles: of the 36 selections in this edition, 14 are new, providing a contemporary sociological perspective. Addresses important topics: the editors emphasize how race, social class, gender, and sexual orientation intersect to influence everyday experiences. This volume includes works by leading figures in the field, such as Sharlene Hesse-Biber, Arlie Russell Hochschild, and Min Zhou. Focuses on global issues: this edition includes more coverage of international issues and world religions to show how our lives are linked to, and affected by, our increasingly global society. Emphasizes theoretical origins: the readings examine everyday experiences, important social issues, and distinct historical events that illustrate the relationship between the individual and society. This new edition provides more detail about the theory and/or history related to each reading presented. Editors David M. Newman and Jodi O'Brien provide brief chapter introductions that offer a sociological context for the readings in each chapter. For those using the companion textbook, these introductions will furnish a quick intellectual link between the readings and information in the textbook. Intended audience: this reader is designed to accompany David Newman's popular text Sociology, Sixth Edition. It is an excellent supplementary text for undergraduate courses in introductory sociology such as Introduction to Sociology and Principles of Sociology.
Manufacturer: Pine Forge Press
Breakdown: How America's Intelligence Failures Led to September 11.
From sources inside the Pentagon and the CIA, Bill Gertz tracks the path of terrorists and terrorism in the United States. He uncovers information that could have prevented 9/11.
Manufacturer: Regnery Publishing, Inc.
Selected Tales (World's Classics).
Since their first publication in the 1830s and 1840s, Edgar Allan Poe's extraordinary Gothic tales have established themselves as classics of horror fiction and have also created many of the conventions which still dominate the genre of detective fiction. Yet, as well as being highly enjoyable, Poe's tales are works of very real intellectual exploration. Abandoning the criteria of characterization and plotting in favour of blurred boundaries between self and other, will and morality, identity and memory, Poe uses the Gothic to question the integrity of human existence. Indeed, Poe is less interested in solving puzzles or in moral retribution than in exposing the misconceptions that make things seem 'mysterious' in the first place. Attentive to the historical and political dimensions of these very American tales, this new critical edition selects twenty-four tales and places the most popular - 'The Fall of the House of Usher', 'The Masque of the Red Death', 'The Murders in the Rue Morgue' and 'The Purloined Letter' - alongside less well-known travel narratives, metaphysical essays and political satires.
Manufacturer: Oxford University Press, Usa
Writing About the Humanities (3rd Edition).
This brief, updated practical guide to writing papers in humanities courses provides readers--and writers--with guidance in analyzing and interpreting works of art, literature, and music. Key topics: The first half of the book covers general issues in writing about the humanities disciplines, including how to respond to, interpret, and evaluate different types of artworks. The second half focuses more specifically on writing in literature and the arts, as well as the particulars of writing with, and documenting, sources. For individuals seeking strategies for reading in, and writing about, each of the humanities disciplines.
Manufacturer: Prentice Hall
The Rough Guides' Tenerife Directions 1 (rough Guide Directions).
Introduction to Tenerife & La Gomera: Despite glorious weather and a variety of landscapes that attract four million tourists every year, Tenerife has an image problem. Thanks to package tourism, the entire island is assumed to be a playground for rowdy holiday-makers, content to spend lazy days on the beach and drink-fuelled nights in the bars - and if this is what you're after, you won't be disappointed. But get off the beaten track and you'll discover spectacular volcanic scenery, elegant resorts and calm Spanish towns. And with the island measuring just 86km long and 56km wide, everywhere is a possible day-trip. Some of the most memorable sights are natural ones - the most impressive being around the island's pre-eminent landmark, the volcano Mount Teide. The riotous history of the islands has left a host of sights that deserve a look too. Traces of the original inhabitants, the Guanche, can be found at various sites around the islands, while the impact of the Spanish victory is best seen in their colonial towns, which offer a complete contrast to the brash, more recently developed resorts. Though Tenerife has many peaceful areas, those wanting to get even further away from the crowds should head to the strikingly precipitous island of La Gomera. A 28km ferry-ride from Tenerife, it was the first of the Canary Islands to be conquered by the Spanish (Tenerife was the last) and is also the greenest and least populated of the archipelago, bisected by deep ravines that radiate out from its centre. The absence of major beaches - and, consequently, resorts - means laid-back rural tranquillity is still intact here, making it a relaxing place for a break.
Manufacturer: Rough Guides
Masters of British Literature, Volume A (Penguin Academics Series).
Written by an editorial team whose members are all actively engaged in teaching and in current scholarship, Masters of British Literature is a concise but comprehensive survey of the key writers whose classic works have shaped British literature. Featuring major works by the most influential authors in the British literary tradition--Chaucer, Spenser, Shakespeare, Sidney, Donne, Milton, Behn, Swift, Pope, Johnson--this compact anthology offers comprehensive coverage of the enduring works of the British literary tradition from the Middle Ages through the Restoration and the eighteenth century. Core texts are complemented by contextual materials that help students understand the literary, historical, and cultural environments out of which these texts arose, and within which they find their richest meaning. For those with an interest in British literature.
We can also suggest the following products:
Bambo Jordan: An Anthropological Narrative
How to Differentiate Instruction in Mixed-Ability Classrooms
Our Sexuality (with CD-ROM, InfoTrac Workbook, and InfoTrac ) (Advantage)
Transforming Madness: New Lives for People Living With Mental Illness
Slow Air: Poems
.NET Programming 10-Minute Solutions
Military-to-Civilian Career Transition Guide: The Essential Job Search Handbook for Service Members
Writers Express 3rd Edition
Working Papers 1-13 to Accompany College Accounting
In Pursuit of Excellence: How to Win in Sport and Life Through Mental Training, Third Edition
Renegade Regimes: Confronting Deviant Behavior in World Politics
Game Theory is a branch of mathematics with direct applications in economics, sociology, and psychology. The theory was first devised by John von Neumann. Later contributions were made by John Nash, A. W. Tucker, and others.
Game-theory research involves studies of the interactions among people or groups of people. Because people make use of an ever-increasing number and variety of technologies to achieve desired ends, game theory can be indirectly applied in practical pursuits such as engineering, information technology, and computer science.
So-called games can range from simple personal or small group encounters or problems to major confrontations between corporations or superpowers. One of the principal aims of game theory is to determine the optimum strategy for dealing with a given situation or confrontation. This can involve such goals as maximizing one's gains, maximizing the probability that a specific goal can be reached, minimizing one's risks or losses, or inflicting the greatest possible damage on adversaries.
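As a small illustration of "determining the optimum strategy," the sketch below picks the pure strategy that maximizes the worst-case payoff (the maximin rule) in a two-player zero-sum game. The payoff matrix is invented purely for the example.

```python
# Maximin over pure strategies in a two-player zero-sum game.
# Rows are our strategies, columns are the opponent's; entries are
# payoffs to the row player. The numbers are made up for illustration.

payoff = [
    [ 3, -1,  2],   # payoffs for row strategy 0
    [ 1,  0,  1],   # payoffs for row strategy 1
    [-2,  4,  0],   # payoffs for row strategy 2
]

def maximin(matrix):
    """Return (best_row, guaranteed_payoff) under worst-case play."""
    worst_per_row = [min(row) for row in matrix]          # opponent minimizes
    best_row = max(range(len(matrix)), key=lambda r: worst_per_row[r])
    return best_row, worst_per_row[best_row]

row, value = maximin(payoff)
print(f"Pick strategy {row}; it guarantees a payoff of at least {value}.")
# Strategy 1 guarantees 0, better than the worst cases of -1 and -2.
```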
A Gateway is a network point that acts as an entrance to another network. On the Internet, a node or stopping point can be either a gateway node or a host (end-point) node. Both the computers of Internet users and the computers that serve pages to users are host nodes. The computers that control traffic within your company's network or at your local Internet service provider (ISP) are gateway nodes.
In the network for an enterprise, a computer server acting as a gateway node is often also acting as a proxy server and a firewall server. A gateway is often associated with both a router, which knows where to direct a given packet of data that arrives at the gateway, and a switch, which furnishes the actual path in and out of the gateway for a given packet.
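A toy sketch of the decision a host makes with each outgoing packet at the edge of a network: deliver locally if the destination is on the same network, otherwise hand it to the gateway node. The addresses are illustrative only; this is not a full routing implementation.

```python
# Minimal "local or via the gateway?" decision using the standard
# ipaddress module. Subnet and gateway addresses are placeholders.

import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("192.168.1.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.1.1")

def next_hop(destination: str):
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_SUBNET:
        return dest                # same network: send directly to the host
    return DEFAULT_GATEWAY         # different network: send via the gateway

print(next_hop("192.168.1.42"))    # 192.168.1.42  (local delivery)
print(next_hop("93.184.216.34"))   # 192.168.1.1   (handed to the gateway)
```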
GBIC - Gigabit Interface Converter:
A Gigabit Interface Converter (GBIC) is a transceiver that converts electric currents (digital highs and lows) to optical signals, and optical signals to digital electric currents. The GBIC is typically employed in fiber optic and Ethernet systems as an interface for high-speed networking. The data transfer rate is one gigabit per second (1 Gbps) or more.
GBIC modules allow technicians to easily configure and upgrade electro-optical communications networks. The typical GBIC transceiver is a plug-in module that is hot-swappable (it can be removed and replaced without turning off the system). The devices are economical, because they eliminate the necessity for replacing entire boards at the system level. Upgrading can be done with any number of units at a time, from an individual module to all the modules in a system.
GDMO - Guidelines for Definition of Managed Objects:
Guidelines for Definition of Managed Objects (GDMO) is a standard for defining object s in a network in a consistent way. With a consistent "language" for describing such objects as workstations, LAN servers, and switches, programs can be written to control or sense the status of network elements throughout a network. Basically, GDMO prescribes how a network product manufacturer must describe the product formally so that others can write programs that recognize and deal with the product. Using GDMO, you describe the class or classes of the object, how the object behaves, its attributes, and classes that it may inherit.
GDMO is part of the Open Systems Interconnection (OSI) Common Management Information Protocol (CMIP) and also the guideline for defining network objects under the Telecommunications Management Network (TMN ), a comprehensive and strategic series of international standards for network management. The object definitions created using GDMO and related tools form a Management Information Base (MIB). GDMO uses Abstract Syntax Notation One (ASN.1) as the rules for syntax and attribute encoding when defining the objects. GDMO is specified in ISO/IEC standard 10165/x.722.
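GDMO itself is a specification notation rather than executable code, but the concepts the entry describes (an object's class, its attributes, its behavior, and the classes it inherits from) map naturally onto ordinary classes. The Python sketch below is only an analogy with invented class and attribute names; it is not GDMO syntax or a real MIB definition.

```python
# Analogy only: modelling "managed object" ideas with plain classes.

class ManagedObject:
    """Base class standing in for a generic managed-object definition."""
    def __init__(self, name: str):
        self.name = name
        self.attributes = {"operationalState": "enabled"}

    def get(self, attribute: str):
        return self.attributes[attribute]

    def set(self, attribute: str, value):
        self.attributes[attribute] = value

class LanSwitch(ManagedObject):          # inherits from a parent class
    """A more specific class adding its own attributes and behavior."""
    def __init__(self, name: str, ports: int):
        super().__init__(name)
        self.attributes["portCount"] = ports

    def disable(self):                   # behavior defined for the class
        self.set("operationalState", "disabled")

switch = LanSwitch("floor3-switch", ports=24)
switch.disable()
print(switch.get("operationalState"), switch.get("portCount"))  # disabled 24
```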
General Packet Radio Services - GPRS:
General Packet Radio Services (GPRS) is a packet-based wireless communication service that promises data rates from 56 up to 114 Kbps and continuous connection to the Internet for mobile phone and computer users. The higher data rates will allow users to take part in video conferences and interact with multimedia Web sites and similar applications using mobile handheld devices as well as notebook computers. GPRS is based on Global System for Mobile (GSM) communication and will complement existing services such as circuit-switched cellular phone connections and the Short Message Service (SMS).
In theory, GPRS packet-based service should cost users less than circuit-switched services since communication channels are being used on a shared-use, as-packets-are-needed basis rather than dedicated only to one user at a time. It should also be easier to make applications available to mobile users because the faster data rate means that middleware currently needed to adapt applications to the slower speed of wireless systems will no longer be needed. As GPRS becomes available, mobile users of a virtual private network (VPN) will be able to access the private network continuously rather than through a dial-up connection.
GPRS will also complement Bluetooth , a standard for replacing wired connections between devices with wireless radio connections. In addition to the Internet Protocol (IP), GPRS supports X.25, a packet-based protocol that is used mainly in Europe. GPRS is an evolutionary step toward Enhanced Data GSM Environment (EDGE) and Universal Mobile Telephone Service (UMTS).
Generalized Markup Language - GML:
Generalized Markup Language (GML) is an IBM document-formatting language that describes a document in terms of its organizational structure and content parts and their relationship. GML markup or tags describe such parts as chapters, important sections and less important sections (by specifying heading levels), paragraphs, lists, tables, and so forth. GML frees document creators from specific document formatting concerns such as font specification, line spacing, and page layout required by IBM's printer formatting language, SCRIPT.
GML Starter Set is the name of IBM's set of GML tags. GML Starter Set input is processed by the Document Composition Facility (DCF) which formats printer-ready output. A later and more capable set of GML tags is provided by IBM's BookMaster product. GML preceded and was an inspiration for the industry-developed Standard Generalized Markup Language (SGML), today's strategic set of rules for creating any structured document description language. This Web page is marked up with Hypertext Markup Language (HTML) tags and is an example of a document that makes use of GML concepts. The Extensible Markup Language (XML) also has roots in GML.
Generic Top-level Domain Name - gTLD:
A generic Top-Level Domain Name (gTLD) is the top-level domain name of an Internet address that identifies it generically as associated with some domain class, such as .com (commercial), .net (originally intended for Internet service providers, but now used for many purposes), .org (for non-profit organizations, industry groups, and others), .gov (U.S. government agencies), .mil (for the military), .edu (for educational institutions); and .int (for international treaties or databases and not much used). For example, in the domain name, www.ibm.com, .com is the chosen gTLD. In addition to the gTLD, there is the ccTLD (country code top-level domain name) that identifies a specific national domicile for an address. (For instance, .fr for France and .mx for Mexico.)
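As a small illustration of the naming scheme above, the helper below pulls the final label out of a host name, which is the gTLD for names like www.ibm.com (and the ccTLD for country-coded names). The host names used are examples only.

```python
# Extract the top-level domain (the final label) from a host name.

def top_level_domain(hostname: str) -> str:
    labels = hostname.lower().rstrip(".").split(".")
    return labels[-1]

for host in ("www.ibm.com", "www.example.org", "www.example.co.uk"):
    print(host, "->", top_level_domain(host))
# www.ibm.com -> com
# www.example.org -> org
# www.example.co.uk -> uk   (a ccTLD; only the last label is reported)
```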
In November 2000, the Internet Corporation for Assigned Names and Numbers (ICANN), a Los Angeles-based non-profit group that oversees the distribution of domain names, approved seven additional gTLDs. The new gTLDs are .biz, restricted to businesses; .info, open to anyone; .name, for personal registrations; .pro, for licensed professionals such as lawyers, doctors and accountants; .aero, for anything related to air transport; .museum, for museums; and .coop, for co-operative businesses such as credit unions. The group selected these new gTLDs from among more than 40 proposed suffixes. It rejected gTLDs such as .kid, .site, .xxx, .home, and .dot. ICANN is currently negotiating registry agreements with the gTLD applicants it chose.
Proponents of adding new gTLDs argue that they are easy to create and free up new space for Internet addresses. Those opposed say more gTLDs only lead to confusion and pose an increased risk of trademark infringement, cybersquatting, and cyberpiracy. ICANN has approved several organizations to register domain names for individuals and businesses. The group has not yet accredited anyone to pre-register names in any of the new gTLDs, and those attempting it do so at their own risk.
George Boole (1815-1864) was a British mathematician and is known as the founder of mathematical logic. Boole, who came from a poor family and was essentially a self-taught mathematician, made his presence known in the world of mathematics in 1847 after the publication of his book, "The Mathematical Analysis of Logic". In his book, Boole successfully demonstrated that logic, as Aristotle taught it, could be represented by algebraic equations. In 1854, Boole firmly established his reputation by publishing "An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities", a continuation of his earlier work.
In 1855 Boole, the first professor of mathematics at The College of Cork, Ireland, married Mary Everest, who is now known as a mathematician and teacher in her own right. Mary, who was 18 years younger than Boole, served as sounding-board and editor for her husband throughout their nine years of marriage. Unfortunately, Mary's poor choice of medical treatment may have hastened Boole's death. After getting caught in the rain and catching a cold, Boole was put to bed by his wife, who dumped buckets of water on him based on the theory that whatever had caused the illness would also provide the cure. (It seemed logical to her.) George and Mary had five daughters; the third daughter, Alicia Boole Stott, became well-known for her work in the visualization of geometric figures in hyperspace.
Boole's work in symbolic logic, collectively known as "Boolean algebra", is widely regarded to be based on the work of earlier mathematician G.W. Leibniz. Although Boole's work was well-received during his lifetime, it was considered to be "pure" mathematics until 1938, when Claude Shannon published his thesis at MIT. Shannon demonstrated that Boole's symbolic logic, as it applies to the representation of TRUE and FALSE, could be used to represent the functions of switches in electronic circuits. This became the foundation for digital electronic design, with practical applications in telephone switching and computer engineering.
Today, when using a search engine on the Internet, we use Boole's mathematical concepts to help us locate information by defining a relationship between the terms we enter. For instance, searching for George AND Boole would find every article in which both the word George and the word Boole appear. Searching for George OR Boole would find every article in which either the word George or the word Boole appears. We call this a Boolean search.
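The Boolean search described above can be illustrated in a few lines of Python: "George AND Boole" requires both terms, while "George OR Boole" accepts either. The article snippets are invented for the example.

```python
# Tiny Boolean (AND/OR) search over a list of article snippets.

articles = [
    "George Boole founded mathematical logic",
    "George Washington crossed the Delaware",
    "Boolean algebra underpins digital circuits",
]

def matches(text: str, terms, mode: str = "AND") -> bool:
    words = text.lower().split()
    hits = [term.lower() in words for term in terms]
    return all(hits) if mode == "AND" else any(hits)

print([a for a in articles if matches(a, ["George", "Boole"], "AND")])
# ['George Boole founded mathematical logic']
print([a for a in articles if matches(a, ["George", "Boole"], "OR")])
# the first two articles match
```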
GFS - Global File System:
The word "Ghost" derives from Old English gast and means a disembodied spirit or soul. In information technology, the term has several special meanings:
1) Ghost, a product from Symantec, can clone (copy) the entire contents of a hard disk to another computer's hard disk, automatically formatting and partitioning the target disk. This product is especially useful where one system is to be replicated on a number of computers.
2) On the Web's live chat medium, the Internet Relay Chat (IRC), a ghost is a vacated user session that the server believes is still active.
3) Ghostscript is a program for UNIX systems that interprets a Postscript file (which is a file formatted for a Postscript printer) so that, using a related program, Ghostview, you can view it on a display screen.
A Ghost Site is a Web site that is no longer maintained but that remains available for viewing. Since many sites don't identify their date of last update, it's not always easy to tell whether a site is a ghost site or just resting. A ghost site is not to be confused with a retired or invisible site (one which doesn't exist anymore and results in a "Not found" message). It's possible to have a ghost site that continues to be useful or appealing because its content doesn't date easily. A ghost site that for some reason seems to have moved to another location is a zombie.
Gigabyte - GB:
A Gigabyte (pronounced GIG-a-bite with hard G's) is a measure of computer data storage capacity and is "roughly" a billion bytes. A gigabyte is two to the 30th power, or 1,073,741,824 in decimal notation. Also see gigabit, megabyte, terabyte, and exabyte.
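The arithmetic behind the definition is easy to check directly; the snippet below computes two to the 30th power and compares it with a "round" billion bytes to show why "roughly a billion" is the right hedge.

```python
# Binary gigabyte (2**30 bytes) versus a round billion bytes.

binary_gigabyte = 2 ** 30
decimal_billion = 10 ** 9

print(binary_gigabyte)                      # 1073741824
print(binary_gigabyte - decimal_billion)    # 73741824 bytes larger
print(f"{binary_gigabyte / decimal_billion:.3f}x a billion bytes")  # 1.074x
```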
GIMP, sometimes referred to as "The Gimp," is a freely available open source application for creating and manipulating graphic images that runs on Linux and other UNIX-based operating systems. GIMP is distributed under licensing terms defined by the GNU project. You are likely to find GIMP as one of the optional applications that come in any large Linux package such as those distributed by Debian and Red Hat. You can also download it directly. GIMP offers photo retouching, image composition, and image authoring and is favorably compared by users to Adobe's Photoshop and Illustrator applications. GIMP was created by Peter Mattis and Spencer Kimball.
In information technology, a glyph (pronounced GLIHF) is a graphic symbol that provides the appearance or form for a character. A glyph can be an alphabetic or numeric font or some other symbol that pictures an encoded character. The following quote is from a document written as background for the Unicode character set standard.
An ideal characterization of characters and glyphs and their relationship may be stated as follows:
1. A character conveys distinctions in meaning or sounds. A character has no intrinsic appearance.
2. A glyph conveys distinctions in form. A glyph has no intrinsic meaning.
3. One or more characters may be depicted by one or more glyph representations (instances of an abstract glyph) in a possibly context dependent fashion.
Glyph is from a Greek word for "carving."
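Point 3 of the characterization above, several characters depicted by one glyph, can be seen with Unicode combining marks: "é" stored as a single character versus "e" followed by a combining accent, both of which render as one glyph on screen. This is a small, self-contained illustration using Python's standard unicodedata module.

```python
# One glyph on screen, but one or two characters underneath.

import unicodedata

single = "\u00e9"                                  # 'é' as one character
combined = unicodedata.normalize("NFD", single)    # 'e' + combining acute

print(len(single), len(combined))                  # 1 2
print([unicodedata.name(c) for c in combined])
# ['LATIN SMALL LETTER E', 'COMBINING ACUTE ACCENT']
print(single == unicodedata.normalize("NFC", combined))  # True
```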
GML - Generalized Markup Language:
See Generalized Markup Language.
GNOME - GNU Network Object Model Environment:
GNU Network Object Model Environment, pronounced gah-NOHM (GNOME) is a graphical user interface (GUI) and set of computer desktop applications for users of the Linux computer operating system. It's intended to make a Linux operating system easy to use for non-programmers and generally corresponds to the Windows desktop interface and its most common set of applications. In fact, GNOME allows the user to select one of several desktop appearances. With GNOME, the user interface can, for example, be made to look like Windows 98 or like Mac OS. In addition, GNOME includes a set of the same type of applications found in the Windows Office 97 product: a word processor, a spreadsheet program, a database manager, a presentation developer, a Web browser, and an e-mail program.
GNOME is derived from a long-running volunteer effort under the auspices of the Free Software Foundation , the organization founded by Richard Stallman. Stallman and fellow members of the Free Software Foundation believe that software source code should always be public and open to change so that it can continually be improved by others. GNOME is in part an effort to make Linux a viable alternative to Windows so that the desktop operating system market is not controlled by a single vendor. GNU is the Free Software Foundations's own operating system and set of applications. Linux, the operating system, was developed by Linus Torvalds who, assisted by contributors, added a kernel to additional operating system components from GNU.
GNOME comes with an object request broker (ORB) supporting the Common Object Request Broker Architecture (CORBA ) so that GNOME programs and programs from other operating system platforms in a network will be able to interoperate. GNOME also includes a widget library that programmers can use to develop applications that use the GNOME user interface. In addition to a desktop version, GNOME also comes as a user interface and set of applications for the handheld PalmPilot.
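A minimal sketch of what using a GNOME widget library can look like from Python, via the GObject-introspection bindings (PyGObject). It assumes GTK 3 and the gi package are installed; the window title and button label are placeholders, and this is an illustrative sketch rather than a complete GNOME application.

```python
# Minimal GTK window with one button, via PyGObject (assumes GTK 3).

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

window = Gtk.Window(title="Hello GNOME")
button = Gtk.Button(label="Click me")
button.connect("clicked", lambda _button: print("button clicked"))
window.connect("destroy", Gtk.main_quit)   # quit when the window closes

window.add(button)
window.show_all()
Gtk.main()
```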
GNU is a UNIX-like operating system that comes with source code that can be copied, modified, and redistributed. The GNU project was started in 1983 by Richard Stallman and others, who formed the Free Software Foundation . Stallman believes that users should be free to do whatever they want with software they acquire, including making copies for friends and modifying the source code and repackaging it with a distribution charge. The FSF uses a stipulation that it calls copyleft . Copyleft stipulates that anyone redistributing free software must also pass along the freedom to further copy and change the program, thereby ensuring that no one can claim ownership of future versions and place restrictions on users.
The "free" means "freedom," but not necessarily "no charge." The Free Software Foundation does charge an initial distribution price for GNU. Redistributors can also charge for copies either for cost recovery or for profit. The essential idea of "free software" is to give users freedom in how they modify or repackage the software along with a restriction that they in turn do not restrict user freedom when they pass copies or modified versions along.
One of the results of the free software philosophy, Stallman believes, would be free programs put together from other free programs. GNU is an example of this idea. It became a complete operating system in August, 1996, when a kernel , consisting of GNU Hurd and Mach, was added. The FSF plans to continue developing their free software in the form of application programs. A free spreadsheet program is now available. The Linux operating system consists of GNU components and the kernel developed by Linus Torvalds.
GPRS - General Packet Radio Services:
See General Packet Radio Services.
gTLD - Generic Top-level Domain Name:
See Generic Top-level Domain Name.
GUI - Graphic User Interface:
A GUI (usually pronounced GOO-ee) is a graphical (rather than purely textual) user interface to a computer. As you read this, you are looking at the GUI or graphical user interface of your particular Web browser . The term came into existence because the first interactive user interfaces to computers were not graphical; they were text-and-keyboard oriented and usually consisted of commands you had to remember and computer responses that were infamously brief. The command interface of the DOS operating system (which you can still get to from your Windows operating system) is an example of the typical user-computer interface before GUIs arrived. An intermediate step in user interfaces between the command line interface and the GUI was the non-graphical menu-based interface, which let you interact by using a mouse rather than by having to type in keyboard commands.
Today's major operating systems provide a graphical user interface. Applications typically use the elements of the GUI that come with the operating system and add their own graphical user interface elements and ideas. A GUI sometimes uses one or more metaphors for objects familiar in real life, such as the desktop , the view through a window, or the physical layout in a building. Elements of a GUI include such things as: windows, pull-down menus, buttons, scroll bars, iconic images, wizards, the mouse, and no doubt many things that haven't been invented yet. With the increasing use of multimedia as part of the GUI, sound, voice, motion video, and virtual reality interfaces seem likely to become part of the GUI for many applications. A system's graphical user interface along with its input devices is sometimes referred to as its "look-and-feel."
The GUI familiar to most of us today in either the Mac or the Windows operating systems and their applications originated at the Xerox Palo Alto Research Laboratory in the late 1970s. Apple used it in their first Macintosh computers. Later, Microsoft used many of the same ideas in their first version of the Windows operating system for IBM-compatible PCs.
When creating an application, many object-oriented tools exist that facilitate writing a graphical user interface. Each GUI element is defined as a class widget from which you can create object instances for your application. You can code or modify prepackaged methods that an object will use to respond to user stimuli.
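The widget-class-and-callback pattern described above can be shown with Python's standard Tk bindings: a Button object is instantiated from a widget class and given a function to run in response to a user action. This is a generic illustration, not tied to any particular operating system's GUI toolkit.

```python
# Minimal GUI: a label and a button whose callback updates the label.

import tkinter as tk

def on_click():
    label.config(text="Button was clicked")

root = tk.Tk()
root.title("Minimal GUI")

label = tk.Label(root, text="Waiting for a click...")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Click me", command=on_click)
button.pack(pady=(0, 10))

root.mainloop()
```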
A Gypsy Corporation is a company that keeps its corporate filings, headquarters, or the majority of its liquid holdings and other financial assets in jurisdictions outside the company's publicly claimed national identity and primary residence, for the purpose of gaining sovereign benefits relating to environmental, infrastructure, national security, labor and tax laws.
Gypsy Corporations are typically amoral organizations that, as a standard modus operandi, frequently behave in a manner contrary to the host nation's best interest, a behavior incorrectly and selfishly justified under the errant premise of enriching a noncommittal, ambiguous, multinational grey entity called the shareholder.
A Gypsy Corporation's sociopathic behavior is frequently visible in the form of (i) refusal to repatriate earnings, (ii) the importing of labor for jobs that can be performed by the domestic population, particularly at non-parity wages and benefits, and (iii) the outsourcing of services and manufacturing away from the host nation even though the recipients of those services and products reside within that nation, as a form of expense reduction or, more crudely, profit-margin widening.
Hypochondriasis, also known as hypochondria, health anxiety or illness anxiety disorder, refers to worry about having a serious illness. This debilitating condition is the result of an inaccurate perception of the condition of body or mind despite the absence of an actual medical diagnosis. An individual suffering from hypochondriasis is known as a hypochondriac. Hypochondriacs become unduly alarmed about any physical or psychological symptoms they detect, no matter how minor the symptom may be, and are convinced that they have, or are about to be diagnosed with, a serious illness.
Often, hypochondria persists even after a physician has evaluated a person and reassured them that their concerns about symptoms do not have an underlying medical basis or, if there is a medical illness, their concerns are far in excess of what is appropriate for the level of disease. Many hypochondriacs focus on a particular symptom as the catalyst of their worrying, such as gastro-intestinal problems, palpitations, or muscle fatigue. To qualify for the diagnosis of hypochondria the symptoms must have been experienced for at least 6 months.
The DSM-IV-TR defines this disorder, "Hypochondriasis", as a somatoform disorder and one study has shown it to affect about 3% of the visitors to primary care settings. The newly published DSM-5 replaces the diagnosis of hypochondriasis with the diagnoses of "somatic symptom disorder" and "illness anxiety disorder".
Hypochondria is often characterized by fears that minor bodily or mental symptoms may indicate a serious illness, constant self-examination and self-diagnosis, and a preoccupation with one's body. Many individuals with hypochondriasis express doubt and disbelief in the doctors' diagnosis, and report that doctors’ reassurance about an absence of a serious medical condition is unconvincing, or short-lasting. Additionally, many hypochondriacs experience elevated blood pressure, stress, and anxiety in the presence of doctors or while occupying a medical facility, a condition known as "white coat syndrome". Many hypochondriacs require constant reassurance, either from doctors, family, or friends, and the disorder can become a disabling torment for the individual with hypochondriasis, as well as his or her family and friends. Some hypochondriacal individuals completely avoid any reminder of illness, whereas others frequently visit medical facilities, sometimes obsessively. Some sufferers may never speak about it.
Hypochondriasis is categorized as a somatic amplification disorder—a disorder of "perception and cognition"—that involves hyper-vigilance toward the state of the body or mind and a tendency to react to these initial perceptions in a negative manner that is further debilitating. Hypochondriasis manifests in many ways. Some people have numerous intrusive thoughts and physical sensations that push them to check with family, friends, and physicians. For example, a person who has a minor cough may think that they have tuberculosis. Or sounds produced by organs in the body, such as those made by the intestines, might be seen as a sign of a very serious illness to patients dealing with hypochondriasis.
Other people are so afraid of any reminder of illness that they will avoid medical professionals for a seemingly minor problem, sometimes to the point of becoming neglectful of their health when a serious condition may exist and go undiagnosed. Yet others live in despair and depression, certain that they have a life-threatening disease and no physician can help them. Some consider the disease as a punishment for past misdeeds.
Hypochondriasis is often accompanied by other psychological disorders. Bipolar disorder, clinical depression, obsessive-compulsive disorder (OCD), phobias, and somatization disorder are the most common accompanying conditions in people with hypochondriasis, as well as a generalized anxiety disorder diagnosis at some point in their life.
Many people with hypochondriasis experience a cycle of intrusive thoughts followed by compulsive checking, which is very similar to the symptoms of obsessive-compulsive disorder. However, while people with hypochondriasis are afraid of having an illness, patients with OCD worry about getting an illness or of transmitting an illness to others. Although some people might have both, these are distinct conditions.
Patients with hypochondriasis often are not aware that depression and anxiety produce their own physical symptoms, and mistake these symptoms for manifestations of another mental or physical disorder or disease. For example, people with depression often experience changes in appetite and weight fluctuation, fatigue, decreased interest in sex and motivation in life overall. Intense anxiety is associated with rapid heartbeat, palpitations, sweating, muscle tension, stomach discomfort, dizziness, and numbness or tingling in certain parts of the body (hands, forehead, etc.).
If a person is ill with a medical disease such as diabetes or arthritis, there will often be psychological consequences, such as depression. Some even report being suicidal. In the same way, someone with psychological issues such as depression or anxiety will sometimes experience physical manifestations of these affective fluctuations, often in the form of medically unexplained symptoms. Common symptoms include headaches; abdominal, back, joint, rectal, or urinary pain; nausea; fever and/or night sweats; itching; diarrhea; dizziness; or balance problems. Many people with hypochondriasis accompanied by medically unexplained symptoms feel they are not understood by their physicians, and are frustrated by their doctors’ repeated failure to provide symptom relief.
The ICD-10 defines hypochondriasis as follows:
- A. Either one of the following:
- A persistent belief, of at least six months' duration, of the presence of a maximum of two serious physical diseases (of which at least one must be specifically named by the patient).
- A persistent preoccupation with a presumed deformity or disfigurement (body dysmorphic disorder).
- B. Preoccupation with the belief and the symptoms causes persistent distress or interference with personal functioning in daily living, and leads the patient to seek medical treatment or investigations (or equivalent help from local healers).
- C. Persistent refusal to accept medical advice that there is no adequate physical cause for the symptoms or physical abnormality, except for short periods of up to a few weeks at a time immediately after or during medical investigations.
- D. Most commonly used exclusion criteria: not occurring only during any of the schizophrenia and related disorders (F20-F29, particularly F22) or any of the mood disorders (F30-F39).
The DSM-IV-TR gives the following diagnostic criteria:
A. Preoccupation with fears of having, or the idea that one has, a serious disease based on the person's misinterpretation of bodily symptoms.
B. The preoccupation persists despite appropriate medical evaluation and reassurance.
C. The belief in Criterion A is not of delusional intensity (as in Delusional Disorder, Somatic Type) and is not restricted to a circumscribed concern about appearance (as in Body Dysmorphic Disorder).
D. The preoccupation causes clinically significant distress or impairment in social, occupational, or other important areas of functioning.
E. The duration of the disturbance is at least 6 months.
F. The preoccupation is not better accounted for by Generalized Anxiety Disorder, Obsessive-Compulsive Disorder, Panic Disorder, a Major Depressive Episode, Separation Anxiety, or another Somatoform Disorder.
Hypochondria is currently considered a psychosomatic disorder, that is, a mental illness with physical symptoms. Cyberchondria is a colloquial term for hypochondria in individuals who have researched medical conditions on the Internet. The media and the Internet often contribute to hypochondria, as articles, TV shows and advertisements regarding serious illnesses such as cancer and multiple sclerosis often portray these diseases as being random, obscure and somewhat inevitable. Inaccurate portrayal of risk and the identification of non-specific symptoms as signs of serious illness contribute to exacerbating the hypochondriac's fear that they actually have that illness.
Major disease outbreaks or predicted pandemics can also contribute to hypochondria. Statistics regarding certain illnesses, such as cancer, will give hypochondriacs the illusion that they are more likely to develop the disease.
Overly protective caregivers and an excessive focus on minor health concerns have been implicated as a potential cause of hypochondriasis development.
It is common for serious illnesses or deaths of family members or friends to trigger hypochondria in certain individuals. Similarly, when approaching the age of a parent's premature death from disease, many otherwise healthy, happy individuals fall prey to hypochondria. These individuals believe they are suffering from the same disease that caused their parent's death, sometimes causing panic attacks with corresponding symptoms.
Family studies of hypochondriasis do not show a genetic transmission of the disorder. Among relatives of people suffering from hypochondriasis only somatization disorder and generalized anxiety disorder were more common than in average families. Other studies have shown that the first degree relatives of patients with OCD have a higher than expected frequency of a somatoform disorder (either hypochondriasis or body dysmorphic disorder).
Most research indicates that cognitive behavioral therapy (CBT) is an effective treatment for hypochondriasis. Much of this research is limited by methodological issues. A small amount of evidence suggests that selective serotonin reuptake inhibitors can also reduce symptoms, but further research is needed.
Among the regions of the abdomen, the hypochondrium is the uppermost part. The word derives from the Greek term ὑποχόνδριος hypokhondrios, meaning "of the soft parts between the ribs and navel" from ὑπό hypo ("under") and χόνδρος khondros, or cartilage (of the sternum). Hypochondria in Late Latin meant "the abdomen".
The term hypochondriasis for a state of disease without real cause reflected the ancient belief that the viscera of the hypochondria were the seat of melancholy and sources of the vapor that caused morbid feelings. Until the early 18th century, the term referred to a "physical disease caused by imbalances in the region that was below your rib cage" (i.e., of the stomach or digestive system). For example, Robert Burton's The Anatomy of Melancholy (1621) blamed it "for everything from 'too much spittle' to 'rumbling in the guts'".
Immanuel Kant discussed hypochondria in his 1798 book, Anthropology, like this:
The disease of the hypochondriac consists in this: that certain bodily sensations do not so much indicate a really existing disease in the body as rather merely excite apprehensions of its existence: and human nature is so constituted – a trait which the animal lacks – that it is able to strengthen or make permanent local impressions simply by paying attention to them, whereas an abstraction – whether produced on purpose or by other diverting occupations – lessen these impressions, or even effaces them altogether.
- Munchausen syndrome
- Psychosomatic medicine
- Sickness behavior
- Somatoform disorder
- Somatosensory amplification
- Medical students' disease
- Man flu
- The Imaginary Invalid
Challenges Faced By Power Purchase Agreement
India is the world's 6th largest energy consumer, accounting for 3.4% of global energy consumption. Due to India's economic rise, the demand for energy has grown at an average of 3.6% per annum over the past 30 years. In March 2009, the installed power generation capacity of India stood at 149,390 MW, while per capita power consumption stood at 612 kWh [1]. The country's annual power production increased from about 190 billion kWh in 1986 to more than 680 billion kWh in 2006 [2]. The Indian government has set an ambitious target to add approximately 78,000 MW of installed generation capacity by 2012. The total demand for electricity in India is expected to cross 950,000 MW by 2030 [3].
About 75% of the electricity consumed in India is generated by thermal power plants, 21% by hydroelectric power plants and 4% by nuclear power plants [4]. More than 50% of India's commercial energy demand is met through the country's vast coal reserves. The country has also invested heavily in recent years in renewable sources of energy such as wind energy. As of 2008, India's installed wind power generation capacity stood at 9,655 MW [5]. Additionally, India has committed a massive amount of funds to the construction of various nuclear reactors, which would generate at least 30,000 MW [6]. In July 2009, India unveiled a $19 billion plan to produce 20,000 MW of solar power by 2020.
Power Purchase Agreement
A Power Purchase Agreement (PPA) is a legal contract between an electricity generator (provider) and a power purchaser (host). The power purchaser purchases energy, and sometimes also capacity and/or ancillary services, from the electricity generator.
The seller under the PPA is typically an independent power producer, or "IPP." Energy sales by regulated utilities are typically highly regulated, so that no PPA is required or appropriate. A Power Purchase Agreement (PPA) is at the heart of any power generation project that is to be undertaken by an Independent Power Producer (IPP). During the past decade, privately owned IPPs selling electricity to the power industry have become commonplace.
The PPA is often regarded as the central document in the development of independent electricity generating assets (power plants), and is a key to obtaining project financing for the project.
Emergence of power purchase agreement
In 1992, the government amended India's Electricity Act of 1910 and opened the electricity sector to privatization and foreign investment. An incentive package was enacted in 1993 to provide a five-year tax holiday for new projects in the power sector and a guaranteed 16% return on foreign investment. Additionally, the protracted project approval system was substantially revised. IPPs were allowed attractive terms to set up power stations, but they had to work with the vertically integrated SEBs, and the IPPs entered into power purchase agreements with those SEBs.
Although a policy on private sector participation was announced in 1991, the pace of private investment was slow, as most Independent Power Producers (IPPs) were unable to achieve closure for their projects despite progressing well on the other clearances.
Delicensing of Generation
Delays in the finalization of Power Purchase Agreements (PPAs) and the high cost of electricity estimated for the projects were also among the reasons for the failure of power purchase agreements. The Electricity Bill 2000, which was introduced in Parliament, envisaged sweeping changes to the power sector, including delicensing generation and permitting power trading. The Bill's aim was to carry forward reforms in the power sector without imposing any particular model on the States; the States could choose any model that suited them. The Bill hoped to ensure competition in power trading. The Power Trading Corporation (PTC) had been set up by the Centre to purchase power from mega projects and sell it to different States. What was visualised at this stage was that the PTC would not be a monopoly and that power would be a commodity that could be traded. The Government was trying to include a provision to permit others to acquire licences and trade in power.
On generation too, the Bill envisaged complete delicensing of the sector, except for some inter-State hydel projects. The Power Ministry pointed out that the Government had an ambitious target of adding 100,000 MW of capacity in the next 12 years [7]. This would be equal to the capacity that had been built up in the previous 53 years, and it called for massive investments not only in generation but also in transmission and distribution systems.
There was no way the public sector could achieve this target and hence the private sector had to be induced to participate in capacity addition in a greater way.
The Electricity Act 2003, enacted on 10th June 2003, brought about a paradigm shift by opening up the Indian power sector to competition. The Act brings about de-licensing of thermal generation, open access in transmission, open access to the distribution network in phases, multiple licensing in distribution zones and de-licensing of rural electricity supply. This set the tone for a competitive era in the Indian power sector. The delicensing of generation opened the market for electricity generation by private players.
Legislative setup and Power Purchase Agreement
Under the Constitution of India, electricity is a 'concurrent' subject contained under Entry 38, List III. Hence, the Central as well as the State governments have authority to enact legislation in regard to the power sector. The Central Government generally provides the policy framework and the State governments focus on specific issues. Currently, the constitution, responsibilities and accountability of the power sector entities in India are governed by the following Central statutes:
- The Electricity Act, 2003
- The Electricity (supply) Act, 1948 (Repealed)
- The Electricity Regulatory Commission Act, 1998 (Repealed)
As per the Electricity Act, 2003, the following are the Sections dealing with the Power Purchase Agreement.
86. Functions of State Commission
(1) The State Commission shall discharge the following functions, namely:--
(b) Regulate electricity purchase and procurement process of distribution licensees including the price at which electricity shall be procured from the generating companies or licensees or from other sources through agreements for purchase of power for distribution and supply within the State;
Section 49 of The Electricity Act, 2003 deals with agreement for the purchase and supply of electricity.
49. Agreements with respect to supply or purchase of electricity
Where the Appropriate Commission has allowed open access to certain consumers under section 42, such consumers, notwithstanding the provisions contained in clause (d) of sub-section (1) of section 62, may enter into an agreement with any person for supply or purchase of electricity on such terms and conditions (including tariff) as may be agreed upon by them.
PPAs are entered into between the State Electricity Boards and the independent power producers. The Electricity Act 2003 makes it mandatory for all SEBs to unbundle into separate generation, transmission and distribution entities so as to make them more efficient than vertically integrated utilities. However, in most countries, vertically integrated utilities continue to remain better financial performers and are better able to meet customer needs.
The Electricity Act, 2003 completely eliminates Section 5 of the Electricity (Supply) Act, 1948, thereby abolishing the existence of the SEBs as statutory autonomous bodies. In other words, the Electricity Act 2003, by totally eliminating that Section of the Electricity (Supply) Act, 1948, clearly converts the SEBs into companies under the Companies Act, 1956.
Section 172. (Transitional provisions) :
Notwithstanding anything to the contrary contained in this Act,-
(a) a State Electricity Board constituted under the repealed laws shall be deemed to be the State Transmission Utility and a licensee under the provisions of this Act for a period of one year from the appointed date or such earlier date as the State Government may notify, and shall perform the duties and functions of the State Transmission Utility and a licensee in accordance with the provisions of this Act and rules and regulations made thereunder:
Provided that the State Government may, by notification, authorize the State Electricity Board to continue to function as the State Transmission Utility or a licensee for such further period beyond the said period of one year as may be mutually decided by the Central Government and the State Government;
The act made it mandatory for all state electricity boards (SEBs) to unbundle into separate generation, transmission and distribution entities. As per Electricity Act electricity act-2003 state electricity board has to be unbundled and make it into three companies -
Power Purchase Agreement and its contents
When a State Electricity Board agrees to purchase energy from an Independent Power Producer, the parties enter into a Power Purchase Agreement. It lays down the names of the parties, their rights and liabilities, the tariff to be paid and many other matters.
It records the names of the parties, their registered offices and the date on which the PPA is entered into. A standard Power Purchase Agreement contains the following clauses, which are referred to as Articles:
1. INTERPRETATION AND DEFINED TERMS
2. SALE AND PURCHASE OF ENERGY
4. CURRENCY, PAYMENTS AND BILLING
5. PRE-OPERATION OBLIGATIONS
8. OPERATIONS AND MAINTENANCE
9. MUTUAL WARRANTIES AND COVENANTS OF THE PARTIES
10. DEFAULTS AND TERMINATION
11. FORCE MAJEURE
12. INDEMNIFICATION AND LIABILITY
14. RESOLUTION OF DISPUTES
16. MISCELLANEOUS PROVISIONS
Challenges faced by the Power Purchase Agreement
The Power Purchase Agreement has gained importance in recent years, but there are still many direct and indirect challenges which affect its practicability and its present-day relevance.
There are many landmark cases raising issues relating to PPAs; one of the most significant in this regard is the Enron-MSEB case [8]. In June 1992, Enron, the US energy giant, engaged in negotiations with the government of India. Enron had identified the state of Maharashtra for a major energy project and negotiated with the state government and with the Maharashtra State Electricity Board (MSEB). Enron's mega-project proposal was for the construction of a US$3 billion, 2,015-megawatt power plant. Since it was the largest project ever undertaken in India, Enron proposed that the project be broken down into two phases. Phase 1 was initially proposed to produce 695 megawatts and would use locally produced natural gas. Phase 2 would produce 1,320 megawatts and would use natural gas imported from Qatar. Enron chose the town of Dabhol, situated on the Indian Ocean, as the project site.
The power project agreement was entered into between the Maharashtra State Electricity Board and the Dabhol Power Company on 8th November 1993. The project was set up in two phases. Phase-I (740 MW) was initially based on naphtha but was eventually to switch to LNG. Phase-II (1,444 MW) was based on LNG from the outset. MSEB was required to purchase 90% of the power generated under the terms of the "take-or-pay" power purchase agreement (PPA) signed with the DPC. The price was determined by a detailed formula in the PPA. The obligations of MSEB under the PPA for both phases were guaranteed by the Maharashtra government. The Centre counter-guaranteed the Maharashtra government's obligations for Phase-I.
Dabhol Phase-I became operational in 1999. Construction of Phase-II was nearing completion when a series of disputes arose between the MSEB and DPC. The plant was shut down in June 2001, after MSEB suspended purchase of power from DPC.
MSEB canceled the power purchase agreement with the Dabhol Power Company at a time when US$300 million had already been invested and Enron and its partners were facing a loss of US$250,000 for each day the project was delayed.
As per the terms of the original agreement, Dabhol and its partners initiated arbitration proceedings against MSEB and the Maharashtra government. The government in turn launched legal action to invalidate the arbitration, alleging that illegal means had been employed to secure the contract. Maharashtra's government officials responsible for the investigation also stated firmly that they had no wish to consider renegotiation. Nevertheless, in the fall of 1995, Enron managed to persuade the government of Maharashtra to reopen negotiations. Subsequently, Chief Minister Joshi announced that a review panel would carry out a review of the project. The review panel not only began to discuss restructuring with Enron executives; it also heard the major opponents of the deal. The major issues were the electricity tariff, the capital costs of the project, the payment plan and the environment.
The electricity output the plant would produce was actually increased from the initially proposed 2,015 megawatts to 2,410 megawatts after the completion of Phase 2. The capital cost was reduced from US$2.85 billion to US$2.5 billion, and the tariff was lowered from 7.03 US cents to 6.03 US cents, subject to the cost of fuel and inflation.
MSEB rescinded the PPA on the grounds of material misrepresentation and default on the availability of power. MSEB had claimed rebates of over Rs 1,200 crore, as per the PPA provisions, for DPC's failure to provide power in the stipulated time period. These events, coupled with Enron's bankruptcy in November 2001, led to stoppage of work at the site. As a result, an investment of nearly Rs 11,000 crore has been idle for more than four years.
A writ petition was filed by the Center of Indian Trade Unions, a federation of registered trade unions, and Shri Abhay Mehta, a resident of Mumbai and a citizen of India. The writ petition was filed by way of public interest litigation.
Original agreement challenged after modification and revision.
The delay was on the part of the petitioners in moving this Court to challenge the original PPA. There is no dispute about the fact that they did not challenge it earlier. The petitioners did not bother to intervene when it was the subject-matter of challenge before this Court in the year 1994. They were least concerned with the PPA, the guarantee and the counter-guarantee till the original PPA was scrapped and a modified PPA was entered into by the Shiv Sena-BJP Government. No explanation has been rendered for the same. The petitioners, therefore, contend that the State of Maharashtra and the Maharashtra State Electricity Board have acted most illegally and against public interest in entering into a modified PPA with the very same party without even clearing them of the grave charges of corruption, bribery, fraud and misrepresentation.
Scrap original Enron Power Project Agreement on the ground of corruption and discriminatory terms.
The statement of the State Government is the foundation of the petitioners' challenge to the PPA and the modified PPA. The State Government later backtracked: it went so far as to say that the filing of the suit and the allegations made therein were not bona fide but were intended to stall the arbitration proceedings and to open a counter for renegotiation. The petitioners also challenged the guarantee and the counter-guarantee furnished by the State of Maharashtra and the Union of India respectively. According to the petitioners, the modified PPA has in no way improved on the original PPA; in the process, much more has been conceded by the State of Maharashtra.
Grounds of challenge of PPA
The challenge to the power project agreement ("PPA") is on various grounds. One of the main grounds of challenge is that it was concluded without proper clearance under the Indian Electricity (Supply) Act, 1948, in particular Section 29 read with Section 31 thereof. The petitioners contend that the requisite clearance was not granted by the Central Electricity Authority ("CEA") and, if granted, was not validly granted after full compliance with the requirements of the Act. It is also contended that even if concurrence or clearance was granted to the original PPA, no fresh clearance or concurrence was obtained from the CEA under Section 31 of the Act for the amended or supplementary scheme. The petitioners also contend that the PPA should be declared void as it was induced by corruption, bribery, fraud and misrepresentation. The PPA has also been challenged on the ground of absence of competitive bidding and lack of transparency. The contention of the petitioners is that such deals could not have been finalised without competitive bidding, and that, the PPA having been scrapped on that very ground, it was not open to the respondents to enter into the modified PPA for a project of much bigger magnitude, having far-reaching ramifications, without tenders, competitive bidding and transparency, and that too in the face of charges of corruption, bribery, fraud and misrepresentation levelled by none other than the very same Government in the suit filed by it in this Court and in its submissions before the arbitrators. The petitioners, therefore, contend that the State of Maharashtra and the Maharashtra State Electricity Board have acted most illegally and against public interest in entering into a modified PPA with the very same party without even clearing them of the grave charges of corruption, bribery, fraud and misrepresentation.
It was held that there was a long delay on the part of the petitioners in moving this Court to challenge the original PPA, and that there was no explanation whatsoever, not to speak of a plausible explanation, for the same. The petitioners cannot claim that they were not aware of the PPA. There is no dispute about the fact that they did not challenge it earlier. They did not even bother to intervene when it was the subject-matter of challenge before this Court in the year 1994. No explanation has been rendered for the same. Entertaining such a challenge at such a belated stage would cause great injustice to the contracting parties for no fault of their own. It was clearly noted that those who purport to work in the public interest and challenge Government action, not for personal gain but for the benefit of the people at large, must be vigilant and watchful and have due regard for the rights of innocent parties affected by their action. In that view of the matter, the petitioners cannot be allowed at this stage to challenge the original PPA on the basis of the material that was available before its scrapping and revival. The writ petition, to that extent, was held not to be maintainable on the ground of unexplained delay.
It was also held that the Government cannot now deny the statements or the allegations made by it, because they were made on verification in the suit filed in this Court and before the Arbitrators in London; yet it wants to retract them on the ground that all those allegations were baseless and unfounded. The Government went so far as to say that the filing of the suit and the allegations made therein were not bona fide but were intended to stall the arbitration proceedings and to open a counter for renegotiation. The petitioners contended that much more has been conceded in favour of Enron and Dabhol than was given to them under the original PPA.
There is no dispute about the fact that categorical allegations of corruption, bribery, fraud and misrepresentation were made by the State Government in the plaint in the suit filed in this Court. Equally uncontroverted is the position that the very same allegations were reiterated by the State Government before the Arbitrators in London and it was contended that the PPA was void on that count. The allegations are very serious, more so when levelled by the Government of the State, and if found correct, will have a serious effect on the PPA. The PPA in that event may have to be held to be in conflict with the public policy of India. Similarly, if a contract is obtained by a party by bribing the officials of the Government or its instrumentality, very many important issues in regard to the validity of such contract would arise. Otherwise also, even under Section 19 of the Indian Contract Act, an agreement caused by fraud and misrepresentation is voidable. But all those legal issues would arise only when there is material to justify the charge. In the instant case, the petitioners do not have with them any material as such to justify the charge of corruption, bribery, fraud and misrepresentation. The foundation of their challenge to the PPA and the modified PPA is the charge levelled by the State Government itself in the plaint in the suit filed in this Court which, according to the petitioners, amounts to admissions of the State Government under Section 17 of the Evidence Act.
In view of the foregoing discussions and for the reasons set out above, both these Writ Petitions were dismissed. However, though the petitioners have lost the litigation, they have succeeded in extracting from the State Government a clear statement to the effect that what they said against Enron and did in pursuance thereof was activated by political considerations. This case has highlighted to the people as to how, even after 50 years of independence, political considerations outweigh the public interest and the interest of the State and to what extent the Government can go to justify its actions not only before the public but even before the Courts of law.
During the past decade, privately owned IPPs selling electricity to the power industry have become commonplace, and such arrangements require some version of a PPA. Although a policy on private sector participation was announced in 1991, the pace of private investment was slow, as most Independent Power Producers (IPPs) were unable to achieve closure for their projects despite progressing well on the other clearances.
Delays in the finalization of Power Purchase Agreements (PPAs) and the high cost of electricity estimated for the projects were also among the reasons for the failure of power purchase agreements. The Electricity Bill 2000, which was introduced in Parliament, eventually took shape as the Electricity Act, 2003, which delicensed the power sector and was a major factor in the popularity of the PPA.
Although the PPA has gained importance in recent years, there are still many direct and indirect challenges which affect its practicability and its present-day relevance.
Power purchase agreements vary a great deal between states and between kinds of energy: some agreements are meticulously drafted to avoid any ambiguity and scope for litigation, while others are not well drafted and leave considerable scope for ambiguity and litigation.
8. Center of Indian Trade Unions and another Vs. Union of India and others AIR1997Bom79.
The author can be reached at: [email protected]
Users may be aware of the many things Siri can actually do apart from monitoring: it can tell you how many calories are in the cup of tea you are drinking, or how many flights are currently flying overhead. But it is also worth knowing where Siri originated and how it has been received.
What is Siri?
Apple’s voice-controlled personal digital assistant, Siri is basically voice-controlled which understands between relationships and context. Ask Siri questions or asking Siri to do things for the user, it seems as if asking a real assistant. With a straight out from Pixar, it keeps the user connected as well as informed in the right place as well as on-time. The technology has been around from couple of years. The assistant first ensured its presence around few years ago on iPhone 4s. The introduction of Siri was as lavish as declared to be the next big thing during the launch presentation of iPhone, but could not withstand the claim as iPhone has grown through the age and developed various other things beyond imagination.
The main idea behind Siri is to offer a seamless interaction like a friend and aims to help in order to get the things done in iPhone, iPad or Apple Watch. With access to every built-in apps on Apple device, for instance Mail, messages, Contacts, Safari etc., this app would search with the help of databases whenever it looks for.
As per Cultofmac, Siri which is although considered to be the ground-breaking example as far as Artificial Intelligence, this is accepted as a prediction of 1980s but accomplished as a climatic stage of a long-term dream at Apple. Siri was one of the last projects of Steve Jobs. This is defining that period of time when Jobs was heavily involved with Apple while his health worsening gradually.
Tracing back the origin of voice assistant to the year 1980s, this was the fruit of Apple’s R&D initiative that took place after he left the company. With regards to Siri, there is although a very famous story in the second half of the 1980s, John Sculley, former Apple’s CEO had commissioned George Lucas to create a video what he explained it to be “Knowledge Navigator”.
But the true existence can be taken into consideration from 2003, hence the history can be divided into pre- 2003 and post 2003.
Progress of Siri in 2003: It all started with the most important incident when DARPA, A U.S. government Agency awarded a 150M$ contract in order to create a virtual Assistant that gain profound knowledge from watching people working. As per cultofmac, it is defining that period of time when DARPA was designed to aid military commanders in order to handle irresistible amount of data that they received.
Next came CALO, standing for Cognitive Assistant that Learns and Organizes into the picture, where SRI International was leading non-profit R&D and was approached by DARPA to create a five-year, 500 investigation which was considered to be the largest artificial intelligence undertaking in history. Adam Cheyer, an Engineer spent a considerable amount of his time working on Virtual Assistant simultaneous to the project called Vanguard. Dag Kittlaus, one of the General Managers at Motorola felt so much fascinated after seeing the prototype of the Vanguard assistant, that he quit Motorola.
What brought Siri into existence in 2008?
About half a decade later, Kittlaus and Cheyer, along with a third co-founder named Gruber, managed to gather $8.5 million in investment to start a company. The intention was to merge the Vanguard project with the best parts of CALO. This came after the important step taken by SRI International of spinning off a startup, which came to be known as "Siri", a phonetic version of SRI.
Apple’s acquisition to Siri
Then came the remarkable day when Siri was launched into the App Store in the earlier part of 2010 which was although connected to a wider variety of web services which enabled them to consume data with the lot of web services. Although few tech savvy consider that version to be quite intuitive rather than Current Siri which is a part of the App Store.
A week after the launch of Siri app, Steve Jobs brought the startup with the amount of $200 million. Currently, Apple is trying hard to improve the feature. Despite of the fact that only one of three co-founders are still working at the company, two champions of iOS, Steve Jobs and Scott Forstall are no longer.
Siri first ensured appearance on iOS app that has enabled users in order to ask natural language that was examined with the help of company’s network service with the ultimate answer encompassing from purchasing movie tickets to making dinner appointments.
With the magnificent processing power of iPhone 4S started in the year 2011 to do something beyond Siri, transforming the app into deeply embedded service. As far as speech recognition is concerned, Apple has not conformed till 2011 technology conference when Nuance’s CEO confirmed the relationship. Siri’s apt in order to understand spoken language has been supplied by Nuance Communication. This is arguably the most advanced speech recognition company at the global level. What makes it speech recognition advanced, is the utilisation of sophisticated machine learning techniques which includes convolutional neural networks and long short-term memory.
The company’s voice-activated personal assistant assists the user from everything from scheduling meeting, speed dialling as well as searching for directions. Apple users must be familiar with Siri, are versed off with male as well as female variations of Siri and distinct versions across the world but may be curious enough to know the real actors behind these voices?
Jon Briggs who is the voice of The Weakest Link was the first British male voice for Siri. American users must be aware of Susan Bennett– American female Siri and the voice of Delta Airlines. In the portion of Australia, Karen Jacobsen is famous as “GPS girl”.
Features and functionalities
It would be a big mistake to consider Siri a mere voice control system; it is versed in context and can easily understand relationships. For instance, the user can ask Siri to call his wife's phone, and Siri knows who the wife is and which phone number to dial. Siri can deliver iMessages, SMS or e-mail to friends, family and co-workers; whichever way the user prefers to communicate, Siri makes it convenient to stay in touch. Apart from that, it can set a timer, check the weather, check stocks, do conversions and even solve mathematics problems.
It can even take a picture or a selfie of the user. But how do you get Siri, and how do you secure it before you start using it? In most cases Siri works right out of the box, while in some cases users need to enable it first. There are some settings that are essential to configure in order to make Siri secure: since Siri is capable of bypassing the user's PIN lock to reach contacts and data, all the options should be checked so the user can pick the most suitable combination of convenience and security.
For example, the user can increase or decrease the brightness, and can designate relationships at the beginning; once a relationship is established, the user can refer to relationships instead of names.
Scheduling and reminders
The main motive behind the design of Siri is to get things done, and part of its job description is to create and update to-do lists in the Reminders app on iPhone, iPad or iPod touch. It can schedule or cancel a meeting or tomorrow's appointments, set location-aware reminders, find the date and day of the week of a holiday, set or delete alarms, check the number of days between two dates, and find the time in a particular city such as Sydney.
From the latest celebrity gossip to answering all those nagging math questions, Siri can search Google, Yahoo, Bing and WolframAlpha. Apple currently searches Google, Yahoo or Bing for general information and WolframAlpha for computational knowledge, and no typing into a web search box is required. Siri can help the user find a synonym for a particular word, find photos, search Twitter, identify pictures of friends and apps, and even search for Word, PDF or PowerPoint documents in the user's documents and downloads.
Rather than fumbling with the iPhone while driving, voice turn-by-turn navigation and directions have become incredibly convenient with Apple Maps. Siri can be used to start the directions, and the user then gets the actual navigation spoken aloud. The step-by-step approach:
- Summon Siri while driving
- Issue a command using language such as: "Give me directions to (location)", "Give me directions to (city)", or "Give me directions to (address)"
As per Lincolnshire, the application can entertain the user by telling a joke, give a sweet answer to "How am I looking?", and enable story-time as well. Apart from these, it can give the user sports updates, find movie times and locations, and identify the name of the song being played in the living room. The application is even intelligent enough to give the synopsis of a movie showing at a given time.
One of the most important changes in iOS 10 is Siri's ability to work with third-party apps. To keep control of the experience, Apple chose to open the commands initially to only six types of third-party apps:
- Messaging
- Audio/video calling
- Sending and receiving payments
- Searching photos
- Starting workouts
- Booking rides
There are already a couple of third-party apps that have adopted the new Siri SDK. Hence, voice commands can be used to deliver WhatsApp messages, request an Uber or send money via Square Cash.
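The SiriKit code itself lives in Swift app extensions, but the underlying idea is easy to model. The Python sketch below is purely a conceptual toy, not Apple's API: every name in it is made up for illustration. It shows how a voice request, once parsed into one of the permitted domains plus some slots, is routed to whichever handler a third-party app has registered for that domain.

```python
# Toy model of intent routing; domains and handler names are illustrative, not Apple's API.
handlers = {}

def register(domain):
    """Let a third-party app claim one of the permitted intent domains."""
    def wrap(func):
        handlers[domain] = func
        return func
    return wrap

@register("messaging")
def send_message(recipient, text):
    return f"Message to {recipient}: '{text}' handed to the messaging app"

@register("ride_booking")
def book_ride(destination):
    return f"Ride to {destination} requested from the ride-hailing app"

def handle_request(domain, **slots):
    # The assistant parses speech into a domain plus slots, then defers to the registered app.
    if domain not in handlers:
        return "Sorry, no app has registered for that kind of request"
    return handlers[domain](**slots)

print(handle_request("messaging", recipient="Sam", text="Running late"))
print(handle_request("ride_booking", destination="the airport"))
```

The design point this captures is that the assistant owns the speech recognition and parsing, while the third-party app only ever sees a structured request in a domain it explicitly signed up for.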
Important milestones in the evolution of Siri
Siri was re-introduced with a new interface in iOS 5, although some users felt that Siri no longer enjoyed the same level of partner integration it had had as an independent app.
With iOS 6, Siri became available on the iPad and gained the ability to serve up information about sports, restaurants and movies. It could also open apps, post Twitter or Facebook updates, and hook into Apple's new Maps app for turn-by-turn navigation.
In iOS 7, Siri got a facelift to match the new design language, along with access to settings, higher-quality voices, the ability to display tweets from Twitter, and tie-ins to Wikipedia.
In iOS 8, Siri gained the ability to listen for the so-called "Hey Siri" phrase the moment the device was plugged in. That made it convenient to use while driving or cooking in the kitchen, without having to press the Home button for activation.
In iOS 9, Apple introduced proactive features to make Siri contextually aware and tried hard to make the Hey Siri feature even better.
In iOS 10, the iPhone voice assistant's voice search improved, and Siri was able to control many more apps: sending messages with third-party apps, asking for a ride, searching YouTube on Apple TV, more expansive searches for movies within specific apps, and Siri on the Mac with multitasking.
How will Siri be embedded in iOS 11?
In iOS 11, Siri has learned to sound more like an actual human, although this is considered to be just the beginning of what the new operating system brings to iPhones and iPads. Apple CEO Tim Cook gave an overview of iOS 11 during Apple's Worldwide Developers Conference 2017 keynote, where sweeping new upgrades to Siri were previewed, along with a redesigned Control Centre and a brand-new app that lets the user receive and send money with contacts through iMessage.
Apart from that, the assistant is expected to look a bit different, with a new visual interface that can give multiple results for a request. In addition, it is expected to translate phrases on request from English into Chinese, French, Italian, German and Spanish. The Apple assistant is expected to become more predictive, and SiriKit will be expanded to include more robust integrations with third-party services.
As more innovations shift ground from the other giants in Apple's favour, Siri is expected to survive for the long run. It is likewise expected to extend its tentacles into areas that have not yet been predicted.
According to The History of Computing Project, the prototype of the first microcomputer was introduced by the aptly named Micro Computer Inc., Los Angeles, in 1968. ARPANET, a defense contractors' information exchange and the precursor of the Internet, was born a year later. Commercial microcomputers (Apple, Commodore, Tandy, Sinclair, and Texas Instruments) appeared in 1977. Apple Computer introduced the first graphical interface with the Macintosh; Microsoft followed with the first version of Windows in 1985. The Internet evolved from ARPANET over a period of 18 years and, by 1987, it was a world-wide network. By 1990 it was beginning to appear in small businesses, usually in text mode. The first well-known microcomputer software applications were the VisiCalc spreadsheet and the word processors Applewriter and WordStar, all dating to the 1978–1979 period.
A few small businesses used computers before the micros appeared, but primarily in professional applications rather than as business tools. Minicomputers like the Honeywell (used in engineering) and the Wang (a dedicated word processor much used by law-firms and here and there by a successful author) were in the small business price range. Since then the three related strands of computing—hardware, software, and networks—have produced something of an avalanche of change in business administration and communications, every year bringing changes. Not surprisingly, four months before 2006 began, PC Magazine published a forecast entitled "2006: The Year Everything Changes." More or less the same theme has been sounded every year since 1980. But changes in computing and related software applications have shifted toward cell-phone-sized devices. In the traditional areas of office computing, the emerging issues of the mid-2000s are 1) centralization and decentralization: should the information technology (IT) staff have more or less control; 2) renewal or adaptation: should aging applications be brought up to date or should the business intelligently integrate old and new and save money; and 3) Web-related expansion and exploitation.
Small business has taken an active part both in the use and provision of computer applications. Once computers became affordable, they have been widely deployed in small business and, whether stand-alone or networked, have provided much the same administrative support service they do in larger enterprises. Small businesses have also participated actively in providing computer services, the production of custom software, the writing of such software for their own operations, in consulting with clients and systems integration, and in Web-consulting and Web-page design and development. By the very nature of the small business environment, small operations have found it easy to adapt and to respond rapidly to change in what was a dynamic environment.
All computers run under the control of operating system software (OS) designed for the hardware platform. The OS provides the basic environment in which everything takes place. Windows is the most widely-used OS on small computers, followed by Apple's Mac OS; only a small minority of small computers run on Unix, developed in 1969 at Bell Laboratories, or its derivatives, e.g., Linux. The choice of operating systems in small businesses is often driven by the type of work done and/or the operating systems used by clients. Many operations based on the graphic arts use Macintosh computers; in other cases the need easily to exchange data with clients may dictate choice of the OS. All else being equal, small businesses will tend to use the most cost-effective system in-house, typically a Windows-based or a Macintosh system.
Word processing for written communications, spreadsheets for analysis, databases for inventory control, bookkeeping software for accounting, and software for tax preparation have become reasonably priced for even small businesses that have only one computer. Payroll software has now emerged for smaller operations too, sometimes free-standing and sometimes as extensions of popular bookkeeping packages. In the mid-2000s, most small businesses were computerized and, in addition, enjoyed data management at levels of sophistication unimaginable in the mid-1990s.
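To make "databases for inventory control" concrete, the sketch below uses Python's built-in sqlite3 module, chosen purely as an illustration; the product names, quantities and file name are made up. It shows the kind of record-keeping even a one-computer business can do: store stock levels and flag items that have fallen to their reorder point.

```python
import sqlite3

# Open (or create) a small inventory database in a local file.
conn = sqlite3.connect("inventory.db")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS items (
        sku           TEXT PRIMARY KEY,
        name          TEXT NOT NULL,
        quantity      INTEGER NOT NULL,
        reorder_level INTEGER NOT NULL
    )
""")

# Hypothetical stock records, for illustration only.
cur.executemany(
    "INSERT OR REPLACE INTO items VALUES (?, ?, ?, ?)",
    [("A100", "Toner cartridge", 4, 5),
     ("B200", "Copy paper (box)", 12, 6),
     ("C300", "Shipping labels", 2, 10)],
)
conn.commit()

# Flag anything at or below its reorder level.
for sku, name, qty in cur.execute(
        "SELECT sku, name, quantity FROM items WHERE quantity <= reorder_level"):
    print(f"Reorder {name} ({sku}): only {qty} left")

conn.close()
```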
Computer-assisted software development, design, and manufacturing systems (CAS, CAD, and CAM) are perhaps the best-known examples of professional software. Such systems, however, are also available for just about any professional activity that is based on symbol manipulation, data storage, and data processing. The Apple Macintosh, an early entrant into the graphical environment, continues to dominate graphic arts operations. Computer-based page design and typesetting packages have become affordable and are widely used in the small organization. Virtually all medical practices use computer-based patient scheduling and billing systems; the goal of completely automated and digitalized patient record-keeping, however, is still in the future; systems are being installed here and there but are not yet widely used.
The introduction of computer faxes and especially e-mail systems has revolutionized the way that businesses communicate with one another and employees interact within the company. Long-distance telephone costs and postage costs are saved in the process, and faster communications also speed up decision-making. Of greatest importance, perhaps, for the small business is its ability to communicate with potential customers through its own Web-site. Web-based marketing is very widespread.
Many small business owners have embraced computers as tools in doing business—and have done so early enough so that at present, in many places, hardware and applications both are becoming old. Amanda Kooser, writing in Entrepreneur, summed up the situation as follows: "A recent report by the Business Performance Management Forum took a look at this neglected issue [obsolete programs]. They surveyed a cross section of businesses and found more than 70 percent of respondents were convinced there were redundant, deficient or obsolete applications being maintained and supported on their networks. Forty percent estimated unwanted programs consumed more than 10 percent of their IT budgets. That can add up to a lot of unnecessary costs." IT in this context stands for Information Technology. Kooser recommends that companies conduct disciplined IT audits followed by systematic culling of old technology and its replacement with more modern software.
Another view is taken by Joe Tedesco, writing in Database. Tedesco's title signals the strategy: "Out With The Old? Not So Fast." Tedesco asks: "Is it time, simply, to buy new stuff? Again?" He goes on to spell out the downside: "Investing anew in software is not an especially appealing option, for a variety of reasons. How can [companies] leverage proven tools for new challenges such as increased functionality, heightened security and better data and subject-matter management? More and more companies are finding new value in the software already in use in their organizations."
These two views—replace the old or rationalize the old—have a counterpart in the tension between centralizing systems that have grown up throughout the company without coordination, on the one hand, and creating order by networking or rearranging existing systems into an arrangement that is easier for computer staffs to oversee and maintain, on the other.
These kinds of arguments, common in the trade press, may signal that computer use is beginning to mature in organizations and that, at least in the immediate future, much more attention will be paid to cost-effective management of existing resources and cautious acquisition of the new.
Despite conflicting views, peer pressure and anxiety often influence buyers, not least small business buyers. In an article for Fortune, Joel Dreyfuss wrote as follows: "If you don't have the latest and (always) greatest software and hardware on your business computers, your vendors and employees can make you feel that you're just one step away from quill pens and parchment. The truth is that most small businesses, and consumers for that matter, get cajoled into upgrades that give them more headaches than benefits."
Dreyfuss suggested that small business owners have employees figure out the cost of installation, debugging, and training associated with new computer equipment before consenting to a purchase. He also mentioned that Usenet discussion groups and technical bulletin boards on the Internet can provide valuable analysis of new products. "Seeing the comments about installation problems, upgrade issues, and reported incompatibilities with other products can cool the ardor of any technology fanatic," he noted.
Another factor for small business owners to keep in mind is that a variety of computer applications are available online over the Internet. A number of companies have established small business portals on the Internet to give companies access to software and services—such as payroll processing, legal services, online banking, or assistance in building a Web site for E-commerce. In addition, application service providers (ASPs) offer companies the opportunity to test and use software over the Internet without having to purchase it. These options may eventually reduce the cost and improve the accessibility of computer applications for small businesses.
Cohen, Alan. "Within Striking Distance: Small Business Web Portals Struggle to Attract Customers with the Right Mix of Content and Services." FSB. 1 April 2001.
Cullen, Cheryl Dangel. "Software for Designers: What Do They Want? What Are They Getting?" Digital Output. August 2005.
Dreyfuss, Joel. "The Latest and Greatest Disease: Even Big Companies, with Pricey Evaluation Staffs, Find It Hard to Resist the Allure of the Bigger and Better Hardware and Software Products. But Do You Need All Those Newfangled Features?" Fortune. 16 October 2000.
Kooser, Amanda C. "Spring cleaning: Old Software Draining Your IT Budget? Here's How to Clean Up." Entrepreneur. May 2005.
Loehr, Mark. "Right Size IT." Database. May 2005.
Miller, Michael J. "2006: The Year Everything Changes." PC Magazine. 9 August 2005.
Tedesco, Joe. "Out With The Old? Not So Fast." Database. May 2005.
The History of Computing Project. Available from http://www.thocp.net. Retrieved on 26 January 2006.
Hillstrom, Northern Lights
updated by Magee, ECDI
About Ritchey-Chretien Telescopes…
In 1910, American optician & astronomer George Willis Ritchey & French astronomer Henri Chretien designed a specialized Cassegrain that would later become the telescope of choice for many observatories and professionals around the world. The Ritchey-Chretien astrograph has many benefits that make this design appealing to anyone who is serious about astro-photography or imaging. Here are a few of those benefits:
- Good-bye Coma: An RC has virtually no coma (stars look like little comets around the edges of the field), which means there will be greater image quality across a wider field of view.
- No Chromatic Aberration: Because a Ritchey-Chretien does not use lenses or corrector plates, the design does not suffer from chromatic aberrations, or false color. If you've ever looked through an achromatic refractor (non-APO), you will have seen chromatic aberration.
- No Spherical Aberration: The use of hyperbolic mirrors for both the primary and secondary removes spherical aberration from this optical system. Spherical aberration is an optical effect in which light rays do not all come to focus at the same point, resulting in an image that is never quite in perfect focus.
10" f/8 Ritchey-Chretien Astrograph Highlights…
Optical Highlights: This Third Planet Optics (TPO) Ritchey Chretien telescope has 10" (250mm) of aperture and a focal length of 2000mm. The concave hyperbolic primary and convex hyperbolic secondary are made from low expansion quartz, and finished with a scratch-resistant highly reflective 99% dielectric coating for great contrast. The primary mirror is fixed in place in a metal mirror cell, and the secondary resides in a metal housing that can be collimated.
Multiple Knife-Edge Baffle System: The computer designed and optimized baffle system in the TPO Ritchey-Chretien works wonders at keeping stray light at bay. Inside the tube are nine light baffles, and the primary and secondary mirrors are baffled as well.
Cooling Fans: The 10" Ritchey has three small cooling fans in the rear cell that help cool the inside of the tube down and bring it to ambient temperature so that you can start imaging sooner! The fans are powered by an external battery pack. This battery holder accepts 8-AA batteries (sold separately).
3.3" Dual Speed Fully-Rotatable Crayford Focuser You'll love the 3.3" 1:10 Dual Speed Crayford focuser that comes standard on this RC. The dual knobs allow you choose the speed with which you focus. One turn of the larger knob equals ten turns of the smaller knob, so minute adjustments…when you are "almost there"…are easy to do with this high quality focuser. A 2" & 1.25" compression ring adapter is also included so you can use both 2" & 1.25" eyepieces. Two spacers are included as well, which allows you to adjust the focus position for different cameras with various back focus requirements.
A Fixed Primary Eliminates Image Shift Schmidt-Cassegrain & Mak-Cassegrain telescopes achieve focus by moving the primary mirror back and forth inside the optical tube assembly, and this movement can cause image shift. While manufacturers have done a pretty good job of minimizing image shift on their telescopes, a moveable mirror makes it almost impossible to eliminate it completely. The Ritchey-Chretien has a primary that is fixed in place, removing the possibility of image shift and also the job of collimating the primary.
Two Dovetails & Finderscope Base Included Talk about versatility! A Losmandy-style dovetail is attached to the bottom and top of the OTA. The dovetail(s) can be removed if you wish to use mounting rings (sold separately) instead. While a finderscope is not included, the OTA comes standard with a finderscope base that will accept Vixen-style brackets (if you want an optical finder) or most red dot finders.
- Additional Information
|SKU|OS-10RC-M|
|Manufacturer|TPO|
|Telescope Series|TPO Ritchey-Chretien|
|Optical Design|Ritchey-Chretien|
|Mount Type|None - Optical Tube Only|
|Warranty|2 Year Warranty|
|Telescope Aperture|10"|
|Telescope Focal Length|2000mm|
|Telescope Focal Ratio|f/8|
|Length of Optical Tube|29"|
|Optical Tube Weight|34.5 lbs.|
|Optical Coatings|99% Dielectric|
|Tube Color or Finish|Gloss White|
|Limiting Stellar Magnitude|14.5|
|Highest Useful Magnification|300x|
|Diagonal Included?|No - Sold Separately|
|Finder Included|None|
|OTA Mount Type|Losmandy-Style Dovetail|
- Included Items
- TPO 10" f/8 Ritchey-Chretien OTA - White Aluminum Tube
- (2) M117 X 25mm Spacers
- M117 X 50mm Spacer
- Top & Bottom Losmandy Dovetails
- TPO 3.3" Two Speed Crayford Focuser w/2" and 1.25" Adapters
- Vixen Finder Shoe
- Battery Box for Fans
- Dust Caps
- Questions & Answers
Product Questions

Question by: John Dutton on May 24, 2015 10:18:00 AM
Answer by: Rod Gallardy (Admin) on May 24, 2015 4:04:00 PM
Hello John, Thank you for your question. I would say that the best case for this is the Pelican 1660 Case. This case is a hard waterproof case which will protect the telescope well. Keep in mind that you might need to pack the focuser in a separate part of the case in order for this to fit.

Question by: Karl Zimmerman on Oct 8, 2016 4:45:00 PM
Will the Teleskop Service Flattener work with this scope and focuser or will I need to add some adapters to make the connection?
Answer by: Chris Hendren (Admin) on Oct 28, 2016 2:39:00 PM
The TS-RCKORREKTOR is 2" in diameter and will slide right into the focuser like a 2" nosepiece. You will need M48 threaded adapters (2" OD) to get the distance from the flattener to your camera's sensor to 109mm +/-2mm. Please contact us if you have questions on how to do this with your camera.

Question by: vince allen on Aug 22, 2015 6:35:00 PM
Should I be aware of any issues mating this scope with an Orion Atlas EQ-G (manual states 40 lb limit)?
Answer by: Chris Hendren (Admin) on Aug 23, 2015 12:51:00 PM
This telescope has D-series dovetails (4" wide with 2.96" rails), so you will need to have the upgraded saddle from Orion or ADM to accept the wider dovetail. You will also need a minimum of 3x 11lb counterweights and will need to shield the telescope from the wind as you are very close to the upper limit of what the mount can carry.

Question by: andy on Aug 20, 2015 3:58:00 PM
I don't suppose there's any chance at ordering this OTA without a stock focuser is there? I really don't want to pay for a 3" focuser I'm not going to use. I have a thing for Feathertouches and it seems a waste if it could be avoided. What is the backfocus length of this telescope, and what would it be with a .75X or a .5X focal reducers? Thanks.
Answer by: Chris Hendren (Admin) on Aug 20, 2015 4:24:00 PM
The back focus of this scope at f/8 is 235mm from the rear M117 thread on the rear of the scope. There are no f/5 focal reducers that work with this scope, and the 0.75x TPO is a reducer but not a corrector so it will only work with sensors up to 15mm diagonal. The AP CCDT67 is a good option and can be used at 0.67x-0.75x with image circles of up to 29mm at 0.67x. Reducers almost always pull the focal point inward, usually based on the optical distance between the lens and the focal plane divided by the change in magnification and then that total subtracted from the current optical distance. A CCDT67 reducer at 0.67x would have an optical spacing of 101mm, and so (101/0.67x) - 101 = ~49mm, meaning that the focal plane would move inward by ~49mm with this configuration compared to f/8 spacing.
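To make the spacing arithmetic in that last answer easy to reuse, here is a small Python sketch of the same calculation. The 101mm spacing, 0.67x factor, and 2000mm/250mm f/8 figures are the ones quoted on this page; anything else you plug in is your own assumption, and the result is only the idealized first-order estimate described in the answer, not a substitute for the reducer manufacturer's documentation.

```python
def reducer_shift(optical_spacing_mm: float, reduction: float) -> float:
    """How far the focal plane moves inward (mm) for a reducer placed
    optical_spacing_mm in front of the sensor, per the rule of thumb above."""
    return optical_spacing_mm / reduction - optical_spacing_mm

def effective_setup(reduction: float,
                    focal_mm: float = 2000.0,
                    aperture_mm: float = 250.0) -> tuple:
    """Effective focal length and focal ratio for this 10-inch (250mm) f/8 OTA."""
    new_focal = focal_mm * reduction
    return new_focal, new_focal / aperture_mm

print(reducer_shift(101, 0.67))   # ~49.7 mm inward, the "~49mm" figure in the answer
print(effective_setup(0.67))      # (1340.0, 5.36) -> roughly f/5.4
```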
- Support / Downloads
How the internet is changing language
'To Google' has become a universally understood verb and many countries are developing their own internet slang. But is the web changing language and is everyone up to speed?
In April 2010 the informal online banter of the internet-savvy collided with the traditional and austere language of the court room.
Christopher Poole, founder of anarchic image message board 4Chan, had been called to testify during the trial of the man accused of hacking into US politician Sarah Palin's e-mail account.
During the questioning he was asked to define a catalogue of internet slang that would be familiar to many online, but which was seemingly lost on the lawyers.
At one point during the exchange, Mr Poole was asked to define "rickrolling".
"Rickroll is a meme or internet kind of trend that started on 4chan where users - it's basically a bait and switch. Users link you to a video of Rick Astley performing Never Gonna Give You Up," said Mr Poole.
"And the term "rickroll" - you said it tries to make people go to a site where they think it is going be one thing, but it is a video of Rick Astley, right?," asked the lawyer.
"He was some kind of singer?"
"It's a joke?"
The internet prank was just one of several terms including "lurker", "troll" and "caps" that Mr Poole was asked to explain to a seemingly baffled court.
But that is hardly a surprise, according to David Crystal, honorary professor of linguistics at the University of Bangor, who says that new colloquialisms spread like wildfire amongst groups on the net.
"The internet is an amazing medium for languages," he told BBC News.
"Language itself changes slowly but the internet has speeded up the process of those changes so you notice them more quickly."
People using word play to form groups and impress their peers is a fairly traditional activity, he added.
"It's like any badge of ability, if you go to a local skatepark you see kids whose expertise is making a skateboard do wonderful things.
"Online you show how brilliant you are by manipulating the language of the internet."
One example of this is evident in Ukraine, where a written variation of the national tongue has sprung up on internet blogs and message boards called "padronkavskiy zhargon" - in which words are spelled out phonetically.
It is often used to voice disapproval or anger towards another commentator, says Svitlana Pyrkalo, a producer at the BBC World Service Ukrainian Service.
"Computer slang is developing pretty fast in Ukraine," she said.
The Mac and Linux communities even have their own word for people who prefer Microsoft Windows - віндузятники (vinduzyatnyky literally means "Windowers" but the "nyky" ending makes it derogatory).
"There are some original words with an unmistakably Ukrainian flavour," said Ms Pyrkalo.
The dreaded force-quit process of pressing 'Control, Alt, Delete' is known as Дуля (dulya).
"A dulya is an old-fashioned Ukrainian gesture using two fingers and a thumb - something similar to giving a finger in Anglo-Saxon cultures," she said.
"And you need three fingers to press the buttons. So it's like telling somebody (a computer in this case) to get lost."
For English speakers there are cult websites devoted to cult dialects - "LOLcat" - a phonetic and deliberately grammatically incorrect caption that accompanies a picture of a cat, and "Leetspeak" in which some letters are replaced by numbers which stem from programming code.
"There are about a dozen of these games cooked up by a crowd of geeks who, like anybody, play language games," said Professor Crystal.
"They are all clever little developments used by a very small number of people - thousands rather than millions. They are fashionable at the moment but will they be around in 50 years' time? I would be very surprised."
For him, the efforts of those fluent in online tongues is admirable.
"They might not be reading Shakespeare and Dickens but they are reading and cooking up these amazing little games - and showing that they are very creative. I'm quite impressed with these movements."
One language change that has definitely been overhyped is so-called text speak, a mixture of often vowel-free abbreviations and acronyms, says Prof Crystal.
"People say that text messaging is a new language and that people are filling texts with abbreviations - but when you actually analyse it you find they're not," he said.
In fact only 10% of the words in an average text are not written in full, he added.
They may be in the minority but acronyms seem to anger as many people as they delight.
Stephen Fry once blasted the acronym CCTV (closed circuit television) for being "such a bland, clumsy, rhythmically null and phonically forgettable word, if you can call it a word".
But his inelegant group of letters is one of many acronyms to earn a place in the Oxford English Dictionary (OED).
The secret of their success is their longevity.
"We need evidence that people are using a word over a period of time," said Fiona McPherson, senior editor in the new words group at the OED.
She says the group looks for evidence that a word has been in use for at least five years before it can earn its place in the dictionary.
Such evidence comes in the form of correspondence from the public and trawling through dated material to find out when a term first started appearing.
Hence TMI (Too Much Information) and WTF (you may wish to look that one up for yourself) are in, while OMG (Oh My God) has yet to be included in the quarterly dictionary updates.
"Some people get quite exercised and say, 'do these things belong in our language?'," said Ms McPherson.
"But maybe this has always happened. TTFN [ta ta for now] is from the ITMA (It's That Man Again) radio series in the 1940s."
There is no doubt that technology has had a "significant impact" on language in the last 10 years, says Ms McPherson.
Some entirely new words like the verb 'to google', or look something up on a search engine, and the noun 'app', used to describe programmes for smartphones (not yet in the OED), have either been recently invented or come into popular use.
But the hijacking of existing words and phrases is more common.
Ms McPherson points out that the phrase "social networking" debuted in the OED in 1973. Its definition - "the use or establishment of social networks or connections" - has only comparatively recently been linked to internet-based activities.
"These are words that have arisen out of the phenomenon rather than being technology words themselves," she added.
"Wireless in the 1950s meant a radio. It's very rare to talk about a radio now as a wireless, unless you're of a particular generation or trying to be ironic. The word has taken on a whole new significance."
For Prof Crystal it is still too early to fully evaluate the impact of technology on language.
"The whole phenomenon is very recent - the entire technology we're talking about is only 20 years old as far as the popular mind is concerned."
Sometimes the worst thing that can happen to a word is that it becomes too mainstream, he argues.
"Remember a few years ago, West Indians started talking about 'bling'. Then the white middle classes started talking about it and they stopped using it.
"That's typical of slang - it happens with internet slang as well." | 1 | 2 |
Local area network
|Computer network types
by spatial scope
A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus or office building and has its network equipment and interconnects locally managed. By contrast, a wide area network (WAN) not only covers a larger geographic distance, but also generally involves leased telecommunication circuits or Internet links. An even greater contrast is the Internet, which is a system of globally connected business and personal computers.

The increasing demand and use of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave a good indication of the situation.

A number of experimental and early commercial LAN technologies were developed in the 1970s. Cambridge Ring was developed at Cambridge University starting in 1974. Ethernet was developed at Xerox PARC in 1973–1975, and filed as U.S. Patent 4,063,220. In 1976, after the system was deployed at PARC, Robert Metcalfe and David Boggs published a seminal paper, "Ethernet: Distributed Packet-Switching for Local Computer Networks". ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977. It had the first commercial installation in December 1977 at Chase Manhattan Bank in New York.

The development and proliferation of personal computers using the CP/M operating system in the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was generally to share storage and printers, which were both expensive at the time. There was much enthusiasm for the concept and for several years, from about 1983 onward, computer industry pundits would regularly declare the coming year to be, “The year of the LAN”.

In practice, the concept was marred by proliferation of incompatible physical layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare which provided even-handed support for dozens of competing card/cable types, and a much more sophisticated operating system than most of its competitors. NetWare dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups.

Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager and IBM's LAN Server - but none of these was particularly successful.
During the same period, Unix workstations were using TCP/IP networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking, and the TCP/IP protocol has replaced IPX, AppleTalk, NBF, and other protocols used by the early PC LANs.

Early LAN cabling had generally been based on various grades of coaxial cable. Shielded twisted pair was used in IBM's Token Ring LAN implementation, but in 1984, StarLAN showed the potential of simple unshielded twisted pair by using Cat3 cable, the same simple cable used for telephone systems. This led to the development of 10BASE-T (and its successors) and structured cabling which is still the basis of most commercial LANs today.

Many LANs use wireless technologies that are built into smartphones, tablet computers and laptops. In a wireless local area network, users may move unrestricted in the coverage area. Wireless networks have become popular in residences and small businesses, because of their ease of installation. Guests are often offered Internet access via a hotspot service.

Network topology describes the layout of interconnections between devices and network segments. At the data link layer and physical layer, a wide variety of LAN topologies have been used, including ring, bus, mesh and star. At the higher layers, NetBEUI, IPX/SPX, AppleTalk and others were once common, but the Internet Protocol Suite (TCP/IP) has prevailed as a standard of choice.

Simple LANs generally consist of cabling and one or more switches. A switch can be connected to a router, cable modem, or ADSL modem for Internet access. A LAN can include a wide variety of other network devices such as firewalls, load balancers, and network intrusion detection. Advanced LANs are characterized by their use of redundant links with switches using the spanning tree protocol to prevent loops, their ability to manage differing traffic types via quality of service (QoS), and to segregate traffic with VLANs.

LANs can maintain connections with other LANs via leased lines, leased services, or across the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).
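As a small, concrete illustration of the "limited area" idea, the sketch below uses Python's standard ipaddress module to check whether two hosts are numbered out of the same IP subnet, which is how a single LAN or VLAN segment is usually addressed. The addresses are invented examples, not anything taken from the article.

```python
import ipaddress

def same_segment(host_a: str, host_b: str, prefix: str) -> bool:
    """True if both hosts fall inside the given subnet (one LAN/VLAN segment)."""
    net = ipaddress.ip_network(prefix)
    return (ipaddress.ip_address(host_a) in net and
            ipaddress.ip_address(host_b) in net)

# Two hosts on a typical home LAN, versus one host on a different network.
print(same_segment("192.168.1.10", "192.168.1.42", "192.168.1.0/24"))  # True
print(same_segment("192.168.1.10", "10.0.0.7", "192.168.1.0/24"))      # False
```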
- Gary A. Donahue (June 2007). Network Warrior. O'Reilly. p. 5.
- Samuel F. Mendicino (1970-12-01). "Octopus: The Lawrence Radiation Laboratory Network". Rogerdmoore.ca. Archived from the original on 2010-10-11.
- "THE LAWRENCE RADIATION LABORATORY OCTOPUS". Courant symposium series on networks. Osti.gov. 29 Nov 1970. OSTI 4045588.
- "A brief informal history of the Computer Laboratory". University of Cambridge. 20 December 2001. Archived from the original on 11 October 2010.
- "Ethernet Prototype Circuit Board". Smithsonian National Museum of American History. Retrieved 2007-09-02.
- "Ethernet: Distributed Packet-Switching For Local Computer Networks". Acm.org. Retrieved 2010-10-11.
- "ARCNET Timeline" (PDF). ARCNETworks magazine. Fall 1998. Archived from the original (PDF) on 2010-10-11.
- Lamont Wood (2008-01-31). "The LAN turns 30, but will it reach 40?". Computerworld.com. Retrieved 2016-06-02.
- "'The Year of The LAN' is a long-standing joke, and I freely admit to being the comedian that first declared it in 1982...", Robert Metcalfe, InfoWorld Dec 27, 1993
- "...you will remember numerous computer magazines, over numerous years, announcing 'the year of the LAN.'", Quotes in 1999
- "...a bit like the Year of the LAN which computer industry pundits predicted for the good part of a decade...", Christopher Herot
- Wayne Spivak (2001-07-13). "Has Microsoft Ever Read the History Books?". VARBusiness. Archived from the original on 2010-10-11.
- "Big pipe on campus: Ohio institutions implement a 10-Gigabit Ethernet switched-fiber backbone to enable high-speed desktop applications over UTP copper", Communications News, 2005-03-01,
As alternatives were considered, fiber to the desk was evaluated, yet only briefly due to the added costs for fiber switches, cables and NICs. "Copper is still going to be a driving force to the desktop for the future, especially as long as the price for fiber components remains higher than for copper."
- "A Review of the Basic Components of a Local Area Network (LAN)". NetworkBits.net. Retrieved 2008-04-08.
|Wikimedia Commons has media related to Local area network.|
Science fiction writers not only invent entirely new worlds inhabited by fantastic creatures, they also introduce new technologies and inventions — many times, centuries ahead of their time. Although they seemed far-fetched during the authors' lifetimes, many of the technologies are deeply ingrained in modern daily life — to the point that we take them for granted. Below is a list of inventions predicted by famous writers (name, novel and year of publication, the invention or technology they predicted, and when the prediction became reality).
Douglas Adams (1952-2001), Hitchhiker’s Guide to the Galaxy (1979), predicted electronic books; became a reality in early 1990s (first e-books and e-book readers introduced, most notably Amazon’s Kindle and Apple’s iPad)
Edward Bellamy (1850-1898), Looking Backward (1888), predicted credit cards; became reality in 1950 (Diner’s Club Card)
Ray Bradbury (1920-2012), Fahrenheit 451 (1953), predicted flat screen television; became reality in 1971 (first LCD flat screen televisions introduced)
Ray Bradbury (1920-2012), Fahrenheit 451 (1953), predicted earbud headphones; became reality in 2001 (Apple popularized earbuds with first generation iPods)
Arthur C. Clarke (1917-2008), 2001: A Space Odyssey (1968), predicted the computer tablet; became reality in the 1980s (Pencept Penpad introduced in 1983; Newton introduced by Apple in 1993)
Philip K. Dick (1928-1982), Ubik (1969), predicted artificial organs (“artiforgs”) that could be grafted into a human being to replace the organs that had failed. In 2011, surgeons in Sweden completed the first transplant of a synthetic trachea on a cancer patient.
E. M. Forster (1879-1970): The Machine Stops (1909), predicted the office cubicle; became a reality in late 1960s (Robert Propst, a designer for Herman Miller, introduced the Action Office II in 1967)
Aldous Huxley (1894-1963), Brave New World (1932), predicted test-tube babies; became reality in 1978 (first test-tube baby, Louise Joy Brown, was born in England)
Aldous Huxley, Brave New World (1932), predicted mood-enhancing drugs; became reality in 1950s (introduction of antidepressants, isoniazid and iproniazid, which were originally developed to treat tuberculosis)
Jules Verne (1828-1905), From the Earth to the Moon (1865), predicted lunar travel; became reality in 1969 (the Apollo 11 mission to the moon and back)
H. G. Wells (1866-1946), When the Sleeper Wakes (1899), predicted automatic doors; became reality in 1954 (automatic doors designed by Dee Horton and Lew Hewitt)
H. G. Wells, The World Set Free (1914), predicted the atomic bomb; became a reality in WWII: the Manhattan Project created the first atomic bombs, Little Boy and Fat Man, which were detonated over Japan in 1945.
H. G. Wells, predicted moving walkways in his novel A Story of the Days to Come (1897). Incidentally, Robert Heinlein (1907-1988) also featured moving walkways in his story, The Roads Must Roll (1940). The first walkway was designed for a project in Atlanta, Georgia in the early 1920s. Moving walkways became commonplace by the 1970s.
John Brunner’s (1934-1995) Stand on Zanzibar (1969), stands alone in for its extraordinary and eerie prescience: Brunner describes the future in 2010 — the world is overpopulated; the United States is plagued by terrorist attacks and school shootings, automobiles powered by rechargeable fuel cells, and a culture that encourages short-term, no-strings-attached relationships. And guess the name of Brunner’s fictional American president — President Obomi.
For further reading: Under the Covers and Between the Sheets by C. Alan Joyce (2009)
From: tdiaz-a(in_a_circle)-apple2-dotsero-org (Tony Diaz)
Subject: Apple II Sound & Music Frequently Asked Questions (FAQ)
Last-modified: August 21 2007
- 1 Apple II Sound & Music FAQ
- 1.1 An introduction to music and sound on computers.
- 1.2 8-bit music and sound
- 1.3 Types of sound files found on the IIgs
- 1.4 An introduction to sampling
- 1.5 Some basics on editing sounds.
- 1.6 AE Types of music files
- 1.7 A brief overview of SoundSmith style editors.
- 1.8 An Overview of MIDI
- 1.9 Technical Specs for the GS Ensoniq chip
- 1.10 About IIgs Stereo Cards
- 1.11 What about them other machines?
- 1.12 Notes:
- 1.13 What's this I hear about 3D sound?
- 1.14 The Apple II: It just keeps going and going and going....
Apple II Sound & Music FAQ
An introduction to music and sound on computers.
Music and sound have been a computerized pursuit since at least the 1960s, when enterprising hackers discovered that by programming the large mainframes of the time to do different operations, different tones could be generated on a common AM radio from the interference (this is still a problem today :-).
Early synthesizers developed at the time (known as Mellotrons) consisted of a huge bank of tape loops, with each key playing a different tape. Primitive analog tone generators were also in use. These early synthesizers first got wide industry exposure via Walter aka Wendy (never mind) Carlos' "Switched-On Bach" album. At this time (mid to late 60s), Robert Moog developed the direct ancestors of today's synthesizer. Moog's synthesizers were programmed via 'patch bays', wherein the user would connect a series of jacks in a specific configuration via patch cords to get a certain tone. This use of the word 'patch' for a sound setting on a synthesizer persists, despite that today a 'patch' is usually a data file stored on disk or in ROM.
The Moog's debut in a Top 40 song was Del Shannon's "Runaway". A Moog was used along with a tube-based analog synthesizer called a theremin in the Beach Boys' classic "Good Vibrations". The possibilities of synthesizers weren't really exploited until the onslaught of 70s 'art-rock' bands such as the Who, Supertramp, ELP (Emerson, Lake, and Palmer), Genesis, Yes, Pink Floyd and Rush. Synthesizers have continued to advance to the point where they are now the only instrument needed to make a typical Top 40 or rap album. This was foreseen somewhat by Boston, who included a "No Keyboards!" logo on one of their early albums despite the obvious inclusion of a Hammond organ on several songs.
Computer control of music developed somewhat later, however. Several companies in the early 1980s had competing systems for allowing electronic synthesizers to interface to computers and each other, Roland's "CV-Gate" system being among the most popular. Around 1983 or so, a group of companies developed the now ubiquitous MIDI (Musical Instrument Digital Interface) standard. It is now very difficult to find a synthesizer without MIDI capabilities, and all popular computers can be interfaced to MIDI instruments, including the Apple II.
The first development after MIDI was introduced was the "sequencer" program, a program which allowed the recording and playback of MIDI data streams, as well as sophisticated editing functions. This allowed perfect playback of songs every time, as well as more advanced functionality such as the ability to synchronize MIDI data with SMPTE (Society of Motion Picture and Television Engineers) time code, a fact which made it very simple to add MIDI-based music to television shows and theatrical films and synchronize to a resolution finer than 1 frame. SMPTE and MIDI were used heavily in the production of the soundtrack for the recent blockbuster "Jurassic Park" for example.
At about the same time as the first sequencers were arriving, computers began to get sound chips with some semi-decent capabilities. Machines such as the TI-99/4A and Atari 800 had chips capable of playing at least 3 independent tones at any one time. However, the tones were preset, usually to a square wave, which has very little musical interest. This went to the next step when a young engineer developed the SID sound chip for the Commodore 64 computer. The SID chip could play 3 tones at once [plus 1 channel devoted to 'white noise' percussive sounds], and each of the tones could be selected from a range of several waveforms. In addition, advanced effects such as "ring modulation" were available on this chip. The C=64 soon allowed many to compose some amazing tunes, but the best was yet to come.
The engineer who designed the SID went on to join a company called Ensoniq, where he designed the DOC (Digital Oscillator Chip) which powered the company's now legendary Mirage synthesizer. The Mirage was unique in that it was the first major synthesizer to offer sampling, wherein you could digitally record any sound you wanted, from trumpets to snare drums to water dripping, and use it as an instrument. Best of all, the DOC chip could play up to 32 samples at any one time, making it useful to emulate a whole orchestra with one Mirage. The DOC chip also powered Ensoniq's ESQ-1 and SQ-80 synthesizers.
Now, to get some Apple II-ish relevance. During the design of the Cortland (aka IIgs), Apple was planning on using a chip not unlike the one on the Mac II series. This chip played 4 samples at once, but was limited in its stereo capabilities (you got 2 samples on the left, and 2 on the right, and that's it) as well as overall flexibility (it's limited to 1 fixed sampling rate of 22,050 Hz). Luckily, Ensoniq sent a sample of the DOC chip to Apple, and it ended up in the hands of a music enthusiast working on the IIgs project. This engineer fought with management until they decided to use the DOC chip for the IIgs. However, up until nearly the last minute, the DOC and its 64k of RAM were to be an extra-cost feature, which would have killed the GS music software market dead. Luckily, price drops on components allowed the DOC to be standard, so all IIgs owners could hear great sound.
Back to generalized things, the next development was to combine sampling and sequencing software on capable computers. This resulted in the *Tracker genre on the Amiga, as well as Music Construction Set, Music Studio, and other programs on many platforms. These programs typically had a sequence file and a series of sample files used as instruments, with some notable exceptions (the *Tracker series on the Amiga had all-in-one 'modular' files, hence the name MOD).
8-bit music and sound
The 8-bit IIs are quite underpowered in the sound department compared to the IIgs. However, anyone who's played Dung Beetles or Sea Dragon knows that some pretty sophisticated stuff is still possible. The 8-bit sound normally consists simply of an ability for programs to make the speaker click. If a program toggles the speaker very fast, tones are generated. And using other techniques beyond the scope of this FAQ, you can even play digitized samples on the speaker, although the quality isn't very good unless you can somehow hook up external speakers. You can hear for yourself with Michael Mahon's Sound Editor 2.2, which is currently available from his web page at: http://members.aol.com/MJMahon/
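The "toggle the speaker fast enough and you get a tone" trick is easy to hear for yourself on a modern machine. The Python sketch below fakes the same effect in software: it flips between two output levels at the desired pitch and writes the result to a WAV file using nothing but the standard library. It illustrates the principle only; the real thing on an Apple II is a 6502 timing loop hitting the speaker softswitch, which is not shown here.

```python
import wave

RATE = 22050          # samples per second
FREQ = 440            # pitch of the tone in Hz
SECONDS = 2

frames = bytearray()
half_period = RATE // (2 * FREQ)      # samples between "clicks"
level = 0x30                          # start at the low level
for i in range(RATE * SECONDS):
    if i % half_period == 0:
        level = 0xD0 if level == 0x30 else 0x30   # toggle, like clicking the speaker
    frames.append(level)

with wave.open("square.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(1)        # 8-bit unsigned samples
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```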
There have also been a variety of sound expansion boards available for the 8-bit IIs, but the only one to really catch on was the venerable Mockingboard. The Mockingboard was available in several packages. The Mockingboard "A" was the base card, which added 6-voice music and sound synthesis to any slotted II. The Mockingboard "B" was a daughterboard that worked with the "A" and added speech synthesis capabilities. The Mockingboard "C" was essentially an "A" and "B" in one package. The later Mockingboard "D" had the same capabilities as the "C", but attached to the Apple IIc via the serial port.
Types of sound files found on the IIgs
Several types of sample files are used. Here are the most common.
|Type|Suffix|Filetype|Description|
|Raw|no std.|BIN|Contains only raw sample data. The auxtype is normally the sample rate divided by 51. (See section CA for more on why this is.)|
|ACE|.ACE|$CD|Contains raw sample data compressed with ACE, Apple's Tool029 sound compressor.|
|ASIF|no std.|$D8|Contains sample data plus additional data. Notable due to its use by SoundSmith.|
|AIFF|.AIFF|$D8|Interchange format popular on the Macintosh. Not used much on the IIgs.|
|HyperStudio|no std.|$D8|Contains raw or ACE compressed data plus|
|rSound|no std.|$D8|Resource fork contains one or more rSound and rResName resources. Used by HyperCard IIgs and the Sound CDev.|
An introduction to sampling
Sampling is conceptually simple; an incoming analog sound signal is converted to a digital number (0-255 on the IIgs). Getting good samples depends on a number of factors:
- Sampling rate. This is how often in samples per second the incoming signal is actually noticed and saved. In general, you want to have a sampling rate of twice the frequency of the highest pitch sound you intend to sample. (The reasoning behind this is known as the Nyquist Sampling Theorem). Compact discs sample at 44,100 Hz, which means they can accurately track signals up to 22,050 Hz, beyond the range of human hearing. Long-distance telephone calls are sampled at 8,000 Hz, since the characteristic part of human voices is generally from 1000-3000 Hz. If frequencies higher than or equal to half your sampling rate exist, they will manifest as distortion in the output sample. (A short worked example follows this list.)
- Stereo card quality and shielding (the Audio Animator makes the best samples of any card I've tried, by far).
- Input signal level (the higher the better, except that there is a threshold known as the 'clipping level' above which the sampler will be unable to track the signal. Analog tape recorders do something very similar).
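To put numbers on the sampling-rate point above, here is the twice-the-highest-frequency rule applied to the rates quoted in this FAQ; the figures are the ones from the text, and the rest is plain arithmetic.

```python
def min_sample_rate(highest_hz: float) -> float:
    """Nyquist rule of thumb: sample at least twice the highest frequency."""
    return 2 * highest_hz

def highest_clean_freq(sample_rate_hz: float) -> float:
    """Highest frequency a given sampling rate can track without distortion."""
    return sample_rate_hz / 2

print(min_sample_rate(22050))       # 44100.0 -> the CD rate quoted above
print(highest_clean_freq(8000))     # 4000.0  -> why 8 kHz is enough for 1000-3000 Hz voices
print(highest_clean_freq(22050))    # 11025.0 -> ceiling for a fixed 22,050 Hz rate
```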
Once a sample is made, it can be manipulated in a variety of ways via mathematics. Because this processing is digital, no degradation of the signal can occur, unlike with analog processing. Some effects which can be done include:
- Cut and pasting parts of the sample around.
- Mixing/overlaying two samples.
- Flanger/Chorus effects.
- Amplification and deamplification.
- Filtering and equalization
and much more...check out a modern rack-mounted guitar digital signal processor for all the things possible :)
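As a tiny taste of what "manipulation via mathematics" looks like, the sketch below amplifies and mixes buffers of 8-bit unsigned samples (0-255 with silence at 128, the format the GS hardware uses, as described in the Ensoniq section later on). The buffers here are invented; a real editor would of course work on data loaded from a sample file, and its actual algorithms may differ.

```python
def amplify(samples, gain):
    """Scale a buffer of 8-bit unsigned samples about the 128 centerline."""
    out = []
    for s in samples:
        v = 128 + (s - 128) * gain
        out.append(max(1, min(255, int(v))))   # clamp; avoid 0, which stops a DOC oscillator
    return out

def mix(a, b):
    """Overlay two equal-length buffers by averaging their deviations from 128."""
    return [max(1, min(255, 128 + ((x - 128) + (y - 128)) // 2)) for x, y in zip(a, b)]

tone = [128, 180, 220, 180, 128, 76, 36, 76]      # made-up wave data
noise = [128, 140, 120, 150, 110, 160, 100, 130]  # made-up wave data
print(amplify(tone, 0.5))
print(mix(tone, noise))
```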
To digitize a sound (I'll use AudioZap as the example, others are similar):
- Hook everything up.
- Check the oscilloscope. The wave should be barely touching the top and bottom of the 'scope. Any higher and the sound is clipping; any lower and you'll get a poor quality recording. Adjustment methods vary by card; for the Sonic Blaster card AZ can adjust it in software. Otherwise, consult your card's manual.
- Select a recording rate (lower numbers on AZ = faster).
- Click Record and cue up your tape or CD.
- Select Ok and then start the tape or CD.
- Click the mouse and stop the tape or CD when you are done.
You've just made a sample! congratulations! Experiment...you can't hurt anything, but may discover fun/neat things to do!
Some basics on editing sounds.
(This section attempts to be program-independent, but in some cases specific references to AudioZap may sneak in :-)
I'll assume you now have a sound loaded up, and whatever program is showing you a nice wave graph. Now, you can pick out portions of the wave by simply clicking and dragging the mouse over a part of the wave, and letting go when you have as much as you want. If you now try to Play, you'll only hear the portion you have selected. If you need to adjust your selection range, many programs allow you to shift or apple-click and extend the endpoints instead of just starting over with a new range.
Once you have an area selected, you can cut/copy/paste/clear just like you would text in a word processor. When pasting a waveform, you simply click once where you'd like, and select Paste. The program inserts the previously cut or copied piece of wave and moves the wave over to make room, just like with a word processor.
For more specific information, consult the documentation for the program you use.
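Under the hood, the cut/copy/paste operations described above boil down to slicing the sample buffer. A minimal sketch, with a made-up buffer standing in for real sample data:

```python
samples = list(range(20))          # stand-in for a loaded sound

# "Select" samples 5..9 and cut them, just like cutting text in a word processor.
start, end = 5, 10
clipboard = samples[start:end]
samples = samples[:start] + samples[end:]

# Click at position 2 and paste: the buffer opens up to make room.
insert_at = 2
samples = samples[:insert_at] + clipboard + samples[insert_at:]
print(samples)
```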
AE Types of music files
|Type|Suffix|Filetype|Description|
|MCS|None|MUS|Music Construction Set tune.|
|TMS|.SNG|BIN|Music Studio song.|
|NTMOD|None|INT|NoiseTracker GS module|
|NTSNG|None|BIN|NoiseTracker GS song.|
|MOD|None|$F4|Amiga ProTracker module ($F4 is temporary).|
|MIDI|.MID|MDI|Standard MIDI file.|
A brief overview of SoundSmith style editors.
SoundSmith (and all other MOD-derived editors) use a very simplistic way of representing music, to wit:
0  C5 1000    --- 0000
1  --- 0000   --- 0000     ... additional tracks here
2  G5 33FF    G5 53FF
3  --- 0000   --- 0000
4  C5 1000    --- 0000
This is often known as a 'spreadsheet' format since there are rows and columns much like a spreadsheet. Let's take a look at an individual cell:
    2   G5   3 3 FF
    |   |    | |  |
    |   |    | |  Effect data
    |   |    | Effect number
    |   |    Instrument number
    |   Note and octave
    Number of cell
For this note, it's #2 of 63 in the pattern, it's a G in octave 5, using instrument number 3, effect 3, and data FF. What effect 3 actually means depends on the tracker in question. On SoundSmith and derivatives, it means "Set the volume to --", in this case set it to $FF (255) which is the maximum.
Now, into a larger structure. 64 lines of cells makes up a block, or pattern as it is sometimes called. (some Amiga and PC editors allow blocks of varying lengths, but we won't consider those here). You can terminate a block early with a special effect. On most trackers, an actual effect number is used. On SoundSmith, entering the note/octave as NXT makes that line of cells the last line played in that block.
Now that we've covered cells and blocks, we can get into the large-scale structure of things. To make a complete song, we can give the player a 'block list' which tells it to play a specific sequence of blocks in a specific order. For instance, we could have it play block 4, then block 0, then block 1, then block 2, then block 2. An entry in the block list is known as a 'position'. MOD-derived formats typically allow 128 positions, and 64 (MOD) or 71 (SoundSmith) blocks.
For those of you with (gasp!) other machines and more modern trackers, you'll notice many of these trackers have a 4th column in each track. The extra column is usually a volume level for the track, where 0 means "don't change" and all other values do - this helps to preserve effects and make things more flexible. Also, nearly all limits associated with the original MOD format are no longer in force - Impulse Tracker on the PC, probably the most advanced tracker available today, offers 64 tracks, up to 32 megabytes of samples, and nearly unlimited blocks and positions.
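Following the field layout described above (note and octave, then one hex digit of instrument, one of effect, and two of effect data), a cell like "2 G5 33FF" can be pulled apart mechanically. The function below is only a sketch of that on-screen layout, not of any particular tracker's file format on disk.

```python
def parse_cell(cell: str):
    """Split a spreadsheet-style cell such as '2 G5 33FF' into its fields."""
    number, note, hexfield = cell.split()
    return {
        "cell":        int(number),
        "note":        None if note == "---" else note[:-1],   # e.g. 'G' or 'A#'
        "octave":      None if note == "---" else int(note[-1]),
        "instrument":  int(hexfield[0], 16),
        "effect":      int(hexfield[1], 16),
        "effect_data": int(hexfield[2:], 16),
    }

print(parse_cell("2 G5 33FF"))
# {'cell': 2, 'note': 'G', 'octave': 5, 'instrument': 3, 'effect': 3, 'effect_data': 255}
```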
A Practical Example:
Crank up MODZap 0.9 or later and a favorite tune. Set it to the "Classic Player". Now, remember those numbers you never understood before, off to the left of the scrolling cells? Here's what they mean, in terms of what you just learned: *grin*
This is the # of entries in the block list       > 35  --- 0000
This is the current block list entry playing     > 04  --- 0000
This is the block # currently playing            > 01  --- 0000
This is the current cell # in the current block  > 36  A#4 0384
As you watch, the current cell # will normally (barring certain effects) smoothly go from 00 to 63. When it hits 63, it will go to 00 again and the current block list entry number will increment by 1. When it does, the current block number will change if needed (remember, a block can appear multiple places in the block list).
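Those counters can be reproduced in a few lines: step the cell number from 0 to 63, and when it wraps, move on to the next entry in the block list. The block list below is the made-up example sequence from earlier in this section.

```python
block_list = [4, 0, 1, 2, 2]              # play block 4, then 0, 1, 2, 2

for position, block in enumerate(block_list):
    for cell in range(64):                # current cell # runs 0..63, then wraps
        if cell == 0:
            print(f"position {position:02d}  block {block:02d}  cell {cell:02d}")
        # a real player would fetch and play the cells of `block` here
```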
An Overview of MIDI
MIDI is a specification developed to allow computers and electronic musical instruments to communicate with each other. Physical MIDI hookups can get rather complicated; here is a brief primer:
MIDI hookups are a lot like your stereo, in that each device has IN and OUT ports. However, MIDI devices also have a port known as THRU, which retransmits information from the In port (more on why this is a Good Thing later). MIDI devices are thus connected in a modified daisy-chain arrangement, with the Out of the master (usually a computer) connected to the In of Slave #1, and Slave #1's Thru connected to Slave #2's In, and so on. The Outs of all devices go to the In of the master.
Here is a diagram of a simple hookup:
   Computer (Master)        Synth (Slave #1)         Drum Machine (Slave #2)
        Out ----------------> In
                              Thru ------------------> In
        In  <---------------- Out
        In  <------------------------------------------ Out
MIDI is based on 16 'channels'. Each channel is typically assigned to one specific device you have connected in your chain. In the example above, you might have the synth set to listen to channels 1-9, and the drum machine set to listen to channel 10 (this is a typical assignment). With this setup, when the computer transmits a note on channel 10, it will first go to the IN of the synth, which will simultaneously retransmit it via its THRU port and notice that it doesn't want to use the data. The note will then appear on the drum machine's IN port. The drum machine will transmit it on its THRU port (to which nothing is connected in the example) and start the note. This allows flexibility; if for instance you wanted you could connect a second drum machine with different sounds, set it to channel 10 also, and have a unique mix :)
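For the curious, the note itself is just three bytes on the MIDI cable: a status byte of 0x90 plus the zero-based channel number, then the note number and velocity (0-127 each). The sketch below only builds those bytes; how you get them out of the computer (a MIDI interface card, a serial port, a file) depends entirely on your setup.

```python
def note_on(channel: int, key: int, velocity: int = 100) -> bytes:
    """Build a MIDI Note On message. channel is 1-16, as musicians count them."""
    return bytes([0x90 | (channel - 1), key & 0x7F, velocity & 0x7F])

def note_off(channel: int, key: int) -> bytes:
    """Build the matching Note Off message."""
    return bytes([0x80 | (channel - 1), key & 0x7F, 0])

print(note_on(10, 38).hex())   # '992664' -> a hit on the drum machine's channel 10
print(note_on(1, 60).hex())    # '903c64' -> middle C on the synth's channel 1
```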
I will not cover MIDI recording and editing here, because there isn't really any good MIDI software on the IIgs to cover. That's life.
Technical Specs for the GS Ensoniq chip
The 5503 Ensoniq Digital Oscillator Chip (DOC) contains 32 fundamental sound-generator units, known as 'oscillators'. Each oscillator is capable of either making an independent tone by itself, or of being paired up cooperatively with its neighbor in a pairing known as a 'generator'. The generator arrangement is used by most programs, for it allows more flexibility and a thicker, lusher sound.
The DOC plays 8-bit waveforms, with the centerline at $80 (128 decimal). This format is known as "8-bit unsigned". $00 (0 decimal too) is reserved for 'stop'. If a sample value of 0 is encountered by a DOC oscillator, the oscillator will immediately halt and not produce any more sound. The DOC additionally has an 8-bit volume register for each oscillator, with a linear slope. The dynamic range of the DOC (the 'space' between the softest and loudest sounds it can produce) is approximately 42 dB, or about on par with an average cassette tape.
Each oscillator has its own 16-bit frequency register, ranging from 0 to 65535. In a normal DOC configuration, each step of the frequency register increases the play rate by 51 Hz, and computing the maximum theoretical play rate is left as an exercise for the student.
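Taking the 51 Hz-per-step figure at face value (it is the same constant behind the "sample rate divided by 51" auxtype convention mentioned in the file-type table above), the register value for a desired playback rate, and the "exercise for the student", work out as follows. This is only the simplified relationship as stated in this FAQ for a normal DOC configuration.

```python
STEP_HZ = 51                      # play-rate change per frequency-register step

def register_for_rate(sample_rate_hz: float) -> int:
    """Frequency register value that plays a sample at roughly the given rate."""
    return min(65535, max(0, round(sample_rate_hz / STEP_HZ)))

def rate_for_register(register: int) -> float:
    """Approximate playback rate for a given register value."""
    return register * STEP_HZ

print(register_for_rate(22050))   # 432 -> also the auxtype a 22 kHz raw sample would carry
print(rate_for_register(65535))   # 3342285 Hz -> the "maximum theoretical play rate"
```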
When oscillators are paired to create generators, there are 4 possible modes:
- Free-run: the oscillator simply plays the waveform and stops. No interaction with its 'twin' occurs.
- Swap: Only one oscillator of the pair is active at a time. When one stops, the other immediately starts.
- Loop: The oscillator simply plays the waveform and if it hits the end without encountering a zero, it starts over at the beginning.
- Sync/AM: This actually has 2 possible effects: either one oscillator of the pair modulates the volume of the other with the waveform it's playing, or both oscillators sync up perfectly, causing a louder and more 'solid' sound.
Oscillators play waves stored in up to 128k of DRAM. This DRAM is not directly visible from the GS's 65816 CPU, but can be accessed (slowly) via services supplied by the Sound GLU chip. Note that no widely manufactured IIgs motherboard supported the full 128k of DRAM that the DOC can see. Conversely, no synthesizer Ensoniq made using the DOC had anything less than the full 128k.
The output of an oscillator can be directed to any one of 16 possible channels. Apple only makes 8 channels available via the 3 bits on the sound expansion connector, and all current stereo cards limit this to 1 bit, or two channels. However, the "Bernie II The Rescue" IIgs emulator for the Power Mac expands this support to 4 discrete output channels, two of which are encoded to the rear channel for Dolby Pro-Logic compatible output. No IIgs software that I'm aware of supports more than 2 channels however.
About IIgs Stereo Cards
|Manufacturer|Card|Comments|
|MDIdeas|SuperSonic|First IIgs stereo card. Not very well constructed, but sounds nice. Digitizer option pretty good.|
|MDIdeas|Digitizer Pro|Daughterboard for SuperSonic, but also takes up another slot in your GS. Pretty good, but very few were sold.|
|Applied|GStereo|I've never used one; anyone?|
|Applied Visions|FutureSound|Most advanced card made. Includes sophisticated noise reduction, coprocessor, and timing generator for ultimate control of|
|Applied Engineering|Sonic Blaster|Generally poor to average card; boneheaded decision to use non-shielded ribbon cable results in hissier than average output and|
|Applied Engineering|Audio Animator|The one they got right. Has digitizing circuitry external to the GS itself to avoid noise, plus a MIDI interface.|
|Econ Tech.|SoundMeister|Generally above average quality. Nothing much to say. Pro version with direct-to-harddisk|
What about them other machines?
Here's a rundown of sound on other computers...
|Computer or Card|Wavetable voices|WT bits|FM voices|Stereo?|Digitize?|
|Apple IIgs|32|8|None|Yes(4)|Yes 8 bit|
|Soundblaster|1|8|11|No|Yes 8(4)|
|Soundblaster Pro|2|8|20|Yes|Yes 8|
|Soundblaster 16|2|16|20|Yes|Yes 16 bit|
|Soundblaster 16 AWE32/64|32|16|20|Yes|Yes 16|
|Pro Audio Spectrum 16|2|16|20|Yes|Yes 16|
|Gravis UltraSound|32|8/16|None(2)|Yes|Yes 16(4)|
|Gravis UltraSound Max|32|8/16|None(2)|Yes|Yes 16|
|Gravis UltraSound PnP|32|8/16|None(2)|Yes|Yes 16(11)|
|Logitech SoundMan Wave|20|16|22|Yes|Yes 16|
|Commodore Amiga (all)|4|8|None|Yes|Yes 8(4)|
|Mac (non AV, 0x0)|4|8|None|Yes(3)|Yes 8(4)|
|AV 0x0 Mac|Infinite(1)|8/16(10)|Infinite(1)|Yes|Yes 16|
|PowerPC Mac|2|16|None|Yes|Yes 16|
|AV PowerPC Mac|Infinite(9)|8/16(10)|Infinite(9)|Yes|Yes 16|

|Game Machine|Wavetable voices|WT bits|FM voices|Other voices|Stereo?|
|Atari 2600|0|0|0|2|No|
|Intellivision|0|0|0|4(8)|No|
|Nintendo Ent. System|1(5)|8|5|1|No|
|Sega Genesis|1(5)|8|6|0|Yes|
|Sega CD|11(7)|8/16(7)|6|0|Yes|
|Super NES|8|12(6)|0|0|Yes|
|Sony PlayStation|24|16(6)|0|0|Yes|
|Sega Saturn|32(12)|8/16|32(12)|0|Yes|
|Nintendo 64|Infinite(13)|8/16|Infinite(13)|0|Yes|
"Wavetable" as used here means "a channel capable of playing back a digitized waveform". This is NOT the generally musically accepted meaning of the term, but it IS how it is commonly used when referring to computer sound boards.
"8/16" for WT playback bits means the chip is capable of directly processing 8-bit or 16-bit samples without conversion (the GUS's GF1 chip and the AV Mac's DSP chip obviously fit these criteria).
1 - The AV Mac's DSP chip can theoretically mix an infinite number of wavetable voices or synthesize an infinite number of FM voices. However, this is limited in practice by the speed of the chip and any other things you have it doing (voice recognition, modem replacement, etc).
2- The Gravis UltraSound can emulate FM synthesis in software.
3- Macs before the Mac II were mono-only.
4- This requires additional hardware.
5- The Genesis and NES's wavetable channel is pretty hackish, and not very high quality; nonetheless it works for speech.
6- The SNES and PSX sound chips accept 16 bit samples which have been ADPCM 4:1 compressed (this is similar to the ACE compression toolset on the GS, but the data format is NOT the same).
7- The Sega CD has two channels of 44.1khz stereo 16-bit CD audio and 8 8-bit DAC channels in addition to the capabilities of the Genesis.
8- The Intellivision uses the General Instruments AY-3-8192 chip found on Apple II boards such as the Phasor and Mockingboard. This provides three tones and one percussive noise at once.
9- The PowerPC AV Macs have no dedicated DSP chip; they use the main CPU, which can cause application performance degradation (see also note 1).
10- AV Macs of both CPU types have a 2-channel 16-bit CODEC to actually reproduce the audio, but the DSP or 60x chip are capable of conversion.
11- The Gravis UltraSound PnP specs also apply to other AMD InterWave-chip based boards such as the Reveal WavExtreme 32.
12- The Saturn's 32 voices can each be set to either waveform playback or FM. FM is not limited to sine waves as on older chips, however.
13- Like AV Macs, the N64 uses a DSP to mix as many sound channels as you can devote processing time to - however, since the same DSP computes the 3D geometry you're pretty limited on how many channels you would normally want to use.
What's this I hear about 3D sound?
Since stereo sound has been around since at least the 1940s, people have been attempting since then to bring the front-to-back plane into sound, and not just the side-to-side provided by conventional stereo. One of the more notable attempts was made in the 1960s with the so-called "quadraphonic" system, which actually had 4 speakers and used special LPs with 4 distinct channels. Since this is often impractical, and nobody wanted to go to the trouble of recording 4 channels anyway, the system faded out by the mid-to-late 1970s.
With the advent of affordable DSP power in the early 1990s, and advanced psychoacoustic research, many new systems started to appear. Most popular is Dolby Pro Logic, which encodes 4 channels of sound into the 2 stereo channels commonly found in stereo VHS tapes and compact discs. This system uses 5 channels - left, center, and right in front plus left and right rear, which are actually the same sound. This system doesn't provide very good sound localization because the 2 rear speakers cannot play different material, and neither they nor the center channel can play full-range sound. Nonetheless, because the encoding for this system is cheap and easy to do, a wide variety of PC and Macintosh software now offers it in either licensed or unlicsensed form.
This system is being gradually phased out in favor of Dolby Digital, or AC-3, which encodes "5.1" distinct full channels of sound - 1 channel for each of the same 5 speakers used in the older Pro Logic plus a ".1" channel which contains only deep bass and is intended to drive a subwoofer. This provides a very compelling sound field when properly implemented with good quality speakers, since all 5 main speakers can play independant full-range sounds.
There are also a variety of methods which claim to reproduce an entire sound field with only 2 speakers or normal stereo headphones. The most popular of these is "QSound", which has the added advantage of also being compatible with Pro Logic, so you can get 'real' multi-plane sound if you've got it and a reasonable imitation otherwise. QSound was first commercially used for Madonna's "Immaculate Collection" album, and is now used in arcade, console, and PC-based video games as well as many other places.
Note that although Pro Logic encoding is possible in realtime on the IIgs, no known software actually does this. Additionally, the psychoacoustic methods such as QSound simply require too much DSP power to pull off in realtime on the IIgs or other Apple II computers, so be wary of any claims of such. It's certainly possible to pre-process waveforms with QSound and simulate realtime encoding - this method is used on systems such as the Sony Playstation which don't have spare DSP capacity. This "cheat" may or may not work with other psychoacoustic systems - it depends on the specific coding method. As always, let your ears be your guide...
Copyright (c) 1993-1997 Ian Schmidt. Contents may be freely distributed as long as no editing occurs without permission, and no money is exchanged. Exceptions are hereby explicitly provided for Joe Kohn's Shareware Solutions II, the services GEnie and Delphi, for the current Apple II FAQ maintainer, and for user groups everywhere. | 1 | 9 |
<urn:uuid:76873c70-1b67-41b4-94d4-25f56732bec0> | This instructable will walk you through the basics of making your own nitrogen gas generator. Using pressure swing adsorption with carbon molecular sieve you can make an endless supply from the air without using any consumables. You can use this for filling your tires (nitrogen stays in tires much longer than oxygen, reducing the time you need to refill them), having a non-combustible gas or, in my case, to feed into a liquid nitrogen generator.
I won't go into any theory here. I'll just go over the basic construction. A unit of the size I will describe can go between $6000 to $8000. You can build this for a fraction of the price. If you are interested in this subject in more detail you can go to my nitrogen generator web tutorial. There you will find more pictures, animations and other information. There is also a video going over the nitrogen generator. You can also view my liquid nitrogen generator tutorial.
If you are ready to build this for your shop just click next.
Step 1: Basic Theory
This is a big project and will most likely need to stay in your shop or garage. The generator runs off of compressed air. It has two tower beds, each filled with CMS (carbon molecular sieve). Pneumatic valves control the airflow into one at a time. Under pressure, oxygen is preferentially trapped, while nitrogen passes through and out the bottom. The controller opens the next bed for filling, isolating it from the first one as it opens it to the atmosphere. During decompression, the first bed releases the oxygen back into the atmosphere, regenerating the CMS.
The picture above shows the CMS, which looks like chocolate sprinkles. They are approximately 0.5 x 2mm in size. Let's go over a basic materials list.
Step 2: Supplies
The unit I will describe will produce 98.5 - 99% pure nitrogen gas at a flow of 1 SCFM.
Air compressor: 6HP, 6-9 SCFM depending on the pressure.
Air compressor tank: 30 Gallons
Two 8" steel tubes x 33" tall
CMS-180, 200 or H: 20kg per tower. You can get this from molecular sieve dealers. Refer to web tutorial about this material.
Two pairs of 150 psi flanges (one flange blank/one 8" flange): you can get these at Lincoln supply or McMaster-Carr.
1/4 - 3/8" stainless steel (304 or 316) plumbing. I use 1/4" as this keeps the cost lower. You will need enough pieces to make the connections.
Prefilter: you need a 5um particle filter/water trap and a 0.01um coalescing filter. I use Wilkerson filters. Again, refer to my web tutorial for all the part numbers.
Versa pneumatic valves: you need two of these.
Check valves: you need two
200 psi pressure gauge
Rotometer or other flow control device
Pneumatic controller: schematics to follow
You will need access to a welder
You will need to build a cradle to hold the towers, which will weigh over 150lbs each. Mine has wheels on it so I can move it around. You're not going to be carrying this up and down your stairs, though.
Step 3: Basic Construction
Before making this nitrogen generator you should know a few things. The unit is at least 300lbs when complete. You will be dealing with pressures close to 150 psi. You will need access to welding equipment and possibly steel plate rollers.
The PSA consists of two tower beds, solenoid pneumatic valves and a controller. I sized my unit to deliver about 30 L/min of 98% or better purity N2. The bed is made from an 8" ID schedule 40 steel pipe at 33" long. The top and bottom has a welded low-pressure (150 psi) steel flange plate. They are an 8" 150# raised-face slip-on flange and a 8" 150# raised-face blind flange. A gasket helps seal this plate with a solid blank flange plate on top. A hole is drilled in the center to accept a 1/4" steel nipple, which you weld to the outer, removable plate. You can go to 3/8", but this did not seem necessary for the flows I'm using, and this increases the costs of the other hardware and pipes.
Each of the two towers holds 20kg of CMS-200 or CMS-H. You need to prevent these grains from emptying through the plumbing. You accomplish this with an insertible steel screen. You need to fashion a ring of steel that will just fit the inner diameter of the tower. Weld some steel screening onto this ring using whatever method works best for you. The screening needs to have holes about 1mm in size so the CMS grains do not pass through. This still ensures good airflow. As an added precaution you will use 10-12 MERV air-conditioning filter paper to trap dust from degrading the CMS.
Insert the bottom screen component. I welded two loops of steel on opposite sides so I could drop and retrieve the screen with two long poles. The ring holding the screen fits snuggly, so you will not be able to reach down with your hands and grap it out. Next, drop the filter paper and then fill the tower almost to the top with the CMS. Now, place another layer of the filter paper, followed by the screen. There should be now way for the grains to drop through the bottom or get discharged out from the top. Screw down the top plate.
As mentioned above, these towers are very heavy. I have mine sitting on a rack with large wheels on the bottom so I can roll it around. You will not be carrying this up and down flights of stairs, so pick a good resting place to keep it.
Above are pictures showing the bed with the screen. Again, this screening is duplicated on the bottom, except there are steel loops on the insertable ring to allow for extraction.
Step 4: Valves and Plumbing
Water vapor and microscopic particles can fowl up the CMS. One needs to make sure the fresh gas is clean. The system uses two filters: a pre-filter and a coalescing filter. The first traps water vapor and 5 um particles. The second filter particles down to 0.01 um. These are 1/4" port Wilkerson filters. The part numbers are F16-02-000 and M16-02-000, respectively.
The PSA has high pressure tubing coming off of the check-valve. The tubing from the two beds join together on a TEE. The output then goes to a high pressure valve that can shut off all flow. From here, the output goes to a Yokogawa rotometer, so one can control and measure the output flow.
Step 5: Pneumatic Valves
The bed is partially pressurized by the previously charged bed and from the pressurized reserve tank of air. When one bed receives fresh gas, the other's valves isolates it from the fresh gas and vents its tank to the atmosphere. The PSA system uses a three-way Versa valve. My system uses a normally-closed valve, so it needs power to open it to allow the fresh gas through to the bed. The model number for the 3-way, 1/4 brass, normally closed, 120vac Versa valve is VSG-3321-120. There may be an additional letter at the end.
Fresh gas enters from the top and flows into the bed when there is 120vac on the valve. When the power is off, the valve closes to the fresh gasflow and opens to the atmosphere. Exhaust gas flows from the bed out the hole on top to the left of the inflow port. Since there are two beds there are two valves. A controller handles powering and de-powering the valves.
The enriched gas leaves the bed at the bottom of the tank. Remembering that only one tank is under pressure, we need a means to prevent the pressurized, enriched gas from entering the other tank from the bottom. The system uses a check-valve that only lets gas flow out.
I got the check valves for a few dollars on Ebay. There is one under each bed.
Step 6: Arduino Pneumatic Valve Controller
A simple arduino-based controller manages two solid state relays. The arduino runs a cycling program and manages the gates on two triacs. These allow the controller to energize and de-energize the Versa valves with 120vac.
Above is the timing diagram and the controller schematic. The c-code is simple enough to write.
Step 7: Final Product
So, there you have it. You can now take regular air and pull out 99% pure nitrogen gas. If you want purer gas you will need a second stage, and unless you are a chemist making industrial grade compounds you won't need this.
I have other high-detail web tutorials.
You can read about my liquid nitrogen generator here.
You can read about how to make a 3 or 12kw induction heater here.
You can see one of many video of my induction heaters on Youtube. This is the 3kw unit here. | 1 | 2 |
<urn:uuid:11f381e2-2ddc-487b-933a-96769dfac79c> | On the future of computing hardware
One, systems tend to get smaller.
Mathematicians with an inclination for studying so-called "systems" may without much effort look at various examples of such "systems" and devise quantitative measures of complexity. For example the mammal's organism is highly complex on various levels of abstraction, say, when looking at its average genetic makeup; the same can be said about social networks1, global weather2 or the motion of stars.
These complexities are however against the intuition we have in engineering. That is because human-made systems are not only designed to be functional, but to also be as deterministic as possible; and such determinism cannot be achieved without a proper understanding of the theoretical model used in designs and especially its limitations. "Simplicity is the ultimate sophistication", and "everything should be made as simple as possible, but no simpler", but also "simplicity is prerequisite for reliability", to quote only a few aphorisms.
We can observe empirically that systems' complexity, or rather some measure of their "size", evolves cyclically. Dinosaurs evolved to be huge creatures, only to be replaced in time by miniaturized versions of themselves3. The first simple computing systems had the size of a room, while large-scale integration has led to computers which fit on the tip of one's finger. This reduction in size comes with reduction in certain features' size and/or complexity, which is why for example humans don't have tails, nor the amount of body hair of some of their ancestors, nor sharp canines.
Looking at the evolution of various industries in the last few decades4, it is clear that they are currently in the latter phase of the growth-reduction cycle. It is only natural that the so-called hardware, and more specifically numerical computers made out of silicon5 will follow.
Two, on shaky foundations; transcending ideological confusions.
The fact that nowadays' hardware industry stems from the needs of marketing, as opposed to the needs of the market, is well-known. This charade started with personal computers, then it was followed by mobile phones, tablets, and now so-called "smart devices" from "the Internet of things", which operate in cycles of what is known in economics as "planned obsolescence"6.
Purely from the perspective of computing hardware, this leads to the proliferation of functionality that is useless, or worse, harmful to the average consumer. For example the new generation of Intel Skylake processors come with Intel SGX, Intel MPX and possibly others; all this at the expense of reliability7, and at the same time without advancing the state of the art in any fundamental sense. All they do is offer some clients new ways to shoot themselves in the foot, and Intel are by no means a singular case8.
Degrading processor quality aside, there are quite a few pragmatic factors lying behind the existence of the planned obsolescence-aggressive marketing vicious circle. For one, it's easier to scale silicon production up than to scale it down, as it's more effective to fabricate a large wafer containing smaller and smaller processors than to produce in smaller numbers9. Meanwhile, some of the steps in the manufacturing cycles (still) require significant human workforce (for the time being), which makes eastern Asia the prime choice for production10. Thus the per-unit cost of producing a processor is higher for smaller batches (say, a hundred at most), while e.g. smartphone assembly in high numbers will necessarily rely on poorsters in China.
This context however underlies a problem of a more ideological nature, that has been recently, yet unfruitfully, discussed in the free software world: there is a general lack of choice in general-purpose hardware architectures, and given the situation above, it's not likely that this will improve. We are most likely heading into a duopoly between ARM and x86, while the fate of MIPS is not quite certain11, IBM's Power is somewhat expensive and FPGAs likewise, barring the very low-entry ones. The open RISC-V architecture offers some hope, given Google's interest in them, but I wouldn't get my hopes up just yet.
This is not the first time I am writing about this. I have touched on the subject on the old blog a while ago, and I have also discussed some of the (still current) issues in a previous Tar Pit post. People don't seem to have gotten their heads out of the sand since then, so I will reiterate.
Three, I want to build my own hardware.
To be honest, I don't care too much about Intel, Qualcomm, Apple and their interests. If Romanians were building their own computers12 back in the days before "personal computer" was a thing, then I don't see why I couldn't do this in the 2010s. Whether it's made feasible by 3D printing, some different technology or maybe some hybrid approach13, this is a high-priority goal for the development of a sane post-industrial world and in order to pick up the useful remains of the decaying Western civilization. However one would put it, small-scale hardware production is the next evolutionary step in the existence of numerical computers.
It is readily observable that14 the computer industry is heading towards a mono-culture, not only in hardware, but also in operating systems15 and in "systems" in general. This will -- not might, not probably, but definitely -- have the effect of turning the "systems" world into a world of (often false) beliefs, much akin to Asimov's Church of Science, where people will not even conceive the possibility of existence of other "systems".
I am of course not crazy enough to attempt to stop this. The industry can burn to the ground as far as I'm concerned, and this it will, and it'll be of their own making. What I want is to gather the means to survive through this coming post-industrial wasteland.
By which I mean specifically not the jokes the average Westerner calls "social networks". Facebook, Twitter, Reddit et al. are only networks in the sense that they reduce the level of interaction to at best that of monkeys throwing typewriters around; at best. The average level is rather Pavlovian in nature.↩
Weather, not climate. Mkay?↩
Which taste like chicken.↩
Which, although probably perceived by the naïve as capitalist, is rather reminiscent of the old communist planned economy model. And not unlike communist economy, it often leads to higher prices and lower quality products. To bear in mind next time you're buying that new Samsung or Apple smartphone.↩
Dan Luu's "We saw some really bad Intel CPU bugs in 2015, and we should expect to see more in the future" is required reading on this particular matter.↩
ARM are somewhat more conservative, but they offer SoC producers enough freedom to shoot their clients in the foot. Without any doubt, the average Qualcomm phone is most probably running Secure World software that the end user will never care about, and that the curious mind will never have the chance to reverse engineer -- without considerable financial resources, anyway. That's a good thing, you say? Well, it's your opinion, you're entitled to it, please stick it up your ass.
But what am I saying? By all means, please do buy whatever shiny shit Apple or Samsung are selling you. As long as it's not my money...↩
There are quite a few technical reasons for this too, and some of these escape me. For one, opening a semiconductor plant isn't exactly cheap, and the ratio of defects to units produced is significant, yet in principle easy to estimate statistically. The cost of making a circular wafer is also not small, which makes economic feasibility a tricky thing. Meanwhile, bear in mind that Moore's law is on its way to becoming dead and buried, given that CMOS-based technologies are reaching their physical limits.↩
To be perfectly clear, the world's largest silicon producer is at the time of writing not Intel, but Taiwan Semiconductor Manufacturing Company, Ltd.↩
Imagination Technologies, the intellectual property holder, have not been faring too well in 2016.↩
Remember the story of the ICE Felix HC that was the toy in my early computing days, before I had the slightest idea of what an algorithm actually was. Now ponder the fact that there is no qualitative difference between that piece of junk and today's latest, greatest, whateverest computer. Yes, you can do the exact same things on that old heap of junk, and by "exact" I certainly do not mean watching porn, by which I mean that this is why your children will prefer make-believe sex instead of fucking real women, which uncoincidentally is why Arabs are the superior ethnicity and those God-awful political corectnesses will stop being a thing in less than a generation. But I digress.
As a funny historical footnote, the same Romanians attempted at making a Lisp machine back in the '80s, which makes me hopeful that the same thing should be achievable almost three to four decades later. I hate repeating myself, but it's quite literally either this or the dark ages.↩
The approach itself is relevant only as far as it solves the most problematic economical aspects, i.e. logistics (needs to be made using readily available and/or easily procurable materials) and low production costs of a small number of units (tens to a few hundreds at most). Processing speed and size are secondary aspects, a Z80 (or maybe something equivalent to a 80386) should really be enough for most general-purpose-ey uses.↩
Much to the baffling ignorance of otherwise intelligent people. Unfortunately for them, nature abhors singularities; as one of the older and wiser men in the Romanian Computer Science community used to remind us, there is, simply put, a cost for abso-fucking-lutely everything in life.↩
In case you were wondering, despite being the most usable kernel to date, Linux is definitely the ultimate abomination. Its greatest feature is that it's not too difficult to strip of all the crap. | 1 | 4 |
<urn:uuid:22e2857d-6509-4e57-8a6c-67dd5adff792> | Computers 'to match human brains by 2030'
Artificial intelligence portrayed in Hollywood movies like 'The Terminator' and 'Blade Runner' could be a reality in the next two decades.
A leading scientific "futurologist" has predicted that computer power will match the intelligence of human beings by 2030 because of the accelerating speed at which technology is advancing worldwide, 'the media reported.
According to computer guru Dr Ray Kurzweil, there will be 32 times more technical progress during the next half century than there was in the entire 20th century, and one of the outcomes is that artificial intelligence could be on a par with human intellect in the next 20 years.
He said that machines will rapidly overtake humans in their intellectual abilities and will soon be able to solve some of the most intractable problems of the 21st century.
Computers have so far been based on two-dimensional chips made from silicon, but there are developments already well advanced to make three-dimensional chips with vastly improved performances, and even to construct them out of biological molecules.
"Three-dimensional, molecular computing will provide the hardware for human-level 'strong artificial intelligence' by the 2020s. The more important software insights will be gained in part from the reverse engineering of the human brain, a process well under way.
"Already, two dozen regions of the human brain have been modelled and simulated," Dr Kurzweil said.
Saturday, February 16, 2008
As the computing world has become smaller, it has created bigger problems for users.
Take, for instance, sensitive information. Back a few decades ago, it would have been impossible to misplace or lose information. The database had to be stored on a mainframe, which was larger than a work desk.
Even when magnetic storage devices came into being, they were a long way from being portable. And the first laptops? They were often referred to as "luggables," because they were more suitable for luggage than lap top.
Today, a flash drive no larger than an index finger can hold millions of names and numbers. An entire company's business plan and customer list can fit on it. It can easily be carried.
And just as easily left behind.
If you're one who finds that thought disturbing, consider the Padlock drive from Corsair. The flash drive has a key pad on its front. Once programmed with a personal identification number, the drive can't be used until the correct PIN - from one to 10 digits - is entered.
Once the correct code is entered, it must be plugged into a USB port within about 10 seconds or the drive will relock itself. It also automatically relocks when removed from the USB port, or if it is left in the computer when the computer is turned off.
The drive comes in one- and two-gigabyte sizes and sells for around $30.
The Padlock has six buttons and a red and green LED. One of the buttons has an icon of a key on it. Press it to begin entering the PIN number. Each number key handles two numbers: Press the first key once for 0; quickly press it a second time to enter 1. The remaining keys are grouped 2-3, 4-5, and so forth. Press the button with the key icon again when finished. If the PIN was entered correctly, both the red and green LEDs will flash. Insert the drive into the USB port of the computer and the drive is recognized.
The drive comes with a clearly written instruction sheet. And there is another nice feature as well: If you want to change the PIN number, but don't have the instruction sheet, look on the drive itself. A copy is stored there that can be viewed using any Adobe reader program.
In addition, Corsair also provides a free Web site where Padlock owners can store PIN numbers. If you forget the PIN number, supply the Web site with a name, e-mail address and password and it will be sent.
A Corsair spokeswoman said the keypad entry system is a low-cost approach to protecting data. It also avoids the problem of using encryption (software is needed on the host computer) or drives with biometric fingerprint sensors.
She also noted the Padlock works without software installation on Windows, Mac and Linux-based computers.
Corsair Introduces 32GB High-Density USB Flash Drives
for Flash Voyager™ and Flash Survivor™ at CES 2008
USB Drives Have Capacity to Hold Over 16 Full-Length High-Definition Movies
or an Entire Season of a TV Series
A worldwide leader in high performance computer and flash memory products, announced today that it is expanding its Flash Voyager and Flash Survivor USB family lines with new 32GB capacity offerings. The new Corsair 32GB Flash Voyager and Flash Survivor USB drives will be debuted at the Consumer Electronics Show 2008 (CES) next week in Las Vegas in the Corsair Suite at the Wynn Hotel and at Showstoppers CES 2008.
Ultimate Solution for Storing, Transporting & Backing-up Critical Data
Users now have the ultimate solution for storing, transporting and backing up large amounts of personal and professional data. Whether using the Corsair proprietary all-rubber Flash Voyager or the aluminum-encased water-proof Flash Survivor, the large amounts of data on the drive will be safeguarded for users with an active lifestyle. Corsair USB drives provide the added ruggedness and performance not found in other storage drives utilizing rotating media.
Corsair 32GB drives provide the storage capacity necessary to hold over 16 full-length, high-definition movies or even an entire season of your favorite TV series. These large density drives can also be used as portable back-up devices for critical or sensitive information. In addition, Corsair 32GB USB 2.0 drives are bootable, which means users can actually store full versions of operating systems and applications in order to quickly “re-create” the necessary software environments to troubleshoot system problems.
Corsair 32GB USB drives are immediately available:
Flash Voyager 32GB ~ MSRP $229.99 USD
Flash Survivor 32GB ~ MSRP $249.99 USD
"Whether with innovative designs, like the Flash Voyager and Flash Survivor, or industry leading large-density drives in convenient portable form-factors, Corsair is always pushing the limit of what USB portable storage has to offer," said Jack Peterson, VP of Marketing at Corsair. "Our newest USB additions will allow a whole new set of users – multimedia, technical and data conscious – to take advantage of rugged, high-performance solid-state storage," added Peterson.
Corsair 32GB USB drives are available through Corsair’s authorized distributors and resellers world-wide. Each drive is bundled with a lanyard, security software/driver preloaded, and USB extension cable. Corsair flash products are backed by a 10-year Limited Warranty. Complete customer support via telephone, email, forum and TS Xpress is also available. For more information on Corsair USB drives,
Posted by SANJIDA AFROJ at 5:25 PM
Apple Updates Leopard--Again
Apple released its second major update to Mac OS X Leopard, the operating system it shipped in October. Mac OS X 10.5.2 Update, as Apple calls it, is one of the largest operating-system patches I've ever seen. The "combined update" download, which applies every fix issued so far to an unpatched copy of Leopard, weighs in at 343 megabytes, but even on a Mac with the 10.5.1 update applied, 10.5.2 was a 341-meg download.
(A conspiracy theorist could note that the mammoth size of these files forces dial-up users to drive to the nearest Apple Store to use the shop's broadband connection to grab their own copy--and maybe they'll wind up buying a new iPod while they're around.)
A note at Apple's tech-support site inventories the fixes 10.5.2 brings. Most are the usual security, stability and performance improvements, but Apple also fixed two of the bigger sources of complaints about Leopard's interface--the partially-transparent menu bar and the Dock "Stacks" that offer quick access to the contents of your Applications, Documents and Downloads folders.
You can now return the menu bar to a solid shade of light gray, and you can tweak the Stacks icons (via a right-click menu) to change their appearance, vary their order in which they display their contents, or make them act like standard folders. Those may not sound like major changes, but Mac interface-design connoisseurs had objected vociferously ("Transparent Menu Bar, Die Die Die!") to Leopard's earlier implementations of these ideas.
10.5.2 was not as easy to load as earlier OS X patches. On the MacBook Air that I reviewed recently, a download through OS X's Software Update mechanism didn't work. After a restart, the computer stalled at the first step in the install process. I shut the laptop off, discovered to my relief that the aborted update hadn't destroyed the system, and--after a second failure by Software Update--downloaded the massive combo updater file and installed that instead, which worked as advertised.
After putting 10.5.2 on my own Mac, I discovered a second issue: iCal seemed to have lost all of my calendar and to-do entries. A survey of some Mac-troubleshooting forums suggested that I could recover those entries by deleting some cache files in my home account's Library/Calendars folder. That worked; you can read a more detailed account of this in Sunday's Help File.
With those glitches out of the way, I'm pleased overall with this update. I liked Leopard when it shipped--the lack of an equivalent to 10.5's Quick Look document viewer in Windows now annoys me on a daily basis--but I certainly like it better with this update.
But for all of the compliments I've given Leopard, I've heard from some readers who are annoyed or even angry about this operating system. About a month ago, for instance, one reader vented at length that "Leopard is buggy and you should tell people about it since Apple has ignored these problems for months." (I told him that I hadn't seen any issues with it on the five or six Macs I've installed it on, and suggested an "Archive and Install" reinstall to put a clean copy of Leopard on his Mac, but he hasn't written back to say if that worked or not.) So I'll throw these questions out there: How has Leopard worked for you? What kind of a difference has 10.5.2 made?
A pleasant surprise for many Mac users
Incremental updates to Mac OS X traditionally have consisted primarily of bug fixes. Significant changes to existing features are saved for the major updates (Panther, Tiger, Leopard).
So when Apple let loose the much-anticipated 10.5.2 update to Mac OS X Leopard on Monday, changes to two features introduced with the release of Leopard in October pleasantly surprised many veteran Mac users.
One change is the addition of an option in the Desktop Control Panel to turn off the translucent menu bar at the top of the screen. Some Mac users detested this new feature because the patterns of desktop images could make menu items hard to read. It didn't bother me all that much, but it's nice to have the option to make the menu bar opaque again.
Apple also tweaked the Stacks feature, which allows users to click on special folders in the Dock and see the icons of its contents fan out across the desktop. Some users didn't like how the folder looked like a pile of icons with only the topmost icon identifiable. Not only that, but they disliked how the icons fanned out from the Dock. The more items, the harder the feature was to use.
Apple has addressed these complaints by offering choices. Control clicking on a Stack reveals several new options, such as making the Dock icon appear as a folder and setting the folder's contents to appear as a list. This works much better for folders with numerous items.
It's very un-Apple-like to alter fresh features in a version of OS X not six months old. Could it be that Apple has decided to listen to its users?
Mac Pro's Reboot on Wake From Sleep: Incremental updates sometimes fix other issues not noted in Apple's documentation. As have most other owners of the new Mac Pro, I had hoped the 10.5.2 update would fix the dreadful "reboot on wake from Sleep" problem.
After a day and a half and more than a half-dozen wake from Sleeps, I have not had an unexpected reboot.
However, reports on Mac forums indicate that other Mac Pro owners still are experiencing the issue even after upgrading to 10.5.2. Others owners also report unresolved problems with their graphics (which I thankfully have not had.) Apple needs to fix this soon. Its Mac Pro customers - those who have bought Apple's priciest hardware - deserve better.
Improved performance: One point of speculation that dates back to before the Mac Pros were announced was that the 10.5.2 update would contain optimizations designed to extract better performance from the new models.
I have run both the Geekbench and XBench benchmarking software on my Mac Pro since upgrading to 10.5.2. Given the variable scores I tend to get from these programs, it doesn't look as if this update has boosted performance. But the Leopard Graphics Update, which users can install only after installing 10.5.2, did improve my graphics scores noticeably in XBench's Quartz Graphics Test, which leapt from averaging in the low 200s to averaging in the mid-250s, a 25 percent increase.
To upgrade to 10.5.2: If you're running Leopard and haven't updated to 10.5.2, click on the Apple Menu and select "Software Update." After the Mac reboots, go back to the Apple Menu and repeat the process to obtain the Leopard Graphics Update. A word of warning: The 10.5.2 update weighs in at a bulky 343 megabytes, so a fast broadband connection will come in handy.
Apple Now Comes With Time Capsule
Apple introduced Time Capsule -- a backup device for automatic and wire-free back-ups for one or more Macintosh.
Time Capsule supports all the Macs running on Leopard -- Apple's latest Mac OS, which includes Time Machine -- automatic backup software.
In terms of functionality, Time Capsule is a plug in device, which unites an 802.11n base station with a server grade hard disk to form a single unit, followed by the installation that automatically backups Macs wirelessly.
Time Capsule offers a full-featured 802.11n Wi-Fi base station and has two models, 500 gigabyte and 1 terabyte. It performs 5 times more and double the range of 802.11g. Apple's iMac desktops and all Mac notebooks including MacBook, MacBook Pro, and MacBook Air are built-in with 802.11n. Also it has a built-in power supply and connection to print wirelessly to a USB printer.
Some additional feature includes, dual band antennas for 2.4 GHz or 5 GHz frequencies, 3 gigabyte LAN ports, 1 gigabyte Ethernet WAN port, 1 USB 2.0 port, Wi-Fi Protected Access (WPA/WPA-2), 128 bit WEP encryption, and a built in NAT firewall supporting NAT-PMP for features like 'Back to My Mac'.
"Bring Time Capsule home, plug it in, click a few buttons on your Macs and voila - all the Macs in your house are being backed up automatically, every hour of every day. With Time Capsule and Time Machine, all your irreplaceable photos, movies and files are automatically protected and incredibly easy to retrieve if they are ever lost," said Steve Jobs, CEO of Apple.
Along with wire free backup of all data with Time Machine, the user can find lost files and even restore all of the software. In case of file loss, it can be track back to find the deleted files, programs, photos and other digital media. And then restore back the file. The Leopard OS can easily restore an entire system from Time Capsule's backup via Time Machine.
With Time Capsule, a wire-free and secured network for about 50 users can be created and can imply security checks such as Internet access for children's computers. It can also serve as a backup solution for multiple computers as well as the backbone for a high-speed - 802.11n wireless network that can be used as an easy and cheap options at home, school or work for file security.
Posted by SANJIDA AFROJ at 4:17 PM
Wal-Mart moves to the Blu-ray camp
The HD DVD format is reeling from another body blow.
The nation's largest retailer, Wal-Mart Stores Inc., said Friday that it would sell movies and players only in the rival Blu-ray format at its 4,000 discount stores and Sam's Clubs.
Wal-Mart said it would continue to sell its HD DVD inventory over several months, then devote more shelf space to Sony Corp.'s Blu-ray. The announcement from the country's biggest seller of DVDs comes amid a growing number of defections from the Toshiba Corp.-backed HD DVD camp.
Earlier this week, online movie rental service Netflix Inc. said it would exclusively stock Blu-ray discs, and electronics retailer Best Buy Co. said it would "prominently showcase" Blu-ray hardware and movies as a way of steering consumers to the format.
"Up to this point, it's been death by a thousand cuts," said Ross Rubin, director of industry analysis for NPD Group, a market research firm in Port Washington, N.Y. "This one may be the unkindest of all."
The HD DVD format has been losing momentum since January, when the last major studio to support both formats, Time Warner Inc.'s Warner Bros. Entertainment, announced it would sell its high-definition movies exclusively on Blu-ray discs. The shift gave the Blu-ray camp about 70% of the home video market, with Warner, Walt Disney Co., 20th Century Fox, Lions Gate Entertainment Corp. and Sony Pictures.
Toshiba has deals with Universal Pictures and Paramount Pictures and DreamWorks Animation SKG Inc. Toshiba could not be reached Friday to comment on Wal-Mart's announcement. In a sign of the high stakes in this format war, the Tokyo-based Toshiba said in a December earnings call that it anticipated losing $370 million on its HD DVD equipment this fiscal year, which ends in March.
Before Warner's defection, Toshiba had been in active discussions with Fox and Warner to secure support for the format. It sought an exclusive content deal with Fox similar to one it reached in August 2007 with Paramount and DreamWorks in which it reportedly offered $50 million to $100 million for Fox to abandon Blu-ray, according to two industry sources. Fox ultimately walked away from the offer.
Toshiba had hoped to use the lure of a potential Fox deal as a sign of its continued turnaround in an effort to retain Warner's continued support for the HD DVD format.
Warner's Jan. 4 announcement that it could no longer support both HD DVD and Blu-ray triggered a major shift in momentum in a format war that has been likened to the epic Betamax-VHS videocassette battle of the 1980s.
Up until January, Blu-ray and HD DVD each accounted for an equal share of dedicated high-definition movie players, according to sales data tracked by NPD. In the week following the Warner announcement, Blu-ray sales skyrocketed -- grabbing 90% of all next-generation hardware purchased, according to NPD.
Toshiba responded with a price cut Jan. 15 on three models of HD DVD players, which helped it regain lost ground. But NPD numbers show that Blu-ray retained the edge, with 63% of sales. In an act that some called a last gasp, Toshiba touted its discounted players in an ad that ran during the Super Bowl, noting that they also worked as high-end DVD players.
This week Toshiba issued a statement saying it was studying recent developments and watching how the market would respond to its recent price cuts.
HD DVD movie sales have declined as well.
At the end of 2007, Blu-ray accounted for 64% of sales. The latest Nielsen VideoScan First Alert sales data show that Blu-ray represented 81% of all high-definition discs sold in the week ended Sunday.
Wal-Mart's decision, which it said came in response to consumer preference, may make Blu-ray's lead insurmountable. Wal-Mart accounts for roughly 40% of all DVDs sold in the U.S.
"It's difficult to see how the format could be viable without access to those movies at Wal-Mart," NPD's Rubin said.
Andrew Parsons, chairman of the Blu-ray Disc Assn.'s U.S. promotions group, said Wal-Mart's news signaled that the format war was all but over.
"People who've been holding back because they've been afraid to buy the wrong format have absolutely no reason to be afraid anymore," Parsons said. "There's absolutely no reason why anyone should be afraid to buy a Blu-ray player at this point."
Nonetheless, Envisioneering Group senior analyst Richard Doherty predicted that Toshiba would continue to support the HD DVD format, which it has also incorporated in products such as its Qosmio laptop computers. However, it may reduce the number of HD DVD players it manufactures to a single model.
"They will never admit this isn't working," Doherty said. "They'll just trim the inventory."
Taps for HD DVD as Wal-Mart Backs Blu-ray
HD DVD, the beloved format of Toshiba and three Hollywood studios, died Friday after a brief illness. The cause of death was determined to be the decision by Wal-Mart to stock only high-definition DVDs and players using the Blu-ray format.
There are no funeral plans, but retailers and industry analysts are already writing the obituary for HD DVD.
The announcement by Wal-Mart Stores, the nation’s largest retailer of DVDs, that it would stop selling the discs and machines in June when supplies are depleted comes after decisions this week by Best Buy, the largest electronics retailer, to promote Blu-ray as its preferred format and Netflix, the DVD-rental service, to stock only Blu-ray movies, phasing out HD DVD by the end of this year.
Last year, Target, one of the top sellers of electronics, discontinued selling HD DVD players in its stores, but continued to sell them online.
“The fat lady has sung,” said Rob Enderle, a technology industry analyst in Silicon Valley. “Wal-Mart is the biggest player in the DVD market. If it says HD DVD is done, you can take that as a fact.”
Toshiba executives did not return calls asking for comment. Analysts do not expect the company to take the product off the market but the format war is over. Toshiba had been fighting for more than two years to establish the dominance of the format it developed over Blu-ray, developed by Sony.
The combined weight of the decisions this week, but particularly the heft of Wal-Mart, signals the end of a format war that has confounded and frustrated consumers and that had grown increasingly costly for the consumer electronics industry — from hardware makers and studios to retailers.
Andy Parsons, a spokesman for the Blu-ray Disc Association, an industry trade group, said retailers and movie studios had incentives to resolve the issue quickly because it was costly for them to devote shelf space and technology to two formats. Besides, he noted, many consumers have sat on the sidelines and not purchased either version because they did not want to invest in a technology that could become obsolete.
Thus far, consumers have purchased about one million Blu-ray players, though there are another three million in the market that are integrated into the PlayStation 3 consoles of Sony, said Richard Doherty, research director of Envisioneering, a technology assessment firm. About one million HD DVD players have been sold.
Evenly matched by Blu-ray through 2007, HD DVD experienced a marked reversal in fortune in early January when Warner Brothers studio, a unit of Time Warner, announced it would manufacture and distribute movies only in Blu-ray. With the Warner decision, the Blu-ray coalition controlled around 75 percent of the high-definition content from the major movie and TV studios. The coalition includes Sharp, Panasonic and Philips as well as Walt Disney and 20th Century Fox studios.
Universal, Paramount and the DreamWorks Animation studios still back HD DVD; none of those studios responded to requests for comment Friday.
“It’s pretty clear that retailers consumers trust the most have concluded that the format war is all but over,” Mr. Parsons said. “Toshiba fought a very good battle, but the industry is ready to move on and go with a single format.”
Because movie and entertainment technology has become integrated into a range of consumer electronics, the high-definition movie format war has created unusually wide-ranging alliances. The battle included, for example, video game companies; Microsoft has backed the HD DVD standard and sold a compatible player to accompany its Xbox 360 video game console.
Sony has pushed vigorously for the Blu-ray standard, not just because it is a patent holder of the technology, but also because it has integrated the standard into PlayStation 3. Sony has argued that consumers will gravitate to the PlayStation 3 because of the high-definition movie player.
Any celebration over the victory may be tempered by concerns that the DVD — of any format — may be doomed by electronic delivery of movies over the Internet. The longer HD DVD battled Blu-ray, the more the consumer market has had an opportunity to gravitate to downloading movies. Such a move, coupled with the growth of technology that makes such downloading easier and cheaper, has threatened to cut into the long-term sales of physical movies in the DVD format.
Mr. Doherty, like Mr. Parsons, argued that digital downloads are not yet affecting the DVD market and that they would not for some time. They said that movie downloads face a host of challenges, chief among them that many consumers have insufficient bandwidth to download movies or move them from device to device on a wireless home network.
Mr. Enderle, however, argued that bandwidth was improving and that major telecommunications carriers, which are pushing to increase speeds, would like to be able to make their pipes the delivery mechanism for high-definition movies. Wal-Mart, Warner Brothers, Best Buy and all the others lining up behind Blu-ray realized they had to kill HD DVD — and fast, he said.
“The later it gets, the much worse it gets,” he said.
By contrast, Mr. Parsons said that downloading movies “is not a viable option now or even in the near future.”
“It’s something that will move very gradually in that direction.”
Posted by SANJIDA AFROJ at 3:53 PM | 1 | 2 |
<urn:uuid:28995db4-c68d-49d8-bcca-d5a53999a5bb> | Probate is the court supervised process for transferring a decedent’s estate to the beneficiaries named in the will. The term "probate" is derived from the Latin term meaning "to prove the will". Probate refers to the process where a court oversees the administration of a deceased person's estate. To ensure that the decedent's final matters and wishes are handled correctly and without bias, California has probate courts (or special departments of the court) to oversee the settling of estates. Probate may occur even if there is no will. If the decedent died without a will, the decedent is said to have died "intestate", and the decedent's estate will be distributed to the decedent's "heirs-at-law" as defined by the California Probate Code.Purpose of Probate
The purpose of the probate process is to ensure that:
Any final bills and expenses are paid, including any taxes owed;
Any assets remaining are distributed to the beneficiaries named in a will; or
If the decedent died intestate, any assets remaining are distributed to the correct heirs under the laws of intestate succession. California's intestate succession scheme can be found under California Probate Code Sections 6400-6402.5. To view the applicable codes, click here.
Distributions will vary depending on whether or not the decedent was married, and if the decedent was married, whether the property to be distributed was separate or community in nature.
Community property is generally defined as property acquired during marriage, using funds earned during the marriage, while living in a state that recognizes community property.
Separate property is generally defined as property acquired by gift or inheritance, or property acquired using separate property funds.
If the decedent was not married at the time of death, the decedent's estate will generally be distributed as follows:
- Divided among the decedent's children, in equal shares. If a child is deceased, but left surviving children (grandchildren of the decedent), the deceased child's share will be divided equally among the deceased child's children.
- If there are no living children, to the grandchildren in equal shares.
- If there are no living children or grandchildren, to the great-grandchildren, in equal shares.
- To the decedent's parents, equally, or if only one is living, to the sole living parent.
- Brothers and sisters equally.
- Surviving grandparents, equally.
- Descendants of grandparents, such as aunts, uncles and cousins.
- Descendants of a predeceased spouse (step-children).
- Parents or surviving parent of a predeceased spouse.
- Descendants of the parents of a predeceased spouse (brother-in-law or sister-in-law).
- Next of kin or nearest relative.
- Next of kin or nearest relative of a predeceased spouse.
- State of California.
If the decedent was married at the time of death, community property will pass to the surviving spouse. Separate property will be distributed as follows:
If there is one child of the decedent, one-half will be distributed to the surviving spouse (or domestic partner) and one-half to the surviving child. If there is more than one child of the decedent, one-third will be distributed to the surviving spouse or domestic partner, and two-thirds will be distributed in equal shares to the children. If there is a deceased child, the children of the deceased child will take his or her share.
If there are no children or grandchildren, one-half to the surviving spouse or domestic partner, and one-half to the decedent's parents equally (or one-half to the surviving parent if only one parent is then living).
If there are no children, grandchildren or parents of the deceased, then one-half to the surviving spouse and one-half to be divided equally among the decedent's brothers and sisters. If there are any deceased brothers or sisters, the children of the deceased brother or sister share equally in their parent's share.
If there are no children, grandchildren, parents, nieces or nephews, then all of the separate property will be distributed to the surviving spouse or domestic partner.
Note: To qualify as a domestic partner for intestate succession purposes, the parties must have completed and filed with the California Secretary of State a "Declaration of Domestic Partnership" and not revoked this Declaration prior to the decedent's death.Assets Requiring Probate
Probate is required when the decedent had assets which:
- Do not pass by right of survivorship to a surviving joint tenant;
- Do not pass to a named beneficiary (such as a beneficiary of a life insurance policy, the beneficiary of a payable-on-death (POD) account, or the beneficiary of a retirement account); and
- Exceed $100,000 in value.
Probate may also be required for an interest in real property exceeding $20,000 in value.Initiating the Probate Process
The first step in the probate process is to file the original will with the Probate Court Clerk in the county where the decedent resided, within 30 days of the death of the decedent. The next step is the file the "Petition to Probate Decedent's Estate" which includes certain information about the decedent, such as a rough estimate of the value of the decedent's estate and nature of the decedent's assets, names and addresses of all persons named in the decedent's will, and names and addresses of persons who would inherit from the estate under intestate succession.
In the petition, the petitioner is either asking the court to name the petitioner as personal representative of the estate, or is nominating someone else to act as personal representative. The personal representative is the person who is responsible for overseeing the administration of the decedent's estate throughout the probate process. If the decedent died with a will, the court will appoint the Executor named in the will as the personal representative of the estate. If the decedent died intestate, the court will appoint an Administrator of the estate. The Administrator is usually a person (typically a relative of the decedent) or entity (such as a bank or trust company) nominated by the decedent's next-of-kin.
Under the California Probate Code, the Petition for Probate is to be heard by the court not less than 30 days nor more than 45 days from the date the petition is filed. However, in many counties the courts are backlogged with probate cases, and it is not unusual in some counties for the hearing to be set 60 days or more from the date of filing.The Disadvantages and Advantages of Probate
The primary disadvantages of probate are:
The time associated with the probate process. Probate in the State of California typically takes several months to complete. If the estate is large or complex, if there are multiple interests in real property or other assets that must be sold during the probate process or if there are disputes among the beneficiaries, the process may be lengthened considerably. Some heavily contested estates have taken years to complete the probate process.
Lack of privacy-probate is a public process and any interested person can obtain information on the size of your estate, what assets you owned at the time of your death, how much you owed creditors, and how much you left to the beneficiaries of your estate.
The high cost involved, which may include court filing fees, probate referee fees, attorney fees, and fees for the personal representative. Fees for attorneys and personal representatives are statutory, and are set by California Probate Code Section 10810. The fees are on a sliding scale depending on the size of the estate.
The fees set forth in CPC Section 10810 do not take into consideration any mortgages, debts or liens; therefore, if the decedent owned a home appraised at $1 million, this value will be used for the purpose of calculating attorney's and personal representative's fees, even if the property has a mortgage of $900,000. Using this example, to probate a home with a value of $1 million, the applicable attorney's fees would be $23,000 for the attorney's fees and $23,000 in personal representative's fees, for a total of $46,000.
The fees set forth in CPC Section represent the maximum statutory fees an attorney may charge for ordinary probate services. In addition, in complex estates or estates which may require "extraordinary" services, the court may allow for added attorney's fees in addition to the statutory fees for ordinary services performed.
To view California Probate Code Section 10810, click here.Avoiding Probate with a Living Trust
Assets held in a living trust do not require probate. When the person who created the trust (referred to as the "trustor" or "settlor" of the trust) dies, the assets in the trust are distributed to the persons named, and in the manner specified, by the trust. The person or entity named in the trust as the successor trustee oversees the administration of the trust after the trustor or settlor dies.
Living trusts offer other advantages as well; they provide a plan for incapacity of the trustor and they allow the beneficiary to receive a "stepped up cost basis" to date of death value for capital gains purposes. To discuss these and other advantages of the living trust in detail, contact the San Diego estate planning firm of Law Offices of Scott C. Soady, APC by e-mail, or call us toll-free at (877) 435-7411 within California, or (858) 618-5510 outside of California to schedule a free in-house consultation.Probate Resources
For further information on probate, visit the State Bar of California website. | 1 | 3 |
<urn:uuid:1fa7b313-51da-4ff6-84f3-83fa361e4fbb> | This is the easiest way to troubleshoot but is often overlooked. Even though these may appear to be obvious, it is good to start with the basics.
Is there power to everything?
Is it all turned on?
Are the cables connected correctly?
Do you have a link light on consistently?
Could it be a bad cable?
Is the router overheated?
Could there be environmental factors such as where it is located?
If it is a wireless router, is there anything interfering with it, such as a microwave, metal, or thick walls between the router and the computer?
Run Connectivity Tests from the Web-Based Utility
The router must be able to communicate with other devices in the network and out across the internet in order to conduct business. There are a few ways to check for connectivity.
First, verify the IP address settings on the computer connected to the Local Area Network (LAN) port of the router. By default the DHCP feature is enabled on the router, so you may keep your Network Interface Card (NIC) settings on your computer as "Obtain IP address automatically". This allows your computer to get an IP address from the router. Then verify that the router's LAN address is reachable from the computer using the ping command.
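For example, from a command prompt or terminal on the connected computer, a quick reachability check looks like this (assuming the router is still at its default LAN address of 192.168.1.1):

    ping 192.168.1.1

Replies mean the computer can reach the router's LAN interface; timeouts or "unreachable" errors point to a cabling, adapter, or addressing problem.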
Log into your router directly and use the Graphical User Interface (GUI). In your web browser, enter the IP address of the router. Enter the credentials. If you did a factory reset, or this is the first time you are entering credentials, the default IP address is 192.168.1.1 and the credentials are cisco for both the Username and Password.
Note: If you forgot the IP address of the router and you don't have a specific configuration that you need to keep, you can reset to factory defaults on the physical device. Open a paperclip and insert the end of it into the small recessed reset button. Hold for 10 seconds and you should see the lights on the device light up. It will take at least a few minutes to boot back up. Your IP address will revert to 192.168.1.1.
To get to the navigation pane, you click on the blue circle icon as shown below.
On the navigation pane, select Administration > Diagnostic. From here you can do a Ping, Traceroute to an IP Address, or perform a DNS Lookup.
To do a ping using the GUI, type in the IP address that should have the ability to communicate with your router and click Ping. You can enter the IP address of a different connected device within your network, or you can select a reliable one that you know outside of your network.
If your router is able to communicate with the IP address, packets will be returned along with statistics. The picture below shows a successful ping, therefore network connectivity is not the issue in this case.
To perform a trace on an IP you would click Traceroute. In the outcome of your traceroute, you will see "hops" from one router to the next. "Hop" 1 starts with your local router, then your Internet Service Provider (ISP) router. It then "hops" to the router on the edge of the network of the ISP, and across more routers to get to the destination. If the first two or three "hops" are successful, the problem is an issue outside of your network. Try another IP address or Domain Name to receive a successful traceroute.
To perform a Domain Name System (DNS) Lookup, you would type in an IP address or domain name and click Lookup. If the lookup returns details about the IP address or domain name, your DNS server is configured correctly and reachable.
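If you prefer, the same two checks can also be run from a command prompt on a connected computer (the destinations below are just examples):

    tracert 8.8.8.8
    nslookup cisco.com

On macOS or Linux the equivalent of tracert is traceroute. In either case, the first hop listed should be your router's LAN address; if even that hop fails, the problem is inside your own network.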
Another option is to Reboot or do a Return to factory default settings after reboot. Keep in mind that if you choose Return to factory default settings, all configurations will be lost. This can sometimes fix the issue if something was changed from the default settings and caused the issue to occur. If you choose Return to factory default settings including certificates after reboot you will need to reload certificates.
Explore Status and Statistics
Explore each of the other Status and Statistics options on the navigation pane starting with System Summary.
System Summary shows your serial number, the amount of time that your router has been up for, the current time, port status, VPN status, and firewall status. It also lists the current firmware and language version. If either is not the latest version, you should go to Cisco Support and upgrade the firmware or language version. This could potentially solve your issue since upgrades often contain bug fixes. If you would like to be guided through the upgrade firmware process, click here.
Once you have upgraded the firmware image, you would need to activate that image and reboot, which will cause the older firmware image to be inactive.
Return to System Summary to ensure the firmware and language have been upgraded.
Check out Status and Statistics > Port Traffic for issues.
The Port Traffic page includes:
Port ID – the number that identifies the physical port
Port Label – the label assigned to the port
Link Status – connection status on each port, if it is up or down
RX Packets - total number of packets received through the interface
RX Bytes – total bytes received
TX Packets – total number of packets transmitted
TX Bytes – total number of bytes transmitted
Packet Error – errors that occurred when sending or receiving packets
This section of the Port Traffic page, Port Status, includes:
Link Status - the port is connected or not connected
Port Activity - enabled or not
Speed Status - type of speed that port is using
Duplex Status - set to full or half. If you are using older hardware that can only use half duplex, you may have to change the setting to match.
Auto Negotiation - How two connected devices choose common transmission parameters, including the speed and flow control. It is recommended that this be enabled.
If you are using a wireless router, Wireless Traffic will be part of your Port Traffic page.
Check out Status and Statistics > View Logs to look for errors and missing connections.
There are several options of what to look through in View Logs. Logs are created often, so it may be hard to sort out the information you need without using the filtering feature.
These are some examples of Logs:
Explore Firewall Settings
Explore Firewall > Basic Settings to see if you have blocked anything that might be causing the problem.
Here is a standard configuration for Basic Settings. If you can't ping the Wide Area Network (WAN) of the router, this is where you can check to see if Block WAN Request is enabled. If you can't remotely access your web configuration page, the problem might be that you didn't enable Remote Web Management.
It may be possible that you have one or more of these enabled and that is causing the issue.
Explore Security Settings
Check the Security Settings for both Content Filtering and Web Filtering. It is possible you configured something there that is preventing network access.
Content Filtering enables you to restrict access to certain unwanted websites based on the domain names and keywords.
Web Filtering allows you to manage access to inappropriate websites. It can screen a client's web access request to determine whether to allow or deny that website.
Content Filtering can be checked to see if there is anything preventing network access. If you received a message that you were blocked from a specific page or employees report that a specific site is being blocked, this is the location to check that.
Web Filtering is one more place to see if that might be the issue.
If you would like more details on the navigation pane options, click on the question mark on the top right of your GUI screen.
Once you have selected the question mark, a new screen will open and an expandable section will appear that is in the same order as the navigation pane.
Once you click on one of the sections, a list of topics will expand beneath it. Select the area you want more information on and it will open up. In this example Firewall > Basic Settings was selected. There is also a search feature on the top right of the screen if you are not sure where to look for a certain question.
Check Default WAN Address on Modem or Dongle
Some modems and dongles come with a default Wide Area Network (WAN) address of 192.168.1.1. IP addresses that start with 192.168.x.x are reserved for private IP addresses and cannot be a true WAN address. These modems and dongles translate the IP address to a WAN address before going out over the internet, but 192.168.1.1 is still shown as the WAN IP address in these networks. This causes issues because the default IP address for the Local Area Network (LAN), on the RV160 and RV260 is also 192.168.1.1.
If any two devices on a network have the same IP address they cannot communicate. If you are having issues with connectivity, this could be the problem. You may have even received an IP address conflict notification. You cannot change the IP address in the modem or dongle, so the solution is to set the LAN IP to be on a different subnet. This should fix the issue of connectivity.
To create a new subnet, the third octet, or the third set of numbers in the IP address, has to be different from 1. It can be any number between 2 and 254. Therefore, VLAN 1 could be set to 192.168.2.x, with an IP pool range anywhere from 192.168.2.1 – 192.168.2.254. In this example, we will change the LAN address to 192.168.2.1.
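As a concrete sketch (the values are only an illustration of the example above), the addressing plan after the change might look like this:

    Modem/dongle WAN side:  192.168.1.1  (cannot be changed)
    Router LAN / VLAN 1:    192.168.2.1  (the new address you browse to)
    Client devices:         192.168.2.x  (assigned from whatever DHCP range you configure)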
Note: If you use a Mac computer, you would select the gray gear icon to get into settings.
Step 1. To find out if this is your issue and you use a Windows operating system, you have two simple options on the Graphical User Interface (GUI).
Note: If you prefer to use the command prompt, you can enter ipconfig /all.
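For example, from a Windows command prompt:

    ipconfig /all

Look for the "IPv4 Address" and "Default Gateway" entries for your active adapter; the default gateway is your router's LAN address.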
Option 1- right click on the computer icon on the bottom right of your screen.
Select Open Network and Internet settings.
Option 2- Click the window icon and then the gear icon on the bottom left of your screen.
Option 2 continued- Select Network & Internet.
Step 3. Either option brings you to this screen. Select View your network properties.
Step 4. You will then see a View your network properties list. The default gateway address is the LAN IP address of your router.
Step 5. To find the WAN IP address, you access the router on your network that connects to the internet. You need to enter the IP address of the router into your web-browser.
Step 6. In this example, the WAN IP is listed as 192.168.1.1. Therefore, the LAN IP address will need to be changed to a different subnet.
Edit the IP Address of your LAN
This section is not generally recommended, but is necessary if your WAN IP address shows as 192.168.1.x.
Step 1. Log into your RV160 or RV260.
Step 2. In the left-hand menu-bar click the LAN button and then click VLAN Settings.
Step 3. Select the VLAN that contains your routing device, then click the Edit button.
Step 4. Enter your desired static IP Address. Check that the Range Start and Range End have changed to be in the same subnet as the IP Address of the VLAN. If this has not updated, you will have to change it so it is in the same subnet.
Step 5. Click Apply in the upper-right hand corner.
Step 6. Click Save.
Step 7. (Optional) If your router is not the DHCP server/device assigning IP addresses, you can use the DHCP Relay feature to direct DHCP requests to a specific IP address. The IP address is likely to be the router connected to the WAN/Internet. Be sure to save your changes.
IP Address Changes After Subnet Change
By default, IP addresses are dynamically assigned by a DHCP server. Therefore, your network, by default, receives a dynamically assigned IP address in the subnet of the local LAN address pool. Once you change the subnet, you may need to restart the devices so they can be assigned a new IP address in the 192.168.2.x subnet.
All devices in the network need to be on the same subnet as the LAN. The DHCP server should do this automatically. If it doesn’t change automatically, you should unplug the Ethernet cable and plug it back into the device. If a device in the network still hasn’t switched over to the new subnet of 192.168.2.x, you can turn the device off and then back on again.
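On a Windows client you can also renew the DHCP lease from a command prompt instead of power-cycling the device:

    ipconfig /release
    ipconfig /renew

After the renew, ipconfig should show an address in the new 192.168.2.x range.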
You can see the IP addresses and MAC addresses of connected devices by navigating to Status and Statistics and then Connected Devices.
For more information, see the separate guide on configuring WAN settings for your internet connection.
As usual, you're working under a tight deadline. Your client is getting angrier by the minute because the graphic you produced for him doesn't look good in print, even though it looked fine on your monitor. Now time's running out and you're wracking your brain trying to figure out what went wrong. Here's what you can do to make sure it never happens again.
In 1993, Adobe, Agfa-Gevaert, Apple, Kodak, Microsoft, Sun Microsystems, and Taligent formed The International Color Consortium (ICC). The intent of this consortium of industry leaders was to develop a standardized, open, vendor-neutral, and cross-platform color management system. They succeeded, and the result of their collaboration was the development of the ICC profile specification. Now with over 70 members, the ICC proposes standards for creating cross-platform device profiles. In other words, the ICC works to get us consistent color output from the plethora of devices and computer systems on the market today, regardless of who manufactured it, the operating system being used, and what the device may be.
So why calibrate? It's pretty simple, if you think about it. With the huge variety of professional/industrial and consumer video cards, monitors, printers, scanners, and cameras available, there's an equally huge variation in output. Something as simple as replacing your ATi video card with one from Nvidia could cause things to look very different on your system, even though your monitor hasn't changed. Output can even vary across two monitors of the same make and model, as you may have noticed if you've got dual displays on your system that are both plugged into a single video card. Printers can vary from one manufacturer to another, and even using generic ink or different types of paper in your printer can cause different results. To further complicate things, neither monitors nor printers can reproduce the entire range of colors visible to the human eye. CMYK is particularly troublesome because it has a different and smaller color reproduction range than the RGB system used on monitors. In case you're wondering, CMYK is the color model used for printing. The name stands for Cyan, Magenta, Yellow and blacK. The letter K is used for black so that people don't confuse it with the B in Blue. RGB is the color model used by monitors, scanners and digital cameras, and RGB stands for Red, Green and Blue. RGB is additive while CMYK is subtractive. Add Red, Green and Blue to get White. Add Cyan, Magenta and Yellow to get Black (or Dark Brown).
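As a rough illustration of why the two models can disagree, the naive, profile-free conversion from normalized RGB values (0 to 1) to CMYK is often written as

    K = 1 - max(R, G, B)
    C = (1 - R - K) / (1 - K)   (and similarly for M and Y, when K < 1)

Real ICC-based conversions are far more involved than this, taking device gamuts and rendering intents into account, which is exactly why the profiles discussed below matter.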
Now we're going to explore color management and attempt to calibrate Photoshop, a monitor, printer, scanner, and even a digital camera, to ensure that the color output is as consistent from one device to another as possible, whether the device's color space is RGB or CMYK. First we'll create a color profile and bring data in through the digital camera and scanner, then display it on the monitor and then finally output it to a printer, all the while comparing, noting, and tweaking the results. By calibrating your monitor and creating an ICC profile, you're ensuring that your monitor isn't displaying too much of any particular color and that grays are as neutral as possible. You want to make sure that the colors in your images are being displayed accurately and consistently, and that they will continue to be so in the future.
Figure 1 A scanned photo of my daughters (I just wish Erin had kept her eyes open).
Getting Down to Business
The first thing to tackle is your monitor. If your monitor is out of calibration then every image you produce will be as well. Viewing your work through rose-colored glasses is not a good way to go in this case. You need to see things as they really are, or as close as you can get them anyway.
You've basically got two calibration options unless you choose to go with a third-party calibrator. Actually, you only have one if you discount the option of going with the default hardware setting on your printer, monitor, or scanner. (This is a no-brainer once you consider the fact that these devices have a wide range of factory default color settings.) So if you're on a PC you have the option of using Adobe Gamma, or on a Mac you can use the Display Calibrator Assistant, both included as part of their respective operating systems. The latter is the preferable option, considering the ICC based its specification on Apple's ColorSync profile format, although it's still not as accurate as using a color calibrator.
Adobe Gamma and Apple's Display Calibrator Assistant
If you're a PC user, you can use Adobe Gamma to roughly calibrate your monitor. I say "roughly" because like me, you're probably a human being and not a machine, which means you may have subconscious preferences for certain colors (my favorite is blue). For example, when I try to select a neutral gray square during the Gamma adjustment portion of Adobe Gamma's setup, I may very well go with a square that's slightly tinted with blue. If you do choose to go with Adobe Gamma, here's the best way to do it. It's best to adjust the lighting in your room to a setting that you usually work with. Overhead lighting is always a bad idea since it can cause screen glare, as can light from a nearby window, so it's best to leave your lights off and your blinds closed while you work in Photoshop.
With Windows, you'll do the following:
- Select Start --> Control Panel --> Adobe Gamma. From here you can choose either the Wizard or Control Panel. The Wizard is easier, and presents you with the same options as the default Control Panel, so let's go with that.
- Click Wizard at the bottom right of the dialog. It's a good idea to add a name in the Description: field so you can recognize your new profile when you go to load it in Photoshop later.
- Click Next. Set your monitor's contrast to maximum, as suggested. Then, adjust the brightness so that the smaller box, in the center of the black box, is as dark as possible while still remaining visible. Be sure to keep the surrounding white box as white as possible.
- Click Next and then select the phosphors for your monitor. (Mine's a Sony so I selected Trinitron.) You may need to refer to your monitor's manual or do an online search to be sure what your monitor uses.
- Click Next again to move to the Gamma setting section (shown in Figure 2). Deselect "View Single Gamma Only" so that you can view the gamma settings for each of your Red, Green, and Blue channels. Use the sliders to adjust the gamma setting so that the center box "fades" or blends into the surrounding box for each color. A useful tip here is to squint at the boxes to make it easier to see solely the intensity of the colors and not the lines surrounding each box. This makes it possible to get a good match easily and quickly.
Figure 2 Setting Red, Green, and Blue Gamma.
- Set the desired gamma setting fly-out to match your operating system. Here we'll choose 2.20, the Windows Default. Make sure that you're happy with your adjustments.
- Click Next again. Here you'll set your hardware white point by clicking on the Measure button and selecting from the gray boxes I mentioned earlier. The idea here is to choose a neutral gray. After I adjusted the white point using the Measure button, Adobe Gamma chose 6500 k (daylight) for me. You may need to select "Same as Hardware" if your monitor is already adjusted to the correct white point measurement.
- Click Next again to move to the final step. Here you can select the Before and After radio buttons to see the difference between your original and adjusted monitor settings.
- Click the "Use as Default Monitor Profile" box and then select the
Finish button to save your settings. When you look at the white point boxes
shown in Figure 3, you'll notice that none of them is a
pure gray. That's one of the shortcomings of using Adobe Gamma to adjust
Figure 3 The gray boxes in Adobe Gamma's white point dialog are all slightly tinted.
On a Mac, you can also use either Adobe Gamma or the Display Calibrator Assistant. To use the Assistant on Mac OS X:
- Choose Apple --> System Preferences --> Displays --> Color.
- Click the Calibrate button and follow the steps. For further accuracy, you can use the DigitalColor Meter.
- Open it by selecting Applications --> Utilities --> DigitalColor Meter. Use it to sample the colors in your Photoshop Swatches palette and check them for accuracy.
Unfortunately, if you've got an LCD monitor, your options are kind of limited since Adobe Gamma wasn't designed to work with LCDs. This is where a 3rd party calibrator like the Spyder2Pro comes in.
I was quite impressed with my results using Adobe Gamma—until I saw the vastly different results I got using the Spyder2Pro. At first, I couldn't believe the difference and thought there must be something wrong. I assumed I had a faulty unit, and actually requested and received a replacement. When I got the exact same results with the second unit, I knew it was my perception that was faulty. Once I got over my shock, I decided to use the profiles I'd created with the Spyder for my LCD and CRT displays.
With the Spyder2Pro you can adjust gamma, color temperature (white point), and luminance, allowing for the best flesh tones and the purest grays. Opinions on how often to calibrate your monitor vary from as often as once per week to as little as once per month. Recalibration needs to happen because monitors drift out of calibration and color quality degrades with age, but, as a rule of thumb, calibrating every two weeks is probably adequate for most users.
Your workflow will vary a little, depending upon whether you're calibrating a CRT or an LCD monitor. Other factors include the kinds of controls your particular monitor has for adjusting its output.
With CRTs you might have RGB sliders, a Kelvin slider, or Kelvin presets for adjusting color. You can also adjust the gamma by selecting from a list of presets, entering a number of your own, or creating a custom gamma curve. You can set the white point to Native, select it from a list, or enter your own setting.
Figure 4 The Spyder2Pro doing its thing on a CRT monitor.
For LCDs, you might have brightness, contrast, and backlight controls, plus the previously mentioned controls such as RGB sliders, a Kelvin slider, or Kelvin presets for adjusting color.
Let's step through setting up both a CRT and LCD monitor. Note that we won't get into the advanced settings for the Spyder2Pro (such as measured luminance), because they're beyond the scope of this article.
For setting up a CRT monitor:
- Install the Spyder2Pro calibration software and enter your name and serial number. You should be greeted with a welcome screen that explains what will be adjusted as you work through the steps.
- Select Next to see a screen that cautions you to allow your monitor to warm up, turn off any screensavers and adjust the lighting in your room so that there's no overhead light hitting the screen. It also advises you to set your video display to at least 16-bit color, preferably 24-bit.
- Hit Next again to select your monitor (if you have two, otherwise it will just default to your main display).
- Move to the next screen to select your monitor type. My main display is a Sony CRT so I'll select CRT from the list.
- On the next screen, select your target gamma setting and white point. 2.2 and 6500k are the default settings, so you can either choose that setting or select 2.2-Native, which will use your monitor's current white point.
- On the next screen you'll select Visual as the Luminance Mode. You can also select Measured, but as I mentioned that's a more advanced topic that won't be covered here.
- Click Next to review your settings.
- Click Next again to identify the controls on your monitor. I have options to use all three types of controls, but my monitor defaults to a Kelvin Slider, so I'll go with that. However, if you go with Native, note that you may not see the Identify Controls screen.
- Click Next. This moves you to the white level setting screen. Like Adobe Gamma, this screen allows you to adjust your contrast to get the best white balance.
- Click Next. On this screen you'll set your brightness or black level manually. This is where Spyder's similarities to Adobe Gamma end.
- Click next to move to the next screen, which involves preparing the Spyder to calibrate your display. You'll need to remove the LCD baffle, which exposes the suction cups used to affix the Spyder to your CRT's screen.
- Click Continue to move to the next screen and place the Spyder according to the instructions.
- Click Continue again to start the calibration process. The Spyder will now do its thing, and take readings of your Red, Green, Blue, and Gray levels, as well as your white and black points. When it's finished, it will create a profile for your monitor and ask you to give the profile a name.
- Finally, it moves to a screen that warns you not to change your brightness or contrast settings, and gives you the option of quitting the program or calibrating another monitor.
We may as well calibrate the secondary monitor while we're at it. This one's an LCD so we'll indicate that on the monitor type list.
This particular monitor has Brightness and Contrast controls, so as we move to the Identify Controls list, I'll select their check boxes and move to the next screens where we'll adjust the White Luminance, then the Black Luminance.
- First, we need to identify the color controls, which in this case consist of RGB Sliders. With that checked, let's move to the next screen.
- Here, you'll learn the process for setting the monitor up to achieve a proper color temperature (white point). You need to replace the LCD baffle at this point to protect the display's surface, and then continue to the RGB Levels screen where the Spyder takes Red, Green and Blue samples, reads the white point, and then brings up an RGB Gain Control display to show you the colors that need adjusting.
Figure 5 RGB Gain adjustment of an LCD monitor.
- Now you need to go into your LCD's setup and increase or decrease the RGB levels as indicated in the software's RGB Levels dialog. You may need to do this several times before you manage to get the colors within the allowable 0.5 difference range.
- Click the Update button to take a new reading and repeat until you've achieved the desired results, then click Continue.
- At this point the software reads the monitor's black point, red, green and blue samples, gray samples and verifies the color temperature. Once it's finished you're taken through the same steps you were for the CRT monitor, starting with Step 13 above.
- Give the profile a sensible name and then quit the program.
Windows can be funny, and the profiles I created didn't show up in the Profile Chooser installed with the Spyder software. If this happens to you, here's how to fix it:
- Right-click on your desktop.
- Choose Properties --> Settings and then choose the Advanced button.
- Click on the Color Management tab, choose Add and then select your profile from the list. Unfortunately, you can't assign a separate profile to your secondary monitor in the Display Settings unless it's connected to its own video card. You can, however, add the profile so that it appears in the Profile Chooser software's profile list.
- Click on your default monitor's profile.
- Control-click your secondary monitor's profile to select that as well, and then click Add, OK to dismiss the Color Management dialog. Hit OK again to dismiss the Display Properties dialog.
- Now you can open the Profile Chooser and select your profiles. A window opens on each monitor and you can select the appropriate profile for each, but you'll need to repeat this step to reset your secondary monitor's profile every time you reboot Windows. It's kind of a pain, but worth it if you want your monitors to appear properly calibrated. The advantage here is that if you have two of the same monitor you can apply the same profile to both.
Figure 6 A screen photo of the adjusted CRT, color-corrected to approximate the results.
Whether you've used Adobe Gamma, Display Calibrator Assistant, a Spyder, or other device to calibrate your monitor, you'll need to set up Photoshop in order to use the profile you created. Here's how you do it:
- Open Photoshop and choose Edit > Color Settings (Photoshop > Color Settings on a Mac).
- Choose Load RGB in the RGB: fly-out in the Working Spaces area within the dialog and select your profile from the list that appears.
A good profile name comes in handy here -- I called mine "1-SONY GDM-F520 March 18.icm" which indicates that it's monitor 1 and makes it easily recognizable by the date and monitor name. You can also opt to use a different working space in Photoshop, especially if you're creating Web graphics and images intended to be viewed on a monitor. But, if you want to print and maintain consistent color across multiple devices, then a custom profile is the way to go. Note that if you're going to use one of Photoshop's predefined profiles, choosing Web Graphic Defaults from the Settings fly-out will load the sRGB IEC61966-2.1 profile for the RGB working space. However, sRGB has a smaller gamut and may not print certain colors to your expectations.
Color management and printing is a little trickier. Monitors use the RGB color model and can display 16.7 million colors, but printers, on the other hand, use the CMYK color model, which can reproduce considerably fewer colors. In turn, each monitor or printer operates within a certain color space, which determines its gamut or color range.
The Spyder2Pro ships with DoctorPro software to help with printing, but unfortunately, I found the software to be more trouble than it's worth. Instead, I just stuck to using my custom profile and used the CMYK output, which I set to U.S. Web Uncoated v2. After some experimentation, I used File --> Print with Preview (shown in Figure 7), and loaded my printer's profile (an Epson Stylus Color 740) into the print space profile. I then set the Intent to Relative Colorimetric, which gave me even better results. Relative Colorimetric will shift the colors in your image that are outside your printer's gamut to the closest color within its gamut, with usually satisfactory results. It attempts to preserve as many of the original colors in your image as possible, and is the standard for North American and European printing. Make sure that "Show More Options" is checked so that you have access to these settings. You can also allow the printer to handle the color by selecting "Printer Color Management" under the Profile: list and then using the printer's properties settings under Print > Properties > Advanced and selecting the settings you want to use to print. However, using Photoshop's output settings along with a color profile created for your specific monitor will give you better results with less fiddling around. Just remember to set the paper type in your printer's properties so the printer distributes the ink properly. Plain paper will absorb more ink than coated paper or photo quality inkjet paper, so the printer needs to be told what you're printing on or your output will be off. I learned that the hard way when I printed a photo from a digital camera on photo quality inkjet paper and left the printer's paper setting at plain. As a result, subtle shadow areas came out as pure black. When I reprinted with the proper paper setting I could see the differences clearly, so these aren't just guidelines being offered by your printer—they're there for a reason!
Figure 7 The Print with Preview dialog set for optimum printing.
Viewing a proof of your image is a quick and reasonably accurate way of seeing how your image will look when printed. To view a proof:
- Select View --> Proof Setup --> Custom.
- Choose the profile you want to use from the fly-out list and set your intent to Relative Colorimetric or Perceptual. Photoshop will emulate the way your image will look when printed, usually with satisfactory results.
- If you're curious about which colors are outside of your printer's gamut, selecting View --> Gamut Warning will show you which colors need be shifted to fit your printer's working space. You can see an example of this in Figure 8.
- Once you've got your printer set up the way you want, do a test print. If the test looks good, print a high quality copy and compare it to the original version on your screen. It may take some trial and error but you should have output from your printer that closely resembles your monitor's output.
Figure 8 An image showing out of gamut areas.
Setting up a scanner is a breeze compared to setting up your printer. A scanner can use the same ICC profile you created for your monitor, and then it's just a matter of tweaking the scanned images to ensure they match the profile you assigned. I found the scans from my Epson Perfection 1650 were lacking in tonal range, so I used Levels to adjust the highlight and shadow values by hand to get my scanned images looking the way I wanted. And, as an added bonus, they matched the printed output and the original photograph quite well.
Figure 9 The Levels dialog with corrected highlights and shadows
Figure 10 The printed image rescanned and adjusted to show the approximate results of printing from Photoshop with the appropriate profile.
Cameras can define their own color profiles too. My Sony Cyber-Shot DSC-P73 uses the sRGB IEC61966-2.1 profile. Many professional photographers opt to use camera raw, a sort of "digital negative" that gives them much more control over their images since they're not processed in any way by the camera. That means they're free to work with the raw data and manipulate it however they please.
I found I had the best results when I imported the images from my camera and converted it to my current working space. Then, I applied Auto Color and Auto Levels to images taken indoors with the flash. Outdoor shots also required some tweaking, though not as much – usually just a quick application of Auto Color did the trick. Images with the default sRGB IEC61966-2.1 profile were also acceptable but looked even better with their colors and Levels adjusted.
Figure 11 An adjusted digital photo
Why it's so hard to lose weight
There are many reasons but it often comes down to controlling hunger (04/30/2015, ConsumerAffairs, by Mark Huffman)
Anyone who has ever gone on a diet knows how hard it is to stick to it so that you get results.
Part of the problem is breaking old habits, but for most of us, the hardest part is overcoming the hunger pangs caused by eating less of the food we like.
Hunger is an ancient product of human wiring. It ensures survival by telling the brain that the body needs more fuel to keep going. Hunger gets the brain's attention.
“One reason that dieting is so difficult is because of the unpleasant sensation arising from a persistent hunger drive,” said Bradford Lowell, a leader of a U.S. research team that is studying the brain's role in causing us to overeat.
The team has discovered that a brain circuit serves as the neural link that inhibits and controls eating, kind of like a switch. It found that this brain circuit not only promotes fullness in hungry mice but also alleviates the sensation of grating hunger.
“Our results show that the artificial activation of this particular brain circuit is pleasurable and can reduce feeding in mice, essentially resulting in the same outcome as dieting but without the chronic feeling of hunger,” Lowell said.
Now that this circuit has been identified, the researchers say they can develop a more effective diet drug.
Stacey Cahn, an associate professor of psychology at Philadelphia College of Osteopathic Medicine, has also been studying why we get so hungry that we can't control our appetite. She's come up with a different answer.
Cahn says eating processed food makes it much harder to control your food cravings and that it's no accident. She points out food manufacturers have invested billions of dollars in making their products almost impossible to resist. It's just “good business” on their part, she says.
“Research shows that we’re much more likely to overeat processed foods than 'whole foods,'” Cahn said. “Snack foods that have an airy, crispy texture like cheese puffs leave us particularly prone to overeating because of vanishing caloric density. As the snack somewhat dissolves on our tongues, our bodies don’t register those fat calories, so we still feel hungry, and we keep eating.”
The smorgasbord effect
The reason a buffet is so hard on the waistline, she says, is because eating a single food item makes us feel full faster. But a buffet, with its wide variety of dishes, keeps us from habituating to any single one.
“That’s why processed foods like nacho chips are engineered to contain a complex spectrum of flavors,” she said. “So we keep eating. And while junk foods may lead to overeating, their unnatural ingredients may independently lead to weight gain.”
Cahn further claims that avoiding calories with artificially-sweetened beverages often has an opposite, unintended effect. Experiencing sweetness without the expected corresponding calories can cause hunger cues to be felt more intensely. The calories we avoid by drinking a diet soda are more than made up for when we give in to temptation later on.
Bipartisan bill would guarantee customers' right to criticize companies
Consumer Review Freedom Act of 2015 would outlaw non-disparagement clauses (04/30/2015, ConsumerAffairs)
Last September, after California became the first state to make it illegal for businesses to put “non-disparagement clauses” in their contracts with customers, California congressmen Eric Swalwell and Brad Sherman, both Democrats, proposed a national version of the same law, the Consumer Review Freedom Act of 2014.
But last year's CRFA didn't pass, so yesterday Reps. Swalwell and Sherman, joined by Republican representatives Darrell Issa and Blake Farenthold, jointly proposed the Consumer Review Freedom Act of 2015.
The bill – available in .pdf form here – would “prohibit the use of certain clauses in form contracts that restrict the ability of a consumer to communicate regarding the goods or services that were the subject of the contract.” In other words, businesses can't penalize customers who criticize or express negative opinions about those businesses.
Though plenty of businesses have tried. Last August, for example, a New York State bed-and-breakfast called the Union Street Guest House gained unwanted media attention after the discovery of a non-disparagement clause hidden in the fine print of its customer contracts:
If you have booked the inn for a wedding or other type of event . . . and given us a deposit of any kind . . . there will be a $500 fine that will be deducted from your deposit for every negative review . . . placed on any internet site by anyone in your party.
In an even more notorious example from the previous year, the tech-toy company KlearGear ruined a Utah couple's credit rating by charging them a $3,500 “fine” over a negative online review they'd posted three years earlier. (KlearGear is still in business, though its non-disparagement clause is gone.)
Last September, after introducing the Consumer Review Freedom Act of 2014, Congressman Swalwell mentioned KlearGear's behavior as an example of why the law was necessary, and also said “It's un-American that any consumer would be penalized for writing an honest review. I'm introducing this legislation to put a stop to this egregious behavior so people can share honest reviews without fear of litigation.”
Scott Michaelman, an attorney for the consumers'-rights group Public Citizen, said in a press statement that Public Citizen supports the bill because non-disparagement clauses “deny consumers of the right to express negative opinions about the company”:
Too often, a consumer shares a negative customer service experience with others, then learns that according to the fine print in the boilerplate contract, he may not criticize the business publicly, including writing an online review. Companies use these unjust terms to bully dissatisfied customers into silence.
The Consumer Review Freedom Act would protect consumers’ right to speak out. The bill would protect individual consumers from hidden contract terms that forbid criticism. It also would help prospective customers avoid unscrupulous businesses by enabling them to learn from the experiences of their fellow consumers.
Both versions of the Consumer Review Freedom Act – last year's and this year's – would allow customers to criticize and express negative opinions about companies they do business with, but do not apply to customers who commit actual acts of “defamation, libel, or slander, or any similar cause of action.”
FDA warns that human meds can be fatal to cats
Topical pain medication creams are dangerous to small animals (04/30/2015, ConsumerAffairs)
A new warning has been issued by the U.S. Food and Drug Administration and this one is for cat owners. It's about topical analgesics that are for humans that can be fatal if your cat comes in contact with them.
Your cat is at risk if exposed to topical pain medications containing the nonsteroidal anti-inflammatory drug (NSAID) flurbiprofen. People using these medications, should use care when applying them in a household with pets, as even very small amounts could be dangerous to these animals.
Two households have reported that their cats became sick or died after their owners used topical meds that contained flurbiprofen on themselves, not their cats. They had applied the lotion or the cream to their own neck or feet, hoping for relief from muscle pain and stiffness. They did not apply it directly on their pets. Nobody knows how the pets became exposed.
The products contained the NSAID flurbiprofen and the muscle relaxer cyclobenzaprine, as well as other active ingredients, including baclofen, gabapentin, lidocaine, or prilocaine.
One household had some scary moments with two of their cats, which developed kidney failure. They were nursed back to health after a trip to their vet. Another family was not so fortunate. Their two cats lost their appetite and became very lethargic. They started vomiting and developed melena (black, tarry, bloody stools), anemia, and dilute urine.
Even though these cats went to their vet and were treated, they died. A third cat in the second household also died after the owner had stopped using the medication. Autopsies found they had poisoning that was consistent with NSAID toxicity.
The FDA recommends that you take these precautions:
- Wash your hands and your clothing, keeping all residue away from your pets.
- Keep your meds up and out of the way of your pets.
- Ask your vet and your doctor before you apply any ointment to see if it can harm your pet just from having contact with it.
- If you are using topical medications containing flurbiprofen and your pet becomes exposed, bathe or clean your pet as thoroughly as possible and consult a veterinarian.
Be aware that even though there has not been any warning of toxicity to dogs, they could be vulnerable as well.
This warning is also extended to veterinarians to take note of patients that show signs that they have come in contact with household medicines that contain flurbiprofen.
Pharmacists who fill these prescriptions need to make sure they advise patients of the possible adverse reactions in pets.
Pet owners and veterinarians can also report any adverse effects to the FDA.
How to judge whether a wine is any good
For starters, don't look at the price tag (04/30/2015, ConsumerAffairs, by Mark Huffman)
Judging whether a wine is good, mediocre or just plain bad is not always easy. After all, it's a subjective process and personal tastes come into play.
There have been plenty of taste tests where consumers have rated a cheap wine highly because they mistakenly believed it was expensive. Some researchers wanted to find out whether it was simply a case of price prejudice or whether something in the brains of the taste testers made them think it was good.
"Studies have shown that people enjoy identical products such as wine or chocolate more if they have a higher price tag," write authors Hilke Plassmann and Bernd Weber. "However, almost no research has examined the neural and psychological processes required for such marketing placebo effects to occur."
So Plassmann and Weber have examined it and have concluded that preconceived beliefs may in fact create a placebo effect so strong that it makes actual changes to the brain's chemistry.
In a series of experiments, subjects were told they would taste 5 wines, costing from $5 to $90 a bottle. In reality, they were tasting only 3 different wines at 2 different prices. During the experiment their brains were scanned using an MRI.
Plassmann and Weber found the subjects showed significant effects of price and taste prejudices, both in how they rated the taste as well as in their brain activity. The MRI readings, however, showed different people reacted in different ways, based in large part on their personalities.
For example, people who were strong reward-seekers or who were low in physical self-awareness were also more likely to be swayed by their price prejudices.
"Understanding the underlying mechanisms of this placebo effect provides marketers with powerful tools,” the authors conclude. “Marketing actions can change the very biological processes underlying a purchasing decision, making the effect very powerful indeed."
How not to get played
How can we as consumers protect ourselves from buying a bad wine at a high price? Wine experts suggest increasing your education about wine and learning what you like and don't like, is a first step.
Wine Enthusiast magazine, for example, recommends tasting wine in the proper environment. There's a lot more than price prejudice, it says, that can influence your judgment.
“A noisy or crowded room makes concentration difficult,” the magazine says. “Cooking smells, perfume and even pet odor can destroy your ability to get a clear sense of a wine’s aromas. A glass that is too small, the wrong shape, or smells of detergent or dust, can also affect the wine’s flavor.”
If you're still not sure you can tell a good wine from a not-so-good one, food and travel writer Tara O'Leary walks you through four things to look for in this video.
Court shutters sham mortgage relief operation
Some victims lost their homes when their loan payments were diverted (04/30/2015, ConsumerAffairs, by Truman Lewis)
A federal court has halted an alleged sham operation that took money from financially distressed homeowners and simply kept the money rather than forwarding it to the mortgage lender.
“These defendants stole mortgage payments from struggling homeowners, and they pretended to be a nonprofit working with the government,” said Jessica Rich, Director of the Federal Trade Commission's Bureau of Consumer Protection. “We’ll continue to shut down shameful mortgage frauds like this one.”
The FTC is seeking a permanent injunction and has also filed a contempt citation against one of the scheme’s principals, Brian Pacios, who is under a previous court order that prohibited him from mortgage relief activities.
According to the FTC’s complaint, the defendants, sometimes doing business as HOPE Services, and more recently as HAMP Services, targeted consumers facing foreclosure, especially those who had failed to get any relief from their lenders.
Pretending to be “nonprofit” with government ties, they sent mail bearing what looked like an official government seal, and indicated that the recipients might be eligible for a “New 2014 Home Affordable Modification Program” (HAMP 2), the FTC said.
The defendants called the program “an aggressive update to Obama’s original modification program,” and stated that “[y]our bank is now incentivized by the government to lower your interest rate . . .”
"High success rate"
The defendants falsely claimed they had a high success rate, special contacts who would help get loan terms modified, and an ability to succeed even when consumers had failed.
After obtaining consumers’ financial information, they told them they were “preliminarily approved” and falsely claimed they would submit consumers’ loan modification applications to the U.S. Department of Housing and Urban Development, the Neighborhood Assistance Corporation of America, and the “Making Home Affordable” (MHA) program.
The MHA application form they sent consumers excluded the page that warns, “BEWARE OF FORECLOSURE RESCUE SCAMS,” and “never make your mortgage payments to anyone other than your mortgage company without their approval.”
Later, the defendants falsely told consumers they were approved for a low interest rate and monthly payments significantly lower than their current payment, and that after making three monthly trial payments, and often a fee to reinstate a defaulted loan, they would get a loan modification and be safe from foreclosure. They also told consumers not to speak with their lender or an attorney.
In reality, homeowners who made the payments did not have their mortgages modified, and their lenders never received their trial payments, the FTC alleged.
Five-time offender Black & Decker fined for delays in reporting lawnmower safety hazards
Electric lawnmowers continued to operate after users turned them off (04/30/2015, ConsumerAffairs, by Truman Lewis)
For the fifth time, Black & Decker has been penalized for being slow to report safety hazards in its products. In the latest case, the company will pay $1.575 million for delaying reports of safety defects in its cordless electric lawnmowers.
Prosecutors said the lawnmowers started spontaneously and continued operating even after consumers released the handles and removed the safety keys.
In one case, the lawnmower continued running for hours while its owner was being treated in an emergency room and after firemen had removed the blade.
“Not for the first time, Black & Decker held back critical information from the public about the safety of one of its products,” said Principal Deputy Assistant Attorney General Benjamin C. Mizer of the Justice Department’s Civil Division. “The Department of Justice will continue to protect the public against companies that put profits over safety.”
The Department of Justice and the Consumer Product Safety Commission (CPSC) said Black & Decker will also set up a compliance program to ensure that it acts more responsibly in the future.
Black & Decker has previously paid four civil penalties relating to untimely reporting of defects and risks presented by other Black & Decker products.
“Black & Decker’s persistent inability to follow these vital product safety reporting laws calls into question their commitment to the safety of their customers,” said Chairman Elliot F. Kaye of the CPSC. “They have a lot of work to do to earn back the public’s trust. Companies are required to report potential product hazards and risks to CPSC on a timely basis. That means within 24 hours, not months or years as in Black & Decker’s case.”
The complaint relates to cordless lawnmowers manufactured and sold by Black & Decker from 1995 to 2006. According to the complaint, in as early as November 1998, Black & Decker started receiving reports about the problem, known as a continuous-run defect. A second defect involved lawnmowers that unexpectedly started even though the handle was released and the safety key removed, referred to as a spontaneous ignition defect.
The United States alleged that between 1998 and 2009, Black & Decker received more than 100 complaints regarding the continuous-run or spontaneous ignition defects. The United States further alleged that, after consulting an outside expert, the company knew in 2004 that the lawnmowers could continue to run even if a user released the handle and removed the safety key.
Despite knowledge of all of this information, Black & Decker failed to report to the CPSC until early 2009, even though federal law requires “immediate reporting.”
The complaint further notes that at least two consumers informed Black & Decker that the lawnmower’s blades started unexpectedly while the consumer cleaned them, resulting in injury. The complaint states that in one case, the lawnmower continued to run, with the handle released and without the safety key, for several hours while the consumer sought treatment in a hospital emergency room for injury to the consumer’s hand, and after fire department personnel arrived and removed the blade.
Personal income, spending inch higher in March
Initial jobless claims posted a huge weekly decline (04/30/2015, ConsumerAffairs, by James Limbach)
Consumers didn't find themselves with a lot of extra money in their pockets last month.
The Commerce Department reports personal income increased by just $6.2 billion, or less than 0.1% in March, the smallest increase since December 2013. Disposable personal income (DPI), which is personal income less personal current taxes, was up less than 0.1% or $1.6 billion.
Personal consumption expenditures (PCE), meanwhile, increased $53.4 billion, or 0.4%.
The income increase, as meager as it was, came as wages and salaries rose $16.3 billion, made up of a $15.2 billion gain in private wages and salaries, and a rise of $1.0 billion in government wages and salaries.
Personal spending and saving
Personal outlays – which includes PCE, personal interest payments and personal current transfer payments -- increased $57.6 billion in March.
Personal saving -- DPI less personal outlays -- fell to $702.6 billion in March from $758.6 billion in the month before. That took the personal saving rate -- personal saving as a percentage of disposable personal income – to 5.3% from 5.7% in February.
The complete report is available on the Commerce Department website.
Initial jobless claims
First-time applications for state jobless benefits fell last week to their lowest level in 15 years.
According to the Labor Department (DOL), initial claims plunged 34,000 in the week ending April 25 to a seasonally adjusted 262,000 -- the lowest level since April 15, 2000 when it was 259,000.
The DOL says there were no special factors affecting this week's total.
The 4-week moving average, which is less volatile and considered a more accurate picture of the labor market, was down 1,250 to 283,750.
The full report is available on the DOL website.
Moms and retailers expected to do well this Mother's Day
Spending is projected to be at a 12-year high (04/30/2015, ConsumerAffairs, by James Limbach)
Mother's Day is less than 2 weeks off and you know what that means: a spending splurge on things like jewelry, flowers, gift cards, brunch and apparel.
According to National Retail Federation's (NRF) Mother’s Day Spending Survey conducted by Prosper Insights & Analytics, consumers will shell out an average of $172.63 on mom this year. That's nearly $10 more than last year and the highest amount in the survey’s 12-year history. Total spending is expected to reach $21.2 billion.
“We’re encouraged by the positive shift we’ve seen in spending on discretionary and gift items from consumers so far this year, certainly boding well for retailers across all spectrums who are planning to promote Mother’s Day specials, including home improvement, jewelry, apparel and other specialty retailers as well as restaurants,” said NRF President and CEO Matthew Shay.
Running the gamut
When it comes to gifts, most will pick up a greeting card for mom (80%), spending more than $786 million, and more than two-thirds (67.2%) will buy flowers, to the tune of $2.4 billion. Shoppers also plan on giving mom apparel and clothing items (35.8%), spending more than $1.9 billion, versus $1.7 billion last year.
Families will also surprise the matriarch with a special brunch or activity ($3.8 billion), electronic items like a new smartphone or e-reader ($1.8 billion), personal services such as a spa day ($1.5 billion), housewares or gardening tools ($890 million) and books and CDs ($480 million).
Looking for a “wow” reaction from mom, 34.2% of Mother’s Day shoppers are planning to splurge on jewelry, spending a survey high of $4.3 billion for the special day.
How we shop
Online shoppers plan to spend an average $252 -- higher than the typical Mother's Day shopper -- and more than 4 in 10 plan to use their smartphones to research products and compare prices.
The survey shows that 18- to 24-year-olds who own smartphones and tablets are most likely to use them to research products and compare prices for gifts (46%), and are most likely to use their tablets to purchase a gift (30.2%). But this age group won’t necessarily be the biggest spenders; 25- to 34-year-olds plan to spend the most on mom -- an average of $244.32; 18- to 24-year-olds will spend an average of $214.81.
Many people know that a gift card could go a long way: Two in five (44.2%) will give mom a gift card, spending more than $2.2 billion.
Most shoppers will head to department stores (33.4%), while others will shop at specialty stores (28.2%) or discount stores (24.8%). With shoppers ready to get out of the house after a long winter, fewer shoppers will be shopping online this year (25% vs. 29% last year.)
The majority of shoppers plan to buy for their mother or stepmother (62.5%), while 23.2% will shop for their wife, 9.8% will shop for their daughter, 8.9% will shop for their sister and 7.4% plan to splurge on their grandmother.
FDA approves drug for double chin treatment
Kybella destroys fat cells when injected below the chin (04/30/2015, ConsumerAffairs, by James R. Hood)
You might not think of a double chin as something that cries out for medical treatment but if it's something that bothers you, you'll be glad to know the U.S. Food and Drug Administration (FDA) has approved a new drug for the condition.
It's called Kybella and, when injected below the chin, it destroys the fat cells that cause double chins. It's a version of deoxycholic acid, which occurs naturally in the body and helps destroy fat.
Up to 50 injections can be used in a single treatment, the FDA said but warned against inadvertently injecting it elsewhere, as it can destroy skin.
The drug fills a void, since drugs like Botox and dermal fillers aren't approved for fixing fat and loose skin under the chin.
“It is important to remember that Kybella is only approved for the treatment of fat occurring below the chin, and it is not known if Kybella is safe or effective for treatment outside of this area,” Amy Egan, deputy director of the Office of Drug Evaluation III at the FDA, said in the statement.
Egan warned that the drug should only be administered by a licensed medical professional.
Kybella can cause serious side effects, including nerve injury in the jaw that can cause an uneven smile or facial muscle weakness, and trouble swallowing, the FDA said. The most common side effects of Kybella include swelling, bruising, pain, numbness, redness and areas of hardness in the treatment area.
The drug is manufactured by Kythera Biopharmaceuticals Inc.
Navajo Pride bleached flour recalled
The product may be contaminated with Salmonella (04/30/2015, ConsumerAffairs, by James Limbach)
Navajo Pride of Farmington, N.M., is recalling its Bleached All Purpose Flour.
The product may be contaminated with Salmonella.
No illnesses have been reported to date.
The recalled product, which comes in 5-lb, 25-lb, and 50-lb bags, is marked Navajo Pride with lot #075B110064 and an expiration date of 03162016. It was delivered to regional retailers.
Customers who have the recalled product should not eat it; they should destroy it or return it to the place of purchase.
Consumers with questions may contact Navajo Pride at (505)566-2670 between 9:00AM and 5:00PM MST, Monday-Friday.
Golden Krust Patties recalls beef and chicken products
The products contain egg, an allergen not listed on the label (04/30/2015, ConsumerAffairs, by James Limbach)
Golden Krust Patties of Bronx, N.Y., is recalling approximately 9,073,384 pounds of beef and chicken products.
The products contain egg, an allergen not listed on the label.
There are no reports of adverse allergic reactions due to consumption of these products.
The following beef and chicken products, produced from January 24, 2014, through February 26, 2015, are being recalled:
- 8-lb. cases containing 2-count packages of “Golden Krust Jamaican Style Spicy Beef Patties.”
- 8-lb. cases containing 2-count packages of “Golden Krust Jamaican Style Mild Beef Patties.”
- 8-lb. cases containing 2-count packages of “Golden Krust Jamaican Style Chicken Patties.”
- 12-lb. cases containing 3-count packages of “Golden Krust Jamaican Style Spicy Beef Patties.”
- 12-lb. cases containing 3-count packages of “Golden Krust Jamaican Style Mild Beef Patties.”
- 12-lb. cases containing 3-count packages of “Golden Krust Jamaican Style Chicken Patties.”
- 15-lb. cases containing “Golden Krust Jamaican Style Spicy Beef Patties.”
- 15-lb. cases containing “Golden Krust Jamaican Style Mild Beef Patties.”
- 15-lb. cases containing “Golden Krust Jamaican Style Chicken.”
- 15-lb. cases containing 9-count packages of “Golden Krust Jamaican Style Hot Beef Patties.”
- 15-lb. cases containing 9-count packages of “Golden Krust Jamaican Style Chicken Patties.”
- 15-lb. cases containing 9-count packages of “Golden Krust Jamaican Style Mild Beef Patties.”
- 8.5-lb cases containing 24-count packages of “Golden Krust Jamaican Style Spicy Beef Patties.”
- 8.5-lb cases containing 24- count packages of “Golden Krust Jamaican Style Chicken.”
- 40-lb cases containing 10-count packages of “Golden Krust Jamaican Style Spicy Beef Patties.”
- 15-lb cases containing “Golden Krust Cheezee Beef Patty.”
- 15-lb cases containing “Golden Krust Jerk Chicken Patty.”
The recalled products bear the establishment number “EST. 18781 and P-18781” inside the USDA mark of inspection and have an expiration date between January 24, 2015, and February 26, 2016. They were shipped to distributors, retailers and consumers nationwide.
Consumers with questions may contact Herma Hawthorne at (855) 565-0561.
Pedego recalls electric bicycle batteries
The batteries can overheat, posing a fire hazard (04/30/2015, ConsumerAffairs, by James Limbach)
Pedego Inc., of Irvine, Calif., is recalling about 5,000 lithium ion rechargeable batteries.
The batteries can overheat, posing a fire hazard.
The company has received 6 reports of batteries overheating and catching fire, including 1 report of property damage. No injuries have been reported.
This recall involves 36-volt and 48-volt lithium ion rechargeable batteries sold separately and as original equipment with Pedego electric bikes. Recalled batteries of each voltage came in two styles.
One style has a silver or black metal case that measures about 13 ½ inches long, 6 ½ inches wide and 2 ½ inches high, with black plastic end caps and a handle. The other style has a black or white plastic case that measures about 14 inches long, 6 ½ inches wide and 2 ½ inches high with a red indicator lamp on one end.
The batteries have serial numbers that start with “DLG.” A label with the serial number is on one side of the metal batteries and on the underside of the plastic batteries.
The batteries, manufactured in China, were sold at bicycle stores and electric bike retailers and online at www.pedegoelectricbikes.com from January 2010, through September 2013. The batteries were sold separately for about $600 to $900 and on electric bicycles that sold for between $2,000 and $3,000.
Consumers should immediately remove the battery from the bike and contact Pedego for a free replacement battery.
Consumers may contact Pedego toll-free at (888) 870-9754 from 8 a.m. to 5 p.m. PT, or by email at [email protected].
Hy-Vee recalls Summer Fresh Pasta Salad
The product may be contaminated with Listeria monocytogenes (04/30/2015, ConsumerAffairs, by James Limbach)
Hy-Vee is recalling Hy-Vee Summer Fresh Pasta Salad that is sold in its stores' kitchen department cold cases and salad bars.
The product may be contaminated with Listeria monocytogenes.
The company says it has not received any complaints associated with this problem to date.
The recalled product is packaged upon customer request from the kitchen cold case in 16-oz. or 32-oz. clear plastic containers with a light tan scale-produced label with the product name, weight and price affixed to the container.
The pasta salad was available in stores in Illinois, Iowa, Minnesota, Missouri, Nebraska and South Dakota between April 9, 2015, and April 27, 2015.
Customers who purchased the recalled product should dispose of it or return it to the store for a refund.
Consumers with questions may call Hy-Vee customer care at 1-800-772-4098.
Pediatricians find it's too easy for teens to buy supplements
Test finds health food stores all too happy to sell dietary supplements to 15-year-olds (04/29/2015, ConsumerAffairs, by Mark Huffman)
Teenage drug use – be they illegal substances or prescription drugs – is an ongoing concern. Now pediatricians are adding dietary supplements to their list of worries.
The American Academy of Pediatrics (AAP) has reviewed a series of studies that posed this question: could a fifteen-year-old call a health food store and purchase a dietary supplement, even though the label read “for adult use only”?
Not only were the teens enlisted for the experiment able to buy the supplements, AAP says the staff in many stores helpfully recommended certain products.
To be clear, the sales clerks were doing nothing illegal. Only one state prohibits minors from purchasing dietary supplements.
Supplements might appear harmless, but AAP recommends that both males and females under 18 avoid these body-shaping products, which are unregulated by the U.S. Food & Drug Administration (FDA).
244 health food stores in 49 states
During the experiment, testers identifying themselves as 15-year-old boys and girls called 244 health food stores in 49 states to inquire about supplements and found they could easily purchase them.
“Teenagers dealing with negative body images are increasingly turning to over-the-counter supplements, despite recommendations from the American Academy of Pediatrics to avoid such products,” said Dr. Ruth Milanaik, of Cohen Children's Medical Center, who helped oversee the project.
She warned that health food store supplements are not always healthy, and health food store attendants are not always experts when selling well-known "fat burning" thermogenic products such as Hydroxycut and Shredz, testosterone boosters, or products containing creatine.
Many testosterone boosters carry label instructions advising the products should only be used by adults. Even so, the testing team found 41% of sales attendants told callers identifying themselves as 15-year-olds they could purchase a testosterone booster without an adult's approval.
And despite the fact that testosterone boosters are specifically not recommended for children under age 18 unless for documented medical reasons, the study found 9.8% of sales attendants actually recommended a testosterone booster.
Milanaik is concerned that the supplement industry may view young people as an emerging market.
"Adolescents are being enticed by flashy advertisements and promises of quick, body-shaping results," she said. "In this body-conscious world, flashy advertising of `safe, quick and easy body shaping results' are very tempting to younger individuals trying to achieve 'the perfect body.' It is important for pediatricians, parents, coaches and mentors to stress that healthy eating habits, sleep and daily exercise should be the recipe for a healthy body."
Milanaik also has an issue with health food stores advertising that their employees are “trained experts.” If they were, she says they would not be recommending dietary supplements for minors.
"Health food stores need to focus not only on knowing what products to recommend, but often more importantly, what products not to recommend for customers of certain ages and conditions," said Laura Fletcher, one of the principal investigators.
At the very least, she says sales personnel should pay attention when warnings are clearly printed on product labels.
Google introduces anti-phishing tool for Chrome browser
Password Alert can warn you when a bogus site tries to get your password (04/29/2015, ConsumerAffairs, by Truman Lewis)
Google has introduced a new security tool for the Chrome browser that's intended to help keep consumers safe from phishing attacks. Those are the scams that use what look like legitimate pages to trick consumers into revealing their passwords.
Phishing attacks are not only very common, they're also very effective. Google says they succeed nearly 45 percent of the time and reports that nearly 2% of emails submitted to Gmail are designed to smoke out consumers' passwords.
Called Password Alert, the free, open-source Chrome extension will show you a warning if you type your Google password into a site that isn’t a Google sign-in page.
"Once you’ve installed and initialized Password Alert, Chrome will remember a 'scrambled' version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone," Google's Drew Hintz and Justin Kosslyn said in a blog posting.
They said that if you type your password into a site that isn't a Google sign-in page, Password Alert will show you a notice alerting you that you're at risk of being phished, so you can update your password and protect yourself.
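Google's post describes the behavior but not the implementation. The following is a rough, simplified sketch of how a "scrambled" (hashed) copy of a password could be checked against typed text; it is not Google's actual code, and the class name, salting scheme and host check are illustrative assumptions.

```python
import hashlib
import os

class PasswordAlertSketch:
    """Rough illustration of a Password Alert-style check (not Google's code).

    Only a salted hash of the password is kept; typed text is hashed and
    compared whenever it is entered on a page other than the trusted
    sign-in page. A real extension would compare a rolling window of
    keystrokes rather than a complete string.
    """

    def __init__(self, password, trusted_host="accounts.google.com"):
        self.salt = os.urandom(16)
        self.digest = self._scramble(password)
        self.trusted_host = trusted_host

    def _scramble(self, text):
        return hashlib.sha256(self.salt + text.encode("utf-8")).hexdigest()

    def typed_password_on_untrusted_page(self, typed_text, host):
        """True if the saved password was just typed somewhere it shouldn't be."""
        if host == self.trusted_host:
            return False  # typing the password on the real sign-in page is expected
        return self._scramble(typed_text) == self.digest

# Hypothetical usage: warn when the password shows up on a look-alike page.
alert = PasswordAlertSketch("correct horse battery staple")
if alert.typed_password_on_untrusted_page("correct horse battery staple",
                                          "accounts.goog1e-login.example"):
    print("Warning: you may have typed your Google password into a non-Google page.")
```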
The extension is available in the Chrome Web Store.
Reports: Apple Watch won't work for people with tattoos
Apparently the darker inks absorb enough light to interfere with the Watch's sensors (04/29/2015, ConsumerAffairs)
Bad news for tattooed iFans who want an Apple Watch: A growing body of complaints seems to indicate that tattoo ink, especially in dark colors, interferes with the Watch's sensors, thereby disabling some of the device's functions.
The Daily Dot first called attention to a complaint posted on reddit's /r/apple forum yesterday — a tattooed redditor going by the handle “guinne55fan” started a thread to report problems with his new Watch:
So I thought my shiny new 42mm SS watch had a bad wrist detector sensor. The watch would lock up every time the screen went dark and prompted me for my password. I wouldn't receive notifications. I couldn't figure out why especially since the watch was definitely not losing contact with my skin. ... I was about to give up and call Apple tomorrow when I decided to try holding it against my hand (my left arm is sleeved and where I wear my watch is tattooed as well) and it worked. My hand isn't tattooed and the Watch stayed unlocked. Once I put it back on the area that is tattooed with black ink the watch would automatically lock again. Just wanted to give anyone a heads up about this issue because I don't see it mentioned anywhere in Apple's support documents.
In addition, guinne55fan also included photos of his tattooed left wrist. Sure enough, the part that would be in contact with the underside of an Apple Watch is almost completely colored with black tattoo ink.
Similar complaints can also be found on Twitter; just yesterday, @stroughtonsmith Tweeted that “Turns out people with wrist tattoos will be unable to use Apple Watch for Apple Pay because it can't sense you're alive. Fun!”
No Apple comment
So there are at least two people with tattooed wrists reporting identical complaints about the Apple Watch's functionality, both on the same day. Is this mere coincidence -- or is there an actual connection?
Although Apple has not, as of press time, publicly commented on whether or how tattoos interfere with its watches, Apple's own support page suggests that tattoo interference genuinely could be a problem, in its explanation of “How Apple Watch measures your heart rate”:
The heart rate sensor in Apple Watch uses what is known as photoplethysmography. This technology, while difficult to pronounce, is based on a very simple fact: Blood is red because it reflects red light and absorbs green light. Apple Watch uses green LED lights paired with light sensitive photodiodes to detect the amount of blood flowing through your wrist at any given moment. When your heart beats, the blood flow in your wrist — and the green light absorption — is greater. Between beats, it’s less. By flashing its LED lights hundreds of times per second, Apple Watch can calculate the number of times the heart beats each minute — your heart rate.
The heart rate sensor can also use infrared light. This mode is what Apple Watch uses when it measures your heart rate every 10 minutes. However, if the infrared system isn't providing an adequate reading, Apple Watch switches to the green LEDs. In addition, the heart rate sensor is designed to compensate for low signal levels by increasing both LED brightness and sampling rate.
What does this have to do with tattoos? Simple: tattoo ink, especially the darker colors, can serve as a “barrier” between those LED or infrared lights and your bloodstream, absorbing some of the light before it hits your bloodstream, and some more of it as it bounces back.
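As a rough back-of-the-envelope illustration (not based on any Apple specification), a toy model in which light crosses the ink layer twice -- once on the way in and once on the way out -- shows how quickly the usable pulse signal shrinks. The absorption fractions below are made-up values chosen only to make the point.

```python
def pulse_signal(led_output, ink_absorption, pulse_modulation=0.02):
    """Toy model of a reflected pulse signal when light crosses an ink layer twice.

    led_output:       relative LED brightness (arbitrary units)
    ink_absorption:   fraction of light the ink absorbs on each pass (0..1), made up
    pulse_modulation: fraction of the returned light that varies with each heartbeat
    """
    returned = led_output * (1.0 - ink_absorption) ** 2   # in through the ink, back out again
    return returned * pulse_modulation                    # only the beat-to-beat variation is useful

# Made-up absorption levels, for illustration only.
for label, absorption in [("untattooed skin", 0.0),
                          ("lighter ink", 0.4),
                          ("solid black ink", 0.9)]:
    print(f"{label:>16}: relative pulse signal = {pulse_signal(100, absorption):.3f}")
```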
For the record: the problems reported with Apple Watch only affect people whose wrists have dark tattoos; it does not affect people with naturally dark skin due to high melanin levels. Dark tattoo inks are made of a variety of ingredients, including ground-up minerals, which can obviously interfere with light reflection or absorption in ways which melanin does not.
The iMore blog tested various Twitterers' and redditors' complaints and concluded that, yes: tattoo inks do interfere with the Apple sensor readings, especially the darker inks – and in some cases, thick (yet non-tattooed) scar tissue can interfere with the sensors as well:
We tested the Watch's sensors against tattooed and non-tattooed sections on both the wrist and elsewhere on the body. On non-tattooed non-wrist sections, the sensors gave identical readings as when also tested on the wrist; on tattooed sections, sensor readings varied wildly depending on colors and shading.
Dark, solid colors seem to give the sensor the most trouble — our tests on solid black and red initially produced heart rate misreadings of up to 196 BPM before failing to read skin contact entirely. Tests on lighter tattoo colors including purple, yellow, and orange produced slightly elevated heart misreads of 80 BPM (compared to 69 BPM on the wearer's non-tattooed wrist), but otherwise did not appear to interfere with skin contact registration. … It's also worth noting that prominent scars and other potential skin aberrations can trip the Watch's sensors.
Granted, the Watch's sensors can be turned off, but doing so will disable certain of the Watch's functions, including Apple Pay (plus heart-monitoring and similar personal-fitness apps, of course). If nothing else, Apple has a two-week return policy; Watch owners with tattooed wrists might need to take advantage of that.
It might not be possible for Apple to fix this particular problem, short of abandoning light-based sensors altogether; there's nothing Apple or anyone else can do to change such facts as “Crushed minerals packed together densely enough to look black in full daylight will block or absorb far more light than regular human skin and blood can.”
Researchers say manufacturing costs unlikely the reason (04/29/2015, ConsumerAffairs, by Mark Huffman)
Although recent research has been very promising, there is no cure for multiple sclerosis (MS), a disabling disease of the central nervous system. Medi...
Air fares on the rise
Travelers in Madison, Wisconsin, were hit the hardest (04/29/2015, ConsumerAffairs, by James Limbach)
The friendly skies were a little more expensive in the final 3 months of last year.
Figures released by the Transportation Department’s Bureau of Transportation Statistics (BTS) show the average domestic air fare rose 2.0% in the fourth quarter of 2014 -- to an inflation-adjusted $393, from $385 a year earlier.
Of the 100 busiest airports, during that period, passengers originating in Madison, Wisconsin, paid the highest average fare -- $505, while passengers originating in Sanford, Florida, paid the lowest -- $99.
The BTS report bases average fares on domestic itinerary fares, which consist of round-trip fares, unless the customer does not purchase a return trip. In that case, the one-way fare is included. One-way trips accounted for 31% of fares calculated for the fourth quarter of 2014.
Fares are based on the total ticket value, which consists of the price charged by the airlines plus any additional taxes and fees levied by an outside entity at the time of purchase. Fares include only the price paid at the time of the ticket purchase and do not include things like baggage fees, paid at either the airport or onboard the aircraft. Averages also do not include frequent-flyer or “zero fares,” or abnormally high reported fares.
Fourth-quarter fares rose 10.2% from the recession-affected low of $348 in 2009 through the fourth quarter of 2011. Since then, fourth-quarter fares have shown little change, increasing 2.4% from 2011 to 2014.
The fourth-quarter 2014 fare was down 14.4% from the average fare of $459 in 2000 -- the highest inflation-adjusted fourth quarter average fare in the 19 years since BTS began collecting air fare records in 1995. That decline took place while overall consumer prices rose 37%. Since 1995, inflation-adjusted fares have actually fallen 10.8% compared with a 55.4% increase in overall consumer prices.
U.S. passenger airlines collected 71.2% of their total revenue from passenger fares during the third quarter of 2014, the latest period for which revenue data are available, down from 1990 when 87.6% of airline revenue was received from fares.
The average fare of $391 for the full year 2014 was up 0.6%, inflation-adjusted, from the 2013 average fare of $389 but down 16.2% from the inflation-adjusted annual high of $467 in 2000.
Not adjusted for inflation, the $391 average fare in 2014 is the highest annual fare since BTS began collecting air fare records in 1995 -- 2.5% higher than the previous high of $382 in 2013.
The complete report is available on the BTS website.
Economic growth slows to a crawl in first quarter
A slowdown in consumer spending is among the factors (04/29/2015, ConsumerAffairs, by James Limbach)
The government has taken its first of 3 readings on the economy for the first quarter -- and the results are not encouraging.
According to the "advance" estimate released by the Bureau of Economic Analysis, real gross domestic product (GDP) -- the value of the production of goods and services in the U.S., adjusted for price changes -- increased at an anemic annual rate of just 0.2% in the first quarter. As a means of comparison, real GDP increased 2.2% in the previous 3 months.
What increase there was primarily reflected positive contributions from personal consumption expenditures (PCE) and private inventory investment. But those were partly offset by declines in exports, nonresidential fixed investment, and state and local government spending. Imports, which are a subtraction in the calculation of GDP, increased.
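For reference, the advance estimate follows the standard expenditure breakdown of GDP. Here is a minimal sketch of that textbook identity; the component names are the usual shorthand, not BEA's exact line items.

```python
def expenditure_gdp(consumption, investment, government, exports, imports):
    """Textbook expenditure identity: GDP = C + I + G + (X - M).

    Imports (M) carry a minus sign because C, I and G already include spending
    on foreign-made goods, which is not domestic production -- which is why an
    increase in imports, as reported above, subtracts from measured GDP.
    """
    return consumption + investment + government + (exports - imports)
```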
Consumers stay home
The slowdown reflects a slackening in PCE growth (+1.9%, compared with +4.4% in the fourth quarter), downturns in exports, nonresidential fixed investment, and state and local government spending, and a deceleration in residential fixed investment. These drags were partly offset by a deceleration in imports and by upturns in private inventory investment and federal government spending.
The first-quarter advance estimate is based on incomplete source data and is subject to further revision. The "second" estimate for the first quarter, based on more complete data, will be released next month.
The price index for gross domestic purchases, which measures prices paid by U.S. residents, plunged 1.5% in the first quarter following a dip of 0.1% in the fourth. The core rate, which excludes the volatile food and energy sectors, rose 0.3%, compared with an increase of 0.7% in the previous 3 months.
The complete report is available on the Commerce Department website.
A dip in mortgage applications
Contract interest rates were mixed (04/29/2015, ConsumerAffairs, by James Limbach)
Applications for mortgages were a little lower last week.
According to data from the Mortgage Bankers Association’s (MBA) Weekly Mortgage Applications Survey, the Market Composite Index -- a measure of mortgage loan application volume – was down 2.3% on a seasonally adjusted basis in the week ending April 24.
The average loan size for purchase applications rose to a survey high of $297,000.
The Refinance Index fell 4% from the previous week, with the refinance share of mortgage activity down to 55% of total applications -- its lowest level since September 2014.
The adjustable-rate mortgage (ARM) share of activity rose to 5.7% of total applications, the FHA share was 13.7%, the VA share was 11.3%, and the USDA share of total applications was unchanged at 0.8%.
Contract interest rates
- The average contract interest rate for 30-year fixed-rate mortgages (FRMs) with conforming loan balances ($417,000 or less) rose 2 basis points -- to 3.85% from 3.83%, with points increasing to 0.35 from 0.32 (including the origination fee) for 80% loan-to-value ratio (LTV) loans. The effective rate increased from last week.
- The average contract interest rate for 30-year FRMs with jumbo loan balances (greater than $417,000) inched down to 3.82% from 3.83%, with points rising to 0.31 from 0.22 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
- The average contract interest rate for 30-year FRMs backed by the FHA edged up 1 basis point to 3.66%, with points increasing to 0.16 from 0.12 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
- The average contract interest rate for 15-year FRMs rose from 3.11% to 3.14%, with points increasing to 0.31 from 0.24 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
- The average contract interest rate for 5/1 ARMs dipped to 2.88% from 2.89%, with points dropping to 0.27 from 0.29 (including the origination fee) for 80% LTV loans. The effective rate decreased from last week.
The survey covers over 75% of all U.S. retail residential mortgage applications.
Emergency rooms are packed on weekends with victims of home handyman mishaps (04/29/2015, ConsumerAffairs)
Deciding to tackle the yard work now that the weather is breaking can be dangerous. The emergency room is filled on weekends with people getting poked in t...
Whirlpool mislabeled refrigerators as Energy Star compliant, lawsuit charges
A federal judge has certified the case for California consumers (04/29/2015, ConsumerAffairs, by James R. Hood)
Whirlpool faces a class action lawsuit filed by consumers who say they bought Whirlpool refrigerators that were falsely labeled as Energy Star compliant, Courthouse News Service reported.
Lead plaintiffs Kyle Dei Rossi and Mark Linthicum say the refrigerators they bought had the Energy Star logos on them but their model numbers showed they were not in compliance with Energy Star requirements.
Whirlpool asked that the suit be dismissed on the grounds that the plaintiffs had not actually suffered any damage.
But U.S. District Judge Troy Nunley granted certification for consumers who bought their refrigerators in California. He denied certification for consumers in other states because of differences in the states' laws.
Technology may reduce distracted driving among teens
Study finds intervention devices successfully modified dangerous behavior (04/29/2015, ConsumerAffairs, by Christopher Maynard)
It is a sad fact that the number one cause of accidental death in teens is motor vehicle accidents. In this current age of always needing to be plugged in, it is extremely difficult for many young drivers to put down their phones. Fortunately, a recent study shows that other types of technology may provide the answer to keeping young drivers safe.
The study, which was led by Dr. Beth Ebel of the University of Washington, attempted to find out if other types of technology could make driving safer for young people. She and her colleagues believe that there is not enough being done to minimize distracted driving. “Facts and figures have not done enough to change driver behavior,” she said.
The research team chose 29 teens for the study and observed them for six months. Each driver was placed into one of three groups. Two of the groups had intervention measures to stop distracted driving. The third group was the control group, and had no intervention measures in place.
The first intervention group had an in-vehicle camera system installed that was triggered by certain driving conditions, such as hard braking, fast cornering, or impacts. The video footage was made available to teens and parents so that it could be reviewed to improve driving behavior.
The second intervention group had a device installed that blocked incoming and outgoing calls and messages when the vehicle was being operated. In addition to these measures, all three groups had a program installed on their phones so that researchers could track how much they were used while driving.
The results of the study showed that the intervention groups had lower cell phone use and fewer high-risk driving behaviors than the control group. Out of the three groups, those with the cell blocking technology had the safest driving record.
One of the most interesting things uncovered by the study was that the young drivers were receptive to the intervention measures. None of the participants disabled the programs that were inhibiting their phone use. This gives hope that these technologies may be practical in the real world. Both intervention methods are currently available for commercial use.
Study finds gastric bands, group weight management programs equally effective
Both improved blood sugar levels over a year's time (04/29/2015, ConsumerAffairs, by Christopher Maynard)
Weight loss is never easy, but it's important for overweight people with type 2 diabetes seeking to control their blood sugar levels and optimize their health.
A small clinical trial among such patients led by Joslin Diabetes Center and Brigham and Women's Hospital researchers now has shown that two approaches -- adjustable gastric band surgery and an intensive group-based medical diabetes and weight management program -- achieved similar improvements in controlling blood sugar levels after one year.
"We can anticipate long-term health benefits from both of these approaches, but they do require some investment of time and energy by the patient," says trial leader Allison Goldfine, M.D., head of Joslin's Section of Clinical Research and an Associate Professor of Medicine at Harvard Medical School.
Reported in the Journal of Clinical Endocrinology & Metabolism, the SLIMM-T2D (Surgery or Lifestyle with Intensive Medical Management in Treatment of Type 2 Diabetes) trial enlisted 45 volunteers who had long-duration type 2 diabetes, struggled to manage their diabetes and had a body mass index (BMI) of 30 or higher.
The study randomly divided the participants into two groups.
One group received an adjustable gastric band procedure, which inserts a band around the upper stomach whose tightness can be adjusted.
"With the band, you put a device around the top portion of the stomach, people get full more quickly, and that fullness signals them to stop eating," Dr. Goldfine notes. She adds that some studies suggest that over time, people with the band learn to change their behaviors to eat less even when the band is no longer fully restricted.
The other group of participants underwent Joslin's Why WAIT (Weight Achievement and Intensive Treatment), a clinically available program built on behavioral interventions that have been proven to be effective.
After one year, the two groups achieved similar lowering of blood sugar levels -- average levels of hemoglobin A1C (a standard measurement of blood sugar levels over several months) dropped by 1.2 for patients with the gastric band and by 1.0 for patients in the intensive medical weight management (Why WAIT) program. The groups also saw similar-magnitude improvements in their levels of blood sugar when fasting, another standard metric for type 2 diabetes management.
Weight loss was similar between the two arms at three months. At one year, however, the participants given the band achieved greater average loss (30 pounds compared to 19 pounds) and were continuing to lose weight. The Why WAIT group saw greater reductions in blood pressure than the band group, but other measures of cardiovascular health were generally comparable between the two groups.
Participants in both arms of the trial reported that their health had been improved on a number of measures and that they were enjoying better quality of life.
Gastric bands are inserted laparoscopically, via small incisions in the belly, and clamped around the top of the stomach. Gastric bypass procedures, more invasive forms of surgery that route digestion around parts of the stomach, affect digestion metabolism more drastically than bands and typically result in greater weight loss.
A previous SLIMM-T2D study led by Joslin and reported last year in the Journal of the American Medical Association compared the use of the most common gastric bypass surgery, called Roux-en-Y, to Why WAIT treatment. In that earlier trial, participants who underwent Roux-en-Y gastric bypass lost significantly more weight and achieved better diabetes control than those in the medical treatment arm of the trial.
In addition to these two Joslin-led trials, several other research institutions have run small studies of various gastric procedures and medical programs. A consortium called ARMMS-T2D (Alliance of Randomized Trials of Medicine versus Metabolic Surgery in the Treatment of Type 2 Diabetes) aims to follow up on the roughly 300 patients in all these trials.
"It's really important to have a variety of different approaches available to treat a complex medical problem like diabetes, and we need to understand the relative merits of each approach," Dr. Goldfine sums up. "There are people for whom remembering to take their medications is highly problematic, and there are people for whom the idea of surgical risk is unbearable. One size does not fit all."
Waymouth Farms recalls raw pine nuts
The product may be contaminated with Salmonella (04/29/2015, ConsumerAffairs, by James Limbach)
Waymouth Farms of New Hope, Minn., is recalling raw pine nuts in various sizes.
The product may be contaminated with Salmonella.
No illnesses have been reported to date in connection with the problem.
The following products, sold nationwide through retail stores and mail order under the Good Sense brand, are being recalled:
The product was also sold in a 5-lb. bulk box, UPC 30243 02860, from June 6, 2014, to March 26, 2015 using the following Julian Codes:
1 155 14, 1 183 14, 1 210 14, 1 223 14, 1 239 14, 1 260 14, 1 281 14, 1 282 14, 1 317 14, 1 351 14, 1 020 15, 1 050 15, 1 085 15
Customers who purchased the recalled products should return them to the place of purchase for a full refund.
Consumers with questions may contact customer service at 800-527-0094 Monday through Friday, 8:00 AM to 4:30 PM, CST.
Corn Maiden Foods expands recall of beef and pork products
The products contain hydrolyzed soy protein, an allergen not listed on the label (04/29/2015, ConsumerAffairs, by James Limbach)
Corn Maiden Foods of Harbor City, Calif., has added an additional item to the list of 15,600 pounds of pork products recalled earlier this month.
The products contain hydrolyzed soy protein, an allergen not listed on the label.
There are no reports of adverse reactions due to consumption of these products.
The following products, produced between April 9, 2014 and April 8, 2015, have been added to the original list of recalled items:
- 20.625 lb. cases containing 60 5.5 oz. pieces of “Tamales with Pork Carnitas, Green Chile & Oregano Wrapped in Corn Husks.”
- 9.375 lb. cases containing 60 2.5 oz. pieces of “Tamales with Pork Carnitas, Green Chile & Oregano Wrapped in Corn Husks.”
The recalled products bear the establishment number “EST. 20949” inside the USDA mark of inspection, and were shipped to hotels, restaurants and institutional locations in California.
Consumers with questions about the recall may contact Pascal Dropsy at (310) 784-0400, ext. 221.
Skyline Provisions recalls beef products
The product is contaminated with E. coli O157:H7 (04/29/2015, ConsumerAffairs, by James Limbach)
Skyline Provisions of Harvey, Ill., is recalling 1,029 pounds of beef products.
The product is contaminated with E. coli O157:H7.
There are no reports of illnesses associated with consumption of this product.
The following product, produced between April 15-25, 2015, is being recalled:
- 17 ½ boxes of Aurora Packers Intact Beef Round Flats
The product was sold to Jack & Pat's Old Fashioned Market in Chicago Ridge, Ill., where it was ground and sold in various amounts of ground chuck patties, ground chuck, ground round, sirloin patties and porter house patties.
Consumers with questions regarding the recall may contact Skyline Provisions at (708) 331-1982.
Mayo Clinic doctors getting close to a blood test for cancer
Doctors may soon be able to easily find cancer anywhere in the body (04/28/2015, ConsumerAffairs, by Mark Huffman)
When doctors suspect a patient has a cancerous tumor, they order a biopsy to examine tissue. But if the cancer is lurking in another part of the body, the deadly disease can go unnoticed.
This may soon change because the way doctors look for tumors may soon change. Researchers at Mayo Clinic report success in identifying the source of cancer in patients' gastrointestinal tracts by looking at DNA markers from tumors.
That means physicians might one day be able to find cancer anywhere in the body just by conducting a blood test or examining a stool sample. Not only would it be more convenient for patients, it could save lives by providing earlier diagnosis of a whole series of life-threatening cancers.
Screening the whole body
“What’s exciting about our discovery is that it allows us to stop thinking about screening organs and start thinking about screening people,” said Mayo Clinic's Dr. John Kisiel. “As far as we are aware, this is the first series of experiments that has ever shown this concept.”
However, a somewhat similar screening tool won Food and Drug Administration (FDA) approval last year. In 2014 the agency gave a green light to Cologuard, the first stool-based colorectal screening test that detects the presence of red blood cells and DNA mutations that may indicate the presence of certain kinds of abnormal growths.
These polyps can be cancerous, or pre-cancerous, and previously could only be detected visually through an invasive procedure known as a colonoscopy.
Using a stool sample, Cologuard detects hemoglobin, as well as certain mutations associated with colorectal cancer in the DNA of cells. If the test is positive patients are then advised to undergo a diagnostic colonoscopy.
Not only is the new test less expensive, it could mean millions more people will get screened, preventing thousands of colon cancer deaths.
The Mayo study expands this concept to the entire body. The researchers say that by collecting and cataloging methylated DNA in a blood test, they pinpointed the presence and origin of cancer cells in the body with 80% accuracy.
“We think, based on the data we have, that a blood test could work in the future,” Kisiel said.
One objective of the research is to eliminate the need for the present organ-by-organ search for cancer. Doctors are reluctant to screen for less common cancers because of the high number of false positive results. In other cases, doctors don't spend time and money looking for cancers that might not be there at all.
But if they are present and go undetected, it's bad news for the patient.
“A cancer like pancreatic cancer, although it’s almost uniformly lethal, is not screened for at all in the general population, mainly because it’s rare,” Kisiel said.
More work ahead
Don't expect this cancer-screening blood test to be available anytime soon. Kisiel's tests were conducted on the gastrointestinal tract. The next step is to apply it to the whole body.
“We hope that in the future patients might be able to submit a blood specimen and then we can analyze that blood specimen for the presence and absence of cancer markers,” he said. “And if they are present we hope to be able to determine the anatomic location of the tumor, or the organ from which it originates.”
Video study documents teen distracted driving
In-car cameras capture moments just before and after a crash (04/28/2015, ConsumerAffairs, by Mark Huffman)
A picture is worth a thousand words. Highway safety advocates are hoping in-vehicle video of actual car accidents caused by driver distraction can focus more attention on the problem.
Back in March the AAA Foundation for Traffic Safety reported that distracted driving by teenagers is happening more than anyone previously thought. The foundation reached that conclusion after going to the video tape.
A company called Lytx installs in-vehicle video event recorders in cars. They are part of a driver training system that also collects audio and accelerometer data when a driver triggers an in-vehicle device by hard braking, fast cornering or an impact that exceeds a certain g-force.
Just before and just after crash
The videos are 12-seconds long and provide information from before and after the event. The videos are part of a program for coaching drivers to improve behavior and reduce collisions.
For its study, the foundation was granted permission to analyze the videos -- in particular those featuring teenage drivers. This unique video analysis found that distraction was a factor in nearly 6 out of 10 moderate to severe incidents featuring teen drivers -- 4 times as many as the official estimates based on police reports.
Phones not the only distraction
Phones caused the largest percentage of distractions, but the cameras showed there were plenty of other things distracting young drivers. Cell phone use caused 12% of crashes but looking at something in the vehicle caused 10%. Looking at things outside the vehicle, singing or moving to music, grooming or reaching for something were also sources of distraction.
“It is troubling that passengers and cell phones were the most common forms of distraction given that these factors can increase crash risks for teen drivers,” said AAA CEO Bob Darbelnet. “The situation is made worse by the fact that young drivers have spent less time behind the wheel and cannot draw upon their previous experience to manage unsafe conditions.”
The analysis of the video footage found that drivers looking at their phones had their eyes off the road for an average of 4.1 of the final 6 seconds leading up to an accident. The researchers also measured reaction time in rear-end crashes, finding that many teens distracted by a cellphone never reacted, meaning they slammed into the vehicle in front without ever hitting the brakes or swerving.
The takeaway from the video footage, Darbelnet concludes, is that states need to tighten their graduated driver licensing (GDL) laws, prohibiting cell phone use by teen drivers and restricting passengers to one non-family member for the first 6 months of driving.
Radio Shack agrees to mediation over bankruptcy sale of customer data
Another reminder that you don't own your personal information (04/28/2015, ConsumerAffairs)
Radio Shack today agreed to enter mediation with the attorneys general of Texas, Oregon, Pennsylvania and Tennessee regarding its plan to auction off customer data as part of its bankruptcy restructuring.
The Associated Press reports that on May 11, Radio Shack will auction its intellectual property assets. These assets include Radio Shack's registered trademarks, 73 active or pending patent applications and more than 8.5 million customer email addresses along with 65 million customer names and physical addresses.
But Tuesday, a lawyer for Radio Shack told a Delaware bankruptcy judge that the mediation, including a consumer privacy ombudsman, will start on May 14, three days after the auction.
Actually, an auction already took place this past March, with Standard General LP reportedly the high bidder — but a bankruptcy court had to approve that deal. Texas' attorney general argued at the time that selling the data would be illegal under Texas law, which forbids companies selling personal data in violation of their own stated policies – and signs in Radio Shack stores had proclaimed “we pride ourselves on not selling our private mailing list.” The attorneys general from Pennsylvania, Oregon and Tennessee made similar complaints.
Later, after the attorneys general of four states protested, a privacy ombudsman ruled that customer information was not included as part of the Radio Shack bankruptcy sale. Today's mediation agreement with the four state attorneys general will presumably help Radio Shack and the courts determine just how much of that information can be sold.
Feds fine Regions Bank for gouging customers with illegal overdraft fees
The bank charged overdraft fees to customers who had not opted in for coverage, the CFPB charged (04/28/2015, ConsumerAffairs, by Truman Lewis)
For the first time, the Consumer Financial Protection Bureau has taken action against a bank for violating regulations governing bank overdraft fees.
The bureau announced Tuesday that Regions Bank has been fined $7.5 million for charging overdraft fees to thousands of consumers who had not opted in for overdraft coverage. The fine comes on top of a consent order with the bureau, also announced Monday, requiring the Birmingham, Ala.-based bank to pay back all consumers who had been affected by the unwarranted overdrafts.
“Today the CFPB is taking its first enforcement action under the rules that protect consumers against illegal overdraft fees by their banks,” said CFPB Director Richard Cordray. “Regions Bank failed to ask consumers if they wanted overdraft service before charging them fees. In the end, hundreds of thousands of consumers paid at least $49 million in illegal charges. We take the issue of overdraft fees very seriously and will be vigilant about making sure that consumers receive the protections they deserve.”
Regions Bank operates approximately 1,700 retail branches and 2,000 ATMs across 16 states. With more than $119 billion in assets, it is one of the country's largest banks.
First such action
The action is the first time the bureau has punished a bank for violating overdraft regulations since new federal rules, part of the Electronic Fund Transfer Act, took effect in 2010. Those rules prohibit banks and credit unions from charging overdraft fees on ATM and one-time debit card transactions unless consumers affirmatively opt in. If consumers don't opt in, banks may decline the transaction, but won't charge a fee.
The bureau found that Regions Bank allowed consumers to link their checking accounts to savings accounts or lines of credit. Once that link was established, funds from the linked account would automatically be transferred to cover a shortage in a consumer's checking account. But Regions never provided customers with linked accounts an opportunity to opt in for overdraft coverage. Because those consumers had not opted in, Regions could have simply declined ATM or one-time debit card transactions that exceeded the available balance in both the checking and linked accounts. Instead, the bank paid those transactions, tacking on an overdraft fee of $36, in violation of the opt-in rule.
However, Regions Bank had been aware of the issue for some time. According to the bureau, an internal bank review revealed the violation 13 months after the new overdraft rules went into effect. The bureau said that senior executives at the bank were not made aware of the issue for another year after that, at which point they notified the CFPB. In June 2012, the bank reprogrammed its systems to stop charging the unauthorized fees. Then, this past January, the bank discovered more bank accounts that had been charged unauthorized fees.
The bureau also said that Regions charged overdraft and non-sufficient funds fees with its deposit advance product, called Regions Ready Advance, despite claiming it would not. Specifically, if the bank collected a payment from the consumer's checking account that would cause the consumer's balance to drop below zero, the bank would either cover the transaction and charge an overdraft fee or reject its own transaction and charge a non-sufficient funds fee. At various times from November 2011 until August 2013, the bank charged non-sufficient funds fees and overdraft charges of about $1.9 million to more than 36,000 customers.
Regions Bank voluntarily reimbursed approximately 200,000 consumers a total of nearly $35 million in December 2012 for the illegal overdraft fees discovered then. After the bureau alerted the bank to more affected consumers, Regions returned an additional $12.8 million in December 2013. In January 2015, the bank identified even more affected consumers and is now required to provide them with a full refund. Regions has been ordered to hire an independent consultant to identify all remaining consumers who were charged the illegal fees. Regions will return these fees to consumers, if not already refunded. If the consumers have a current account with the bank, they will receive a credit to their account. For closed or inactive accounts, Regions will send a check to the affected consumers.
The $7.5 million fine the bank has been ordered to pay could have been larger, according to the bureau, which noted the delay in notifying senior bank officials of the violations. But the bureau credited Regions for making reimbursements to consumers and promptly self-reporting these issues to the bureau once they were brought to the attention of senior management.
Don't let con artists fleece you with your own charitable impulses (04/28/2015, ConsumerAffairs)
On Saturday a massive earthquake devastated the Himalayan nation of Nepal, and the dust had barely settled before scammers and con artists started using th...
Home prices post widespread gains in February
Prices have posted year-over-year advances for 34 straight months (04/28/2015, ConsumerAffairs, by James Limbach)
Home prices continued their rise across the country over the last 12 months, according to the S&P/Case-Shiller Home Price Indices.
Both the 10-City and 20-City Composites saw larger year-over-year increases in February than were registered the month before. The 10-City Composite jumped 4.8% year-over-year, versus January's 4.3% advance, while the 20-City Composite was up 5.0%, following a 4.5% increase in January.
The S&P/Case-Shiller U.S. National Home Price Index, which covers all 9 U.S. census divisions, recorded a 4.2% annual advance in February 2015. Denver and San Francisco reported the highest year-over-year gains, as prices increased by 10.0% and 9.8%, respectively, over the last 12 months -- the first double-digit increase for Denver since August 2013.
Seventeen cities reported higher year-over-year price increases in the year ended February 2015 than in the year ended January 2015, with San Francisco showing the largest acceleration. Three cities -- San Diego, Las Vegas and Portland, Ore., -- reported that the pace of annual price increases slowed.
“Home prices continue to rise and outpace both inflation and wage gains,” said David M. Blitzer, Managing Director and Chairman of the Index Committee at S&P Dow Jones Indices. “The S&P/Case-Shiller National Index has seen 34 consecutive months with positive year-over-year gains; all 20 cities have shown year-over-year gains every month since the end of 2012.”
The National Index rebounded in February, reporting a 0.1% change for the month. Both the 10- and 20-City Composites reported significant month-over-month increases of 0.5%, their largest increase since July 2014. Of the 16 cities that reported increases, San Francisco and Denver led all cities in February with gains of 2.0% and 1.4%. Cleveland reported the largest drop as prices fell 1.0%. Las Vegas and Boston reported declines of 0.3% and 0.2%, respectively.
“A better sense of where home prices are can be seen by starting in January 2000 -- before the housing boom accelerated -- and looking at real or inflation adjusted numbers,” said Blitzer. “Based on the S&P/Case-Shiller National Home Price Index, prices rose 66.8% before adjusting for inflation from January 2000 to February 2015; adjusted for inflation, this is 27.9% or a 1.7% annual rate.”
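The "annual rate" in that quote is a compound (geometric) average rather than a simple division. Here is a quick sketch of the conversion, using the 27.9% inflation-adjusted figure from the quote and an approximate span of 15.1 years (January 2000 to February 2015).

```python
def annualized_rate(total_growth, years):
    """Convert a cumulative growth fraction into a compound annual rate."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

# 27.9% cumulative real growth over roughly 15.1 years (January 2000 to February 2015).
# Prints about 1.6%, in line with the roughly 1.7% annual rate quoted above
# (the exact figure depends on the month count and rounding used).
print(f"{annualized_rate(0.279, 15.1):.2%}")
```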
Tyson Foods will cull antibiotics from its chicken flocks
Antibiotic-resistant infections "a global health concern" (04/28/2015, ConsumerAffairs, by Truman Lewis)
Tyson Foods is the latest to say it will phase out the use of human antibiotics. The company says its U.S. broiler chicken flocks will be free of antibiotics by the end of September 2017.
Tyson says it has already stopped using all antibiotics in its 35 broiler hatcheries, requires a veterinary prescription for antibiotics used on broiler farms and has reduced human antibiotics used to treat broiler chickens by more than 80 percent since 2011.
“Antibiotic-resistant infections are a global health concern,” said Donnie Smith, president and CEO of Tyson Foods. “We’re confident our meat and poultry products are safe, but want to do our part to responsibly reduce human antibiotics on the farm so these medicines can continue working when they’re needed to treat illness.”
Tyson said it is also forming working groups with independent farmers and others in the company’s beef, pork and turkey supply chains to discuss ways to reduce the use of human antibiotics on cattle, hog and turkey farms.
In December 2013, the Food and Drug Administration (FDA) formulated a plan under which food manufacturers are being asked to voluntarily withdraw the routine use of human antibiotics in animals raised as food.
“We need to be selective about the drugs we use in animals and when we use them,” said William Flynn, DVM, MS, deputy director for science policy at FDA’s Center for Veterinary Medicine (CVM). “Antimicrobial resistance may not be completely preventable, but we need to do what we can to slow it down.”
Tyson said it will work with food industry, government, veterinary, public health and academic communities, and provide funding, to accelerate research into disease prevention and antibiotic alternatives on the farm.
“One of our core values is to serve as responsible stewards of animals – we will not let sick animals suffer,” Smith said. “We believe it’s our responsibility to help drive action towards sustainable solutions to this challenge by working with our chicken, turkey, beef and pork supply chains.”
A smartphone and GPS for your dog
It's the latest in wearable computers (04/28/2015, ConsumerAffairs)
If you or your dog is high-tech and gadgets are your thing, there is something you might want to check out. It's new and it's coming out this summer. It was developed by Motorola and video streaming/VoIP app developer Hubble. It's called SCOUT 5000.
Many new tech products are wearable and this falls right into that category. In essence it's a smartphone for your dog. The only thing it doesn't have is a keypad so your dog can call you if it starts getting lonely. Don't worry -- it has everything else.
The collar is a little bulky because it carries the smartphone, which can track your dog’s weight and physical activity; it also has GPS on it, so you know if your pup is hanging with the wrong crowd.
There is no hiding who he hangs with, because the collar has a webcam on it, so you can see who your dog is having a face-to-face with -- or, well, a rear-end-to-rear-end, to put it politely. You will know where that nose has been.
The camera is capable of sending 720p video directly to the owner's smartphone. You can speak back to your pup via the collar, issuing commands or offering a soothing voice for agitated pets. All of this of course delivered via an app directly to your smartphone.
You have to wonder how Lassie made it in this world without all of these things.
If you have a dog that's on the smaller side and you are concerned this sounds like an awful lot for a little guy to be wearing around his neck, Motorola has you covered.
A smaller version has been made, called the SCOUT 2500. It is minus the webcam feature but can still give you location and location is everything.
Both of the devices are actually made by Binatone Global, which produces a number of other Scout- and Bark-branded pet products under the Motorola brand.
Nobody likes to be fenced in, especially your dog, but what’s unique about this device is its geo-fencing feature, which can create boundaries for dogs and emit a high-pitched sound to keep the dog from crossing them.
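For readers curious how a geo-fence like this works under the hood, the core check is simply comparing each GPS fix against a stored boundary. Here is a minimal sketch assuming a circular fence; the coordinates, radius and alert behavior are illustrative only, not Motorola's actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (40.7128, -74.0060)   # hypothetical center of the allowed area
RADIUS_M = 100               # hypothetical fence radius in meters

def check_fix(lat, lon):
    """Return True (and warn) if the dog is outside the fence."""
    outside = haversine_m(lat, lon, *HOME) > RADIUS_M
    if outside:
        print("Dog outside the fence -- play the high-pitched warning tone")
    return outside

check_fix(40.7130, -74.0055)  # a fix well inside the 100 m boundary -> False
```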
The SCOUT 5000 will be available this June and will retail for $200 in the U.S., with other regional releases yet to be determined. The smaller SCOUT 2500 will sell for $99.
Greystone Foods recalls vegetable products
The products may be contaminated with Listeria monocytogenes (04/28/2015, ConsumerAffairs, by James Limbach)
Greystone Foods of Birmingham, Ala., is recalling Today’s Harvest frozen Field Peas with Snaps, Broccoli Florets, and Silver Queen Corn.
The products may be contaminated with Listeria monocytogenes.
No illnesses have been reported to date in connection with this problem.
The recalled products come in 32-oz clear plastic bags and are sold in the freezer section of Publix Supermarkets. The sell by date of 04/21/16 is printed along the bottom seal of the bag in black ink.
Customers who purchased these products should return them to the store where they were purchased for a refund.
Consumers with questions may call 1-205-945-9099.
Alpine Sausage Kitchen recalls beef and pork products
The products contain soy, an allergen not listed on the label (04/28/2015, ConsumerAffairs, by James Limbach)
Alpine Sausage Kitchen of Albuquerque, N.M., is recalling approximately 3,350 pounds of beef and pork products.
The products contain soy, an allergen not listed on the label.
There are no reports of adverse reactions due to consumption of these products.
The following sausage products, produced from February 4, 2014 through March 26, 2015, are being recalled:
- 10-lb. boxes containing “Alpine Sausage Kitchen VIENNA SAUSAGE Calcium Reduced Dried Skim Milk Added.”
- 10-lb. boxes containing “Alpine Sausage Kitchen COOKED BRATWURST (GERMAN BRAND).”
The products subject to recall bear the establishment number “EST. 7060” inside the USDA mark of inspection.
The recalled items were shipped to a retail location in Texas.
Consumers with questions about the recall may contact William Schmaeh at (505) 266-2853.
Hong Ha recalls beef and pork products
The products contain wheat flour, eggs and milk, allergens not listed on the label (04/28/2015, ConsumerAffairs, by James Limbach)
Hong Ha of Hyattsville, Md., is recalling approximately 10,164 pounds of beef and pork products.
The products contain wheat flour, eggs and milk, allergens not listed on the product label.
There are no reports of adverse reactions due to consumption of these products.
The following beef and pork items, produced between December 1, 2014, and April 23, 2015, are being recalled:
- 6-oz. packages containing “HONG HA GIO HUE (Vietnamese Brand Seasoned Pork Patty Mix)”
- 7-oz. packages containing “HONG HA NEM (Vietnamese Brand Fresh Seasoned Pork Pattie Mix for Barbecue)”
- 8-oz. packages containing “HONG HA CHA CHIEN (Vietnamese Brand Seasoned Pork, Patty Mix)”
- 12-oz. vacuum sealed packages containing “HONG HA BO VIEN GAN (Vietnamese Brand Beef & Pork Meat Balls. Beef Tendons Added)”
- 12-oz. vacuum sealed packages containing “HONG HA BO VIEN (Vietnamese Brand Beef & Pork Meat Balls)”
- 14-oz. packages containing “HONG HA NEM NUONG (Vietnamese Brand Seasoned Pork Meat Balls, Anchovy Flavored Fish Sauce Added)”
- 14-oz. packages containing “HONG HA GIO SONG (Vietnamese Brand Fresh Seasoned Pork, Pattie Mix)”
- 32-oz. banana leaf and plastic packages containing “HONG HA GIO DAC BIET (Vietnamese Brand Seasoned Pork Patty Mix)”
The recalled products bear the establishment number “EST. 4261” inside the USDA mark of inspection, and were shipped to restaurants and retail locations in Maryland and Virginia.
Consumers with questions about the recall may contact Magdi Abadir at (301) 341-1175.
Nylabone Products recalls Puppy Starter Kit
The product may be contaminated with Salmonella (04/28/2015, ConsumerAffairs, by James Limbach)
Nylabone Products of Neptune, N.J., is recalling one lot of its 1.69-oz. package of the Puppy Starter Kit dog chews.
The product may be contaminated with Salmonella.
No illnesses have been reported to date in connection with this problem.
The recalled Puppy Starter Kit consists of one lot of dog chews that were sold nationwide, in Canada, and through one domestic online mail order facility.
It comes in a 1.69-oz. package marked with Lot #21935, UPC 0-18214-81291-3, and an expiration date of 3/22/18 on the back of the package.
Customers who purchased the recalled product should discontinue use of it and may return the unused portion to the place of purchase for a full refund.
Consumers with questions may contact the company at 1-877-273-7527, Monday through Friday from 8:00 am – 5:00 pm CT.
Ford recalls Ford Fiestas, Fusions, and Lincoln MKZs
The vehicles may have a broken door latch pawl spring tab (04/28/2015, ConsumerAffairs, by James Limbach)
Ford Motor Company is recalling approximately 390,000 model year 2012-2014 Ford Fiestas, and 2013-2014 model year Ford Fusions and Lincoln MKZs.
The door latch in the recalled vehicles may experience a broken pawl spring tab, which typically results in a condition where the door will not latch. If a customer is then able to latch the door, there is potential the door may unlatch while the vehicle is being driven, increasing the risk of injury.
Ford is aware of two reports of soreness resulting from an unlatched door bouncing back when the customer attempted to close it, and one accident report when an unlatched door swung open and struck an adjacent vehicle as the driver was pulling into a parking space.
Approximately 390,000 vehicles are located in North America, including 336,873 in the U.S. and federalized territories, 30,198 in Canada and 22,514 in Mexico.
Dealers will replace all four door latches at no cost to the customer.
Wireless service begins to go the way of DSL as new entrants pit network against network
Google's Project Fi and smaller players like FreedomPop are acting as spectrum wholesalers (04/27/2015, ConsumerAffairs, by James R. Hood)
Think back a decade ago. If you wanted broadband, DSL was all that was available in many areas. And it was available only as an add-on to your landline telephone service. Then cable systems began offering broadband service and the telephone companies reluctantly began offering DSL on a standalone basis.
That, says Stephen Stokols, CEO of a small company called FreedomPop, is what's about to happen to wireless service, with a big psychological boost from Google, which last week announced its Project Fi, a wireless phone and data service that automatically switches between traditional cellular and wi-fi networks, offering low-cost, no-contract service to customers.
It's something FreedomPop has been doing for quite a while, but Stokols told ConsumerAffairs Google's announcement is "sort of an endorsement from the most geeky company in the world on what telco may look like in the future."
FreedomPop and other small companies, like Republic Wireless, aren't threatened by Google's move, Stokols says, describing it instead as a shot across the bow of the embedded wireless carriers like AT&T and Verizon.
A new paradigm
The model the new entrants are pursuing basically pits network against network in real time on every single call -- switching the call from Sprint to T-Mobile to wi-fi on the basis of who has the strongest signal at that moment. Google does this with software built into the phone; FreedomPop does it with an app.
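Conceptually, that per-call decision boils down to "pick the strongest available network right now." A toy sketch of the idea in Python -- the network names and dBm readings are invented, and this is not Google's or FreedomPop's actual selection logic:

```python
# Toy model of per-call network selection by signal strength.
# Readings are in dBm; values closer to zero are stronger. All values invented.
available_networks = {
    "home-wifi": -48,
    "carrier-a": -95,
    "carrier-b": -72,
}

def pick_network(readings):
    """Return the name of the strongest network at this moment."""
    return max(readings, key=readings.get)

print(pick_network(available_networks))  # -> "home-wifi"
```

Real services would presumably also weigh call quality, handoff cost and pricing, but the strongest-signal comparison is the heart of the pitch described above.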
At $20 a month, Google's plan is actually more expensive than FreedomPop's -- which, like Republic's, starts at $5 a month -- and is comparable to some prepaid wireless plans. And since it initially works only on Google Nexus 6 phones -- making up less than 1% of the wireless universe -- it's not an immediate threat to anyone at the retail level.
What it is, says Stokols, is the first step in a strategy aimed at disaggregating, blowing up, in other words, the stranglehold that the big carriers currently enjoy. Initially, it's aimed at demonstrating to other equipment manufacturers -- Samsung, Apple, HTC, etc. -- that consumers will vote with their checkbooks.
If that happens, the manufacturers will be more likely to build network-switching intelligence into their phones, setting the stage for the new carriers to begin scaling up quickly.
"If Samsung and all the OEMs (phone manufacturers) adopted the same technology that lets devices switch between networks, that switches power from the carriers to the consumer," Stokols said. "Then a wholesaler like us, we can push more traffic to better carriers, play the carriers off each other and get the best deal for consumers."
No more roaming
The new services also promise to send the roaming concept straight into the history books, somewhere in the chapter that explains what "long distance" charges were.
Those old enough to remember long distance will tell you that it was what you paid to place a phone call from, say, New York to Chicago. Sure, if you lived in New York City, you could call Westchester County for free (depending on a zillion inexplicable ifs, ands and buts). But if you wanted to call Chicago, it would cost you 20 or 30 cents a minute, depending on yet another set of completely mysterious rules called tariffs.
In truth, there was no actual physical cost to the telephone companies to complete so-called "trunk" calls, except for the half-cent or so that they charged each other -- charges based on calculations of their "embedded costs," outlined in accounting reports similar to the hieroglyphics found in ancient cave dwellings.
Likewise, when the Googles and FreedomPops of the world have negotiated deals with wireless carriers and lined up open wi-fi networks worldwide, there will be no easy rationalization for international roaming charges.
Although Stokols would not confirm it, industry sources say that FreedomPop will be announcing free international roaming to one or more countries later this week.
Leaving Sprint and T-Mobile by the wayside may also become more commonplace. Stokols said his company currently has 8.5 million wi-fi hotspots and is adding new locations daily.
In 18 to 24 months, he said, he hopes to have enough wi-fi hotspots to offer a wi-fi-only plan that would be extremely inexpensive, possibly even free, for the first half gigabyte or so.
That, he estimates, would appeal to the 80 million or so consumers who are sporadic prepaid users or do not have wireless service at all.
Back in the day, the U.S. government subsidized phone companies by allowing them to tack on a "Universal Service" fee, something that survives to this day. Its stated intention was to bring telephone service to every wide spot in the road. It took decades to get into the 90% neighborhood.
If Stokols' plan works, universal wireless service may become a reality without fees in just a few years. Stay tuned.
Corinthian Colleges ceases operations, closes all remaining schools
Corinthian-run Everest, WyoTech and Heald campuses closed; other Everest locations remain open (04/27/2015, ConsumerAffairs)
Corinthian Colleges, the long-embattled chain of for-profit (and not necessarily accredited) schools, announced on its website that it would close all of its remaining campuses effective today. Those campuses include “Everest and WyoTech campuses in California, Everest College Phoenix and Everest Online Tempe in Arizona, the Everest Institute in New York, and 150-year-old Heald College -- including its 10 locations in California, one in Hawaii and one in Oregon.”
Take note: although Corinthian does – or did – operate schools under the Everest name, not all Everest schools were run by Corinthian, so not all of them will be closing. For example: when ConsumerAffairs called the Everest College campus in Woodbridge, Virginia, this morning, we were told that it was not shutting down since Corinthian did not own it.
The CCI website says, “The company is working with other schools to provide continuing educational opportunities for its approximately 16,000 students. Corinthian said those efforts depend to a great degree on cooperation with partnering institutions and regulatory authorities.”
Translation: Those efforts depend to a great degree on whether any reputable, regionally accredited educational institutions will accept transfer credits from Corinthian courses -- and Everest schools, Corinthian-owned or otherwise, have a poor track record in that regard.
California Attorney General Kamala D. Harris said Corinthian "continued to deceive its students to the end."
"Closure of these campuses should help students get out from under the mountains of debt Corinthian imposed upon them through its lies," Harris said. "Federal and state regulators rightly acted to prevent taxpayer dollars from flowing to Corinthian, which preyed on the educational dreams of vulnerable people such as low-income individuals, single mothers and veterans by misleading students and investors about job placement rates and course offerings."
In February 2013, for example, an Everest graduate sued the school, alleging that none of the credits he took at Everest were transferable to a state community college. Many consumers posting on ConsumerAffairs have complained of problems transferring their credits.
“I attended Everest here in Miami in 2010,” former student Lucy said in a ConsumerAffairs posting last summer. “At the time I had no high school diploma. I completed a test that qualified me for the pharmacy technician program. ... I passed with flying colors.”
But that hasn't done Lucy much good. “To make a long story short, I am $13,000 in debt and still no employment in my field of study,” she said. “We cannot transfer our education credits because it's not considered real.”
Last June, the Department of Education temporarily halted all federal student aid to Corinthian-owned schools. In September, the feds sued Corinthian on charges of predatory lending practices toward its students. (Remember, too, that student loan debt is far worse than other kinds, because student loans can't even be discharged in bankruptcy.)
Hefty fine levied
Less than two weeks ago, the Department of Education levied a $30 million fine against Corinthian, and ordered its Heald College schools to stop enrolling new students, after an investigation “confirmed cases” that the company misrepresented the schools' job placement rates to current and prospective students of Corinthian-owned Heald Colleges.
For example: the DoE's investigation found that Heald paid companies to hire graduates for temporary positions lasting as little as two days, performing such basic tasks as moving computers and organizing cables, then counted those graduates as “placed in field.” (In many instances, those temp jobs were actually on Heald campuses.) Heald also counted obvious out-of-field jobs as in-field placements, including one graduate of an accounting program whose food-service gig at Taco Bell was counted as “in-field” work.
Despite all of this, the closing announcement on the Corinthian Colleges website says that, “The Company said that its historic graduation rate and job placement rates compared favorably with community colleges,” and quoted Corinthian's CEO, Jack Massimino, as saying “We believe that we have attempted to do everything within our power to provide a quality education and an opportunity for a better future for our students.”
Report: Russian hackers could read President Obama's email correspondence
Last summer's White House hacking was even worse than previously admitted (04/27/2015, ConsumerAffairs)
Last summer, hackers with suspected Russian-government backing were able to breach computer network security at the State Department, then use that as a jumping-off point to later hack into the network of the White House itself — though not until earlier this month did the public learn about the White House hacking.
At the time, it was reported that the hackers had gained illicit real-time access to information including non-public details of the president's own daily schedule. However, although they were able to get such sensitive data, White House spokespeople said the hackers were unable to get any classified data, including national security-related information. (In government-security terms, the words “sensitive” and “classified” have distinctly different meanings.)
But this Saturday, the New York Times reported that last summer's White House hacking went deeper than previously admitted, with the hackers even getting access to some of President Obama's email correspondence, according to unnamed “senior American officials.”
That said, White House officials still maintain that the hackers never accessed any classified information. (Most senior officials have two different work-computers connected to two different networks: one connected to a highly secure classified network, and another computer connected to the outside world's Internet for unclassified communication.)
The problem is that despite those dual networks, classified and unclassified communications still aren't segregated as strictly as they should be; certain sensitive (though not officially “classified”) communications still end up going through the unclassified Internet connections, including schedules and email exchanges with diplomats and ambassadors.
An anonymous official told the Times that the hacking “has been one of the most sophisticated actors we’ve seen,” while another official admitted, “It’s the Russian angle to this that’s particularly worrisome.”
Last week, in a possibly unrelated incident, researchers at the FireEye cybersecurity firm announced their discovery of certain zero-day software flaws which had been exploited by hackers from a Russian espionage campaign to spy on American defense contractors, NATO officials and diplomats, and others in whom Russia's government might take a particular interest.
Not just Russia
But Russia's is not the only foreign government suspected of supporting such illicit cyberwarriors. Last November, for example, the United States Postal Service admitted that hackers (with suspected connections to the Chinese government) breached the USPS database and stole the names, addresses, Social Security numbers, emergency contacts and similar information for all post office employees.
At the time, security experts said they suspected that the USPS hackers were the same people behind last July's hacking of the federal Office of Personnel Management; those hackers managed to steal data on up to 5 million government employees and contractors who hold security clearances.
The Chinese are also suspected of involvement in the Anthem insurance company hacking announced in February – possibly because a lot of defense contractors, including employees of Northrop Grumman and Boeing, get their insurance coverage through Anthem.
However, the Chinese government has denied all such allegations, and points out that hacking is illegal under Chinese law. The Russian government has not admitted to involvement with any American hackings, either.
Not as effective as proper diet, many researchers now conclude (04/27/2015, ConsumerAffairs, by Mark Huffman)
When people want to lose a few pounds, their first thought may be to head off to the gym for some exercise. But it turns out it takes a lot of exercise to...
The world of drug pricing is shadowy. It pays to shop around (04/27/2015, ConsumerAffairs, by Dr. Ron Gasbarro)
Carol came into the pharmacy with a prescription for her beloved dog Mandy, a border collie. “The vet said she has Lyme disea...
Plastic containers continue to pile up on land and sea
Plastic is forever, and that's the problem (04/27/2015, ConsumerAffairs, by Mark Huffman)
Unless you are a frequent visitor to landfills or sail the world's oceans, you aren't likely to encounter the mountains of plastic on land or islands of it floating in the sea.
But if you are observant in everyday life, when you visit the supermarket, fast food restaurants and discount stores filled with packaged consumer items, you may begin to appreciate the world's ever-increasing use of disposable plastic.
The problem with plastic is how to dispose of it. Since it is not biodegradable, it basically lasts forever, clogging the world's waste disposal system.
It's not just plastic bottles and food containers that are the problem. 5 Gyres, a non-profit environmental group focused on plastic pollution, is trying to bring attention to the problems posed by tiny plastic grains, known as microbeads and used in a large number of cosmetics and personal care products.
The group says these microbeads eventually make their way to our waterways and wildlife, and eventually are ingested by humans through the food chain, toothpastes or other body products that contain microbeads.
"Poorly designed products escape consumer hands and waste management systems," said Anna Cummins, co-founder of 5 Gyres. "Plastic fragments become hazardous waste in the environment.”
5 Gyres is partnering with Whole Foods Market in the North Atlantic region to conduct an innovative #Ban the Bead campaign. Rainbow Light, a nutritional supplement manufacturer, is sponsoring a 5 Gyres sea expedition, starting in June, to conduct research on marine plastic pollution.
“We engage with companies like Rainbow Light that are championing design solutions to the problem of plastic pollution,” Cummins said. “Their EcoGuard bottles are an excellent example of the impact conscious companies can make to keep harmful plastics out of the waste stream.”
Concerns about plastic pollution have gained momentum since February, when a marine study calculated that between 4.8 million and 12.7 million metric tons of plastic waste enter the oceans from land each year -- roughly three times previous estimates.
The problem has mobilized efforts from a variety of sources, some of them surprising. Bloomberg News reports a Dutch teenager last year secured $2 million in funding to build an ocean clean-up machine to pick up floating plastic debris and funnel it to specific collection points. But it's literally a drop in the ocean.
The Natural Resources Defense Council (NRDC) says it is working with international leaders and organizations such as the UN Environment Program to help establish international guidelines for curbing plastic pollution.
What you can do
In the meantime, the group says consumers can help by cutting disposable plastics out of daily routines. It suggests bringing your own bag to the store, choosing reusable items wherever possible, and purchasing plastic with recycled content.
Recycling is another method of cutting back on the mushrooming growth in plastic.
“Each piece of plastic recycled is one less piece of waste that could end up in our oceans,” the group says.
Finally, it says being aware of how you are contributing to the problem and taking steps to reduce your use of plastic can also help.
Toddlers downing coffee in Boston (and maybe elsewhere)
The results of a recent study came as a surprise to researchers (04/27/2015, ConsumerAffairs)
McDonald's used to be the worry when it came to our kids and what they were putting in their bodies; now Starbucks may be the next big target, at least in Boston. About one in seven two-year-olds in Boston drinks coffee, according to a recent study led by Boston Medical Center (BMC) that was published online recently in the Journal of Human Lactation.
"Our results show that many infants and toddlers in Boston -- and perhaps in the U.S. -- are being given coffee and that this could be associated with cultural practices," principal investigator Anne Merewood, director of the Breastfeeding Center at Boston Medical Center, said in a medical center news release.
Does anyone really want to spend the day with a toddler hyped up on caffeine? Some cultures apparently embrace it. Research showed that children between birth and age 5 in Australia, Cambodia and Ethiopia are given coffee, and that kids from Hispanic households also drank coffee at an early age.
Not much research
There hasn't been much research on coffee consumption by infants, but what has been documented is that two-year-olds who drank coffee between meals or at bedtime were three times more likely to be obese in kindergarten. The U.S. has not provided guidelines on coffee consumption for children.
Using data from a study on infant weight gain and diet, the researchers looked at 315 mother-infant pairs to determine what and how much infants and toddlers were consuming. They examined everything a toddler would drink, such as breast milk, formula, water and juice -- and were shocked to find there was something they had missed: coffee.
At one year, the rate of coffee consumption reported was 2.5 percent of children. At two years, that number increased to just above 15 percent, and the average daily consumption for these children was 1.09 ounces.
Other studies have shown what you would most likely suspect when children consume caffeine: depression and diabetes in a good number of cases, along with sleep problems and a high incidence of substance abuse and obesity.
What wasn’t mentioned in the study but is a problem is that when children drink coffee it affects their teeth. Coffee is acidic. Acidic drinks can cause damage in the mouth by weakening teeth; this leads to a decline in tooth enamel and an increase in cavities. Children are more prone to cavities than adults, as it takes years for new enamel to harden after baby teeth have been lost and adult teeth have come in.
Flowers bloom in spring and so does dry eye
Study finds springtime is prime time for eye dryness, irritation (04/27/2015, ConsumerAffairs, by Truman Lewis)
Ah, 'twill soon be May, the lusty month of May, when the sap rises, hope springs eternal and eyes begin to itch. Far from bringing a gleam to the eye, airborne allergens bring on millions of cases of dry eye each year, a new study finds. (In truth, dry eye peaks in April but a little poetic license is perhaps permissible).
The University of Miami study found that dry eye -- the little understood culprit behind red, watery, gritty feeling eyes -- strikes most often in spring, just as airborne allergens are surging.
The study, published in Ophthalmology, marks the first time that researchers have discovered a direct correlation between seasonal allergens and dry eye.
Dry eye can significantly impact a person's quality of life by inducing burning, irritation and blurred vision. It affects about 1 in 5 women and 1 in 10 men, and costs the U.S. health care system nearly $4 billion a year.
Allergies and dry eye have historically been viewed as separate conditions but the discovery that the two conditions are linked suggests dry eye sufferers may benefit from allergy prevention in addition to dry eye treatments like artificial tears.
For instance, wearing goggles outside for yard work and using air filters indoors may stave off springtime dry eye, the researchers say.
They discovered the correlation between allergies and dry eye by reviewing 3.4 million visits to Veterans Affairs eye clinics nationwide over a five-year period between 2006 and 2011. During that time, doctors diagnosed nearly 607,000 patients with dry eye. Researchers also charted the monthly prevalence of dry eye compared to an allergy index over time and found seasonal correlations:
- A seasonal spike occurred each spring, when 18.5 percent of patients were diagnosed with dry eye. Another spike came in winter.
- Prevalence of dry eye was lowest in summer, at 15.3 percent.
- April had the highest monthly prevalence of dry eye cases: 20.9 percent of patients seen were diagnosed with dry eye that month. This coincided with the yearly peak in allergens (including pollen), as measured by the allergy index recorded on pollen.com.
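As a rough cross-check of those percentages, the overall rate implied by the two figures reported above (3.4 million clinic visits reviewed, roughly 607,000 dry eye diagnoses) takes one line of arithmetic; this is just a calculation on the article's quoted numbers, not part of the study's own analysis:

```python
visits = 3_400_000      # VA eye clinic visits reviewed, 2006-2011
diagnoses = 607_000     # dry eye diagnoses among those visits

print(f"Overall dry eye rate: {diagnoses / visits:.1%}")
# Prints about 17.9%, between the summer low (15.3%) and the April peak (20.9%).
```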
The research team hypothesizes that the winter rise in cases of dry eye may be due to low indoor humidity caused by people using heaters indoors without a humidifier to offset the dryness.
"For the first time, we've found what appears to be a connection between spring allergens like pollen and dry eye, but also saw that cases rose in winter," said lead researcher Anat Galor, M.D., MSPH, associate professor of clinical ophthalmology at the University of Miami. "Finding this correlation between dry eye and different seasons is one step toward helping physicians and patients treat the symptoms of dry eye even more effectively based on the time of year."
For more information on dry eye, visit the American Academy of Ophthalmology's public information website.
Jeni's Splendid Ice Creams recalls all products
The product may be contaminated with Listeria monocytogenes (04/27/2015, ConsumerAffairs, by James Limbach)
Jeni’s Splendid Ice Creams is recalling all ice creams, frozen yogurts, sorbets, and ice cream sandwiches for all flavors and containers.
The product may be contaminated with Listeria monocytogenes.
In addition, the company is ceasing all sales and closing all scoop shops until all products are ensured to be 100% safe.
The company says it is not aware of any illness reports to date related to the recalled products.
The recalled products were distributed to retail outlets, including food service accounts and grocery markets throughout the U.S., as well as online at jenis.com.
Customers who purchased any of these products should dispose of them or return them to the store where they were purchased for an exchange or full refund.
Consumers may contact Jeni’s Splendid Ice Creams at 614-360-3905 from 9 am to 5 pm (E.D.T.) Monday through Friday or by email at [email protected].
General Motors recalls Buick Regals and Chevrolet Impalas and Monte Carlos
The valve cover gasket may leak (04/27/2015, ConsumerAffairs, by James Limbach)
General Motors is recalling 1,207 model year 2004 Buick Regals manufactured April 9, 2003, to June 26, 2003, 2004 Chevrolet Impalas manufactured April 8, 2003, to June 25, 2003, and 2004 Chevrolet Monte Carlos manufactured April 7, 2003, to June 25, 2003.
The valve cover gasket may leak, dripping engine oil onto the hot surface of the exhaust manifold and increasing the risk of a fire.
GM will notify owners, and dealers will replace the spark plug wire retainer to redirect the dripping oil. Vehicles that have a 3.8L V6 supercharged engine will also have the left valve cover gasket replaced. These repairs will be done free of charge. The manufacturer has not yet provided a notification schedule.
Owners may contact Buick customer service at 1-800-521-7300 or Chevrolet customer service at 1-800-222-1020. GM's number for this recall is 14574.
Inventure Foods recalls Fresh Frozen Vegetables and Select Jamba "At Home" smoothie kits
The products may be contaminated with Listeria monocytogenes (04/27/2015, ConsumerAffairs, by James Limbach)
Inventure Foods is recalling certain varieties of its Fresh Frozen line of frozen vegetables, as well as select varieties of its Jamba “At Home” line of smoothie kits.
The products may be contaminated with Listeria monocytogenes.
There are no known illnesses linked to consumption of these products to date.
The recalled Fresh Frozen products were distributed to retail outlets, including food service accounts, mass merchandise stores and supermarkets in Alabama, Arizona, Arkansas, Florida, Georgia, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maryland, Michigan, Mississippi, Nebraska, North Carolina, Ohio, Oklahoma, South Carolina, Tennessee, Texas, Virginia, West Virginia and Wisconsin.
The recalled Jamba “At Home” smoothies’ products were distributed to retail outlets, including mass merchandise stores and supermarkets east of the Mississippi River.
The following products are being recalled:
Customers who purchased the recalled products should not consume them and should return them to the store where they were purchased for a full refund.
Consumers with questions may call 866-890-1004, 24/7, or email the firm at [email protected].
Volkswagen recalls Golf, GTI, and Audi A3 vehicles
The fuel pump could fail (04/27/2015, ConsumerAffairs, by James Limbach)
Volkswagen Group of America is recalling 6,204 model year 2015 Volkswagen Golf, GTI, and Audi A3 vehicles.
Improper nickel plating of components within the fuel pump may result in the fuel pump failing. If the fuel pump fails, the vehicle will not start, or if the engine is running, it will stop and the vehicle will stall, increasing the risk of a crash.
Volkswagen will notify owners, and dealers will inspect the vehicles and replace any affected fuel pumps, free of charge. The manufacturer has not yet provided a notification schedule.
Owners may contact Volkswagen at 1-800-893-5298 or Audi at 1-800-253-2834.
West Liberty Foods recalls grilled chicken breast products
The products may be contaminated with pieces of plastic (04/27/2015, ConsumerAffairs, by James Limbach)
West Liberty Foods of Tremonton, Utah, is recalling approximately 34,075 pounds of grilled chicken breast products.
The products may be contaminated with pieces of plastic.
There are no reports of adverse reactions due to consumption of these products.
The following grilled chicken patties, produced on February 4, 2015, are being recalled:
- 25-lb. cardboard boxes containing five 5-lb. plastic bags of “SUBWAY FULLY COOKED GRILLED CHICKEN BREAST PATTY WITH RIB MEAT”
The recalled products bear the establishment number “EST. 34349 or P-34349” inside the USDA mark of inspection, and were shipped to distributors in Illinois, Oklahoma, Minnesota, Utah, and Texas.
Consumers with questions about the recall may contact Renee Miller at (319) 627-6114.
Comcast pulls plug on Time-Warner deal
Too big to sail, the massive merger sinks below sight (04/24/2015, ConsumerAffairs, by James R. Hood)
They said it couldn't be done. They were right. After months of unremitting pressure from consumer activists and growing scrutiny from regulators, Comcast has abandoned its merger with Time Warner.
The $45 billion deal would have created a massive monolith touching nearly every American in one way or another. More than fears of reduced price and service competition, it was the fear that program producers would be strangled by having to deal with such a powerful distributor that energized opponents.
Opponents warned the merger would set off a new round of consolidation, as cable channels and networks combined forces to strengthen their bargaining positions.
Of course, all this comes as the cable TV business itself begins to unravel. The rapid movement of video programming to the Internet threatens to leave the Comcasts of the world as mere pipes -- conduits through which Netflix, HBO and other producers and packagers reach consumers.
Some economists will tell you that it's when an industry has passed its peak that consolidation sets in. Witness newspapers, which have always been remarkably skilled in staying two paces behind everyone else.
It is only in the last few years that, having consolidated themselves into an amorphous mass, big newspaper companies have conceded the existence of television. They are now rushing to spin off their newspaper properties in a mad rush to buy up TV stations around the country, having not yet noticed that over-the-air TV is about to go the way of the stagecoach.
Comcast finally threw in the towel after the Federal Communications Commission joined the Federal Trade Commission in saying it wanted to take a much closer look at the effect the merger would have. Until this week, both companies had continued to insist the deal would fly, thinking perhaps that the millions of dollars they had spent on lawyers, lobbyists and log-rollers could not possibly have been in vain.
“Today, we move on. Of course, we would have liked to bring our great products to new cities, but we structured this deal so that if the government didn’t agree, we could walk away,” Comcast CEO Brian Roberts said in a news release Friday.
Analysts were already speculating that it was Roberts who would be moving on once shareholders realized how much money and competitive advantage had been wasted on the failed attempt.
Time Warner Cable CEO Robert D. Marcus also took comfort from his P.R. staff's ability to spin just about anything in a favorable light.
“Throughout this process, we’ve been laser-focused on executing our operating plan and investing in our plant, products and people,” Marcus said.
Left at the dock is Charter Communications, which had been scheduled to play tugboat and shove off with 4 million subscribers that would have been declared excess ballast by Comcast and Time Warner.
Health care providers get creative to expand and improve care
Are physician assistants and nurse practitioners becoming the new face of health care? (04/24/2015, ConsumerAffairs, by Mark Huffman)
Patients increasingly don't go to see a family doctor when they are in need of health care services. They are more likely to head off to one of the growing number of walk-in clinics and urgent care facilities, or even hospital emergency rooms.
When they do, they are increasingly likely to be seen by a physician assistant (PA) rather than a doctor. These health care providers are medically trained and licensed and work under the supervision of a physician.
Unlike doctors, they are likely to spend more time with the patient and have more intimate knowledge of their medical issues. According to the American Academy of Physician Assistants (AAPA), a PA conducts physical exams, diagnoses and treats illnesses, orders and interprets tests, develops treatment plans, writes prescriptions, assists with surgery, makes hospital rounds and advises on preventive care.
In short, they do lots of things a doctor does. Because one physician might supervise more than one PA, these providers add a level of efficiency to the health care system.
They are one sign of sweeping changes in U.S. health care, but only one. Nurse practitioners (NP) are another.
NPs are clinicians blending clinical expertise in diagnosing and treating health conditions with an added emphasis on disease prevention and health management. Like PAs, NPs are often the only provider a patient might see for routine medical needs.
The American Association of Nurse Practitioners estimates NPs conduct 916 million U.S. patient visits each year.
Besides expanding the roles of non-physician clinicians, the health care system has also launched innovative programs at hospitals and clinics, usually designed to reduce hospitalization time or make it totally unnecessary.
In one such program at The Valley Hospital in Ridgewood, N.J., teams comprised of a paramedic, critical care nurse and EMT have begun making house calls on heart patients soon after their discharge.
Yes, house calls, that long-abandoned practice of a doctor coming to your house to administer treatment. In this case, the program's aim is heading off a return trip to the emergency room or admission to the hospital.
"Patients with cardiopulmonary disease, particularly those with heart failure and chronic obstructive pulmonary disease, are particularly vulnerable to re-hospitalization, especially during the transitional period after they first arrive home," said Lafe Bush, a paramedic and director of Emergency Services at Valley.
Reducing readmission rate
He notes that the 30-day readmission rate nationwide for patients with heart failure is nearly 25%. The majority of readmissions occur within 15 days of discharge.
The program, launched last August, targets patients with cardiopulmonary disease at high risk for hospital readmission who either decline or do not qualify for home care services. The team visits the patient and provides a full assessment, including a physical exam, a safety survey of the patient's home, medication education, reinforcement of discharge instructions and confirmation that the patient has made an appointment for a follow-up visit with his or her physician.
These trends have been gathering momentum over the last 2 decades, picking up speed in recent years. They are largely in response to what government policymakers described in 2008 as an “inefficient, unstable and convoluted” health care system, prompting them to put in place incentives rewarding better care instead of more care.
Privacy advocates, AirBnb fight proposed California state law
State Senate Bill 593 passes committee this week (04/24/2015, ConsumerAffairs)
Privacy advocates have joined AirBnb in opposing a proposed California state law which would require home sharing platforms to give local and county governments a wide variety of information about their users, including hosts' rental addresses, the number of guests, length of their stay and how much they pay.
California Senate Bill 593, titled “Residential units for tourist or transient use: hosting platforms,” also allows municipalities in the state to ban the practice if they wish, and impose penalties on residents who flout such bans.
But most opposition to the bill focuses on the privacy angle in cities where home-sharing would be allowed. David Owen, AirBnb's public policy head, said in a blog post that the bill would require the company to “hand over broad swaths of confidential, personal information to bureaucrats who will sift through it in search of potential violations of local planning and zoning laws,” which would “fundamentally alter the online privacy protections most Californians have come to expect. Internet commerce is a universal part of so many Californians’ lives, and sharing economy platforms like Airbnb have a duty to protect the private data of our community – and lawmakers have a responsibility to protect their constituents’ important privacy interests.”
But state senator Mike McGuire (D-Healdsburg), SB 593's sponsor, says that the bill only “enforces the local laws that are on the books,” and that “Multibillion-dollar corporations need to do their part, follow local laws, and share in the prosperity of local communities.”
Legal battles galore
Airbnb has faced legal battles wherever it's tried to operate. Last October, San Francisco passed a law specifically allowing residents to rent out their own homes for “short term rentals,” provided they follow certain guidelines. But at the same time, on the opposite side of the country, New York's attorney general started cracking down on Airbnb hosts in the state, in an effort to “investigate and shut down illegal hotels.”
This week, SB 593 took a step closer to passing into law, after it passed the California senate's transportation and housing committee by an 8-0 vote.
Pepsi to remove aspartame from its diet soda offerings
Aspartame-free diet drinks will hit store shelves in August (04/24/2015, ConsumerAffairs)
The Pepsi company announced today that, in response to consumer demand, it will stop using aspartame to sweeten its American-market diet sodas.
Starting in August, the drinks Diet Pepsi, Caffeine Free Diet Pepsi and Wild Cherry Diet Pepsi will be sweetened with sucralose and acesulfame potassium (also known as Ace-K) rather than aspartame. The recipe switch will make the various forms of Diet Pepsi the only American diet soda not sweetened with aspartame, according to the trade industry publication Beverage Digest.
Aspartame is a combination of methanol and two amino acids: aspartic acid and phenylalanine. It was invented in 1965, and in 1974 the Food and Drug Administration approved its use as a food additive. It's about 180 times sweeter than sugar, letting it impart the same amount of sweetness with a far lower caloric punch, which is why it's so popular in diet sodas.
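The "lower caloric punch" follows directly from that sweetness ratio: gram for gram, aspartame and sugar carry roughly the same calories (about 4 kcal per gram as a nutrition-label approximation), but only about 1/180th as much aspartame is needed for the same sweetness. A quick illustrative calculation -- the 180x figure is the one quoted above, and the serving size is arbitrary:

```python
KCAL_PER_G = 4          # rough label value for both sucrose and aspartame
SWEETNESS_RATIO = 180   # aspartame vs. sugar, as quoted above

sugar_g = 10                              # sugar in a lightly sweetened drink
aspartame_g = sugar_g / SWEETNESS_RATIO   # amount needed for equal sweetness

print(f"{sugar_g} g sugar -> {sugar_g * KCAL_PER_G} kcal")
print(f"{aspartame_g:.3f} g aspartame -> {aspartame_g * KCAL_PER_G:.2f} kcal")
# About 40 kcal vs. about 0.22 kcal for the same perceived sweetness.
```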
On the other hand, many reputable studies have shown that for people trying to lose weight or avoid gaining any, sugar might paradoxically be a better option than low-calorie artificial sweetener, due to chemical reactions in the brain: when you get a “sugar craving” (more specifically, when your brain generates a sugar craving), the only thing that'll satisfy the craving and make it go away is the release of dopamine, a chemical necessary for “reward signaling” in the brain.
And, as it turns out, the digestion and breakdown of sugar produces dopamine to satisfy those cravings – but the breakdown of artificial sweeteners does not. So if you have a sugar craving and eat something sugar-free, that sensation of sweetness on your tongue will not give your brain any dopamine, thus your craving does not go away, and after eating the sugar-free item you're just as likely to eat something else, to satisfy the craving.
That said, most opposition to aspartame is based not on this possible paradox, but on allegations that aspartame is harmful for human consumption. Which it is – in sufficiently high doses. But aspartame's supporters (including the FDA) say the amount of aspartame used to sweeten food isn't remotely close to that danger level. Indeed, in low doses, aspartame's two amino acids are actually necessary for the body to function properly. (If you eat a proper diet and are in generally good health, your body should actually produce a certain number of these amino acids on its own.)
As for methanol – yes, it's deadly poisonous in high quantities, but tiny amounts of it can already be found in alcoholic beverages including beer, wine and whiskey.
Paracelsus, the medieval physician now called the “father of toxicology,” famously coined the phrase “the dose makes the poison.”
In other words, any substance is poisonous in high enough doses — even those substances required for life. Even clean, healthy water will kill you if you drink too much too fast. Vitamins that are essential to good health and proper body functioning in small quantities will poison you if you eat a whole bottle of multivitamins at once. The mere fact that something is poisonous in high quantities does not necessarily mean that it's dangerous in small quantities.
Still, there remain many American consumers who say they want diet soda without aspartame, and Pepsi's plan makes it the first American soda company to offer this. But Pepsi's longtime rival the Coca-Cola company responded to the news by saying that it has no plans to change the sweeteners used in Diet Coke. “All of the beverages we offer and ingredients we use are safe,” Coke said in a prepared statement.
Texas votes to unsnarl hairbraiding regulations
Braiders and barbers are not the same, and don't need the same regulations (04/24/2015, ConsumerAffairs)
Good news for various Texas entrepreneurs: yesterday the state House of Representatives voted unanimously in favor of HB 2717, to deregulate businesses which teach or perform the art of traditional African hairbraiding.
Texas law sets strict regulations on barbers and cosmetologists, primarily on safety grounds: those trades require (among other things) the use of sharp tools and potentially dangerous chemicals. Braiding hair does not, yet in 2007, when the state started regulating hairbraiders and teachers of the art, it mandated that they meet the same licensing requirements as barbers or cosmetologists.
Dallas resident and African hairbraiding expert Isis Brantley has been braiding hair professionally for over 30 years — and the law has hassled her over it for almost that long.
She started braiding hair at home in her kitchen, but was arrested when she tried opening a salon. “As soon as I opened up the shop, wow, the red tape was wrapped around my hands,” she told the Texas Tribune. “Seven cops came in, in front of my clients, and arrested me and took me to jail like a common criminal. The crime was braiding without a cosmetology license.”
Brantley spent years challenging the legal hairbraiding restrictions in court, and in 2007, the state modified the requirements somewhat: henceforth, hairbraiders seeking a license would only have to show 35 hours of formal training rather than 1,500 hours, and Brantley specifically was “grandfathered in” and granted a braiding license.
So she won the right to legally braid hair, but when she tried opening a school to give others the 35 hours of instruction they'd need to become legally licensed hairbraiders, the state told her that a braiding school would have to meet the same standards as a barber school.
Brantley sued the state in 2013, saying that the barber regulations on her braiding school were unconstitutional and unreasonable. The non-profit Institute For Justice, which joined Brantley in filing her suit, outlined the requirements Texas set before Brantley could legally teach the art of traditional African hairbraiding:
… Isis must spend 2,250 hours in barber school, pass four exams, and spend thousands of dollars on tuition and a fully-equipped barber college she doesn’t need, all to teach a 35-hour hairbraiding curriculum. Tellingly, Texas will waive all these regulations if Isis goes to work for an existing barber school and teaches hairbraiding for them.
That “fully equipped” barber college would have to include barber chairs and hair-washing stations, neither of which are required to braid hair.
In January, a federal judge ruled that Texas' regulations on hairbraiding schools were unconstitutional and did nothing to advance public health or safety, nor meet any other legitimate government interest.
During that trial Arif Panju, the Institute For Justice attorney who represented Brantley in her court battle, noted that the state couldn't identify a single hairbraiding school capable of meeting those strict barber-school requirements.
After the trial, he said that the judge's ruling “makes it crystal clear to the Legislature that what’s happening here is nothing to do with public health and safety and everything to do with economic protectionism.”
New York sues tanning salons for minimizing skin cancer risks
"Nothing safe about indoor tanning," NY attorney general declares04/24/2015ConsumerAffairsBy Truman Lewis
Saying there is "nothing safe about indoor tanning," New York Attorney General Eric T. Schneiderman today filed lawsuits against two tanning salon franchises -- Portofino Spas, LLC and Total Tan, Inc., and served notice he also intends to sue Beach Bum Tanning Salons and Planet Fitness.
The suits accuse the salons of false advertising by
- denying or minimizing scientific evidence linking tanning to an increased cancer risk;
- promoting indoor tanning as a safe way to reap the benefits of vitamin D and other purported health benefits; and
- asserting the safety of indoor tanning compared to tanning outdoors.
Altogether the four franchises operate 155 tanning salons around the state.
“Make no mistake about it: There is nothing safe about indoor tanning. The use of ultra-violet devices increases exposure to cancer-causing radiation and puts millions of Americans in serious danger – young adults, in particular,” said Schneiderman. “Irresponsible businesses that seek to rake in profits by misleading the public about the safety of their services will be held accountable by my office. Advertising and marketing cannot be used as a tool to confuse and endanger New York consumers.”
Over the past decade, scientific evidence has clearly documented the dangers of indoor tanning, Schneiderman said. By 2009, the World Health Organization added indoor tanning to its list of most dangerous forms of cancer-causing radiation and placed it in the highest cancer risk category: “carcinogenic to humans,” the same category as tobacco.
In July 2014, the U.S. Surgeon General issued a “Call to Action To Prevent Skin Cancer,” a report documenting the rise in skin cancers and outlining action steps to prevent these cancers going forward, including reduction of intentional, and unnecessary, ultraviolet (UV) light exposure for the purpose of tanning.
Indoor tanning increases the risk of melanoma, the deadliest form of skin cancer – which is responsible for 9,000 deaths in the United States each year. Indoor tanning also increases the risk of nonmalignant skin cancers (basal cell carcinoma and squamous cell carcinoma). While not deadly, these nonmalignant cancers can cause noticeable disfigurement. In addition to increasing the risk of skin cancer, UV exposure can also harm the immune system and cause premature skin aging.
New York law currently prohibits tanning for children under 17 and requires parental consent for children between the ages of 17 and 18.
Additionally, New York law requires that warning signs be posted outside of tanning beds, that tanning hazards information sheets and acknowledgement forms be distributed to tanning patrons, and that free protective eyewear be made available to tanning patrons.
The Attorney General’s lawsuit alleges that Portofino did not post the required state warning sign near every tanning device as required by law, and that Total Tan required patrons to pay for protective eyewear that is supposed to be provided to consumers free of charge.
In the face of the scientific evidence linking indoor tanning and early onset of skin cancer, some indoor tanning salon businesses have sought to counter the scientific evidence by purposefully advertising the opposite message – that indoor tanning actually improves health.
Parents too often send sick children to day care, study finds
Some simply have no alternative, others don't realize the potential consequences | 04/24/2015 | ConsumerAffairs | By Christopher Maynard
A parent’s intuition can be a very valuable thing when it comes to their young ones. Knowing when something is wrong can give them plenty of time to act. But are some parents not being as vigilant about child illnesses as they could be? A recent study from the University of Bristol says that this may be so.
The study, published in The Journal of Public Health, shows that most parents often think that coughs and colds are less serious than other types of illnesses. Unfortunately, most parents are not doctors or medical professionals. Many young children are sent to nurseries with more serious illnesses, which in turn spread to other kids and the wider community.
The study interviewed 31 parents about the decisions they make when their children are feeling sick. Many variables were considered, including the parent’s attitude towards illness, their current plan for dealing with a sick child, and other extenuating circumstances that could alter their decision to send their child to nursery.
After reviewing all answers, the research team found that other factors often overrode a parent’s decision to keep a sick child home. Dr. Fran Carroll, who is the lead author of the study, explains how some parents were simply uninformed and did not know what to do.
“They [the parents] often felt the guidance [from nurseries] was less clear on respiratory symptoms than for sickness and diarrhea, or chicken pox, for example.”
Other reasons for not keeping a child home were more practical. Many parents simply couldn’t miss time from work because of financial consequences, or did not have an alternative care plan in place.
Parents from the study had many suggestions on how to avoid these problems in the future. One of the biggest hurdles they pointed out was nursery fees that are still charged when a sick child stays home. Reducing these fees, they said, would go a long way. Being able to swap sessions and having clearer guidance on nursery sickness policies would also be beneficial.
The researchers believe that their study has a lot of potential.
"Our findings may not be news to many parents, but this is the first time their decision-making processes in these situations has been documented…by having this work published in a peer-reviewed journal, it gives an academic, methodologically sound basis for future work and interventions to try and reduce the spread of illnesses in these settings," Carroll said.
Agriculture Department joins effort to cut food waste
Develops app to help consumers keep track of expiration dates | 04/24/2015 | ConsumerAffairs | By Mark Huffman
The U.S. Agriculture Department's (USDA) Food Safety and Inspection Service (FSIS) says American consumers waste billions of pounds of still-edible food because they aren't sure if the food has spoiled.
They look at the sell-by date and see that it is long past, so they throw it out. FSIS says they shouldn't. As we have previously reported, sell-by dates are different from use-by dates on packaging.
USDA estimates that 21% of the available food in the U.S. goes uneaten at the consumer level. It says on average, 36 pounds of food per person is wasted each month at the retail and consumer levels.
To help consumers better understand how different storing methods affect a product’s shelf life, FSIS has introduced a free app called FoodKeeper. It's designed to help consumers maximize the storage life of foods and beverages and remind them to use items before they are likely to spoil.
You can download the app here.
“Many products might have a sell-by date of April 1 but they could be good in your pantry for another 12 or 18 months,” said Chris Bernstein, spokesman for FSIS. “By throwing those out, what you're doing is contributing to food waste in the United States. Say you buy a box of fresh pasta, which is good for a limited amount of time, you can have your calendar tell you a couple of days before that fresh pasta is going to go bad that you should think about eating it.”
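To make the reminder idea in Bernstein's pasta example concrete, here is a minimal Python sketch of that kind of logic. The shelf-life table, the two-day lead time and the function name are assumptions made for illustration; they are not details of the actual FoodKeeper app.

```python
from datetime import date, timedelta

# Hypothetical shelf-life table (days after purchase); not FoodKeeper's real data.
SHELF_LIFE_DAYS = {
    "fresh pasta": 3,
    "dry pasta (pantry)": 365,
}

def reminder_date(item, purchased_on, lead_days=2):
    """Return the date to remind the consumer to use the item,
    a couple of days before it is expected to spoil."""
    spoil_date = purchased_on + timedelta(days=SHELF_LIFE_DAYS[item])
    return spoil_date - timedelta(days=lead_days)

# Example: fresh pasta bought today -> reminder one day from now.
print(reminder_date("fresh pasta", date.today()))
```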
U.S. Food Waste Challenge
The app is part of an effort by USDA and the U.S. Environmental Protection Agency (EPA) called the U.S. Food Waste Challenge. Launched 2 years ago, it urges players across the food chain – farms, agricultural processors, food manufacturers, grocery stores, restaurants, universities, schools, and local governments – to help reduce food waste by improving product development, storage, shopping/ordering, marketing, labeling, and cooking methods.
It also connects potential food donors to hunger relief organizations; food that isn't fit for human consumption is redirected to feed animals or to create compost, bioenergy, and natural fertilizers.
Just Eat It
This all appears to be part of a growing trend to make sure more of the food we produce gets consumed. A recent documentary, “Just Eat It,” focuses on food waste at the producer and retail level. Much of the food we produce never makes it to the supermarket because of its appearance, for example.
The film follows one couple as they swear off grocery shopping and try to subsist on food that would otherwise go in the dumpster.
For consumers, there are very good economic reasons to get on the anti-food waste band wagon.
When food product goes to waste, less of it gets to market. The laws of supply and demand would suggest that eating more of the food that currently goes in a landfill might not lower prices but might keep them from rising as fast.
RB recalls Mucinex Fast-Max products
The products' back label may not list all active ingredients | 04/24/2015 | ConsumerAffairs | By James Limbach
RB -- formerly Reckitt Benckiser -- is recalling certain lots of liquid bottles of Mucinex Fast-Max Night Time Cold & Flu; Mucinex Fast-Max Cold & Sinus; Mucinex Fast-Max Severe Congestion & Cough and Mucinex Fast-Max Cold, Flu & Sore Throat.
The over-the-counter medications, which correctly label the product on the front of the bottle and list all active ingredients, may not have the correct corresponding drug facts label on the back.
This mislabeling could cause the consumer to be unaware of side effects and/or risks associated with the ingestion of certain product ingredients which include Acetaminophen, Dextromethorphan, Guaifenesin, Phenylephrine and/or Diphenhydramine.
Consumers would not be adequately warned of side effects that could potentially lead to health complications requiring urgent medical intervention, particularly in the case of acetaminophen use by people with liver impairment, by those who consume three or more alcoholic drinks, or when taking other medicines containing this active ingredient without consulting a doctor.
RB is asking consumers to dispose of any unused product in the following manner:
- Mix liquid medicines with an unpalatable substance such as kitty litter or used coffee grounds; place the mixture in a container such as a sealed plastic bag; and throw the container in your household trash.
Consumers who purchased the recalled products may contact the RB Mucinex Fast-Max recall toll-free number at 1-888-943-4215 between 8:00 a.m. and 8:00 p.m. EST.
List of potentially affected batches
General Motors recalls Cadillac CTS-V vehicles
The braking system could become corroded and leak | 04/24/2015 | ConsumerAffairs | By James Limbach
General Motors is recalling 4,907 model year 2004-2007 Cadillac CTS-V vehicles manufactured between September 6, 2003, and June 11, 2007.
The recalled vehicles are currently registered, or were originally sold, in Connecticut, Delaware, Illinois, Indiana, Iowa, Maine, Maryland, Massachusetts, Michigan, Minnesota, Missouri, New Hampshire, New Jersey, New York, Ohio, Pennsylvania, Rhode Island, Vermont, West Virginia, Wisconsin and the District of Columbia.
Snow or water containing road salt or other contaminants may corrode the front brake hose fitting at the caliper. Corrosion may cause the brake system to leak which could lengthen the distance needed to stop the vehicle and increase the risk of a crash.
GM will notify owners, and dealers will replace both front brake hose assemblies, free of charge. The manufacturer has not yet provided a notification schedule.
Owners may contact Cadillac customer service at 1-800-458-8006. GM's number for this recall is 15149.
Rising cost of drugs gets new scrutiny
Government investigators prodded into probe | 04/23/2015 | ConsumerAffairs | By Mark Huffman
All of a sudden, health care policymakers, inside government and out, are taking a hard look at the high price of medication.
At a time when health care is more accessible, many consumers are finding the drugs that are being prescribed are prohibitively expensive. Even generic drugs, which are cheaper than their name brand equivalents, often aren't that much cheaper.
Sen. Bernie Sanders (I-VT) has prodded the Department of Health and Human Services (HHS) Inspector General to find out why generic drug costs have recently gone up.
“It is unacceptable that Americans pay, by far, the highest prices in the world for prescription drugs,” Sanders said. “Generic drugs were meant to help make medications affordable for millions of Americans who rely on prescriptions to manage their health needs. We’ve got to get to the bottom of these enormous price increases.”
Sanders says an analysis of data from the Centers for Medicare and Medicaid Services shows 10% of generic drugs more than doubled in price in a recent year. He says drug companies were not cooperative when he asked them to turn over records on prices. Since federal law requires companies to give that data to HHS, he said he has appealed to that agency to shed some light on the issue.
Expensive arthritis drugs
A new study of Medicare coverage of a class of medication known as biologic disease modifying drugs (DMARDS) underscores the rising costs of many commonly-prescribed medications. The study found that one DMARD, used to treat rheumatoid arthritis (RA), costs the typical Medicare recipient $2,700 out of pocket before catastrophic coverage kicks in.
For most DMARDs, the study found that consumers absorb nearly 30% of the cost during the initial phase of their treatment.
The study, published in the medical journal Arthritis & Rheumatology, says DMARDs have been a game-changer in the treatment of RA, a chronic autoimmune disease affecting 1.3 million Americans. Without this class of drugs, the authors say 1 in 3 RA patients are permanently disabled within 5 years of disease onset.
Treatment based on drug cost
"While specialty DMARDs have improved the lives of those with chronic diseases like RA, many patients face a growing and unacceptable financial burden for access to treatment," said Dr. Jinoos Yazdany with the Division of Rheumatology at the University of California, San Francisco and lead author of the present study. "Rather than determining which drug is best for the patient, we find ourselves making treatment decisions based on whether patients can afford drugs."
Even consumers sharply divided over the Affordable Care Act, or Obamacare, appear to be somewhat united on the issue of drug prices. When the Kaiser Family Foundation released a survey on Obamacare this week, it showed consumers pretty evenly divided on whether they approved of the new law, breaking down along partisan and ideological lines.
But when the survey touched on drug prices, there was surprising consensus.
When asked to choose their biggest health care priority, 76% said “making sure that high-cost drugs for chronic conditions, such as HIV, hepatitis, mental illness and cancer, are affordable to those who need them.” The 76% included strong majorities of Democrats, Republicans and Independents.
Promising results in macular degeneration treatment
Researchers say stem cell treatment yields positive results | 04/23/2015 | ConsumerAffairs | By Mark Huffman
This story has been removed because of questions about the accuracy of the news release on which it was based.
Activists pressing for a ban on the main ingredient in Roundup weed killer
Activist group providing test kits for the general public | 04/23/2015 | ConsumerAffairs | By Christopher Maynard
The Environmental Protection Agency (EPA) is taking another look at glyphosate -- the weed killer more commonly known as Roundup, manufactured by Monsanto. The agency declared it a carcinogen in 1985 but later reversed that decision. The chemical is up for review this year.
Use of glyphosate has increased dramatically in recent years and it is now used on a variety of crops that are grown for consumers. These include wheat, corn, soybeans, and many other foods we eat every day.
Besides the renewed interest from the EPA, the World Health Organization recently reported that the chemical is “probably carcinogenic to humans.”
In 2011, Reuters reported that 271 samples of soybeans out of 300 had glyphosate residue on them. Although the levels found were below EPA tolerance levels, this still raises some concerns among health advocates.
Monsanto officials say the WHO report is a “dramatic departure from the conclusion reached by all regulatory agencies around the globe” and say it's not based on any new scientific evidence.
Health and safety advocates are putting heat on the EPA. The Organic Consumers Association (OCA), in conjunction with the Feed the World Project, today said it was launching the world’s first glyphosate testing for the general public. The project, with specific focus on women and children in the U.S., is offering the first-ever validated public glyphosate testing for urine, water and soon breast milk.
“For decades now, the public has been exposed, unknowingly and against their will, to glyphosate, despite mounting evidence that this key active ingredient in Monsanto’s Roundup herbicide is harmful to human health and the environment,” said Ronnie Cummins, OCA’s international director. “Monsanto has been given a free pass to expose the public to this dangerous chemical, because individuals, until now, have been unable to go to their doctor’s office or local water testing company to find out if the chemical has accumulated in their bodies, or is present in their drinking water.”
Cummins said the widespread availability of testing will build support for banning or restricting the use of glyphosate on food crops.
“We expect that once the public learns how widespread the exposure has been, and how it has personally invaded their bodies and homes — in the context of the recent report from the World Health Organization that glyphosate is a probable human carcinogen — public pressure will eventually force governments worldwide to finally ban Roundup.”
Panasonic needs to sharpen its focus on customer service, critics complain
Camera buffs are upset over slow, expensive or non-existent warranty and repair service | 04/23/2015 | ConsumerAffairs | By Truman Lewis
Just making a good product isn't enough to keep consumers happy. You have to provide outstanding service as well. This seems to be something that has escaped Panasonic's attention.
We've heard from many Panasonic camera owners who have had frustrating warranty and service experiences, leading them to vow they'll never darken Panasonic's door again. Michael of Glendale, Calif., who provided the photo shown above, is one of several who've complained of dark spots appearing in their photos.
"Panasonic customer service is the WORST I think I've ever dealt with. It's unbelievable," he said in a ConsumerAffairs review last August. He'll get no argument from Will, who filed this video review:
What's a consumer to do when a company fails to follow through on its warranty claims? Sadly, there's not much an individual can do in cases like these. There's not enough money involved to justify a lawsuit and in many instances, the consumer isn't in a country where Panasonic has offices.
But the power still lies with the consumer. Posting reviews on sites like this can, over time, compel companies to clean up their act, even if it doesn't provide immediate relief for people like Will.
California HOA fines couple $50 per day for drought-busting artificial turf
Proposed bills before state legislature would prohibit such HOA behavior | 04/23/2015 | ConsumerAffairs
A California homeowners' association, or HOA, is fining a couple $50 per day for replacing their lawn with artificial turf to help reduce their water use.
California is currently deep into the fourth year of a record-shattering and still-worsening drought. Governor Jerry Brown declared an official state of emergency in January 2014, and since then the state government has passed a series of water-conservation measures including various mandatory water-use restrictions – which have not prevented various HOAs and even municipalities throughout the state from nonetheless mandating lush green lawns despite ever-drier conditions.
In July, the state legislature voted for and the governor signed a law prohibiting HOAs from penalizing homeowners whose lawns turn brown during drought conditions. However, there's currently no such law protecting homeowners who replace thirsty genuine lawns with waterless fake ones, though there are a couple of proposals before the state legislature.
KABC reports that the Morrison Ranch Estates Homeowners' Association, in the L.A. suburb of Agoura Hills, has been fining residents Rhonda and Greg Greenstein $50 per day, ever since the Greensteins installed artificial turf — without first getting permission from their HOA board. (Hence a common complaint about HOAs: at their worst, they combine all the responsibilities of homeownership with all the restrictions of renters who dare not make any change to their domiciles without the landlord's [or HOA board's] approval.)
HOA president Jan Gerstel said that “We have to enforce the rules here. Unfortunately, sometimes people don't like what the rules are.” He also said that the HOA board had previously considered changing the rules to allow artificial turf but ultimately voted against it because, as he told KABC, “About eight months that we researched (artificial turf), we did not find significant water savings with artificial turf.”
Greg Greenstein, by contrast, says that he and his wife haven't had to water their fake lawn at all, and “We project that we'll save 2/3 of our water bill throughout the end of the year.”
The HOA is suing the Greensteins over the fake lawn; Greenstein told CBS-Los Angeles that their court date is scheduled for June 8, by which time their total HOA fine will be more than $5,000.
But it's possible that the argument will be legally moot by then, since a proposed bill before the state legislature would, if passed into law, require HOAs to allow artificial turf on residents' lawns. Greenstein said “I refuse to pay [the HOA fine]. I just have to wait for Gov. Brown to sign off on artificial turf.”
There's no guarantee the governor will do so, despite the statewide drought conditions. The state legislature passed similar proposed bills in 2010 and 2011, but then-governor Schwarzenegger vetoed the 2010 bill and current-governor Brown vetoed the next one.
Still, the drought's had four years to grow in severity since that last veto. Last December, the San Diego County Water Authority proposed a bill that would require HOAs to allow fake lawns.
More recently, state assemblywoman Lorena Gonzalez (D-San Diego) responded to news of the Greensteins' plight by mentioning that she is sponsoring a bill which would prevent HOAs from doing such things. Gonzalez says the bill should go before the Assembly this May, and that even though the governor has previously vetoed similar bills, she hopes this time will be different because “I expect the Governor, given his commitment to changing behavior in this drought, probably will take a second look at it.”
New home sales plunge in March
Housing prices were mixed | 04/23/2015 | ConsumerAffairs | By James Limbach
Sales of new single-family houses dropped sharply last month.
Data released jointly by the U.S. Census Bureau and the Department of Housing and Urban Development show sales were down 11.4% in March -- to a seasonally adjusted annual rate of 481,000. At the same time, the February rate was revised up from the initially reported 539,000 to 543,000.
Even with the large March decline, the sales rate is 19.4% above the year-ago level of 403,000.
The median sales price of new houses sold in March was $277,400 -- down $4,900 from a year earlier. The average sales price posted a year-over-year gain of $11,800 -- to $343,300.
The seasonally adjusted estimate of new houses for sale at the end of last month was 213,000, representing a supply of 5.3 months at the current sales rate.
From the Federal Housing Finance Agency (FHFA), word that its monthly House Price Index (HPI) was up 0.7% in February after rising 0.3% a month earlier.
For the nine census divisions, seasonally adjusted monthly price changes ranged from -1.3% in the East South Central division to +1.8% in the South Atlantic division.
The 12-month changes were all positive, ranging from +2.6% in the Middle Atlantic division to +6.9% in the Pacific division.
The FHFA HPI is calculated using home sales price information from mortgages sold to or guaranteed by Fannie Mae and Freddie Mac.
Separately, the Labor Department reports first-time applications for unemployment benefits inched up by 1,000 in the week ending April 18 to a seasonally adjusted 295,000.
The government says there were no special factors affecting this week's initial claims.
The 4-week moving average was 284,500 -- up 1,750 from the previous week. The 4-week tally is less volatile than the initial claims data and considered a more accurate barometer of the labor market.
The full report is available on the DOL website.
Surprise! Online forums may be good for you
Study finds forums have benefits for individuals and society | 04/23/2015 | ConsumerAffairs | By Truman Lewis
At various times, it's been thought that the following things, among others, were bad for you: Facebook, video games, online forums, rock 'n roll and reading by firelight.
Could be, but a new study exonerates online forums, finding that they have positive links to well-being and are associated with increased community engagement offline.
Research just published in the journal Computers in Human Behavior found that online forums have benefits for both individuals and wider society and are of greater importance than previously realized.
Although seemingly eclipsed in the past decade by social networking sites such as Facebook and Twitter, forums are still regularly used by around 10% of online users in the UK and 20% in the US.
The study's authors say the apparent benefits derive partly from the fact that forums are one of the few remaining online spaces that offer anonymous interaction.
"Often we browse forums just hoping to find answers to our questions," said lead author Dr. Louise Pendry of the University of Exeter. "In fact, as well as finding answers, our study showed users often discover that forums are a source of great support, especially those seeking information about more stigmatising conditions."
Pendry said the study found that online forum users were also more likely to get involved in related activities offline, such as volunteering, donating or campaigning.
"In a nutshell, the more users put into the forum, the more they get back, and the pay-off for both users themselves and society at large can be significant," said Dr. Jessica Salvatore of Sweet Briar College in Virginia.
In the study, users were approached on a range of online discussion forums catering to a variety of interests, hobbies and lifestyles. Those recruited to the study were classified in two groups: those whose forum subject could be considered stigmatized (such as those dealing with mental health issues, postnatal depression or a particular parenting choice for example) or non-stigma-related forums (such as those for golfers, bodybuilders and environmental issues).
They were asked a set of questions relating to their motivations for joining the discussion forum, the fulfilment of their expectations, their identification with other forum users, their satisfaction with life and their offline engagement with issues raised on the forum.
The study is published in the journal Computers in Human Behavior.
Kayem Foods recalls sausage products
The products may be contaminated with pieces of plastic | 04/23/2015 | ConsumerAffairs | By James Limbach
Kayem Foods of Chelsea, Mass., is recalling approximately 59,203 pounds of fully cooked chicken sausage products.
The products may be contaminated with pieces of plastic.
There are no reports of any injuries associated with consumption of these products.
The following chicken sausage products, produced on various dates in March 2015, are being recalled:
- 12-oz. packages of Trader Joe’s brand “Sweet Apple Chicken Sausage” with the case code 9605 and use by/freeze by dates of “4 22 15,” “4 25 15,” and “4 29 15.”
- 8-oz. packages of al fresco brand “Apple Maple Fully Cooked Breakfast Chicken Sausage” with the case code 9709 and use by/freeze by dates of “JUN 13 2015” and “JUN 20 2015.”
The recalled products bear the establishment number “P7839” inside the USDA mark of inspection, and were shipped to retail locations nationwide.
The problem was discovered after the firm received complaints from two consumers who found small pieces of plastic in the product.
Consumers with questions about the recall may contact Joellen West at (800) 426-6100 ext. 247.
La Clarita Queseria Queso Fresco Fresh Cheese recalled
The product may be contaminated with Staphylococcus aureus | 04/23/2015 | ConsumerAffairs | By James Limbach
Queseria La Poblanita of New York, N.Y., is recalling La Clarita Queseria Queso Fresco Fresh Cheese.
The product may be contaminated with Staphylococcus aureus
No illnesses have been reported to date.
The recalled Spanish-style cheese is sold in 12-oz. plastic tub packages with a label declaring a plant # 36/8585, and a product lot code of MAY 13, 2015. It was distributed to stores and delis in the metropolitan New York area.
Consumers who purchased the recalled product should return it to the place of purchase or discard it.
Audi Q3 vehicles recalled
The sunroof may continue to close instead of stopping when the vehicle is turned off | 04/23/2015 | ConsumerAffairs | By James Limbach
Volkswagen Group of America is recalling 3,646 model year 2015 Audi Q3 vehicles manufactured April 4, 2014, to November 5, 2014.
If the vehicle is turned off while the sunroof is closing, the sunroof may continue to close instead of stopping. If a vehicle occupant is in the sunroof's path, there is an increased risk of injury.
Audi will notify owners, and dealers will update the sunroof control module software, free of charge. The recall began on April 13, 2015.
Owners may contact Audi customer service at 1-800-253-2834. Volkswagen's number for this recall is 60C1.
Trek recalls bicycles equipped with front disc brakes
The front wheel could come to a sudden stop or separate from the bicycle | 04/23/2015 | ConsumerAffairs | By James Limbach
Trek Bicycle Corporation of Waterloo, Wis., is recalling about 998,000 Trek bicycles equipped with front disc brakes in the U.S. and Canada.
An open quick release lever on the bicycle’s front wheel hub can come into contact with the front disc brake assembly, causing the front wheel to come to a sudden stop or separate from the bicycle, posing a risk of injury to the rider.
The company reports 3 incidents, all involving injuries. One incident resulted in quadriplegia. One incident resulted in facial injuries. One incident resulted in a fractured wrist.
This recall involves all models of Trek bicycles from model years 2000 through 2015 equipped with front disc brakes and a black or silver quick release lever on the front wheel hub that opens far enough to contact the disc brake.
Bicycles with front quick release levers that do not open a full 180 degrees from the closed position are not included in this recall.
The bicycles, manufactured in China and Taiwan, were sold at bicycle stores nationwide from about September 1999, through April 2015, for between $480 and $1,650.
Consumers should stop using the bicycles immediately and contact an authorized Trek retailer for free installation of a new quick release on the front wheel. Trek will provide each owner who participates in the recall with a $20 coupon redeemable by December 31, 2015 toward any Bontrager merchandise. (The coupon has no cash value.)
Consumers may contact Trek at (800) 373-4594 from 8 a.m. to 6 p.m. CT Monday through Friday.
Lenovo expands recall of ThinkPad Notebook battery packs
The battery packs can overheat | 04/23/2015 | ConsumerAffairs | By James Limbach
Lenovo of Morrisville, N.C., is recalling about 166,500 ThinkPad notebook computer battery packs in the U.S. and Canada. About 37,400 were recalled in the U.S. and Canada in March 2014.
The battery packs can overheat, posing a fire hazard.
The company has received 4 reports of incidents of battery packs overheating and damaging the computers, battery packs and surrounding property. One incident included a consumer's skin being reddened and burn marks on the consumer's clothing.
This recall involves Lenovo battery packs sold with the following ThinkPad notebook computers: the Edge 11, 13, 14, 15, 120, 125, 320, 325, 420, 425, 430, 520, 525 and 530 series; the L412, L420/421, L512 and L520 series; the T410, T420, T510 and T520 series; the W510 and W520 series; and the X100e, X120e, X121e, X130e, X200, X200s, X201, X201s, X220 and X220t series.
The battery packs were also sold separately. The black battery packs measure between 8 to 11 inches long, 1 to 3 inches wide and about 1 inch high. Recalled battery packs have one of the following part numbers starting with the fourth digit in a long series of numbers and letters printed on a white sticker below the bar code on the battery pack: 42T4695, 42T4711, 42T4740, 42T4798, 42T4804, 42T4812, 42T4816, 42T4822, 42T4826, 42T4828, 42T4834, 42T4840, 42T4862, 42T4868, 42T4874, 42T4880, 42T4890, 42T4944, 42T4948, 42T4954, 42T4958, 45N1022 and 45N1050.
The battery packs, manufactured in China, were sold at computer and electronics stores, and authorized dealers nationwide and online at www.lenovo.com from February 2010, through June 2012, for between $350 and $3,000 when sold as part of ThinkPad notebook computers. The battery packs were also sold separately for between $80 and $150.
Consumers should immediately turn off their ThinkPad notebook computer, remove the battery pack and contact Lenovo for a free replacement battery pack. Consumers may continue to use their ThinkPad notebook without the battery pack by plugging in the AC adapter and power cord.
Consumers may contact Lenovo at (800) 426-7378 from 9 a.m. to 5 p.m. ET Monday through Friday.
What harm could there be in “liking” a Facebook post? Potentially a lot. | 04/22/2015 | ConsumerAffairs
If you're a regular Facebook user, you're pretty much guaranteed to run across lots of “like-farming” scammers – maybe without ever even realizing it....
Twitter changes policies and features in crackdown on threats and abuse
New policies, new algorithms and new tools to stop trolls | 04/22/2015 | ConsumerAffairs
Yesterday, Twitter took another step in its campaign to crack down on threatening or abusive content on its platform, by updating its policy regarding violent threats.
The original policy banned “direct, specific threats of violence against others.” The new policy removes the first two words, and now prohibits “threats of violence against others.” This added vagueness is intended to give Twitter's moderators more leeway to decide what constitutes a “threat.” Under the old “direct, specific” policy, trolls and abusers could, for example, merely wish violence upon people rather than directly threaten them, which technically was not prohibited.
Last February, Twitter's CEO Dick Costolo admitted in an internal email (later leaked to outside media) that “We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. ... We lose core user after core user by not addressing simple trolling issues that they face every day.”
In March, the company announced that it would finally crack down on “revenge porn,” the practice of publishing nude or sexually explicit photos of people (usually women) without their permission. At the time, Twitter updated its “Content boundaries” to say “You may not post intimate photos or videos that were taken or distributed without the subject's consent.”
But this time, Twitter has done more than change its posted policies; it's also changing its responses toward the writers of harassing tweets. Now, when an account is reported for suspected abuse, Twitter reserves the right to “freeze” that account, to require abusers to delete problematic tweets and also to require a valid phone number in order to reinstate their account.
(As a Washington Post blogger put it, “Essentially, Twitter is putting users in time-out and making it easier to identify them down the line.”)
In a company blog post discussing the new policies, Twitter's Director of Product Management, Shreyas Doshi, said that in addition to the policy changes,
[W]e have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content.
In other words, Twitter has a new algorithm which will hopefully prevent abusive tweets from being seen in the first place; if a troll knows his intended victim won't see his threatening tweets he'll hopefully lose interest in sending them. This algorithm won't prevent you from seeing the tweets of people you've chosen to follow, but it will prevent (or at least reduce the frequency of) any random troll's threatening comment from appearing on your own feed.
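Twitter has not published how this filtering works, but the signals described above (the age of the account and similarity to content already judged abusive) suggest a simple scoring heuristic. The sketch below is purely illustrative and is not Twitter's implementation; the example tweets, weights and threshold are invented for the example.

```python
from difflib import SequenceMatcher

# Tweets a (hypothetical) safety team has already judged abusive.
KNOWN_ABUSIVE = ["example of a threatening message", "another flagged message"]

def similarity(text, corpus):
    """Rough text similarity (0..1) against previously flagged content."""
    return max(SequenceMatcher(None, text.lower(), bad.lower()).ratio() for bad in corpus)

def abuse_score(tweet_text, account_age_days):
    """Combine two of the signals mentioned above: newness of the account
    and similarity to content already determined to be abusive."""
    new_account_signal = 1.0 if account_age_days < 7 else 0.0
    text_signal = similarity(tweet_text, KNOWN_ABUSIVE)
    return 0.4 * new_account_signal + 0.6 * text_signal

def limit_reach(tweet_text, account_age_days, threshold=0.7):
    """Return True if the tweet's visibility should be limited pending review."""
    return abuse_score(tweet_text, account_age_days) >= threshold

# Toy example: a near-copy of flagged content from a brand-new account gets limited.
print(limit_reach("another flagged message!!", account_age_days=2))  # True
```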
Like all technologies, Twitter's new policies and features are a work-in-progress; Doshi's blog post ended with the observation that “as the ultimate goal is to ensure that Twitter is a safe place for the widest possible range of perspectives, we will continue to evaluate and update our approach in this critical arena.”
Jeep fire case may be reopened as highway safety agency gets into high gear
New NHTSA director wants automakers to be more "proactive" in addressing safety issues | 04/22/2015 | ConsumerAffairs | By James R. Hood
Burning Jeep (Photo: Sedona, Ariz., Fire Dept.)
Federal safety regulators are frequently heard to complain that most of us drive too fast. Automakers, on the other hand, are moving way too slowly in carrying out safety recalls, top safety agency officials say.
Case in point: the older Jeep Cherokees with gas tanks behind the rear axle. They're prone to catch fire when struck from the rear but a recall of 1.56 million Jeeps has been creeping along with most vehicles still unfixed. Likewise, the recall of millions of cars equipped with Takata airbags, which can spray passengers with shards of metal. Millions of cars have yet to have their airbags replaced.
Other recalls great and small plod along with all manner of delays. In all too many cases, consumers' cars sit idle at dealerships waiting for parts while consumers make do with no car or a loaner.
Take Nikki of Mobile, Ala. "I have a problem with Chrysler not ordering the part for the safety recall of the rear quarter vent window switch. They dismantled my switch almost 7 months ago and I have been waiting for the part since then," she told ConsumerAffairs. "They have only shipped 4 parts to the dealership in Mobile and they won't respond to complaints."
Nikki said she is number 19 on her dealer's list of customers waiting for the part.
What's the point?
BMW owner Patricia reported a similar situation: "I own a 2003 325ci that is affected by the (airbag) recall and I called the nearest dealership to schedule the repair. I was told by them that the part is on backorder and they will call me when it becomes available."
"Not wanting to wait, I checked to see if the repair would be available elsewhere. I went to the website BMWusa.com and entered the required information from my VIN number and the response from the site says that 'remedy is not available,'" Patricia said.
"What is the point of issuing a recall when there is no remedy available?" she asked. "Owners of the affected vehicles are told that the problem with the airbag can cause 'serious injury to passengers and other occupants' yet they don't have the ability to remedy the situation?"
The National Highway Traffic Safety Administration didn't seem particularly perturbed about this when David Strickland was at its helm. He "retired" last year along with his boss, ex-Transportation Secretary Ray LaHood, after working out a highly unusual Jeep recall deal with Chrysler.
The agency's new head, Mark Rosekind, Ph.D., is an actual safety expert -- something rare in NHTSA's history. He was previously a member of the National Transportation Safety Board and has performed extensive research into human fatigue and other safety factors at NASA.
Now that he has settled into his new job, Rosekind has made it clear he expects automakers to be more "proactive" in handling safety issues -- an expectation that covers everything from building safer cars with fewer defects to identifying problems faster and conducting recalls more quickly when needed.
"The most important thing was to be able to generate a range of options for us to kind of decide where we want to address these issues in a strategic but timely way,” Rosekind said yesterday, Automotive News reports. “For both [the Takata and Jeep recalls], I think we’re one or two weeks away from actually having some concrete things to start taking action on.”
Bloomberg News reported earlier that the agency may be thinking about reopening the investigation into the Jeeps. Safety advocates have been highly critical of the unusual way in which the recall and a related safety campaign were arrived at -- in a secret airport meeting instead of in the laboratory where a tried-and-tested approach might have been found.
Instead, the agency gave its OK to retrofitting trailer hitches on the Jeeps, on the largely untested theory that the hitch would protect the gas tank in a collision.
Jeep owners have complained that the hitch itself is a safety hazard, since dealers are installing only the hitch and not the other towing package components that are needed to tow a trailer safely. Future owners of the Jeeps may not know the hitches are unsafe, they say.
Google unveils Project Fi -- pay-as-you-go wireless service
Consumers pay only for the data they actually use | 04/22/2015 | ConsumerAffairs | By James R. Hood
If you lease or buy a car, you pay for it even if it just sits in the driveway gathering dust. If you use Uber or public transit, you only pay for trips you actually take.
That's how Google thinks wireless service ought to work. Today it unveiled Project Fi, a new service that lets you pay as you go -- paying only for data you actually use.
Currently, most carriers make you buy a few gigabytes per month. Use more than that and you'll pay extra. Use less and, well, too bad. The unused GBs are gone forever.
It's not quite that extreme, of course. AT&T and some other carriers are letting customers carry over unused data. And a few small carriers, Republic and Scratch Wireless, are already offering "metered" usage, where you pay only for what you use.
Fi comes with one plan at one price -- $20 a month gets you the basics: talk, text, Wi-Fi tethering and international coverage. After that, cellular data costs $10 per gigabyte, both in the U.S. and abroad. The plan refunds any data you don't use.
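To make the arithmetic concrete, here is a minimal Python sketch of how a monthly bill would work under the pricing as described: a $20 base, $10 per gigabyte budgeted up front, and a credit back for any unused data. The numbers in the example are illustrative, not official rates or actual bills.

```python
def project_fi_bill(gb_purchased, gb_used, base=20.0, per_gb=10.0):
    """Estimate a monthly bill under the pay-for-what-you-use model
    described above. Figures are illustrative only."""
    upfront = base + per_gb * gb_purchased
    unused = max(gb_purchased - gb_used, 0)
    credit = per_gb * unused                            # refund for data not used
    overage = per_gb * max(gb_used - gb_purchased, 0)   # extra data billed at the same rate
    return upfront - credit + overage

# Example: budget 3 GB but use only 1.4 GB -> pay $20 + $14, not $20 + $30.
print(project_fi_bill(gb_purchased=3, gb_used=1.4))  # 34.0
```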
Subscribers can also talk and text from their phone number on any phone, tablet or laptop.
"Project Fi aims to put you on the best network wherever you go," Nick Fox, Google's vice president of communications products, wrote in a blog post.
Project Fi makes Google by far the biggest new kid on the block and its entry is likely to shake things up quite thoroughly.
Google will be reselling capacity on Sprint and T-Mobile's networks as well as linking up with open wi-fi hotspots, switching seamlessly -- or trying to, anyway -- from one network to another, depending on which one is offering the best signal at any given moment.
Initially, the service will work only on Google's latest Nexus 6 phones but is expected to spread to other phones if it's successful.
Once in your garage, getting into your house is easy | 04/22/2015 | ConsumerAffairs
Police in Hanford, California, say they're seeing a disturbing trend. They have had a higher than usual amount of breaking into cars -- not to steal the ca...
Existing-home sales hit 18-month high
Year-over-year levels were higher as well | 04/22/2015 | ConsumerAffairs | By James Limbach
Improvement across the U.S. pushed sales of previously owned homes to their highest level since September 2013.
Figures released by the National Association of Realtors (NAR) show total sales -- completed transactions that include single-family homes, townhomes, condominiums and co-ops -- shot up 6.1% in March to a seasonally adjusted annual rate of 5.19 million.
Additionally, sales have now increased year-over-year for 6 consecutive months and are 10.4% above a year ago -- the highest annual increase since August 2013's 10.7%. The March surge in sales was the largest monthly gain since the 6.2% gain in December 2010.
Picking up steam
"After a quiet start to the year, sales activity picked up greatly throughout the country in March," said NAR Chief Economist Lawrence Yun. "The combination of low interest rates and the ongoing stability in the job market is improving buyer confidence and finally releasing some of the sizable pent-up demand that accumulated in recent years."
Total housing inventory at the end of March climbed 5.3% to 2.00 million existing homes available for sale, and is now 2.0% above a year ago. Unsold inventory is at a 4.6-month supply at the current sales pace, down from 4.7 months in February.
The median existing-home price for all housing types in March was $212,100 -- 7.8% above March 2014, marking the 37th consecutive month of year-over-year price gains and the largest since February 2014.
"The modest rise in housing supply at the end of the month despite the strong growth in sales is a welcoming sign," Yun noted. "For sales to build upon their current pace, homeowners will increasingly need to be confident in their ability to sell their home while having enough time and choices to upgrade or downsize. More listings and new home construction are still needed to tame price growth and provide more opportunity for first-time buyers to enter the market."
- Existing-home sales in the Northeast increased 6.9% in March to an annual rate of 620,000, and are 1.6% above a year ago. The median price was $240,500 -- 1.6% below a year ago.
- In the Midwest, existing-home sales jumped 10.1% to an annual rate of 1.20 million, and are now 12.1% above March 2014. The median price surged 9.7% from the same time last year -- to $163,600.
- Sales in the South climbed 3.8% to an annual rate of 2.19 million in March, and are now 11.7% above March 2014. The median price was $187,900 -- up 9.3% from a year ago.
- The West posted an existing-home sales increase of 6.3% to an annual rate of 1.18 million in March; sales are now 11.3% above a year ago and the median price is up 8.3% year-over-year to $305,000.
The No iOS Zone lets attackers remotely crash any iPhone or iPad in wi-fi range
Another danger of automatically connecting to public wi-fi | 04/22/2015 | ConsumerAffairs
Another day brings another way hackers can wreak havoc on your life, this time for owners of Apple devices: security researchers from Skycure have discovered a vulnerability they call the “No iOS Zone,” which effectively lets attackers crash any mobile iOS device connected to a wi-fi hotspot.
Actually, it's even worse than that: You don't have to actively connect your device to a hotspot in order to be at risk. No iOS Zone lets attackers crash your device if you are so much as in range of a hotspot, unless you've completely turned off the device (or at least its wi-fi).
Yet in a way this is not entirely surprising — and Apple devices aren't the only ones at risk from public wi-fi.
Last summer, for example, Ars Technica tried a little experiment and discovered that millions of customers of both Comcast and AT&T were at risk of letting hackers surreptitiously get into their devices' Internet traffic and steal all sorts of personal data, because those two companies' hotspots proved particularly easy for hackers to “spoof” (which is hackerspeak for “impersonate”).
Here's a very oversimplified explanation of why: Unless you specifically turn off that feature, or your device itself, your smartphone, tablet or other connectable device is always looking to connect with a familiar network.
Let's say you visited Starbucks to take advantage of their free wi-fi. Now, every time you go there your phone automatically sends out a signal, basically saying “Hey, Starbucks wi-fi, where are you?” and waiting for the electronic response “Here I am! Starbucks wi-fi, now connecting with you.”
But it's very easy for anyone to set up a wireless hotspot to respond under a false name: “Here I am, Starbucks wi-fi! Actually I'm a hacker up to no good, but I said my name is 'Starbucks wi-fi' so I can connect with you.”
To guard against that particular danger, you must shut off the wi-fi connections on your mobile devices when you're not using them, and set each device so that it must ask before joining a mobile network.
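As a rough illustration of why name-based auto-join is spoofable, and why the "ask before joining" setting is the safeguard a user controls, here is a minimal Python sketch. The network names and the ask_user helper are hypothetical; real devices use more involved logic than this.

```python
# Networks this (hypothetical) phone has joined before, keyed only by name (SSID).
REMEMBERED_SSIDS = {"Starbucks WiFi", "Home-Network"}

def should_join(broadcast_ssid, ask_before_joining, ask_user):
    """Decide whether to connect to a hotspot announcing `broadcast_ssid`.

    Auto-join trusts the broadcast name alone, which is exactly what a
    spoofed hotspot exploits; prompting the user is the manual safeguard."""
    if broadcast_ssid not in REMEMBERED_SSIDS:
        return False
    if ask_before_joining:
        return ask_user(f"Join network '{broadcast_ssid}'?")  # user can decline a suspicious hotspot
    return True  # silent auto-join: a spoofed "Starbucks WiFi" gets in here

# A rogue hotspot only needs to broadcast a familiar name to be trusted by auto-join:
print(should_join("Starbucks WiFi", ask_before_joining=False, ask_user=lambda msg: False))  # True
```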
The “No iOS Zone” vulnerability is similar, except instead of letting hackers use wi-fi hotspots to spy on various iDevices, it “only” gives hackers the ability to make those devices crash and go into an endless reboot loop. And once that happens, you can't turn off your wi-fi connection and regain control since, of course, your device has to be booted up before you can change its wi-fi settings or do anything else with it.
The researchers named this vulnerability the “No iOS Zone” because once attackers set up a malicious wi-fi network, any iOS mobile device within range of it would connect, get stuck in an endless reboot loop and thus be rendered useless, resulting in a literal no-iOS zone.
Skycure's presentation also offered a list of “potential areas that may be attractive for attackers,” which includes “political events, economical & business events, Wall Street [and] governmental and military facilities.”
Apple is currently working with Skycure to develop a fix for this problem. Meanwhile, iOwners should keep their wi-fi turned off unless and until they actually plan to use it, and be extra-wary of any public wi-fi hotspot – which, come to think of it, is good advice regarding any mobile device, regardless of who manufactured it.
Mortgage applications rebound
It's the fourth increase in 5 weeks | 04/22/2015 | ConsumerAffairs | By James Limbach
After posting a slight decline the previous week, applications for mortgages moved higher the week ending April 17.
According to the Mortgage Bankers Association’s (MBA) Weekly Mortgage Applications Survey for the week ending April 17, 2015, applications were up 2.3% from one week earlier.
“Purchase applications increased for the fourth time in 5 weeks as we proceed further into the spring home buying season,” said Mike Fratantoni, MBA’s Chief Economist. “Despite mortgage rates below 4%, refinance activity increased less than 1% from the previous week.”
That slight increase in the Refinance Index pushed the refinance share of mortgage activity down 2% -- to 56% of total applications, its lowest level since October 2014. The adjustable-rate mortgage (ARM) share of activity rose to 5.5% of total applications.
The FHA share of total applications inched up to 13.6% from 13.5% the week prior. The VA share of total applications slipped from 11.1% to 11.0%, and the USDA share of total applications was unchanged at 0.8%.
Contract interest rates
- The average contract interest rate for 30-year fixed-rate mortgages (FRMs) with conforming loan balances ($417,000 or less) dropped 4 basis points -- from 3.87% to 3.83%, its lowest level since January 2015. Points fell to 0.32 from 0.38 (including the origination fee) for 80% loan-to-value ratio (LTV) loans. The effective rate decreased from last week.
- The average contract interest rate for 30-year FRMs with jumbo loan balances (greater than $417,000) inched down to 3.83% from 3.84%, with points decreasing to 0.22 from 0.35 (including the origination fee) for 80% LTV loans. The effective rate decreased from last week.
- The average contract interest rate for 30-year FRMs backed by the FHA dipped 2 basis points to 3.65%, its lowest level since May 2013, with points decreasing to 0.12 from 0.23 (including the origination fee) for 80% LTV loans. The effective rate decreased from last week.
- The average contract interest rate for 15-year FRMs fell to 3.11%, its lowest level since January, from 3.16%, with points decreasing to 0.24 from 0.29 (including the origination fee) for 80% LTV loans. The effective rate decreased from last week.
- The average contract interest rate for 5/1 ARMs rose 7 basis points to 2.89%, with points decreasing to 0.29 from 0.40 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
The survey covers over 75% of all U.S. retail residential mortgage applications.
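To make a figure like "dropped 4 basis points, from 3.87% to 3.83%" concrete, here is a minimal sketch that plugs the $417,000 conforming-balance limit cited above into the standard fixed-rate amortization formula. The loan amount and 30-year term come from the article; the arithmetic itself is generic textbook math, not MBA methodology.

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-rate amortization: M = P * r / (1 - (1 + r)**-n), r = monthly rate."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

loan = 417_000                         # conforming balance cited in the survey
old = monthly_payment(loan, 0.0387)    # last week's 30-year rate
new = monthly_payment(loan, 0.0383)    # this week's 30-year rate

print(f"at 3.87%: ${old:,.2f} per month")
print(f"at 3.83%: ${new:,.2f} per month")
print(f"the 4-basis-point drop saves roughly ${old - new:,.2f} per month")
```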
Realtors say mortgage standards still too tight
Industry group asks Congress to review new lending rules (04/22/2015, ConsumerAffairs, by Mark Huffman)
In the wake of the financial crisis and the collapse of the housing market, mortgage lenders raised standards for qualifying for loans and Congress approved tighter regulation of the mortgage market.
It was not an unreasonable response after nearly a decade of very loose lending standards that resulted in many consumers buying homes without being required to prove they could actually afford them.
But Realtors complained from the beginning that the reaction went too far, choking off the housing market's recovery. They pointed out that each month, nearly a third of home buyers paid with cash without having to borrow the money.
Now, the National Association of Realtors (NAR) is pressing its case to Congress, telling the Senate Banking, Housing and Urban Affairs Committee that some of the new regulatory requirements are unnecessary and are blocking otherwise qualified, credit worthy consumers from buying a home.
Pendulum has swung too far
NAR President Chris Polychron told the committee that the industry supports strong underwriting standards, put in place after the housing crisis to protect consumers from risky lending practices. But Polychron insists the pendulum has swung too far.
“In some cases, well-intentioned, but over-corrective policies are severely hampering the ability of millions of qualified buyers to purchase a home,” he said. “I believe, and our members believe, that we have yet to strike the right balance between regulation and opportunity.”
To bolster their case the Realtors say the near record low mortgage rates that have prevailed since 2009 should have resulted in surging home sales. But that hasn't been the case.
Sales of existing homes in February were up a healthy 4.7% over the previous year, but at the rather anemic annual rate of 4.88 million. Anemic when compared to 2005's existing home sales, which totaled more than 7 million.
Homeownership rate is falling
Today, even with mortgage rates well under 4%, NAR says the number of first-time buyers entering the market is at the lowest point since 1987. The homeownership rate is back to 1990 levels.
So what exactly is it that the Realtors would like to see? For one, the industry trade group wants to change some new regulations it says limit opportunities for buyers to own condos. NAR says condos often represent the most affordable buying options for first-time homebuyers and minorities.
Concern about new rules
Realtors are also concerned about rules that haven't yet taken effect. Polychron says the Consumer Financial Protection Bureau (CFPB) should be ready for problems that might crop up during the implementation of the Real Estate Settlement and Procedures Act and Truth in Lending Act changes.
Those rule changes just happen to take effect on August 1, the busiest transaction time of the year. To make loans close more smoothly, Polychron suggested the CFPB take a “restrained” approach to enforcement as the rule goes into effect.
Polychron also took aim at a provision in the Ability-to-Repay rules that limits mortgage fees and points to 3% in order for home loans to be considered Qualified Mortgages. The rule is designed to protect consumers but Polychron said the unintended consequence is that consumers, including lower-end buyers, are finding reduced choices and added obstacles in their efforts to buy a house.
“No one wants to see a return to the unscrupulous, predatory lending practices that caused the Great Recession, but some modifications to existing regulations would help restore the homeownership rate to pre-bubble levels,” said Polychron.
Superior Nut & Candy recalls pine nuts
The product may be contaminated with Salmonella (04/22/2015, ConsumerAffairs, by James Limbach)
Superior Nut & Candy Co., is recalling 4-oz. packages of Pine Nuts.
The product may be contaminated with Salmonella.
No illnesses have been reported to date in connection with the problem.
The recalled Pine Nuts were distributed nationwide in retail stores.
The recalled product comes in a 4-oz. package with a clear front and tan-colored label on the back and were sold in retail store produce departments nationwide.
The back label lists pine nuts as the only ingredient and has the UPC Number of 72549320016 with a Best By date between 10/22/2015 and 12/27/2015.
Customers who purchased the recalled product should return them to the place of purchase for a full refund.
Consumers with questions may contact customer relations at (773) 254-7900 Monday through Friday, 8:00 AM to 5:00 PM, CST.
Michigan Brand recalls turkey and beef products
The products contain sodium nitrite, which is not listed on the label (04/22/2015, ConsumerAffairs, by James Limbach)
Michigan Brand of Bay City, Mich. is recalling approximately 737 pounds of turkey and beef products.
The products contain sodium nitrite, but the word “nitrite” is missing from the product label.
There are no reports of adverse reactions due to consumption of these products.
The following smoked turkey and beef items, produced on various dates between February 9 and April 16, 2015, are being recalled:
- 8 and 16 oz. packages of “Michigan Brand Honey Glazed Smoked Turkey.”
- 8 oz. packages of “The Jerky Outlet Jalapeno Flavored Smoke Beef.”
The recalled products bear the establishment number “EST. 10306” or “P-10306” inside the USDA mark of inspection and were shipped to retail and distribution locations in Michigan.
Consumers with questions may contact the company’s main line at (989) 893-9589.
Hines Nut Company recalls walnut halves and pieces
The product may be contaminated with Salmonella (04/22/2015, ConsumerAffairs, by James Limbach)
Hines Nut Company of Dallas, Texas is recalling Lot Number 6989 of walnut halves and pieces.
The product may be contaminated with Salmonella.
The company says it has not received any complaints concerning illness to date.
The following product is being recalled:
- Hines Nut Brand, 128 cases, 25 trays per carton, in black foam trays with a green and gold label; Tray weight of 16-oz., packaged March 3, 2015; lot number 6989 printed on the label; Best Buy Date of 12.28.15; UPC 07826406516-5
The product was sold by Randalls Food Stores in Texas.
Customers who purchased the recalled product should not eat it and should contact the company for information regarding a full refund or disposal.
Consumers may contact Hines Nut Company at 1 800-561-6374 Monday – Friday, 7 am – 4 pm CST.
Conway Organic Sesame Ginger and Citrus Organic Vinaigrette dressings recalled
The products may be contaminated with Salmonella (04/22/2015, ConsumerAffairs, by James Limbach)
Conway Import Co., is recalling Conway Organic Sesame Ginger Dressing and Conway Citrus Organic Vinaigrette Dressing.
The products may be contaminated with Salmonella.
No illnesses have been reported to date.
The following products, distributed in Illinois, Maryland, Georgia, Florida, Pennsylvania, New Jersey, New York and Texas, and packed in plastic gallon jars with the manufacturing code printed on the top of the cap and the cardboard shipping container, are being recalled:
- Conway Organic Sesame Ginger Dressing Recipe Code N-22; MFG.CODE DATE: 28814....363014....030015....051015
- Conway Citrus Organic Vinaigrette Dressing Recipe Code L-18; MFG.CODE DATE: 276014....337014
Consumers with questions may contact Conway Import at 847-455-5600.
Study finds TV sets quickly becoming old technology
Consumers would rather "watch TV" on their phones and computers (04/21/2015, ConsumerAffairs, by Mark Huffman)
Recently my cousin and his family came for a visit and we encamped his two pre-teen daughters in the living room, with inflatable beds and sleeping bags.
I prepared to give the girls a primer on the TV remote control and a tour of our cable channels but I noticed a decided lack of interest on their part.
“Do you guys want to watch TV?” I asked.
“No,” they replied in unison, not once looking up from their smartphones. Indeed, the TV remained off all weekend.
What I witnessed in my living room is fairly typical, according to new research from Accenture. In its study of consumer trends, the company found the television set was the only digital product category to see uniform, double-digit usage declines among viewers in most age groups.
Increasingly, consumers are turning off TV and replacing their sets with a combination of laptops, desktops, tablets and smartphones when they want to view video content, what we so quaintly referred to in the past as “watch television.”
Young viewers seem to be abandoning television the fastest. The study found 14- to 17-year-olds are dropping TV at the rate of 33% for movies and television shows and 26% for sporting events.
The decline continues for older demographics until it flattens out for those 55 and older. But even among Baby Boomers, the trend is moving away from TV.
”We are seeing a definitive pendulum shift away from traditional TV viewing,” said Gavin Mann, Accenture’s global broadcast industry lead. “TV shows and movies are now a viewing staple on mobile devices of all shapes and sizes, thanks to improved streaming and longer battery life. The second screen viewing experience is where the content creators, broadcasters and programmers will succeed or fail.”
Streaming services like Netflix, Amazon Prime, Hulu and the TV broadcast networks’ own streaming platforms, allow viewers to watch what they want, when they want. Increasingly, they are doing so. In industry jargon it's called “over the top content,” and more is emerging all the time.
Now you don’t have to subscribe to cable TV to get HBO. The company has just launched HBO Now, promising “instant access to all of HBO” on your streaming device on a subscription basis. Its marketing slogan says “all you need is the Internet.”
Previously consumers had to subscribe to cable – and with a significant package at that -- for the ability to add premium channels like HBO.
Room for improvement
While anytime, anywhere viewing is becoming mainstream, consumers are not completely satisfied with the viewing experience so far. For the most part, complaints are about the Internet service delivering the programming.
More than half the people in the survey who said they watch streaming content complained about buffering and other technical issues, as well as advertising placement.
Accenture’s take? Content producers – notably the broadcast networks – are still in a favorable position but will have to improve delivery as well as keep content to the standards consumers expect.
“Understanding consumers and ensuring decision-making is centered on consumer insights will be increasingly key to success,” said Mann. “The future leaders in media and entertainment will be those who listen to the audience and can tailor their content and services to this new reality.”
There's no apparent explanation for the higher incidence of childhood cancers in the area (04/21/2015, ConsumerAffairs, by Christopher Maynard)
Florida has long been a favorite vacation spot and also boasts a fast-growing fulltime population. Although people may enjoy...
Bird flu outbreak could impact poultry supplies
Hormel warns it will have less turkey product this year (04/21/2015, ConsumerAffairs, by Mark Huffman)
An outbreak of H5N2 avian, or bird flu, spread quickly this week through poultry operations in the upper Midwest, resulting in the deaths of millions of birds and potentially affecting supplies and prices for consumers.
The disease was discovered in poultry operations in Osceola County, Iowa, a major egg-producing region. Hen losses have been estimated at 5.3 million.
The impact on egg prices is unclear. Bloomberg News reports the U.S. Department of Agriculture had earlier projected an increase in 2015 egg production and a decline in prices from last year. So it is possible consumers will notice no increase in prices.
Earlier, in neighboring Minnesota, bird flu swept through at least 28 turkey-producing farms. Turkey losses are estimated at 1.7 million.
The impact was severe enough that Hormel Foods, a publicly traded company, warned it would likely be felt when the company reported its quarterly earnings.
“We are experiencing significant challenges in our turkey supply chain due to the recent HPAI outbreaks in Minnesota and Wisconsin,” said Jeffrey Ettinger, chairman and CEO of Hormel Foods.
Ettinger said he expects the outbreaks will subside as the weather improves but in the short term Hormel will face “turkey supply challenges.”
Hormel said its Jennie-O Turkey Store is managing the outbreak in cooperation with the USDA Animal and Plant Health Inspection Service and state agency officials. The company said all flocks are tested for influenza prior to processing and no birds diagnosed with HPAI are allowed to enter the food chain.
Little risk for humans
According to health officials, the outbreak is an economic issue at this point, not a public health problem.
“The Center for Disease Control (CDC) and Iowa Department of Public Health considers the risk to people from these HPAI H5 infections in wild birds, backyard flocks and commercial poultry, to be low,” the Iowa Department of Agriculture said in a statement. “No human infections with the virus have ever been detected.”
Still, consumers should err on the side of caution. The department notes these virus strains can travel in wild birds without those birds appearing sick. People should avoid contact with sick or dead poultry or wildlife. If contact occurs, you should wash your hands with soap and water and change clothing before having any contact with healthy domestic poultry and birds.
Bird owners – whether commercial producers or backyard flock owners – are being advised to prevent contact between their birds and wild birds. When birds appear sick or die suddenly, it should be reported to state or federal agriculture officials.
There are several strains of bird flu. Earlier this month the avian A strain H7N9 was confirmed in areas near China's border with Myanmar. Like other strains of bird flu, it can be passed from bird to humans but not from human to human.
The World Health Organization has called H7N9 an unusually dangerous virus for humans, with about 30% of people who get it dying.
The mortgage servicing company allegedly abused consumers, misrepresented amounts owed (04/21/2015, ConsumerAffairs, by Truman Lewis)
Green Tree mortgage servicing company will pay $63 million to settle federal charges that it harmed homeowners with illegal loan servicing and debt collect...
Not everything in your garden needs sun
Some plants thrive, or at least survive, in the shade (04/21/2015, ConsumerAffairs)
When you are planting your garden, light is a big factor: what time of day you get your sun and how much of your garden is covered. The flip side of light, of course, is shade.
Gardening in the shade doesn't have to be frustrating. Some plants will tolerate relatively low light, and a few actually thrive in it. Like anything there are always options. Most likely you will want to take a look at flowering annuals, perennials, bulbs, and woodland plants for color. There are plenty of ground covers you can investigate and they do well in shaded areas.
If your shaded area isn't pitch black but just lightly shaded, you could try a few herbs or leafy vegetables. Take note that flowering annuals do not bloom well in heavy shade; they all blossom more profusely as light is increased. Some annuals, however, do better in light shade than in full sun, which may fade colors or cause wilting the moment there is any moisture stress.
You have to figure out how much light your plants will actually be getting. The biggest challenge will be areas under big shade trees or the overhang of a building. If you can get a glimpse of sun for a brief period of time all the better. There are numerous plant choices you can make in these locations, though by no means as many as are possible with five or more hours of direct, full sunlight.
Something else to consider with shade is that your moisture level can pose a problem. If you are under a tree or an overhang it will be a covering and actually keep your plants from getting adequate moisture.
Trees and shrubs will be fighting to get that water to survive. The watering will become your responsibility; even when it seems you are getting a ton of rain, it will never reach the plant roots effectively, so you will have to compensate.
What will help you is a balanced fertilizer and then follow that up with one or two extra applications as you get into the summer. It will help so your plants don’t have to compete with tree and shrub roots. You can always plant above ground if you are worried about the trees and shrubs posing a problem.
For the most part, plants that work well in the shade will do best in well-drained, relatively fertile soil. Your local County Extension Office can supply you with additional materials on specific shade-tolerant plants.
Travel sites don't always tell the whole story about your next hotel
It's a problem that predates online travel sites but whether it has gotten any better is debatable (04/21/2015, ConsumerAffairs, by James R. Hood)
Hotels always look great on their websites. The rooms are sparkling, the beds are clean, the floors are dry and there are no nasty little vermin crawling around biting people.
But that's not always the reality, as Danielle tells us she found when she spent a few nights at the Clinton Hotel in Miami's South Beach neighborhood.
It sounds like Expedia did what it could to help Danielle and her friend but the overall experience still left a lot to be desired. Unfortunately, that's the case with many online reservations.
While we don't hear about too many leaking toilets, Expedia gets more than its share of complaints about duplicate reservations, lost reservations and prices that seem to change without notice.
In many cases, consumers think they're shopping around only to learn they've made a reservation. That's what happened to Julie of Flushing, Mich.
"Thought I was checking for availability and the reservation was made. I called within one minute of realizing my mistake and was given the runaround. Asked agent to cancel the reservation, which he told me he did, but the refund on my debit card would take two weeks," Julie said.
Aleksandra had the changing-price experience. "I tried booking 2 separate all inclusive packages. Expedia's web showed a price per booking which we were interested and booked 1 room, however when finishing 2nd booking, the price increased 3 times (by over $300.00) in a matter of seconds," she said.
But perhaps the most unusual price-changing complaint comes from Sam of Pelawatte, Sri Lanka.
"I went to the Expedia url page. I was quoted a price in Rps (rupees) which was very reasonable," Sam said. "I booked the hotel room only to discover that the Rps price was not in Sri Lankan Rps (as I was in Sri Lanka) but in twice as expensive Rps of India. It was a non-refundable price but when I called customer service they could not understand the confusion. They were totally unhelpful. Most sites distinguish Indian Rps as INRPS but not Expedia."
Blue Bell Creameries expands recall to include all products
The products may be contaminated with Listeria monocytogenes (04/21/2015, ConsumerAffairs, by James Limbach)
Blue Bell Ice Cream of Brenham, Texas, is recalling all of its products currently on the market made at all of its facilities -- including ice cream, frozen yogurt, sherbet and frozen snacks.
The products may be contaminated with Listeria monocytogenes.
Five patients were treated in Kansas and 3 in Texas after testing positive for Listeria.
The recalled products were sold at retail outlets -- including food service accounts, convenience stores and supermarkets -- in Alabama, Arizona, Arkansas, Colorado, Florida, Georgia, Illinois, Indiana, Kansas, Kentucky, Louisiana, Mississippi, Missouri, Nevada, New Mexico, North Carolina, Ohio, Oklahoma, South Carolina, Tennessee, Texas, Virginia, Wyoming and international locations.
The decision to expand the recall resulted from findings from an enhanced sampling program which revealed that Chocolate Chip Cookie Dough Ice Cream half gallons produced on March 17, 2015, and March 27, 2015, contained the bacteria.
Nissan recalls Sentras in high humidity areas
The passenger side frontal air bag inflator may rupture (04/21/2015, ConsumerAffairs, by James Limbach)
Nissan North America is recalling 45,000 model year 2006 Sentras manufactured January 2, 2006, to August 26, 2006, originally sold, or currently registered, in geographic locations associated with high absolute humidity.
Specifically, vehicles sold, or currently registered, in Puerto Rico, Hawaii, the U.S. Virgin Islands, Guam, Saipan, American Samoa, Florida and adjacent counties in southern Georgia, as well as the coastal areas of Alabama, Louisiana, Mississippi and Texas are being recalled.
Upon deployment of the passenger side frontal air bag, excessive internal pressure may cause the inflator to rupture during deployment, with metal fragments striking and potentially seriously injuring the vehicle occupants.
Nissan will notify owners, and dealers will replace the passenger air bag inflator, free of charge. The manufacturer has not yet provided a notification schedule.
Chef’s Express California Pasta Salad recalled
The product may be contaminated with Salmonella (04/21/2015, ConsumerAffairs, by James Limbach)
Schnuck Markets of St. Louis, Mo., is recalling its Chef’s Express California Pasta Salad.
The product may be contaminated with Salmonella.
No illnesses related to the consumption of this product have been reported to date.
The product was sold in 99 Schnuck stores Deli/Chef’s Express departments April 2 – April 14, 2015 in Missouri, Illinois, Indiana, Wisconsin and Iowa.
The product was labeled “Chef’s Express California Pasta Salad” and sold by weight through the company’s Deli/Chef’s Express departments.
Customers may return any unused portion to their nearest store for a full refund.
Consumers with questions may contact the Schnuck consumer affairs department Monday – Friday, 8:30 a.m. – 5 p.m. at 314-994-4400 or 1-800-264-4400.
Whole Foods Market recalls packaged raw macadamia nuts
The product may be contaminated with Salmonella (04/21/2015, ConsumerAffairs, by James Limbach)
Whole Foods Market is recalling packaged raw macadamia nuts.
The product may be contaminated with Salmonella.
No illnesses have been reported to date.
The recalled product, labeled as “Whole Foods Market Raw Macadamia Nuts,” was packaged in 11-oz. plastic tubs with a best-by date of Feb. 4, 2016, and a UPC code of 7695862059-1.
The nuts were sold in Whole Foods Market Stores in Arizona, California, Colorado, Hawaii, Kansas, Louisiana, New Mexico, Nevada, Oklahoma, Texas and Utah.
Customers who purchased this product should discard it and bring in their receipt for a full refund.
Consumers with questions may contact Whole Foods Market customer service at 512-477-5566, ext. 20060 Monday – Friday, 9:00 am – 5:00 pm CDT.
Telemarketers: readers sound off
Our comments section is overflowing with ideas for dealing with unwanted calls (04/20/2015, ConsumerAffairs, by Mark Huffman)
Our story last week about ways to reduce the growing number of unwanted telemarketer calls to cell phone numbers triggered a lot of response from readers. No surprise there since hatred of telemarketers seems to be a universal bond.
Several used our story as a jumping-off point to talk in the comments section about telemarketers in general and trade ideas for dealing with them. We thought some of the discussion was worthy of passing along.
Going on the offensive
A reader named Greg says his answer to telemarketers is to go on the offensive. He says you have to let your creativity flow.
“Start by asking if they were raised by a good family who taught them right from wrong, and if so why are they knowingly working for a criminal organization,” he writes.
Other times he says he tells them he works for the Federal Trade Commission (FTC). If it sounds like Greg spends a lot more time talking to telemarketers than most people want to do, he does. But he says there’s a point to that.
“Sometimes I just taunt them endlessly, telling them that it's my sole purpose to simply waste as much of their time as I possibly can,” Greg writes. “They usually get all arrogant and snotty until they realize they ARE in fact getting played the longer they stay on the phone.”
Engaging telemarketers is a common tactic. A retired Baptist minister listens patiently to any telephone pitch, then asks if he can talk with the caller about his or her personal relationship with Jesus. The elderly gentleman says he is rarely called twice by the same telemarketer.
Do Not Call list
The assumption of many readers posting comments is the FTC’s Do Not Call list “doesn’t work.” Otherwise, why would they be getting so many calls? A reader named Larry set them straight.
“Any organization that is out to scam you will simply ignore the Do Not Call List and there is nothing the FTC can do about it,” Larry writes.
Exactly. Scammers out to steal your money usually operate outside U.S. borders and have nothing to fear from the FTC. But legitimate U.S.-based businesses have to respect the Telemarketing Sales Rule or face potential sanctions. Registering your number won’t stop all the calls but will reduce them.
David pointed out that if you listen to the end of a telemarketer’s call, it will ask you to press a number if you want to be taken off that particular caller’s list. But Joel responded that would be a mistake.
“This alerts the caller it’s a real number and somebody will answer it and resells your number to hundreds of other scam artists,” Joel warns.
A reader named Earl suggests making telemarketing a capital crime, suggesting any candidate making that a plank in his or her platform would win in a landslide.
Things to keep in mind
Here are a few points our readers need to keep in mind. Even if your number is on the Do Not Call list, charities, political organizations and pollsters are allowed to call. Also, if you have initiated contact with a business, it is allowed to follow up with telemarketing calls for 18 months after your last purchase, payment or delivery.
Engaging with a telemarketer who is an obvious scammer might sound fun but might not be a good idea. There’s no need to antagonize a criminal. As for pranking a legitimate telemarketer, let’s face it, not everyone is Jerry Seinfeld.
When calls come in from people you don’t want to talk to, simply hang up. If you have Caller ID and the number is blocked or is unfamiliar, just let it go to voicemail. Sooner or later, they’ll take the hint.
Airlines gain a little altitude as customer satisfaction index rises slightly
They're still in the bottom four categories though (04/20/2015, ConsumerAffairs, by Truman Lewis)
The good news for airlines is that consumers don't hate them quite as much as they once did. But it's nothing to get excited about -- only Internet service providers, subscription TV and health insurance rate worse with consumers, according to the American Customer Satisfaction Index (ACSI).
Airlines reach an ACSI benchmark of 71 on a scale of 0 to 100 for 2015—approaching the category’s peak score of 72 in 1994.
“Airlines are doing a better job of getting travelers to their destinations on time, with less frustration over baggage,” says ACSI Director David VanAmburg. “ACSI findings show that timeliness and baggage handling have improved, which is in-line with Department of Transportation data on reductions in both flight delays and baggage mishandling over the past year.”
The on-board experience still lags, however, with seat comfort remaining the worst part of flying (ACSI benchmark of 65). Passengers are happier with in-flight services such as entertainment options, up 7 percent to 72, but there is still room for improvement.
The ACSI Travel Report 2015 covers customer satisfaction with airlines, hotels and Internet travel agencies.
JetBlue soars, Spirit sinks
Low-cost carrier JetBlue, up 3 percent to top the field at 81, increases its lead over rival Southwest. JetBlue has been number one for passenger satisfaction since 2012, but the airline’s plans to start charging for bags and reduce legroom may make it difficult for JetBlue to keep its title.
Southwest is flat at 78, but still maintains an edge over the remainder of the field. ACSI newcomer Alaska Airlines debuts at 75, ahead of three other ACSI entrants: Allegiant Air (65), Frontier Airlines (58) and Spirit Airlines (54). The major legacy carriers also are unchanged from last year, with Delta (71) holding an advantage over American (66) and United (60).
“Southwest appears to have successfully managed its AirTran acquisition, but its expansion into international travel may cause some turbulence ahead,” says Claes Fornell, ACSI Chairman and founder. “On the other end of the spectrum, Spirit may offer low fares, but its score reflects its minimalist approach to customer service.”
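A note on the arithmetic: ACSI changes are reported as percentages of the 0-to-100 score itself, so "up 3 percent to 81" implies a prior-year score of roughly 81 / 1.03, or about 79. Here is a quick back-of-the-envelope check using two figures quoted above (the rounding convention is mine, not ACSI's):

```python
# (current score, reported percent change) -- figures as quoted in the report above
reported = {
    "JetBlue":                 (81, 3.0),
    "In-flight entertainment": (72, 7.0),
}

for name, (score, pct) in reported.items():
    prior = score / (1 + pct / 100)    # invert: current = prior * (1 + change)
    print(f"{name}: current {score}, implied prior-year score of about {prior:.0f}")
```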
Hotel satisfaction steady
Guest satisfaction with hotels is steady at an ACSI score of 75. Upscale and luxury brands top the category, while budget chains lag far behind.
Travelers paying more at a range of higher-priced properties from Marriott, Hilton and Hyatt are the most pleased (ACSI scores of 80), while economy operator Motel 6 enters the Index at an all-time industry low of 63.
Wyndham stays out of the industry basement despite a 6 percent decline to 68, with Choice and Best Western coming in just shy of average at 73 and 74, respectively. Midscale operator La Quinta debuts at 76, tying InterContinental and Starwood.
According to guests, hotels do an excellent job when it comes to reservations and check-in (ACSI benchmarks of 86 and 85, respectively). Staff courtesy is lower than a year ago, but still quite good, as is website satisfaction (both 83). Strong user satisfaction with hotel websites is advantageous as the industry seeks to reduce its reliance on Internet travel sites for booking.
Online travel improves
Customer satisfaction with online travel agencies edges up 1.3 percent for a second year to an ACSI score of 78. While this matches the category’s previous high points, customers continue to prefer booking directly with hotels or airlines.
Travel websites occupy a crowded field that includes numerous start-ups and search engines, as well as hotel and airline websites. Mergers are a major industry trend, but for the most part these are transparent to consumers as sites maintain their brand identities.
Among the major agencies, Expedia holds a small lead with a 1 percent uptick to 77. Travelocity, recently added to Expedia’s website portfolio, and Orbitz are deadlocked at 75. User satisfaction drops 3 percent for Orbitz just as Expedia pursues a merger.
Outside the potential Expedia family, Priceline is flat at 75 as well. Beating out all four is the combined score of smaller travel websites, stable at 78, which includes both Internet start-ups and direct booking on the websites of hotels or airlines.
Genetically modified food becoming the next battleground in food wars
Congressional measure would quash state labeling laws (04/20/2015, ConsumerAffairs, by Mark Huffman)
The subject of food is packed with emotion these days. A growing number of consumers have strong feelings about what they eat, where it comes from and how it is raised.
Considering that, it might not be surprising that genetically modified food, or food containing genetically modified organisms (GMO), has evoked a lot of heated debate.
The U.S. government is stepping in to stake out its official position in this dispute that increasingly is taking on political overtones – of small natural and organic growers against large agricultural and processing enterprises. The federal information website, USA.gov, has issued a fact sheet on the government’s official position.
Here are some of the key points it contains:
First, what happens when food is genetically engineered? It’s a scientific method in which the DNA genes of one organism are transferred to another organism.
That’s done to make crops grow better, but also to enhance flavor or extend shelf life. It might also make plants heartier, able to withstand longer periods of drought. It may also make food more resistant to insects, reducing the need for pesticides.
Growers and food manufacturers tend to like GMOs, introduced to the market in the 1990s, for economic reasons. Fewer crops are lost to insects, extreme weather and spoilage.
Food activists, in general, highly disapprove of GMOs. For example, a group called the Non GMO Project claims none of the GMO traits currently on the market offer increased yield, drought tolerance, enhanced nutrition, or any other consumer benefit touted by proponents.
The group also cites what it calls “a growing body of evidence” linking GMOs with health problems. But the government fact sheet says the Food and Drug Administration (FDA) regulates and evaluates genetically modified food and hasn’t found any health issues in the genetically modified food currently available.
The agency assesses whether the genetically modified food is toxic or contains allergens, has generally the same nutritional value as traditionally-grown food, or might have long-term health effects. It analyzes its findings to determine if the food complies with safety laws.
The argument has now shifted to disclosure. Does a consumer have the right to know if the food item he or she purchases contains GMOs?
Food activists say yes and have pushed for legislation at the state level to require that information on food labels. The Center for Food Safety, an environmental advocacy organization, reports lawmakers in 30 states have introduced legislation requiring food labels to inform consumers if a product contains GMOs, or outlaw them altogether.
The food industry has pushed back. In Congress, bipartisan sponsors have introduced GMO labeling legislation that would preempt state attempts to regulate GMOs.
In late March Rep. Mike Pompeo (R-Kan.) and Rep. G.K. Butterfield (D-N.C.) introduced a measure to create a voluntary federal labeling standard.
Satellite, cable providers often strike out when it comes to sports
Channels come and go, often leaving sports fans alone in the bleachers (04/20/2015, ConsumerAffairs, by James R. Hood)
Satellite and cable TV providers could give Congress a run for its money in the public disdain department. Just about everything they do annoys consumers, putting the TV subscription and Internet service business a slot or two below airlines in the public estimation.
DISH Network is certainly no exception. Consumers complain about everything from reliability to fees to contract terms to program selection. The signal fails when it rains (and even when it's sunny), they say. Fees are higher than expected and contracts seem to run forever.
And as for the channel line-ups, there've been several near-uprisings over the last year or so, when DISH booted channels from Fox, CNN and others in contract disputes. Some of the channels returned, some didn't.
But while we can all live without news and old movies, baseball is another matter. Fans who signed up for DISH and other providers often think they'll get to see all of their favorite team's schedule but it doesn't always work out that way.
Take Jay of Cartersville, Ga., a Braves fan who filed a video review complaining that he couldn't watch his team's games.
It's not just the Braves. Joel of Bangor, Pa., thought he'd get all the Pirates games but it didn't turn out that way.
"I switched from DirecTv to DISH Network as part of a package deal from my phone company. I asked and was told that there would be no problem getting Pittsburgh Pirate baseball," Joel said. "I was not able to get the games and was told they were blacked out. However, a neighbor down the road was able to get those games on DirecTv so they were not blacked out."
Joel switched back to DirecTV and now faces a $440 contract termination charge from DISH.
DISH is not alone, of course. All the TV subscription services generate similar complaints. Take Comcast, for example.
"I have 'basic cable.' I used to get the Red Sox baseball games and the local news on basic cable," said Richard of Groveland, Mass. "Xfinity changed that so all I get with 'basic cable' is a bunch of Spanish channels, two Boston channels, and a bunch of PBS stations. I can no longer get Red Sox baseball or the New England Patriots football."
The only way to avoid situations like this is to read the contract very carefully before signing it, while ignoring whatever the salesperson is telling you. In most cases, cable and satellite companies have the option to add and drop channels as they see fit. And sometimes, upstream changes in licensing leave them no choice.
Military payment processor skinned servicemembers, feds charge
Undisclosed fees piled up in servicemembers' accounts; company agrees to refunds (04/20/2015, ConsumerAffairs, by Truman Lewis)
American servicemembers have been unwittingly paying millions of dollars in fees to Fort Knox National Company and its subsidiary, Military Assistance Company, the Consumer Financial Protection Bureau (CFPB) charges.
The bureau charges that the military allotment processor did not clearly disclose recurring fees that could total $100 or more. Under a consent order entered into with the Bureau, Fort Knox National Company and Military Assistance Company will pay about $3.1 million in relief to harmed servicemembers.
“Fort Knox National Company and Military Assistance Company enrolled servicemembers without adequately disclosing their fees, and then charged servicemembers without telling them. As a result, servicemembers paid millions of dollars in fees, probably without even knowing it,” said CFPB Director Richard Cordray. “Today we are taking action and others should take note.”
The company is one of the nation’s largest third-party processors of military allotments. The military allotment system allows servicemembers to deduct payments directly from their earnings. The system was created to help deployed servicemembers send money home to their families and pay their creditors at a time when automatic bank payments and electronic transfers were not yet common bank services.
Creditors, such as auto lenders, installment lenders, and retail merchants, have in recent years been known to direct servicemembers to use the system to make loan payments.
Using the Military Assistance Company, known as MAC, servicemembers would set up an allotment that transferred a portion of their pay into a pooled bank account controlled by MAC. Servicemembers would then pay MAC a monthly service charge – typically between $3 and $5 – to have MAC make monthly payments to a creditor out of the account.
On many occasions, however, excess funds accumulated in the payment account, often without servicemembers’ knowledge. An excess, or “residual,” balance might occur, for example, where a debt that a servicemember owed was fully paid off but the servicemember had not yet stopped the automatic paycheck deductions.
The Bureau alleges that from 2010 to 2014, the company routinely charged recurring, undisclosed fees against these residual balances. Tens of thousands of servicemembers had their money slowly drained from their accounts because they were not notified about the charges.
And, since active allotments would replenish the money in the payment account, MAC continued to take such fees in a way that servicemembers could not easily track.
Fort Knox National Company began winding down MAC’s allotment business in 2014. Under the terms of the consent order filed today, Fort Knox National Company and MAC are required to provide about $3.1 million in relief to harmed servicemembers. Servicemembers who may be eligible for relief will be contacted by the Bureau.
Kraft to remove artificial food dyes from its macaroni and cheese
New versions to appear on store shelves by January 2016 (04/20/2015, ConsumerAffairs)
Kraft Foods announced today that it is changing the recipe of its iconic boxed macaroni and cheese (or “Kraft Dinner,” if you're in Canada) to replace artificial food dyes with coloring from natural spices, including turmeric, paprika and annatto. The new, naturally colored products are supposed to appear on store shelves starting in January 2016.
The company has already made similar changes to the formulas of its more child-focused offerings; in late 2013, it announced that, due to consumer demand, it was removing Yellow No. 5 and Yellow No. 6 from its cartoon-shaped macaroni and cheese offerings.
Kraft made that change in response to a Change.org petition asking the company to “Stop Using Dangerous Food Dyes In Our Mac & Cheese.” (There is some dispute over whether Yellow Nos. 5 and 6 actually are “dangerous” to humans; however, there’s no disputing that several other countries think those dyes are dangerous, and have banned them as a result.)
Kraft is not the only American food producer to remove artificial dyes in response to consumer demand; in February, the Nestle candy company said it would start removing artificial colors and flavors from its products, too.
People who suffer from food allergies will have to double-check Kraft's new recipes to make sure they're not allergic to any of the natural colorings.
Turmeric can even interact with certain over-the-counter or prescription drugs, including those taken for diabetes or stomach acid reduction, though for the most part, such drug interaction warnings only apply to people taking concentrated doses of turmeric as a medicinal supplement, not to the vastly smaller quantities used to give food a yellow tint.
Rescue groups trying new ways to get homeless animals adopted out
Sleep-overs, cat cafes and workplace visits are among the new strategies (04/20/2015, ConsumerAffairs)
Rescue groups and Humane Societies in different states are getting creative in ways to attract potential adopters. One of the most recent creative spurts comes from the Arizona Animal Welfare League. They came up with the idea of slumber parties.
Many times when families or even individuals are looking for that special animal to add to their family the expectations are that the little dog or cat will jump right into their arms and give them a big wet sloppy kiss to demonstrate that they are "the one."
It doesn't always work that way because just like people, animals have personalities and some might be the perfect pet for that family but all the commotion of a shelter may inhibit them, and they don't "show" as well as they could.
"We came up with the idea to allow people that were interested in adopting a pet to take it home with them for a few days to see how it's going to work out," said Judith Gardner with the Arizona Animal Welfare League said
The slumber party idea seems to be working because since 2013 they have adopted out more than 1,000 cats and dogs that have had a slumber party with their potential new owner.
The stats aren't too bad -- 73% of the people that take dogs and cats home end up adopting them.
If the animal isn't the right fit, that’s all right also because it gives the agency an idea of exactly what the prospective owner wants and they can match them up better the next time.
Slumber parties aren't the only innovative idea. Cat Cafes have been popular all over the country, and it’s another way of being introduced to a new family member.
Hotels in Georgia have had prospective pets as "house guests." They basically hang around the hotel, available to spend time with guests and even do sleep-overs.
An animal shelter in Florida launched a "Snuggle Delivery" service bringing adoptable puppies and kittens to Broward County workplaces to raise money for homeless animals. Offices must donate a minimum of $150 to get an hour-long puppy or kitten play date during regular business hours. The animals will be available for adoption on the spot. They bring the paperwork and everything.
However it works, if a homeless little dog or cat works its way into a loving home, it's worth the trouble.
Operation RussianDoll exploits zero-day flaw in Adobe Flash and Microsoft Windows
Russian hackers behind scheme to spy on NATO diplomats and U.S. weapon-makers (04/20/2015, ConsumerAffairs)
Over the weekend, researchers at the FireEye cybersecurity firm announced their discovery of zero-day flaws in Adobe Flash and Microsoft Windows, flaws apparently exploited by hackers from a Russian espionage campaign in order to spy on American defense contractors, NATO officials and diplomats, and others in whom Russia's government might take a particular interest.
FireEye nicknamed the campaign “Operation RussianDoll,” and refers to the hackers behind it as Advanced Persistent Threat 28, or APT 28. The official designations for the zero-day flaws themselves are CVE-2015-3043 for Adobe, and CVE-2015-1701 for Microsoft.
On April 18, when it made the announcement, FireEye said Adobe had already independently patched its security hole, and that “While there is not yet a patch available for the Windows vulnerability, updating Adobe Flash to the latest version will render this in-the-wild exploit innocuous. We have only seen CVE-2015-1701 in use in conjunction with the Adobe Flash exploit for CVE-2015-3043.” Windows 8 and later versions are not affected by the flaw.
It's suspected that the APT 28 hackers are connected to or associated with the hackers who breached the State Department and White House computers last year.
The RussianDoll zero-day attacks started April 13 and are still ongoing.
“Zero-day” is tech-speak for any threat that exploits a previously unknown vulnerability, so zero days pass between the discovery of the vulnerability, and the discovery of the attack. (Imagine a homeowner saying “I had no idea that back door even existed – until I discovered burglars walking through it and stealing my stuff.” The back door was a zero-day flaw, the burglary a zero-day exploit.)
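In date terms the definition is a simple subtraction: the number of days defenders had between learning of the flaw and seeing it attacked is zero. A trivial sketch, with placeholder dates (the article gives only April 13 as the start of the attacks, not FireEye's full disclosure timeline):

```python
from datetime import date

# Illustrative placeholder dates -- the article only states that the
# RussianDoll attacks began April 13; the discovery date is assumed here.
flaw_became_known  = date(2015, 4, 13)   # defenders first learn of the bug...
first_exploit_seen = date(2015, 4, 13)   # ...by seeing it exploited the same day

warning_days = (first_exploit_seen - flaw_became_known).days
print(f"defenders had {warning_days} days of warning")   # 0 -> a 'zero-day' exploit
```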
Neither U.S. nor Russian government officials have commented on FireEye's announcements yet.
Leading Economic Index up moderately in March
However, weaker growth may lie ahead (04/20/2015, ConsumerAffairs, by James Limbach)
A closely watched economic prognosticating tool is suggesting continued economic growth, although at a slower pace.
The Conference Board says its Leading Economic Index (LEI) was up 0.2% last month following modest gains dating back to December.
“Although the leading economic index still points to a moderate expansion in economic activity, its slowing growth rate over recent months suggests weaker growth may be ahead,” said Ataman Ozyildirim, Economist at The Conference Board. “Building permits was the weakest component this month, but average working hours and manufacturing new orders have also slowed the LEI’s growth over the last six months.”
The 10 components of The Conference Board Leading Economic Index (a simple illustrative calculation follows the list):
- Average weekly hours, manufacturing
- Average weekly initial claims for unemployment insurance
- Manufacturers’ new orders, consumer goods and materials
- Institute for Supply Management (ISM) Index of New Orders
- Manufacturers' new orders, nondefense capital goods excluding aircraft orders
- Building permits, new private housing units
- Stock prices, 500 common stocks
- Leading Credit Index
- Interest rate spread, 10-year Treasury bonds less federal funds
- Average consumer expectations for business conditions
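The LEI rolls these ten indicators into a single composite reading. The Conference Board's actual standardization factors and weights belong to its published methodology and are not reproduced here; the sketch below uses invented weights and month-over-month changes purely to show the mechanics of combining several indicators into one index change.

```python
# Hypothetical weights and monthly changes -- NOT the Conference Board's actual figures.
components = {
    "Average weekly manufacturing hours":       (0.25, +0.1),
    "Initial unemployment claims (inverted)":   (0.03, +0.2),
    "New orders, consumer goods and materials": (0.08, -0.1),
    "Building permits":                         (0.03, -0.4),
    "Stock prices (S&P 500)":                   (0.04, +0.3),
    # ...the remaining five components are omitted for brevity
}

def composite_change(parts):
    """Weighted average of component changes, normalized to the weights actually supplied."""
    total_weight = sum(weight for weight, _ in parts.values())
    return sum(weight * change for weight, change in parts.values()) / total_weight

print(f"illustrative composite change: {composite_change(components):+.2f}%")
```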
Dog flu reaching epidemic status in parts of the Midwest
The disease is spreading beyond the Chicago area (04/20/2015, ConsumerAffairs)
The dog flu is reaching epidemic proportions and now has crossed state lines. Originally Chicago was the city hit the hardest, but the virus has spread across the state and has now infiltrated neighboring Wisconsin, Ohio and Indiana. The concern is that the virus is a different strain than originally reported.
The Wisconsin Veterinary Diagnostic Lab identified the strain as H3N2, not H3N8 as previously thought. The virus has affected at least 1,000 dogs in all four states. If your dog was inoculated, it's possible that the vaccine will not be effective because it is for a totally different strain.
Although the strains are different, the symptoms remain the same: coughing and sneezing, a runny nose and a fever. It is still recommended to get the vaccine because it may provide protection if the other strain is still circulating, said Keith Poulsen of the UW veterinary school.
There is no evidence that this virus will be contagious to humans but the H3N2 is contagious to cats.
According to Myfoxchicago.com this new strain likely came from Asia and worked its way into the U.S. There is a small window of time that the dogs are contagious with the virus. So the dog must have hopped a plane on a trip from Asia to the U.S.
It was scientists at Cornell University that discovered we were blaming the wrong strain of virus. This is the first time H3N2 has been identified in North America. The last outbreak of the strain occurred in China and South Korea.
At this moment there haven't been any reports of any cats getting the virus. H3N2 can be transmitted from dogs to cats, but whether cats can pass it on to another animal is not known.
To help prevent the virus from spreading it has been recommended to stay away from dog parks, boarding facilities and groomers.
Possible processing deviation prompts recall of B & R pork products
The product may be contaminated with staphylococcal enterotoxin (04/20/2015, ConsumerAffairs, by James Limbach)
B & R Meat Processing of Winslow, Ark., is recalling approximately 2,129 pounds of pork products.
A possible processing deviation may have led to staphylococcal enterotoxin contamination.
There are no reports of adverse reactions due to consumption of these products.
The following cured and uncured pork items, produced between August 7, 2014, and April 1, 2015, are being recalled:
- 1-lb. cryovac packages of “B & R MEAT PROCESSING CURED HAM PORK SAUSAGE.”
- 1-lb. cryovac packages of “B & R MEAT PROCESSING CURED PORK CANADIAN BACON.”
- 1-lb. cryovac packages of “B & R MEAT PROCESSING CURED RATTLESNAKE PORK.”
- 1 to 2-lb. cryovac packages of “B & R MEAT PROCESSING CURED HAM PORK.”
- 0.5 to 1-lb. cryovac packages of “B & R MEAT PROCESSING CURED PORK JOWLS.”
- 1-lb. cryovac packages of “B & R MEAT PROCESSING CURED AR PORK BACON.”
- 1-lb. cryovac packages of “B & R MEAT PROCESSING UNCURED SMOKED PORK BACON.”
- 0.5 to 1-lb cryovac packages of “B&R MEAT PROCESSING SMOKED PORK JOWLS.”
- 0.5 to 1-lb cryovac packages of “B&R MEAT PROCESSING PORK HOCKS.”
- 1-lb cryovac packages of “B&R MEAT PROCESSING UNCURED PORK CANADIAN BACON.”
- 1-lb cryovac packages of “B&R MEAT PROCESSING CURED BACON PORK.”
- 1 to 2-lb cryovac packages of “B&R MEAT PROCESSING UNCURED SMOKED PORK HAM.”
- 1 to 2-lb cryovac packages of “B&R MEAT PROCESSING UNCURED SMOKED PORK AR BACON.”
The recalled products bear the establishment number “Est. 46910” inside the USDA mark of inspection, and were shipped to local stores and farmer’s markets in Arkansas.
Consumers with questions about the recall may contact Scott Ridenoure of B&R Meat Processing, at (479) 634-2211.
BMW recalls older Cooper vehicles with air bag issue
The front passenger air bag may not deploy in a crash (04/20/2015, ConsumerAffairs, by James Limbach)
BMW of North America is recalling 91,800 model year 2005-2006 MINI Cooper and Cooper S vehicles manufactured January 5, 2005, to November 28, 2006, and 2005-2008 MINI Cooper Convertible and Cooper S Convertible vehicles manufactured January 5, 2005, to July 31, 2008.
Due to manufacturing, installation, and exposure issues, the front passenger seat occupant detection mat may not function properly and, as a result, the front passenger air bag may not deploy in a crash.
Failure of the air bag to deploy increases the passenger's risk of injury.
MINI will notify owners, and dealers will replace the front passenger seat occupant detection mat, free of charge. The recall is expected to begin May 1, 2015.
Owners may contact MINI customer service at 1-866-825-1525.
Civia Cycles recalls Hyland bicycles, aluminum Civia fenders
The fender mounting bracket can break or bend (04/20/2015, ConsumerAffairs, by James Limbach)
Civia Cycles, of Bloomington, Minn., is recalling about 1,000 Hyland bicycles and aluminum fenders.
The fender mounting bracket can break or bend, posing a fall hazard to the rider.
The company has received 1 report in which a consumer stated that a bracket broke and resulted in the consumer suffering a cervical spine injury and nerve damage.
This recall includes all Civia aluminum fenders sold separately as aftermarket sets and all Civia Hyland bicycles sold with the fenders as original equipment. The recalled fenders are round, designed for use with 700c wheels and tires and have the Civia logo on the front and rear sides of each fender. Fender sets came in black, blue, green, olive, red and silver.
Hyland bicycles came in blue, green, olive and red. The bikes have "Hyland" on the top tube, "Civia" on the down tube and the Civia logo on the seat tube.
The bikes and fenders, manufactured in Taiwan, were sold at independent bicycle retailers nationwide and online from April 2008, through March 2013, for about $60 per Civia fender set and between $1,200 and $4,500 for Civia Hyland bicycles.
Consumers should immediately stop riding bicycles with the recalled fenders and contact an authorized Civia Cycles dealer to receive a $60 credit.
Consumers may contact Civia Cycles toll-free at (877) 311-7686 from 8 a.m. to 6 p.m. CT Monday through Friday.
Sixty really is the new 40
Researchers say it's time to redefine "old" (04/17/2015, ConsumerAffairs, by Mark Huffman)
There used to be an old saying, “you're as old as you feel.” It was normally said by old people trying to convince themselves they weren't.
But increasingly science has begun to back that up. Sometimes, you see examples of it in real life – ordinary people active and alert into their 90s. Athletes on the field long after peers from earlier generations would have retired.
In Superbowl 49 last February, New England Patriots quarterback Tom Brady had the game of his career at age 37, and no one is suggesting he is close to hanging up his cleats.
Sergei Scherbov, who led a research team studying how people age, says better health and longer life expectancy has turned ideas about what constitutes “old age” on its head.
Time lived or time left?
"Age can be measured as the time already lived or it can be adjusted taking into account the time left to live,” Scherboy said. “If you don't consider people old just because they reached age 65 but instead take into account how long they have left to live, then the faster the increase in life expectancy, the less aging is actually going on."
Scherboy notes that 200 years ago, a person who reached age 60 was old. Really old. In fact, they had outlived their life expectancy.
"Someone who is 60 years old today, I would argue is middle aged,” he says. "What we think of as old has changed over time, and it will need to continue changing in the future as people live longer, healthier lives."
People in their 60s and beyond may have a few advantages the generations that went before them didn't have. Health care services are better than in the past. There is better knowledge about destructive habits, like smoking and poor diet.
Today's older generation is also wealthier. A 2011 British survey found a third of people in their 60s said they were in the best financial shape of their lives, compared to just 23% of their younger peers. They took more vacations and enjoyed life more.
Organizations like AARP have promoted the idea of active, healthy people in their 60s, 70s or older, encouraging “seniors” to stay engaged both physically and mentally. In many cases that means working longer, if desired. But that can sometimes present a whole different set of problems.
Bill Heacock, who runs his own business as a seminar trainer, is 61 and has no intention of quitting. But he tells AARP he's worried that his much younger clients have a hard time seeing past his gray hair. Yet he eats wisely, runs 20 to 25 miles per week and weighs less than he did in college.
Stony Brook University researcher Warren Sanderson says someone like that should not be considered old.
"The onset of old age is important because it is often used as an indicator of increased disability and dependence, and decreased labor force participation,” he said.
A 2009 Pew Research Center study asked Americans to define when someone is “old.” As you might expect, the answers were wide ranging. Only 32% said when someone hits 65 years of age. Seventy-nine percent replied when someone celebrates their 85th birthday.
Geo-inference attacks: how the websites you visit can tell hackers where you are
There's no perfect protection against this, but clearing your browser history helps (04/17/2015, ConsumerAffairs)
Researchers at the National University of Singapore have discovered a serious new threat to personal privacy in the Internet era: “geo-location inference,” which allows almost anyone with a website to determine the precise location of that site's visitors (from country and city right down to street address), and “geo-inference attacks,” which makes this information available to hackers who can make hyper-precise measurements of the timing of browsers' cache queries.
The full research study, available for download as a .pdf, is titled I Know Where You've Been: Geo-Inference Attacks via the Browser Cache. The problem is particularly widespread in the U.S., U.K., Australia, Japan and Singapore, and among users of Chrome, Firefox, Internet Explorer, Opera and Safari browsers.
Head researcher Yaoqi Jia told the Daily Dot that geo-inference attacking is a “new attack” with a “big impact,” and that “It’s the first to utilize timing channels in browsers to infer a user’s geo-location. No existing defenses are efficient to defeat such attacks.” Even the anonymizing network Tor cannot provide perfect protection against it.
What is it?
But what exactly is this problem? Many popular websites are “location-oriented,” which means that different visitors from different locations see different things.
Craigslist lets users narrow their searches by geographical area. Google uses different pages in different countries: Google.com in the United States becomes Google.ca in Canada. And of course, anyone using Google Maps types in all sorts of specific addresses and locations, and Google Maps remembers them all. So does your browser, unless and until you clear your browser history.
You've surely noticed on your own computer or mobile device that, all else being equal, the websites you visit on a regular basis tend to load much faster than some new-to-you website you're visiting for the first time. That's because when you visit your regular sites, your browser saves time by relying partly on its memory cache: the files you see every time you visit a particular website get saved onto your computer or device, so you don't have to re-download them on every subsequent visit.
But this process is not secure, and it does take time. Exactly how much time varies based on many different factors, including your actual physical distance from the website's server.
Suppose that you, and your friend who lives 10 miles away, are both frequent visitors of a website based on the opposite side of the country. (For the sake of this hypothetical, let's also pretend that your computer or mobile device, and your friend's, are alike in every possible way: same connection speeds, same browsing history and memory space, same everything except your geographic locations, which are 10 miles apart.)
As far as your merely human senses can tell, it takes the same amount of time to visit that website from your home computer as it does your friend's. But with a computer's super-human senses, you can see there's actually a time lag – a very noticeable one, if you're measuring in something like fractions of nanoseconds.
That, in a nutshell, is geo-location inference. And when hackers break in and steal this information, that's a “geo-inference attack.” And who exactly is vulnerable to such attacks? According to the researchers, all mainstream-browser users and most popular-website visitors:
all five mainstream browsers (Chrome, Firefox, Safari, Opera and IE) on both desktop and mobile platforms as well as TorBrowser are vulnerable to geo-inference attacks. Meanwhile, 62% of Alexa Top 100 websites are susceptible to geo-inference attacks
So what can you do to protect yourself? Delete your browser cache on a regular basis — and Yaoqi also recommends you “Never give additional permissions to unfamiliar sites or open it for a long time” and “clear [your] cache after visiting a site with your credentials, e.g. online banking sites.” This still leaves users vulnerable while they're actually visiting a website, though: even if you clear your cache immediately after finishing an online session, the cache remains full during the session.
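To make the timing-channel idea described above a little more concrete, here is a minimal, illustrative sketch (in TypeScript, for a browser) of how a page might time the load of a single resource and guess whether it was already in the visitor's cache. The probe URL and the threshold below are hypothetical placeholders, not values from the study; a real geo-inference attack, as the researchers describe it, would time many location-specific resources and calibrate its thresholds carefully.

```typescript
// Illustrative sketch only: time how long one image takes to load, then guess
// whether it was served from the browser cache. A location-specific resource
// (e.g., the logo of a regional site) loading "too fast" suggests the visitor
// has been to that regional page before.
function timeResourceLoad(probeUrl: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const start = performance.now();
    img.onload = () => resolve(performance.now() - start);
    img.onerror = () => reject(new Error("probe failed to load"));
    img.src = probeUrl; // no cache-busting query string: a cache hit is the signal
  });
}

// Assumed threshold for illustration; real measurements would need calibration
// per network, device and resource size.
const CACHE_HIT_THRESHOLD_MS = 20;

// "example.com/city-specific-logo.png" is a hypothetical probe resource.
timeResourceLoad("https://example.com/city-specific-logo.png")
  .then((elapsedMs) => {
    const likelyCached = elapsedMs < CACHE_HIT_THRESHOLD_MS;
    console.log(`probe loaded in ${elapsedMs.toFixed(1)} ms; likely cached: ${likelyCached}`);
  })
  .catch((err) => console.error(err.message));
```

This is also why the advice above centers on clearing the cache: if location-specific resources are not cached, a timing probe of this kind has little to reveal.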
Kelley Blue Book finds 10 winners under $35,000 (04/17/2015, ConsumerAffairs, by Mark Huffman)
Photo credit: BMW. Millennials appear content to wait for a lot of things – like getting married. An analysis of Census Bureau data by the Pew ...
Pesticides on fruits and vegetables lower men's sperm count
Switching to organic produce is the easiest way to cut pesticide intake (04/17/2015, ConsumerAffairs, by Christopher Maynard)
It seems that it is becoming more difficult to stay away from chemicals that affect our reproductive systems. Shortly after research showed the negative effects that DEHP has on women, a new study shows that pesticides are hurting men in similar ways.
Research shows that men who eat fruits and vegetables with high levels of pesticides have a lower sperm count and less normal sperm than men who do not.
Jorge Chavarro, a professor of nutrition and epidemiology, highlights the importance of the study. “To our knowledge, this is the first report to link consumption of pesticide residues in fruits and vegetables, a primary exposure route for most people, to an adverse reproductive health outcome in humans,” he says.
Researchers collected data from semen samples provided by more than 150 men from 2007-2012. Results showed that men who ate pesticide-rich fruits and vegetables (more than 1.5 servings per day) had 49% lower sperm counts and 32% lower percentages of normal sperm than men who ate lesser amounts (less than 0.5 servings per day).
Despite these frightening numbers, Chavarro is adamant that people not forsake fruits and vegetables altogether.
“These findings should not discourage the consumption of fruits and vegetables in general…In fact, we found that consuming more fruits and vegetables with low pesticide residues was beneficial,” he said.
Chavarro goes on to say that picking the right kinds of fruits and vegetables will go a long way. He suggests avoiding foods with typically high levels of pesticides -- such as strawberries, spinach, apples, pears, and peppers. Instead, he urges people to buy these and other foods if they are grown organically.
The study was released in the journal Human Reproduction on March 30th, 2015.
Verizon FiOS begins unwinding the cable bundle
Slimmed-down program packages starting at $55 a month (04/17/2015, ConsumerAffairs, by James R. Hood)
With consumers fleeing to streaming video sources like Netflix, cable companies clearly see the writing on the wall. Verizon FiOS today took the biggest step so far towards unwinding the cable programming bundles that charge consumers for channels they may never watch.
Beginning Sunday, Verizon said, it will offer FiOS Custom TV, starting at $55 a month, not including Internet or telephone service. It will offer a slimmed-down assortment of 35 programming packages with no long-term contract commitment.
Besides the basic package, which includes CNN, HGTV and AMC, customers can select two of seven genre-specific packages — like sports, children or entertainment — that include about 10 to 17 additional channels as part of the basic package. Additional packages are available for $10 a month.
Verizon's current typical cable package is about $90 a month.
The FiOS offering is the latest in a growing assortment of plans from program producers and distributors including CBS, HBO, Dish Network and Sony.
College & pro sports
Verizon has also announced new wireless service focused on college and pro sports, available later this year to Verizon Wireless customers who have a data plan.
“Sports fans are some of the most passionate around, and they never want to miss a single play,” said Terry Denson, vice president, content acquisition and strategy at Verizon. “With consumers – especially younger consumers – demanding access to entertainment and information that matters to them, whenever and wherever they are, college sports with all of its live programming and networks targeted to millennials are a natural fit for any mobile-first video platform.”
Bundles and bundles
While consumers are champing at the bit to disassemble bundles in an attempt to save money, it's not yet clear what the final results of all this unbundling will be.
With streaming video packages costing $10 and up, it doesn't take long to get back to the $90 that industry watchers say is the average household cable expenditure.
It's entirely possible consumers will wind up spending more to put together their own packages but the psychological satisfaction of doing so may outweigh the additional costs.
The picture is not so bright for the cable channels that appeal to niche audiences. Those channels are now included in the bundles lashed together by cable companies. As unbundling progresses, the smaller channels may go the way of the afternoon newspaper.
Food journaling not as easy or effective as it should be: study
Technology makes it easier but it still has a long way to go (04/17/2015, ConsumerAffairs, by Christopher Maynard)
Everyone should strive to make better health choices. Current technology has made this easier through the creation of various apps and online programs, but there is still a long way to go before some technologies are optimized.
A recent joint study conducted by the University of Washington and the Georgia Institute of Technology shows that food journaling still has a long way to go.
The study showed that logging meals in programs such as MyFitnessPal, FatSecret, and CalorieCount was much more difficult than it should be. This conclusion stems from a couple of causes.
Journalers reported that these programs were not always reliable when it came to logging food. Many databases contained inaccuracies, such as common foods not being listed or multiple listings being posted for a single food. This made it difficult to log the information accurately.
For many people, it simply became easier to log foods that were well-known, even if they weren’t necessarily the best foods to eat. Researchers found that pre-packaged and fast foods were much easier to log into the databases when compared to homemade foods.
Just scan a code
One respondent to the study stated that it was much easier to “scan a code on some processed stuff and be done with it.” This undermines the overarching goal of these programs, which is to allow people to make healthier choices.
Another problem found within the programs was the lack of a solid social dynamic. Many food journalers use this technology to create social connections with people who have similar food goals. But because of the difficulty that many users faced in logging their food, many people simply gave up and stopped using their program. This led to diminished comments and journaling, which negatively impacted the progress of those who remained.
Although these issues are problematic, researchers were able to provide several recommendations that could lead to improvement. One of these was the idea of designing more goal-specific systems.
James Fogarty, a researcher for the study, says that food journals have the potential to make a difference for many people, but there certainly need to be changes. He cautions against programs that attempt to “capture the elusive ‘everything’”. Instead, he suggests that programs create “a diversity of journal designs to support specific goals”.
Other suggestions included integrating reputation systems so that users could filter for their specific needs and vote on the accuracy of entries. This initial research has launched additional studies on how to create more journaling solutions in the future.
Consumer prices post second consecutive gain
An increase in energy costs offset a decline in food prices (04/17/2015, ConsumerAffairs, by James Limbach)
Rising energy costs pushed consumer prices higher in March for the second increase in two months.
Figures from the Labor Department (DOL) show the Consumer Price Index (CPI) inched up 0.2% last month following an identical increase in February. Over the last 12 months, though, the CPI has dipped 0.1%.
Energy and food
Energy prices jumped 1.1% on top of February’s 1.1% advance. Gasoline prices shot up 3.9%, fuel oil surged 5.9%, while natural gas declined 2.7% and electricity fell 1.1%.
Food prices, on the other hand, dipped 0.2%, wiping out a 0.2% increase in February. Five of the 6 major grocery store food group indexes declined, with fruits and vegetables down 1.4%, nonalcoholic beverages off 0.6%, and dairy and related products along with meats, poultry, fish, and eggs down 0.5%. Beef and veal prices, however, rose 0.1% -- the 14th monthly increase in a row.
For March, the “core” rate of inflation -- all items less the volatile food and energy categories -- increased 0.2%. A major factor was a 1.2% advance in prices for used cars and trucks and a 0.3% increase in the cost of shelter. Airline fares, in contrast, plunged 1.7% after rising in February.
Over the past 12 months, the core rate of inflation is up 1.8%, compared with the 1.7% increase for the 12 months ending February.
The complete CPI report for March is available on the DOL website.
The supposedly "all-natural" supplements were promoted as weight-loss aids04/17/2015ConsumerAffairsBy Truman Lewis
"Lose weight without changing your diet!" boasts Floyd Nutrition's website, where it offers a supposed free trial of "Pure Asian Garcini...
Rotten eggs lead to prison sentences
Quality Egg LLC executives sentenced for distributing salmonella-infected eggs (04/17/2015, ConsumerAffairs, by Truman Lewis)
An egg rancher and a top executive of his company have each been sentenced to three months in federal prison for knowingly distributing eggs infected with salmonella.
Austin “Jack” DeCoster, 81, of Turner, Maine, who owned Quality Egg, was sentenced to serve three months in prison to be followed by one year of supervised release, and fined $100,000. His son, Peter DeCoster, 51, of Clarion, Iowa, who was Quality Egg’s chief operating officer, was also sentenced to serve three months in prison to be followed by one year of supervised release, and fined $100,000.
Quality Egg was sentenced to pay a fine of $6.79 million and placed on probation for three years. All three defendants were ordered to make restitution in the total amount of $83,008.19. Quality Egg also agreed to forfeit $10,000 as part of its plea agreement with the government.
The defendants were sentenced by U.S. District Court Judge Mark W. Bennett in the Northern District of Iowa.
Quality Egg had earlier pleaded guilty to one count of bribery of a public official, one count of introducing a misbranded food into interstate commerce with intent to defraud and one count of introducing adulterated food into interstate commerce. Jack and Peter DeCoster each pleaded guilty to one count of introducing adulterated food into interstate commerce.
In plea agreements, the company and the father and son admitted that the company’s eggs contained Salmonella Enteriditis.
During the spring and summer of 2010, adulterated eggs produced and distributed by Quality Egg were linked to approximately 1,939 reported consumer illnesses in several states — a nationwide outbreak of salmonellosis that led to the August 2010 recall of millions of eggs produced by the defendants.
“The message this prosecution and sentence sends is a stern one to anyone tempted to place profits over people’s welfare,” said the U.S. Attorney Kevin W. Techau of the Northern District of Iowa. “Corporate officials are on notice. If you sell contaminated food you will be held responsible for your conduct. Claims of ignorance or 'I delegated the responsibility to someone else’ will not shield them from criminal responsibility.”
Prosecutors said that Quality Egg personnel had, for years, disregarded food safety standards and practices and misled major customers, including Walmart, about the company’s food safety practices.
The mowers were assembled with an incorrect blade driver and blade combination (04/17/2015, ConsumerAffairs, by James Limbach)
The Toro Company of Bloomington, Minn., is recalling about 800 walk behind power mowers. The mowers were assembled with an incorrect blade driver and blad...
B & R Meat Processing recalls pork products
Nitrite levels exceed regulatory limit (04/17/2015, ConsumerAffairs, by James Limbach)
B & R Meat Processing is recalling approximately 569 pounds of pork products.
The product contains levels of Nitrites that exceed regulatory levels.
The following products, produced on various dates from July 1, 2014, through October 7, 2014, are being recalled:
- 1-2 lb cryovac packages of cured ham with production dates of 7/1/14 to 7/25/14
- 1-15 lb cryovac packages of cured bacon with production dates of 7/1/14 to 10/7/14
- 1 lb cryovac packages of cured jowls with production dates of 7/1/14 to 7/25/14
The recalled products bear the establishment number “Est.46910” inside the USDA mark of inspection and were shipped to retail outlets in the state of Arkansas.
Consumers with questions about the recall may contact Scott Ridenoure at B & R Meat Processing at (479) 634-2211.
Husky vertical bike hooks recalled
The mounted bike hooks can detach unexpectedly (04/17/2015, ConsumerAffairs, by James Limbach)
Waterloo Industries of Waterloo, Iowa, is recalling about 120,000 Husky Securelock vertical bike hooks in the U.S. and Canada.
The mounted bike hooks can detach unexpectedly, allowing the bike to fall and posing a risk of injury to bystanders.
The firm has received 22 reports of the bike hooks falling from the mounted Trackwall, including 12 reports of property damage to bicycles and/or nearby vehicles. No injuries have been reported.
This recall involves Husky Securelock vertical bike hooks used with a Husky Trackwall garage storage system. The 3 by 3.5-inch black metal plate is mounted to the grooves in the Trackwall and the bike’s tire is attached to a hook protruding from the plate.
There are no markings on the hook. The Trackwall has “Husky” printed on the lower left corner. The hook holds up to a 35 pound bike.
The bike hooks, manufactured in China, were sold exclusively at Home Depot stores nationwide from April 2011, to March 2015, for about $9.
Consumers should immediately stop using the recalled hooks and return them to the nearest Home Depot store for a full refund.
Consumers may contact Waterloo Industries at (800) 833-8851 from 8 a.m. to 5 p.m. ET Monday through Friday.
E-cigarette use tripled among teens -- a "staggering" increase, CDC reports
"A wake-up call that more and more of our kids are becoming addicted"04/16/2015ConsumerAffairsBy James R. Hood
Federal health officials lit a match today that ignited a firestorm on both sides of the vaping divide, reporting that current e-cigarette use among middle and high school students tripled from 2013 to 2014.
Sen. Barbara Boxer (D-Calif.) called the report from the Centers for Disease Control and Prevention and the U.S. Food and Drug Administration "a wake-up call to all of us that more and more of our kids are becoming addicted to e-cigarettes.
"If e-cigarette companies are serious about helping people quit smoking, they must stop targeting our kids with their products and pull their advertisements from television," Boxer said.
The American Vaping Association -- an industry group -- in effect labeled the report a smokescreen and interpreted the numbers to indicate that "as youth experimentation with vaping has grown, teen smoking has declined at a rate faster than ever before."
The annual study found that current e-cigarette use (use on at least 1 day in the past 30 days) among high school students increased from 4.5% in 2013 to 13.4% in 2014, rising from approximately 660,000 to 2 million students. Among middle school students, current e-cigarette use more than tripled from 1.1% in 2013 to 3.9% in 2014 — an increase from approximately 120,000 to 450,000 students.
E-cigs now top tobacco product
This is the first time since the survey started collecting data on e-cigarettes in 2011 that current e-cigarette use has surpassed current use of every other tobacco product overall, including conventional cigarettes, the CDC said.
“We want parents to know that nicotine is dangerous for kids at any age, whether it’s an e-cigarette, hookah, cigarette or cigar,” said CDC Director Tom Frieden, M.D., M.P.H. “Adolescence is a critical time for brain development. Nicotine exposure at a young age may cause lasting harm to brain development, promote addiction, and lead to sustained tobacco use.”
Hookah smoking roughly doubled for middle and high school students in the study, while cigarette use declined among high school students and remained unchanged for middle school students. Among high school students, current hookah use rose from 5.2% in 2013 (about 770,000 students) to 9.4% in 2014 (about 1.3 million students).
The increases in e-cigarette and hookah use offset declines in use of more traditional products such as cigarettes and cigars. There was no decline in overall tobacco use between 2011 and 2014. Overall rates of any tobacco product use were 24.6 % for high school students and 7.7 % for middle school students in 2014.
“In today’s rapidly evolving tobacco marketplace, the surge in youth use of novel products like e-cigarettes forces us to confront the reality that the progress we have made in reducing youth cigarette smoking rates is being threatened,” said Mitch Zeller, J.D., director of FDA’s Center for Tobacco Products. “These staggering increases in such a short time underscore why FDA intends to regulate these additional products to protect public health.”
Cigarettes, cigarette tobacco, roll-your-own tobacco and smokeless tobacco are currently subject to FDA’s tobacco control authority. The agency currently is finalizing the rule to bring additional tobacco products such as e-cigarettes, hookahs and some or all cigars under that same authority.
Sen. Boxer would like to see things move along a bit faster. In March, she sent a letter to Food and Drug Administration (FDA) Commissioner Margaret A. Hamburg along with a petition urging the agency to finalize a rule to regulate e-cigarettes and protect public health.
Yesterday, she wrote to the executives of five of the largest e-cigarette manufacturers urging them to refrain from advertising e-cigarettes on television, citing the effects of e-cigarette advertising on young people.
The Vaping Association, meanwhile, claimed the CDC's figures -- showing a huge increase in vaping and a decline in smoking by high school students -- amounted to evidence that vaping was helping students resist the urge to smoke cigarettes.
"While no vaping or smoking by teens is obviously the ideal, we do not live in a perfect world. There remains no evidence that e-cigarettes are acting as gateway products for youth. In fact, this study and others suggest that the availability of vapor products has acted as a deterrent for many teenagers and potentially kept them away from traditional cigarettes," said Gregory Conley, the group's president.
Ohio State researchers explore psychological effects of acetaminophen (04/16/2015, ConsumerAffairs, by Mark Huffman)
Acetaminophen, the active ingredient in pain relievers like Tylenol, can have well-known physical side-effects. According to the National Institutes of Hea...
Senators introduce Data Security Act: banks like it, consumer advocates do not
Law would weaken currently existing levels of consumer and privacy protection, critics argue (04/16/2015, ConsumerAffairs)
This week, Senators Tom Carper (D-Delaware) and Roy Blunt (R-Missouri) introduced the Data Security Act of 2015, which is similar to the Data Security Acts the two senators proposed in 2012 and 2014.
If passed into law, the bill would require that companies who lost customer data to hackers let customers know within 30 days that their credit or debit cards have been compromised, and establish other rules as well.
For the most part, card-issuing institutions such as banks and credit unions support Carper and Blunt's bill, yet privacy and consumer-rights advocates worry that the proposal as currently written would actually weaken the amount of protection consumers currently have, by overriding stronger state-level consumer-protection laws and by eliminating certain national-level protections currently in place.
Weaker in some ways
Card-issuing institutions incur massive costs anytime a major hack compromises their cards en masse. The Credit Union National Association (CUNA) called the Data Security Act “much-needed legislation” that would “protect the sensitive financial information of American people by establishing a national standard for data security, protection and consumer notification.”
Yet that national standard, at least in some respects, would arguably be weaker than some standards which currently exist. For example: the language of the bill, as written, says that companies do not have to disclose security breaches to their customers if the companies discover that “there is no reasonable risk of identity theft, economic loss, economic harm, or financial fraud.” Currently, companies must notify consumers of data breaches, whether they cause financial harm or not.
Representative Jan Schakowsky (D-Illinois), speaking against the bill, told the Washington Post: “Fifty-one states or territories have some sort of data protection legislation on the books -- 38 would see the data protection breach notification diminished in some way because this is a preemption law.”
Yet that patchwork of varying state- or territorial-level laws is exactly why the bill's supporters want a single unifying national standard. Rep. Peter Welch (D-Vermont), one of the bill's co-sponsors, said that right now, if a customer in one state is affected when hackers breach security at a company based in another state, it's not certain which state actually has jurisdiction. “I am usually, almost uniformly opposed to preemption — but this is an instance where unless you have a national standard you won't have protection,” he said.
On the other hand, under the current standard, companies in such situations generally adhere to the stronger of the two states' laws, which again hearkens back to the argument that this proposed bill would actually weaken consumer protections.
Airline service declined in 2014
Annual rating finds worst showing since 2009 (04/16/2015, ConsumerAffairs, by Mark Huffman)
What defines a good airline? Different consumers will have different opinions but most might agree that taking off and landing on time and not losing your luggage rank pretty high as criteria.
Beyond that, consumers no longer expect a free lunch or much of anything for free. Expectations have fallen pretty low in the last decade.
So it was interesting when Wichita State University and Embry-Riddle Aeronautical University issued their annual Airline Quality Rating (AQR). According to the rating, only 3 of 12 U.S. airlines improved their performance in the last 12 months. One held steady while the remaining 8 declined.
Virgin America on top
Virgin America came out on top for the third straight year, largely on its record of keeping denied boardings to a minimum. In other words, if you booked a Virgin America flight, you had a good chance of not being bumped.
Virgin America’s involuntary denied boarding performance was just 0.09 per 10,000 passengers in 2014, the best of all the airlines. The industry average was 0.92.
Virgin's consumer complaint rate is also lower than the industry average, perhaps because its mishandled bag rate was the lowest in the industry. Its lost or mishandled luggage rate was 0.95 per 1,000 passengers. The industry average was 3.62.
Virgin America's one decline was its on-time performance, a drop from the previous year.
If getting there on time was your primary objective, you might have been better off flying Hawaiian Airlines. Its 2014 on-time performance was the best of the airlines that were rated. That helped make Hawaiian Airlines number 2 in the ratings.
Delta Airlines was the highest-rated legacy airline, moving up one notch from fourth to third place. Oddly, Delta moved up despite a decline in on-time performance, an increase in mishandled luggage and a rise in customer complaints. A drop in denied boardings was its only gain.
Of course, performance is all a matter of what you're comparing it to.
“The Airline Quality Rating industry score for 2014 shows an industry that declined in overall performance quality over the previous year,” the authors write. “As an industry, performance in 2014 was worse than the previous four years. The AQR score for 2014 was a return to levels seen in 2009.”
In other words, the airline industry, after managing to improve since the recession, appears to have taken a step backward. Wichita State's Dean Headley says consumers should take that as a red flag. For airlines, he says, it means working harder to compete for customer loyalty.
“Bigger isn’t always better, and the downturn in performance suggests that customer perceptions of poor outcomes are warranted,” said Headley.
Study co-researcher Brent Bowen, dean of the College of Aviation at Embry-Riddle Aeronautical University’s Prescott, Ariz., campus, believes much of the problem can be traced to airline mergers and consolidations. He notes the airlines promised consolidation would improve service but that hasn't been the case. The one possible exception – Delta.
“Delta is an excellent example of a merger that declined in performance and systematically has clawed its way back to a new high level of quality performance,” Bowen said. “This shows that if an airline commits to improving their AQR rating, they can do it.”
Meanwhile, Bowen says the airline industry is doing quite well in terms of profits. It's evident, he says, they aren't investing in customer service and restoring employee concessions given up during the economic decline.
In network-security terms: We have met the enemy, and he is us (04/16/2015, ConsumerAffairs)
Verizon released its 2015 Data Breach Investigations Report (DBIR) this week and the results are neither surprising nor encouraging: most major security br...
Plasticizing agents dangerous to the female reproduction system
Researchers warn that many low doses are as dangerous as a few large doses (04/16/2015, ConsumerAffairs, by Christopher Maynard)
As consumers, we are faced with decisions every day as to what products we need to buy. Unfortunately, new evidence shows that some of these products could be severely inhibiting our reproductive health.
Research shows that the phthalate DEHP, which is a plasticizing agent used in upholstery, baby toys, building materials and many other consumer products, is harming the female reproductive system. The chemicals in these products are disrupting the growth and function of the ovaries.
Specifically, these chemicals affect the follicles in the adult ovary in a negative way. Exposure to DEHP degrades them over time and inhibits the production of hormones that would regulate their growth. Jodi Flaws, a bioscience professor, explains why these follicles are so important.
“The follicles are the structures that contain the egg, and if you’re killing those, you may have fertility issues,” she says. “The bottom line is that DEHP may damage the follicles and impair the ability of the ovary to make sex steroids like estrogens and androgens, which are really important for reproduction.”
Low doses no better
Flaws’ research is ongoing, and is looking at the problem from a “real world” perspective. She explains that exposure to low doses of DEHP, which are typical in everyday life, can be just as damaging as the high doses.
"Sometimes it's at the low doses that you have the most profound effects, and that's what we're seeing with the phthalates," she said. Her research, amongst other similar initiatives, is being funded by the National Institute of Environmental Health Sciences at the National Institutes of Health and the U.S. Environmental Protection Agency.
Target offers $19 million to MasterCard issuers to settle 2013 data breach
Separate settlement with Visa issuers still in the works (04/16/2015, ConsumerAffairs)
Target continues sweeping up the fallout from the massive data breach which compromised the credit or debit card information of 40 million Target customers in late 2013.
On April 15 the retailer agreed to reimburse a total of $19 million to various financial institutions who issued MasterCard-branded cards compromised in the breach.
The company is negotiating a separate settlement with the issuers of Visa-branded cards.
This is not the first payment Target has incurred over the hacking. Last month, the company offered to pay $10 million to settle a class action suit brought by individual consumers who lost time and/or money after their cards were compromised at Target.
The current settlement offer with various MasterCard issuers is meant to cover the costs incurred by having to cancel and re-issue compromised cards.
(According to the Credit Union National Association, as of late 2014, the average cost for an issuing institution to replace a card was $8.02, which includes re-issuing the card itself, paying for fraudulent charges, and paying the additional staff costs required to monitor customer accounts, notify customers as necessary, and related costs.)
As of press time, Target's $19 million offer has not formally been accepted; acceptance is contingent upon approval by 90% of eligible account holders. (On a similar note, last month's $10 million offer to settle a class-action suit is also pending approval, this time from a federal judge.) In order for the $19 million MasterCard deal to go through, Target will need approval to make payment on or before May 20.
A rebound in new home construction
Initial jobless claims moved higher (04/16/2015, ConsumerAffairs, by James Limbach)
New home construction bounced back last month from the horrendous slide of more than 15% it suffered during February.
According to a joint release from the Census Bureau and the Department of Housing and Urban Development, privately-owned housing starts rose 2.0% in March to a seasonally adjusted annual rate of 926,000. Nonetheless, the rate is 2.5% below the March 2014 rate of 950,000.
Single-family home construction was a major factor, with an increase of 4.4% -- to a rate of 618,000. The March rate for units in buildings with 5 units or more was 287,000, down 22,000 from the previous month.
Construction of new homes authorized by building permits in March fell 5.7%, to a seasonally adjusted annual rate of 1,039,000, but is 2.9% above the March 2014 level.
Permits for single-family home construction jumped 2.1%, while apartment building permits were down 72,000 -- to a rate of 378,000.
The complete report is available on the Commerce Department website.
Initial jobless claims
Separately, the government reports first-time applications for state unemployment benefits shot higher last week, confounding economists from Briefing.com who were forecasting a decline.
According to the Labor Department (DOL), initial jobless claims jumped 12,000 in the week ending April 11 to a seasonally adjusted 294,000 from the previous week's revised level of 282,000.
The 4-week moving average, which is less volatile than the weekly tally and considered a better gauge of the labor market, dropped by 250 to 282,750. That's a level unseen since December 2000.
The full report may be found on the DOL website.
The herbal product is imported from Hong Kong (04/16/2015, ConsumerAffairs)
A new alert issued by the FDA warns of lead in Bo-Ying Compound, an herbal product promoted as useful in treating a wide variety of conditions in...
Bankruptcy judge blocks large group of GM ignition-linked lawsuits
The ruling affects claims from accidents that occurred prior to 2009 (04/16/2015, ConsumerAffairs, by James R. Hood)
A federal judge has given General Motors the key to lock out some claims by consumers seeking damages tied to faulty ignition switches in millions of Chevrolet Cobalts and other vehicles.
U.S. Bankruptcy Judge Robert Gerber ruled that GM can use its 2009 bankruptcy to shield it from many claims on behalf of 84 people who were killed and 157 seriously injured in accidents blamed on the switches, as well as from lawsuits filed by customers who said their car's value had been harmed by the defect.
Gerber said in a 134-page ruling that there was no evidence GM committed fraud during its bankruptcy proceedings, saying that GM executives did not know how serious the problem was until 2013.
It had been predicted that lawsuits against GM could total as much as $10 billion if allowed to go to trial.
The ruling does not affect class-action claims on behalf of customers whose accidents occurred after the 2009 bankruptcy filing. That case is moving forward, with GM executives expected to begin giving depositions next month.
GM recalled 2.6 million Cobalts and other vehicles in 2014 after a series of accidents that occurred when the defective switches caused engines to cut out, leaving the drivers without power steering or brakes.
GM has set up a victim's compensation fund to settle death and injury claims, including those submitted by customers who -- under the judge's ruling -- will not be able to sue.
Spring buying season boosts builder confidence
It's the first gain in 4 months (04/16/2015, ConsumerAffairs, by James Limbach)
It appears that all it takes is a little good weather to boost the spirits of home builders.
According to the National Association of Home Builders/Wells Fargo Housing Market Index (HMI), builder confidence in the market for newly built, single-family homes rose 4 points in April to a level of 56.
“As the spring buying season gets underway, home builders are confident that current low interest rates and continued job growth will draw consumers to the market,” said NAHB Chairman Tom Woods, a home builder from Blue Springs, Mo.
The NAHB/Wells Fargo Housing Market Index gauges builder perceptions of current single-family home sales and sales expectations for the next 6 months as “good,” “fair” or “poor.” The survey also asks builders to rate traffic of prospective buyers as “high to very high,” “average” or “low to very low.”
Scores for each component are then used to calculate a seasonally adjusted index where any number over 50 indicates that more builders view conditions as good than poor.
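For readers curious how answers like "good," "fair" and "poor" become a single number with 50 as the break-even point, the sketch below shows a generic diffusion-style index in TypeScript. The formula and the sample tally are illustrative assumptions only; they are not NAHB's published methodology, and no seasonal adjustment is shown.

```typescript
// Generic diffusion-index sketch (illustrative, not NAHB's exact formula).
// Idea: the score runs from 0 (everyone answers "poor") to 100 (everyone
// answers "good"), with 50 meaning optimists and pessimists exactly balance.
interface SurveyTally {
  good: number; // respondents rating conditions good / traffic high
  fair: number; // respondents rating conditions fair / traffic average
  poor: number; // respondents rating conditions poor / traffic low
}

function diffusionIndex(t: SurveyTally): number {
  const total = t.good + t.fair + t.poor;
  const goodShare = (t.good / total) * 100;
  const poorShare = (t.poor / total) * 100;
  return (goodShare - poorShare + 100) / 2;
}

// Hypothetical tally: 60% good, 25% fair, 15% poor -> index of 72.5, i.e. above 50,
// meaning more builders view conditions as good than poor.
console.log(diffusionIndex({ good: 60, fair: 25, poor: 15 }));
```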
All 3 HMI components registered gains this month. The component charting sales expectations in the next 6 months jumped 5 points to 64, the index measuring buyer traffic increased 4 points to 41, and the component gauging current sales conditions was up 3 points to 61.
“The HMI component index measuring future sales expectations rose 5 points in April to its highest level of the year,” said NAHB Chief Economist David Crowe. “This uptick shows builders are feeling optimistic that the housing market will continue to strengthen throughout 2015.”
Looking at the three-month moving averages for regional HMI scores, the South rose 1 point to 56 and the Northeast held steady at 42. The Midwest fell by 2 points to 54 and the West dropped 3 points to 58.
Volkswagen recalls Routans
The vehicles have an electrical defect (04/16/2015, ConsumerAffairs, by James Limbach)
Volkswagen Group of America is recalling 20,676 model year 2009 Routan vehicles manufactured June 25, 2008, to June 10, 2009, and 2010 Routan vehicles manufactured October 1, 2009, to August 11, 2010.
The vehicles have a defect that can affect the safe operation of the airbag system.
If the ignition key inadvertently moves into the OFF or ACCESSORY position, the engine will turn off, which will then depower various key safety systems including -- but not limited to -- air bags, power steering, and power braking. Loss of functionality of these systems may increase the risk of crash and/or increase the risk of injury in the event of a crash.
Until this recall is performed, customers should remove all items from their key rings, leaving only the ignition key. The key fob (if applicable), should also be removed from the key ring. Road conditions or some other jarring event may cause the ignition switch to move out of the run position, turning off the engine.
Volkswagen will notify owners, and dealers will replace the ignition switch and key fobs, free of charge. The recall is expected to begin in April 2015 for 2009 Routan vehicles, and in August 2015 for 2010 Routan vehicles. Owners may contact Volkswagen customer service at 1-800-822-8987. Volkswagen's number for this recall is 28H1.
Cycle Gear recalls semi truck and motorcycle toys
The toys contain excessive levels of lead (04/16/2015, ConsumerAffairs, by James Limbach)
Cycle Gear of Benicia, Calif., is recalling about 155 sets of Wheelies semi-truck with 6 motorcycles and push-along motorcycle with rider.
The toys contain excessive levels of lead, which is a violation of the federal standard for lead content.
No incidents or injuries have been reported.
This recall involves plastic Wheelies semi-truck with 6 motorcycles toy and Wheelies push-along motorcycle toys. The semi-truck has a dual-level trailer that carries six motorcycles and comes in red and purple with multi-colored motorcycles. The truck with the trailer attached measures 18 inches long by 7 inches tall.
The truck has the item number Item # TAG66767 and SKU# 752249 printed on the packaging. The Wheelies push-along motorcycle is red with a rider in black with silver accents. The product has item # TBG04323 and SKU# 752251 printed on the package.
The toys, manufactured in China, were sold at Cycle Gear stores and online at www.cyclegear.com from November 2014, through December 2014, for about $10 for Push Along Motorcycle and $20 for Semi-Truck with six motorcycles.
Consumers should immediately take the recalled toys away from children, stop using them and contact Cycle Gear Inc. for a full refund. Cycle Gear Inc. is contacting consumers directly.
Consumers may contact Cycle Gear at (800) 292-5343 from 8 a.m. to 5 p.m. ET Monday through Friday.
Toyota recalls Scion tC vehicles
The vehicles have a suspension issue (04/16/2015, ConsumerAffairs, by James Limbach)
Toyota Motor Sales, U.S.A. is recalling approximately 114 model year 2015 Scion tC Release Series 9 vehicles.
The rear suspension arm bolts and nuts could have been tightened improperly at two of the Toyota facilities at which accessory coil springs are installed prior to delivery to dealers. In this condition, the bolts could become loose during vehicle operation. Under some circumstances the control arm could eventually detach, increasing the risk of a crash.
Toyota says it is unaware of any accidents or injuries caused by this condition.
Vehicle owners will receive a notification by first class mail, and Toyota dealers will replace the bolts, nuts, rear suspension arms and rear suspension member sub-assemblies.
Consumers may call Toyota customer service at 1-800-331-4331.
Buy4easy recalls full face helmets with visors
The helmets failed the penetration test (04/16/2015, ConsumerAffairs, by James Limbach)
Buy4easy is recalling 786 TMS JX-A5005 full face helmets with visors, size XL, manufactured June 1, 2013, to July 15, 2013.
The recalled motorcycle helmets failed the penetration test, and the helmet label does not meet Department of Transportation regulations.
The wearer of this helmet may not be properly protected in the event of vehicle crash.
The remedy for this recall is still under development. The manufacturer has not yet provided a notification schedule.
Owners may contact Buy4easy customer service at 1-626-388-9898.
Chrysler recalls Dodge Vipers
The driver or passenger door may open unexpectedly (04/16/2015, ConsumerAffairs, by James Limbach)
Chrysler (FCA US LLC) is recalling 1,451 model year 2013-2014 Dodge Vipers manufactured October 1, 2012, to February 6, 2014.
Moisture may get into the door switch, resulting in the driver or passenger door opening unexpectedly while the vehicle is in motion, increasing the risk of a crash and injury.
Chrysler will notify owners, and dealers will replace the door handle assemblies and top covers, free of charge. The recall is expected to begin May 18, 2015.
Owners may contact Chrysler customer service at 1-800-853-1403. Chrysler's number for this recall is R14.
Cell phones have a nifty feature your landline phone doesn't (04/15/2015, ConsumerAffairs, by Mark Huffman)
Telemarketers have had a much harder time of it in recent years. Millions of consumers have registered their home phones on the Federal Trade Commission...
A reader asks about a close call with a con artist (04/15/2015, ConsumerAffairs)
If you're looking for work in this economy you know you must be careful, because there exist plenty of scammers, thieves and con artists using fake job...
Corinthian Colleges fined $30 million for misrepresenting job placement rates
Heald Colleges ordered to stop enrolling new students (04/15/2015, ConsumerAffairs)
The Department of Education has levied a $30 million fine against Corinthian Colleges, Inc. after an investigation “confirmed cases” that the company misrepresented the schools' job placement rates to current and prospective students of Corinthian-owned Heald Colleges.
The DoE agreement also forbids Heald from enrolling any more students, and requires the school to help current students either complete their education or continue it elsewhere.
According to the DoE, Corinthian's deceptive practices include paying temporary employment agencies to hire graduates for on-campus jobs lasting as little as two days, so that Heald could then count those students as having found work in their field after graduation.
Such allegations against the company are nothing new. The DoE's fine is merely the latest in a series of legal actions taken against the embattled chain of for-profit colleges.
Last September, when the Consumer Financial Protection Bureau sued Corinthian for predatory lending, the charges included allegations that the company would pay temp agencies to hire Corinthian grads to inflate the schools' placement rates, and also that the company promised good “career” options to graduates of Corinthian-owned Everest, WyoTech or Heald schools, yet Corinthian counted as a “career” any job lasting only one day, so long as there was the possibility of a second day of work.
In February, Corinthian students who'd taken out “Genesis” private loans got a collective $480 million in debt relief, resulting in debt reductions of up to 40 percent.
The schools' reputation among some groups is so unsavory that earlier this month, the attorneys general of nine states urged the federal government to forgive the federal debt burdens incurred by students holding the overpriced and worthless degrees.
And this week, when the Department of Education announced the $30 million fine against Corinthian, Education Secretary Arne Duncan said in a statement that “This should be a wake-up call for consumers across the country about the abuses that can exist within the for-profit college sector. We will continue to hold the career college industry accountable and demand reform for the good of students and taxpayers. And we will need Congress to join us in that effort.”
"Violent students' and taxpayers' trust"
The DoE's investigation found that Corinthian had badly mislead potential and current students of Heald Colleges, to the point where the students might not have enrolled in that school at all, had they known the truth.
U.S. Undersecretary of Education Ted Mitchell said in a statement, “Instead of providing clear and accurate information to help students choose which college to attend, Corinthian violated students' and taxpayers' trust. Their substantial misrepresentations evidence a blatant disregard not just for professional standards, but for students' futures.”
Among other things, the Department's investigation found that Heald paid companies to hire graduates for temporary positions lasting as little as two days, performing such basic tasks as moving computers and organizing cables, then counted those graduates as “placed in field.” Heald also counted obvious out-of-field jobs as in-field placements, including one graduate of an accounting program whose food-service job at Taco Bell was counted as “in-field” work.
In addition, the DoE said, “Heald College failed to disclose that it counted as 'placed' those graduates whose employment began prior to graduation, and in some cases even prior to the graduate's attendance at Heald.”
Like that Accounting graduate working at Taco Bell: she graduated from Heald in 2011 but had started at Taco Bell five years earlier, in June 2006.
A Corinthian spokesperson said in a statement that the Department of Education's conclusions were “highly questionable” and “unfounded,” and that “These unfounded, punitive actions do nothing to advance quality education … but would certainly shatter the dreams and aspirations of Heald students and the careers of its employees.” The spokesperson also said that Corinthian plans to appeal.
Fish oil "vitally important" to the developing brain
Supplements may not help adults but they're important for prenatal development | 04/15/2015 | ConsumerAffairs | By Truman Lewis
Fish oil supplements may not do much for your heart, recent studies have suggested, but UC Irvine scientists say the fatty acids they contain are vitally important to the developing brain.
The findings suggest that it's important for pregnant women to maintain a diet rich in those fatty acids during pregnancy and for their babies after birth.
In the study appearing today in The Journal of Neuroscience, UCI neurobiologists report that dietary deficiencies in the type of fatty acids found in fish and other foods can limit brain growth during fetal development and early in life.
Susana Cohen-Cory, professor of neurobiology & behavior, and colleagues identified for the first time how deficits in what are known as n-3 polyunsaturated fatty acids cause molecular changes in the developing brain that result in constrained growth of neurons and the synapses that connect them.
These fatty acids are precursors of DHA (docosahexaenoic acid), which plays a key role in the healthy creation of the central nervous system. In their study, which used female frogs and tadpoles, the UCI researchers were able to see how DHA-deficient brain tissue fostered poorly developed neurons and limited numbers of synapses, the vital conduits that allow neurons to communicate with each other.
"Additionally, when we changed the diets of DHA-deficient mothers to include a proper level of this dietary fatty acid, neuronal and synaptic growth flourished and returned to normal in the following generation of tadpoles," Cohen-Cory said.
DHA is essential for the development of a fetus's eyes and brain, especially during the last three months of pregnancy. It makes up 10 to 15 percent of the total lipid amount of the cerebral cortex. DHA is also concentrated in the light-sensitive cells at the back of the eyes, where it accounts for as much as 50 percent of the total lipid amount of each retina.
Dietary DHA is mainly found in animal products: fish, eggs and meat. Oily fish - mackerel, herring, salmon, trout and sardines - are the richest dietary source, containing 10 to 100 times more DHA than nonmarine foods such as nuts, seeds, whole grains and dark green, leafy vegetables.
DHA is also found naturally in breast milk. Possibly because of this, the fatty acid is used as a supplement for premature babies and as an ingredient in baby formula during the first four months of life to promote better mental development.
Don't flush that goldfish!
Fish reproduce quickly and can spread disease to other fish | 04/15/2015 | ConsumerAffairs
In my younger years when the goldfish didn't make it for whatever reason, they were simply flushed down the toilet. Perhaps it's not the most proper of burials but I'm not sure my parents really thought there were alternatives.
Many times people who own goldfish -- or any fish for that matter -- think that a lake or other public body of water is a good place to let them go when they no longer want them.
The problem is that fish are prolific breeders. Someone who perhaps thought they were doing the right thing dumped a handful of goldfish into a lake in Boulder, Colorado, just three years ago, and they have since multiplied into the thousands. If you remember anything from sex ed, it only takes two.
"Based on their size, it looks like they're 3-year-olds, which were probably produced from a small handful of fish that were illegally introduced into the lake," Ben Swigle, a fish biologist at the Colorado Parks and Wildlife (CPW), told Live Science.
The issue with so many goldfish is that the overabundance will create competition for native fish. It disturbs the food chain. There are about three or four fish species considered threatened or "species of concern" living downstream from the lake. If the goldfish end up going downstream it will affect spawning and also foraging resources.
Disease is another concern because pet goldfish are not routinely tested for illnesses. These koi goldfish may be carrying viruses. They have the potential to kill thousands of other fish. Aquarium fish tend to get bacterial kidney disease and they could spread that throughout the area.
Scientists are currently considering three options for dealing with the exotic goldfish explosion. Officials could drain the lake and leave it dormant for a while, use electricity to stun the fish and then net them out, or use a chemical called rotenone that interferes with respiration to "remove" the fish.
Swigle said the plan, if the fish can be removed, is to feed them to injured hawks, ospreys and bald eagles at a raptor sanctuary.
Looking to dispose of a fish humanely? The American Veterinary Medical Association has guidelines for the euthanasia of animals; you can check out its 2013 edition.
Flushing fish is not recommended as the hardy fish can make it to natural waterways and wreak havoc. You don't want your fish responsible for creating 3,000 other fish.
NY groups test toys, find toxins
Retailers say they meet federal safety standards | 04/15/2015 | ConsumerAffairs
It's hard to imagine that stores could still have toys and products on the shelves that can be toxic to kids. There has been so much awareness about products containing chemicals that are harmful to children.
But according to a report by the New York League of Conservation Voters Education Fund and Clean and Healthy New York, some big-name stores are still selling toxic products. Target, TJ Maxx, Dollar General, 99 Cent City and Children's Place stores in Onondaga County were all found to be selling children's toys with dangerous levels of toxic chemicals.
The groups had people from Clean and Healthy NY go into these stores and test merchandise with a hand-held device that can measure levels of heavy metals.
What they found may be surprising. Arsenic seems to keep turning up in unexpected places; lately people have been talking about the level of arsenic in wine, and here the testers found it in children's jewelry and hair clips. Xylophones were the big-ticket item for chemicals, found to contain lead, cobalt and mercury. The testers even found toxins in the zippers of kids' clothing.
The fear is that these chemicals can cause brain damage and other problems when small children put the products in their mouths. Children who handle the tainted toys and then put their fingers in their mouths can ingest the toxins that way as well.
There are federal standards, but the problem is that they are voluntary. There has been a great deal of effort to pass mandatory rules, but as of yet nothing has gone through. Some New York counties are acting on their own to pass laws, and Washington State passed a law in 2008 that requires manufacturers of toys and other children's products to disclose the toxic chemicals they use.
Renee Havener, a former children's hospice nurse, spoke when the report was released. She would like the state Legislature to pass a law forcing manufacturers to list the toxic chemicals in toys they sell in New York. Albany County recently passed its own law in the absence of any state legislation.
According to Syracuse.com Target responded to the allegations with a statement: "Target is committed to providing high quality and safe products to our guests. The products in question meet all federal product safety requirements."
Health apps -- healthful or a health threat?
Researcher says the apps may contribute to an unhealthy obsession with health | 04/15/2015 | ConsumerAffairs | By Christopher Maynard
Health and wellness have never been easier to manage than in the current age of technology. Information is now easily accessible, and there is a wealth of services that consumers can take advantage of to reach their fitness goals.
In particular, "health apps” have become increasingly popular. The question is, just how beneficial are these apps?
Many argue that health apps inspire people to adopt healthier lifestyles and stay committed to their health goals. They are extremely simple to access through smartphones and other devices that people use every day.
Iltifat Husain, editor of iMedicalApps.com, and assistant professor of emergency medicine at the Wake Forest School of Medicine, argues that the apps have great potential “to reduce morbidity and mortality.” He admits that there is not much research to support health app use, but that “doctors should not wait for scientific studies to prove benefits because these have already been shown.”
For example, Sylvia Warman, an office worker from London, believes that her health app has improved her life dramatically. She points out how much easier these apps make it to track her progress and adjust her lifestyle. She claims that her app has made her more conscious of her everyday choices. She is more active as a result, and has even improved her diet.
Too many choices
Despite these positive testimonials, there are some drawbacks to using these health apps. Because of the number of apps that have been produced, it is difficult to separate useful ones from those that are ineffective.
Des Spence, a general practitioner, argues that most health apps are “mostly harmless and likely useless,” but he cautions that there is another more serious danger associated with them -- they can play on the fears of “an unhealthily health obsessed generation.”
Spence points out that certain medical technologies, such as MRIs and blood tests, are already overused. He believes that all of this extra technology leads to over-diagnosis, which can “ignite extreme anxiety” and cause serious medical harm.
Whatever your opinion may be on the growth of these technologies, they will inevitably continue to progress. Luckily, the level to which they are utilized is still entirely up to the consumer.
Neiman Marcus' "faux fur" still riles Humane Society
The group faults the FTC for not making the retailer trim its claims | 04/15/2015 | ConsumerAffairs | By James R. Hood
What is it with Neiman Marcus? The upscale retailer seems to have an obsession with fake faux fur. No, that's not a typo -- the fur that's supposed to be fake isn't. Allegedly.
In 2013, Neiman Marcus settled Federal Trade Commission charges that it misrepresented some of its fur products. A few years before that, it paid $25,000 on a similar rap.
And now, the Humane Society of the United States is charging that Neiman is at it again, or still. It's petitioning the FTC to once again take action against the company.
"Following similar petitions in 2007, 2008 and 2011, which named dozens of nationally and internationally known retailers, The Humane Society of the United States hoped the FTC would realize the enormity of the problem and start being proactive in protecting consumers," the Humane Society said in a prepared statement. "However, as evidence collected from 2011 to 2014 shows, the situation is just as bad as it was in 2011."
"Many Americans are opposed to buying or wearing animal fur because they object to rabbits, foxes, coyotes and other animals suffering and dying for frivolous trimmings on jackets and shoes," spokeswoman Samantha Miller added. "American consumers deserve to have the facts, and should be able to make socially-conscious decisions while shopping."
The Humane Society says it's not just Neiman Marcus that needs to modify its behavior. It faults the FTC for allegedly not taking action until prodded to do so and says other retailers are playing the same game -- selling real fur instead of the more expensive faux fur that many consumers prefer.
"The FTC is tasked by Congress with protecting American consumers from deception and administering and enforcing the Fur Products Labeling Act. But even the most notorious offenders like Neiman Marcus continue with a business-as-usual approach, with the FTC taking minimal action after evidence being presented by our investigators year after year," Miller said. "Another notorious offender, DrJays.com, is the subject of a similar petition filed in July 2014. Now, almost a year later, the FTC has taken no public action."
In fact, Miller says, at least one of the items mentioned in the petition was still being promoted on the Neiman Marcus website as recently as yesterday -- the "Fizzy Faux-Fur Bootie." As of this writing, the item is shown as sold out but is still displayed on the site.
Fruit winning new respect for its medicinal value
A pear a day keeps diabetes away | 04/15/2015 | ConsumerAffairs | By Mark Huffman
Nutritionists have known for a long time that fruit plays a big part in a healthy diet, but recently certain fruit has been singled out for its specific medicinal effects.
In some cases, different fruits have been shown to provide some of the same benefits as prescription drugs.
One of the latest fruits to win new respect is the pear – in particular, the Bartlett and Starkrimson pear. A research team from North Dakota State University, Fargo and the University of Massachusetts has concluded the two varieties of pears could help better manage early stage diabetes and the high blood pressure that usually goes along with it.
The research showed that the peel of the Starkrimson pear had the highest total phenolic, or acidic content, and that peel extracts had significantly higher total phenolic content than pulp. These qualities were found in higher quantities in the Bartlett pear.
North Dakota State's Kalidas Shetty said the laboratory research suggests eating pears as a whole fruit – both peel and pulp – because it may provide better control of early stage diabetes.
“Such dietary strategy involving fruits, including pears, not only potentially could help better control blood glucose levels, but also reduce over dependence on drugs for prediabetes stages, or complement a reduced pharmacological dose of drugs with side effects to combat very early stages of type 2 diabetes,” the authors wrote in their report.
Effects on blood pressure
Not only did pears appear to help control diabetes, the researchers also found they might help control blood pressure by inhibiting angiotensin-I-converting enzyme (ACE), which is how the class of drugs known as ACE inhibitors works. These medications are often prescribed for people with high blood pressure because they make blood vessels more flexible.
The study showed that the watery extract of Bartlett pulp had low to moderate ACE inhibitory activity. It wouldn't replace an ACE inhibitor you are currently taking but it might supplement it.
Blueberries are another fruit that may be good for you in more ways than one. Not only are they rich in vitamins and minerals, as many fruits are, a 2011 study found they may help reduce cancer risks.
Researchers at the University of Alabama Birmingham (UAB) Comprehensive Cancer Center reported just a cup of blueberries each day can help prevent cell damage linked to cancer.
A 2013 study found both blueberries and strawberries are especially helpful in preventing heart disease. Harvard researchers said three or more servings of both fruits per week may help women reduce their risk of a heart attack by as much as one-third.
The flavonoids in strawberries and blueberries may help dilate arteries, counter the buildup of plaque and provide other cardiovascular benefits, according to the study.
Avocados and cranberries
More recent research has suggested avocados and cranberries can have medicinal-like effects. A study of 45 overweight or obese subjects who ate a moderate-fat diet including an avocado daily found avocado consumption had a more positive impact on cholesterol than a similar diet without the avocado or a lower-fat diet.
Research has also shown that cranberries can promote improved health when you work them into your diet. Cranberries have long been associated with benefiting urinary tract health but have also been shown to benefit heart health, cancer prevention, oral health and glycemic response.
Air Methods hit with stiff FAA fine
The company is accused of operating helicopters in violation of federal regs | 04/15/2015 | ConsumerAffairs | By James Limbach
The Federal Aviation Administration (FAA) is proposing a $1.54 million civil penalty against Air Methods Corp. of Englewood, Colo., for allegedly operating Eurocopter EC-130 helicopters on dozens of flights when they were not in compliance with Federal Aviation Regulations.
According to the agency, Air Methods operated two helicopters on 70 passenger-carrying flights for compensation or hire, over water and beyond power-off gliding distance from shore, when they lacked required helicopter flotation devices and flotation gear for each occupant.
The company operated another helicopter on 13 such flights when it lacked required flotation gear for each occupant, the FAA contends. All 83 flights by the emergency medical transport company occurred around Pensacola, Fla.
“Operators must follow every regulation and take every precaution to ensure the safety of all those on board,” said FAA Administrator Michael Huerta. “Flying without required safety equipment is indefensible.”
Air Methods has 30 days from the receipt of the FAA’s civil penalty letter to respond.
Mortgage applications post first decline in four weeks
Contract interest rates were mostly higher | 04/15/2015 | ConsumerAffairs | By James Limbach
After posting gains in each of the previous 3 weeks, applications for mortgages have turned downward.
According to the Mortgage Bankers Association’s (MBA) Weekly Mortgage Applications Survey, applications declined 2.3% in the week ending April 10.
While the Refinance Index fell 2% from the previous week, the refinance share of mortgage activity inched up to 58% of total applications from 57% the previous week. The adjustable-rate mortgage (ARM) share of activity dipped to 5.4% of total applications.
The FHA share of total applications was 13.5%, the VA share was 11.1% and the USDA share of total applications was unchanged from the previous week at 0.8%.
Contract interest rates
- The average contract interest rate for 30-year fixed-rate mortgages (FRMs) with conforming loan balances ($417,000 or less) edged up 1 basis point -- to 3.87% from 3.86%, with points increasing to 0.38 from 0.27 (including the origination fee) for 80% loan-to-value ratio (LTV) loans. The effective rate increased from last week.
- The average contract interest rate for 30-year FRMs with jumbo loan balances (greater than $417,000) rose from 3.81% to 3.84%, with points increasing to 0.35 from 0.26 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
- The average contract interest rate for 30-year FRMs backed by the FHA slipped 2 basis points to 3.67%, with points increasing to 0.23 from 0.18 (including the origination fee) for 80% LTV loans. The effective rate remained unchanged from last week.
- The average contract interest rate for 15-year FRMs moved to 3.16% from 3.15%, with points unchanged at 0.29 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
- The average contract interest rate for 5/1 ARMs jumped 6 basis points to 2.82%, with points falling to 0.40 from 0.45 (including the origination fee) for 80% LTV loans. The effective rate increased from last week.
The survey covers over 75% of all U.S. retail residential mortgage applications.
Buy4easy recalls motorcycle half-helmets
The helmets may dampen impacts insufficiently | 04/15/2015 | ConsumerAffairs | By James Limbach
Buy4easy is recalling 2,505 TMS HY-809 motorcycle half-helmets, size Large, manufactured March 20, 2012, to October 31, 2012.
The helmets may dampen impacts insufficiently and may be missing, or have incomplete, manufacturing dates and instructions to the purchaser.
The user may not be adequately protected in the event of a crash, increasing the risk of personal injury.
The remedy for this recall is still under development. The manufacturer has not yet provided a notification schedule.
Owners may contact Buy4easy customer service at 1-626-388-9898.
Mobility is key to surviving the next Google update
Web sites scramble to polish their mobile sites as the April 21 drop-dead date approaches | 04/14/2015 | ConsumerAffairs | By James R. Hood
How often does this happen to you -- you type a search query into your smartphone, click on the first link and find yourself at a site that looks like a schematic of an anthill? You know -- tiny letters, paragraphs that run off the page, photos the size of a deer tick.
It happens to everyone. A lot. And the reason is that way too many sites that rank highly on Google have for whatever reason not bothered to make their sites "mobile-friendly" -- a phrase that simply refers to having a separate format that automatically displays to users who are using a phone or small tablet.
It's hardly a secret that mobile devices are steadily replacing desktop and laptop computers, after all. You may not be aware of it but your browser communicates with every site you visit, passing along information about your operating system, browser and device, among many other things, so it's not as though the world's webmasters don't have access to the information.
Currently, it's reported that 29% -- nearly one-third -- of Google's search queries come from smartphones and tablets and the number is growing fast.
Why would a site not want to accommodate those visitors by presenting a layout that's easy to read and understand? Good question. While it's obviously a no-brainer for retail sites, the simple truth is that consumers aren't just using their phones and tablets when they're out and about, perhaps looking to duck in somewhere and buy something. They're using them at home, at work and at school as well. It's no longer unusual to watch television with one eye while nosing around on an iPhone with the other, so every kind of site needs to make itself mobile-friendly.
If you've muttered to yourself that someone should do something about this little annoyance, rest assured. Someone is and that someone is Google, a name that gets attention from web publishers everywhere.
The web is all aflutter today because come April 21, Google will be making a major modification to its search algorithm. This is something that happens every now and then and is greeted with the awe and trepidation usually reserved for the unveiling of a new Apple product.
Earlier Google algorithm changes have resulted in many previously successful sites being shoved off the edge of the earth. Companies large and small have literally gone out of business in some extreme cases when they were banished from the first few pages of search results.
Major changes over the past few years have been aimed at eradicating sites that trafficked in stolen content or played games with keywords, hoping to lure visitors who were looking for topic Y only to find a site that instead specialized in topic X. Or even XXX.
The change now pending could be even more far-reaching. It is intended to recognize -- and reward -- sites that are optimized for mobile users. In other words, if your site looks good on an iPhone or other mobile device, it will be more likely to rank highly in Google's index. If not, well ... you can always get a job driving for Uber.
Google takes heat for some of its ventures but no one can say it doesn't try to stay ahead of trends on the web. While those who lose out in the algorithm upgrades are understandably critical, there's general agreement among experts that Google does its best to deliver honest, useful results and that its algorithm adjustments are made with the consumer's best interests in mind.
So the results come April 21 should be mostly good for consumers, even though they're likely to take a big bite out of the traffic totals for many sites that have failed to look out for mobile users.
For smaller sites that use WordPress and other popular content management tools, it's not that hard to get into the mobile era, experts assure us. To test this theory, I went to one of the small community sites I manage and ran Google's mobile test and found my site did about as well as I did the summer I took intensive Russian. Flunked, in other words.
Ah, but salvation sometimes is easy for little guys. I loaded a small plug-in (free, open source) called WPtouch Mobile, activated it and ran the test again, with much better results. If you have a small site, you should do the same. If your site is built in what webmasters call "flat HTML," you may have to do a little more work but it's not all that difficult. Easy-to-use programs are available from companies like CoffeeCup. That's the good news.
233 big bad sites
The bad news is that for a large site, becoming mobile-friendly is no simple task. You could just as easily invent a new and improved version of the aardvark as totally rework a site that sprawls over thousands and thousands of pages and has all kinds of complex interactive elements.
Big sites that have retooled for mobile users have spent months and hundreds of thousands of dollars, if not more, trying to prepare for April 21, a date that is now circled in very thick red ink on web developers' calendars.
Not all big sites are going to make the deadline, however. Some are working feverishly but others appear to be asleep at the switch, a ConsumerAffairs survey found.
We looked at the top 1500 sites, as determined by Quantcast, and found that fully 233 did not pass. Among the flunk-outs were 8 sites in the top 100, including MSN.com.
Perhaps because they are not as plugged into audience statistics and generally don't sell advertising, .org sites seemed to be over-represented in the no-pass list, including PBS.org and ConsumerReports.org.
But other no-shows were a bit more puzzling. They included RollingStone.com, which has recently flunked a couple of other tests we could mention. (At the last minute, RollingStone completed an upgrade and now passes Google's test).
And then there are the .gov sites. Flunk-outs include irs.gov, weather.gov, nih.gov and senate.gov. It's perhaps not a surprise that many of them didn't make the grade. Given the speed at which government moves, it may very well be that efforts to upgrade mobile readability are just about to get started after a few more studies and may even begin to show results in a few more fiscal years, which would probably be considered -- as the old saying has it -- good enough for government work.
What about the states? Same story: va.gov and wa.gov lead the no-shows with many trailing behind.
What does this mean to Joe and Jill Consumer? Maybe not much in the abstract but in terms of the Google searches we all rely on for day-to-day tasks, it may very well mean that some familiar sites no longer pop up where we expect them. The next logical conclusion is that some sites we may not know about will get their chance to rise to the top and may turn out to be not only more user-friendly but much more useful all around.
After all, a site that pays attention to its technology to make sure it delivers the most useful possible product to its visitors probably pays attention to the other parts of its business as well.
Those who criticize Google for gobbling up so much of the known universe may want to pause and be thankful that, unlike other companies that grab a big share of the market, it at least keeps stirring the barrel, keeping things frothy and fresh rather than stable and stale.
Despite the discomfort it causes webmasters, it should make life easier for consumers.
Questions you should ask and not answer on job interviews
CareerBuilder survey uncovers surprising number of illegal queries | 04/14/2015 | ConsumerAffairs | By Mark Huffman
When it comes to applying for a job, far too many applicants walk into the interview expecting the employer to ask all the questions. But to make sure the job is the right fit, the applicant also needs to have a list of questions ready to ask.
Some job-seekers refrain from quizzing a potential employer for fear of appearing presumptuous. But asking good questions will only raise an applicant’s stature in the eyes of the interviewer. The key, of course, is asking good, smart questions.
Here are some questions human resources experts believe will help you gain insight into a potential employer and impress interviewers:
Can you provide more details about this position’s responsibilities?
You should have already read a job description, often produced from boilerplate. This question may uncover specific things about the job that aren’t in the job description or clear up something you aren’t sure of. It might uncover a specific need that isn’t currently being met.
In fact, uncovering that unscratched itch should be the aim of all your questions.
If I were to get the job, how could I most quickly become a contributor to your organization?
In other words, give me a blueprint for advancement. This question will uncover some of the interviewer’s biggest perceived needs.
It also shows that your focus is not on your own needs but on the needs of the organization. “Ask not what your company can do for you…”
What do you see as the most challenging tasks that go with this position?
This is a question that any savvy interviewer will appreciate. It takes time, effort and money to hire an employee. If it turns out not to be a good fit, everyone loses.
For the applicant, there might be something about this job he or she didn’t anticipate. Better to learn where the pitfalls are before a job is offered and accepted.
What are your expectations for this position and how would I be evaluated?
This might be the most important question an applicant can ask. It will help define the scope of the job up front and let the applicant know what he or she must do to meet and exceed expectations.
What shouldn’t be asked or answered
There are also plenty of questions an applicant should not ask during an interview, most having to do with financial issues and vacation time. And it goes without saying that you shouldn’t ask about things you should already know, like what the organization does, how long it’s been around, etc. That’s what Google is for.
In a typical interview applicants will answer more questions than they ask and anyone who has applied for a job has probably encountered them. But there are a whole host of questions an interviewer is not allowed to ask.
Incredibly, a recent CareerBuilder.com survey found that 20% of hiring managers have asked questions during a job interview that they later learned were illegal. For example, if you have some gray hair and an employer asks, “When do you plan to retire?” what they really want to know is how old you are. That’s out of bounds.
The survey uncovered these other questions that one-third of hiring managers didn’t realize were illegal:
- What is your religious affiliation?
- Are you pregnant?
- What is your political affiliation?
- What is your race, color or ethnicity?
- How old are you?
- Are you disabled?
- Are you married?
- Do you have children or plan to?
- Are you in debt?
- Do you drink or smoke?
“It’s important for both interviewer and interviewee to understand what employers do and don’t have a legal right to ask in a job interview – for both parties’ protection,” says Rosemary Haefner, chief human resources officer at CareerBuilder. “Though their intentions may be harmless, hiring managers could unknowingly be putting themselves at risk for legal action, as a job candidate could argue that certain questions were used to discriminate against him or her.”
Study casts doubt on athletes' use of salt pills
Researchers find the pills have negligible effects and could even be harmful | 04/14/2015 | ConsumerAffairs | By Christopher Maynard
Athletes are always looking for an edge when it comes to improving their performance. Various vitamins and supplements have been used for years, with some being more effective than others.
One idea that has gained popularity amongst endurance athletes is the consumption of salt pills before a performance. By taking them before strenuous physical activity, these competitors are attempting to replace the salt in their body that they lose through sweating.
Sweating is an important process for the human body because it helps control its internal temperature. This is a process called thermoregulation. Many have theorized that consuming these supplements would allow athletes to sweat more, which would optimize their thermoregulation. Increased thermoregulation directly correlates with better performance amongst athletes.
But scientists aren't so sure. A recent study conducted by Saint Louis University shows that salt pill consumption has a negligible effect on performance for endurance athletes.
Edward Weiss, who is a professor of nutrition and dietetics, had athletes participate in a double-blind study to test the effectiveness of salt supplements on thermoregulation. He divided the athletes into two groups and had them participate in strenuous physical activity. One group was given a salt supplement while the other was given a placebo.
The experiment measured sweat rate, dehydration, skin temperature, and other body functions associated with thermoregulation. After completing the tests, Weiss and his team found that the salt pills did not increase thermoregulation in the bodies of the athletes in any meaningful way.
In fact, Weiss cautioned that taking salt supplements could be detrimental to the overall health of athletes. It is already known that consuming too much salt can be detrimental to the human body, and these salt pills increase the body’s salt level by drastic amounts.
"While moderate sodium consumption is perfectly reasonable and should be encouraged, high sodium intake is associated with health concerns, like hypertension," Weiss said. "I recommend that athletes use caution with sodium supplementation, especially when daily intakes already exceeds the upper safe limit of 2300 mg/day for most Americans."
What’s up with declining hybrid sales?
Is it low gas prices or are consumers waiting for new technology? | 04/14/2015 | ConsumerAffairs | By Mark Huffman
Has America lost its appetite for saving fuel? Hybrid sales have declined in recent months, at a time when gasoline prices have fallen, so it is easy to draw a connection. But industry analysts say there may or may not be a link.
Honda recently announced that it is moving its Honda Accord Hybrid production from Ohio to Japan. Not long afterward Chevrolet said it would cut its Chevy Volt production back because of rising inventories.
According to Kelley Blue Book (KBB), Volt sales went from just over 7,600 in 2011 to 23,464 the following year. But since then, sales have fallen, to 18,805 last year and just 1,874 so far this year.
Michelle Krebs, senior analyst at Autotrader.com, says the moves by both Chevy and Honda indicate weakness in hybrid sales.
“Autotrader’s analysis of IHS/Polk registration data shows the hybrid/electric vehicle share of vehicle registrations peaked in May 2014, and that share has dropped every month since then,” Krebs said.
Low gasoline prices?
If you think relatively low gasoline prices are solely to blame for sluggish hybrid sales, Krebs says you’re wrong. She says the market began trending downward when gas prices were still increasing, and continued to decline with gas prices above or near the $3.50-a-gallon mark.
“In fact, the share declined for 4 consecutive months from May to September 2014 when gas prices were near historically high levels,” she said. “Further, that was against the backdrop of strong total vehicle sales and a flurry of new hybrid and EV introductions.”
GM is already firmly committed to EVs with its announcement earlier this year of the Chevy Bolt, an advanced concept of the plug-in electric car. GM said its Orion, Mich., assembly plant would gear up to make the car, which has a higher mileage range and lower price tag than its predecessor, the Volt.
Just part of a strategy
At the same time, Chevy is about to introduce the next generation Volt. In light of that, KBB analyst Akshay Anand called Chevy’s curtailing current model year Volt production a smart move since it will mean less inventory and fewer incentives on the older model.
“Hybrid and alternative fuel vehicle sales have been declining for some time now, with gas prices well below the summer prices of 2014,” Anand said. “Sales of the Volt are down nearly 50% for the first quarter this year, as consumers are already anticipating the new 2016 Volt, which has more aggressive styling, more premium interior, and seating for five.”
Eric Ibara, Anand’s colleague at KBB, says consumers shouldn’t see the softness in Volt sales as a reflection on the product. He says when Chevrolet announced the 2016 Volt would have upgrades that include expanded range, it makes sense for buyers to wait.
Other analysts are also not ready to say America is losing its appetite for saving fuel. In January the Detroit Free Press looked at declining hybrid sales and drew a distinction: hybrids that still use gasoline are falling out of favor. At the same time, sales of plug-in EVs rose 17%.
German mother of 13 expecting quads at age 65
Older moms are becoming more commonplace, but not quite to this extent | 04/14/2015 | ConsumerAffairs
Annegret Raunigk, a schoolteacher from Germany, just may have the older-mom market tied up. Ms. Raunigk is 65 years old and she is pregnant with quadruplets. She is already a mother of 13, which could give the Duggar family of TLC's 19 Kids and Counting some inspiration.
The Russian and English teacher's pregnancy follows several attempts at artificial insemination over the last year-and-a-half. According to German TV channel RTL her 9-year-old daughter wanted a sibling. Raunigk is not only a teacher and mother but a grandmother of 7 as well. Her eldest child is 44.
Mass-circulation newspaper Bild am Sonntag reported the four-baby pregnancy on its front page, quoting the prospective mother of 17 recalling the moment doctors broke the news. "After the doctor discovered there were four, I had to give it some thought to begin with," Bild quoted her as saying, adding, however, that she had not considered reducing the number of embryos to be an option.
Her gynecologist, Kai Hertwig, was quoted on the RTL website saying that quadruple pregnancies were always a strain but that everything was currently going well.
Although this is something of a phenomenon, she's not the first to have a go at being a mom at the age of 65. A few others claim this throne.
As long as everything goes well and she remains healthy, Ms. Raunigk will be the oldest woman to give birth to quadruplets, but not the oldest to give birth to a child -- that official record is held by Maria del Carmen Bousada Lara, who gave birth to twins in Spain in 2006, at the age of 66.
Why do older women want to give birth?
Here in America, part of the reason may center on the fact that women are marrying later in life, due to careers and economics. Also many women are having children on their own, perhaps not finding that right partner and opting to go it alone later in life.
American women are also bombarded by media messages that suggest that technology can extend the age at which a woman can be fertile with little difficulty.
The fact is that the risk of a miscarriage during the first trimester of pregnancy for women older than 40 is more than double the risk at age 35 or younger: roughly 50% versus 22%, respectively.
Not much is known about the extent to which women are seeking to use technology to have children at older ages in other nations. But the phenomenon is certainly present and growing.
Germany's RTL plans to follow Annegret Raunigk through her pregnancy up until the birth of her children this summer.
The clock is ticking for the tax-filing deadline
We have some last-minute filing tips to help relieve the pressure | 04/14/2015 | ConsumerAffairs | By James Limbach
With the tax-filing deadline hours away, there are some things you need to do -- fast. But in your haste, you don't want to do things that can cause you trouble.
To that end, the Internal Revenue Service (IRS) offers the following tips:
Filing electronically, whether through e-file or IRS Free File, vastly reduces tax return errors, as the tax software does the calculations, flags common errors and prompts taxpayers for missing information. And best of all, there is a free option for everyone. Whether filing electronically or on paper, be sure to make a copy of the return.
Check out tax benefits
Take a moment to see if you qualify for these and other often-overlooked credits and deductions:
- Benefits for low- and moderate-income workers and families, especially the Earned Income Tax Credit. The special EITC Assistant can help you see if you're eligible.
- Savers credit, claimed on Form 8880, for low- and moderate-income workers who contributed to a retirement plan, such as an IRA or 401(k).
- American Opportunity Tax Credit, claimed on Form 8863, and other education tax benefits for parents and college students. Because limits and special rules apply to each of these benefits, the agency’s Interactive Tax Assistant, available on IRS.gov, can be a very useful tool.
Health care tax reporting
While most taxpayers will simply need to check a box on their tax return to indicate they had health coverage for all of 2014, there are also new lines on Forms 1040, 1040A and 1040EZ related to the health care law. Visit IRS.gov for details on how the Affordable Care Act affects the 2014 return. This includes:
- Reporting health insurance coverage.
- Claiming an exemption from the coverage requirement.
- Making an individual shared responsibility payment.
- Claiming the premium tax credit.
- Reconciling advance payments of the premium tax credit.
The Interactive Tax Assistant tool can also help.
Make the right IRA contribution
Eligible taxpayers have until April 15 to contribute to either a Roth or traditional individual retirement arrangement (IRA) for 2014. A six percent excise tax applies if a taxpayer contributes more than the law allows. Publication 590-A describes the limits in detail and includes examples.
Gifts to Charity
A new law gives taxpayers the option of claiming on their 2014 return cash contributions made by April 15 to charities aiding the families of two slain New York police officers. Details are on IRS.gov.
If claiming a charitable contribution deduction, use the IRS Select Check tool to see if a charity is eligible to receive tax-deductible donations. For donations of $250 or more, taxpayers must obtain a written acknowledgment from the charity before filing a return.
IRS Publication 526 has further details on making gifts to charity, including records to keep. In addition, special reporting requirements generally apply to vehicle donations, and taxpayers wishing to claim these donations must attach any required documents to their return.
Most taxpayers claiming refunds now choose to receive them by direct deposit. A taxpayer can choose to deposit a refund in a single account at a bank or other financial institution or allocate it among as many as two or three accounts. See Form 8888 for details.
To avoid a refund delay or misrouting to a wrong account, make sure the financial institution routing and account numbers entered on the return are accurate. After filing, whether or not direct deposit was chosen, track the status of a refund with the Where's My Refund? tool on IRS.gov or IRS2Go.
Special instructions for paper filers
Math errors and other mistakes are common on paper returns, especially those prepared or filed in haste at the last minute. These tips may help those choosing this option:
- Fill in all requested Taxpayer Identification Numbers, usually Social Security Numbers, such as those for any dependents claimed. Check only one filing status and the appropriate exemption boxes.
- When using the tax tables, be sure to use the correct row and column for the filing status claimed and taxable income amount shown.
- Sign and date the return. If filing a joint return, both spouses must sign.
- Attach all required forms and schedules, such as Schedule A for people who itemize their deductions. In addition, attach to the front of the return all Forms W-2 and other forms reflecting withholding.
- Mail the return to the right address. Check Where to File on IRS.gov or the last page of the tax instructions. If mailing on Wednesday, April 15, be sure to do so early enough to meet the scheduled pick-up time and ensure a postmark before the midnight deadline.
Need more time to file?
Avoid a late-filing penalty by requesting a tax-filing extension. There are several ways to do so, including through the Free File link on IRS.gov, or by designating a payment as an extension payment and making it via one of the IRS e-payment methods, including the newest, IRS Direct Pay. Alternatively, taxpayers can file Form 4868. While an extension grants additional time to file, tax payments are still due April 15.
If you owe tax, use IRS Direct Pay or any of several other e-payment options. They are secure and easy, and you receive immediate confirmation of your payment. Or send a check or money order payable to the “United States Treasury,” along with a Form 1040-V payment voucher. Taxpayers who can’t pay by April 15 often qualify to set up a monthly payment agreement with the IRS using the Online Payment Agreement option on IRS.gov.
Keep the green in your pocket when you garden
You can spend a fortune on gardening tools but it's not necessary | 04/14/2015 | ConsumerAffairs
The idea behind growing your own vegetables is to be able to keep a little money in your pocket and be able to live off the land like our hearty ancestors did.
But if you have been to any of the big DIY stores, you know you can end up spending a fortune on gardening equipment, which makes the prospect of fending for yourself and living off the land a little less appealing. It doesn't have to be that way, though, if you know how to cut corners.
Location is everything. The sun is your number one concern, so in the months before you plant, or at least in the days leading up to it, case your yard to find where the sun is optimal for growing and at what time of day it will shine on the fruits of your labor. If your garden fails and your plants die or dry up, it will take more time and money to relocate to a new site in your yard. Scout out trees that might be bare now but will create shade later.
Plant a seed. Start with seeds, because buying plants is more expensive. If you want, start them inside before you plant the full garden outside. Research what you plan to start indoors, because some plants aren't as adaptable and won't transplant well from home to garden.
Thin is in. When you grow from seedlings, especially inside, you'll reach the step where you “thin” them. This involves removing the weaker plants so the strongest survive and continue to grow before you introduce them to your lush new garden outside.
Recycle your seeds. Starting from seeds is always cheaper, but if you have a large garden, that can be a lot of seeds to buy and sow. One solution is to opt for heirloom seeds: you can save seed from the plants you grow and reuse it the next season.
Take a stake in your garden but don't buy one. You will need a stake eventually in your garden but they can start adding up if you are planting rows and rows of a certain crop. At a hardware or garden store, you will probably find stakes to be priced around $3, and tomato cages can cost you even more. So look for something around your house that can be used as a stake. Branches or old fence posts can work fine.
Raised beds look great. But they are expensive no matter how you put them together. Stores have kits that end up costing over $100 just for 30 square feet or so. Making them yourself can cost as much or even more. So stay grounded -- it will be cheaper.
Grow something you want to eat that would cost more at the store. Get plants that have high yields.
Buy at garage sales. Keep an eye out for pots (or other plant containers) and tools; you'll be amazed at the markdowns you find on these two items compared with their retail prices.
Producer prices on the rise
It's the first advance since last October | 04/14/2015 | ConsumerAffairs | By James Limbach
After posting declines in four consecutive months, the Producer Price Index (PPI) moved higher in March.
According to the Bureau of Labor Statistics, the PPI was up a seasonally adjusted 0.2% last month after falling 0.5% in February and 0.8% in January. Over the last 12 months, the PPI is down 0.8%.
Goods and services
Prices for goods were up 0.3% following 8 consecutive decreases, led by a 1.5% surge in energy costs, due primarily to gasoline, which jumped 7.2%. Food prices, meanwhile, fell 0.8%, thanks to a plunge of 5.1% in pork prices. The “core rate” -- less foods and energy – was up 0.2% in March.
The cost of services inched up 0.1% in March following a decline of 0.5% in February. Services less trade, transportation, and warehousing rose 0.3%, while transportation and warehousing services and trade services both declined 0.2% in March.
Details are available on the Labor Department website.
Retail sales post first advance in 4 months
A rebound in motor vehicle demand gets a lot of the credit | 04/14/2015 | ConsumerAffairs | By James Limbach
Retail sales shot higher in March after tumbling an upwardly revised 0.5% a month earlier, for the first gain in 4 months.
Figures released by the Census Bureau show sales totaled $441.4 billion -- an increase of 0.9% from February and up 1.3% from a year earlier.
Sales at motor vehicle and parts dealers were up 2.7% last month following February's 2.1% decline. Other sectors showing strong advances were building material and garden equipment and supplies dealers (+2.1%), miscellaneous store retailers (+1.7%) and furniture & home furnishing stores (+1.4%).
Sales declines were posted by gas stations and grocery stores (-0.6%), and electronics and appliance stores (-0.5%).
The complete report is available on the Commerce Department website.
Completed foreclosures down sharply
Mortgage delinquencies posted a sharp decline as well | 04/14/2015 | ConsumerAffairs | By James Limbach
The nation's foreclosure inventory posted a year-over-year decline of 27.3% in February, with completed foreclosures down 15.7%.
According to data from property information, analytics and data-enabled services provider CoreLogic, there were 39,000 completed foreclosures nationwide in February compared with 46,000 a year earlier, representing a decrease of 67% from the peak of completed foreclosures in September 2010.
Completed foreclosures are an indication of the total number of homes actually lost to foreclosure. Since the financial meltdown began in September 2008, there have been approximately 5.6 million completed foreclosures across the country. Since home-ownership rates peaked in the second quarter of 2004, there have been approximately 7.7 million homes lost to foreclosure.
“The number of homes in foreclosure proceedings fell by 27% from a year ago and stands at about one-third of what it was at the trough of the housing cycle,” said Frank Nothaft, chief economist at CoreLogic.
CoreLogic also reports the number of mortgages in serious delinquency fell 19.3% from February 2014 to February 2015, with 1.5 million mortgages -- or 4% -- in serious delinquency (defined as 90 days or more past due, including those loans in foreclosure or Real Estate Owned).
This is the lowest delinquency rate since June 2008. On a month-over-month basis, the number of seriously delinquent mortgages dipped 1.1%.
“While the drop in the share of mortgages in foreclosure to 1.4% is a welcome sign of continued recovery in the housing market,” Nothaft added, “the share remains more than double the 0.6% average foreclosure rate that we saw during 2000-2004.”
As of this past February, the national foreclosure inventory included approximately 553,000 homes compared with 761,000 homes in February 2014. The foreclosure inventory as of February 2015 represented 1.4% of all homes with a mortgage, versus 1.9% in February 2014.
“The foreclosure inventory dropped year-over-year in all but 2 states,” said Anand Nallathambi, president and CEO of CoreLogic. “The foreclosure rates in judicial foreclosure states are beginning to pick up and remain higher than in non-judicial states. What’s encouraging is that fewer Americans are seriously delinquent in paying their mortgages which in turn is reducing the foreclosure inventory across the country as a whole.”
- On a month-over-month basis, completed foreclosures were down 11.6% from the 44,000 reported in January 2015. As a basis of comparison, before the decline in the housing market in 2007 completed foreclosures averaged 21,000 per month nationwide between 2000 and 2006.
- The 5 states with the highest number of completed foreclosures for the 12 months ending in February 2015 were: Florida (110,000), Michigan (50,000), Texas (34,000), California (30,000) and Georgia (28,000). These 5 accounted for almost half of all completed foreclosures nationally.
- Four states and the District of Columbia had the lowest number of completed foreclosures for the 12 months ending in February 2015: South Dakota (15), the District of Columbia (83), North Dakota (334), West Virginia (506) and Wyoming (526).
- On a month-over-month basis, the foreclosure inventory was down by 1.4% from January 2015. The February 2015 foreclosure rate of 1.4% is back to March 2008 levels.
- Four states and the District of Columbia had the highest foreclosure inventory as a percentage of all mortgaged homes: New Jersey (5.3%), New York (4.0%), Florida (3.4%), Hawaii (2.8%) and the District of Columbia (2.6%).
- The 5 states with the lowest foreclosure inventory as a percentage of all mortgaged homes were: Alaska (0.3%), Nebraska (0.4%), North Dakota (0.5%), Montana (0.5%) and Minnesota (0.5%).
Leader Slaughterhouse recalls veal carcasses
The product did not undergo federal inspection (ConsumerAffairs, 04/14/2015, by James Limbach)
Leader Slaughterhouse of Imler, Pa., is recalling approximately 1,800 pounds of veal carcasses.
The product did not undergo federal inspection, and does not bear the USDA mark of inspection.
There are no reports of illness due to consumption of this product.
The following product, produced on April 10, 2015, is being recalled:
- 6 individual 300 pound veal carcasses, cut into quarters.
The product was picked up at the establishment’s Pennsylvania location and taken by customers to Pennsylvania and New York.
Consumers with questions about the recall may contact David Hill at 814-239-0182.
Researchers: Eat the right food and lose weight
There's a new focus on the role glycemic load plays in weight gain (ConsumerAffairs, 04/13/2015, by Mark Huffman)
When consumers embrace a particular weight loss program, they may achieve results. But in other instances, try as they might, the pounds can be very slow to come off, if they come off at all.
In the latter case, it might not be a matter of how much a dieter is eating, but what the dieter is eating.
Changing those old eating habits – adding certain foods to the diet and avoiding others – can make it easier to win the battle of the bulge. At least that’s the conclusion of researchers at Tufts University.
At Tufts, researchers at the Friedman School of Nutrition Science & Policy analyzed 3 previous studies that were based on more than 16 years of follow-up among 120,000 adults. That led them to focus on the glycemic content, or load (GL), of particular foods.
The GL is determined by multiplying a food's glycemic index, a measure of a food's ability to raise blood glucose, by its carbohydrate content. Foods with a high GL were associated with easier weight gain and harder weight loss.
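As a rough illustration of that calculation, here is a short sketch; the foods and numbers in it are placeholders rather than measured values, and it includes the conventional division by 100.

```python
# Rough sketch of the glycemic load (GL) calculation described above.
# The foods, glycemic index values and carbohydrate figures are placeholders,
# not measured values.

def glycemic_load(glycemic_index, carbs_grams):
    # Conventionally the product is divided by 100 to give a per-serving score.
    return glycemic_index * carbs_grams / 100

servings = {
    "white bread (2 slices)": (75, 30),   # (glycemic index, grams of carbohydrate)
    "plain yogurt (1 cup)": (35, 17),
}

for food, (gi, carbs) in servings.items():
    print(f"{food}: GL of about {glycemic_load(gi, carbs):.1f}")
```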
Refined grains, starches and sugars
Foods with a high GL include refined grains, starches and sugars. Researchers say these high-GL foods can boost blood glucose and lead to chronic diseases like type 2 diabetes. Until now, they say, the link to weight gain had not been firmly established.
“There is mounting scientific evidence that diets including less low-quality carbohydrates, such as white breads, potatoes, and sweets, and higher in protein-rich foods may be more efficient for weight loss,” said Jessica Smith, one of the authors. “We wanted to know how that might apply to preventing weight gain in the first place.”
If you are trying, without result, to lose weight you may be interested in the food Smith and her colleagues say you should eat and what you should avoid – or at least keep consumption to a minimum.
Less red meat, more yogurt
The study concluded that increasing the amount of red meat and processed meat are the food items most strongly associated with weight gain.
Conversely, increasing consumption of yogurt, seafood, skinless chicken and nuts are most strongly associated with weight loss. In fact, the more people ate these foods, the more weight they lost.
Interestingly, the researchers found that eating dairy products in general didn’t seem to have much effect one way or the other.
“The fat content of dairy products did not seem to be important for weight gain,” Smith said. “In fact, when people consumed more low-fat dairy products, they actually increased their consumption of carbs, which may promote weight gain. This suggests that people compensate, over years, for the lower calories in low-fat dairy by increasing their carb intake.”
What else is on your plate?
The combination of foods you consume also appears to be important. For example, avoiding foods with a high GL seemed to make fish, nuts and other food associated with weight loss even more effective.
Weight-neutral foods like eggs and cheese appear to contribute to weight gain when combined with high GL food but are associated with weight loss when eaten with low GL food.
The chief take-away from the study seems to be this: not all calories are created equal.
“Our study adds to growing new research that counting calories is not the most effective strategy for long-term weight management and prevention,” said Dariush Mozaffarian, the study’s senior author. “Some foods help prevent weight gain, others make it worse. Most interestingly, the combination of foods seems to make a big difference.”
The Tufts researchers advise those trying to shed a few pounds to not only emphasize specific protein-rich foods like fish, nuts, and yogurt to prevent weight gain, but also focus on avoiding refined grains, starches, and sugars in order to maximize the benefits of these healthful protein-rich foods.
To further help consumers identify foods to eat and avoid, the Harvard Medical School recently published this list of 100 foods and their GL.
Ransomware hackers extort money from more police departments
If you don't have backup copies of your files and games, make those backups today (ConsumerAffairs, 04/13/2015)
Are ransomware attacks happening more frequently, or are more victims stepping forward and filing reports with the police? Actually, these days the police are just as likely to be the victims themselves.
Just last week, police in Tewksbury, Massachusetts, admitted that they'd had to pay an untraceable $500 Bitcoin ransom to the hackers who'd encrypted the Tewksbury PD's computer files. The chief of police admitted that the attack “basically rendered us in-operational, with respect to the software we use to run the Police Department.” Tewksbury PD did not keep backup copies of its crucial files.
Over the weekend came news of more law-enforcement agencies that had made a similar mistake: four small towns and a county in Maine all used a single computer network to share files and records, with no backup.
WCSH-TV reported that Sheriff Todd Brackett of Lincoln County admitted somebody on the network had accidentally downloaded a “Megacode” virus (which a questioner on a BleepingComputer discussion forum described as being “Like Cryptolocker, but not as well done”).
The virus encrypted the computer files of four town police departments and the county sheriff's office, which stayed locked until a $300 ransom was paid for the decryption key. Brackett said that the FBI could trace the money as far as a bank account in Switzerland, but could not trace it beyond that.
Megacode, Cryptolocker and other forms of ransomware work by literally holding files for ransom, specifically by encrypting them and demanding payment in exchange for the encryption key. Some of the less sophisticated forms of ransomware can be removed or decrypted with the right tools, but more often, the ransomware can't be removed or broken without the decryption key from the ransomer.
Ransomware is simply another form of malware and thus is spread just like any other kind. In Durham, New Hampshire, last June, the police department's computer network fell victim to ransomware after an employee clicked on what they described as a legitimate-looking email. Fortunately, the Durham PD did have backup copies of its computer files, so instead of paying the ransom, they wiped their computers clean and then restored everything with their backup files.
Anyone with any type of network connection is vulnerable to ransomware if they're not careful. Just last month, security researchers discovered a then-new version called TeslaCrypt which targeted people on multiplayer game platforms such as Minecraft, Call of Duty, World of Warcraft and other popular titles.
TeslaCrypt not only encrypted the victims' game files, but could also spread to Word documents, Excel files, PowerPoint presentations and similar files. The hackers behind the malware demanded $1,000 from their victims.
If you don't already have backup copies of all your important files – not just on your home computer, but also your tablet, smartphone and anything else holding files you don't want to lose – you should make copies right away, and keep them on a dedicated thumb drive or flash drive, or burn copies onto a disc.
In addition to these physical media storage options, you also have the option of hiring a backup service — though that brings the usual risks that comes with entrusting your data to someone other than yourself.
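For readers who want to automate the physical-copy option above, a minimal scripted sketch follows; the folder and drive paths are hypothetical, so point them at your own files and your own backup drive.

```python
# Minimal backup sketch: copy a folder of important files to an external drive.
# The source folder and backup drive paths are hypothetical examples.
import shutil
from datetime import date
from pathlib import Path

source = Path.home() / "Documents"                   # what you want to keep safe
backup_drive = Path("/Volumes/BACKUP_DRIVE")         # e.g. a thumb drive or external disk
destination = backup_drive / f"documents-{date.today().isoformat()}"

# copytree copies the whole folder tree; dirs_exist_ok lets you rerun it the same day.
shutil.copytree(source, destination, dirs_exist_ok=True)
print(f"Copied {source} to {destination}")
```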
Researchers find evidence the virus can be passed between humans and canines (ConsumerAffairs, 04/13/2015)
Norovirus: it's the one thing that can sink a cruise ship vacation. Noroviruses are a group of viruses that cause inflammation of the stomach and large ...
White Lodging hotel management finally admits to months-old customer data breach
Food and beverage payment systems at select Marriott, Sheraton, Renaissance and Courtyard locations compromised (ConsumerAffairs, 04/13/2015)
Last February, financial industry insiders first reported evidence that hackers might have breached security and stolen customer payment card information from various hotels run by the White Lodging Services Corporation (which operates franchises under various brand names including Marriott, Sheraton, Renaissance and Courtyard).
Specifically, the hackers managed to plant malware on the point-of-sale systems used in the bars and restaurants attached to certain White Lodging-owned hotels, and stole the payment card information of anybody who ate in a hotel restaurant or drank in a hotel bar.
Security expert Brian Krebs first reported the suspected White Lodging breach in February. However, not until late last week did White Lodging confirm the breach and specify exactly which customers are at risk.
In a press release dated April 8, White Lodging admitted that from July 3, 2014 through February 6 of this year, hackers had compromised the point of sale systems for the food and beverage outlets connected to hotels in 10 different locations:
- Indianapolis Marriott Downtown, Indianapolis, IN
- Chicago Marriott Midway Airport, Chicago, IL
- Auburn Hills Marriott Pontiac at Centerpoint, Pontiac, MI
- Austin Marriott South Airport, Austin, TX
- Boulder Marriott, Boulder, CO
- Denver Marriott South at Park Meadows, Denver, CO
- Louisville Marriott Downtown, Louisville, KY
- Renaissance Boulder Flatiron, Broomfield, CO
- Courtyard Austin Downtown, Austin, TX
- Sheraton Hotel Erie Bayfront, Erie, PA
However, guests who stayed at those hotels without using their cards to pay for food or beverage services are not at risk.
The stolen data is believed to include customers' names, credit or debit card numbers, security codes and card expiration dates.
As usual in such circumstances, the company is offering a year of free credit protection services, this time through Experian:
For more information about how to enroll for this service please send an email to [email protected]. You will then receive enrollment instructions. Alternatively, you can enroll by calling 1-866-926-9803. If you call this number you will be presented with a recorded message and various options. Press 1 to access the enrollment information. If you are a non-U.S. resident the available services will vary. If you decide to enroll in the service, you will be required to provide your Social Security number for identification purposes.
Watch for scams
White Lodging's press release also warned customers to watch out for scam artists who might send them scammy emails or text messages falsely claiming to be from White Lodging. This also happens anytime there's a widely publicized hacking: as soon as the media reports that Company X has been hacked, scam artists immediately start using Company X's name in their bait-emails.
If you receive any email or other communication allegedly from White Lodging and warning you that you personally were compromised in the attack, that message is guaranteed to be a fake. White Lodging is not informing individuals, because they can't. The company's online FAQ page about the hacking includes this question-and-answer combo:
Q: Why wasn’t I notified directly about this incident?
A: Because this incident affected the point of sale systems at select food and beverage outlets, we do not have contact information associated with the affected credit/debit cards. Therefore, we could not notify you directly by email, postal mail or telephone.
Muscle-building supplements tied to testicular cancer
The men might be strong but so is the association between the supplements and cancer (ConsumerAffairs, 04/13/2015, by Truman Lewis)
Gym rats who gulp down muscle-building supplements in hopes of getting really, really strong need to know about something else that's strong -- the lin...
Abstract and Keywords
This chapter covers examples of naming practices for aircraft types as well as for individual airframes, focusing on heavier-than-air aircraft, in other words machines intended to move through the air by generating aerodynamic or powered lift. The history of approaches to naming British military aircraft types is examined in particular detail, revealing efforts to name aircraft with more than just alphanumeric designations, while also exploring former umbrella nomenclature systems involving many manufacturers. US military aircraft Mission Design Series designation systems are explained briefly, as are systems of reporting names used during World War II and the Cold War. Civil aircraft naming practices are then illustrated with the example of the Boeing Company’s 700-series of airliners, before examining the intricacies of aircraft naming in international development projects. Finally, examples are given of names and nicknames for individual machines.
Humankind has developed a great variety of machines in its quest to venture skywards. The rapid development of controlled flight since the beginning of the twentieth century has seen technology progress from gliders to propeller-driven and later jet-powered fixed-wing aircraft, not to mention airships, rotary-wing aircraft, rockets, and hovercraft. This proliferation of different machines has, as a matter of course, led to the emergence of vast domains of specialist terminology, including the names of aircraft themselves.1
42.2 British Military Aircraft
The first official system for naming heavier-than-air military aircraft in British service was developed in 1911 by what was then called the Army Balloon Factory, later known as the Royal Aircraft Factory. The initial system described three main types of aircraft: the Blériot Type (with a propeller mounted forward of the engine), the Farman Type (with a propeller mounted aft of the engine), and the Santos-Dumont Type (with a smaller horizontal surface mounted forward of the main wing, in a so-called canard configuration) (Wansbrough-White 1995: 20). Aircraft produced by the Factory would be named with initials indicating their type followed by a number, for instance the Royal Aircraft Factory S.E.1,2 with the abbreviation standing for ‘Santos Experimental 1’. Further abbreviations were later added, and some of the original abbreviations acquired new meanings, with the name of the Royal Aircraft Factory S.E.5 referring to its role as ‘Scout Experimental’, although this type went well beyond the experimental stage, with over 5,000 built. These alphanumeric designations were used in common parlance by aircrews and groundcrews, as demonstrated in service song lyrics of World War I such as ‘The B.E.2C is my bus’ or ‘It was an old F.E.2B’ (cited in Ward-Jackson 1945: 10, 29), although the designations themselves were frequently shortened, as in ‘Keep the 2Cs turning’ (Ward-Jackson 1945: 12).
Although the Royal Aircraft Factory was an official establishment, it was only one of a number of design bureaux and manufacturers active at that time, most of which established their own naming practices. The company founded by Thomas Sopwith became known for the zoological names of its aircraft, such as the Sopwith Dolphin, the Sopwith Salamander, and the Sopwith Camel, the latter name stemming either from the hump on the fuselage forward of the cockpit or from the visual effect produced by the relative angles of the upper and lower wings (Wansbrough-White 1995: 97). The Camel’s name started as a nickname, and indeed aircraft are often better known by their unofficial names, which are bestowed on most aircraft by their aircrew, ground crew, or passengers. One exception seems to have been the aforementioned S.E.5, with the nickname Sepha apparently only used within the Royal Aircraft Factory itself, a most unusual situation for such a prolific aircraft (Wansbrough-White 1995: 81).
When a unified official naming system for service aircraft was introduced by the Ministry of Munitions in February 1918, it put forward classes of standardized ‘nicknames’ instead of designation numbers. The ‘class’ of name identified the purpose of the aircraft, so fighter aircraft would be given names of animals, plants, or minerals; bomber aircraft would be given geographical names; and heavy armoured machines would be given personal names from mythology. Subclasses of names would indicate the size of aircraft or whether it was land-based or sea-based. For example, three-seater sea-based fighters would be named after shellfish, single-seater land-based bombers would be named after inland Italian towns, and a hypothetical heavy armoured sea-based machine weighing between 10 and 20 tons would be named after a mythological Northern European female. Furthermore, the initial letters of the name would denote the manufacturer, so the ‘SN’ in Snipe identified it as a Sopwith-designed aircraft. Names to be chosen had to be both ‘suitable’ and ‘novel’. This system presented a number of problems in its detail, and it was modified one month later, removing categories such as flowers and rocks, as well as eliminating, for example, the need to distinguish between different types of fish depending on the size of aircraft when naming single-engined seaplanes or flying boats (Wansbrough-White 1995: 23–5).
A significant development in air power came with the establishment of the world’s first independent air arm on 1 April 1918, when the Royal Air Force (RAF) was formed by the amalgamation of the army’s Royal Flying Corps and the Royal Naval Air Service. With the birth of the new service, the Ministry of Munitions introduced another aircraft nomenclature system in July 1918 in the form of its Technical Department Instruction 538. This system provided the basic format for all future British military aircraft naming, decreeing that aircraft names would consist of two main elements, the first being a name chosen by the aircraft’s design firm ‘to indicate the origin of the design’ and the second being a ‘nickname’ (cited in Wansbrough-White 1995: 26). By making the constructor’s name itself an integral part of aircraft names, this system responded to criticism from the Society of British Aircraft Constructors that the origins of a design were not immediately apparent in names deriving from the February and March 1918 systems. The new system also updated the categories of nicknames to be given to aircraft, which included zoological names, geographical names, personal names from mythology, and attributes, all divided according to the size of aircraft and whether they were land-based or sea-based. Certain categories of name were explicitly excluded by this scheme owing to their use for naming aero-engines, including birds of prey, used at that time for engines designed by Rolls-Royce.
The naming categories based on zoology, geography, mythology, and attributes were discontinued in 1927, when the Air Ministry began naming aircraft with initial letters referring to roles (e.g. with ‘C’ allocated to troop carriers such as the Handley Page Clive). This mnemonic scheme represented a compromise between the British approach to naming aircraft and the American system of alphanumeric type designations, but it soon proved impractical.
New type-based categories were introduced in 1932 and slightly updated in 1939, in an effort to improve relations between officialdom and industry and to produce more appropriate names. For example, fighters were to be named with ‘general words indicating speed, activity or aggressiveness’, while trainers would be named after ‘words indicating tuition and places of education’ (cited in Wansbrough-White 1995: 135). This led to fighter names such as the Gloster Gladiator, Hawker Tempest, and Supermarine Spitfire, and trainers such as the de Havilland Dominie, Miles Magister, and Airspeed Oxford. Most bombers were to be named after inland towns in the British Empire or places with British historical associations, hence the Avro Lancaster, Handley Page Halifax, Short Stirling, and Fairey Battle, named after the town in East Sussex. Some names straddled several categories. The Bristol Beaufort torpedo bomber was, for instance, possibly named after the Duke of Beaufort, but it may also have been named after the Beaufort Sea, thus satisfying the 1932 requirements for torpedo bombers to be named after oceans, seas, or estuaries.
During the Cold War, the same system largely continued but with a new category introduced in 1949 for helicopters. These were to be named after trees, but the Bristol Sycamore seems to be the only one named in such a fashion. Although there had been divergences from the official system before, the profile of exceptions grew from the 1950s onwards, notably with the so-called V-bombers. The established pattern for naming bombers was after inland towns in what had by then become the Commonwealth, and this continued in the immediate post-war period with the English Electric Canberra. Some felt that the new generation of strategic nuclear bombers called for more dynamic-sounding names, and the first of the three aircraft in this class was named the Vickers Valiant.3 It was eventually decided to name the three bombers as a family, so the other two became the Avro Vulcan and the Handley Page Victor, described from October 1952 as a ‘V’ class (Wynn 1994: 56). Apart from the alliterative attraction of the name of the Valiant, the letter ‘V’ was perhaps reminiscent of the ‘V for Victory’ slogan of the previous decade, while also evoking the swept wings of all three bombers, especially the delta wing of the Vulcan.
The apparent departure in more recent decades from the earlier nomenclature systems might be explained by the fact that newer types of aircraft tend to be introduced less frequently, have longer development periods, and remain in service for longer. Furthermore, while the marketing role of aircraft names has been recognized since before World War I, it is now a paramount concern for manufacturers. Many aircraft in recent British service have been international ventures or imported aircraft, some of which, such as the Lockheed Hercules, come with well-established names.
42.3 US Military Aircraft
In the United States, the alphanumeric designations of military aircraft types are frequently used alone. These codes are known as Mission Design Series designators and include information on an aircraft’s basic mission by use of a letter code, so the [Northrop Grumman] B-2 is a bomber, while the [Boeing] P-8 is a maritime patrol aircraft. Most US military aircraft also have ‘popular names’, for example the B-2 Spirit and the P-8 Poseidon. The Mission Design Series codes are the official designations, but the Pentagon has an approval process for popular names, with current guidelines stating that a suitable name is short and ‘characterizes the mission and operational capabilities of the vehicle’ (US Air Force 2005: 6). Not all aircraft have officially recognized popular names, for instance the Lockheed SR-71, a retired reconnaissance aircraft which only had unofficial nicknames, such as the Blackbird.
While in development, the General Dynamics F-16 had been unofficially known for some time as the Falcon, which led to the official selection of the popular name Fighting Falcon. The addition of the word ‘Fighting’ was necessitated by the existence of Falcon as a copyrighted name for a range of aircraft produced by the French company Dassault-Breguet (Flight International 1980). Indeed, the current Pentagon approval process includes a trademark search by the Air Force Legal Services Agency (Judge Advocate General Patent Division) (US Air Force 2005: 4).
42.4 Reporting Names
During World War II, Allied forces in the Pacific theatre developed codenames in order to facilitate communications when reporting on Japanese aircraft, the official names of which might either follow naming patterns based on the Japanese Imperial calendar or might be unknown to the Allies. The codenames used were short, easily remembered words, including tree names for trainers (e.g. Oak or Willow), female first names beginning with ‘T’ for transports (Tabby) and male first names for fighters (Clint or Frank). Among these names were a number of in-jokes planted by intelligence staff (Horton 1994: 153).
In 1954, the Air Standards Coordinating Committee (a joint initiative of Australia, Canada, New Zealand, the UK, and the USA) revived the use of reporting names to refer to Soviet, Chinese, and, later, Russian equipment. These codenames are widely used by NATO members and their allies. The initial letter indicates the type of aircraft: ‘B’ for bombers (e.g. Badger, Blowlamp, or Bull), ‘C’ for transports (Camber, Coaler, or Coot), ‘F’ for fighters (Fishbed, Foxbat, or Fritz), ‘H’ for helicopters (Helix, Hind, or Hippo) and ‘M’ for miscellaneous (Mainstay, Midas, or Mote). The names chosen are all recognizably English, but there is a considerable mixture of common and less common words. Many of the names have a vaguely insulting or absurd tone (Careless, Flatpack, Hoodlum, Mug), while some are perhaps more complimentary and are even adopted by the aircraft’s users themselves. For example, the Mikoyan MiG-29 ‘Fulcrum’ was indeed a key part of the Warsaw Pact’s air defence, and the Tupolev Tu-95 ‘Bear’ is still seen as a symbol of Russian power when on long-range patrols.
42.5 Civil Aircraft
There is some overlap between civil and military naming when an aircraft is used in both domains, but the choice of name for civil aircraft is usually the prerogative of the manufacturer alone. As many civil aircraft perform broadly similar transportation functions, and certain manufacturers specialize in particular sizes or configurations, civil aircraft are often popularly identified by the brand name of their manufacturer alone, for example ‘an Airbus’, ‘a Boeing’, or ‘a Cessna’.
The Boeing Company’s successful series of commercial airliners are well known by their numerical codes. The company allocated its ‘700-series’ of model numbers to its jet transport ventures, but it was not convinced that ‘Model 700’ sounded ambitious enough for its first jet airliner, so it resolved to name it the Boeing 707 instead (Lombardi 2004). The Boeing 707 did originally have a name as well as a number, the Jet Stratoliner, but it was the model number that caught on (Horton 1994: 73). A pattern was thereby established, and there followed the Boeing 727, 737, 747 (most widely known by its Jumbo Jet nickname), 757, 767, and 777.4 For the company’s latest addition to the series, it took the rare step of adding an official name to the model number, to be chosen by a global public vote from the shortlist of Dreamliner, eLiner, Global Cruiser, and Stratoclimber. The eventual name selected was the Boeing 787 Dreamliner, although Global Cruiser won the most votes within the USA (Tinseth 2011).
In spite of the earlier British penchant for naming aircraft, some British-produced airliners have only alphanumeric model numbers, such as the Vickers VC10, with the initials standing for Vickers Commercial. The lack of any further name may have been an attempt to choose a more neutral designation better suited to international exports than nationalistic names such as the Bristol Britannia, but Sir George Edwards, the then chief designer at Vickers, also claimed the company had simply grown ‘tired’ of choosing names (Wansbrough-White 1995: 82).5
42.6 International Projects
International aircraft projects present interesting problems in terms of naming. The meaning of an aircraft name does not have to be immediately obvious, but it is advantageous if it is at least easily pronounceable in the languages of relevant partners. It can also be a challenge to find an internationally suitable name that does not cause embarrassment or harm cultural sensitivities. Furthermore, the political complications of such projects mean that some motivations for name choices are occasionally made public.
One of the most high-profile international projects in civil aviation was the co-operation on supersonic passenger transport that resulted in the Aérospatiale-BAC Concorde. The name was said to have been coined by the child of a British Aircraft Corporation official (Costello and Hughes 1976: 57) and was intended to be indicative of the good British–French industrial and political relations that enabled the project to go ahead. Although the official name featured the French spelling from the outset, the British government discouraged the use of the ‘e’ for a period in the 1960s following an unrelated disagreement between British Prime Minister Harold Macmillan and French President Charles de Gaulle. Apparently unaware of the reasons behind this, British Minister of Technology Tony Benn reinstated the ‘e’ during a visit to Toulouse in 1967, proclaiming: ‘That is “e” for excellence; “E” for England and “e” for “entente cordiale” ’ (Benn 1996: 175). Upon receiving a letter from a man who pointed out that some components were made in Scotland too, Benn (1996: 175) replied that it was ‘also “E” for “Écosse” ’.
In 1976, the air forces of Germany, Italy, and the UK chose Panavia Tornado as the name for the combat aircraft developed by the tri-national Panavia consortium (Flight International 1976). The meteorological phenomenon the aircraft was named after is known by the same word in English, German, and Italian, albeit with slightly different pronunciations.
In later years, multinational consortia themselves would be more closely involved in naming international military aircraft. In 1998, the Eurofighter consortium from Germany, Italy, Spain, and the UK were due to name their jointly produced combat aircraft, which had until then been known as the Eurofighter 2000 or EF2000. This project name was appropriate for the geographical base of the partner companies and governments and for the timing of the project, with the prototypes taking to the air in the 1990s, but perhaps a need for a more evocative name was felt. Furthermore, the formal delivery and entry into service of the aircraft would come after the beginning of the third millennium. The potential export market for an aircraft design is often an important consideration in choosing a name, and the name Eurofighter only served to stress the aircraft’s genesis as a design for European military operators, possibly discouraging customers outside of Europe. The frontrunner among suggested names was Eurofighter Typhoon, which suggested a clear association with the earlier Tornado project. A naming announcement was expected in March 1998 pending checks on the linguistic appropriateness of the name for the global market (Jeziorski 1998: 35). This announcement was not forthcoming, however, and the naming ceremony was postponed until September of that year, reportedly due to objections from German partners over the name’s previous use with the Hawker Typhoon, a British fighter-bomber of World War II.6
Any objections were downplayed by the consortium’s managing director, Brian Phillipson, who pointed to the history of the Messerschmitt Bf 108, a German recreational aircraft nicknamed Taifun. Significantly, though, Phillipson stressed that ‘you can say Typhoon in all four [Eurofighter partner] countries’ languages and you can say it in Japanese and it is not rude’ (cited in Ripley 1998). While the name may be well suited as a brand for the Asian export market, its spelling is clearly English, not the German taifun, Italian tifone, or Spanish tifón.
This was not the first time that this project’s name had caused controversy. When the forerunner project was renamed from Future European Fighter Aircraft to European Fighter Aircraft, this was said to be due to the acronym FEFA having ‘unfortunately rude connotations in Italian’ (Flight International 1984). Maybe this alluded to a homophone of this English acronym, the Italian noun fifa, ‘fright’ or ‘jitters’.
Perhaps due to the name Typhoon highlighting the fact that the partner nations were former adversaries, it was originally stated that this name was only for export marketing purposes. As the name’s use spread, though, it was officially adopted as the in-service name in all partner nations in 2002, according to British sources (House of Commons Committee of Public Accounts 2011: Ev 38). Nevertheless, the German Air Force most frequently uses the name Eurofighter alone.
The US-led multi-national Joint Strike Fighter project has led to the Lockheed Martin F-35 Lightning II. The name of this aircraft was intended to be commemorative, as suggested by the Roman numerals, but it recognizes the international nature of the project by referring to two different historic aircraft: the US Lockheed P-38 Lightning and the British English Electric Lightning (Lockheed Martin 2006). A US Air Force press release fails to mention the British precedent for the name but expands on the name’s metaphorical implications: ‘Like lightning, the F-35 Lightning II will strike with destructive force. The stealth characteristics of the jet will allow the F-35 to strike the enemy with accuracy and unpredictability; when the enemy finally hears the thunder, the F-35 is long gone’ (US Air Force 2006).
42.7 Individual Aircraft
Individual aircraft are designated by civil registrations or military serial numbers. While aviation has drawn much of its terminology from the maritime world, and some ways of naming aircraft would appear to be inspired by maritime practices, the use of registrations points to one key difference: individual maritime vessels are almost always named but not always registered, while aircraft are almost always registered but not always named (Embleton and Lapierre 1997: 232). In some cases, though, registrations are also used as names. Civil aircraft worldwide use a prefix denoting the country in which they are registered (e.g. ‘G’ for the UK) followed by a combination of letters and/or numbers. The seven aircraft of British Airways’ Concorde fleet had registrations ranging from G-BOAA to G-BOAG, and staff knew them colloquially by the last two letters (e.g. Alpha Alpha). The flagship of the fleet was Alpha Charlie, as the acronym ‘BOAC’ belonged to the predecessor company that ordered the aircraft, the British Overseas Airways Corporation. Another example of an aircraft with a bespoke registration is the last airworthy Avro Vulcan, which bears the civil registration G-VLCN. It has been given the nickname The Spirit of Great Britain by its civilian operators, but it is better known by its old military serial, XH558.
The unofficial naming of individual military aircraft was widespread in the US Army Air Force of World War II, and names were often emblazoned on the aircraft themselves together with ‘nose art’, which might feature heraldry or, more commonly, cartoon characters and pin-ups. One of the best known examples of a nicknamed individual aircraft is Enola Gay, the Boeing B-29 Superfortress that dropped the atomic bomb on Hiroshima and that was named after the pilot’s mother (Wood 1992: 42). The nicknaming practice was emulated by British and Commonwealth aircrews, especially in Bomber Command, with names often derived from the large squadron code letters painted on the rear fuselage. For example, Avro Lancaster RF141, bearing the squadron code ‘JO-U’, was given the name Uncle Joe Again (Wood 1992: 20). Names and nose art were apparently more common among Canadian than British crews. One of the longest names given to an individual aircraft in World War II might be Chinawattakamapoosekinapee, a Supermarine Spitfire of 421 Squadron Royal Canadian Air Force, which also bore nose art in the form of the profile of a Native Canadian in headdress, the logo of the Squadron’s sponsor, the McColl-Frontenac Oil Company. The name was the invention of pilots Mac Gordon and Bill Marshall and is said to have been coined over some beers (Fochuk 1999: 48). It has been suggested that such names enabled crews to identify more closely with their aircraft and to bond together more cohesively as a crew (Klare 1991: 14). In many cases, however, aircraft were pooled, so crews could be unaware of the background to names (Fochuk 1999: xi).
Official names were sometimes given to individual aircraft (see Fig. 42.1 for a modern example), often in recognition of sponsorship from savings drives such as the 1943 ‘Wings for Victory’ campaign, with ‘presentation aircraft’ named in honour of towns or companies that had donated large sums towards production. One unusual case was that of 427 Squadron Royal Canadian Air Force, which was sponsored by the Metro-Goldwyn-Mayer film company and named each of its Handley Page Halifax aircraft after MGM stars (Armstrong 1999: 48–9). Official names might also be given to individual aircraft that represented production milestones, such as Hawker Hurricane PZ865, named The Last of the Many as the last of the 14,533 aircraft of the type to be produced.
The title of Charles Lindbergh’s 1927 account of his solo transatlantic flight, We—Pilot and Plane, is a particularly succinct expression of the significant bond between Lindbergh and his mount, The Spirit of St. Louis. Names beginning with ‘The Spirit of’ remain popular for individual aircraft in both military and civilian service. For instance, nineteen of the twenty-one Northrop Grumman B-2 Spirit stealth bomber aircraft have been named after the ‘spirit of’ various US states (e.g. Spirit of Alaska or Spirit of Ohio), with the remaining two named Spirit of America and Spirit of Kitty Hawk. These names are officially recognized and many were bestowed at naming ceremonies in the relevant states. Other examples can be found among the ATR 42 aircraft once operated by Ryanair, three of which were given ‘spirit of’ names, such as The Spirit of Waterford. The British airline easyJet currently operates an Airbus A319 named Spirit of easyJet, which also carries displayed on the fuselage the names of employees who have won the company’s ‘Spirit Award’.
Other airlines often name their aircraft in thematic groups. British Airways used to name much of its Boeing 737 fleet after British rivers (e.g. River Glass), 747s after British cities (City of Cardiff/Dinas Caerdydd), 757s after British castles (Glamis Castle), 767s after European cities (City of Milan), and 777s after aviation pioneers (Sir Frank Whittle). These names were once painted on the fuselage, although they have disappeared with rebranding in the last decade.
(1) This chapter is concerned with the names of manned objects intended to fly solely through our planet’s atmosphere, but humankind has of course ventured further with other flying machines. For a discussion of the names of early rockets, ballistic missiles, and satellites, see Pearce (1962).
(2) For the purposes of this chapter, italics will be used for aircraft names. Aircraft will normally be named in full upon their first mention: in formal technical contexts, aircraft are usually referred to by their full name including the name of the manufacturer as well as the name of the type, which may include an alphanumeric type designation, e.g. the Lockheed C-130 Hercules. Sub-type variants may feature an updated mark number or modified designation (Lockheed C-130H Hercules), a modified name, often descriptive of an update or new function (Lockheed C-130J Super Hercules), or a completely new type name (Lockheed EC-130H Compass Call). Some may be known by different names or designations depending on the user, for instance the Lockheed C-130J Super Hercules is named the Lockheed Hercules C.5 when in British service. In normal speech, aircraft are often referred to by their type name only (Hercules).
(3) Although the July 1918 system discontinued the practice established earlier that year of allocating initial letters to constructors, numerous future aircraft names would feature alliteration between constructors’ names and type names, as illustrated by the Vickers Valiant and many others such as the Bristol Blenheim, Hawker Hurricane, and Blackburn Buccaneer.
(4) The 717 code was originally given as the internal model number of the military refuelling aircraft now officially known as the Boeing KC-135 Stratotanker, but, as that earlier use was not widely known, the name Boeing 717 was later used to rebrand the McDonnell Douglas MD-95 after the two companies merged in 1997.
(5) In 1962, when the type was entering service with the RAF, new names were suggested, but the existing name remained (Wansbrough-White 1995: 46). One of the proposed type names from 1962, Voyager, has recently resurfaced as the chosen name for the VC10’s replacement as the RAF’s main tanker and transport aircraft, the Airbus Voyager.
(6) The name Tornado had also been used for several earlier aircraft, including a British fighter design of World War II, but the Hawker Typhoon was better known.
Our mobile phones can reveal a lot about ourselves: where we live and work; who our family, friends and acquaintances are; how (and even what) we communicate with them; and our personal habits. With all the information stored on them, it isn’t surprising that mobile device users take steps to protect their privacy, like using PINs or passcodes to unlock their phones.
The research that we and our colleagues are doing identifies and explores a significant threat that most people miss: More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.
When people install a new Android or iOS app, it asks the user’s permission before accessing personal information. Generally speaking, this is positive. And some of the information these apps are collecting are necessary for them to work properly: A map app wouldn’t be nearly as useful if it couldn’t use GPS data to get a location.
But once an app has permission to collect that information, it can share your data with anyone the app’s developer wants to – letting third-party companies track where you are, how fast you’re moving and what you’re doing.
The help, and hazard, of code libraries
An app doesn’t just collect data to use on the phone itself. Mapping apps, for example, send your location to a server run by the app’s developer to calculate directions from where you are to a desired destination.
The app can send data elsewhere, too. As with websites, many mobile apps are written by combining various functions, precoded by other developers and companies, in what are called third-party libraries. These libraries help developers track user engagement, connect with social media and earn money by displaying ads and other features, without having to write them from scratch.
However, in addition to their valuable help, most libraries also collect sensitive data and send it to their online servers – or to another company altogether. Successful library authors may be able to develop detailed digital profiles of users. For example, a person might give one app permission to know their location, and another app access to their contacts. These are initially separate permissions, one to each app. But if both apps used the same third-party library and shared different pieces of information, the library’s developer could link the pieces together.
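To make that linking step concrete, here is a toy sketch of how a tracking backend could combine reports from two different apps once both include the same device identifier; the records and field names are invented and this is not any real library's code.

```python
# Toy illustration of cross-app profile building via a shared device identifier.
# The records, identifier and field names are invented; this is not any real
# tracking library's code.

reports_from_map_app = [
    {"device_id": "abc123", "last_location": (40.44, -79.94)},
]
reports_from_contacts_app = [
    {"device_id": "abc123", "contacts": ["Alice", "Bob"]},
]

profiles = {}
for record in reports_from_map_app + reports_from_contacts_app:
    # The shared identifier is what lets two separately granted permissions
    # (location in one app, contacts in another) end up in a single profile.
    profile = profiles.setdefault(record["device_id"], {})
    profile.update({k: v for k, v in record.items() if k != "device_id"})

print(profiles)
# {'abc123': {'last_location': (40.44, -79.94), 'contacts': ['Alice', 'Bob']}}
```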
Users would never know, because apps aren’t required to tell users what software libraries they use. And only very few apps make public their policies on user privacy; if they do, it’s usually in long legal documents a regular person won’t read, much less understand.
Our research seeks to reveal how much data are potentially being collected without users’ knowledge, and to give users more control over their data. To get a picture of what data are being collected and transmitted from people’s smartphones, we developed a free Android app of our own, called the Lumen Privacy Monitor. It analyzes the traffic apps send out, to report which applications and online services actively harvest personal data.
Because Lumen is about transparency, a phone user can see the information installed apps collect in real time and with whom they share these data. We try to show the details of apps’ hidden behavior in an easy-to-understand way. It’s about research, too, so we ask users if they’ll allow us to collect some data about what Lumen observes their apps are doing – but that doesn’t include any personal or privacy-sensitive data. This unique access to data allows us to study how mobile apps collect users’ personal data and with whom they share data at an unprecedented scale.
In particular, Lumen keeps track of which apps are running on users’ devices, whether they are sending privacy-sensitive data out of the phone, what internet sites they send data to, the network protocol they use and what types of personal information each app sends to each site. Lumen analyzes apps traffic locally on the device, and anonymizes these data before sending them to us for study: If Google Maps registers a user’s GPS location and sends that specific address to maps.google.com, Lumen tells us, “Google Maps got a GPS location and sent it to maps.google.com” – not where that person actually is.
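A simplified sketch of that anonymization idea follows; it is not Lumen's actual implementation and the field names are invented, but it shows the principle of keeping which app sent which type of data to which destination while dropping the sensitive value.

```python
# Simplified sketch of reporting *that* data was sent without reporting the data
# itself. Not Lumen's actual implementation; the field names are invented.

def anonymize(flow):
    # Keep the app, the type of data and the destination; drop the value itself.
    return {key: flow[key] for key in ("app", "data_type", "destination")}

observed_flow = {
    "app": "Google Maps",
    "data_type": "GPS location",
    "destination": "maps.google.com",
    "value": (40.4406, -79.9959),   # the coordinates never leave the phone
}

print(anonymize(observed_flow))
# {'app': 'Google Maps', 'data_type': 'GPS location', 'destination': 'maps.google.com'}
```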
Trackers are everywhere
More than 1,600 people who have used Lumen since October 2015 allowed us to analyze more than 5,000 apps. We discovered 598 internet sites likely to be tracking users for advertising purposes, including social media services like Facebook, large internet companies like Google and Yahoo, and online marketing companies under the umbrella of internet service providers like Verizon Wireless.
We found that more than 70 percent of the apps we studied connected to at least one tracker, and 15 percent of them connected to five or more trackers. One in every four trackers harvested at least one unique device identifier, such as the phone number or its device-specific unique 15-digit IMEI number. Unique identifiers are crucial for online tracking services because they can connect different types of personal data provided by different apps to a single person or device. Most users, even privacy-savvy ones, are unaware of those hidden practices.
More than just a mobile problem
Tracking users on their mobile devices is just part of a larger problem. More than half of the app-trackers we identified also track users through websites. Thanks to this technique, called “cross-device” tracking, these services can build a much more complete profile of your online persona.
And individual tracking sites are not necessarily independent of others. Some of them are owned by the same corporate entity – and others could be swallowed up in future mergers. For example, Alphabet, Google’s parent company, owns several of the tracking domains that we studied, including Google Analytics, DoubleClick or AdMob, and through them collects data from more than 48 percent of the apps we studied.
Users’ online identities are not protected by their home country’s laws. We found data being shipped across national borders, often ending up in countries with questionable privacy laws. More than 60 percent of connections to tracking sites are made to servers in the U.S., U.K., France, Singapore, China and South Korea – six countries that have deployed mass surveillance technologies. Government agencies in those places could potentially have access to these data, even if the users are in countries with stronger privacy laws such as Germany, Switzerland or Spain.
Even more disturbingly, we have observed trackers in apps targeted to children. By testing 111 kids’ apps in our lab, we observed that 11 of them leaked a unique identifier, the MAC address, of the Wi-Fi router it was connected to. This is a problem, because it is easy to search online for physical locations associated with particular MAC addresses. Collecting private information about children, including their location, accounts and other unique identifiers, potentially violates the Federal Trade Commission’s rules protecting children’s privacy.
Just a small look
Although our data include many of the most popular Android apps, it is a small sample of users and apps, and therefore likely a small set of all possible trackers. Our findings may be merely scratching the surface of what is likely to be a much larger problem that spans across regulatory jurisdictions, devices and platforms.
It’s hard to know what users might do about this. Blocking sensitive information from leaving the phone may impair app performance or user experience: An app may refuse to function if it cannot load ads. Actually, blocking ads hurts app developers by denying them a source of revenue to support their work on apps, which are usually free to users.
If people were more willing to pay developers for apps, that may help, though it’s not a complete solution. We found that while paid apps tend to contact fewer tracking sites, they still do track users and connect with third-party tracking services.
Transparency, education and strong regulatory frameworks are the key. Users need to know what information about them is being collected, by whom, and what it’s being used for. Only then can we as a society decide what privacy protections are appropriate, and put them in place. Our findings, and those of many other researchers, can help turn the tables and track the trackers themselves.
Narseo Vallina-Rodriguez, Research Assistant Professor, IMDEA Networks Institute, Madrid, Spain; Research Scientist, Networking and Security, International Computer Science Institute, based at the University of California, Berkeley; and Srikanth Sundaresan, Research Fellow in Computer Science, Princeton University
For many years, New York City has been developing a “free” public Wi-Fi project. Called LinkNYC, it is an ambitious effort to bring wireless Internet access to all of the city’s residents.
This is the latest in a longstanding trend in which companies offer ostensibly free Internet-related products and services, such as social network access on Facebook, search and email from Google or the free Wi-Fi now commonly provided in cafes, shopping malls and airports.
These free services, however, come at a cost. Use is free on the condition that the companies providing the service can collect, store and analyze users’ valuable personal, locational and behavioral data.
This practice carries with it poorly appreciated privacy risks and an opaque exchange of valuable data for very little.
Is free public Wi-Fi, or any of these other services, really worth it?
Origins of LinkNYC
The winning bid came from CityBridge, a partnership of four companies including advertising firm Titan and designer Control Group.
Their proposal involved building a network of 10,000 kiosks (dubbed “links”) throughout the city that would be outfitted with high-speed Wi-Fi routers to provide Internet, free phone calls within the U.S., a cellphone charging station and a touchscreen map.
Sidewalk Labs, an urban technology company owned by Google’s parent Alphabet, later led the merger of Titan and Control Group into a new company called Intersection. Google, a company whose business model is all about collecting our data, thus became a key player in the entity that will provide NYC with free Wi-Fi.
How free is ‘free’?
Like many free Internet products and services, the LinkNYC will be supported by advertising revenue.
LinkNYC is expected to generate about US$500 million in advertising revenue for New York City over the next 12 years from the display of digital ads on the kiosks’ sides and via people’s cellphones. The model works by providing free access in exchange for users’ personal and behavioral data, which are then used to target ads to them.
It also isn’t clear to what extent the network could be used to track people’s locations.
Titan previously made headlines in 2014 after installing Bluetooth beacons in over 100 pay phone booths, for the purpose of testing the technology, without the city’s permission. Titan was subsequently ordered to remove them.
After close examination, it becomes evident that far from being free, use of LinkNYC comes with the price of mandatory collection of potentially sensitive personal, locational and behavioral data.
A privacy paradox
People’s widespread use of products and services with these data collection and privacy infringing practices is curiously at odds with what they say they are willing to tolerate in studies.
Surveys consistently show that people value their privacy. In a recent Pew survey, 93 percent of adults said that being in control of who can get information about them is important, and 90 percent said the same about what information is collected.
In experiments, people quote high prices for which they would be willing to sell their data. For instance, in a 2005 study in the U.K., respondents said they would sell one month’s access to their location (via a cellphone) for an average of £27.40 (about US$50 based on the exchange rate at the time or $60 in inflation-adjusted terms). The figure went up even higher when subjects were told third party companies would be interested in using the data.
In practice, though, people trade away their personal and behavioral data for very little. This privacy paradox is on full display in the free Wi-Fi example.
Breaking down the economics of LinkNYC’s business model, recall that an estimated $500 million in total ad revenue will be collected over 12 years. With 10,000 Links, and approximately eight million people in New York City, the monthly revenue per person per link is $0.000043.
Fractions of a cent. This is the indirect valuation that users accept from advertisers in exchange for their personal, locational and behavioral data when using the LinkNYC service. Compare that with the value U.K. respondents put on their locational data alone.
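The arithmetic behind that figure is easy to verify:

```python
# Back-of-the-envelope check of the per-person, per-link monthly figure.
total_ad_revenue = 500_000_000    # dollars over the life of the contract
years = 12
links = 10_000
nyc_population = 8_000_000

monthly_revenue = total_ad_revenue / (years * 12)
per_person_per_link = monthly_revenue / nyc_population / links
print(f"${per_person_per_link:.6f} per person, per link, per month")   # about $0.000043
```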
How to explain this paradoxical situation? In valuing their data in experiments, people are usually given the full context of what information will be collected and how it will be used.
People thus end up exchanging their data and their privacy for far less than they might command in a transparent and open market transaction.
The business model of some of the most successful tech companies is built on this opaque exchange between data owner and service provider. The same opaque exchange occurs on social networks like Facebook, online search and online journalism.
Part of a broader trend
It’s ironic that, in this supposed age of abundant information, people are so poorly informed about how their valuable digital assets are being used before they unwittingly sign their rights away.
To grasp the consequences of this, think about how much personal data you hand over every time you use one of these “free” services. Consider how upset people have been in recent years due to large-scale data breaches: for instance, the more than 22 million who lost their background check records in the Office of Personnel Management hack.
Now imagine the size of a file containing all your personal data in 2020 (including financial data, like purchasing history, or health data) after years of data tracking. How would you feel if it were sold to an unknown foreign corporation? How about if your insurance company got ahold of it and raised your rates? Or if an organized crime outfit stole all of it? This is the path that we are on.
Some have already made this realization, and a countervailing trend is under way, one that gives technology users more control over their data and privacy. Mozilla recently updated its Firefox browser to allow users to block ads and trackers. Apple too has avoided an advertising business model, and the personal data harvesting that it necessitates, instead opting to make its money from hardware, app and digital music or video sales.
Developing a way for people to correctly value their data, privacy and information security would be a major additional step forward in developing financially viable, private and secure alternatives.
With it might come the possibility of an information age where people can maintain their privacy and retain ownership and control over their digital assets, should they choose to.
What if you could unlock your smartphone this way?
Nearly 80 percent of Americans own a smartphone, and a growing proportion of them use smartphones for internet access, not just when they’re on the go. This leads to people storing considerable amounts of personal and private data on their mobile devices.
Often, there is just one layer of security protecting all that data – emails and text messages, social media profiles, bank accounts and credit cards, even other passwords to online services. It’s the password that unlocks the smartphone’s screen. Usually this involves entering a number, or just laying a fingertip on a sensor.
Over the past couple of years, my research group, my colleagues and I have designed, created and tested a better way. We call it “user-generated free-form gestures,” which means smartphone owners can draw their own security pattern on the screen. It’s a very simple idea that is surprisingly secure.
Improving today’s weak security
It might seem that biometric authentication, like a fingerprint, could be stronger. But it’s not, because most systems that allow fingerprint access also require a PIN or a password as an alternate backup method. A user – or thief – could skip the biometric method and instead just enter (or guess) a PIN or a password.
Text passwords can be hard to enter accurately on mobile devices, with small “shift” keys and other buttons to press to enter numbers or punctuation marks. As a result, people tend to use PIN codes instead, which are faster but much more easily guessed, because they are short sequences that humans choose in predictable ways: for example, using birth dates. Some devices allow users to choose a connect-the-dots pattern on a grid on the screen – but those can be even less secure than three-digit PINs.
Compared to other methods, our approach dramatically increases the potential length and complexity of a password. Users simply draw a pattern across an entire touchscreen, using any number of locations on the screen.
As users draw a shape or pattern on the screen, we track their fingers, recording where they move and how quickly (or slowly). We compare that track to one recorded when they set up the gesture-based login. This protection can be added just by software changes; it needs no specific hardware or other modifications to existing touchscreen devices. As touchscreens become more common on laptop computers, this method could be used to protect them too.
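To give a flavor of how such a comparison might work in code, here is a minimal sketch that illustrates the general approach rather than the exact recognition algorithm described above: it resamples the stored template track and a new attempt to the same number of points and scores their average distance. A production recognizer would also compare timing and speed, and would use a carefully tuned acceptance threshold.

```python
import math

def resample(points, n=64):
    """Resample a recorded (x, y) track to n points by linear interpolation."""
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j = int(t)
        frac = t - j
        x0, y0 = points[j]
        x1, y1 = points[min(j + 1, len(points) - 1)]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out

def gesture_distance(template, attempt):
    # Average point-to-point distance between the two resampled tracks.
    a, b = resample(template), resample(attempt)
    return sum(math.hypot(px - qx, py - qy)
               for (px, py), (qx, qy) in zip(a, b)) / len(a)

# Accept the unlock attempt only if gesture_distance(...) falls below a threshold.
```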
Our system also allows people to use more than one finger – though some participants wrongly assumed that making simple gestures with multiple fingers would be more secure than the same gesture with just one finger. The key to improving security using one or more fingers is to make a design that is not easy to guess.
Easy to do and remember, hard to break
Some people who participated in our studies created gestures that could be articulated as symbols, such as digits, geometric shapes (like a cylinder) and musical notations. That made complicated doodles – including ones that require lifting fingers (multistroke) – easy for them to remember.
This observation inspired us to study and create new ways to try to guess gesture passwords. We built up a list of possible symbols and tried them. But even a relatively simple symbol, like an eighth note, can be drawn in so many different ways that calculating the possible variations is computationally intensive and time-consuming. This is unlike text passwords, for which variations are simple to try out.
Replacing more than one password
Our research has extended beyond just using a gesture to unlock a smartphone. We have explored the potential for people to use doodles instead of passwords on several websites. It appeared to be no more difficult to remember multiple gestures than it is to recall different passwords for each site.
In fact, it was faster: Logging in with a gesture took two to six seconds less time than doing so with a text password. It’s faster to generate a gesture than a password, too: People spent 42 percent less time generating gesture credentials than people we studied who had to make up new passwords. We also found that people could successfully enter gestures without spending as much attention on them as they had to with text passwords.
Gesture-based interactions are popular and prevalent on mobile platforms, and are increasingly making their way to touchscreen-equipped laptops and desktops. The owners of those types of devices could benefit from a quick, easy and more secure authentication method like ours.
How secure are you?
The first Thursday in May is World Password Day, but don’t buy a cake or send cards. Computer chip maker Intel created the event as an annual reminder that, for most of us, our password habits are nothing to celebrate. Instead, they – and computer professionals like me – hope we will use this day to say our final goodbyes to “qwerty” and “123456,” which are still the most popular passwords.
The problem with short, predictable passwords
The purpose of a password is to limit access to information. Having a very common or simple one like “abcdef” or “letmein,” or even normal words like “password” or “dragon,” is barely any security at all, like closing a door but not actually locking it.
Hackers’ password cracking tools take advantage of this lack of creativity. When hackers find – or buy – stolen credentials, they will likely find that the passwords have been stored not as the text of the passwords themselves but as unique fingerprints, called “hashes,” of the actual passwords. A hash function mathematically transforms each password into an encoded, fixed-size version of itself. Hashing the same original password will give the same result every time, but it’s computationally nearly impossible to reverse the process, to derive a plaintext password from a specific hash.
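To make that concrete, here is a minimal Python illustration of hashing: the same input always yields the same fixed-size digest, and the digest cannot practically be reversed. Note that real password stores should use a slow, salted scheme such as bcrypt, scrypt or Argon2 rather than a bare fast hash like the SHA-256 used here for brevity.

```python
import hashlib

# Identical passwords hash to identical digests; different passwords do not.
for pw in ("qwerty", "qwerty", "letmein"):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    print(pw, "->", digest[:16] + "...")
```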
Instead, the cracking software computes the hash values for large numbers of possible passwords and compares the results to the hashed passwords in the stolen file. If any match, the hacker’s in. The first place these programs start is with known hash values for popular passwords.
More savvy users who choose a less common password might still fall prey to what is called a “dictionary attack.” The cracking software tries each of the 171,000 words in the English dictionary. Then the program tries combined words (such as “qwertypassword”), doubled sequences (“qwertyqwerty”), and words followed by numbers (“qwerty123”).
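A toy sketch of that dictionary-attack logic looks something like the snippet below; the wordlist and the "stolen" hash are tiny stand-ins invented for illustration, not real breach data.

```python
import hashlib

# Hypothetical stolen hash: in this example it happens to be "qwerty123".
stolen_hash = hashlib.sha256(b"qwerty123").hexdigest()
wordlist = ["password", "dragon", "qwerty", "letmein"]   # stand-in dictionary

def candidates(word):
    yield word
    yield word * 2                    # doubled sequences, e.g. "qwertyqwerty"
    for n in range(1000):             # word followed by a short number
        yield f"{word}{n}"

for word in wordlist:
    for guess in candidates(word):
        if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
            print("cracked:", guess)  # prints "cracked: qwerty123"
```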
Moving on to blind guessing
Only if the dictionary attack fails will the attacker reluctantly move to what is called a “brute-force attack,” guessing arbitrary sequences of numbers, letters and characters over and over until one matches.
Mathematics tells us that a longer password is less guessable than a shorter password. That’s true even if the shorter password is made from a larger set of possible characters.
For example, a six-character password made up of the 95 different symbols on a standard American keyboard yields 95^6, or about 735 billion, possible combinations. That sounds like a lot, but a 10-character password made from only lowercase English characters yields 26^10, or about 141 trillion, options. Of course, a 10-character password from the 95 symbols gives 95^10, or roughly 59 quintillion, possibilities.
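Those keyspace figures are easy to verify:

```python
# Keyspace sizes from the paragraph above.
print(95 ** 6)    # 735_091_890_625              ~ 735 billion
print(26 ** 10)   # 141_167_095_653_376          ~ 141 trillion
print(95 ** 10)   # 59_873_693_923_837_890_625   ~ 59.9 quintillion
```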
That’s why some websites require passwords of certain lengths and with certain numbers of digits and special characters – they’re designed to thwart the most common dictionary and brute-force attacks. Given enough time and computing power, though, any password is crackable.
And in any case, humans are terrible at memorizing long, unpredictable sequences. We sometimes use mnemonics to help, like the way “Every Good Boy Does Fine” reminds us of the notes indicated by the lines on sheet music. They can also help us remember a password like “freQ!9tY!juNC,” which at first appears very mixed up.
Splitting the password into three chunks, “freQ!,” “9tY!” and “juNC,” reveals what might be remembered as three short, pronounceable words: “freak,” “ninety” and “junk.” People are better at memorizing passwords that can be chunked, either because they find meaning in the chunks or because they can more easily add their own meaning through mnemonics.
Don’t reuse passwords
Suppose we take all this advice to heart and resolve to make all our passwords at least 15 characters long and full of random numbers and letters. We invent clever mnemonic devices, commit a few of our favorites to memory, and start using those same passwords over and over on every website and application.
At first, this might seem harmless enough. But password-thieving hackers are everywhere. Recently, big companies including Yahoo, Adobe and LinkedIn have all been breached. Each of these breaches revealed the usernames and passwords for hundreds of millions of accounts. Hackers know that people commonly reuse passwords, so a cracked password on one site could make the same person vulnerable on a different site.
Beyond the password
Not only do we need long, unpredictable passwords, but we need different passwords for every site and program we use. The average internet user has 19 different passwords. It’s easy to see why people write them down on sticky notes or just click the “I forgot my password” link.
Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.
Sometimes these programs themselves have vulnerabilities that can be exploited by attackers. And some websites block password managers from functioning. And of course, an attacker could peek at the keyboard as we type in our passwords.
Multi-factor authentication was invented to solve these problems. This involves a code sent to a mobile phone, a fingerprint scan or a special USB hardware token. However, even though users know the multi-factor authentication is probably safer, they worry it might be more inconvenient or difficult. To make it easier, sites like Authy.com provide straightforward guides for enabling multi-factor authentication on popular websites.
So no more excuses. Let’s put on our party hats and start changing those passwords. World Password Day would be a great time to ditch “qwerty” for good, try out a password manager and turn on multi-factor authentication. Once you’re done, go ahead and have that cake, because you’ll deserve it.
Ransomware – malicious software that sneaks onto your computer, encrypts your data so you can’t access it and demands payment for unlocking the information – has become an emerging cyberthreat. Several reports in the past few years document the diversity of ransomware attacks and their increasingly sophisticated methods. Recently, high-profile ransomware attacks on large enterprises such as hospitals and police departments have demonstrated that large organizations of all types are at risk of significant real-world consequences if they don’t protect themselves properly against this type of cyberthreat.
The development of strong encryption technology has made it easier to encode data so that it cannot be read without the decryption key. The emergence of anonymity services such as the Tor network and bitcoin and other cryptocurrencies has eased worries about whether people who receive payments might be identified through financial tracking. These trends are likely driving factors in the recent surge of ransomware development and attacks.
Like other classes of malicious software – often called “malware” – ransomware uses a fairly wide range of techniques to sneak into people’s computers. These include attachments or links in unsolicited email messages, or phony advertisements on websites. However, when it comes to the core part of the attack – encrypting victims’ files to make them inaccessible – most ransomware attacks use very similar methods. This commonality provides an opportunity for ransomware attacks to be detected before they are carried out.
My recent research discovered that ransomware programs’ attempts to request access and encrypt files on hard drives are very different from benign operating system processes. We also found that diverse types of ransomware, even ones that vary widely in terms of sophistication, interact with computer file systems similarly.
Moving fast and hitting hard
One reason for this similarity amid apparent diversity is the commonality of attackers’ mindsets: the most successful attack is one that encrypts a user’s data very quickly, makes the computer files inaccessible and requests money from the victim. The more slowly that sequence happens, the more likely the ransomware is to be detected and shut down by antivirus software.
What attackers are trying to do is not simple. First, they need to reliably encrypt the victim’s files. Early ransomware used very basic techniques to do this. For example, it used to be that a ransomware application would use a single decryption key no matter where it spread to. This meant that if someone were able to detect the attack and discover the key, they could share the key with other victims, who could then decode the encrypted data without paying.
Today’s ransomware attackers use advanced cryptographic systems and Internet connectivity to minimize the chance that a victim could find a way to get her files back on her own. Once the program makes its way into a new computer, it sends a message back over the internet to a computer the attacker is using to control the ransomware. A unique key pair for encryption and decryption is generated for that compromised computer. The decryption key is saved in the attacker’s computer, while the encryption key is sent to the malicious program in the compromised computer to perform the file encryption. The decryption key, which is required to decrypt the files only on that computer, is what the victim receives when he pays the ransom fee.
The second part of a “successful” ransomware attack – from the perspective of the attacker – depends on finding reliable ways to get paid without being caught. Ransomware operators continuously strive to make payments harder to trace and easier to convert into their preferred currency. Attackers attempt to avoid being identified and arrested by communicating via the anonymous Tor network and exchanging money in difficult-to-trace cryptocurrencies like bitcoins.
Defending against a ransomware attack
Unfortunately, the use of advanced cryptosystems in modern ransomware families has made recovering victims’ files almost impossible without paying the ransom. However, it is easier to defend against ransomware than to fight off other types of cyberthreats, such as hackers gaining unauthorized entry to company data and stealing secret information.
The easiest way to protect against ransomware attacks is to have, and follow, a reliable data-backup policy. Companies that do not want to end up as paying victims of ransomware should have their workers conduct real-time incremental backups (which back up file changes every few minutes). In addition, in case their own backup servers get infected with ransomware, these companies should have offsite cloud backup storage that is protected from ransomware. Companies that are attacked can then restore their data from these backups instead of paying the ransom.
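As a simplified illustration of the incremental idea, a backup pass might copy only files that have changed since the last run, as in the sketch below. The paths and scheduling are placeholders, and a real policy would also keep versioned, offsite copies that malware on the source machine cannot overwrite.

```python
import shutil
from pathlib import Path

SOURCE, BACKUP = Path("/data"), Path("/mnt/backup/data")   # placeholder paths

def incremental_backup(source: Path, backup: Path) -> None:
    """Copy files whose modification time is newer than the backed-up copy."""
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = backup / src.relative_to(source)
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)

# incremental_backup(SOURCE, BACKUP)  # run every few minutes from a scheduler
```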
Users should also download and install regular updates to software, including third-party plug-ins for web browsers and other systems. These often plug security vulnerabilities that, if left open, provide attackers an easy way in.
Generally, being infected with ransomware carries two important messages for an organization. First, it’s a sign of vulnerability in a company’s entire computer system, which also means that the organization is vulnerable to other types of attacks. It is always better to learn of an intrusion early, rather than being compromised for several months.
Second, being infected with ransomware also suggests users are engaging in risky online behavior, such as clicking on unidentified email attachments from unknown senders, and following links on disreputable websites. Teaching people about safe internet browsing can dramatically reduce an organization’s vulnerability to a ransomware attack.
A fun game, plus science advancement.
Computer gaming is now a regular part of life for many people. Beyond just being entertaining, though, it can be a very useful tool in education and in science.
If people spent just a fraction of their play time solving real-life scientific puzzles – by playing science-based video games – what new knowledge might we uncover? Many games aim to take academic advantage of the countless hours people spend gaming each day. In the field of biochemistry alone, there are several, including the popular game Foldit.
In Foldit, players attempt to figure out the detailed three-dimensional structure of proteins by manipulating a simulated protein displayed on their computer screen. They must observe various constraints based in the real world, such as the order of amino acids and how close to each other their biochemical properties permit them to get. In academic research, these tasks are typically performed by trained experts.
Thousands of people – with and without scientific training – play Foldit regularly. Sure, they’re having fun, but are they really contributing to science in ways experts don’t already? To answer this question – to find out how much we can learn by having nonexperts play scientific games – we recently set up a Foldit competition between gamers, undergraduate students and professional scientists. The amateur gamers did better than the professional scientists managed using their usual software.
This suggests that scientific games like Foldit can truly be valuable resources for biochemistry research while simultaneously providing enjoyable recreation. More widely, it shows the promise that crowdsourcing to gamers (or “gamesourcing”) could offer to many fields of study.
Looking closely at proteins
Proteins perform basically all the microscopic tasks necessary to keep organisms alive and healthy, from building cell walls to fighting disease. By seeing the proteins up close, biochemists can much better understand life itself.
Understanding how proteins fold is also critical because if they don’t fold properly, the proteins can’t do their tasks in the cell. Worse, some proteins, when improperly folded, can cause debilitating diseases, such as Alzheimer’s, Parkinson’s and ALS.
Taking pictures of proteins
First, by analyzing the DNA that tells cells how to make a given protein, we know the sequence of amino acids that makes up the protein. But that doesn’t tell us what shape the protein takes.
To get a picture of the three-dimensional structure, we use a technique called X-ray crystallography. This allows us to see objects that are only nanometers in size. By taking X-rays of the protein from multiple angles, we can construct a digital 3D model (called an electron density map) with the rough outlines of the protein’s actual shape. Then it’s up to the scientist to determine how the sequence of amino acids folds together in a way that both fits the electron density map and also is biochemically sound.
Although this process isn’t easy, many crystallographers think that it is the most fun part of crystallography because it is like solving a three-dimensional jigsaw puzzle.
An addictive puzzle
The competition, and its result, were the culmination of several years of improving biochemistry education by showing how it can be like gaming. We teach an undergraduate class that includes a section on how biochemists can determine what proteins look like.
When we gave an electron density map to our students and had them move the amino acids around with a mouse and keyboard and fold the protein into the map, students loved it – some so much they found themselves ignoring their other homework in favor of our puzzle. As the students worked on the assignment, we found the questions they raised became increasingly sophisticated, delving deeply into the underlying biochemistry of the protein.
In the end, 10 percent of the class actually managed to improve on the structure that had been previously solved by professional crystallographers. They tweaked the pieces so they fit better than the professionals had been able to. Most likely, since 60 students were working on it separately, some of them managed to fix a number of small errors that had been missed by the original crystallographers. This outcome reminded us of the game Foldit.
From the classroom to the game lab
Like crystallographers, Foldit players manipulate amino acids to figure out a protein’s structure based on their own puzzle-solving intuition. But rather than one trained expert working alone, thousands of nonscientist players worldwide get involved. They’re devoted gamers looking for challenging puzzles and willing to use their gaming skills for a good cause.
Foldit’s developers had just finished a new version of the game providing puzzles based on three-dimensional crystallographic electron density maps. They were ready to see how players would do.
We gave students a new crystallography assignment, and told them they would be competing against Foldit players to produce the best structure. We also got two trained crystallographers to compete using the software they’d be familiar with, as well as several automated software packages that crystallographers often use. The race was on!
Amateurs outdo professionals
The students attacked the assignment vigorously, as did the Foldit players. As before, the students learned how proteins are put together through shaping these protein structures by hand. Moreover, both groups appeared to take pride in their role in pioneering new science.
At the end of the competition, we analyzed all the structures from all the participants. We calculated statistics about the competing structures that told us how correct each participant was in their solution to the puzzle. The results ranged from very poor structures that didn’t fit the map at all to exemplary solutions.
The best structure came from a group of nine Foldit players who worked collaboratively to come up with a spectacular protein structure. Their structure turned out to be even better than the structures from the two trained professionals.
Students and Foldit players alike were eager to master difficult concepts because it was fun. The results they came up with gave us useful scientific results that can really improve biochemistry.
There are many other games along similar lines, including the “Discovery” mini-game in the massively multiplayer online role-playing game “Eve Online,” which helps build the Human Protein Atlas, and Eterna, which tries to decipher how RNA molecules fold themselves up. If educators incorporate scientific games into their curricula potentially as early as middle school, they are likely to find students becoming highly motivated to learn at a very deep level while having a good time. We encourage game designers and scientists to work together more to create games with purpose, and gamers of the world should play more to bolster the scientific process.
Seeking to make stories that surround us.
Marvel’s new blockbuster, “Guardians of the Galaxy, Vol. 2,” carries audiences through a narrative carefully curated by the film’s creators. That’s also what Telltale’s Guardians-themed game did when it was released in April. Early reviews suggest the game is just another form of guided progress through a predetermined story, not a player-driven experience in the world of the movie and its characters. Some game critics lament this, and suggest game designers let traditional media tell the linear stories.
What is out there for the player who wants to explore on his or her own in rich universes like the ones created by Marvel? Not much. Not yet. But the future of media is coming.
As longtime experimenters and scholars in interactive narrative who are now building a new academic discipline we call “computational media,” we are working to create new forms of interactive storytelling, strongly shaped by the choices of the audience. People want to explore, through play, themes like those in Marvel’s stories, about creating family, valuing diversity and living responsibly.
These experiences will need compelling computer-generated characters, not the husks that now speak to us from smartphones and home assistants. And they’ll need virtual environments that are more than just simulated space – environments that feel alive, responsive and emotionally meaningful.
This next generation of media – which will be a foundation for art, learning, self-expression and even health maintenance – requires a deeply interdisciplinary approach. Instead of engineer-built tools wielded by artists, we must merge art and science, storytelling and software, to create groundbreaking, technology-enabled experiences deeply connected to human culture.
In search of interactivity
One of the first interactive character experiences involved “Eliza,” a language and software system developed in the 1960s. It seemed like a very complex entity that could engage compellingly with a user. But the more people interacted with it, the more they noticed formulaic responses that signaled it was a relatively simple computer program.
In contrast, programs like “Tale-Spin” have elaborate technical processes behind the scenes that audiences never see. The audience sees only the effects, like selfish characters telling lies. The result is the opposite of the “Eliza” effect: Rather than simple processes that the audience initially assumes are complex, we get complex processes that the audience experiences as simple.
An exemplary alternative to both types of hidden processes is “SimCity,” the seminal game by Will Wright. It contains a complex but ultimately transparent model of how cities work, including housing locations influencing transportation needs and industrial activity creating pollution that bothers nearby residents. It is designed to lead users, through play, to an understanding of this underlying model as they build their own cities and watch how they grow. This type of exploration and response is the best way to support long-term player engagement.
Connecting technology with meaning
No one discipline has all the answers for building meaningfully interactive experiences about topics more subtle than city planning – such as what we believe, whom we love and how we live in the world. Engineering can’t teach us how to come up with a meaningful story, nor understand if it connects with audiences. But the arts don’t have methods for developing the new technologies needed to create a rich experience.
Today’s most prominent examples of interactive storytelling tend to lean toward one approach or the other. Despite being visually compelling, with powerful soundtracks, neither indie titles like “Firewatch” nor blockbusters such as “Mass Effect: Andromeda” have many significant ways for a player to actually influence their worlds.
Both independently and together, we’ve been developing deeper interactive storytelling experiences for nearly two decades. “Terminal Time,” an interactive documentary generator first shown in 1999, asks the audience several questions about their views of historical issues. Based on the responses (measured as the volume of clapping for each choice), it custom-creates a story of the last millennium that matches, and increasingly exaggerates, those particular ideas.
For example, to an audience who supported anti-religious rationalism, it might begin presenting distant events that match their biases – such as the Catholic Church’s 17th-century execution of philosopher Giordano Bruno. But later it might show more recent, less comfortable events – like the Chinese communist (rationalist) invasion and occupation of (religious) Tibet in the 1950s.
The results are thought-provoking, because the team creating it – including one of us (Michael), documentarian Steffi Domike and media artist Paul Vanouse – combined deep technical knowledge with clear artistic goals and an understanding of the ways events are selected, connected and portrayed in ideologically biased documentaries.
Digging into narrative
“Façade,” released in 2005 by Michael and fellow artist-technologist Andrew Stern, represented a further extension: the first fully realized interactive drama. A person playing the experience visits the apartment of a couple whose marriage is on the verge of collapse. A player can say whatever she wants to the characters, move around the apartment freely, and even hug and kiss either or both of the hosts. It provides an opportunity to improvise along with the characters, and take the conversation in many possible directions, ranging from angry breakups to attempts at resolution.
“Façade” also lets players interact creatively with the experience as a whole, choosing, for example, to play by asking questions a therapist might use – or by saying only lines Darth Vader says in the “Star Wars” movies. Many people have played as different characters and shared videos of the results of their collaboration with the interactive experience. Some of these videos have been viewed millions of times.
As with “Terminal Time,” “Façade” had to combine technical research – about topics like coordinating between virtual characters and understanding natural language used by the player – with a specific artistic vision and knowledge about narrative. In order to allow for a wide range of audience influence, while still retaining a meaningful story shape, the software is built to work in terms of concepts from theater and screenwriting, such as dramatic “beats” and tension rising toward a climax. This allows the drama to progress even as different players learn different information, drive the conversation in different directions and draw closer to one or the other member of the couple.
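To give a flavor of what a beat-based drama manager does, here is a toy sketch, not “Façade”’s actual architecture: each beat declares prerequisites and a tension level, and the manager picks, from the beats whose prerequisites are satisfied, the one whose tension best matches a target that rises toward a climax. The beat names and values are invented.

```python
# Hypothetical beats for a simple domestic drama.
beats = [
    {"name": "small_talk",     "tension": 1, "requires": set()},
    {"name": "first_barb",     "tension": 3, "requires": {"small_talk"}},
    {"name": "accusation",     "tension": 6, "requires": {"first_barb"}},
    {"name": "blow_up",        "tension": 9, "requires": {"accusation"}},
    {"name": "reconciliation", "tension": 4, "requires": {"accusation"}},
]

def next_beat(history, target_tension):
    done = {b["name"] for b in history}
    available = [b for b in beats
                 if b["name"] not in done and b["requires"] <= done]
    if not available:
        return None
    # Choose the available beat whose tension is closest to the current target.
    return min(available, key=lambda b: abs(b["tension"] - target_tension))

history = []
for step in range(len(beats)):
    target = 2 + 2 * step            # tension target rises as the drama proceeds
    beat = next_beat(history, target)
    if beat is None:
        break
    history.append(beat)
    print(step, beat["name"], "target tension:", target)
```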
Bringing art and engineering together
A decade ago, our work uniting storytelling, artificial intelligence, game design, human-computer interaction, media studies and many other arts, humanities and sciences gave rise to the Expressive Intelligence Studio, a technical and cultural research lab at the Baskin School of Engineering at UC Santa Cruz, where we both work. In 2014 we created the country’s first academic department of computational media.
Today, we work with colleagues across campus to offer undergrad degrees in games and playable media with arts and engineering emphases, as well as graduate education for developing games and interactive experiences.
With four of our graduate students (Josh McCoy, Mike Treanor, Ben Samuel and Aaron A. Reed), we recently took inspiration from sociology and theater to devise a system that simulates relationships and social interactions. The first result was the game “Prom Week,” in which the audience is able to shape the social interactions of a group of teenagers in the week leading up to a high school prom.
We found that its players feel much more responsibility for what happens than in pre-scripted games. It can be disquieting. As game reviewer Craig Pearson put it – after destroying the romantic relationship of his perceived rival, then attempting to peel away his remaining friendships, only to realize this wasn’t necessary – “Next time I’ll be looking at more upbeat solutions, because the alternative, frankly, is hating myself.”
That social interaction system is also a base for other experiences. Some address serious topics like cross-cultural bullying or teaching conflict deescalation to soldiers. Others are more entertaining, like a murder mystery game – and a still-secret collaboration with Microsoft Studios. We’re now getting ready for an open-source release of the underlying technology, which we’re calling the Ensemble Engine.
Pushing the boundaries
Our students are also expanding the types of experiences interactive narratives can offer. Two of them, Aaron A. Reed and Jacob Garbe, created “The Ice-Bound Concordance,” which lets players explore a vast number of possible combinations of events and themes to complete a mysterious novel.
Three other students, James Ryan, Ben Samuel and Adam Summerville, created “Bad News,” which generates a new small midwestern town for each player – including developing the town, the businesses, the families in residence, their interactions and even the inherited physical traits of townspeople – and then kills one character. The player must notify the dead character’s next of kin. In this experience, the player communicates with a human actor trained in improvisation, exploring possibilities beyond the capabilities of today’s software dialogue systems.
Kate Compton, another student, created “Tracery,” a system that makes storytelling frameworks easy to create. Authors can fill in blanks in structure, detail, plot development and character traits. Professionals have used the system: Award-winning developer Dietrich Squinkifer made the uncomfortable one-button conversation game “Interruption Junction.” “Tracery” has let newcomers get involved, too, as with the “Cheap Bots Done Quick!” platform. It is the system behind around 4,000 bots active on Twitter, including ones relating the adventures of a lost self-driving Tesla, parodying the headlines of “Boomersplaining thinkpieces,” offering self-care reminders and generating pastel landscapes.
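The core idea behind a grammar system like this can be sketched in a few lines. The snippet below is a from-scratch illustration of #symbol# expansion rather than the real Tracery library’s API, and the grammar contents are invented.

```python
import random
import re

# A tiny, invented grammar: symbols written as #symbol# get replaced,
# recursively, by a random option from the grammar.
grammar = {
    "origin": ["The #adjective# #creature# #action#."],
    "adjective": ["lost", "pastel", "weary"],
    "creature": ["self-driving car", "wizard", "landscape painter"],
    "action": ["wanders the interstate", "reminds you to drink water"],
}

def expand(symbol="origin"):
    text = random.choice(grammar[symbol])
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1)), text)

print(expand())   # e.g. "The lost self-driving car wanders the interstate."
```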
Many more projects are just beginning. For instance, we’re starting to develop an artificial intelligence system that can understand things usually only humans can – like the meanings underlying a game’s rules and what a game feels like when played. This will allow us to more easily explore what the audience will think and feel in new interactive experiences.
There’s much more to do, as we and others work to invent the next generation of computational media. But as in a Marvel movie, we’d bet on those who are facing the challenges, rather than the skeptics who assume the challenges can’t be overcome.
Artificial intelligence can bring many benefits to human gamers.
Way back in the 1980s, a schoolteacher challenged me to write a computer program that played tic-tac-toe. I failed miserably. But just a couple of weeks ago, I explained to one of my computer science graduate students how to solve tic-tac-toe using the so-called “Minimax algorithm,” and it took us about an hour to write a program to do it. Certainly my coding skills have improved over the years, but computer science has come a long way too.
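For the curious, a complete Minimax solver for tic-tac-toe really does fit in a short program. The version below is a generic textbook sketch of the algorithm, not the exact code we wrote that afternoon.

```python
# Minimax for tic-tac-toe. The board is a list of 9 cells holding 'X', 'O'
# or None; 'X' is the maximising player.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for the side to move: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                     # board full: a draw
    best_score = -2 if player == 'X' else 2
    best_move = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        if (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

print(minimax([None] * 9, 'X'))            # (0, 0): perfect play is always a draw
```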
What seemed impossible just a couple of decades ago is startlingly easy today. In 1997, people were stunned when a chess-playing IBM computer named Deep Blue beat international grandmaster Garry Kasparov in a six-game match. In 2015, Google revealed that its DeepMind system had mastered several 1980s-era video games, including teaching itself a crucial winning strategy in “Breakout.” In 2016, Google’s AlphaGo system beat a top-ranked Go player in a five-game tournament.
The quest for technological systems that can beat humans at games continues. In late May, AlphaGo will take on Ke Jie, the best player in the world, among other opponents at the Future of Go Summit in Wuzhen, China. With increasing computing power, and improved engineering, computers can beat humans even at games we thought relied on human intuition, wit, deception or bluffing – like poker. I recently saw a video in which volleyball players practice their serves and spikes against robot-controlled rubber arms trying to block the shots. One lesson is clear: When machines play to win, human effort is futile.
This can be great: We want a perfect AI to drive our cars, and a tireless system looking for signs of cancer in X-rays. But when it comes to play, we don’t want to lose. Fortunately, AI can make games more fun, and perhaps even endlessly enjoyable.
Designing games that never get old
Today’s game designers – who write releases that earn more than a blockbuster movie – see a problem: Creating an unbeatable artificial intelligence system is pointless. Nobody wants to play a game they have no chance of winning.
But people do want to play games that are immersive, complex and surprising. Even today’s best games become stale after a person plays for a while. The ideal game will engage players by adapting and reacting in ways that keep the game interesting, maybe forever.
So when we’re designing artificial intelligence systems, we should look not to the triumphant Deep Blues and AlphaGos of the world, but rather to the overwhelming success of massively multiplayer online games like “World of Warcraft.” These sorts of games are graphically well-designed, but their key attraction is interaction.
It seems as if most people are not drawn to extremely difficult logical puzzles like chess and Go, but rather to meaningful connections and communities. The real challenge with these massively multi-player online games is not whether they can be beaten by intelligence (human or artificial), but rather how to keep the experience of playing them fresh and new every time.
Change by design
At present, game environments allow people lots of possible interactions with other players. The roles in a dungeon raiding party are well-defined: Fighters take the damage, healers help them recover from their injuries and the fragile wizards cast spells from afar. Or think of “Portal 2,” a game focused entirely on collaborating robots puzzling their way through a maze of cognitive tests.
Exploring these worlds together allows you to form common memories with your friends. But any changes to these environments or the underlying plots have to be made by human designers and developers.
In the real world, changes happen naturally, without supervision, design or manual intervention. Players learn, and living things adapt. Some organisms even co-evolve, reacting to each other’s developments. (A similar phenomenon happens in a weapons technology arms race.)
Computer games today lack that level of sophistication. And for that reason, I don’t believe developing an artificial intelligence that can play modern games will meaningfully advance AI research.
We crave evolution
A game worth playing is a game that is unpredictable because it adapts, a game that is ever novel because novelty is created by playing the game. Future games need to evolve. Their characters shouldn’t just react; they need to explore and learn to exploit weaknesses or cooperate and collaborate. Darwinian evolution and learning, we understand, are the drivers of all novelty on Earth. It could be what drives change in virtual environments as well.
Evolution figured out how to create natural intelligence. Shouldn’t we, instead of trying to code our way to AI, just evolve AI instead? Several labs – including my own and that of my colleague Christoph Adami – are working on what is called “neuro-evolution.”
In a computer, we simulate complex environments, like a road network or a biological ecosystem. We create virtual creatures and challenge them to evolve over hundreds of thousands of simulated generations. Evolution itself then develops the best drivers, or the best organisms at adapting to the conditions – those are the ones that survive.
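Stripped to its essentials, that evolutionary loop looks like the toy sketch below, where the fitness function is a simple stand-in rather than a real driving or ecosystem simulation.

```python
import random

GENES, POP, GENERATIONS, MUTATION = 8, 50, 200, 0.1

def fitness(genome):
    # Placeholder objective: genomes score higher the closer they get to all 1.0s.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0, MUTATION) for g in genome]

# Random initial population of "genomes" (stand-ins for controller weights).
population = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 5]              # keep the fittest 20%
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))
```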
Today’s AlphaGo is beginning this process, learning by continuously playing games against itself, and by analyzing records of games played by top Go champions. But it does not learn while playing in the same way we do, experiencing unsupervised experimentation. And it doesn’t adapt to a particular opponent: For these computer players, the best move is the best move, regardless of an opponent’s style.
Programs that learn from experience are the next step in AI. They would make computer games much more interesting, and enable robots to not only function better in the real world, but to adapt to it on the fly.
Tomorrow at TEDx Sydney’s Opera House event, high-profile neurosurgeon Charlie Teo will talk about brain cancer. Last Saturday Teo was on Channel 9’s Sunrise program talking about the often malignant cancer that in 2012 killed 1,241 Australians. During the program he said:
Unfortunately the jury is still out on whether mobile phones can lead to brain cancer, but studies suggest it’s so.
Teo’s name appears on a submission recently sent to the United Nations. If you Google “Charlie Teo and mobile phones” you will see that his public statements on this issue go back years.
The submission he signed commences:
We are scientists engaged in the study of biological and health effects of non-ionizing electromagnetic fields (EMF). Based upon peer-reviewed, published research, we have serious concerns regarding the ubiquitous and increasing exposure to EMF generated by electric and wireless devices. These include – but are not limited to – radiofrequency radiation (RFR) emitting devices, such as cellular and cordless phones and their base stations, Wi-Fi, broadcast antennas, smart meters, and baby monitors as well as electric devices and infra-structures [sic] used in the delivery of electricity that generate extremely-low frequency electromagnetic field (ELF EMF).
That list just about covers off every facet of modern life: the internet, phones, radio, television and any smart technology. It’s a list the Amish and reclusive communities of “wifi refugees” know all about.
Other than those living in the remotest of remote locations, there are very few in Australia today who are not bathed in electromagnetic fields and radiofrequency radiation, 24 hours a day. My mobile phone shows me that my house is exposed to the wifi systems of six neighbours’ houses as well as my own. Public wifi hotspots are rapidly increasing.
The first mobile phone call in Australia was made over 28 years ago on February 23, 1987. In December 2013, there were some 30.2 million mobile phones being used in a population of 22.7 million people. Predictions are that there will be 5.9 billion smartphone users globally within four years. There are now more than 100 nations which have more mobile phones than population.
So while Australia has become saturated in electromagnetic field radiation over the past quarter century, what has happened to cancer rates?
Brain cancer is Teo’s surgical speciality and the cancer site that attracts nearly all of the mobile phone panic attention. In 1987 the age-adjusted incidence rate of brain cancer in Australia per 100,000 people was 6.6. In 2011, the most recent year for which national data is available, the rate was 7.3.
The graph below shows brain cancer incidence has all but flat-lined across the 29 years for which data are available. All cancer is notifiable in Australia.
Brain cancers are a relatively uncommon group of cancers: their 7.3 per 100,000 incidence compares with female breast (116), colorectal (61.5) and lung cancer (42.5). There is no epidemic of brain cancer, let alone mobile phone caused brain cancer. The Cancer Council explicitly rejects the link. This US National Cancer Institute fact sheet summarises current research, highlighting rather different conclusions than Charlie Teo.
Another Australian signatory of the submission, Priyanka Bandara, describes herself as an “Independent Environmental Health Educator/Researcher; Advisor, Environmental Health Trust and Doctors for Safer Schools”.
Last year, a former student of mine asked to meet with me to discuss wifi on our university campus. She arrived at my office with Bandara, who looked worried as she ran an EMF meter over my room. I was being pickled in it, apparently.
Her pitch to me was one I have encountered many times before. The key ingredients are that there are now lots of highly credentialed scientists who are deeply concerned about a particular problem, here wifi. These scientists have published [pick a very large number] of “peer reviewed” research papers about the problem.
Peer review often turns out to mean having like-minded people from their networks, typically with words like “former”, “leading” or “senior” next to their names, write gushing appraisals of often unpublished reports.
The neo-Galilean narrative then moves to how this information is all being suppressed by the web of influence of vested industrial interests. These interests are arranging for scientists to be sacked, suppressing publication of alarming reports, and preventing many scientists from speaking out in fear.
Case reports of individuals claiming to be harmed and suffering Old Testament-length lists of symptoms as a result of exposure are then publicised. Here’s one for smart meters, strikingly similar to the 240+ symptom list for “wind turbine syndrome”. Almost any symptom is attributed to exposure.
Historical parallels with the conduct of the tobacco and asbestos industries and Big Pharma are then made. The argument runs “we understand the history of suppression and denial with these industries and this new issue is now experiencing the same”.
There is no room for considering that the claims about the new issue might just be claptrap and that the industries affected by the circulation of false and dangerous nonsense might understandably want to stamp on it.
Bandara’s modest blog offers schools the opportunity to hear her message:
Wireless technologies are sweeping across schools exposing young children to microwave radiation. This is not in line with the Precautionary Principle. A typical classroom with 25 WiFi enabled tablets/laptops (each operating at 0.2 W) generates in five hours about the same microwave radiation output as a typical microwave oven (at 800 W) in two minutes. Would you like to microwave your child for two minutes (without causing heating as it is done very slowly using lower power) daily?
There can be serious consequences of alarming people about infinitesimally small, effectively non-existent risks. This rural Victorian news story features a woman so convinced that transmission towers are harming her that she covers her head in a “protective” cloth cape.
This woman was so alarmed about the electricity smart meter at her house that she had her electricity cut off, causing her teenage daughter to study by candlelight. Yet she is shown being interviewed by a wireless microphone.
Mobile phones have played important roles in rapid response to life-saving emergencies. Reducing access to wireless technology would have incalculable effects in billions of people’s lives, many profoundly negative.
Exposing people to fearful messages about wifi has been experimentally demonstrated to increase symptom reportage when subjects were later exposed to sham wifi. Such fears can precipitate contact with charlatans readily found on the internet who will come to your house, wave meters around and frighten the gullible into purchasing magic room paint, protective clothing, bed materials and other snake-oil at exorbitant prices.
As exponential improvements in technology improve the lifestyles and well-being of the world’s population, we seem destined to witness an inexorable parallel rise in fear-mongering about these benefits.
There’s a big difference between a 4-digit PIN and a 6-digit PIN.
One consequence of the Apple vs FBI drama has been to shine a spotlight on the security of smartphone lockscreens.
The fact that the FBI managed to hack the iPhone of the San Bernardino shooter without Apple’s help raises questions about whether PIN codes and swipe patterns are as secure as we think.
In fact, they’re probably not as secure as we’d hope. No device as complex as a smartphone or tablet is ever completely secure, but device manufacturers and developers are still doing their best to keep your data safe.
The first line of defence is your lockscreen, typically protected by a PIN code or password.
When it comes to smartphones, the humble four-digit PIN code is the most popular choice. Unfortunately, even ignoring terrible PIN combinations such as “1234”, “1111” or “7777”, four-digit PIN codes are still incredibly weak, since there are only 10,000 unique possible PINs.
If you lose your device, and there are no other protections, it would only take a couple of days for someone to find the correct PIN through brute force (i.e. attempting every combination of four-digit PIN).
A random six-digit PIN will afford you better security, given that there are a million possible combinations. However, with a weak PIN and a bit of time and luck, it’s still possible for someone to bypass this using something like Rubber Ducky, a tool designed to try every PIN combination without triggering other security mechanisms.
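The arithmetic behind those claims is straightforward. In the sketch below, the guessing rate is a pure assumption, chosen only to land roughly in line with the “couple of days” figure above.

```python
# Keyspace sizes for numeric PINs, with a hypothetical manual guessing rate.
GUESS_INTERVAL_SECONDS = 20   # assumption for illustration only

for digits in (4, 6):
    keyspace = 10 ** digits
    worst_case_days = keyspace * GUESS_INTERVAL_SECONDS / 86_400
    print(f"{digits}-digit PIN: {keyspace:,} combinations, "
          f"up to {worst_case_days:,.1f} days to try them all")
```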
Checks and balances
Fortunately, there are other safeguards in place. On iPhones and iPads, for instance, there is a forced delay of 80 milliseconds between PIN or password attempts.
And after 10 incorrect attempts, the device will either time-out for increasing periods of time, lock out completely, or potentially delete all data permanently, depending on your settings.
Similarly, Android devices enforce time delays after a number of passcode or password entries. However, stock Android devices will not delete their contents after any number of incorrect entries.
Swipe patterns are also a good security mechanism, as there are more possible combinations than a four-digit PIN. Additionally, you can’t set your swipe pattern to be the same as your banking PIN or password, so if one is compromised, then the others remain secure.
However, all of these security controls can potentially be thwarted. By simply observing the fingerprints on a device’s display on an unclean screen, it is possible to discern a swipe pattern or passcode. When it comes to touch screen devices: cleanliness is next to secure-ness.
Speaking of fingers, biometrics have increased in popularity recently. Biometric security simply means that traits of the human body can be used to identify someone and therefore unlock something.
In the case of smartphones, there are competing systems that offer various levels of security. Android has facial, voice and fingerprint unlocking, while iOS has fingerprint unlocking only.
Generally, biometrics on their own are not inherently secure. When used as the only protection mechanism, they’re often very unreliable, either allowing too many unauthorised users to access a device (false positives), or by creating a frustrating user experience by locking out legitimate users (false negatives).
Some methods of bypassing these biometric protections have been widely publicised, such as using a gummi bear or PVA glue to bypass Apple’s TouchID, or using a picture to fool facial recognition on Android.
To combat this, Apple disables TouchID after five incorrect fingerprint attempts, requiring a passcode or password entry to re-enable the sensor. Likewise, current versions of Android enforce increasing time-outs after a number of incorrect entries.
These methods help strike a balance between security and usability, which is crucial for making sure smartphones don’t end up hurled at a wall.
Although these lockscreen protections are in place, your device may still contain bugs in its software that can allow attackers to bypass them. A quick search for “smartphone lockscreen bypasses” on your favourite search engine will yield more results than you’d probably care to read.
Lockscreen bypasses are particularly problematic for older devices that are no longer receiving security updates, but new devices are not immune. For example, the latest major iOS release (iOS 9.0) contained a flaw that allowed users to access the device without entering a valid passcode via the Clock app, which is accessible on the lockscreen. Similar bugs have been discovered for Android devices as well.
All of these efforts could be thrown out the window if you install an app that includes malware.
So lockscreens, PIN codes, passwords and swipe patterns should only be considered your first line of defence rather than a foolproof means of securing your device.
Cardiovascular Magnetic Resonance (CMR), sometimes known as cardiac MRI, is a medical imaging technology for the non-invasive assessment of the function and structure of the cardiovascular system. It is derived from, and based on the same basic principles as, Magnetic Resonance Imaging (MRI), but with optimisation for use in the cardiovascular system. These optimisations are principally the use of ECG gating and rapid imaging techniques or sequences. By combining a variety of such techniques into protocols, key functional and morphological features of the cardiovascular system can be assessed.
An example of CMR movies in different orientations of a cardiac tumor - in this case, an atrial myxoma. The full case can be seen [http://www.scmr.org/caseoftheweek/case06-01.cfm here]
History of CMR and Nomenclature
The phenomenon of nuclear magnetic resonance (NMR) was first described in molecular beams (1938) and bulk matter (1946), work later acknowledged by the award of a joint Nobel prize in 1952. Further investigation laid out the principles of relaxation times, leading to nuclear spectroscopy. In 1973, the first simple NMR image was published, and the first medical imaging followed in 1977, entering the clinical arena in the early 1980s. In 1984, NMR medical imaging was renamed MRI. Initial attempts to image the heart were confounded by respiratory and cardiac motion, solved by using cardiac ECG gating, faster scan techniques and breath-hold imaging. Increasingly sophisticated techniques were developed, including cine imaging and techniques to characterise heart muscle as normal or abnormal (fat-infiltrated, oedematous, iron-loaded, acutely infarcted or fibrosed). As MRI became more complex and its application to cardiovascular imaging became more sophisticated, the Society for Cardiovascular Magnetic Resonance ([http://www.scmr.org SCMR]) was set up in 1996, with an academic journal (JCMR) following in 1999, which is going open access in 2008. In a move analogous to the development of 'echocardiography' from cardiac ultrasound, the term 'Cardiovascular Magnetic Resonance' (CMR) was proposed and has gained acceptance as the name for the field.
Physics of CMR
CMR uses the same basic principles as other MRI techniques, with the addition of ECG gating. Most CMR images only 1H (hydrogen) nuclei, which are abundant in human tissue. By using magnetic fields and radiofrequency (RF) pulses, the patient's own 1H nuclei absorb and then emit energy, which can be measured and translated into images, without using ionising radiation. For further information, see MRI, earlier versions of this document, or the following outside links:
[http://www.hull.ac.uk/mri/lectures/gpl_page.html Hull physics lecture series]
[http://www.cis.rit.edu/htbooks/mri/ the basics of MRI]
[http://www.mritutor.org/mritutor/ MRI tutor]
The different techniques in CMR
CMR uses several different techniques within a single scan. The combination of these results in a comprehensive assessment of the heart and cardiovascular system. Examples are below:
Visualising heart muscle scar or fat without using a contrast agent
Typically a sequence called spin-echo is used, which causes the blood to appear black. These are high-resolution still images which, in certain circumstances, identify abnormal myocardium through differences in intrinsic contrast.
A short axis view of the heart showing a movie (cine) next to a spin-echo sequence. In this case, the scan demonstrates features of ARVC with fatty infiltration of the left and right ventricles. The full case can be seen [http://www.scmr.org/caseoftheweek/case07-14.cfm here]
Heart function using cine imaging
Images of the heart may be acquired in real-time with CMR, but the image quality is limited. Instead, most sequences use ECG gating to acquire images at each stage of the cardiac cycle over several heart beats. This technique forms the basis of functional assessment by CMR. Blood typically appears bright in these sequences due to the contrast properties of blood and its rapid flow. The technique can discriminate very well between blood and myocardium. The current technique typically used for this is called balanced steady-state free precession (SSFP), implemented as TrueFISP, b-FFE or Fiesta, depending on scanner manufacturer.
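To make the segmented, ECG-gated idea concrete, the short Python sketch below is purely illustrative: the counts of k-space lines, lines per segment and the 60 bpm heart rate are assumptions chosen for the example, not parameters of any particular scanner or sequence. It shows how the lines needed for one image are spread over several heartbeats, with the same small segment re-acquired at every cardiac phase of each beat so that a full cine can be reconstructed afterwards.

```python
# Illustrative sketch of segmented, ECG-gated cine acquisition.
# All numeric values are assumptions for the example, not scanner settings.

def cine_schedule(k_lines=160, lines_per_segment=8):
    """Map each heartbeat to the k-space lines acquired during it.

    In segmented cine imaging each heartbeat contributes only a small
    'segment' of k-space, and that segment is re-acquired at every cardiac
    phase within the beat; the full k-space for each phase is assembled
    once all segments have been collected.
    """
    heartbeats_needed = -(-k_lines // lines_per_segment)  # ceiling division
    schedule = {}
    for beat in range(heartbeats_needed):
        start = beat * lines_per_segment
        schedule[beat + 1] = list(range(start, min(start + lines_per_segment, k_lines)))
    return heartbeats_needed, schedule


if __name__ == "__main__":
    beats, schedule = cine_schedule()
    # At an assumed heart rate of 60 bpm this corresponds to a ~20 s breath-hold.
    print(f"Heartbeats needed (breath-hold length): {beats}")
    print(f"k-space lines acquired during heartbeat 1: {schedule[1]}")
```

The trade-off the sketch illustrates is the one noted above: acquiring everything within a single heartbeat (real-time imaging) leaves far less time per k-space line, which is why gated, segmented acquisition over several beats gives better image quality at the cost of a breath-hold.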
A 4 chamber view of the heart using SSFP cine imaging. Compare the image orientation (4 chamber) with the short axis view of the movie above
Infarct imaging using contrast
Scar is best seen after giving a contrast agent, typically one containing gadolinium bound to DTPA. With a special sequence, Inversion Recovery (IR), normal heart muscle appears dark, whilst areas of infarction appear bright white.
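As a brief, textbook-level aside on why the inversion time matters (an idealised description that ignores relaxation during the readout itself), the longitudinal magnetisation recovering after the inversion pulse follows:

```latex
M_z(TI) = M_0\left(1 - 2e^{-TI/T_1}\right)
% TI is chosen so that M_z is approximately zero for the T1 of normal myocardium,
% nulling its signal. Gadolinium retained in scar shortens T1, so scarred tissue
% recovers faster and appears bright against the nulled normal muscle.
```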
CMR in the 4 chamber view comparing the cine (left) with the late gadolinium image using inversion recovery (right). The subendocardial infarct is clearly seen. Fat around the heart also appears white.
Perfusion imaging
In angina, the heart muscle is starved of oxygen by a coronary artery narrowing, especially during stress. This appears as a transient perfusion defect when a dose of contrast is given into a vein. Knowing whether a perfusion defect is present, and where it is, helps guide intervention and treatment for coronary artery narrowings.
CMR perfusion. Contrast appears in the right ventricle then left ventricle before blushing into the muscle, which is normal (left) and abnormal (right, an inferior perfusion defect).
Current applications of CMR
In the investigation of cardiovascular disease the physician has a wide variety of tools available. The key disadvantages of CMR are limited availability, expense, operator dependence and a lack of outcome data. The key advantages are image quality, non-invasiveness, accuracy, versatility and no ionising radiation. A good list of current CMR indications can be found [http://www.scmr.org/documents/2004pennell_ehjcmr_indications.pdf here]
Case examples of the use of CMR
For online case examples, see [http://www.scmr.org/caseoftheweek/archive.cfm here]
Specific clinical uses of CMR
A good overview of the clinical indications for CMR can be found [http://www.scmr.org/documents/2004pennell_ehjcmr_indications.pdf here] and [http://www.acc.org/qualityandscience/clinical/pdfs/CCT.CMR.pdf here]
Centres performing CMR
Training in CMR
Training is being increasingly protocolised and is now formal with stages of training and accreditation. Internationally approved training guidelines can be found [http://www.scmr.org/documents/TF12CMR.pdf here] A resource for anyone thinking about CMR as a career can be found [http://www.scmr.org/starting.cfm here]
What it is like having a CMR scan
A good description of the experience of receiving a CMR scan can be found [http://www.radiologyinfo.org/en/info.cfm?pg=cardiacmr&bhcp=1 here]
Manufacturers of CMR scanners
[http://www.gehealthcare.com/usen/mr/index.html GE Healthcare]
[http://www.medical.philips.com/main/clinicalsegments/cardiovascular/portfolio/mri/index.asp Philips medical systems]
[http://www.siemensmedical.com/ Siemens medical solutions, Inc]
External links
* [http://www.scmr.org The Society for Cardiovascular Magnetic Resonance]
* [http://atlas.scmr.org/ An Atlas of normal cardiac structure and function by CMR]
* [http://www.radiologyinfo.org/en/info.cfm?pg=cardiacmr&bhcp=1 Having a CMR scan]
Discussion on Fourth Industrial revolution
We are increasingly hearing words such as automation and artificial intelligence, which are expected to take over human jobs. This is better known as Industry 4.0, or the Fourth Industrial Revolution: the rise of the machines. But before jumping into the details, let's see how we have reached this stage, starting with the First Industrial Revolution.
The First Industrial Revolution took place between the late 1700s and early 1800s. During this period, manufacturing evolved from manual labor performed by people and aided by work animals to a more optimized form of labor that used water- and steam-powered engines and other machine tools. People were afraid that this would take away their jobs, but instead they adapted and developed new skill sets.
Then came the Second Industrial Revolution, with the introduction of steel and the use of electricity in factories. Electricity enabled manufacturers to increase efficiency and helped make factory machinery more mobile. It was during this phase that mass-production concepts like the assembly line were introduced as a way to boost productivity.
Eventually, technology kept developing and the Third Industrial Revolution slowly began to emerge, as manufacturers incorporated more electronic, and eventually computer, technology into their factories. During this period, manufacturers experienced a shift that put less emphasis on analog and mechanical technology and more on digital technology and automation software.
Now we are living in the midst of the Fourth Industrial Revolution, which offers a more comprehensive, interlinked, and holistic approach to manufacturing. It connects the physical with the digital and allows for better collaboration and access across departments, partners, vendors, products, and people.
Since we are in the middle of it, we should look at both its positives and negatives.
On the positive side, advances in medical technology and biomedical science have increased the human life span, online shopping is redefining convenience and the retail experience, and the power of analytics lets us optimize many tasks.
But everything comes with downsides as well. There is a fear that humans will be replaced by machines, and that what is shown in sci-fi movies will soon become reality. Advances in biotechnology could also lead to illegal practices such as gene editing, or implants in babies to make them ready for a competitive life.
People and industries are slowly adapting to these changes, and people are getting a chance to develop their skills and learn new things. We have to consciously build positive values into the technologies we create, think about how they are to be used, and design them with ethical application in mind and in support of collaborative ways of preserving what is important to us. This effort requires all stakeholders, including governments, policymakers, international organizations, regulators, business organizations, academia, and civil society, to work together to steer these powerful emerging technologies in ways that limit risk and create a world that aligns with common goals for the future.
Tan Son Nhut Air Base
|Tan Son Nhut Air Base|
|Part of||Republic of Vietnam Air Force (RVNAF), Pacific Air Forces (USAF), Vietnam People's Air Force (VPAF)|
|Type||Air Force Base|
|Condition||Joint Civil/Military Airport|
|Elevation AMSL||10 m / 33 ft|
Tan Son Nhut Air Base – June 1968
Tan Son Nhut Air Base (Vietnamese: Căn cứ không quân Tân Sơn Nhứt) (1955–1975) was a Republic of Vietnam Air Force (RVNAF) facility. It was located near the city of Saigon in southern Vietnam. The United States used it as a major base during the Vietnam War (1959–1975), stationing Army, Air Force, Navy, and Marine units there. Following the Fall of Saigon, it was taken over as a Vietnam People's Air Force (VPAF) facility and remains in use today.
- 1 Early history
- 2 Republic of Vietnam Air Force use
- 3 Use by the United States
- 3.1 Military Assistance Advisory Group
- 3.2 Air rescue
- 3.3 Miscellaneous units
- 3.4 33rd Tactical Group
- 3.5 6250th Combat Support Group
- 3.6 460th Tactical Reconnaissance Wing
- 3.7 315th Air Commando Wing, Troop Carrier
- 3.8 834th Air Division
- 3.9 377th Air Base Wing
- 4 Post-1975 Vietnam People's Air Force use
- 5 Accident and incidents
- 6 See also
- 7 References
- 8 Other sources
- 9 External links
Tan Son Nhat Airport was built by the French in the 1920s when the French Colonial government of Indochina constructed a small unpaved airport, known as Tan Son Nhat Airfield, in the village of Tan Son Nhat to serve as Saigon's commercial airport. Flights to and from France, as well as within Southeast Asia were available prior to World War II. During World War II, the Imperial Japanese Army used Tan Son Nhat as a transport base. When Japan surrendered in August 1945, the French Air Force flew a contingent of 150 troops into Tan Son Nhat.
After World War II, Tân Sơn Nhất served domestic as well as international flights from Saigon.
In mid-1956 construction of a 7,200-foot (2,200 m) runway was completed and the International Cooperation Administration soon started work on a 10,000-foot (3,000 m) concrete runway. The airfield was run by the South Vietnamese Department of Civil Aviation with the RVNAF as a tenant located on the southwest of the airfield.:123
In 1961, the government of the Republic of Vietnam requested the U.S. Military Assistance Advisory Group (MAAG) to plan for expansion of the Tan Son Nhut airport. A taxiway parallel to the original runway had just been completed by the E.V. Lane company for the U.S. Operations Mission, but parking aprons and connections to the taxiways were required. Under the direction of the U.S. Navy Officer in Charge of Construction RVN, these items were constructed by the American construction company RMK-BRJ in 1962. RMK-BRJ also constructed an air-control radar station in 1962, and the passenger and freight terminals in 1963.:44 In 1967, RMK-BRJ constructed the second 10,000-foot concrete runway.:251
Republic of Vietnam Air Force use
In 1952 a heliport was constructed at the base for use by French Air Force medical evacuation helicopters.
In 1953, Tan Son Nhut started being used as a military air base for the fledgling RVNAF, and in 1956 the headquarters were moved from the center of Saigon to Tan Son Nhut. But even before that time, French and Vietnamese military aircraft were in evidence at Tan Son Nhut.
On 1 July 1955, the RVNAF 1st Transport Squadron equipped with C-47 Skytrains was established at the base. The RVNAF also had a special missions squadron at the base equipped with 3 C-47s, 3 C-45s and 1 L-26.:50 The 1st Transport Squadron would be renamed the 413rd Air Transport Squadron in January 1963.:277
In June 1956 the 2nd Transport Squadron equipped with C-47s was established at the base and the RVNAF established its headquarters there.:275 It would be renamed the 415th Air Transport Squadron in January 1963.:277
In November 1956, by agreement with the South Vietnamese government, the USAF assumed some training and administrative roles of the RVNAF. A full handover of training responsibility took place on 1 June 1957 when the French training contracts expired.:50
On 1 June 1957 the RVNAF 1st Helicopter Squadron was established at the base without equipment. It operated with the French Air Force unit serving the International Control Commission and in April 1958 with the departure of the French it inherited its 10 H-19 helicopters.:50
In December 1962 the 293rd Helicopter Squadron was activated at the base, it was inactivated in August 1964.:277–8
In late 1962 the RVNAF formed the 716th Composite Reconnaissance Squadron initially equipped with 2 C-45 photo-reconnaissance aircraft.:147
In January 1963 the USAF opened an H-19 pilot training facility at the base and by June the first RVNAF helicopter pilots had graduated.:168
In December 1963 the 716th Composite Reconnaissance Squadron was activated at the base, equipped with C-47s and T-28s. The squadron would be inactivated in June 1964 and its mission assumed by the 2nd Air Division, while its pilots formed the 520th Fighter Squadron at Bien Hoa Air Base.:278
In January 1964 all RVNAF units at the base came under the control of the newly established 33rd Tactical Wing.:278
By midyear, the RVNAF had grown to thirteen squadrons; four fighter, four observation, three helicopter, and two C-47 transport. The RVNAF followed the practice of the U.S. Air Force, organizing the squadrons into wings, with one wing located in each of the four corps tactical zones at Cần Thơ Air Base, Tan Son Nhut AB, Pleiku Air Base and Da Nang Air Base.
Command and control center
As the headquarters for the RVNAF, Tan Son Nhut was primarily a command base, with most operational units using nearby Biên Hòa Air Base.
At Tan Son Nhut, the RVNAF's system of command and control was developed over the years with assistance from the USAF. The system handled the flow of aircraft from take-off to target area, and return to the base it was launched from. This was known as the Tactical Air Control System (TACS), and it assured positive control of all areas where significant combat operations were performed. Without this system, it would not have been possible for the RVNAF to deploy its forces effectively where needed.
The TACS was in close proximity to the headquarters of the RVNAF and USAF forces in South Vietnam, and commanders of both Air Forces utilized its facilities. Subordinate to TACS was the Direct Air Support Centers (DASC) assigned to each of corps areas (I DASC – Da Nang AB, DASC Alpha – Nha Trang Air Base, II DASC – Pleiku AB, III DASC – Bien Hoa AB, and IV DASC – Cần Thơ AB). DASCs were responsible for the deployment of aircraft located within their sector in support of ground operations.
Operating under each DASC were numerous Tactical Air Control Party (TACPs), manned by one or more RVNAF/USAF personnel posted with the South Vietnamese Army (ARVN) ground forces. A communications network inked these three levels of command and control, giving the TACS overall control of the South Vietnamese air situation at all times.
Additional information was provided by a radar network that covered all of South Vietnam and beyond, monitoring all strike aircraft.
Another function of Tan Son Nhut Air Base was as an RVNAF recruiting center.
Use in coups
The base was adjacent to the headquarters of the Joint General Staff of South Vietnam, and was a key venue in various military coups, particularly the 1963 coup that deposed the nation's first President Ngô Đình Diệm. The plotters invited loyalist officers to a routine lunch meeting at JGS and captured them in the afternoon of 1 November 1963. The most notable was Colonel Lê Quang Tung, loyalist commander of the ARVN Special Forces, which was effectively a private Ngô family army, and his brother and deputy, Lê Quang Triệu. Later, Captain Nguyễn Văn Nhung, bodyguard of coup leader General Dương Văn Minh, shot the brothers on the edge of the base.
The base was attacked by the VC in a sapper and mortar attack on the morning of 4 December 1966. The attack was repulsed for the loss of 3 US and 3 ARVN killed and 28 VC killed and 4 captured.
1968 Tet Offensive
The base was the target of major VC attacks during the 1968 Tet Offensive. The attack began early on 31 January with greater severity than anyone had expected. When the VC attacked much of the RVNAF was on leave to be with their families during the lunar new year. An immediate recall was issued, and within 72 hours, 90 percent of the RVNAF was on duty.
The main VC attack was made against the western perimeter of the base by 3 VC Battalions. The initial penetration was contained by the base's 377th Security Police Squadron, ad-hoc Army units of Task Force 35, ad-hoc RVNAF units and two ARVN Airborne battalions. The 3rd Squadron, 4th Cavalry Regiment was sent from Củ Chi Base Camp and prevented follow-on forces west of the base from reinforcing the VC inside the base and engaged them in a village and factory west of the base. By 16:30 on 31 January the base was secured. U.S. losses were 22 killed and 82 wounded, ARVN losses 29 killed and 15 wounded, VC losses were more than 669 killed and 26 captured. 14 aircraft were damaged at the base.
Over the next three weeks, the RVNAF flew over 1,300 strike sorties, bombing and strafing PAVN/VC positions throughout South Vietnam. Transport aircraft from Tan Son Nhut's 33d Wing dropped almost 15,000 flares in 12 nights, compared with a normal monthly average of 10,000. Observation aircraft also from Tan Son Nhut completed almost 700 reconnaissance sorties, with RVNAF pilots flying O-1 Bird Dogs and U-17 Skywagons.
At 01:15 on 18 February a VC rocket and mortar attack on the base destroyed 6 aircraft and damaged 33 others and killed one person. A rocket attack the next day hit the civilian air terminal killing 1 person and 6 further rocket/mortar attacks over this period killed another 6 people and wounded 151. On 24 February another rocket and mortar attack damaged base buildings killing 4 US personnel and wounding 21.
On 12 June 1968 a mortar attack on the base destroyed 2 USAF aircraft and killed 1 airman.:180
The Tet Offensive attacks and previous losses due to mortar and rocket attacks on air bases across South Vietnam led the Deputy Secretary of Defense Paul Nitze on 6 March 1968 to approve the construction of 165 "Wonderarch" roofed aircraft shelters at the major air bases. In addition airborne "rocket watch" patrols were established in the Saigon-Biên Hòa area to reduce attacks by fire.:66
Vietnamization and the 1972 Easter Offensive
In 1970, with American units leaving the country, the RVNAF transport fleet was greatly increased at Tan Son Nhut. The RVNAF 33rd and 53rd Tactical Wings were established flying C-123 Providers, C-47s and C-7 Caribous.
In mid 1970 the USAF began training RVNAF crews on the AC-119G Shadow gunship at the base. Other courses included navigation classes and helicopter transition and maintenance training for the CH-47 Chinook.:218–9
By November 1970, the RVNAF took total control of the Direct Air Support Centers (DASCs) at Bien Hoa AB, Da Nang AB and Pleiku AB.
At the end of 1971, the RVNAF were totally in control of command and control units at eight major air bases, supporting ARVN units for the expanded air-ground operations system. In September 1971, the USAF transferred two C-119 squadrons to the RVNAF at Tan Son Nhut.
In 1972, the buildup of the RVNAF at Tan Son Nhut was expanded when two C-130 Hercules squadrons were formed there. In December, the first RVNAF C-130 training facility was established at Tan Son Nhut, enabling the RVNAF to train its own C-130s pilots. As more C-130s were transferred to the RVNAF, older C-123s were returned to the USAF for disposal.
As the buildup of the RVNAF continued, the success of the Vietnamization program was evident during the 1972 Easter Offensive. Responding to the People's Army of Vietnam (PAVN) attack, the RVNAF flew more than 20,000 strike sorties which helped to stem the advance. In the first month of the offensive, transports from Tan Son Nhut ferried thousands of troops and delivered nearly 4,000 tons of supplies throughout the country. The offensive also resulted in additional deliveries of aircraft to the RVNAF under Operation Enhance. Also, fighter aircraft arrived at Tan Son Nhut for the first time in the F-5A/B Freedom Fighter and the F-5E Tiger II. The F-5s were subsequently transferred to Bien Hoa and Da Nang ABs.
The Paris Peace Accords of 1973 brought an end to the United States advisory capacity in South Vietnam. In its place, as part of the agreement, the Americans retained a Defense Attaché Office (DAO) at Tan Son Nhut Airport, with small field offices at other facilities around the country. The technical assistance provided by the personnel of the DAOs and by civilian contractors was essential to the RVNAF, however, because of the cease-fire agreement, the South Vietnamese could not be advised in any way on military operations, tactics or techniques of employment. It was through the DAO that the American/South Vietnamese relationship was maintained, and it was primarily from this source that information from within South Vietnam was obtained. The RVNAF provided statistics with regards to the military capability of their units to the DAO, however the accuracy of this information was not always reliable.
From the Easter Offensive of 1972, it was clear that without United States aid, especially air support, the ARVN would not be able to defend itself against continuing PAVN attacks. This was demonstrated at the fighting around Pleiku, An Lộc and Quảng Trị where the ARVN would have been defeated without continuous air support, mainly supplied by the USAF. The ARVN relied heavily on air support, and with the absence of the USAF, the full responsibility fell on the RVNAF. Although equipped with large numbers of Cessna A-37 Dragonfly and F-5 attack aircraft to conduct effective close air support operations, during the 1972 offensive, heavy bombardment duty was left to USAF aircraft.
As part of the Paris Peace Accords, a Joint Military Commission was established and VC/PAVN troops were deployed across South Vietnam to oversee the departure of US forces and the implementation of the ceasefire. 200-250 VC/PAVN soldiers were based at Camp Davis (see Davis Station below) at the base from March 1973 until the fall of South Vietnam.
Numerous violations of the Paris Peace Accords were committed by the North Vietnamese, beginning almost as soon as the United States withdrew its last personnel from South Vietnam by the end of March 1973. The North Vietnamese and the Provisional Revolutionary Government of South Vietnam continued their attempt to overthrow President Nguyễn Văn Thiệu and remove the U.S.-supported government. The U.S. had promised Thiệu that it would use airpower to support his government. On 14 January 1975 Secretary of Defense James Schlesinger stated that the U.S. was not living up to its promise that it would retaliate in the event North Vietnam tried to overwhelm South Vietnam.
When North Vietnam invaded in March 1975, the promised American intervention never materialized. Congress reflected the popular mood, halting the bombing in Cambodia effective 15 July 1973, and reducing aid to South Vietnam. Since Thiệu intended to fight the same kind of war he always had, with lavish use of firepower, the cuts in aid proved especially damaging.
In early 1975 North Vietnam realized the time was right to achieve its goal of re-uniting Vietnam under communist rule, launching a series of small ground attacks to test U.S. reaction.
On 8 January the North Vietnamese Politburo ordered a PAVN offensive to "liberate" South Vietnam by cross-border invasion. The general staff plan for the invasion of South Vietnam called for 20 divisions, it anticipated a two-year struggle for victory.
By 14 March, South Vietnamese President Thiệu decided to abandon the Central Highlands region and two northern provinces of South Vietnam and ordered a general withdrawal of ARVN forces from those areas. Instead of an orderly withdrawal, it turned into a general retreat, with masses of military and civilians fleeing, clogging roads and creating chaos.
On 30 March 100,000 South Vietnamese soldiers surrendered after being abandoned by their commanding officers. The large coastal cities of Da Nang, Qui Nhơn, Tuy Hòa and Nha Trang were abandoned by the South Vietnamese, yielding the entire northern half of South Vietnam to the North Vietnamese.
By late March the US Embassy began to reduce the number of US citizens in Vietnam by encouraging dependents and non-essential personnel to leave the country by commercial flights and on Military Airlift Command (MAC) C-141 and C-5 aircraft, which were still bringing in emergency military supplies. In late March, two or three of these MAC aircraft were arriving each day and were used for the evacuation of civilians and Vietnamese orphans.:24 On 4 April a C-5A aircraft carrying 250 Vietnamese orphans and their escorts suffered explosive decompression over the sea near Vũng Tàu and made a crash-landing while attempting to return to Tan Son Nhut; 153 people on board died in the crash.:30–31
As the war in South Vietnam entered its conclusion, the pilots of the RVNAF flew sortie after sortie, supporting the retreating ARVN after it abandoned Cam Ranh Bay on 14 April. For two days after the ARVN left the area, the Wing Commander at Phan Rang Air Base fought on with the forces under his command. Airborne troops were sent in for one last attempt to hold the airfield, but the defenders were finally overrun on 16 April and Phan Rang Air Base was lost.
On 22 April Xuân Lộc fell to the PAVN after a two-week battle with the ARVN 18th Division which inflicted over 5000 PAVN casualties and delayed the Ho Chi Minh Campaign for two weeks. With the fall of Xuân Lộc and the capture of Bien Hoa Air Base in late April 1975 it was clear that South Vietnam was about to fall to the PAVN.
By 22 April, 20 C-141 and 20 C-130 flights a day were flying evacuees out of Tan Son Nhut to Clark Air Base,:60 some 1,000 miles away in the Philippines. On 23 April President Ferdinand Marcos of the Philippines announced that no more than 2,500 Vietnamese evacuees would be allowed in the Philippines at any one time, further increasing the strain on MAC, which now had to move evacuees out of Saigon and move some 5,000 evacuees from Clark Air Base on to Guam, Wake Island and Yokota Air Base.:62 President Thiệu and his family left Tan Son Nhut on 25 April on a USAF C-118 to go into exile in Taiwan.:67 Also on 25 April the Federal Aviation Administration banned commercial flights into South Vietnam. This directive was subsequently reversed; some operators had ignored it anyway. In any case this effectively marked the end of the commercial airlift from Tan Son Nhut.:66
On 27 April PAVN rockets hit Saigon and Cholon for the first time since the 1973 ceasefire. It was decided that from this time only C-130s would be used for the evacuation due to their greater maneuverability. There was relatively little difference between the cargo loads of the two aircraft, C-141s had been loaded with up to 316 evacuees while C-130s had been taking off with in excess of 240.:69
On 28 April at 18:06, three A-37 Dragonflies piloted by former RVNAF pilots, who had defected to the Vietnamese People's Air Force at the fall of Da Nang, dropped six Mk81 250 lb bombs on the base damaging aircraft. RVNAF F-5s took off in pursuit, but they were unable to intercept the A-37s.:70 C-130s leaving Tan Son Nhut reported receiving PAVN .51 cal and 37 mm anti-aircraft (AAA) fire,:71–72 while sporadic PAVN rocket and artillery attacks also started to hit the airport and air base. C-130 flights were stopped temporarily after the air attack but resumed at 20:00 on 28 April.:72
At 03:58 on 29 April, C-130E, #72-1297, flown by a crew from the 776th Tactical Airlift Squadron, was destroyed by a 122 mm rocket while taxiing to pick up refugees after offloading a BLU-82 at the base. The crew evacuated the burning aircraft on the taxiway and departed the airfield on another C-130 that had previously landed. This was the last USAF fixed-wing aircraft to leave Tan Son Nhut.:79
At dawn on 29 April the RVNAF began to haphazardly depart Tan Son Nhut Air Base as A-37s, F-5s, C-7s, C-119s and C-130s departed for Thailand while UH-1s took off in search of the ships of Task Force 76.:81 Some RVNAF aircraft stayed to continue to fight the advancing PAVN. One AC-119 gunship had spent the night of 28/29 April dropping flares and firing on the approaching PAVN. At dawn on 29 April two A-1 Skyraiders began patrolling the perimeter of Tan Son Nhut at 2,500 feet (760 m) until one was shot down, presumably by an SA-7 missile. At 07:00 the AC-119 was firing on PAVN to the east of Tan Son Nhut when it too was hit by an SA-7 and fell in flames to the ground.:82
At 08:00 on 29 April Lieutenant General Trần Văn Minh, commander of the RVNAF and 30 of his staff arrived at the DAO Compound demanding evacuation, signifying the complete loss of RVNAF command and control.:85–87 At 10:51 on 29 April, the order was given by CINCPAC to commence Operation Frequent Wind, the helicopter evacuation of US personnel and at-risk Vietnamese.:183
In the final evacuation, over a hundred RVNAF aircraft arrived in Thailand, including twenty-six F-5s, eight A-37s, eleven A-1s, six C-130s, thirteen C-47s, five C-7s, and three AC-119s. Additionally close to 100 RVNAF helicopters landed on U.S. ships off the coast, although at least half were jettisoned. One O-1 managed to land on the USS Midway, carrying a South Vietnamese major, his wife, and five children.
The ARVN 3rd Task Force, 81st Ranger Group commanded by Maj. Pham Chau Tai defended Tan Son Nhut and they were joined by the remnants of the Loi Ho unit. At 07:15 on 30 April the PAVN 24th Regiment approached the Bay Hien intersection 1.5 km from the base's main gate. The lead T-54 was hit by M67 recoilless rifle fire and then the next T-54 was hit by a shell from an M48 tank. The PAVN infantry moved forward and engaged the ARVN in house to house fighting, forcing them to withdraw to the base by 08:45.
The PAVN then sent 3 tanks and an infantry battalion to assault the main gate and they were met by intensive anti-tank and machine gun fire, knocking out the 3 tanks and killing at least 20 PAVN soldiers. The PAVN tried to bring forward an 85mm antiaircraft gun but the ARVN knocked it out before it could start firing. The PAVN 10th Division ordered 8 more tanks and another infantry battalion to join the attack, but as they approached the Bay Hien intersection they were hit by an airstrike from RVNAF jets operating from Binh Thuy Air Base which destroyed 2 T-54s. The 6 surviving tanks arrived at the main gate at 10:00 and began their attack, with 2 being knocked out by antitank fire in front of the gate and another destroyed as it attempted a flanking manoeuvre.
At approximately 10:30 Maj. Pham heard of the surrender broadcast of President Dương Văn Minh and went to the ARVN Joint General Staff Compound to seek instructions; he called General Minh, who told him to prepare to surrender. Pham reportedly told Minh "If Viet Cong tanks are entering Independence Palace we will come down there to rescue you sir." Minh refused Pham's suggestion and Pham then told his men to withdraw from the base gates, and at 11:30 the PAVN entered the base.:490–1
Following the war, Tan Son Nhut Air Base was taken over as a base for the Vietnam People's Air Force.
Known RVNAF units (June 1974)
Tan Son Nhut Air Base was the Headquarters of the RVNAF. It was also the headquarters of the RVNAF 5th Air Division.
- 33d Tactical Wing
- 53d Tactical Wing
Use by the United States
During the Vietnam War Tan Son Nhut Air Base was an important facility for both the USAF and the RVNAF. The base served as the focal point for the initial USAF deployment and buildup in South Vietnam in the early 1960s. Tan Son Nhut was initially the main air base for Military Airlift Command flights to and from South Vietnam, until other bases such as Bien Hoa and Cam Ranh opened in 1966. After 1966, with the establishment of the 7th Air Force as the main USAF command and control headquarters in South Vietnam, Tan Son Nhut functioned as a Headquarters base, a Tactical Reconnaissance base, and as a Special Operations base. With the drawdown of US forces in South Vietnam after 1971, the base took on a myriad of organizations transferred from deactivated bases across South Vietnam.
Between 1968 and 1974, Tan Son Nhut Airport was one of the busiest military airbases in the world. Pan Am schedules from 1973 showed Boeing 747 service was being operated four times a week to San Francisco via Guam and Manila. Continental Airlines operated up to 30 Boeing 707 military charters per week to and from Tan Son Nhut Airport during the 1968–74 period.
It was from Tan Son Nhut Air Base that the last U.S. Airman left South Vietnam in March 1973. The Air Force Post Office (APO) for Tan Son Nhut Air Base was APO San Francisco, 96307.
Military Assistance Advisory Group
On 13 May 1961 a 92-man unit of the Army Security Agency, operating under cover of the 3rd Radio Research Unit (3rd RRU), arrived at Tan Son Nhut AB and established a communications intelligence facility in disused RVNAF warehouses on the base. This was the first full deployment of a US Army unit to South Vietnam. On 21 December 1961 SP4 James T. Davis of the 3rd RRU was operating a mobile PRD-1 receiver with an ARVN unit near Cầu Xáng when they were ambushed by VC and Davis was killed, becoming one of the first Americans killed in the Vietnam War.:49–50 In early January 1962 the 3rd RRU's compound at Tan Son Nhut was renamed Davis Station.:54
On 1 June 1966 3rd RRU was redesignated the 509th Radio Research Group. The 509th RR Group continued operations until 7 March 1973, when they were among the last US units to leave South Vietnam.
507th Tactical Control Group
In late September 1961, the first permanent USAF unit, the 507th Tactical Control Group from Shaw Air Force Base deployed sixty-seven officers and airmen to Tan Son Nhut to install MPS-11 search and MPS-16 height-finding radars and began monitoring air traffic and training of RVNAF personnel to operate and service the equipment. Installation of the equipment commenced on 5 October 1961 and the unit would eventually grow to 314 assigned personnel. This organization formed the nucleus of South Vietnam's tactical air control system.:74
Tactical Reconnaissance Mission
On 18 October 1961, four RF-101C Voodoos and a photo processing unit from the 15th Tactical Reconnaissance Squadron of the 67th Tactical Reconnaissance Wing, based at Yokota AB Japan, arrived at Tan Son Nhut, with the reconnaissance craft flying photographic missions over South Vietnam and Laos from 20 October under Operation Pipe Stem.:74 The RF-101s would depart in January 1962 leaving Detachment 1, 15th tactical Reconnaissance Squadron to undertake photo-processing.:276
In December 1962, following the signing of the International Agreement on the Neutrality of Laos, which banned aerial reconnaissance over Laos, all 4 Able Marble RF-101Cs moved to the base from Don Muang Royal Thai Air Force Base.:147–8
The 67th TRW was soon followed by detachments of the 15th Tactical Reconnaissance Squadron of the 18th Tactical Fighter Wing, based at Kadena AB, Okinawa, which also flew RF-101 reconnaissance missions over Laos and South Vietnam, first from bases at Udorn Royal Thai Air Force Base, Thailand from 31 March 1965 to 31 October 1967 and then from South Vietnam. These reconnaissance missions lasted from November 1961 through the spring of 1964.
RF-101Cs flew pathfinder missions for F-100s during Operation Flaming Dart, the first USAF strike against North Vietnam on 8 February 1965. They initially operated out of South Vietnam, but later flew most of their missions over North Vietnam out of Thailand. Bombing missions against the North required a large amount of photographic reconnaissance support, and by the end of 1967, all but one of the Tactical Air Command RF-101C squadrons were deployed to Southeast Asia.
The reconnaissance Voodoos at Tan Son Nhut were incorporated into the 460th Tactical Reconnaissance Wing in February 1966. 1 RF-101C was destroyed in a sapper attack on Tan Son Nhut AB. The last 45th TRS RF-101C left Tan Son Nhut on 16 November 1970.
The need for additional reconnaissance assets, especially those capable of operating at night, led to the deployment of 2 Martin RB-57E Canberra Patricia Lynn reconnaissance aircraft of the 6091st Reconnaissance Squadron on 7 May 1963.:168 The forward nose section of the RB-57Es were modified to house a KA-1 36-inch forward oblique camera and a low panoramic KA-56 camera used on the Lockheed U-2. Mounted inside the specially configured bomb bay door was a KA-1 vertical camera, a K-477 split vertical day-night camera, an infrared scanner, and a KA-1 left oblique camera. The Detachment flew nighttime reconnaissance missions to identify VC base camps, small arms factories, and storage and training areas. The Patricia Lynn operation was terminated in mid-1971 with the inactivation of the 460th TRW and the four surviving aircraft returned to the United States.:254
On 20 December 1964 Military Assistance Command, Vietnam (MACV) formed the Central Target Analysis and Research Center at the base as a unit of MACV J-2 (Intelligence) to coordinate Army and USAF infrared reconnaissance.:245
On 11 October 1961, President John F. Kennedy directed, in NSAM 104, that the Defense Secretary "introduce the Air Force 'Jungle Jim' Squadron into Vietnam for the initial purpose of training Vietnamese forces.":80 The 4400th Combat Crew Training Squadron was to proceed as a training mission and not for combat. The unit would be officially titled 4400th Combat Crew Training Squadron, code named Farm Gate. In mid-November the first 8 Farm Gate T-28s arrived at the base from Clark Air Base.:81 At the same time Detachments 7 and 8, 6009th Tactical Support Group were established at the base to support operations.:81 On 20 May these detachments were redesignated the 6220th Air Base Squadron.:101
In February 1963 4 RB-26C night photo-reconnaissance aircraft joined the Farm Gate planes at the base.:148
Tactical Air Control Center
The establishment of a country-wide tactical air control center was regarded as a priority for the effective utilisation of the RVNAF's limited strike capabilities, in addition an air operations center for central planning of air operations and a subordinate radar reporting center were also required. From 2–14 January the 5th Tactical Control Group was deployed to the base, beginning operations on 13 January 1962.:105–6
In March 1963 MACV formed a flight service center and network at the base for the control of all US military flights in South Vietnam.:160
On 6 December 1961, the Defense Department ordered the C-123 equipped 346th Troop Carrier Squadron (Assault) to the Far East for 120 days temporary duty. On 2 January 1962 the first of 16 C-123s landed at the base commencing Operation Mule Train to provide logistical support to US and South Vietnamese forces.:108
In March 1962 personnel from the 776th Troop Carrier Squadron, began replacing the temporary duty personnel. 10 of the C-123s were based at Tan Son Nhut, 2 at Da Nang Air Base and 4 at Clark Air Base.:108:165
Additional USAF personnel arrived at Tan Son Nhut in early 1962 after the RVNAF transferred two dozen seasoned pilots from the 1st Transportation Group at Tan Son Nhut to provide aircrews for the newly activated 2nd Fighter Squadron then undergoing training at Bien Hoa AB. This sudden loss of qualified C-47 pilots brought the 1st Transportation Group's airlift capability dangerously low. In order to alleviate the problem, United States Secretary of Defense Robert McNamara, on the recommendation of MAAG Vietnam, ordered thirty USAF pilots temporarily assigned to the RVNAF to serve as C-47 co-pilots. This influx of U.S. personnel quickly returned the 1st Transportation Group to full strength.:66–82
Unlike the USAF Farm Gate personnel at Bien Hoa Air Base, the C-47 co-pilots actually became part of the RVNAF operational structure – though still under U.S. control. Because of their rather unusual situation, these pilots soon adopted the very unofficial nickname, The Dirty Thirty. In a sense they were the first U.S. airmen actually committed to combat in Vietnam, rather than being assigned as advisors or support personnel. The original Dirty Thirty pilots eventually rotated home during early 1963 and were replaced by a second contingent of American pilots. This detachment remained with the RVNAF until December 1963 when they were withdrawn from Vietnam.
509th Fighter-Interceptor Squadron
Starting on 21 March 1962 under Project Water Glass and later remaining under Project Candy Machine, the 509th Fighter-Interceptor Squadron began rotating F-102A Delta Dagger interceptors to Tan Son Nhut Air Base from Clark AB on a rotating basis to provide air defense of the Saigon area in the event of a North Vietnamese air attack. F-102s and TF-102s (two-seat trainer version) were deployed to Tan Son Nhut initially because ground radar sites frequently painted small aircraft penetrating South Vietnamese airspace.:129–31
The F-102, a supersonic, high-altitude interceptor designed to intercept Soviet bombers, was given the mission of intercepting, identifying and, if necessary, destroying small aircraft flying from treetop level to 2,000 ft at speeds less than the final approach landing speed of the F-102. The TF-102, employing two pilots with one acting solely as radar intercept operator, was considered safer and more efficient as a low-altitude interceptor.:131 The T/F-102s would alternate with US Navy AD-5Qs.:277 In May 1963, due to overcrowding at the base and the low probability of air attack, the T/F-102s and AD-5Qs were withdrawn to Clark AB, from where they could redeploy to Tan Son Nhut on 12–24 hours' notice.:169–70
Before the rotation ended in July 1970, pilots and F-102 aircraft from other Far East squadrons were used in the deployment.
In January 1962 5 USAF personnel from the Pacific Air Rescue Center were assigned to the base to establish a Search and Rescue Center, without having any aircraft assigned they were dependent on support from US Army advisers in each of South Vietnam's four military corps areas to use US Army and Marine Corps helicopters.:38 In April 1962 the unit was designated Detachment 3, Pacific Air Rescue Center.:39
On 1 July 1965 Detachment 3 was redesignated the 38th Air Rescue Squadron and activated with its headquarters at the base and organized to control search and rescue detachments operating from bases in South Vietnam and Thailand.:73 Detachment 14, an operational base rescue element, was later established at the base.:113
On 1 July 1971 the entire 38th ARRS was inactivated. Local base rescue helicopters and their crews then became detachments of the parent unit, the 3d Aerospace Rescue and Recovery Group.:113
In April 1965 a detachment of the 9th Tactical Reconnaissance Squadron comprising 4 RB–66Bs and 2 EB–66Cs arrived at the base. The RB–66Bs were equipped with night photo and infrared sensor equipment and began reconnaissance missions over South Vietnam, while the EB–66Cs began flying missions against North Vietnamese air defense radars. By the end of May, two more EB–66Cs arrived at the base and they all then redeployed to Takhli Royal Thai Air Force Base.:116
In mid-May 1965, following the disaster at Bien Hoa the 10 surviving B-57 bombers were transferred to Tan Son Nhut AB and continued to fly sorties on a reduced scale until replacement aircraft arrived from Clark AB. In June 1965, the B-57s were moved from Tan Son Nhut AB to Da Nang AB.:45
33rd Tactical Group
On 8 July 1963 the units at the base were organized as the 33d Tactical Group, with subordinate units being the 33rd Air Base Squadron, the 33rd Consolidated Aircraft Maintenance Squadron and the Detachment 1 reconnaissance elements. The Group's mission was to maintain and operate base support facilities at Tan Son Nhut, supporting the 2d Air Division and subordinate units by performing reconnaissance.:171
505th Tactical Air Control Group
The 505th Tactical Air Control Group was assigned to Tan Son Nhut on 8 April 1964. The Unit was primarily responsible for controlling the tactical air resources of the US and its allies in South Vietnam, Thailand, and to some extent Cambodia and Laos. Carrying out the mission of providing tactical air support required two major components, radar installations and forward air controllers (FACs).
The radar sites provided flight separation for attack and transport aircraft which took the form of flight following and, in some cases control by USAF Weapons Directors. FACs had the critical job of telling tactical fighters where to drop their ordnance. FAC's were generally attached to either US Army or ARVN units and served both on the ground and in the air.
Squadrons of the 505th located at Tan Son Nhut AB were:
- 619th Tactical Control Squadron activated at the base on 8 April 1964:278 It was responsible for operating and maintaining air traffic control and radar direction-finding equipment for the area from the Mekong Delta to Buôn Ma Thuột in the Central Highlands with detachments at various smaller airfields throughout its operational area. It remained operational until 15 March 1973.
- 505th Tactical Control Maintenance Squadron
Close air support
Following the introduction of US ground combat units in mid-1965, two F-100 squadrons were deployed to Tan Son Nhut AB to provide close air support for US ground forces:
- 481st Tactical Fighter Squadron, 29 June 1965 – 1 January 1966:55
- 416th Tactical Fighter Squadron, 1 November 1965 – 15 June 1966
The 481st returned to the United States; the 416th returned to Bien Hoa.
6250th Combat Support Group
The first tasks facing the USAF, however, were to set up a workable organizational structure in the region, improve the area's inadequate air bases, create an efficient airlift system, and develop equipment and techniques to support the ground battle.
Starting in 1965, the USAF adjusted its structure in Southeast Asia to absorb incoming units. Temporarily deployed squadrons became permanent in November. A wing structure replaced the groups. On 8 July 1965, the 33d Tactical Group was redesignated the 6250th Combat Support Group.
The number of personnel at Tan Son Nhut AB increased from 7780 at the beginning of 1965 to over 15,000 by the end of the year, placing substantial demands for accommodation and basic infrastructure.:169–70
On 14 November 1965 the 4th Air Commando Squadron equipped with 20 AC-47 Spooky gunships arrived at the base and was assigned to the 6250th Group. The aircraft were soon deployed to forward operating locations at Binh Thuy, Da Nang, Nha Trang and Pleiku Air Bases.:35 In May 1966 the 4th Air Commando Squadron moved its base to Nha Trang AB where it came under the control of the 14th Air Commando Wing.:36
460th Tactical Reconnaissance Wing
On 18 February 1966 the 460th Tactical Reconnaissance Wing was activated.:254 Its headquarters were shared with the Seventh Air Force Headquarters and MACV. When it stood up, the 460th TRW, alone, was responsible for the entire reconnaissance mission, both visual and electronic, throughout the whole theater. On 18 February 1966 the wing began activities with 74 aircraft of various types. By the end of June 1966, that number climbed to over 200 aircraft. When the 460th TRW stood up, the Wing gained several flying units at Tan Son Nhut:
- 16th Tactical Reconnaissance Squadron (RF-4C):253:205
- 20th Tactical Reconnaissance Squadron: 12 November 1965 – 1 April 1966 (RF-101C):253
- Detachment 1 of the 460th Tactical Reconnaissance Wing
On 15 October 1966, the 460th TRW assumed aircraft maintenance responsibilities for Tan Son Nhut AB, including depot-level aircraft maintenance for all USAF organizations in South Vietnam.:254 In addition to the reconnaissance operations, the 460th TRW's base flight operated in-theater transport service for Seventh Air Force and other senior commanders throughout South Vietnam. The base flight operated T-39A Saberliners, VC-123B Providers (also known as the "White Whale"), and U-3Bs between 1967 and 1971.
- 45th Tactical Reconnaissance Squadron: 30 March 1966 – 31 December 1970 (RF-101C Tail Code: AH):253
- 12th Tactical Reconnaissance Squadron: 2 September 1966 – 31 August 1971 (RF-4C Tail Code: AC):253
On 18 September 1966, the 432d Tactical Reconnaissance Wing was activated at Takhli Royal Thai Air Force Base, Thailand.:226 After the 432d TRW activated it took control of the reconnaissance squadrons in Thailand. With the activation of the 432d TRW, the 460th TRW was only responsible for RF-101 and RF-4C operations.
In 1970 the need for improved coordinate data of Southeast Asia for targeting purposes led to Loran-C-equipped RF–4Cs taking detailed photographs of target areas which were matched with the Loran coordinates of terrain features on the photo maps to calculate the precise coordinates. This information was converted into a computer program which by mid-1971 was used by the 12th Reconnaissance Intelligence Technical Squadron at the base for targeting.
A few months after the 460th TRW's activation, two squadrons activated on 8 April 1966 as 460th TRW Det 2:
- 360th Tactical Electronic Warfare Squadron: 8 April 1966 – 31 August 1971 (EC-47N/P/Q Tail Code: AJ):128:253
- 361st Tactical Electronic Warfare Squadron: 8 April 1966 – 31 August 1971 (EC-47N/P/Q Tail Code: AL) (Nha Trang Air Base):253
- 362d Tactical Electronic Warfare Squadron: 1 February 1967 – 31 August 1971 (EC-47N/P/Q Tail Code: AN) (Pleiku Air Base):254
Project Hawkeye conducted radio direction finding (RDF), whose main targets were VC radio transmitters. Before this program, RDF involved tracking the signals on the ground. Because this exposed the RDF team to ambushes, both the US Army and USAF began to look at airborne RDF. While the US Army used U-6 Beaver and U-8 Seminole aircraft for its own version of the Hawkeye platform, the USAF modified several C-47 Skytrains.
Project Phyllis Ann also used modified C-47s, however, the C-47s for this program were highly modified with an advanced navigational and reconnaissance equipment. On 4 April 1967, project Phyllis Ann changed to become Compass Dart. On 1 April 1968, Compass Dart became Combat Cougar. Because of security concerns the operation's name changed two more times first to Combat Cross and then to Commando Forge.
Project Drillpress also used modified C-47s, listening into VC/PAVN traffic and collected intelligence from it. This data gave insights into the plans and strategy of both the VC and the PAVN. Information from all three projects contributed in a major way to the intelligence picture of the battlefield in Vietnam. In fact about 95 percent of the Arc Light strikes conducted in South Vietnam were based, at least partially, on the data from these three programs. On 6 October 1967, Drillpress changed to Sentinel Sara.
The US went to great lengths to prevent this equipment from falling into enemy hands: when an EC-47 from the 362d TEWS crashed on 22 April 1970, members of an explosive ordnance unit policed the area, destroying anything they found, and six F-100 tactical air sorties hit the area to be sure.
Detachments of these squadrons operated from different locations, including bases in Thailand. Each of the main squadrons and their detachments moved at least once due to operational and/or security reasons. Personnel operating the RDF and signal intelligence equipment in the back of the modified EC-47s were part of the 6994th Security Squadron.
On 1 June 1969 the unit transferred to become 360th TEWS Det 1.
As the Vietnamization program began, Vietnamese crews began flying with EC-47 crews from the 360th TEWS and 6994th SS, on 8 May 1971, to get training on operating the aircraft and its systems. The wing was inactivated in-place on 31 August 1971. Decorations awarded to the wing for its Vietnam War service include::254
- Presidential Unit Citation: 18 February 1966 – 30 June 1967; 1 September 1967 – 1 July 1968; 11 July 1968 – 31 August 1969; 1 February – 31 March 1971.
- Air Force Outstanding Unit Award with Combat "V" Device: 1 July 1969 – 30 June 1970; 1 July 1970 – 30 June 1971.
- Republic of Vietnam Gallantry Cross with Palm: 1 August 1966 – 31 August 1971.
315th Air Commando Wing, Troop Carrier
In October 1962, there began what became known as the Southeast Asia Airlift System. Requirements were forecast out to 25 days, and these requirements were matched against available resources.:246 In September 1962 Headquarters 6492nd Combat Cargo Group (Troop Carrier) and the 6493rd Aerial Port Squadron were organized and attached to the 315th Air Division, based at Tachikawa AB.:277:106 On 8 December 1962 the 315th Troop Carrier Group was activated, replacing the 6492nd Combat Cargo Group, and became responsible for all in-country airlift in South Vietnam, including control over all USAF airlift assets.:163–4 On the same date the 8th Aerial Port Squadron replaced the 6493rd Aerial Port Squadron.:107 The 315th Group was assigned to the 315th Air Division, but came under the operational control of MACV through the 2d Air Division.:246
On 8 March 1965 the 315th Troop Carrier Group was redesignated the 315th Air Commando Group.:26 The 315th Air Commando Group was re-designated the 315th Air Commando Wing on 8 March 1966.:163–4
Squadrons of the 315th ACW/TC were:
- 12th Air Commando Squadron (Defoliation), 15 October 1966 – 30 September 1970 (Bien Hoa) (UC-123 Provider):164
- Det 1, 834th Air Division, 15 October 1966 – 1 December 1971 (Tan Son Nhut) (C-130B Hercules)
- 19th Air Commando Squadron 8 March 1966 – 10 June 1971 (Tan Son Nhut) (C-123 Provider):164 (including 2 Royal Thai Air Force-operated C-123s named Victory Flight):411–2
- 309th Air Commando Squadron 8 March 1966 – 31 July 1970 (Phan Rang) (C-123):164
- 310th Air Commando Squadron 8 March 1966 – 15 January 1972 (Phan Rang) (C-123):164
- 311th Air Commando Squadron 8 March 1966 – 5 October 1971 (Phan Rang) (C-123):164
- Det 1., HQ 315th Air Commando Wing, Troop Carrier 1 August – 15 October 1966
- Det 5., HQ 315th Air Division (Combat Cargo) 8 March – 15 October 1966:164
- Det 6., HQ 315th Air Division (Combat Cargo) (8 March – 15 October 1966):164
- 903rd Aeromedical Evacuation Squadron 8 July 1966:399
- RAAF Transport Flight, Vietnam (RTFV) 8 March – 15 October 1966:164
The unit also performed C-123 airlift operations in Vietnam. Operations included aerial movement of troops and cargo, flare drops, aeromedical evacuation, and air-drops of critical supplies and paratroops:165
Operation Ranch Hand
The 315th ACG was responsible for Operation Ranch Hand Defoliant operations missions. After some modifications to the aircraft (which included adding armor for the crew), 3 C-123B Provider aircraft arrived at the base on 7 January 1962 under the code name Ranch Hand.:113
The 315th ACW was transferred to Phan Rang Air Base on 14 June 1967.
834th Air Division
On 15 October 1966 the 834th Airlift Division was assigned, without personnel or equipment, to Tan Son Nhut AB to join the Seventh Air Force, providing an intermediate command and control organization and also acting as host unit for the USAF forces at the base.:146:191
The 315th Air Commando Wing and 8th Aerial Port Squadron were assigned to the 834th Division.:146:164 Initially the 834th AD had a strength of twenty-seven officers and twenty-one airmen, all of whom were on permanent assignment to Tan Son Nhut.
The Air Division served as a single manager for all tactical airlift operations in South Vietnam, using air transport to haul cargo and troops, which were air-landed or air-dropped as combat needs dictated, through December 1971. The 834th Air Division became the largest tactical airlift force in the world and was capable of performing a variety of missions. In addition to airlift of cargo and personnel and RVNAF training, its missions and activities included Ranch Hand defoliation and insecticide spraying, psychological leaflet distribution, helicopter landing zone preparation, airfield survey and the operation of aerial ports.
Units it directly controlled were:
- 315th Air Commando (later, 315th Special Operations; 315th Tactical Airlift) Wing: 15 October 1966 – 1 December 1971:164
- Located at: Tan Son Nhut AB; later Phan Rang AB (15 June 1967 – 1 December 1971) UC-123 Provider. Composed of four C-123 squadrons with augmentation by C-130 Hercules transports from the 315th Air Division, Tachikawa AB, Japan.
- 2 C-123 Squadrons (32 a/c) at Tan Son Nhut AB;
- C-130B aircraft assignments were 23 aircraft by 1 November 1966:176
- 483d Troop Carrier (later, 483d Tactical Airlift) Wing: 15 October 1966 – 1 December 1971:268
- 2d Aerial Port Group (Tan Son Nhut)
- 8th Aerial Port Squadron, Tan Son Nhut (16 detachments)
- Detachments were located at various points where airlift activity warranted continuous but less extensive aerial port services. Aerial port personnel loaded, unloaded, and stored cargo and processed passengers at each location.
In addition, the 834th supervised transport operations (primarily C-47s) of the RVNAF, 6 DHC-4 Wallaby transports operated by the RAAF 35 Squadron at Vũng Tàu Army Airfield and 2 Republic of Korea Air Force transport unit C-46 Commandos from 29 July 1967, later replaced by C-54s.:415–6 The 834th's flying components also performed defoliation missions, propaganda leaflet drops, and other special missions.
In late 1969 C Flight, 17th Special Operations Squadron equipped with 5 AC-119G gunships was deployed at the base.:203 By the end of 1970 this Flight would grow to 9 AC-119Gs to support operations in Cambodia.:219
During its last few months, the 834th worked toward passing combat airlift control to Seventh Air Force. On 1 December 1971 the 834th AD was inactivated as part of the USAF withdrawal of forces from Vietnam.
377th Air Base Wing
The 377th Air Base Wing was responsible for the day-to-day operations and maintenance of the USAF portion of the facility from April 1966 until the last USAF personnel withdrew from South Vietnam in March 1973. In addition, the 377th ABW was responsible for housing numerous tenant organizations including Seventh Air Force, base defense, and liaison with the RVNAF.:202
In 1972 inactivating USAF units throughout South Vietnam began to assign units without equipment or personnel to the 377th ABW.:202
From Cam Ranh AB:
From Phan Rang AB:
- 8th Special Operations Squadron: 15 January – 25 October 1972 (A-37):202
- 9th Special Operations Squadron: 21 January – 29 February 1972 (C-47):202
- 310th Tactical Airlift Squadron: January–June 1972 and March–October 1972 (C-123, C-7B):202
- 360th Tactical Electronic Warfare Squadron: 1 February – 24 November 1972 (EC-47N/P/Q):202
All of these units were inactivated at Tan Son Nhut AB.
An operating location of the wing headquarters was established at Bien Hoa AB on 14 April 1972 to provide turnaround service for F-4 Phantom IIs of other organizations, mostly based in Thailand. It was replaced on 20 June 1972 by Detachment 1 of the 377th Wing headquarters, which continued the F-4 turnaround service and added A-7 Corsair IIs for the deployed 354th Tactical Fighter Wing aircraft based at Korat Royal Thai Air Force Base, Thailand on 30 October 1972. The detachment continued operations through 11 February 1973.:203
The 377th ABW phased down for inactivation during February and March 1973, transferring many assets to the RVNAF.:203 When inactivated on 28 March 1973, the 377th Air Base Wing was the last USAF unit in South Vietnam.
Post-1975 Vietnam People's Air Force use
Following the war, Tan Son Nhut Air Base was taken over as a base for the VPAF which is referred to by the name Tân Sơn Nhất.
Accidents and incidents
- 25 October 1967: F-105D Thunderchief #59-1737 crashed into a C-123K #54-0667 on landing in bad weather. The F-105 pilot was killed and both aircraft were destroyed.
- 11 October 1969: an AC-119G of the 17th Special Operations Squadron crashed shortly after takeoff. 6 crewmembers were killed and the aircraft was destroyed.:208
- 28 April 1970: an AC-119G of the 17th Special Operations Squadron crashed shortly after takeoff. 6 crewmembers were killed and the aircraft was destroyed.:211
- Futrell, Robert (1981). The United States Air Force in Southeast Asia: The Advisory Years to 1965 (PDF). Office of Air Force History. p. 52. ISBN 9789998843523.
- Tregaskis, Richard (1975). Southeast Asia: Building the Bases; the History of Construction in Southeast Asia. Superintendent of Documents, U.S. Government Printing Office. p. 32.
- Tilford, Earl (1980). Search and Rescue in Southeast Asia 1961–1975 (PDF). Office of Air Force History. p. 14. ISBN 9781410222640.
- Schlight, John (1999). The United States Air Force in Southeast Asia: The War in South Vietnam The Years of the Offensive 1965–1968 (PDF). Office of Air Force History. p. 95. ISBN 9780912799513.
- Van Staaveren, Jacob (2002). Gradual Failure: The Air War over North Vietnam 1965–1966 (PDF). Air Force History and Museums Program. pp. 126–7. ISBN 9781508779094.
- Fox, Roger (1979). Air Base Defense in the Republic of Vietnam 1961–1973 (PDF). Office of Air Force History. p. 173. ISBN 9781410222565.
- Nolan, Keith (1996). The Battle for Saigon Tet 1968. Presidio press. pp. 9–92. ISBN 0891417699.
- Oberdorfer, Don (1971). Tet! The turning point in the Vietnam War. Doubleday & Co. p. 148. ISBN 0306802104.
- Thompson, A.W. (14 December 1968). Project CHECO Southeast Asia Report. The Defense of Saigon. HQ Pacific Air Force. p. 14.
- Nalty, Bernard (2000). The United States Air Force in Southeast Asia: The War in South Vietnam Air War over South Vietnam 1968–1975 (PDF). Air Force History and Museums Program. p. 36. ISBN 9781478118640.
- Ballard, Jack (1982). The United States Air Force in Southeast Asia: Development and Employment of Fixed-Wing Gunships 1962–1972 (PDF). Office of Air Force History. p. 34. ISBN 9781428993648.
- Markham, James (14 April 1974). "Letter from Saigon". the New York Times. Retrieved 31 May 2018.
- Tobin, Thomas (1978). USAF Southeast Asia Monograph Series Volume IV Monograph 6: Last Flight from Saigon. US Government Printing Office. pp. 20–21. ISBN 9781410205711.
- Dunham, George R (1990). U.S. Marines in Vietnam: The Bitter End, 1973–1975 (Marine Corps Vietnam Operational Historical Series). History and Museums Division Headquarters, U.S. Marine Corps. p. 182. ISBN 978016026455-9.
- Veith, George (2012). Black April The Fall of South Vietnam 1973-75. Encounter Books. pp. 488–9. ISBN 9781594035722.
- Bowers, Ray (1983). The United States Air Force in Southeast Asia: Tactical Airlift (PDF). U.S. Air Force Historical Studies Office. p. 383. ISBN 9781782664208.
- "Pan Am System Timetable". 29 April 1973.
- Scott, Christian, J. (1998). Bring Songs to the Sky: Recollections of Continental Airlines, 1970–1986. Quadran Press.
- Kelley, Michael (2002). Where we were in Vietnam. Hellgate Press. pp. 5–139. ISBN 978-1555716257.
- Quinn, Ruth (9 May 2014). "3rd RRU arrives in Vietnam, May 13, 1961". US Army. Retrieved 31 May 2018.
- Long, Lonnie (2013). Unlikely Warriors: The Army Security Agency's Secret War in Vietnam 1961-1973. iUniverse. pp. 41–2. ISBN 9781475990591.
- "They served in silence – The Story of a Cryptologic Hero: Specialist Four James T. Davis" (PDF). National Security Agency. Retrieved 31 May 2018.
- Hanyok, Robert (2002). Spartans in Darkness: American SIGINT and the Indochina War, 1945-1975. National Security Agency. pp. 123–9.
- Ravenstein, Charles A. (1984). Air Force Combat Wings, Lineage & Honors Histories 1947-1977 (PDF). Washington, D.C.: Office of Air Force History. p. 254. ISBN 0-912799-12-9.
- "Dirty Thirty Fact Sheet". National Museum of the United States Air Force. 20 January 2012. Archived from the original on 13 July 2015. Retrieved 10 May 2018.
- Dollman, TSG David (19 October 2016). "Factsheet 38 Rescue Squadron (ACC)". Air Force Historical Research Agency. Retrieved 30 May 2018.
- Dunstan 1988, p. 18.
- Dunstan 1988, p. 25.
- Dunstan 1988, p. 132.
- Dunstan 1988, p. 33.
- Haulman, Daniel (3 August 2017). "Factsheet 20 Intelligence Squadron". Air Force Historical Research Agency. Retrieved 30 May 2018.
- Nalty, Bernard (2005). The War Against Trucks: Aerial Interdiction in Southern Laos, 1968–1972 (PDF). Air Force Museums and History Program. p. 90. ISBN 9780160724930.
- "Chuyển hoạt động bay quân sự ra khỏi 3 sân bay lớn". VNExpress. VNExpress. 24 September 2016. Retrieved 29 December 2016.
- "Wednesday 25 October 1967". Aviation Safety Network. Retrieved 30 May 2018.
- Dunstan, S (1988). Vietnam Choppers. UK: Osprey Publishing Ltd. ISBN 0-85045-572-3.
- Endicott, Judy G. (1999) Active Air Force wings as of 1 October 1995; USAF active flying, space, and missile squadrons as of 1 October 1995. Maxwell AFB, Alabama: Office of Air Force History. CD-ROM.
- Martin, Patrick (1994). Tail Code: The Complete History of USAF Tactical Aircraft Tail Code Markings. Schiffer Military Aviation History. ISBN 0-88740-513-4.
- Mesco, Jim (1987) VNAF Republic of Vietnam Air Force 1945–1975 Squadron/Signal Publications. ISBN 0-89747-193-8
- Mikesh, Robert C. (2005) Flying Dragons: The Republic of Vietnam Air Force. Schiffer Publishing, Ltd. ISBN 0-7643-2158-7
- USAF Historical Research Division/Organizational History Branch – 35th Fighter Wing, 366th Wing
- VNAF – The Republic of Vietnam Air Force 1951–1975
- USAAS-USAAC-USAAF-USAF Aircraft Serial Numbers—1908 to present
- Airport information for VVTS at World Aero Data. Data current as of October 2006.
- 505th Tactical Control Group – Tactical Air Control in Vietnam and Thailand
- C-130A 57–460 at the National Air And Space Museum
- The Tan Son Nhut Association
- Electronic Warfare "Electric Goon" EC-47 Association website
- The Defense of Tan Son Nhut Air Base, 31 January 1968
- The Fall of Saigon
- The short film STAFF FILM REPORT 66-5A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-17A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-19A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-25A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-27A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-28A (1966) is available for free download at the Internet Archive
- The short film STAFF FILM REPORT 66-30A (1966) is available for free download at the Internet Archive | 1 | 8 |
Sex reassignment therapy
Sex reassignment therapy is the medical aspect of gender transitioning, that is, modifying one's characteristics to better suit one's gender identity. It can consist of hormone therapy to modify secondary sex characteristics, sex reassignment surgery to alter primary sex characteristics, and other procedures altering appearance, including permanent hair removal for trans women.
In appropriately evaluated cases of severe gender dysphoria, sex reassignment therapy is often the best course of treatment when standards of care are followed.:1570:2108 There is academic concern over the low quality of the evidence supporting the efficacy of sex reassignment therapy as treatment for gender dysphoria, but more robust studies are impractical to carry out;:22 as well, there exists a broad clinical consensus, supplementing the academic research, that supports the effectiveness of sex reassignment therapy, in terms of subjective improvement, in appropriately selected patients.:2–3 Treatment of gender dysphoria does not involve attempting to correct the patient's gender identity, but rather aims to help the patient adapt.:1568
Major health organizations in the United States and UK have issued affirmative statements supporting sex reassignment therapy as comprising medically necessary treatments in certain appropriately evaluated cases.
- 1 Eligibility
- 2 Psychological treatment
- 3 Hormone therapy
- 4 Sex reassignment surgery
- 5 Other procedures
- 6 Effectiveness
- 7 Ethical, cultural, and political considerations
- 8 See also
- 9 References
- 10 Bibliography
In current medical practice, a diagnosis is required for sex reassignment therapy. In the International Classification of Diseases the diagnosis is known as transsexualism (ICD-10: F64.0). The US Diagnostic and Statistical Manual of Mental Disorders (DSM) names it gender dysphoria (in version 5). While the diagnosis is a requirement for determining medical necessity of sex reassignment therapy, some people who are validly diagnosed have no desire for all or some parts of sex reassignment therapy, particularly genital reassignment surgery, and/or are not appropriate candidates for such treatment.
The general standard for diagnosing, as well as treating, gender dysphoria is outlined in the WPATH Standards of Care for the Health of Transsexual, Transgender, and Gender Nonconforming People. As of February 2014, the most recent version of the standards is Version 7. According to the standards of care, "gender dysphoria refers to discomfort or distress that is caused by a discrepancy between a person’s gender identity and that person’s sex assigned at birth (and the associated gender role and/or primary and secondary sex characteristics)... Only some gender-nonconforming people experience gender dysphoria at some point in their lives". Gender nonconformity is not the same as gender dysphoria; nonconformity, according to the standards of care, is not a pathology and does not require medical treatment.
Local standards of care exist in many countries.
In cases of comorbid psychopathology, the standards are to first manage the psychopathology and then evaluate the patient's gender dysphoria. Treatment may still be appropriate and necessary in cases of significant comorbid psychopathology, as "cases have been reported in which the individual was both suffering from severe co-occurring psychopathology, and was a 'late-onset, gynephilic' trans woman, and yet experienced a long-term, positive outcome with hormonal and surgical gender transition.":22
However, some transsexual people may suffer from co-morbid psychiatric conditions unrelated to their gender dysphoria. The DSM-IV itself states that in rare instances, gender dysphoria may co-exist with schizophrenia, and that psychiatric disorders are generally not considered contraindications to sex reassignment therapy unless they are the primary cause of the patient's gender dysphoria.:108
Eligibility for different stages of treatment
While a mental health assessment is required by the standards of care, psychotherapy is not an absolute requirement but is highly recommended.
Hormone replacement therapy is to be initiated on referral from a qualified health professional. The general requirements, according to the WPATH standards, include:
- Persistent, well-documented gender dysphoria;
- Capacity to make a fully informed decision and to consent for treatment;
- Age of majority in a given country (however, the WPATH standards of care provide separate discussion of children and adolescents);
- If significant medical or mental health concerns are present, they must be reasonably well-controlled.
Often, at least a certain period of psychological counseling is required before initiating hormone replacement therapy, as is a period of living in the desired gender role, if possible, to ensure that they can psychologically function in that life-role. On the other hand, some clinics provide hormone therapy based on informed consent alone.
As surgery is a radical and irreversible intervention, more stringent standards are usually applied. Generally speaking, physicians who perform sex-reassignment surgery require the patient to live as the members of their target gender in all possible ways for at least a year ("cross-live"), prior to the start of surgery, in order to assure that they can psychologically function in that life-role. This period is sometimes called the Real Life Test (RLT); it is part of a battery of requirements. Other frequent requirements are regular psychological counseling and letters of recommendation for this surgery.
The time period of "cross-living" is usually known as the Real-Life-Test (RLT) or Real-Life-Experience (RLE). It is sometimes required even before hormone therapy, but this is not always possible; transsexual men frequently cannot "pass" this period without hormones. Transsexual women may also require hormones to pass as women in society. Most trans women also require facial hair removal, voice training or voice surgery, and sometimes, facial feminization surgery, to be passable as females; these treatments are usually provided upon request with no requirements for psychotherapy or "cross-living".
Some surgeons who perform sex reassignment surgeries may require their patients to live as members of their target gender in as many ways as possible for a specified period of time, prior to any surgery. However, some surgeons recognize that this so-called real-life test for trans men, without breast removal and/or chest reconstruction, may be difficult. Therefore, many surgeons are willing to perform some or all elements of sex reassignment surgery without a real-life test. This is especially common amongst surgeons who practice in Asia. However, almost all surgeons practicing in North America and Europe who perform genital reassignment surgery require letters of approval from two psychotherapists; most Standards of Care recommend, and most therapists require, a one-year real-life test prior to genital reassignment surgery, though some therapists are willing to waive this requirement for certain patients.
The requirements for chest reconstruction surgery are different for trans men and trans women. The Standards of Care require trans men to undergo either 3 months of Real-life-test or psychological evaluation before surgery whereas trans women are required to undergo 18 months of hormone therapy. The requirement for trans men is due to the difficulty in presenting as male with female breasts, especially those of a C cup or larger. For very large breasts it can be impossible for the trans man to present as male before surgery. For trans women, the extra time is required to allow for complete breast development from hormone therapy. Having breast augmentation before that point can result in uneven breasts due to hormonal development, or removal of the implant if hormonal breast development is significant and results in larger breasts than desired.
Eligibility of minors
While the WPATH standards of care generally require the patient to have reached the age of majority, they include a separate section devoted to children and adolescents.
While there is anecdotal evidence of cases where a child firmly identified as another sex from a very early age, studies cited in the standards of care show that in the majority of cases such identification in childhood does not persist into adulthood. However, with adolescents, persistence is much more likely, and so reversible treatment by puberty blockers can be prescribed. This treatment is controversial as the use of puberty blockers involves a small risk of adverse physical effects.
A 2014 study made a longer-term evaluation of the effectiveness of this approach, looking at young transgender adults who had received puberty suppression during adolescence. It found that "After gender reassignment, in young adulthood, the [gender dysphoria] was alleviated and psychological functioning had steadily improved. Well-being was similar to or better than same-age young adults from the general population. Improvements in psychological functioning were positively correlated with postsurgical subjective well-being." No patients expressed regret about the transition process, including puberty suppression.
"Since puberty suppression is a fully reversible medical intervention, it provides adolescents and their families with time to explore their gender dysphoric feelings, and [to] make a more definite decision regarding the first steps of actual gender reassignment treatment at a later age," said study lead author Dr. Annelou de Vries. By delaying the onset of puberty, those children who go on to gender reassignment "have the lifelong advantage of a body that matches their gender identities without the irreversible body changes of a low voice or beard growth or breasts, for example,".
De Vries nevertheless cautioned that the findings need to be confirmed by further research, and added that her study didn't set out to assess the side effects of puberty suppression.
According to the WPATH SOC v7, "Psychotherapy (individual, couple, family, or group) for purposes such as exploring gender identity, role, and expression; addressing the negative impact of gender dysphoria and stigma on mental health; alleviating internalized transphobia; enhancing social and peer support; improving body image; or promoting resilience" is a treatment option.
For trans people, hormone therapy causes the development of many of the secondary sexual characteristics of their desired sex. However, many of the existing primary and secondary sexual characteristics cannot be reversed by hormone therapy. For example, hormone therapy can induce breast growth for trans women but can only minimally reduce breasts for trans men. HRT can prompt facial hair growth for transsexual men, but cannot regress facial hair for transsexual women. Hormone therapy may, however, reverse some characteristics, such as distribution of body fat and muscle, as well as menstruation in trans men.
Generally, those traits that are easily reversible will revert upon cessation of hormonal treatment, unless chemical or surgical castration has occurred, though for many trans people, surgery is required to obtain satisfactory physical characteristics. But in trans men, some hormonally-induced changes may become virtually irreversible within weeks, whereas trans women usually have to take hormones for many months before any irreversible changes will result.
As with all medical activities, health risks are associated with hormone replacement therapy, especially when high hormone doses are taken as is common for pre-operative or no-operative trans patients. It is always advised that all changes in therapeutic hormonal treatment should be supervised by a physician because starting, stopping or even changing dosage rates and levels can have physical and psychological health risks.
Although some trans women use herbal phytoestrogens as alternatives to pharmaceutical estrogens, little research has been performed with regards to the safety or effectiveness of such products. Anecdotal evidence suggests that the results of herbal treatments are minimal and very subtle, if at all noticeable, when compared to conventional hormone therapy.
Some trans people are able to avoid the medical community's requirements for hormone therapy altogether by either obtaining hormones from black market sources, such as internet pharmacies which ship from overseas, or more rarely, by synthesizing hormones themselves.
Testosterone therapy is typically used for masculinizing treatments. Effects can include thicker vocal cords, increased muscle mass, hair loss, and thicker skin. Intramuscular, subcutaneous, and transdermal options are available; these include testosterone cypionate (Depo-Testosterone®) and the longer-acting testosterone undecanoate (Aveed®). Oral formulations such as Andriol® are available in Europe but not in the U.S. due to their pharmacokinetic properties.
Estrogen and anti-androgen therapy are typically used for feminizing treatments. Estrogen is available in oral, parenteral, and transdermal formulations. Often, estrogen alone is insufficient for androgen suppression, and appropriate therapy will call for additional anti-androgen medications. Anti-androgen medications include progesterone, medroxyprogesterone acetate, spironolactone, and finasteride.
Sex reassignment surgery
Sex reassignment surgery (SRS) refers to the surgical and medical procedures undertaken to align intersex and transsexual individuals' physical appearance and genital anatomy with their gender identity. SRS may encompass any surgical procedures which will reshape a male body into a body with a female appearance or vice versa, or more specifically refer to the procedures used to make male genitals into female genitals and vice versa.
Other proposed terms for SRS include "gender confirmation surgery," "gender realignment surgery," and "transsexual surgery." The aforementioned terms may also specifically refer to genital surgeries like vaginoplasty, metoidioplasty, and phalloplasty, even though more specific terms exist to refer exclusively to genital surgery, the most common of which is genital reassignment surgery (GRS). The term "genital reconstruction surgery" may also be used. There are significant medical risks associated with SRS that should be considered before undergoing the surgery.
Chest reconstruction surgery
For a lot of trans men, chest reconstruction is desired. Binding of the chest tissue can cause a variety of health issues, including reduced lung capacity and even broken ribs if improper techniques or materials are used. A mastectomy is performed, often including a nipple graft for those with a B or larger cup size.
For trans women, breast augmentation is done in a similar manner to those done for cisgender women. As with cisgender women, there is a limit on the size of implant that may be used, depending on the amount of pre-existing breast tissue.
Facial feminization surgery (FFS) is a form of facial reconstruction used to make a masculine face appear more feminine. FFS procedures can reshape the jaw, chin, forehead (including brow ridge), hairline, and other areas of the face that tend to be sexually dimorphic. A chondrolaryngoplasty, colloquially a "tracheal shave", is a surgical reduction of the cartilage in the larynx to reduce the appearance of a visible Adam's apple.
Trans people of both sexes may practice vocal therapy. Vocal therapists may help their patients improve their pitch, resonance, inflection, and volume. Another option for trans women is vocal surgery, though there is the risk of damaging the voice.
The Merck Manual states, in regard to trans women, "In follow-up studies, genital surgery has helped some transsexual people live happier and more productive lives and so is justified in highly motivated, appropriately assessed and treated transsexual people, who have completed a 1- to 2-year real-life experience in a different gender role. Before surgery, transsexual people often need assistance with passing in public, including help with gestures and voice modulation. Participation in support groups, available in most large cities, is usually helpful.":1570 With regards to trans men, it states, "Surgery may help certain [trans men] patients achieve greater adaptation and life satisfaction. Similar to trans women, trans men should live in the male gender role for at least 1 yr before surgery. Anatomic results of neophallus surgical procedures are often less satisfactory in terms of function and appearance than neovaginal procedures for trans women. Complications are common, especially in procedures that involve extending the urethra into the neophallus.":1570
Kaplan and Sadock's Comprehensive Textbook of Psychiatry states, with regards to adults, "When patient gender dysphoria is severe and intractable, sex reassignment is often the best solution.":2108 Regret tends to occur in cases of misdiagnosis, no Real Life Experience, and poor surgical results. Risk factors for return to original gender role include history of transvestic fetishism, psychological instability, and social isolation. In adolescents, careful diagnosis and following strict criteria can ensure good post-operative outcomes. Many prepubescent children with cross-gender identities do not persist with gender dysphoria.:2109–2110 With regards to follow-up, it states that "Clinicians are less likely to report poor outcomes in their patients, thus shifting the reporting bias to positive results. However, some successful patients who wish to blend into the community as men or women do not make themselves available for follow-up. Also, some patients who are not happy with their reassignment may be more known to clinicians as they continue clinical contact.":2109
A 2009 systematic review looking at individual surgical procedures found that "[t]he evidence concerning gender reassignment surgery has several limitations in terms of: (a) lack of controlled studies, (b) evidence has not collected data prospectively, (c) high loss to follow up and (d) lack of validated assessment measures. Some satisfactory outcomes were reported, but the magnitude of benefit and harm for individual surgical procedures cannot be estimated accurately using the current available evidence."
A 2010 meta-analysis of follow-up studies reported "Pooling across studies shows that after sex reassignment, 80% of individuals with GID reported significant improvement in gender dysphoria (95% CI = 68–89%; 8 studies; I2 = 82%); 78% reported significant improvement in psychological symptoms (95% CI = 56–94%; 7 studies; I2 = 86%); 80% reported significant improvement in quality of life (95% CI = 72–88%; 16 studies; I2 = 78%); and 72% reported significant improvement in sexual function (95% CI = 60–81%; 15 studies; I2 = 78%)." The study concluded "Very low quality evidence suggests that sex reassignment that includes hormonal interventions in individuals with GID likely improves gender dysphoria, psychological functioning and comorbidities, sexual function and overall quality of life."
A study evaluating quality of life in female-to-male transgender individuals found "statistically significant (p<0.01) diminished quality of life among the FTM transgender participants as compared to the US male and female population, particularly in regard to mental health. FTM transgender participants who received testosterone (67%) reported statistically significant higher quality of life scores (p<0.01) than those who had not received hormone therapy."
A Swedish study (2010) found that “almost all patients were satisfied with sex reassignment at 5 years, and 86% were assessed by clinicians at follow-up as stable or improved in global functioning”. A prospective study in the Netherlands that looked at the psychological and sexual functioning of 162 adult applicants for sex reassignment before and after hormonal and surgical treatment found, "After treatment the group was no longer gender dysphoric. The vast majority functioned quite well psychologically, socially and sexually. Two non-homosexual male-to-female transsexuals expressed regrets."
A long-term follow-up study performed in Sweden covering the period 1973–2003 found that morbidity, suicidality, and mortality in post-operative trans people were still significantly higher than in the general population, suggesting that sex reassignment therapy is not enough to treat gender dysphoria and highlighting the need for improved health care following sex reassignment surgery. Ten controls were selected for each post-operative trans person, matched by birth year and sex; two control groups were used: one matching sex at birth, the other matching reassigned sex. The study states that "no inferences can be drawn [from this study] as to the effectiveness of sex reassignment as a treatment for transsexualism," citing studies showing the effectiveness of sex reassignment therapy, though noting their poor quality. The authors noted that the results suggested that those who received sex reassignment surgery before 1989 had worse mortality, suicidality, and crime rates than those who received surgery on or after 1989: mortality, suicidality, and crime rates for the 1989–2003 cohort were not statistically significantly different from those of healthy controls (though psychiatric morbidity was); it is not clear whether this is because these negative factors tended to increase a decade after surgery or because improved treatment and social attitudes in the 1990s and later may have led to better outcomes.
The abstract of the American Psychiatric Association Task Force on GID's report from 2012 states, "The quality of evidence pertaining to most aspects of treatment in all subgroups was determined to be low; however, areas of broad clinical consensus were identified and were deemed sufficient to support recommendations for treatment in all subgroups." The APA Task Force states, with regard to the quality of studies, "For some important aspects of transgender care, it would be impossible or unwise to engage in more robust study designs due to ethical concerns and lack of volunteer enrollment. For example, it would be extremely problematic to include a 'long-term placebo treated control group' in an RCT of hormone therapy efficacy among gender variant adults desiring to use hormonal treatments.":22 The Royal College of Psychiatrists concurs with regards to SRS in trans women, stating, "There is no level 1 or 2 evidence (Oxford levels) supporting the use of feminising vaginoplasty in women but this is to be expected since a randomised controlled study for this scenario would be impossible to carry out."
Following up on the APA Task Force's report, the APA issued a statement stating that the APA recognizes that in "appropriately evaluated" cases, hormonal and surgical interventions may be medically necessary and opposes "categorical exclusions" of such treatment by third-party payers. The American Medical Association's Resolution 122 states, "An established body of medical research demonstrates the effectiveness and medical necessity of mental health care, hormone therapy and sex reassignment surgery as forms of therapeutic treatment for many people diagnosed with GID".
The need for treatment is emphasized by the higher rate of mental health problems, including depression, anxiety, and various addictions, as well as a higher suicide rate among untreated transsexual people than in the general population. Many of these problems, in the majority of cases, disappear or decrease significantly after a change of gender role and/or physical characteristics.
Ethical, cultural, and political considerations
Sex reassignment therapy is a controversial ethical subject. Notably, the Roman Catholic church, according to an unpublished Vatican document, holds that changing sex is not possible and, while in some cases treatment might be necessary, it does not change the person's sex in the eyes of the church. Some Catholic ethicists go further, proclaiming that a "sex change operation" is "mutilation" and therefore immoral.
Paul R. McHugh is a well-known opponent of sex reassignment therapy. According to his own article, when he joined Johns Hopkins University as director of the Department of Psychiatry and Behavioral Science, it was part of his intention to end sex reassignment surgery there. McHugh succeeded in ending it at the university during his time. However, a new gender clinic was opened at Johns Hopkins in 2017.
Opposition was also expressed by several writers identifying as feminist, most famously Janice Raymond. Her paper was allegedly instrumental in removing Medicaid and Medicare support for sex reassignment therapy in the US.
Sex reassignment therapy, especially surgery, tends to be expensive and is not always covered by public or private health insurance. In many areas with comprehensive nationalized health care, such as some Canadian provinces and most European countries, SRT is covered under these plans. However, requirements for obtaining SRS and other transsexual services under these plans are sometimes more stringent than the requirements laid out in the WPATH Standards of Care for the Health of Transsexual, Transgender, and Gender Nonconforming People, and in Europe, many local Standards of Care exist. In other countries, such as the United States, no national health plan exists and the majority of private insurance companies do not cover SRS. The government of Iran, however, pays for such surgery because it is believed to be valid under Shi'ite Belief.
A significant and growing political movement exists, pushing to redefine the standards of care, asserting that they do not acknowledge the rights of self-determination and control over one's body, and that they expect (and even in many ways require) a monolithic transsexual experience. In opposition to this movement is a group of transsexual persons and caregivers who assert that the SOC are in place to protect others from "making a mistake" and causing irreversible changes to their bodies that will later be regretted – though few post-operative transsexuals believe that sexual reassignment surgery was a mistake for them.
The United States
From 1981 until 2014, the Centers for Medicare and Medicaid Services (CMS) categorically excluded coverage of sex reassignment surgery by Medicare in its National Coverage Determination (NCD) "140.3 Transsexual Surgery," but that categorical exclusion came under challenge by an "aggrieved party" in an Acceptable NCD Complaint in 2013 and was subsequently struck down the following year by the Departmental Appeals Board (DAB), the administrative court of the U.S. Department of Health and Human Services (HHS). In late 2013, the DAB issued a ruling finding the evidence on record was "not complete and adequate to support the validity of the NCD" and then moved on to discovery to determine if the exclusion was valid. CMS did not defend its exclusion throughout the entire process. On May 30, 2014, HHS announced that the categorical exclusion was found by the DAB to not be valid "under the 'reasonableness standard,'" allowing for Medicare coverage of sex reassignment surgery to be decided on a case-by-case basis. HHS says it will move to implement the ruling. As Medicaid and private insurers often take their cues from Medicare on what to cover, this may lead to coverage of sex reassignment therapy by Medicaid and private insurers. The evidence in the case "outweighs the NCD record and demonstrates that transsexual surgery is safe and effective and not experimental," according to the DAB in its 2014 ruling.
Consent and the treatment of intersex people
In 2011, Christiane Völling won the first successful case brought by an intersex person against a surgeon for non-consensual surgical intervention described by the International Commission of Jurists as "an example of an individual who was subjected to sex reassignment surgery without full knowledge or consent".
In 2015, the Council of Europe recognized, for the first time, a right for intersex persons to not undergo sex assignment treatment. In April 2015, Malta became the first country to recognize a right to bodily integrity and physical autonomy, and outlaw non-consensual modifications to sex characteristics. The Act was widely welcomed by civil society organizations.
- George R. Brown, MD (20 July 2011). "Chapter 165 Sexuality and Sexual Disorders". In Robert S. Porter, MD; et al. (eds.). The Merck Manual of Diagnosis and Therapy (19th ed.). Whitehouse Station, NJ, USA: Merck & Co., Inc. pp. 1567–1573. ISBN 978-0-911910-19-3.
- Richard M. Green, M.D., J.D. (June 8, 2009). "18.3 Gender Identity Disorders". In Benjamin Sadock; Virginia Alcott Sadock; Pedro Ruiz (eds.). Kaplan and Sadock's Comprehensive Textbook of Psychiatry (9th ed.). Lippincott Williams & Wilkins. pp. 2099–2111. ISBN 978-0781768993.
- William Byne, Susan J. Bradley, Eli Coleman, A. Evan Eyler, Richard Green, Edgardo J. Menvielle, Heino F. L. Meyer-Bahlburg, Richard R. Pleak & D. Andrew Tompkins (August 2012). "Report of the American Psychiatric Association Task Force on Treatment of Gender Identity Disorder" (PDF). Archives of Sexual Behavior. 41 (4): 759–796 (pages cited as pages at link). doi:10.1007/s10508-012-9975-x. PMID 22736225.
- Drescher, Jack; Haller, Ellen (July 2012). "Position Statement on Access to Care for Transgender and Gender Variant Individuals" (PDF). American Psychiatric Association. Retrieved 17 January 2014.
- "AMA Resolution 122" (PDF). AMA House of Delegates May 2008 Report (showing that Resolution 122 was affirmed). American Medical Association. May 2008. Retrieved 17 January 2014.
- "APA Policy Statement: Transgender, Gender Identity, & Gender Expression Non-Discrimination". American Psychological Association. August 2008. Retrieved 17 January 2014.
- "Good practice guidelines for the assessment and treatment of adults with gender dysphoria" (PDF). Royal College of Psychiatrists. October 2013. Retrieved 17 January 2014.
- Whittle, Stephen; Bockting, Walter; Monstrey, Stan; Brown, George; Brownstein, Michael; DeCuypere, Griet; Ettner, Randi; Fraser, Lin; Green, Jamison; Rachlin, Katherine; Robinson, Beatrice. "WPATH Clarification on Medical Necessity of Treatment, Sex Reassignment, and Insurance Coverage for Transgender and Transsexual People Worldwide". WPATH. Archived from the original on 14 August 2015. Retrieved 27 August 2015.
- "Excerpt from ICD-10", F64.0.
- "DSM 5 gender dysphoria fact sheet" (PDF).
- "Standards of Care for the Health of Transsexual, Transgender, and Gender-Nonconforming People, Version 7" (PDF). Archived from the original (PDF) on 2016-01-06.
- Brown, Mildred (2003). True selves : understanding transsexualism-- for families, friends, coworkers, and helping professionals. San Francisco: Jossey-Bass. ISBN 978-0-7879-6702-4.
- de Vries, A. L. C.; McGuire, J. K.; Steensma, T. D.; Wagenaar, E. C. F.; Doreleijers, T. A. H.; Cohen-Kettenis, P. T. (8 September 2014). "Young Adult Psychological Outcome After Puberty Suppression and Gender Reassignment". Pediatrics. 134 (4): 696–704. doi:10.1542/peds.2013-2958. PMID 25201798. Retrieved 27 August 2015.
- Mozes, Alan (10 September 2014). "Puberty Suppression Benefits Gender-Questioning Teens: Study". HealthDay. U.S. News & World Report. Retrieved 27 August 2015.
- "Transgender Health & Transitioning | Revel & Riot". www.revelandriot.com. Retrieved 2019-08-07.
- "Information on Testosterone Hormone Therapy | Transgender Care". transcare.ucsf.edu. Retrieved 2019-08-07.
- Hashemi, Leila; Weinreb, Jane; Weimer, Amy K.; Weiss, Rebecca Loren (July 2018). "Transgender Care in the Primary Care Setting: A Review of Guidelines and Literature". Federal Practitioner. 35 (7): 30–37. ISSN 1945-337X. PMC 6368014. PMID 30766372.
- Unger, Cécile A. (December 2016). "Hormone therapy for transgender patients". Translational Andrology and Urology. 5 (6): 877–884. doi:10.21037/tau.2016.09.04. ISSN 2223-4691. PMC 5182227. PMID 28078219.
- Deutsch, Madeline B.; Bhakri, Vipra; Kubicek, Katrina (March 2015). "Effects of cross-sex hormone treatment on transgender women and men". Obstetrics and Gynecology. 125 (3): 605–610. doi:10.1097/AOG.0000000000000692. ISSN 1873-233X. PMC 4442681. PMID 25730222.
- Hashemi, Leila; Weinreb, Jane; Weimer, Amy K.; Weiss, Rebecca Loren (July 2018). "Transgender Care in the Primary Care Setting: A Review of Guidelines and Literature". Federal Practitioner. 35 (7): 30–37. ISSN 1078-4497. PMC 6368014. PMID 30766372.
- Hembree, Wylie C.; Cohen-Kettenis, Peggy; Delemarre-van de Waal, Henriette A.; Gooren, Louis J.; Meyer, Walter J.; Spack, Norman P.; Tangpricha, Vin; Montori, Victor M. (2009-09-01). "Endocrine Treatment of Transsexual Persons:An Endocrine Society Clinical Practice Guideline". The Journal of Clinical Endocrinology & Metabolism. 94 (9): 3132–3154. doi:10.1210/jc.2009-0345. ISSN 0021-972X. PMID 19509099.
- "Del Rey Aesthetics Center Introduces Facial Feminization Surgery Services". PRWeb.
- "FFS: Trachea shave". tsroadmap.com. 2019-04-06.
- "Voice and Communication Therapy for Clients Who Are Transgender". asha.org.
- "Vocal Feminization: Surgery". tsroadmap.com. 2019-04-14.
- P. A. Sutcliffe, S. Dixon, R. L. Akehurst, A. Wilkinson, A. Shippam, S. White, R. Richards & C. M. Caddy (March 2009). "Evaluation of surgical procedures for sex reassignment: a systematic review". Journal of Plastic, Reconstructive & Aesthetic Surgery. 62 (3): 294–306. doi:10.1016/j.bjps.2007.12.009. PMID 18222742.
- Murad, Mohammad Hassan; Elamin, Mohamed B.; Garcia, Magaly Zumaeta; Mullan, Rebecca J.; Murad, Ayman; Erwin, Patricia J.; Montori, Victor M. (2010). "Hormonal therapy and sex reassignment: A systematic review and meta-analysis of quality of life and psychosocial outcomes". Clinical Endocrinology. 72 (2): 214–31. doi:10.1111/j.1365-2265.2009.03625.x. PMID 19473181.
- Newfield, E; Hart, S; Dibble, S; Kohler, L (November 2006). "Female-to-male transgender quality of life". Quality of Life Research. 15 (9): 1447–57. CiteSeerX 10.1.1.468.9106. doi:10.1007/s11136-006-0002-3. PMID 16758113.
- Johansson, Annika; Sundbom, Elisabet; Höjerback, Torvald; Bodlund, Owe (2009). "A Five-Year Follow-Up Study of Swedish Adults with Gender Identity Disorder". Archives of Sexual Behavior. 39 (6): 1429–37. doi:10.1007/s10508-009-9551-1. PMID 19816764.
- Smith, YL; Van Goozen, SH; Kuiper, AJ; Cohen-Kettenis, PT (January 2005). "Sex reassignment: outcomes and predictors of treatment for adolescent and adult transsexuals" (PDF). Psychological Medicine. 35 (1): 89–99. doi:10.1017/S0033291704002776. PMID 15842032.
- Dhejne, Cecilia; Lichtenstein, Paul; Boman, Marcus; Johansson, Anna L. V.; Långström, Niklas; Landén, Mikael (2011). Scott, James (ed.). "Long-Term Follow-Up of Transsexual Persons Undergoing Sex Reassignment Surgery: Cohort Study in Sweden". PLoS ONE. 6 (2): e16885. Bibcode:2011PLoSO...616885D. doi:10.1371/journal.pone.0016885. PMC 3043071. PMID 21364939.
- Heylens, Gunter; Verroken, Charlotte; De Cock, Sanne; T'Sjoen, Guy; De Cuypere, Griet (2013). "Effects of Different Steps in Gender Reassignment Therapy on Psychopathology: A Prospective Study of Persons with a Gender Identity Disorder". The Journal of Sexual Medicine. 11 (1): 119–126. doi:10.1111/jsm.12363. ISSN 1743-6095. PMID 24344788.
- Yolanda L. S. Smith, Stephanie H. M. Van Goozen, Abraham J. Kuiper & Peggy T. Cohen-Kettenis (January 2005). "Sex reassignment: outcomes and predictors of treatment for adolescent and adult transsexuals" (PDF). Psychological Medicine. 35 (1): 89–99. doi:10.1017/S0033291704002776. PMID 15842032.
- "Vatican says 'sex-change' operation does not change person's gender". National Catholic Reporter. 2011-09-19.
- "FAQ on Gender Identity Disorder and "Sex Change" Operations". National Catholic Bioethics Center. Archived from the original on 2014-02-22.
- Paul McHugh. "Psychiatric misadventures".
- Richard P. Fitzgibbons, M.D., Philip M. Sutton, and Dale O’Leary, The Psychopathology of "Sex Reassignment" Surgery, Assessing Its Medical, Psychological, and Ethical Appropriateness, The National Catholic Bioethics Quarterly, Spring 2009, p. 100. Archived 2014-08-09 at the Wayback Machine
- Allen, Samantha (12 April 2017). "Can Trans People Trust Johns Hopkins's New Clinic?". The Daily Beast.
- "Why The Trans Community Hates Dr. Janice G. Raymond". TransGRiot. 2010-09-20.
- Iran's gay plan, Matthew Hays, Canadian Broadcasting Corporation, August 26, 2008; accessed August 13, 2009.
- Kuiper, A.J; P.T. Cohen-Kettenis (September 1998). "Gender Role Reversal among Postoperative Transsexuals". International Journal of Transgenderism. 2 (3). Archived from the original on 2007-02-04. Retrieved 2007-02-25.
- Wayne, Alex (30 May 2014). "Medicare Ordered to Consider Covering Sex-Change Surgery". Bloomberg. Retrieved 30 May 2014.
- McMorris-Santoro, Evan (30 May 2014). "Obama Administration Opens The Door To Medicare-Funded Sex Reassignment Surgery". BuzzFeed Politics. Retrieved 30 May 2014.
- Leslie A. Sussan; Constance B. Tobias; Sheila Ann Hegy (presiding) (2 Dec 2013). "NCD 140.3 Transsexual Surgery: NCD Ruling No. 2" (PDF). Acceptable National Coverage Determination Complaints (DAB). HHS.gov. Docket No. A-13-47. Retrieved 7 Feb 2014.
- Leslie A. Sussan; Constance B. Tobias; Sheila Ann Hegy (presiding) (30 May 2014). "NCD 140.3 Transsexual Surgery: Decision No. 2576" (PDF). Acceptable National Coverage Determination Complaints (DAB). HHS.gov. Docket No. A-13-87. Retrieved 4 Jul 2014.
- Daphna Stroumsa (January 2014). "The State of Transgender Health Care: Policy, Law, and Medical Frameworks". American Journal of Public Health. 104 (3): e31–8. doi:10.2105/AJPH.2013.301789. PMC 3953767. PMID 24432926.
- International Commission of Jurists. "SOGI Casebook Introduction, Chapter six: Intersex". Retrieved 2015-12-27.
- Council of Europe; Commissioner for Human Rights (April 2015), Human rights and intersex people, Issue Paper
- Cabral, Mauro (April 8, 2015). "Making depathologization a matter of law. A comment from GATE on the Maltese Act on Gender Identity, Gender Expression and Sex Characteristics". Global Action for Trans Equality. Archived from the original on July 4, 2015. Retrieved 2015-07-03.
- OII Europe (April 1, 2015). "OII-Europe applauds Malta's Gender Identity, Gender Expression and Sex Characteristics Act. This is a landmark case for intersex rights within European law reform". Retrieved 2015-07-03.
- Carpenter, Morgan (April 2, 2015). "We celebrate Maltese protections for intersex people". Organisation Intersex International Australia. Retrieved 2015-07-03.
- Star Observer (2 April 2015). "Malta passes law outlawing forced surgical intervention on intersex minors". Star Observer.
- Reuters (1 April 2015). "Surgery and Sterilization Scrapped in Malta's Benchmark LGBTI Law". The New York Times.
- Brown, Mildred L.; Chloe Ann Rounsley (1996). True Selves: Understanding Transsexualism – For Families, Friends, Coworkers, and Helping Professionals. Jossey-Bass. ISBN 978-0-7879-6702-4.
- Dallas, Denny (2006). Transgender Rights: Transgender Communities of the United States in the Late Twentieth Century. University of Minnesota Press. ISBN 978-0-8166-4312-7.
- Feinberg, Leslie (1999). Trans Liberation : Beyond Pink or Blue. Beacon Press. ISBN 978-0-8070-7951-5.
- Kruijver, F. P. M. (2000). "Male-to-Female Transsexuals Have Female Neuron Numbers in a Limbic Nucleus". Journal of Clinical Endocrinology & Metabolism. 85 (5): 2034–41. doi:10.1210/jcem.85.5.6564. PMID 10843193.
- Coleman, E.; Bockting, W.; Botzer, M.; Cohen-Kettenis, P.; DeCuypere, G.; Feldman, J.; Fraser, L.; Green, J.; Knudson, G.; Meyer, W. J.; Monstrey, S.; Adler, R. K.; Brown, G. R.; Devor, A.H.; Ehrbar, R.; Ettner, R.; Eyler, E.; Garofalo, R.; Karasic, D. H.; Lev, A. I.; Mayer, G.; Meyer Bahlburg, H.; Hall, B.P.; Pfaefflin, F.; Rachlin, K.; Robinson, B.; Schechter, L. S.; Tangpricha, V.; van Trotsenburg, M.; Vitale, A.; Winter, S.; Whittle, S.; Wylie, K. R.; Zucker, K. (2012). "Standards of Care for the Health of Transsexual, Transgender, and Gender-Nonconforming People, Version 7" (PDF). International Journal of Transgenderism. 13 (4): 165–232. doi:10.1080/15532739.2011.700873. ISSN 1553-2739. Archived from the original (PDF) on 2014-08-02.
- Pfäfflin, Friedemann & Astrid Junge -Sex Reassignment. Thirty Years of International Follow-up Studies After Sex Reassignment Surgery: A Comprehensive Review, 1961–1991 (translated from German into American English by Roberta B. Jacobson and Alf B. Meier)
- Rathus, Spencer A.; Jeffery S. Nevid; Lois Fichner-Rathus (2002). Human Sexuality in a World of Diversity. Allyn & Bacon. ISBN 978-0-205-40615-9.
- Schneider, H; Pickel, J; Stalla, G (2006). "Typical female 2nd–4th finger length (2D:4D) ratios in male-to-female transsexuals—possible implications for prenatal androgen exposure". Psychoneuroendocrinology. 31 (2): 265–9. doi:10.1016/j.psyneuen.2005.07.005. PMID 16140461.
- Xavier, J., Simmons, R. (2000) – The Washington transgender needs assessment survey, Washington, DC: The Administration for HIV and AIDS of the District of Columbia Government | 3 | 7 |
Even if you do not know anything about tech, you have probably still heard about the 32- vs 64-bit debate when it comes to processors. But since 64-bit clearly seems to be dominating modern computing, does the debate even matter and should you care? Keep on reading to find out.
How Do Bits Work Anyway?
For you to better understand the differences between how 32-bit and 64-bit processing works, you need to first have a general grasp of the concept of bits. A bit is the smallest increment of data possible on a computer. Computers count only in binary (0s and 1s), and each of those binary digits is one bit.
So for (hypothetical) 1-bit computing, you would get two possible values, then four possible values for 2-bit computing and so on. Keep going like this exponentially until you reach 32-bit and you will have 4,294,967,296 possible values. That is a lot, sure, but hold your horses for a second. If you keep going and get to 64-bit, you will have 18,446,744,073,709,551,616 possible values! This is the width at which most modern processors work.
Because 64-bit computing can store far more computational values than 32-bit, it is much faster and much more capable. It can handle more data at once, and it can also address vastly more physical memory, roughly four billion times what 32-bit computing can access.
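To make those figures concrete, here is a minimal Python sketch (an addition for illustration, not part of the original article). It computes how many values an n-bit quantity can represent and, treating n bits as a flat memory address, how many bytes could be addressed in theory; real processors and operating systems impose lower practical limits.

```python
# Illustrative only: an n-bit quantity can represent 2**n distinct values.
for bits in (1, 2, 8, 16, 32, 64):
    print(f"{bits:>2}-bit: {2**bits:,} possible values")

# Treating n bits as a flat byte address, 2**n bytes are addressable in
# theory (real CPUs and operating systems allow far less in practice).
for bits in (32, 64):
    print(f"{bits}-bit address space: ~{2**bits / 2**30:,.0f} GiB")
```

Running it reproduces the 4,294,967,296 and 18,446,744,073,709,551,616 figures above, and shows why 32-bit systems top out around 4 GiB of addressable memory.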
The Bit Evolution
To help you understand this concept even better, let's take a quick look at how processors have evolved over the years. In the 1970s, Intel introduced the humble 8080, an 8-bit chip. Early versions of Windows brought 16-bit computing to desktop users. AMD was the first to bring 64-bit (x86-64) processors to desktops, and Apple was the first to release a 64-bit mobile chip, for the iPhone.
Identification for 64-bit vs 32-bit Systems
First things first, how do you even know if your processor is 32-bit or 64-bit? The nomenclature itself is not so confusing; 64-bit is sometimes written as x64 and 32-bit is sometimes written as x86 (to pay homage to the Intel chip series that began with 8086 and carried through to 80486).
You can check which kind of processor you have by going to your device's settings and looking for the About section that describes its specs. Keep in mind that each step up in bit width multiplies the number of representable values rather than simply doubling it.
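If you prefer to check from a script rather than the settings screen, the short Python sketch below (an addition, not from the original article) reports the machine architecture and the bitness of the running interpreter using only standard-library modules. Note that a 32-bit interpreter can run on a 64-bit OS, so this reflects the build you are running, not necessarily the chip's maximum.

```python
import platform
import struct
import sys

# Hardware/OS architecture string, e.g. "AMD64", "x86_64", or "arm64".
print("Machine:", platform.machine())

# Pointer size reveals the bitness of this interpreter build:
# 4 bytes (32 bits) on a 32-bit build, 8 bytes (64 bits) on a 64-bit one.
print("Interpreter:", struct.calcsize("P") * 8, "bit")

# Cross-check: sys.maxsize is 2**31 - 1 on 32-bit builds and
# 2**63 - 1 on 64-bit builds.
print("sys.maxsize:", sys.maxsize)
```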
The Differences, and How They Transition into Practical Use
In simple words, 32-bit processors can only make use of a limited amount of RAM (about 4 GB), whereas 64-bit processors are not bound by that limit. This means better results with demanding workloads like gaming. For instance, apps and games that demand high performance also need more available memory, and 64-bit processing can help provide that.
64-bit is also better for programs that need to be able to store a lot of information for immediate access, like intensive image-editing software which allows you to work on many large files at the same time. 32-bit vs 64-bit for MS Office is a good example here because while 32-bit is sufficient for most Office users, 64-bit will make life easier for users that are working with a lot of data.
Most software is compatible across the transition: a 64-bit system can generally run 32-bit programs. The usual exceptions are low-level software such as virus protection and device drivers, which must match the operating system. Hardware, likewise, needs drivers built for the system you are running. The biggest difference occurs within the file system on a computer, but this is expanded on in the next section.
32-bit vs 64-bit Systems for Windows and Macs
Most of the recent versions of Windows (7, 8, 8.1, and 10) come in 32-bit and 64-bit versions. If you have a Windows computer that is less than ten years old, you almost certainly have a 64-bit chip, but you may still have the 32-bit version of the actual OS installed (again, you can check this in your device specs). For most devices, the specs label the OS as 32- or 64-bit and the processor as ‘x64-based’ and so on.
On a 64-bit Windows system, 64-bit and 32-bit applications and their DLL files live in two separate folders (Program Files and Program Files (x86), respectively) as long as you still have 32-bit applications installed. Keeping them separate matters: if 32-bit and 64-bit files get mixed together, Windows cannot tell which version of a specific DLL to retrieve for a given program, so it can only serve up the right DLL when the files are properly organized.
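You can see the two folders for yourself; the sketch below (assuming it is run on Windows) simply reads the standard environment variables that point at them:

```python
import os

# 'ProgramFiles' points at the 64-bit application folder on 64-bit Windows;
# 'ProgramFiles(x86)' is only defined when a separate 32-bit folder exists.
print("64-bit apps:", os.environ.get("ProgramFiles"))
print("32-bit apps:", os.environ.get("ProgramFiles(x86)", "<not present: likely 32-bit Windows>"))
```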
This problem is not present on Macs. macOS has been exclusively 64-bit for a long time now, and the latest versions (macOS Catalina and later) have dropped support for 32-bit applications entirely.
64-bit for Mobile
The first 64-bit mobile chip, mentioned above, was Apple’s A7. iPhones are no longer the only phones to use 64-bit; many Android phones have it as well, although Apple has made 64-bit support an actual requirement for apps since 2015. 64-bit smartphones have their pros and cons as well, and may not suit all users.
Conclusion: Why Use 32-bit at all?
So the question in your mind, after reading all of this, is probably: why do people even use 32-bit at all in this day and age? It depends; some people might actually be using an older system that has a 32-bit processor. Such a system, while rare today, is not unheard of. Some users may be using a 32-bit OS still because they just are not aware of the difference. Many of the improvements offered by 64-bit are not noticeable for most casual users.
A radio’s performance depends on the materials it is made of and the features it offers. You can buy radios built with the latest technologies, and there are many different types with their own distinct and important features. One of them is the wind-up radio.
A wind-up radio is one powered by human muscle rather than batteries or mains electricity. It contains an internal electric generator driven by a mainspring connected to a hand crank: turning the crank winds the spring, which in turn drives the generator.
Have a look at this YouTube video to learn more about emergency wind-up radios.
This radio can be charged in several ways: via the solar panel when exposed to direct sunlight, or by connecting a mini USB cable to the charging port. It is an emergency radio you and your family can depend on anywhere, at any time. Its compact size makes it portable, and it consumes at most 0.5 W.

With the memory tuning feature, favorite stations can be stored and recalled easily. The solar panel can power the radio even when no battery is installed. At 1 pound, it is light enough to carry while traveling, camping, and so on. A lock button prevents the radio from being switched on accidentally, and it can receive signals in both horizontal and vertical positions.

Using this radio, you can stay updated on weather forecasts, and the built-in LED flashlight lets you use it even in the dark. It can be recharged in three ways: by exposing the solar panel to direct sunlight, by winding the internal alternator with the hand crank, or via DC charging by connecting the mini USB cable to a computer. The battery draws at most about 0.5 W.

Its compact size makes it easy to carry while hiking, camping, and so on. Because it is made of waterproof, high-quality rubberized materials, it should last a long time. The radio can be recharged via the solar panel, the hand crank, or by connecting the USB cable to a computer.

The radio is built from scratch-resistant rubberized materials so it does not get damaged easily, and since it is rainproof it can be used outdoors. It can be recharged in several ways: via the solar panel, the hand-cranked generator, or by plugging a USB cable into the charging port. It has an antenna, and the display has a blue backlight.
Have you ever experienced the impact of severe weather? Do you live in a rural area? Have you ever heard an emergency alert signal? Are you looking for the best emergency radio?

Many regions suffer from the harmful effects of extreme weather, often because warning signals about changing conditions never reach the people affected. Emergency radios help close that gap: the best hand crank radios pick up alert messages broadcast by radio stations, so you can protect yourself and your belongings from a variety of weather conditions.
Need of an Emergency Radio
When the monsoon season is at its peak, there is a real chance of floods, storms, or even tornadoes. When they strike, the essentials of daily life can become unavailable, and people find themselves in emergency situations where simply securing food and shelter becomes a difficult task.
It is hence essential to have a survival kit ready to survive the crisis periods. Most of the emergency situations arise due to adverse weather conditions. So, it is quite essential to have an eye on the weather reports or any other alerts given by the government or weather stations to keep yourself updated about the forecasting and to act accordingly.
In this regard, one of the essential items that have to be within your survival kit is an emergency radio. Emergency radios are popular devices that were a part of the life of the people in olden days. Due to the advancement in the field of communications and technology, they got vanished away.
The technically updated devices and instruments will not operate in the case of sudden power failure due to inclement weather or flood. At this juncture, using an emergency radio is a reliable resource for knowing the forecasting.
But, it is a wrong choice to choose a radio simply without knowing what to look for it. Here is an article which will explain you the necessary details that you have to consider when choosing an emergency radio.
What Is A Weather Radio?
Weather radios are simple radios designed with innovative technology. They track alert signals in emergencies and are therefore also known as emergency radios. During natural disasters, the critical updates required for survival are broadcast by NOAA on frequencies that standard radios cannot receive.
The emergency radios are made to function without electric supply, and so they act as your helping hand when all the other technologies fail. With these radios, the updates received can be about the closure of roads, direction where the storm is about to hit, the location of the emergency shelters by the government, etc.
Only by knowing these details can you plan your course of action and take the next step, such as traveling to a nearby state or region where conditions are normal. An emergency radio is not only for emergencies, either; it also works as an ordinary radio for day-to-day entertainment.
What Should You Consider Before Choosing One?
Even though all the emergency radios are designed to receive signals correctly, they are not the same always. Each radio differs with its features. It is hence necessary to purchase a radio by considering the following essential features.
In general, the usable range depends on the operating frequency of the radio; the frequencies are generated by the radio station and by the radio itself. Before purchasing an emergency radio, be clear about your needs: do you only want to receive information, or do you want to send information too?

Those who only want to receive information need just a standard AM/FM receiver, which is the cheaper option. If you also want to transmit, you will have to consider a shortwave or two-way device, because those radios can both send and receive information.
An emergency radio is only truly useful if it can pick up NOAA NWR broadcasts. These signals are sent on seven different VHF frequencies that most ordinary radios cannot receive.
Apart from broadcasting the weather alerts, they also give information about the necessary arrangements made by the government for surviving the disasters. Also, a radio with a Public warning label is also capable of picking up signals of NOAA alerts. Hence when you purchase an emergency radio ensure that it has a NOAA label or a Public Alert label.
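For reference, the seven NOAA Weather Radio channels sit between 162.400 and 162.550 MHz in 25 kHz steps; the short snippet below simply lists them:

```python
# The seven NOAA Weather Radio (NWR) VHF channels, in MHz, spaced 25 kHz apart.
nwr_channels_mhz = [162.400 + 0.025 * i for i in range(7)]
print(", ".join(f"{f:.3f} MHz" for f in nwr_channels_mhz))
# 162.400 MHz, 162.425 MHz, ..., 162.550 MHz
```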
SAME stands for Specific Area Message Encoding. A radio with this feature can be programmed to receive alerts only for a specific part of the country or a particular locality; you set the radio to your area once, and programming it is straightforward. Make sure the emergency radio you purchase has this feature.
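To picture how SAME-style filtering behaves (a purely hypothetical sketch; real receivers implement this in firmware, and the codes below are placeholders), the idea is that the radio only sounds an alarm when an alert's location codes match the ones you programmed:

```python
# Hypothetical SAME-style filter: alarm only if the broadcast's location codes
# overlap with the codes programmed into the receiver (placeholder values).
programmed_codes = {"012086", "012011"}          # codes for your own area(s)

def should_sound_alarm(alert_location_codes):
    return bool(programmed_codes & set(alert_location_codes))

print(should_sound_alarm(["012086", "048201"]))  # True  - one code matches
print(should_sound_alarm(["048201"]))            # False - alert is for elsewhere
```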
Quality Of Sound
When you buy an emergency radio, make sure you get one with good sound quality, i.e. it should be clearly audible from at least a meter away.

So the best emergency radios have strong speaker performance. If you also want a radio for day-to-day use, look for equalizer settings for indoor and outdoor listening, a headphone jack, and the option to connect it to another output source.

An emergency radio should be able to receive signals without depending on mains electricity. To make that possible, radios offer several power source options that keep them running. They are:
Solar power- Solar power is used to ignite the radio with the help of solar light. Get more details about best solar emergency radio in this link.
Hand crank- These are radios that operate by converting the mechanical energy into electrical energy which is then stored in the built-in Lithium- ion battery.
AA or AAA batteries- These are widely used and help to provide a power source to perform a function.
USB- The radios that come with a rechargeable battery, mostly are capable of charging it with the aid of a USB cable or an AC adapter.
However, the best emergency radio will have several power source options to be used as alternatives avoiding the need of relying on a single source.
An emergency radio should be compact enough to fit in a bug-out bag (BOB) or even a shirt pocket when an emergency strikes. Most emergency radios are built to be lightweight, so they stay portable and can be carried anywhere when disaster hits.
However, the smaller radios may lack special features apart from the power source options. So, consider buying a large radio so as to get more benefits. Read our article on best portable ham radios to select better.
In the emergency situations, the radios are prone to get affected by dust, wind and water splashes. So, the radios built to meet emergencies must be capable of providing resistance against the wind, water, and dust.
To be durable and survive rough handling, the radio should have a foldable crank and antenna so it packs compactly into a bag. Radios with a rubberized exterior are easier to grip, and a protective roll cage keeps the unit sturdy even if it hits the ground.
What Are The Additional Features To Consider?
Apart from the above mentioned essential features, an emergency radio can also be chosen based on the other unique features. They include,
In case you land over a dark area when in an emergency, an emergency radio with a backup to keep the flashlight on can be beneficial to overcome darkness. The radios with flashlight are useful during power cut conditions. Further, these radios assist you greatly during climatic hazards. So, always afford a radio with flashlight in it. Read more about best emergency flashlight radio here.
Calendar And Clock
In the case of emergencies and if moved away from a particular place or country, a built-in clock or a calendar can help you in keeping track of the time and date.
A built-in app or software that can assist in monitoring the changing weather conditions and temperature by providing close readings might help you in emergency situations to safeguard yourself.
Even though most emergency radios can be charged through the ports provided, make sure yours is also compatible with standard charging cables and outputs.
To set the stations for receiving signals, tuning the radio has to be done accurately. As manually tuning the radio is a complex task, having a radio with digital tuning capacity adds advantage.
Ensure that the radio you purchase comes with the power outlets that are capable of charging a smartphone or a tablet in case of emergency and lack of electric supply. With this radios, you can easily communicate to the external world during power loss conditions. Read on our reviewed article on best emergency radio with iPhone charger for more details.
What Are The Best Brands Of Emergency Radios?
Although there are several manufacturers, who market the emergency radios, only a few brands are claimed to be the best; those best brands include the following,
Eton American Red Cross FRX3
The American Red Cross is a top rated emergency radio that has hand crank design with it. This radio is multi-functional devices equipped with super smart features.
The device is capable of receiving both the AM/FM signals and all the seven weather bands. With the help of built-in alert function, the radio will automatically provide alerts in case of emergency.
This radio has a bright LED flashlight design that can help you overcome the darkness in crisis periods. Also, the red flashing beacon is useful for indicating emergencies. It also has an alarm clock that provides emergency alerts.
As it is a hand crank model, the unit has a hand turbine power generator, as a source of energy. In addition to this, it also comes with the rechargeable AAA batteries that can get energy with the help of the solar panels fixed within the device. With the aid of the USB output, you can charge your smartphone for emergency situations. For more details, read on Eton american red cross radio review from this source.
The product is picked as the second best emergency radio. It comes with numerous input stations and smart features that are useful for surviving emergencies and disasters.
It is built with a brushless AC type generator that gets powered with simple hand cranks. Also, the device comes with three rechargeable AA batteries that gain energy with the help of the solar panels. These panels are attached to the radio.
In addition to this, the solar panels are capable of tilting according to the position of the sunlight. The bottom of the solar panel has 5 LED lamps that can function well in bad lighting areas. Further, the LED lights present on the side panels are capable of providing proper light to help you survive darkness.
With the aid of the USB jack, the unit can charge small devices such as smartphones and iPods. The alert function provides warnings, and the red blinking light helps you respond to emergency situations. The whole product is durable, making it a safe purchase, and it comes with a number of additional features. To learn about this product in full, read our Kaito KA500 emergency radio review at this link.
The Sangean emergency radio is small radio equipped with portability feature. It comes with a public alert label and is designed to be capable of receiving both the AM and FM signals. It uses two types of batteries for recharging; the alkaline and the NiMH batteries.
It features a wide LCD which is easy to read with the help of the adjustable backlight illumination. The radio can be preset to 19 favorite channels and also has an auto-scan feature to tune and set the frequencies digitally.
It is designed with a dual alarm which can help in getting up to listening AM/FM radio and is also compatible with the manual buzzer system. The alarm clock comes with a snooze function to set it to the ring after a few minutes.
With the help of the battery selector, you can switch between the required battery types. A backup capacitor stores settings such as the clock and alarm and restores them after a shutdown in case of power failure.
Further, the built-in headphone stereo amplifier helps in reproducing the sound with quality and clarity. It also comes with an antenna which is useful for telescoping the signals.
Midland ER 200 is another emergency radio with smart features. It is capable of receiving both the AM/ FM signals and is labeled NOAA. It operates with the help of the hand power turbine crank as a primary source of energy.
Apart from the hand crank mechanism, it can also run with the aid of the replaceable and rechargeable 2000 mAh Lithium Ion battery. With the support of the USB output provided, it can be used for charging various electronic devices that are small.
It comes with a built-in emergency LED flashlight that is brilliantly visible. The alert function also warns you of approaching disasters and emergencies. It is an advanced emergency radio; you can read our Midland ER200 radio review for more information.
The Ambient Weather WR-111B emergency radio is an innovative device with a range of useful built-in features. It can also track regular FM/AM signals, so you can rely on it for both everyday listening and emergency use.
This emergency radio is equipped with various control signals. All these control signals offer a user-friendly and flexible feature to all the people. In addition to this, the emergency radio uses a five-way charger mechanism. This product gets charges up with the help of solar power, Ac wall power, hand crank method, USB and DC power.
This product also used a highly durable display unit. This display unit offers complete performance, and so it can view various operations performed by the user efficiently. Further, the display unit also displays the volume level. Almost six control options are placed on the display.
Further, a high-quality LED flashlight is also used in the product. This flashlight operates on less power, and so you can store the product for a long time. Added to this, the lightweight, portable design offers complete flexibility to you.
The Sony ICFS79W is the next important radio on the list. The brand name Sony is very common, and the flexibility offered by this product is high. This radio has sleek, simple, unique design with it. Further, the radio is compact, and so the performance offered by this will be high for a long time.
Apart from this, the entire product is equipped with various control options. All these control options are used for tuning perfect product for a long time. Further, the display unit is powerful, and it views the tracked signal effectively.
This emergency radio also can track both the weather band signals and AM/FM signals from the surrounding. Hence, you can use this radio both for entertainment purpose and weather monitoring purpose.
This radio also comes with 20 preset memory pages. With this, one can access the required FM band signals easily. This product also has a timer and preset clock with it. This clock reduces the power usage and so battery withstand for a long time.
The Epica Digital emergency radio is a high-performance emergency radio designed especially for use in emergency conditions. It is a triple-band radio that can track weather alert signals as well as AM and FM broadcasts.
This radio is equipped with a solar panel; this panel provides the input power to the radio. Added to this, various other power options are also used within the radio, these power options include hand crank power, battery power, USB, DC power from mains, etc.
These power options add flexibility to the product. Further, various other features are also used within the product. They include LED lights, mobile phone charger, USB port, volume level, etc.
With the various brands offering emergency radios, it is quite normal to get confused between choosing the best one. Hence, you are advised to list out your priorities to enable choosing an emergency radio with best features.
You will have first to decide on the budget which you would like to spend on an emergency radio before looking for the products for purchasing.
If you would like to purchase an emergency radio for day to day use, you will have to ensure the proper sound quality and the speaker performance to listen to the music tracks. Avoid purchasing the radio without NOAA or public alert label.
Standard AM/FM receivers generally cannot pick up NOAA broadcasts, so do not rely on them for updates in extreme weather conditions. If you own a smartphone, check whether your radio has a built-in USB output so you can top up its charge.
You must ensure that the product is lightweight; so that, it can be carried along with your survival kit.
It should be built with features and software that are easy to use and program. Purchasing a radio with many features and using it only to receive alerts is a waste of money.
Look at the power source options that are provided as an alternative to being used when any one fails.
It is better if you go with the emergency radios that come with a LED flashlight that is bright and is capable of giving light to a distance of at least a meter. It is a necessary feature as in the emergency situations; you will end up in staying in a shelter that is provided by the government and it may or may not have proper lighting.
Consider the factors that we have mentioned above, as essentials, before purchasing an emergency radio. We have listed the best products based on the customer ratings and reviews. You can also do research on your own to find the suitable product that meets your needs. Read the manual carefully and preprogram the necessary settings to avoid last minute confusions. Good luck!
Emergency radios these days are built with various special features. One among them is the capacity of charging iPhone using the emergency radio with the help of the port provided for charging. Those who own an iPhone can get benefitted by purchasing this type of radios.
The best products of this category includes the following,
topAlert Emergency Radio
It is made of the high quality rubberized materials so as to be durable. This radio is light in weight and can be used at the time of camping, hiking. It can be used both indoors and outdoors. Through USB or solar panel, you can charge the radio. Further, you can recharge the iPhone, iPods, iPads, etc. The weather band is used to provide the alert message to the people based on the weather and disaster conditions.
It is available in various colors. This radio can be charged by various ways; with the help of solar panel, and by exposing it to the direct sunlight, by connecting the mini USB cable to the connector.
It plays a significant role in providing the information about the storms and cyclones. It is a type of power emergency radio through which you and your family can depend on it anywhere and in any time. Its compact size is suitable for the people to carry the radio anywhere. The maximum power consumed by the radio is 0.5 W.
It can be recharged by five ways such as the hand crank generator, solar panel, alkaline batteries, USB cable, and by rechargeable batteries. The USB port is used to charge iPhones, MP3 players, digital cameras, and other compatible devices. You can adjust the solar panel to get the maximum sunlight with the adjustable solar panel that is built with it. As it is made of the water-resistant materials, it won’t get damaged soon.
It consists of powerful LED flashlight which can offer the maximum light. You can hear the message even during the night time. This radio is recharged by three options such as by hand crank, solar panel, and by USB connection. It plays an essential role in charging the iPhones, iPads, iPods, and other devices compatible with USB ports. It secures you and your family by giving alerts about the unpredictable weather conditions. Further, you can enjoy your favorite music on this radio.
We believe the above information will help you select the right product for your home. Let us know whether this article helped you choose by leaving a reply in the comment box below.
The radios are used for transmitting and receiving information and hence it is widely used in many regions of the world. There are many types of radios; one among them is the emergency radio.
The emergency radio is used to alert the user if there is any disorder within the living area. These radios have many inbuilt features and performance. One such emergency radio with high performance is the Midland ER200 Emergency Weather Radio.
The Midland radio is also an emergency radio that can be used for detecting the weather effectively. If this radio detects any disorders in the weather forecasting, it alerts the user in advance. By this alert, the user can take many preventive measures to safeguard from the damages.
The features of Midland radio make this radio effective and easy to use. The important features of this radio are listed below.
Input power is supplied by the battery; both rechargeable and non-rechargeable batteries can be used. The Midland radio typically ships with a 2000 mAh rechargeable lithium-ion battery, which can be charged via the hand crank, solar power, or a mini USB cable.

Another important addition to this emergency radio is the flashlight. The Midland radio’s flashlight uses a Cree LED that can produce 130 lumens of light and works effectively even on low power. The LED operates in two modes: LOW serves as a backlight, while HIGH provides full illumination.
The next important factor of the Midland radio is the display. It has a large LCD display with a backlight option. The backlight option enables the user to view the display even in the dark condition. The display is used to view FM channel, time and weather channels. The time can be set either in 24 hours format or in 12 hours format. Other than this, the display also shows the frequency band number of the current station.
This radio also has a USB port. This port can be used for charging various external devices such as mobile phones, etc. Added to this, the port can also be used for charging the batteries in the radio with the help of the external power source. Commonly used external power source is the plug-in point or the Personal computer.
This radio can be operated in various frequency bands of operation. It has the capacity to access all the AM bands, FM bands, and the weather alert bands. If the radio detects any hazardous situations in the weather bands, it alerts the user immediately with a siren.
Other features of this radio are,
This radio has three ways powered batteries such as solar, USB and hand crank.
This radio can be used to charge external devices.
It is a portable device and it can be used in many places.
It also has a large bright backlight LED.
The antenna is also attached to this radio and the antenna used is a rotatable telescopic antenna.
This radio also has a headphone jack.
Also have a look at this YouTube video to know more about Midland ER-200 Emergency Radio.
Some important advantages of this product are,
The LCD display is effective and the clock operates periodically even in off-state.
This radio constantly informs weather conditions in any region.
It can track a wide band of radio signals with the help of the antenna.
The lithium-ion battery is powerful and is durable.
This radio offers advanced alerts so that the user can move to the safe regions.
A ham radio, also referred to as an amateur radio, is one of the popular devices used in electronics and communication. Ham radio is well known for enabling communication in emergency situations; people use it to speak across local areas, around the world, with aircraft, and even with spacecraft above the earth when other technology is unavailable.
Features Of Ham Radio Unit
Ham radio is both a fun way to learn about communications, whether through social media or educational institutions, and a vital life-saving tool in emergency situations. It is widely used for text, voice, image, and data communications over allocated bands of the RF spectrum, allowing information to be transmitted around the world and into space above the earth. Both formal and informal operators use ham radio to pass messages during emergencies.
Benefits Of Ham Radio Usage
It is highly equipped to work for the public safety measures, commercial broadcasting techniques, and two-way radio units of maritime and aviation sectors.
The ham radio communication system is well known for the pretty classy outlook, which is very much responsive in case of any emergency situations for alerting, rescuing and searching the people over the natural disasters.
The main feature of the ham radio is to reach the destination spectrum band of the national weather system frequency range, for alerting the weather disaster conditions.
The ham radio is very popular over the Military purpose, for indicating the weather forecasting and to rescue the people in case of emergency.
It helps to monitor the emergency situation of any natural disasters and communication transmission over the high range of frequency spectrum.
The ham radio constitutes the ability to differentiate the various spectrum frequency bands implemented over the channels to estimate the emergency situations.
The ham radio helps to upgrade the quality of radio frequencies during the natural disaster, which tends to affect the radio towers and other communication infrastructure elements.
The ham radio works based on the license provided by the authorized governmental organization to use the emergency factors.
It is more active for volunteering communications with the organization of public safety measures.
The ham radio helps to transmit the message of emergency situations over the long distances to safeguard the human’s life.
Most of the ham radio units are active in case of disaster damage in communication lines due to power outages and major destruction over the telephone and cellular systems.
The ham radio is upheld with the ability to trace out the possibility of emergency disorders in the earlier stage.
The ham radio is one of the resourceful operators which are highly tuned to handle the emergency cases such as an Amateur radio emergency services, radio amateur civil emergency, military amateur radio emergency services, and amateur-satellite services.
The ham radio unit set up helps to operate the communication networking system which is authorized and organized by the governmental sector for preventing the negative impact of the disaster on the citizens.
Here is a YouTube video that will guide you while using ham radios.
List Of Best Emergency Ham Radio Unit
The ham radio is mainly designed to use the radio frequency spectrum for the use of factors like message exchanging, self-training attitudes, and sports information over radio, wireless projects and other emergency communications of natural disasters. Most of the ham radio units work under the national weather service, to forecast the weather information to the public for analyzing the damage caused by the emergency situations.
Baofeng UV5RA Dual Way Ham Radio Unit
This product comes out with the dual PTT keys to incorporate the programmed codes such as DCS and CTCSS for getting the direct input over the operation of the band spectrum. The Baofeng ham radio unit is the dual way transceiver which helps to alarm the LED flashlight for the emergency indication through the built-in key locking facility. The Baofeng transceiver ham radio unit helps to provide the frequency range of up to 180 MHz over the power source of 4W.
This radio system is quite compact and economical and constitute of the VHF and regular FM band spectrum, which helps to save the battery operation to last for long duration. The Baofeng radio unit consists of the SMA and flexible antenna, Li-ion BL5 battery, AC adaptor, and the charging tray in which it is operated by the frequency stability up to 2.5ppm. It mainly works on the semi-duplex mode of operation such that this product comes out with the measure of the 8 inches of height, 4 inches in width and 6 inches in depth. This radio unit is a lightweight compound which weighs only up to 1.1 pounds.
This product tends to operate over the power range of up to 7W which is well suited for the long distance radio purposes. This ham radio unit is a dynamic walkie talkie system which is programmed through the fee cable of dual standby facility. This product is the highly qualified system which constitutes of battery range of 1800mAh, LCD screen display, noiseless operation and durable exterior function to operate the alert message over the rural regions. It works on the CTCSS and DSS decoding function to monitor the auto battery saving, battery expiry, and channel lockout problems.
It is highly equipped with the enhanced construction of solid design which helps to upgrade the communication function of alerting the citizens in case of emergency. This ham radio unit is simple to use over the operation mode in which helps to prevent the error formation over the emergency alert function. This type of ham radio is used for two-way radio unit which is popular with the people to provide the best performance in monitoring and public safety measures. This product comes out with the measure of the 7 inches of height, 7 inches in width and 2 inches of depth which weighs strong over the range up to 10.5 ounces.
This unit includes the ultra high polycarbonate resin panel in the front side which is a waterproofed material to be used on. It works over the high operating frequency range up to 50 MHz and high capacity battery range up to 440MHz. it consists of the LCD display for clear and easy read out of the frequency readings to quickly alert the people in case of emergency. The GPS antenna is implemented to display the current direction, speed and position of the person over the emergency situation.
This product comes out with the measure of 1 inch in length, 2.5 inches in width and 3.8 inches in height which weighs only up to the range of 9 ounces. It is mainly used for the FM and AM broadcasting, audio aircraft facility and public channels over TV station in which the built-in pressure and temperature sensor help to display the range of the altitude, pressure, and temperature.
This product constitutes the two-way ham radio units by means of the free programming cables to provide the high output power up to 10W. It includes the usage of the high capacity battery to stand up to the hours of 220 ranging which is further equipped with the belt clip to operate the channels and power sources effectively.
It uses the two antenna device to increase the tracking ability over the long distance. It uses the CTSS and CDCSS decoding functionality with the auto battery saving facility to encode the unit with the DTMF process. It tends to provide the high-frequency range of up to 470 MHz which is highly mated with the lithium battery for working operation. This product comes out with the measure of the 11 inches of height, 1.2 inches in width and 2 inches in length which weigh over the range up to 9 ounces.
Radios are devices generally used for transmitting and receiving the frequency band signals. There are many types of radios in the world; one important type of these radios is the emergency radio.
The emergency radios are used to alert the person if there are any natural calamities. It alerts the user with the warning signal so that he can move to a safe place away from the destruction. These emergency radios are widely used by the trekking and shipping persons.
American Red Cross Frx3 Weather Alert Radio
The American Red Cross weather alert radio is also a type of emergency radio that can be used to detect the band signals from the region. It tracks the weather band signal in the region and offers a periodic report to the user about the current and upcoming weather in the region.
This radio has many inbuilt features and this feature makes this product effective and easy to use. Various inbuilt features of these products are given below.
Also have a look at this YouTube video to know more about American Red Cross emergency radio by eton.
The first important feature of this radio is the power. The input power to this radio is offered with the help of batteries. This radio has rechargeable batteries that can be easily powered up with the help of 3 powering methods; they are solar power, hand crank, and USB port. Other than this, the batteries of this radio can be replaced with a new powerful battery to obtain high power. This radio can be used in any adverse conditions because the power in the radio can be maintained effectively in any situations and conditions.
This radio also has a USB port attached to it. This USB port can be used for charging the radio from the external power devices such as plug-ins and PCs. Other than this, the port can also be used for charging external devices like mobile phones, power banks etc.
This product effectively operates on various bands of operation. It can detect the weather bands, FM bands, and AM bands in the particular region and display them to the user. The weather band is monitored continuously and whenever any disaster is reported by the weather band, the alert signal is produced to alert the user.
The display used in this radio is an LCD display with a backlight. The display effectively views the detected frequency band of the signal. Further, the amount of charge left within the radio is also displayed on the screen. The backlight option helps to view the display during night.
This radio also has a flashlight option. This flashlight is operated by the power of batteries. The flashlight is made up of powerful LED and it glows based on the power of the radio. Other than this, it also has a beacon LED that blinks whenever the radio detects an alarm signal.
Other features of this radio are listed below.
The buttons of this radio are illuminated and they glow even in dark regions.
This radio can also be operated in DC input power.
It has an auxiliary headphone port in which the headphone or an earphone can be connected.
Ni-MH rechargeable batteries are used in this radio for powerful operation.
This radio has a rugged rubber shell that is durable.
The design of this radio is small and compact.
It is lightweight and portable.
This radio has a long battery life.
The flashlight level can be adjusted easily.
This radio offers a loud and clear audio alert signal.
It comes with special features to receive several frequencies and seven weather channels, and it reports weather conditions very accurately. The iRonsnow emergency radio performs well thanks to these features and its high-quality construction.
Here is a YouTube video that gives complete information about iRonsnow is-088 emergency radio.
The emergency radio can be charged by using three ways such as USB connection, hand crank, and with the solar panel, by directly exposing to the sunlight. With the help of the USB port, people can charge the smart phones, iPads, iPods, and other devices.
As it weighs less than 0.5 lbs, it is portable to be carried anywhere and also hold the radio at the time of hiking, camping, etc.
It is used to provide the light at the time of power outage. So, even in the darkness, you can hear the news or listen to the music.
The price of the emergency radio ranges from $ 15 to $ 20.
It is used to provide an alert message to the people about the poor condition of the weather, disaster, storm, cyclone, and other unpredictable conditions on the surrounding areas.
This emergency radio cannot charge a smartphone to a full battery; using the hand crank supplies only enough power for about a 10-minute call.
These are the various features and benefits of the product. If you are satisfied with them, purchase it for using in emergency situations. If you like this article, you can share it in the social networking sites. Also, write to us about the working efficiency of the product in case you purchased one.
The weather radio is a highly qualified device which is capable of receiving the NWR frequency band spectrum during the broadcasting technique. The weather radio system helps to keep the humans in a safe state before being trapped by the harmful disasters.
The weather radio service is provided by the National Oceanic and Atmospheric Administration (NOAA) to announce upcoming weather disasters and other kinds of emergencies. Weather radios can warn of natural disasters such as avalanches, oil explosions, forest fires, and earthquakes. The NOAA weather radio system analyzes an approaching calamity and transmits the alert message through standard AM and FM weather radio broadcasts.
Benefits Of Weather Radio System
The weather radio is the special type of radio that helps to receive the emergency alerts about the harmful weather events, terrorist threats, and other natural disasters.
When the emergency situation is detected, the weather radio system starts to alert through the audible warning sound of high range to safeguard the human’s life.
The weather radio system helps to broadcast the alert message to the citizens by means of a governmental organization through the high dynamic range of radio networking systems.
The weather radio is highly programmed to operate over the AC power in order to activate the upcoming alert communication through the local geographical area.
During the on and off state of the weather radio system, it comes out with the efficient feature to remain silent in order to detect the disaster and emergency alert messages.
This radio unit is powered up by the inbuilt rechargeable batteries or generators to create the safe shelter region against the upcoming disasters.
The weather radio system is highly enriched with the feature of alerting the citizens to remain silence and active over the dangerous events to safeguard their life.
Also have a look at this YouTube video to know more about the benefits of emergency weather radio.
Basic features of the weather radio system
The main basic feature of this weather radio system is to alert the people during the time of natural disasters and other kind of emergency cases. The weather radio system is well known for its alerting feature which helps to remain silent to detect the emergency arrivals. After getting the alert message, the weather radio system helps to activate the radio for broadcasting the warning message for the entire region.
A weather radio is also referred to as an emergency weather alert radio because of its role in protecting lives. Weather radios use Specific Area Message Encoding (SAME), an upgraded technology that lets people program their radio to receive alert messages for their area before a disaster arrives. A weather radio typically includes a solar panel, built-in replaceable and rechargeable batteries, and a hand crank dynamo generator so it can keep operating when AC power is unavailable.
This weather radio system constitutes the feature of charging the cell phones, iPod, and laptops directly from the radio with the use of USB cords. The LED lights are inbuilt to flash over the emergency cases and to alert the disaster situation by blinking over the device even if the power is shut down. The weather radio system helps to tune the advanced feature over the radio digitally to forecast the weather condition within a fraction of seconds.
List Of Best Emergency Weather Radio Units
Most of the weather radio units are powered up by the AC power adapters which are crucial for alerting the disaster arrival for the people over the long duration period. The down listed weather radio system is popular in the schools, colleges, homes and business institutions to receive the disasters warning message in advance to safeguard the people. This weather radio unit is a portable radio which is designed to be small and light weighted to carry over anywhere to forecast the weather condition for disasters arrivals.
American Red Cross FRX3 Weather Alerting Radio Unit
This product helps to alert the bad condition of the weather forecasting through the FM and AM broadcasting, which further helps to charge the other electronic devices, by means of USB cords. It helps to receive the weather band alerts in which this weather radio unit constitute of the multiple options for the power sources, AAA battery, solar panel and hand crank unit.
The LED is used for the purpose of the flashlight and emergency indicating options, which work based on the solar and hand powered weather radio alarm clock. This system is highly equipped with the auxiliary input for playing the MP3 record tracks.
The LED flashlight over this radio unit helps to blink on the dark region around the solar panel for warning the people in the emergency situations. This product comes out with the measure of the 7 inches of height, 6 inches in width and 3 inches in length which weigh up to the range of 1.5 pounds.
Kaito KA500 Five-Way Powered Weather Alerting Radio Unit
This product combines AM, FM, and weather alert bands, covering AM from 520 to 1717 kHz, FM up to 108 MHz, and all seven standard weather band stations. It can be wound up via the dynamo cranking unit, which recharges the built-in Ni-MH battery (around 120 turns of the crank provide a useful charge), and a PLL crystal-controlled tuning circuit keeps reception stable for alerts about upcoming weather events.
It is highly mated with the recharging battery unit, which may be used for charging the Laptop, iPod or cell phones. It consists of the internally designed generator that helps to recharge the inbuilt Ni-MH battery unit for operating the radio, lamp light for emergency reading and flashlight in case of power shutdown condition. This product comes out with the dimensional measure of 8inches of height, 2.5 inches in width and 5 inches of depth which weighs only up to the range of 1.5 pounds.
This system is one of the portable weather forecasting devices which effectively work over the weather channels servicing. This radio unit is powered up with the solar panel in case of the absence of the power shutdown. It is efficiently used for charging the electronic devices through the USB cable and to forecast the FM and AM frequency band spectrum over the LED display.
This compact radio unit is used for detecting the multi-functional weather conditions, which is best suited for the hiking, camping, and vacation trips. It helps to detect the AM, FM and NOAA frequency band spectrum range which helps to tune the antenna and channels to alert the worst condition of weather.
The large capacity battery is highly mated with the operation of three charging ways such as solar panels, USB charging, and hand crank handling. The LCD panel is used to display the clock functioning, speaker and headset outputs and altitude ranging. It consists of the brightened LED flashlight to indicate the emergency cases such that this product outcome with the measure of the 7 inches of height, 3 inches in width and 3.3 inches in length. It is highly qualified and approved by the NOAA organization which weighs only up to the range of 15 ounces.
This system is usually powered up through the solar panels, USB cord or Hand crank which consists of the power bank and LED flashing in the case of emergencies. This compact device helps to monitor the weather broadcasting over AM and FM frequency power spectrum. This radio system consists of three LED flashlights to charge the phones, laptop, and iPod.
It operates on a 2000 mAh lithium battery and is equipped with a TF card slot for playing music tracks. The unit is very lightweight and easy to carry while traveling, and its high-quality rubber housing acts as waterproofing in emergency situations. The product weighs only 1 lb and measures 5 inches in height, 2.5 inches in width, and 1.8 inches in length.
These are the best emergency weather radio units based on the customer reviews and ratings. Purchase one which suits your needs and write to us about how this article helped you in picking the right product.
The solar emergency radio unit comes out with the ability to charge over by the sun without the presence of power sources. The solar emergency radio unit does not need any power supply through electricity and the batteries to operate the device, which is very much effective in the case of any emergency situations. It is easy to charge the crank dynamo generator inbuilt in the solar radio system through the direct rays of sunlight during the emergency situations.
The solar emergency radio system comes out with the numerous amounts of options which help you to choose the best type of solar radio. The solar radio is the clear solution for the outdoor purposes such as camping, bike riding, hiking or any other traveling purposes. The solar radio acts as the emergency kit which can be powered during the expiry of the battery, by means of the direct sunlight exposure. The solar emergency radio unit is the powerful tool to promote the range of people living in the remote sensing areas.
The solar emergency radio system is one of the portable radio units which are powered up by means of the photovoltaic panels to be used over the remote sensing areas. The solar radio system helps to eliminate the usage of replaceable batteries to operate the system under the limited amount of cost. This solar radio system does not need any external plug-in which can be useful over the area of powerless grid or generators.
Features Of The Solar Emergency Radio System
The solar emergency radio system consists of the various ranges of featured activities to activate the alert message of the disasters without using any power source or batteries.
The solar emergency radio system constitutes the ability to work over the range of detecting the harmful natural calamities and other emergency situations without the use of the recharging with the electric power sources.
The solar radio unit is well known for its outdoor activities which are free to access against the shock resistance and weather proofing conditions.
The solar radio unit helps to access the weather forecasting and the emergency alert messaging ability to differentiate the life and death factor in a fast manner.
The charging option for this solar radio unit is better when compared to another type of radio system.
This radio unit is more benefited in any kind of emergency situations which is more useful to give the alert message over disasters for safeguarding the human’s life.
Most of the radio system works based on the batteries and the hand crank generators which is not sufficient to give the alert message in case of emergency but this solar radio is more effective to alert which also uses the sun exposure to charge over the battery in case of power shutdowns.
Have a look at this YouTube video to know more about solar crank emergency radio.
List Of Best Solar Emergency Radio System
The solar emergency radio system consists of the solar panels which tend to provide the radio with the limited amount of power to run. The down listed solar emergency radio units help to offers the high dynamic range of broadcasting than the others types of radio unit which is the cheapest model version well suited for the camping, traveling and various purposes.
Topalert Solar Emergency Radio System
This radio unit is best suitable for any kind of emergency kit, which comes out with the AM and FM weather broadcasting purposes. The TopAlert solar radio system constitutes the thermometer, mobile charging port, USB cord adaptor, and LED flashlight display. This radio unit can be charged with the solar panel, three AA batteries, and handheld crank generator, to notify the emergency alert messages. This compact radio unit is small in size and a light weight component, which is easy to carry while traveling.
A siren is included with this radio to signal low charge and emergencies. The product weighs only 800 grams and measures 24 inches in height, 8 inches in width, and 11 inches in depth. It is equipped with a 12 V DC adapter, a siren and thermometer, and a telescopic antenna with a built-in speaker.
This radio unit is the compact designed alerting system, which is mainly powered up by the use of the solar panel in case of the emergency power shutdown. This system is the three-way charging model which tends to charge the iPhone, iPod, MP3 player, tablet, and laptop. The siren is included in this system to help the user to identify the emergency situations.
It includes the broadcasting features of shortwave, FM, AM and weather band spectrum which further consists of the reading lamp and earphone jack to improve the product quality. The system mainly works over the usage of the lithium battery to charge over the DC adapters. This product highly outcome with the measure of 10 inches of height, 2 inches in width and 5 inches in length which weighs only up to the range of 2 pounds.
Vonhaus Solar Dynamo Hand Crank Emergency Radio Unit
This system helps to forecast the worst weather disaster by means of broadcasting the AM, FM and weather band. This compact radio unit helpful for the multi-tasking weather channels to determine the emergency situation over the camping, hiking and vacation trips. The AM and FM frequency band consists of the seven weather alerting band to notify the emergency cases by means of the inbuilt channels and antennas.
This radio unit is easy to tune up over the large capacity battery of 2200mAh. The LCD screen helps to display the direction, altitude and the position of the person away from the disasters which are further alerted by the inbuilt LED flashlights. This product outcome with the measure of 8 inches of height, 3 inches in width and 3.5 inches in length which weighs only up to the range of 15 ounces.
Radios have created a breakthrough in the field of telecommunication. With the help of the radios, one can easily gather the information of the outside world. There are many innovative technologies and features added to these radios such as weather forecasting, solar powered, LED lights etc.
The radio that is used to determine the emergency condition is termed as the emergency radio. The emergency radio can be operated in any place and in various hazardous conditions. The radios operate efficiently with the help of frequency bands.
Emergency Radio With TV Band
Generally, the emergency radios are operated with the help of frequency bands such as FM band, AM band, WB band etc., there are radios that can also be operated by tracking the TV channels. The best among them are listed below.
Kaito KA007, 4 Way Powered Portable Emergency Radio
The Kaito emergency radio is a 4-way powered radio that operates on several bands: it can track the AM band, the FM band, WB, and TV channels 2-13. Its four power sources are solar power, battery, AC adapter, and dynamo. A built-in dynamo cranking system recharges the internal battery pack. The radio is widely used by military personnel to track climatic situations and conditions within a region, and it uses a rechargeable, reusable NiMH battery.
GPX TVWB534SP 5-Inch Black-and-White TV with AM/FM Tuner, Weather Band Radio
This emergency radio is used for detecting approaching disasters in a region. It alerts the user before the event arrives so that the user can move to a safer area, which makes it a strong choice for tailgating, backyard use, camping, and home and vehicle emergencies. Other features include a removable flashlight, dual lantern, siren, compass, and thermometer. The siren sounds whenever the radio tracks a disaster in the nearby area. It can tune frequencies from the FM, AM, WB and TV bands, and it also has a molded carrying handle and a removable shoulder strap.
This is also an emergency radio with a 5-inch black-and-white television. It has a solid, distinctive design and can track several frequency bands, including the FM, AM, weather and TV bands. The radio has three LED lights for easy operation and includes hand-crank rechargeable backup power. It automatically alerts the user whenever a disaster signal is detected.
ToolUSA 9 Bands Handheld High Sensitivity Emergency Radio Receiver With Antenna
This is a battery-operated emergency radio for tracking emergency situations and conditions. Its high-sensitivity antenna gives a wide detection range and can effectively track AM, FM, WB and TV band signals. In an emergency it alerts the user with a clear tone, and it is portable enough to suit daily use.
Better Living Through Technology
By now it is common knowledge that it is going to take months and even years for the Gulf region to recover from the largest environmental disaster in U.S. history. Hopefully, the efforts to stop the oil leak will hold and the cleanup will be successful. New lessons learned from this disaster are being combined with earlier lessons learned from previous oil spills. The challenge now is to clean up oil contamination without using chemicals that may have a worse effect on the environment. An environmental technology company that successfully used oil-eating microbes for post-capture oil treatment in soil and water to aid in the Santa Barbara oil spill in the late 1960s is working with entrepreneurial companies to formulate products that could significantly aid the cleanup process without the harmful environmental effects of conventional processes. It does not make sense to clean up oil contamination using chemicals that may have a worse effect on the environment.
In another arena, a panel of IT policy and technology experts have expressed concern that the race to meet standards for electronic health records (EHRs) and find ways to exchange them regionally and nationally poses a great risk to privacy and identity. They warn that identity management and access authentication security needs to be “baked-in from the start, not tacked on at the end.”
Using a smart card-based patient card to more accurately link individuals to their medical and administrative records is seen as one way to address this, because the public can easily use it to authenticate precisely who they are. This is a huge transition for the healthcare industry: away from the current model of keeping medical records at the place they are created and somehow assembling the information later when it is needed, which has its flaws, and toward something more like a health bank, essentially an electronic safe deposit box that provides a secure repository for an individual's comprehensive health record. The patient would strictly control access to the information, guaranteeing both privacy and consent.
An initiative coming from the private sector is attempting to address the issues surrounding healthcare identity by establishing a Voluntary Universal Healthcare Identifier. This grass-roots approach would give individuals their own “medical ID for life” and eventually use it to uniquely identify their own electronic health records.
An exciting breakthrough in healthcare technology has come from a collaboration between artificial intelligence (AI) technology, the Medloom real-time clinical support system and its high-performance object database platform. This new AI software, named Ardvarc, is viewed as a leap forward in drug safety with the potential to better the human condition by saving lives and reducing healthcare costs. Ardvarc is based on an AI technology known as Association Rule Discovery (ARD). Ardvarc vets data stored in safety registries, including the FDA's Adverse Event Reporting System (AERS), to discover significant relationships between drugs, or combinations of drugs, and adverse events in real-time mode. In the first run of Ardvarc against a three-month FDA dataset of roughly 50,000 reports, the AI system discovered in a single run the same "substance-adverse event" relationship rule that had previously required years of effort by researchers reviewing patient cases. The quality of the findings matched those published in the peer-reviewed literature, emphasizing that Ardvarc is capable of discovering meaningful red flags in volumes of data that no healthcare professional has time to read or analyze.
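To make the idea of association rule discovery concrete, the sketch below mines simple drug-to-adverse-event rules from a list of reports using support and confidence thresholds. It is only an illustration of the general ARD technique described above; the field names, thresholds and scoring are hypothetical and are not Lead Horse Technologies' actual implementation.

```python
# Minimal association-rule pass over drug/adverse-event reports (illustrative only).
from collections import Counter

def mine_rules(reports, min_support=0.001, min_confidence=0.3):
    """Each report is a dict: {'drugs': set of drug names, 'events': set of adverse events}."""
    n = len(reports)
    drug_counts = Counter()
    pair_counts = Counter()
    for r in reports:
        for d in r['drugs']:
            drug_counts[d] += 1
            for e in r['events']:
                pair_counts[(d, e)] += 1
    rules = []
    for (d, e), c in pair_counts.items():
        support = c / n                      # how often the pair occurs overall
        confidence = c / drug_counts[d]      # how often the event follows the drug
        if support >= min_support and confidence >= min_confidence:
            rules.append({'drug': d, 'event': e, 'support': support, 'confidence': confidence})
    return sorted(rules, key=lambda r: r['confidence'], reverse=True)

reports = [
    {'drugs': {'drugX'}, 'events': {'nausea'}},
    {'drugs': {'drugX', 'drugY'}, 'events': {'nausea', 'rash'}},
    {'drugs': {'drugY'}, 'events': set()},
]
print(mine_rules(reports, min_support=0.3, min_confidence=0.5))
```

A production system would of course add statistical disproportionality measures and run against the full registry rather than a toy list, but the generate-and-filter shape of the computation is the same.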
What do all of these stories have in common? We want to find ways of using technology to solve problems on our planet without having the technology itself make things worse.
As always, it is a pleasure to find and bring these informative articles to you. I hope you will enjoy this Summer 2010 issue of TMIS eNewsletter.
I am grateful to be in a collaborative business with many talented and skilled professionals. Additional feedback and recommendations for our products and services at TM Information Services are always welcome.
- Mary M. McLaughlin
From the Front Page of TMIS News
Click on links below to view Full Stories.
BUGS and Pacific Sands, Inc. Ready Products to Aid Oil Spill Cleanup Efforts
U.S. Microbics (aka BUGS), an environmental technology company, and Pacific Sands, Inc., a manufacturer of a broad range of environmentally friendly cleaning products, are jointly formulating products that could directly benefit victims of the recent Gulf Oil Spill, one of the worst environmental disasters of our time. The products could soon be available for consumer and commercial use and could be used on marine structures and wildlife contaminated with gooey oil.
Using components of the Pacific Sands Natural Choices product line and the oil spill cleanup experience of BUGS management coupled with direct input from industry experts and technologists, the companies hope to introduce one or more products that can help clean up oil spill residue without using additional solvents, dyes, and chemicals that irritate the skin, require special equipment and training to apply or may harm the environment. The developed products would be available to consumers on the www.EcoGeeks.com website and to industrial and commercial clean up users on a BUGS website to be announced.
Robert Brehm, CEO of BUGS, commented, "The BUGS technology was successfully used on the Santa Barbara oil spill in the late 1960's and I believe there are cleanup lessons we have learned that are applicable to the Gulf Oil Spill particularly with respect to the use of oil-eating microbes for post capture oil treatment in soil and water. In the past we used surfactants and degreasers with oil spill cleanup operations and the availability of the natural products from Pacific Sands and commercially available microbe products could significantly aid the cleanup process without harmful environmental effects of conventional processes now being used. Our goal is to have simple and effective natural products that can be easily used by the consumer and by commercial cleanup crews."
Expert Panel Speaks Out on Need for Privacy, Access and Identity for Healthcare Information
Princeton Junction, New Jersey
Privacy, access and identity are vital to the Obama administration's effort to modernize the nation's healthcare information infrastructure, a panel of policy and technology experts told healthcare industry leaders, public policy makers and policy-influencing organizations at a National Press Club briefing in Washington, DC. The event was co-hosted by the Smart Card Alliance Healthcare and Identity Councils and the Secure ID Coalition. A video of all of the presentations from the healthcare identity and privacy briefing is available online. The topic is timely because healthcare IT is getting nearly a $19 billion boost from the American Recovery and Reinvestment Act of 2009. The speakers agreed the sense of urgency and massive investment are good news, but that time pressure might also cause problems.
"There is a risk we will focus too much on standards for electronic health records (EHRs) and ways to exchange them at the expense of sound privacy and identity models," said Randy Vanderhoof, executive director of the Smart Card Alliance. "The critical issues are getting control over who has access to healthcare information, and correctly tying the right individual to his or her health records. That means identity management and access authentication security have to be baked-in from the start, not tacked on at the end."
Correctly identifying patients and their records is difficult just within a single hospital, but gets far worse between multiple institutions, according to a leading practitioner and specialist on the subject, Paul Contino, vice president, Information Technology, at Mount Sinai Medical Center in New York. He cautioned that identity management must be addressed correctly up front or "we're going to have problems with the linkages of electronic medical records" on a regional or even national basis. Mount Sinai revamped patient registration processes and implemented a smart card-based patient card to more accurately link individuals to their medical and administrative records.
Who Are You? Establishing Trust in Digital Identities
Princeton Junction, New Jersey
The need for trust in identity is at the foundation of our society and economy. How to establish that trust, protect it, and tie it uniquely to an individual, particularly in online transactions, were the topics that dominated the many identity sessions at the Smart Card Alliance Annual Conference, held recently in Scottsdale, Arizona.
The first problem is how to prove an identity. "We have a big hole in the middle of this information identity highway; it is called foundational credentials," said Mike O'Neil, executive director of the North American Security Products Organization (NASPO). O'Neil points out that the commonly used base breeder documents -- birth certificates, driver's licenses, and Social Security cards -- were never designed to be identity documents and are easily falsified. Under the recommendations of ANSI, NASPO is developing a new identity verification standard and process that could be used to establish more trusted identities for individuals.
The next set of problems, using that identity, tying it uniquely to its owner and protecting it from theft or abuse, has become a critical issue in many sectors. The need for cybersecurity makes this more acute as more transactions move online, driven by the underlying economics of the Web. "The Web is unparalleled at driving down costs, which is why everything is going to the Web and everything on the Web is going to the cloud. The problem is as you go to the cloud you increase risk," said Mike Ozburn, principal, Booz Allen Hamilton, and keynote speaker at the Alliance event. "Security has to be as implicit, as built-in, and as architectural" as the cost dynamics that are driving everything to the Web and the cloud, Ozburn argues.
The Obama administration is taking the lead in this area with the National Strategy for Secure Online Transactions initiative, which is expected to facilitate the establishment of a broad identity ecosystem that can provide an online trust framework. "Last November we published the ICAM Segment Architecture, which was the first attempt at a governmentwide process for identity management," said Judy Spencer of the GSA Office of Governmentwide Policy. That document primarily focuses on the federal government as both a provider of identity and a consumer of identity. According to Spencer, the new initiative takes the principles of identity authentication and management in that work and moves it to the next level, where the federal government may not even be a party to the transaction at all.
Artificial Intelligence Added to Medloom Clinical Decision Support System
Junction City, Kansas
Lead Horse Technologies has announced the addition of artificial intelligence (AI) technology to its Medloom clinical decision support system. Medloom runs on the InterSystems CACHE high-performance object database platform. CACHE provides the high performance, rapid development environment and advanced features needed for the real-time decision support that characterizes the Medloom system, according to John M. Armstrong, Ph.D., Lead Horse Technologies Chairman and CEO.
Dubbed Ardvarc, the new patent-pending AI software is already viewed by some industry experts as a potential leap forward in drug safety. "Lead Horse Technologies is unique. They've developed terrifically novel software that, in my opinion, would give valuable early signals about drug safety issues... signals that just haven't been available until now," said Charles L. Bennett, MD, Ph.D. and the Center of Economic Excellence Endowed Chair in Medication Safety and Efficacy at the South Carolina College of Pharmacy.
"There is no more important issue than pharmaceutical safety, but many people don't really pay attention to it," Bennett continued. "Most clinicians assume that drugs are vetted by the Food and Drug Administration (FDA) or the pharmaceutical manufacturer. Simply stated, the manufacturer has a difficult time and, while the FDA tries its hardest, there just aren't enough people to do the work completely."
Washington State's New Hands-Free Cell Phone Law: Businesses Face Unique Challenges and Issues Regarding Compliance
Following Washington State's new law that makes hand-held cell phones and text messaging while driving a primary offense, many businesses in the state face their own unique set of challenges in complying with the law: how to handle employees who spend a significant part of their workday on the road.
Two Washington companies, DialPro Northwest and Personnel Management Systems, are teaming up to help businesses keep their employees safe and connected to the office with expert tips in a new guide that helps businesses navigate the unique challenges and issues many face in keeping their employees safe, productive and in compliance with the law.
"Most company HR policies are out of date and need to be updated," says Jack Goldberg, president, Personnel Managements Systems, a leading provider of outsourced human resource management services. "We encourage businesses to review their policies in light of current employee cell phone usage and the law. Employees should minimize the amount of time they use the cell phone while driving on the job, and to always stay safe by using headsets or hands-free devices when they absolutely need to use the phone."
"Unlike individuals, many businesses have employees who have to stay connected to the office by phone and email while on the road," says Dennis Tyler, president of DialPro Northwest, a leading provider of voice messaging and unified communications solutions. "It is not always feasible for employees to pull to the side of the road. Sometimes a quick response is required to respond to an email message or make a phone call. There are a whole group of business-oriented speech recognition tools that keep employees both safe and connected to the office while offering full compliance with the law."
OPTIMIZERx Connects With Microsoft HealthVault to Reduce Medication Costs
OPTIMIZERx has announced that its Web-based consumer information site, which provides information about prescription medicine savings, is now integrated with Microsoft HealthVault, a personal health application platform. For HealthVault users that store their medication inventory within their personal HealthVault account, this connection enables access to prescription coupons, cost-saving notifications, co-pay savings, and trials, which can help them better adhere to treatment regimens in a more affordable way, and OPTIMIZERx users can upload their medication records into their HealthVault account, enabling them to create a more comprehensive health profile.
"We are excited to be working with Microsoft HealthVault to further expand our reach and ability to help more consumers better afford their rising prescription costs through available prescription savings and other support programs," stated David Lester, Chief Executive Officer of OPTIMIZERx.
"We developed HealthVault with the goal of engaging consumers as active partners in their health and wellness management," said David Cerino, General Manager, Microsoft Health Solutions Group. "This collaboration enables HealthVault users to better manage their medication spending and provides users of OPTIMIZERx with access to a broad network of health and wellness services within our application ecosystem -- enabling decisions based on a more robust, longitudinal view of their health history."
The connection between OPTIMIZERx and Microsoft HealthVault has been established and consumers can begin saving on their prescribed medications as entered into their HealthVault accounts today!
Growth in Personal Health Record and MyESafeDepositBox Membership
San Francisco, California
Robert H. Lorsch, Chairman and CEO of MMR Information Systems, Inc. announced to an audience gathered for the Health Technology Investment Forum in San Francisco that the Company is projecting membership growth from its MyMedicalRecords Personal Health Record (PHR) and MyESafeDepositBox services in excess of one million members this year. This is in addition to the PHR growth from patients who take advantage of the free patient portal at MMRPatientView integrated with its MMRPro service for physicians and subsequent patient upgrades. Lorsch also announced that MMR is ready to deploy Personal Health Records to the millions of Americans who are expected to be faced with the management of healthcare costs resulting from the signing of the health care bill and related legislation.
Thunder Mountain the On-Line Store
Motorola T215 Wireless Car Hands-free Kit
When you are on the road, you don't want anything to slow you down or hold you back, especially not your technology. The Motorola Bluetooth in-car speakerphone T215 provides the longest talk time available, up to 36 hours.
Order yours now!
Skosche CBHVA Handsfree Car Speakerphone for Mobile Phone
Voice announce Bluetooth Speakerphone - handsfree speakerphone easily mounts to your vehicles visor - Voice announce caller ID tells you who is calling and allows you to keep your eyes on the road - DSP echo cancellation ensures a crystal clear communication - Rechargeable lithium ion battery with 12 hours of talk time and 1000 hours of standby - includes car charger and USB charging cable.
Order yours now!
Bounty Hunter TK4BH Tracker IV Metal Detector
The stylish Gold Digger Metal detector will detect all kinds of metal from iron relics, coins and household items to precious metals like silver and gold coins. Streamlined in appearance, with only two operating controls and a mode selection switch, the Tracker IV has eliminated the most difficult aspect of metal detector operation: Ground Balancing. With built-in Automatic Ground Trac, the Tracker IV balances for mineralization while you detect.
Order yours now!
TomTom 1EL005201 XL 340-S Automobile Navigator GPS Unit
Live Automobile Navigator voice prompt, Dashboard mountable, Bluetooth, 4.3 inch color LCD touch screen.
Order yours now!
TM Information Services is proud to partner with Career Step to promote this excellent remote training course in Medical Transcription. Learn how to get started in a professional and rewarding work-from-home career! Career Step is a national leading training institution for exciting, rewarding careers in the healthcare field. Students can study to work at home as medical transcriptionists or medical coding specialists. Please fill out our application form to receive your FREE information about Career Step's quality medical transcription training.
TM Information Services has partnered with Verio in order to provide the best quality Web hosting and domain registration services to our customers. Whether you only need your own company domain name with POP email addresses, or a complete Web hosting package, who would you rather trust? Verio is the global triple crown winner in performance, reliability and support. More businesses worldwide -- over 500,000 -- trust Verio than any other company, anywhere.
Please visit our partner site for more information on how to order your Web hosting services.
If you can be efficient, you can be effective!
TM Information Services P.O. Box 1516; Orting, WA 98360
TMIS Web site: http://www.tminformationservices.com
Thunder Mountain the On-Line Store: http://www.thundermount.com
Mack's Outdoors: https://www.tmis.org/macks-outdoors/
Copyright © 2010 TM Information Services. All Rights Reserved
Computational creativity (also known as artificial creativity, mechanical creativity, creative computation or creative computing) is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:
- To construct a program or computer capable of human-level creativity.
- To understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.
- To design programs that can enhance human creativity.
The field of computational creativity concerns itself with both theoretical and practical questions about creativity. Theoretical work on the nature and proper definition of creativity is performed in parallel with practical work on the implementation of systems that exhibit creativity, with one strand of work informing the other.
As measured by the amount of activity in the field (e.g., publications, conferences and workshops), computational creativity is a growing area of research. But the field is still hampered by a number of fundamental problems. Creativity is very difficult, perhaps even impossible, to define in objective terms. Is it a state of mind, a talent or ability, or a process? Creativity takes many forms in human activity, some eminent (sometimes referred to as "Creativity" with a capital C) and some mundane.
These problems complicate the study of creativity in general, but certain problems attach themselves specifically to computational creativity:
- Can creativity be hard-wired? In existing systems to which creativity is attributed, is the creativity that of the system or that of the system's programmer or designer?
- How do we evaluate computational creativity? What counts as creativity in a computational system? Are natural language generation systems creative? Are machine translation systems creative? What distinguishes research in computational creativity from research into artificial intelligence?
- If eminent creativity is about rule-breaking or the disavowal of convention, how is it possible for an algorithmic system to be creative? In essence, this is a variant of Ada Lovelace's objection to machine intelligence, as recapitulated by modern theorists such as Teresa Amabile. If a machine can do only what it was programmed to do, how can its behavior ever be called creative?
Indeed, not all computer theorists would agree that machines can only do what they are programmed to do, and this is a key point in favor of computational creativity.
Defining creativity in computational terms
Because no single perspective offers a complete picture of creativity, the AI researchers Newell, Shaw and Simon developed the combination of novelty and usefulness into the cornerstone of a multi-pronged view of creativity, one that uses the following four criteria to categorize a given answer or solution as creative:
- The answer is novel and useful (either for the individual or for society)
- The answer demands that we reject ideas we had previously accepted
- The answer results of intense motivation and persistence
- The answer comes from clarifying a problem that was originally vague
In contrast to the "top-down" approach to computational creativity, an alternative thread has developed among "bottom-up" computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, such generative neural systems were driven by genetic algorithms. Experiments involving recurrent nets were successful in hybridizing simple musical melodies and predicting listener expectations.
One prominent thinker cited in this context is Stephen Wolfram, who argued that systems perceived as complex, including the mind's creative output, could arise from simple algorithms. As neuro-philosophical thinking matured, it also became apparent that everyday language presents an obstacle to a scientific model of cognition, creative or not, since the terms it offers are often more uplifting than accurate. So questions naturally arose as to how "rich," "complex," and "wonderful" creative cognition actually was.
Artificial neural networks
Before 1989, artificial neural networks had already been used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters. The network was able to randomly generate new music in a highly uncontrolled manner. In 1992, Todd extended this work, using the so-called distal teacher approach that had been developed by Paul Munro, Paul Werbos, Nguyen D. and Bernard Widrow, Michael I. Jordan and David Rumelhart. In the new approach there are two neural networks, one of which supplies training patterns to the other. In later efforts by Todd, a composer would select a set of melodies that define the melody space, position them on a 2-D plane with a mouse-based graphical interface, train a connectionist network to produce those melodies, and listen to the new "interpolated" melodies that the network generates at intermediate points in the 2-D plane.
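As a rough illustration of this generate-by-perturbation idea, the sketch below trains a tiny next-note predictor on a couple of toy melodies and then samples new melodies from a noise-perturbed copy of the learned weights. It is a minimal stand-in, not Todd's actual recurrent architecture; the note alphabet, training melodies and noise level are all assumptions.

```python
# Toy next-note model: learn transitions with a softmax layer, then perturb and sample.
import numpy as np

NOTES = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
IDX = {n: i for i, n in enumerate(NOTES)}

def one_hot(i, n=len(NOTES)):
    v = np.zeros(n); v[i] = 1.0
    return v

def train(melodies, epochs=200, lr=0.5):
    W = np.zeros((len(NOTES), len(NOTES)))           # logits for next note given current note
    for _ in range(epochs):
        for mel in melodies:
            for cur, nxt in zip(mel, mel[1:]):
                x = one_hot(IDX[cur])
                logits = W[IDX[cur]]
                p = np.exp(logits - logits.max()); p /= p.sum()
                W -= lr * np.outer(x, p - one_hot(IDX[nxt]))   # cross-entropy gradient step
    return W

def generate(W, start='C', length=8, noise=0.0, rng=np.random.default_rng(0)):
    W = W + noise * rng.standard_normal(W.shape)     # perturbation drives novelty
    mel, cur = [start], IDX[start]
    for _ in range(length - 1):
        logits = W[cur]
        p = np.exp(logits - logits.max()); p /= p.sum()
        cur = int(rng.choice(len(NOTES), p=p))
        mel.append(NOTES[cur])
    return mel

melodies = [['C', 'E', 'G', 'E', 'C'], ['C', 'D', 'E', 'F', 'G']]
W = train(melodies)
print(generate(W, noise=0.5))
```

With noise set to zero the model simply echoes the statistics of the training melodies; increasing the noise pushes the output away from them, which is the uncontrolled novelty the early experiments reported.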
More recently, a neurodynamical model of semantic networks has been developed to study how the connectivity structure of these networks bears on the richness of the semantic constructs, or ideas, they can generate. It was demonstrated that semantic networks with certain connectivity structures have richer semantic dynamics than those with other structures, which may provide insight into the major issue of how the physical structure of the brain determines one of the most profound features of the human mind: its capacity for creative thought.
Key concepts from the literature
Some high-level and philosophical themes recur throughout the field of computational creativity.
Important categories of creativity
Margaret Boden refers to creativity that is novel merely to the agent that produces it as "P-creativity" (or "psychological creativity"), and refers to creativity that is recognized as novel by society at large as "H-creativity" (or "historical creativity"). Stephen Thaler has suggested a new category he calls "V-" or "visceral" creativity, arising within his Creativity Machine architecture when "gateway" nets are perturbed to produce alternative interpretations and downstream nets shift such interpretations to fit the overarching context. An important variety of such V-creativity, in this view, is consciousness itself.
Exploratory and transformational creativity
Boden also distinguishes between the creativity that arises from an exploration within an established conceptual space, and the creativity that arises from a deliberate transformation or transcendence of this space. She labels the former exploratory creativity and the latter transformational creativity, seeing the latter as a form of creativity far more radical, challenging, and rarer than the former. Following the criteria from Newell and Simon elaborated above, exploratory creativity can be seen as an intense and persistent search of a well-understood space (criterion 3), while transformational creativity should involve the rejection of some of the constraints that define this space (criterion 2) or some of the assumptions that define the problem itself (criterion 4). Boden's insights have guided work in the field at a very general level, providing an inspirational touchstone for development work more than a technical framework of algorithmic substance.
Generation and evaluation
The criterion that creative products should be both novel and useful means that creative computational systems are typically structured into two phases, generation and evaluation. In the first phase, novel (to the system itself, thus P-creative) constructs are generated; unoriginal constructs that are already known to the system are filtered out at this stage. This body of potentially creative constructs is then evaluated, to determine which are meaningful and useful. This two-phase structure conforms to the Geneplore model of Finke, Ward and Smith, which is a psychological model of creative generation based on empirical observation of human creativity.
A great deal, perhaps all, of human creativity can be understood as a novel combination of pre-existing ideas or objects. Common strategies for combinatorial creativity include:
- Placing a familiar object in an unfamiliar setting (e.g., Marcel Duchamp's Fountain) or an unfamiliar object in a familiar setting (e.g., a fish-out-of-water story such as The Beverly Hillbillies)
- Blending two superficially different objects or genres (e.g., a sci-fi story set in the Wild West, with robot cowboys, as in Westworld, or the reverse, as in Firefly; Japanese haiku poems, etc.)
- Comparing a familiar object to a superficially unrelated and semantically distant concept (e.g., "Makeup is the Western burka"; "A zoo is a gallery with living exhibits")
- Adding a new and unexpected feature to an existing concept (e.g., adding a scalpel to a Swiss Army knife; adding a camera to a mobile phone)
- Compressing two incongruous scenarios into the same narrative to get a joke (e.g., the Emo Philips joke "Women are always using me to advance their careers. Damned anthropologists!")
- Using an iconic image from one product domain in another (e.g., using the Marlboro Man to sell cars, or to advertise the dangers of smoking-related impotence)
The combinatorial perspective allows us to model creativity as a search through the space of possible combinations. The combinations can be made from a composition or concatenation of different representations, or through a rule-based or stochastic transformation of initial and intermediate representations. Genetic algorithms and neural networks can be used to generate or cross over representations that capture a combination of different inputs.
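A toy sketch of this combinational search is shown below: two concepts are represented as feature sets, a genetic algorithm repeatedly crosses them over, and candidates are scored with a made-up novelty-plus-coherence heuristic. The feature sets and fitness terms are purely illustrative assumptions, not any published system's representation.

```python
# Genetic search over feature-set combinations of two concepts (illustrative only).
import random

HORSE = {'legs:4', 'mane', 'gallops', 'neighs'}
BIRD  = {'legs:2', 'wings', 'flies', 'sings'}

def crossover(a, b, rng):
    child = {f for f in a | b if rng.random() < 0.5}
    return child or {rng.choice(sorted(a | b))}      # never return an empty concept

def fitness(child):
    novelty = min(len(child ^ HORSE), len(child ^ BIRD))   # unlike either parent
    coherence = -abs(len(child) - 4)                       # prefer mid-sized feature bundles
    return novelty + coherence

def evolve(generations=50, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [crossover(HORSE, BIRD, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [crossover(rng.choice(parents), rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(evolve())   # e.g. a winged, galloping hybrid, a Pegasus-like combination
```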
Mark Turner and Gilles Fauconnier propose a model called Conceptual Integration Networks that elaborates upon Arthur Koestler's ideas about creativity as well as more recent work by Lakoff and Johnson, by synthesizing ideas from cognitive-linguistic research into mental spaces and conceptual metaphors. Their basic model defines an integration network as four connected spaces:
- A first input space (contains one conceptual structure or mental space)
- A second input space (to be blended with the first input)
- A generic space of stock conventions and image-schemas that allow the input spaces to be understood from an integrated perspective
- A blend space in which elements from the input spaces are combined; inferences arising from this combination also reside here, sometimes leading to emergent structures that conflict with the inputs.
Fauconnier and Turner describe a collection of optimality principles that are claimed to guide the construction of a well-formed integration network. In essence, these principles favor blends in which elements and relations from the input spaces are compressed into a single, tightly integrated structure. This compression operates on the level of conceptual relations. For example, a series of similar relations between the input spaces can be compressed into a single identity relationship in the blend.
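The sketch below gives one very small way of representing such a network in code, assuming mental spaces are plain role-to-filler dictionaries and reducing the generic space to a set of shared roles. The example spaces and the compression rule are illustrative assumptions of mine, not Fauconnier and Turner's formal notation.

```python
# Minimal data structure for a four-space integration network (sketch only).
from dataclasses import dataclass, field

@dataclass
class MentalSpace:
    name: str
    elements: dict = field(default_factory=dict)     # role -> filler

def blend(input1: MentalSpace, input2: MentalSpace, generic_roles: set) -> MentalSpace:
    """Project counterpart fillers for shared (generic) roles into the blend space."""
    blended = MentalSpace('blend')
    for role in generic_roles:
        a, b = input1.elements.get(role), input2.elements.get(role)
        # Compression: a pair of counterpart fillers becomes one composite element.
        blended.elements[role] = a if a == b else (a, b)
    return blended

boat = MentalSpace('boat-voyage', {'agent': 'sailor', 'vehicle': 'boat', 'path': 'ocean'})
walk = MentalSpace('mountain-walk', {'agent': 'monk', 'vehicle': 'feet', 'path': 'mountain road'})
print(blend(boat, walk, {'agent', 'path'}).elements)
```

A real blending engine would also run inference inside the blend space to surface the emergent structure the theory emphasizes; here the compression step alone is shown.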
Some computational success has been achieved with the blending model by extending pre-existing computational models of analogical mapping. More recently, Francisco Câmara Pereira presented an implementation of blending theory that employs ideas both from GOFAI and from genetic algorithms to realize some aspects of blending theory in a practical form; his example domains include visual ones, most notably the creation of mythical monsters by combining 3-D graphical models.
Language provides continuous opportunity for creativity, evident in the generation of novel sentences, phrasings, puns, neologisms, rhymes, allusions, sarcasm, irony, similes, metaphors, analogies, witticisms, and jokes. Native speakers of morphologically rich languages often create new word-forms that are easily understood, even though they will never find their way into the dictionary. The area of natural language generation has been well studied, but these creative aspects of everyday language have yet to be incorporated with any robustness or scale.
Substantial work has been conducted in this area of linguistic creation since the 1970s, with the development of James Meehan's TALE-SPIN system. TALE-SPIN viewed stories as narrative descriptions of a problem-solving effort, and created stories by first establishing goals for the story's characters, so that the story itself narrated their attempts to achieve these goals. The MINSTREL system represents a complex elaboration of this basic approach, distinguishing a range of character-level goals in the story from a range of author-level goals for the story. Systems like Bringsjord's BRUTUS elaborate these ideas further to create stories with complex inter-personal themes like betrayal. Nonetheless, MINSTREL explicitly models the creative process with a set of Transform Recall Adapt Methods (TRAMs) to create novel scenes from old. The MEXICA model of Rafael Pérez y Pérez and Mike Sharples is more explicitly interested in the creative process of storytelling, and implements a version of the engagement-reflection cognitive model of creative writing.
The company Narrative Science makes computer generated news and reports commercially available, including summarizing team sporting events based on statistical data from the game. It also creates financial reports and real estate analyzes.
Metaphor and simile
Example of a metaphor: “She was an ape.”
Example of a simile: "Felt like a tiger-fur blanket." The computational study of these phenomena has mainly focused on interpretation as a knowledge-based process. Computationalists such as Yorick Wilks, James Martin, Dan Fass, John Barnden, and Mark Lee have developed knowledge-based approaches to the processing of metaphors, either at a linguistic level or a logical level. Tony Veale and Yanfen Hao have developed a system, called Sardonicus, that acquires a comprehensive database of explicit similes from the web; these similes are then tagged as bona-fide (e.g., "as hard as steel") or ironic (e.g., "as hairy as a bowling ball", "as pleasant as a root canal"); similes of either kind can be retrieved on demand for any given adjective. They use these similes as the basis of an on-line metaphor generation system called Aristotle that can suggest lexical metaphors for a given descriptive goal (e.g., to describe a supermodel as skinny, the source terms "pencil", "whip", "whippet", "rope", "stick-insect" and "snake" are suggested).
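The harvesting step can be pictured with the toy pattern-matcher below, which scans text for "as ADJ as (a) NOUN" frames and tallies the nouns found for each adjective. The regular expression and the three-line corpus are stand-in assumptions; a real system such as Sardonicus issues web queries and handles ironic similes separately.

```python
# Harvest "as ADJ as (a) NOUN" similes from raw text (illustrative sketch).
import re
from collections import defaultdict

SIMILE = re.compile(r'\bas (\w+) as (?:an? )?([\w-]+(?: [\w-]+)?)', re.IGNORECASE)

def harvest(corpus_lines):
    index = defaultdict(lambda: defaultdict(int))    # adjective -> vehicle noun -> count
    for line in corpus_lines:
        for adj, noun in SIMILE.findall(line):
            index[adj.lower()][noun.lower()] += 1
    return index

corpus = [
    "The alibi was as hard as steel.",
    "His schedule was as tight as a drum.",
    "The audit was as pleasant as a root canal.",    # ironic: would need separate filtering
]
index = harvest(corpus)
print(dict(index['hard']))    # {'steel': 1}
```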
The process of analogical reasoning has been studied from both a mapping and a retrieval perspective, the latter being key to the generation of novel analogies. The dominant school of research, as advanced by Dedre Gentner, views analogy as a structure-preserving process; this view has been implemented in the Structure Mapping Engine (SME), the MAC/FAC retrieval engine (Many Are Called, Few Are Chosen), ACME (Analogical Constraint Mapping Engine) and ARC (Analogical Retrieval Constraint System). Other mapping-based approaches include Sapper, which situates the mapping process in a semantic-network model of memory. Analogy is a very active sub-area of creative computation and creative cognition; active figures in this sub-area include Douglas Hofstadter, Paul Thagard, and Keith Holyoak. Also worthy of note here is Peter Turney and Michael Littman's machine learning approach to solving SAT-style analogy problems; their approach achieves a score that compares well with average scores achieved by humans on these tests.
Humor is an especially knowledge-hungry process, and the most successful joke-generation systems to date have focused on pun generation, as exemplified by the work of Kim Binsted and Graeme Ritchie. This work includes the JAPE system, which can generate a wide range of punning riddles. An improved version of JAPE has been developed in the context of the STANDUP system, which has been experimentally deployed as a means of enhancing linguistic interaction with children with communication disabilities. Some limited progress has been made in generating humor that involves other aspects of natural language, such as the deliberate misunderstanding of pronominal reference (in the work of Hans Wim Tinholt and Anton Nijholt), as well as in the generation of humorous acronyms in the HAHAcronym system of Oliviero Stock and Carlo Strapparava.
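To give a flavour of template-driven punning, here is a deliberately tiny sketch: it swaps a homophone into a compound noun and dresses the result in a riddle template. The two-entry lexicon and the single homophone pair are hypothetical stand-ins for the lexical resources a system like JAPE draws on; this is not JAPE's actual grammar.

```python
# Template-based punning riddle generation (toy sketch, hypothetical lexicon).
LEXICON = {
    'serial killer': 'a murderer',
    'cereal': 'breakfast food',
}
HOMOPHONES = [('serial', 'cereal')]

def punning_riddles():
    riddles = []
    for original, substitute in HOMOPHONES:
        for compound, gloss in LEXICON.items():
            if original in compound.split():
                punned = compound.replace(original, substitute)   # build the pun word
                clue = LEXICON.get(substitute, substitute)        # gloss of the intruder
                riddles.append(f"What do you call {gloss} covered in {clue}? A {punned}!")
    return riddles

for riddle in punning_riddles():
    print(riddle)   # What do you call a murderer covered in breakfast food? A cereal killer!
```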
The blending of multiple word forms is a dominant force for new word creation in language; these new words are commonly called "blends" or "portmanteau words" (after Lewis Carroll). Tony Veale has developed a system called ZeitGeist that harvests neological headwords from Wikipedia and interprets them relative to their local context in Wikipedia and relative to specific word senses in WordNet. ZeitGeist has been extended to generate neologisms of its own; these pair a novel word form with a gloss that conveys its intended meaning (e.g., "food traveler" for "gastronaut" and "time traveler" for "chrononaut"). It then uses Web search to determine which glosses are meaningful and which neologisms have not been used before; this search identifies the subset of these words that are both novel ("H-creative") and useful. Neurolinguistic inspirations have also been used to analyze the process of novel word creation in the brain, to understand the neurocognitive processes responsible for intuition and insight, and to create new kinds of word inventions based on such descriptions. Further, the Nehovah system blends two source words into a neologism that blends the meanings of the two source words. Nehovah searches WordNet for synonyms and TheTopTens.com for pop culture hyponyms. The synonyms and hyponyms are blended together to create a set of candidate neologisms. The neologisms are then scored on their word structure, how apparent the source words remain, and how well the blend captures the intended concepts. Nehovah loosely follows conceptual blending.
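A very small sketch of the blending step is given below: it looks for an overlap between the tail of one source word and the head of the other and, failing that, falls back to a crude splice. Real systems like Nehovah add synonym expansion and scoring on top of this; the overlap heuristic here is an assumption of mine, not their published algorithm.

```python
# Lexical blending of two source words into portmanteau candidates (sketch only).
def blends(word1, word2, min_overlap=2):
    candidates = []
    for k in range(min(len(word1), len(word2)), min_overlap - 1, -1):
        if word1[-k:] == word2[:k]:                 # tail of word1 matches head of word2
            candidates.append(word1 + word2[k:])
    if not candidates:                              # no overlap: fall back to a simple splice
        candidates.append(word1[:len(word1) // 2] + word2[len(word2) // 2:])
    return candidates

print(blends('gastro', 'astronaut'))   # ['gastronaut']
print(blends('breakfast', 'lunch'))    # ['breanch'] (a smarter seam would yield "brunch")
```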
More than iron, more than lead, more than gold I need electricity.
I need it more than I need lamb or pork or lettuce or cucumber.
I need it for my dreams. (Racter, from The Policeman's Beard Is Half Constructed)
Like jokes, poems involve a complex interaction of different constraints, and no general-purpose poem generator adequately combines the meaning, phrasing, structure and rhyme aspects of poetry. Nonetheless, Pablo Gervás has developed a noteworthy system called ASPERA that employs a case-based reasoning (CBR) approach to generating poetic formulations of a given input text via a composition of poetic fragments that are retrieved from a case-base of existing poems. Each poem fragment in the ASPERA case-base is annotated with a prose string that expresses the meaning of the fragment, and this prose string is used as the retrieval key for each fragment. Metrical rules are then used to combine these fragments into a well-formed poetic structure. Racter is an example of such a software project.
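The retrieval step of such a case-based approach can be pictured with the toy lookup below, where each stored fragment is keyed by a prose gloss and the best match is chosen by naive word overlap. The fragments and the similarity measure are invented for illustration; ASPERA's actual case-base and matching are far richer.

```python
# Case-based retrieval of poem fragments keyed by prose glosses (illustrative sketch).
CASE_BASE = [
    ("the sea at night reflects the moon", "dark water carries a silver coin of light"),
    ("love endures beyond death",          "what the heart holds, the grave cannot keep"),
    ("time passes quickly",                "the hours slip like sand between our hands"),
]

def retrieve(prose, case_base=CASE_BASE):
    query = set(prose.lower().split())
    def overlap(case):
        key, _fragment = case
        return len(query & set(key.split()))        # crude bag-of-words similarity
    return max(case_base, key=overlap)[1]

print(retrieve("moonlight on the night sea"))
```

A full system would then adapt the retrieved fragments to the target meter and rhyme scheme rather than returning them verbatim.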
Computational creativity in the music domain has addressed both the analysis and the generation of music. The domain of generation has included classical music (with software that composes in the styles of Mozart and Bach) and jazz. Most notably, David Cope has written a software system called "Experiments in Musical Intelligence" (or "EMI") that is capable of analyzing and generalizing from existing music by a composer to produce new music in the same style. EMI's output is convincing enough to persuade human listeners that its music is human-generated to a high level of competence.
In the field of contemporary classical music, Iamus is the first computer that composes from scratch and produces final scores that professional interpreters can play. The London Symphony Orchestra played a piece for full orchestra, included in Iamus' debut CD, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra". Melomics, the technology behind Iamus, is able to generate pieces in different styles of music with a similar level of quality.
Creativity research in jazz has focused on the process of improvisation and the cognitive demands of this type of music. The Shimon robot, developed by Gil Weinberg of Georgia Tech, has demonstrated jazz improvisation. Systems such as OMax, SoMax and PyOracle are used to create improvisations in real time by re-injecting variable-length sequences learned on the fly from a live performer.
In 1994, a Creativity Machine architecture (see above) was able to generate 11,000 musical hooks by training a synaptically perturbed neural net on 100 melodies that had appeared on the top-ten list over the previous 30 years. In 1996, a self-bootstrapping Creativity Machine observed audience facial expressions through an advanced machine vision system and refined its musical output accordingly, generating an album entitled "Song of the Neurons".
In the field of musical composition, the patented work of René-Louis Baron (1998) produces music that is one of a kind and adapts its musical effects in real time while the listener hears the song. The patented invention, Medal-Composer, raises problems of copyright.
Visual and artistic creativity
Computational creativity in the generation of visual art has had some notable successes in the creation of both abstract art and representational art. The most famous program in this domain is Harold Cohen's AARON, which has been continuously developed and expanded since 1973. Though formulaic, AARON exhibits a range of outputs, generating black-and-white drawings of figures (such as dancers), potted plants, rocks, and other elements of background imagery. These images are of a sufficiently high quality to be displayed in reputable galleries.
Other software artists include the NEvAr system (for "Neuro-Evolutionary Art") of Penousal Machado. NEvAr uses a genetic algorithm to derive a mathematical function that is then used to generate a colored three-dimensional surface. A human user is allowed to select the best pictures after each phase of the genetic algorithm, and these preferences are used to guide successive phases, thus pushing the search toward regions of the space that the user finds most appealing.
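The kernel of this approach, evolving a function of pixel coordinates, can be sketched as below, where a random expression tree over (x, y) is built and rendered to grey values. The operator set, tree depth and grey-scale mapping are arbitrary choices of mine, and a real interactive system would let the user's selections, not a fixed routine, drive the evolution.

```python
# Random expression trees over (x, y) rendered as grey-scale images (sketch only).
import math, random

UNARY = [math.sin, math.cos, abs]
BINARY = [lambda a, b: a + b, lambda a, b: a * b]

def random_expr(depth, rng):
    if depth == 0:
        return rng.choice([lambda x, y: x, lambda x, y: y])      # terminals: the coordinates
    if rng.random() < 0.5:
        f, child = rng.choice(UNARY), random_expr(depth - 1, rng)
        return lambda x, y: f(child(x, y))
    f = rng.choice(BINARY)
    left, right = random_expr(depth - 1, rng), random_expr(depth - 1, rng)
    return lambda x, y: f(left(x, y), right(x, y))

def render(expr, size=8):
    """Evaluate the expression over the unit square and map values to 0..255 grey levels."""
    rows = []
    for j in range(size):
        row = []
        for i in range(size):
            v = expr(i / size, j / size)
            row.append(int((math.tanh(v) + 1) * 127.5))
        rows.append(row)
    return rows

rng = random.Random(7)
image = render(random_expr(depth=4, rng=rng))
print(image[0])   # one scan line of grey values
```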
The Painting Fool, produced by Simon Colton, originated as a system for over-painting digital images of a given scene in a choice of painting styles, colour palettes and brush types. Because it depended on a source image to work from, its earliest iterations raised questions about how much creativity the system itself contributed. Nonetheless, in more recent work, The Painting Fool has been extended to create novel images, much as AARON does, from its own limited imagination. Images in this vein include cityscapes and forests, which are generated by a process of constraint satisfaction from some basic scenarios provided by the user (e.g., objects closer to the viewer should appear larger and more colour-saturated, while those further away should be less saturated and appear smaller). Artistically, the images now created by The Painting Fool appear on a par with those created by AARON, though the extensible mechanisms it employs (constraint satisfaction, etc.) may well allow it to develop further.
The artist Krasimira Dimtchevska and the software developer Svillen Ranev have created a computational system combining a rule-based generator of English sentences with a visual composition builder that converts sentences generated by the system into abstract art. The software automatically generates an indefinite number of different images using different colour, shape and size palettes. It also allows the user to select the subject of the generated sentences or one of the palettes used by the visual composition builder.
An emerging area of computational creativity is that of video games. ANGELINA is a system developed by Michael Cook for creatively designing video games in Java. One important aspect is Mechanic Miner, a subsystem that can generate simple game mechanics. ANGELINA evaluates these mechanics for usefulness by playing simple, otherwise unsolvable game levels and testing whether the new mechanic makes the level solvable. Sometimes Mechanic Miner discovers bugs in the code and exploits these to make new mechanics for the player to solve problems with.
In July 2015 Google released DeepDream, an open source computer vision program that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike psychedelic appearance in the deliberately over-processed images.
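The core of this technique is a gradient-ascent loop that nudges the input image so as to increase the activation of a chosen network layer. The hedged sketch below shows that loop with a pretrained torchvision VGG-16; the layer index, step size, iteration count and omitted input normalisation are simplifying assumptions of mine, not DeepDream's published settings.

```python
# Gradient ascent on an intermediate layer's activation norm (DeepDream-style sketch).
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)                  # only the image is optimized
LAYER = 20                                   # an arbitrary mid-level layer index

def dream(image, steps=20, lr=0.05):
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        x = image
        for i, module in enumerate(model):
            x = module(x)
            if i == LAYER:
                break
        loss = x.norm()                      # amplify whatever this layer responds to
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.norm() + 1e-8)
            image.grad.zero_()
    return image.detach()

start = torch.rand(1, 3, 224, 224)           # stand-in for a real photograph
print(dream(start).shape)
```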
In August 2015, researchers from Tübingen, Germany, created a convolutional neural network that uses neural representations to separate and recombine the content and style of arbitrary images, and which is able to turn images into stylistic imitations of works of art by artists such as Picasso or Van Gogh in about an hour. Their algorithm is used in the website DeepArt, which allows users to create unique artistic images with it.
In early 2016, a global team of researchers explained how a new approach known as the Digital Synaptic Neural Substrate (DSNS) could be used to generate original chess puzzles that were not derived from endgame databases. The DSNS is able to combine features of different objects (e.g., chess problems, paintings, music) using stochastic methods in order to derive new feature specifications that can be used to generate objects in any of the original domains. The generated chess puzzles have been featured on YouTube.
Creativity in problem solving
Creativity is also useful in allowing for unusual solutions in problem solving. In psychology and cognitive science, this research area is called creative problem solving. The Explicit-Implicit Interaction (EII) theory of creativity has been implemented using a CLARION-based computational model that allows for the simulation of incubation and insight in problem solving. The emphasis of this computational work is not on performance (as in artificial intelligence projects) but rather on the explanation of the psychological processes leading to human creativity and the reproduction of data collected in psychology experiments. So far, this project has been successful in providing an explanation for incubation effects in simple memory experiments, insight in problem solving, and reproducing the overshadowing effect in problem solving.
Debate about “general” theories of creativity
Some researchers feel that creativity is a complex phenomenon whose study is made more complicated by the plasticity of the language we use to describe it. We can describe not just the agent of creativity as "creative" but also the product and the method. Consequently, it could be claimed that it is unrealistic to speak of a general theory of creativity. Nonetheless, some generative principles are more general than others, leading some advocates to claim that certain computational approaches are "general theories". Stephen Thaler, for instance, proposes that certain modalities of neural networks are generative enough, and general enough, to manifest a high degree of creative capability. Likewise, the Formal Theory of Creativity is based on a single computational principle published by Jürgen Schmidhuber in 1991. The theory postulates that creativity, curiosity and selective focus in general are by-products of a single algorithmic principle for measuring and optimizing learning progress.
Unified model of creativity
A unifying model of creativity was proposed by S. L. Thaler through a series of international patents in computational creativity, beginning in 1997 with the issuance of US Patent 5,659,666. Based upon theoretical studies of traumatized neural networks and inspired by studies of damage-induced vibrational modes in simulated crystal lattices, this extensive intellectual property subsequently taught the application of a broad range of noise, damage, and disordering effects to a trained neural network so as to drive the formation of novel or confabulatory patterns that could be considered potential ideas.
Thaler’s Scientific and Philosophical Papers The following issue of these patents describes:
- The aspects of cognition accompanying a broad gamut of cognitive functions (eg, waking to dreaming to near-death trauma),
- A shorthand notation for describing creative neural architectures and their function,
- Quantitative modeling of the rhythm with which creative cognition occurs, and,
- A prescription for critical disruption regimes leading to the most efficient generation of useful information by a creative neural system.
- A bottom-up model that links creativity and a wide range of psychopathologies.
Thaler has also recruited his generative neural architectures into a theory of consciousness that is closely related to the temporal evolution of thought, while also accounting for the subjective feeling associated with this hotly debated mental phenomenon.
In 1989, in one of the most controversial demonstrations within this general theory of creativity, one neural net termed the "grim reaper" governed the synaptic damage (i.e., rule-changes) applied to another net that had learned a series of traditional Christmas carol lyrics. The former net, on the lookout for both novel and grammatical lyrics, seized upon the chilling sentence, "In the end, to the earth in one eternal silent night," thereafter ceasing the synaptic degradation process. In subsequent projects, these systems have been applied to a range of generative tasks, oftentimes bootstrapping their learning from a blank slate based on the success or failure of self-conceived concepts and strategies.
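The generator-plus-critic arrangement described in this section can be caricatured in a few lines of code: a copy of a "trained" network is perturbed with noise to produce confabulated patterns, and a second network keeps whichever candidate it scores highest. Both toy networks below are random stand-ins, and the scoring is invented; this is an illustration of the pattern, not Thaler's patented architecture.

```python
# Perturb a generator net, filter its confabulations with a critic net (sketch only).
import numpy as np

rng = np.random.default_rng(0)

def forward(W, x):
    return np.tanh(W @ x)

W_generator = rng.standard_normal((4, 4)) * 0.5      # pretend this was trained on real data
W_critic    = rng.standard_normal((1, 4)) * 0.5      # pretend this scores usefulness

def imagine(seed, noise_level=0.3, attempts=50):
    best, best_score = None, -np.inf
    for _ in range(attempts):
        noisy_W = W_generator + noise_level * rng.standard_normal(W_generator.shape)
        candidate = forward(noisy_W, seed)            # a confabulated pattern
        score = forward(W_critic, candidate).item()   # critic keeps the "useful" ones
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

pattern, score = imagine(np.ones(4))
print(pattern, score)
```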
Criticism of Computational Creativity
Traditional computers, as mainly used in computational creativity applications, do not support creativity, this criticism runs, because they fundamentally transform a discrete, limited domain of input parameters into a discrete, limited domain of output parameters using a limited set of computational functions. As such, a computer cannot be creative, since everything in the output must already have been present in the input data or in the algorithms. Some related discussions and references to related work are captured in recent work on the philosophical foundations of simulation.
Mathematically, the same set of arguments against creativity has been made by Chaitin. Similar observations come from a model-theory perspective. All this criticism emphasizes that computational creativity is useful and may look like creativity, but it is not real creativity, since nothing new is created, only transformed by well-defined algorithms.
The International Conference on Computational Creativity (ICCC) is organized annually by the Association for Computational Creativity. Events in the series include:
- ICCC 2017, Atlanta, Georgia, USA
- ICCC 2016, Paris, France
- ICCC 2015, Park City, Utah, USA. Keynote: Emily Short
- ICCC 2014, Ljubljana, Slovenia. Keynote: Oliver Deussen
- ICCC 2013, Sydney, Australia. Keynote: Arne Dietrich
- ICCC 2012, Dublin, Ireland. Keynote: Steven Smith
- ICCC 2011, Mexico City, Mexico. Keynote: George E Lewis
- ICCC 2010, Lisbon, Portugal. Keynote / Invited Talks: Nancy J Nersessian and Mary Lou Maher
Previously, the computational creativity community held a dedicated workshop, the International Joint Workshop on Computational Creativity, every year since 1999.
- IJWCC 2003, Acapulco, Mexico, as part of IJCAI’2003
- IJWCC 2004, Madrid, Spain, as part of ECCBR’2004
- IJWCC 2005, Edinburgh, UK, as part of IJCAI’2005
- IJWCC 2006, Riva del Garda, Italy, as part of ECAI’2006
- IJWCC 2007, London, UK, as a stand-alone event
- IJWCC 2008, Madrid, Spain, as a stand-alone event
The 1st Conference on Computer Simulation of Musical Creativity will be held as follows:
- CCSMC 2016, June 17-19, University of Huddersfield, UK. Keynotes: Geraint Wiggins and Graeme Bailey.
Publications and forums
Design Computing and Cognition is a conference that addresses computational creativity. The ACM Creativity and Cognition is another forum for issues related to computational creativity. Computer Science Days 2016 keynote by Shlomo Dubnov was on Theoretic Creativity.
A number of recent books provide a good introduction or a good overview of the field of Computational Creativity. These include:
- Pereira, F. C. (2007). "Creativity and Artificial Intelligence: A Conceptual Blending Approach". Applications of Cognitive Linguistics series, Mouton de Gruyter.
- Veale, T. (2012). “Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity”. Bloomsbury Academic, London.
- McCormack, J. and d'Inverno, M. (eds.) (2012). "Computers and Creativity". Springer, Berlin.
- Veale, T., Feyaerts, K. and Forceville, C. (2013, forthcoming). "Creativity and the Agile Mind: A Multidisciplinary Study of a Multifaceted Phenomenon". Mouton de Gruyter.
In addition to the proceedings of conferences and workshops, the computational creativity community has produced a number of dedicated journal special issues, including:
- New Generation Computing , Volume 24, Issue 3, 2006
- Journal of Knowledge-Based Systems , Volume 19, Issue 7, November 2006
- AI Magazine , Volume 30, Number 3, Fall 2009
- Minds and Machines , volume 20, number 4, November 2010
- Cognitive Computation , volume 4, issue 3, September 2012
- AIEDAM , volume 27, number 4, Fall 2013
- Computers in Entertainment , two special issues on Meta-Creation Music (MuMe), Fall 2016 (forthcoming)
In addition to these, a new journal has started which focuses on computational creativity within the field of music.
- JCMS 2016, Journal of Creative Music Systems
Koeberg nuclear power station is well equipped to handle the energy plant's nuclear waste, according to a KPMG study.
The recent study by KPMG Services on the socio-economic impact of the Koeberg nuclear power station on the Western Cape and South Africa from 2012 to 2025 says the plant is well equipped to handle the safety regulations under which it has operated for more than 33 years.
Lullu Krugel, director and chief economist at KPMG, said electricity was a key input for the majority of products and processes in South Africa’s economy, making Koeberg a direct contributor to economic growth, both in the Western Cape and in the rest of the country.
Krugel said Koeberg stimulated economic activity in South Africa estimated at R53.3 billion between 2012 and 2016.
“The methodology which KPMG employed to conduct this review, is based on internationally accepted standards,” Krugel said, “with detailed information supplied by Eskom and official statistics.”
The report said the National Nuclear Regulator (NNR) oversaw the safe operation of nuclear installations at Koeberg and Vaalputs, the nuclear disposal site in the Northern Cape.
It said the NNR was committed to protecting people, property and the environment against any nuclear damage by establishing safety standards and regulatory practices and by prescribing protective measures, such as frequent public safety forums.
According to the report, low-level nuclear waste is compressed into sealed and marked steel drums at Koeberg, before it is transported to Vaalputs in specially designed trucks for disposal in 10m-deep trenches.
About 500 steel drums arrive each year.
Intermediate-level waste is then solidified by mixing it with a cement mixture which is poured into concrete drums.
The drums are then transported from Koeberg to Vaalputs in specially designed trucks for disposal.
According to the report, the government is considering the addition of nuclear capacity as an option to add up to 9600MW to the national grid by 2030 in tranches that are affordable.
This highlights Koeberg’s role in the South African economy at present and going forward, and provides the knowledge base to expand the country’s nuclear capacity through new plants.
It has been more than a decade since the accident, but Vincent Mashinini can’t forget the moment his underground world collapsed.
His right leg still bears the scars from the rocks that fell and temporarily pinned him underground, his livelihood nearly becoming a death trap.
Mashinini spent 15 years as a small-scale illegal miner, toiling in the abandoned coal mines that pockmark Ermelo and the surrounding Mpumalanga countryside. Along with agriculture and tourism, coal mining sustains the town’s economy as it feeds the nearby power stations generating some of the country’s electricity.
“We’ve lost brothers and sisters and mothers,” Mashinini said. “But there is no employment. If you want to put food on the table, you must come here.”
The traditional coal majors that have dominated the sector for decades are looking to leave South Africa as uncertainty surrounds Eskom contracts and the world slowly moves away from fossil fuels. As a result, the country’s coal industry is welcoming more and more junior miners who rarely complete environmental or social rehabilitation, causing a proliferation of abandoned mines around towns such as Ermelo.
Xavier Prevost, a senior coal analyst with the mining consulting company XMP Consulting, said Eskom’s policies of signing long-term contracts and buying only from 51 percent black-owned companies were pushing large mining houses away.
“Most of the majors are not investing in coal due to the current government politics. Another reason for their retreat is their inability to negotiate new agreements with Eskom,” Prevost said, adding that many of Eskom’s contracts with large miners would expire by 2020.
BHP Billiton spun off its South African coal and other lower-value assets to South32 in 2015. Anglo American is also in the process of disposing of “lower-margin, shorter-life assets”, including some South African coal, the company’s media team said in a statement sent to The Star.
“In terms of Anglo American’s Eskom-tied mines, the company has initiated a process to exit its Eskom-tied mines (Kriel, New Denmark and New Vaal). We believe these assets would be better served under new ownership that can provide more focused capital and management to continue to create value,” the statement said.
The growth of alternative energy sources has also affected South African coal by shrinking certain export markets.
The US’s Energy Information Administration predicts renewable energy production will increase worldwide from 22 to 29% between 2012 and 2040. The predictions see coal concurrently falling from 40 to 29%.
In many cases, new solar and wind projects are cheaper than coal. South Africa, the world’s sixth largest coal exporter, was beginning to feel the impact of this trend, Prevost said.
“Environmentalists have affected coal heavily. The biggest example is China. The change in policy in China has caused havoc in coal. China, the largest importer of coal in the world, suddenly changed its policies and is stopping importing,” Prevost said.
The price of coal rose from less than R300 per ton in 2000 to more than R2000 a ton in 2008, which in part caused a surge in applications for mining and prospecting rights. The coal price is now down at least 40% from its peak, and smaller miners who entered the industry looking for a quick profit have in some cases abandoned their operations.
Several Ermelo coal operations where Mashinini once laboured were abandoned during this period. Owned by Golfview Mining, a subsidiary of the Anker group based in the Netherlands, the sites are worked by small-scale miners, while unrehabilitated waste dumps and remnants of mining infrastructure sit derelict.
One partially rehabilitated portion lies in the centre of Johan Vos’s farm. “They’re getting away with murder,” Vos said of Golfview, which rented his land and guaranteed rehabilitation.
“I didn’t sell the land to them because they were just going to mine that one piece. They mine the piece, they rehabilitate and then I can go on. That was the whole idea. It didn’t happen,” Vos said.
Several years after taking a plea agreement and fine for its environmental practices, Golfview submitted a business rescue plan in 2015.
The company’s plan estimates the cost of rehabilitation at R29 million but reveals that only R5m is held in trust funds specifically for that purpose. Additionally, at the time the plan was submitted, the company owed more than R622m in liabilities, meaning additional funds for rehabilitation would be extraordinarily difficult to procure.
With no legal power to deny mining on his property, Vos has a second coal mine on his farm that feeds the nearby Camden power station.
He has not seen any rehabilitation at a third mine since operations abruptly halted six months ago, while a fourth mine is set to begin operations on his property, as contract details are being finalised.
Only 10 years ago, six companies accounted for 90% of South Africa’s production and eight collieries mined more than 60% of the country’s coal. While 93 coal mines then produced all of South Africa’s coal, that number had increased 59%, to 148 mines, by 2016. Production, however, increased by only about 10%, indicative of a trend towards smaller mines.
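As a rough check of the arithmetic behind this trend, the short calculation below (a sketch only; it uses nothing beyond the mine counts and the roughly 10% production growth quoted above) shows that average output per mine must have fallen by about a third.

```python
# Rough check of the "smaller mines" trend using only the figures quoted above:
# 93 mines rising to 148, with production up by roughly 10%.

mines_before = 93
mines_after = 148
production_growth = 0.10  # "increased by only about 10%"

mine_growth = mines_after / mines_before - 1
avg_output_change = (1 + production_growth) / (1 + mine_growth) - 1

print(f"Increase in number of mines: {mine_growth:.0%}")              # ~59%
print(f"Change in average output per mine: {avg_output_change:.0%}")  # ~-31%
```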
But with smaller mines and shorter lifespans, mining companies are targeting new areas for coal mining.
Although some grasslands and wetlands in the Mpumalanga Highveld have gained legal protection in recent years, companies continue to lodge mining applications. More than 60% of Mpumalanga falls under applications for rights to either mine or prospect.
According to the Department of Environmental Affairs, by the end of 2013, prospecting rights already covered 25.4% of Mpumalanga’s wetlands, 32.2% of its freshwater ecosystem priority areas and 41.8% of its grasslands.
Documents emerged last month showing that the ministers of environmental affairs and of minerals and energy had signed off on a coal mine within the Mabola Protected Environment near Wakkerstroom, part of a strategic water source area in Mpumalanga.
Koos Pretorius, director of the Federation for a Sustainable Environment, said high-potential agricultural land often coincided with coal deposits, and the mining industry encroaching on these lands was creating concerns for food security.
“The soil gets destroyed from the opencast mining, and much of it is opencast. The reason for that is simple. If you do an underground mine you leave roughly 35 to 40% of the mine, so they tend to do as much opencast as possible,” Pretorius said.
Recent periods of drought and sporadic weather patterns, likely attributable to climate change, have also had an impact on agriculture.
It is estimated that South Africa’s operational and abandoned coal mines together can release greenhouse gases equalling the warming effect of more than 4 million tons of carbon dioxide per year, roughly the same as consuming 1.8 billion litres of petrol.
Proper rehabilitation could minimise the release of these gases.
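The petrol comparison above can be sanity-checked with a back-of-the-envelope calculation. The emission factor used below, roughly 2.3 kg of CO2 per litre of petrol burned, is a commonly cited approximation and an assumption here, not a figure from the article; with it, 4 million tons of CO2 works out to around 1.7 billion litres, close to the 1.8 billion quoted.

```python
# Back-of-the-envelope check of the "4 million tons of CO2 = about 1.8 billion
# litres of petrol" comparison. The emission factor is an assumed, commonly
# cited approximation, not a figure from the article.

co2_equivalent_tons = 4_000_000       # from the article
kg_co2_per_litre_petrol = 2.3         # ASSUMPTION: approx. emissions per litre

litres_petrol = co2_equivalent_tons * 1000 / kg_co2_per_litre_petrol
print(f"Equivalent petrol: {litres_petrol / 1e9:.1f} billion litres")  # ~1.7
```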
The Star recently obtained documents from the Department of Mineral Resources that shed light on the money held in financial provisions for rehabilitation. As of 2015, R45bn was held around the country in these funds.
While Mpumalanga and Limpopo – the country’s two most important coal mining provinces – refused to hand over their data, KwaZulu-Natal and Free State – two other provinces with coal mines – did release theirs.
Free State holds more than R5bn in financial provisions for rehabilitation, but the largest 5% of funds accounts for 99% of the money.
This means smaller operations, which are more likely to close or be abandoned than large sites, have an average of less than R60 000 in their funds.
KwaZulu-Natal is a similar story, with the largest 5% of funds holding 80% of the money.
Thulani Mnisi is a ward councillor in the Wesselton township in Ermelo. With so many residents living in poverty in the township and surrounding informal settlements, he said, mining could be tolerated if it brought jobs and some semblance of environmental responsibility.
Instead, the Imbabala Coal Mine sits abandoned and adjacent to the township.
Mine tunnels extend under the community, and illegal miners chip away at the underground pillars supporting the mine. Numerous people have died during cave-ins. “Those miners, after they mined, they just left the place like that,” Mnisi said.
Eskom and Transnet need to borrow billions more than anticipated in 2016, National Treasury revealed in its 2017 Budget Review on Wednesday.
Even as Eskom’s financial performance improved in 2015/16 as a result of a 12.7% tariff hike and a revenue increase by R10.5-billion to R161-billion, it still required borrowings for its new build and electrification projects.
In addition, Transnet grew revenues by 1.7% to R62.2-billion in 2015/16. While it has spent R122.4-billion on capital expenditure in the last five years, it plans capital investments of R273-billion in the next seven years, Treasury said.
These massive expenditure projects mean the entities take up the biggest share of government’s borrowings.
“In 2016/17 it (borrowing) will amount to R254.4-billion, or 5.8% of GDP,” it said. “This is R32.8-billion more than was projected in the 2016 Budget, reflecting a larger consolidated budget deficit and higher borrowing estimates by State-owned companies – primarily Eskom and Transnet.”
In 2015/16, borrowing by the six largest State-owned companies – the Airports Company of South Africa, Eskom, Sanral, SAA, the Trans-Caledon Tunnel Authority and Transnet – reached R128-billion.
Eskom and Transnet accounted for 74% of the total, Treasury explained.
Eskom’s planned borrowings in 2016/17 increased from R46.8-billion to R68.5-billion. “The increase results from Eskom’s revised assumptions of cost savings and lower-than anticipated tariffs during the current price determination period,” it said.
Over the next seven years, Transnet plans capital investments of R273-billion, to be funded by earnings and borrowings against its balance sheet, it said.
Foreign debt funding was lower than estimated, reaching R29.5-billion compared with an expected R42.6-billion.
“The six companies project aggregate borrowing of R102.6-billion in 2016/17 and R307.1-billion between 2017/18 and 2019/20.
“Gross foreign borrowings are expected to account for the majority of total funding over the medium term, largely as a result of Eskom’s efforts to obtain more developmental funding from multilateral lenders.”
In 2016, Eskom concluded a deal with the China Development Bank to get a $500-million loan facility.
However, Eskom is likely to need additional equity injections in the coming three to four years, according to Nomura emerging market economist Peter Montalto. “Its last equity injections stabilised ratios at very low levels, but are still a constraint,” he said in December. “Nuclear generation would severely leverage Eskom’s balance sheet without additional equity injections.”
Referring to the “injection”, Treasury said the R23-billion equity injection and the conversion of the R60-billion subordinated loan to equity helped shore up Eskom’s balance sheet.
“State-owned companies are responsible for much of the infrastructure on which the economy relies,” Treasury said. “Eskom, Transnet and … Sanral account for about 42% of public-sector capital formation.”
“Over the past year, Eskom continued its capital investment programme – bringing new generating capacity to the electricity grid – and maintained steady power supply. Transnet continued to invest in getting more freight from road to rail.”
Meanwhile, contingent liability exposure to independent power producers (IPPs) is expected to decrease in 2019/20.
“Government has committed to procure up to R200-billion in renewable energy from IPPs,” Treasury said. “As at March 2017, exposure to IPPs – which represents the value of signed projects – is expected to amount to R125.8-billion. Exposure is expected to decline to R104.1 billion in 2019/20.”
Government began to categorise power-purchase agreements between Eskom and IPPs as contingent liabilities in 2016.
“These liabilities can materialise in two ways. If Eskom runs short of cash and is unable to buy power as stipulated in the power-purchase agreement, government will have to loan the utility money to honour its obligations.
“If government terminates power-purchase agreements because it is unable to fund Eskom, or there is a change in legislation or policy, government would also be liable. Both outcomes are unlikely.”
It said Eskom is expected to use R43.6-billion of its guarantee in 2016/17 and R22-billion annually over the medium term.
It said SAA has used R3.5-billion of a R4.7-billion going-concern guarantee, with the remainder likely to be used in 2017/18.
As part of its debt collection efforts, State-owned Eskom on Wednesday started interruptions of bulk electricity supply to some defaulting municipalities in North West and the Northern Cape.
The municipalities of Naledi, Lekwa-Teemane and Kgetlengrivier, in North West, as well as the Ubuntu and Renosterberg municipalities, in the Northern Cape, will have their supply interrupted.
Power supply will be cut between 06:00 and 08:00 and 17:00 and 19:30 during weekdays and between 08:30 and 11:00 and 15:00 and 17:30 on weekends.
Many defaulting municipalities that were set to have their power cut from this month have made payments to Eskom or reached payment plans with the utility.
Eskom on Tuesday reported that 21 of the 34 identified municipalities scheduled for supply interruptions during January had met its requirements. As a result, these municipalities have had their supply interruptions suspended. This includes the Madibeng and Maquassi Hills municipalities in North West.
“We are immensely encouraged by the kind of response we are witnessing presently and would like to thank all the municipalities that have made an effort to pay their accounts, and committed to their payment agreements,” said Eskom interim CEO Matshela Koko.
Eskom will monitor the strict adherence to the payment plans and the payment of current accounts of these municipalities and any defaults will result in the interruption of supply without further notice.
Municipal customers are encouraged to engage with their supply authorities to get updated information on their municipality’s arrears situation.
State-owned Eskom earlier this week commissioned its 765 kV Kappa–Sterrekus transmission line, connecting the Western Cape to the network over and above the 400 kV network.
At 765 kV, the line operates at one of the highest voltages used for electricity transmission in the world.
This line connects Sterrekus’s 765 kV substation through the 765 kV network to the north. The substation is equipped with the latest switchgear and protection schemes and will be the new hub for the transmission western grid, as it connects to Koeberg and other major substations in the Peninsula.
The 400 kV network to the Western Cape was established in 1974 with only two lines from the North to the Western Cape. Subsequent to that, a third and a fourth in-feed were established.
This is the first major change to the transmission network since 1974, giving the Western Cape a much-needed secure supply from the major power stations in Mpumalanga and Limpopo.
The line between Kappa near Touwsrivier and Sterrekus posed severe challenges to the construction teams, as entry to some of the mountainous areas could only be achieved by helicopter. Construction took place mostly by hand.
“It was also difficult to obtain the servitude as the line had to cross the Ceres and Tulbagh valleys and required extensive public and stakeholder engagement,” the utility said in a statement.
Power utility Eskom is making progress with the roll-out of smart electricity meters in Sandton and Midrand, with 5 932 meters installed in the first three months of this year.
Eskom has made a strategic decision to convert all of its conventionally billed customers to prepaid meters, a project it believes will support the utility’s financial stability efforts. Customers will also benefit from improved reliability, reduction of public safety incidents, better management of energy consumption and the elimination of billing errors. Eskom plans to have smart meters installed at the premises of all 32 885 of its domestic customers in Sandton and Midrand by the end of the 2016/17 financial year.
The conversion of the smart meters to prepaid will resume in July, once Eskom has upgraded its online vending system. Meanwhile, Eskom has installed more than 40 000 split prepaid meters in Soweto, 13 000 of which have been converted to prepaid mode. The utility has already improved its revenue collection in Soweto by R33.63-million, as a result of the installation of the split prepaid meters.
PERTH (miningweekly.com) – South Africa-focused Sunbird Energy has signed a conditional agreement with a South African consortium to divest of its noncash assets for A$8.5-million. The assets include a 74% interest in the Mopane, Springbok Flats and Springbok Flat West coal-bed methane projects, as well as its 76% interest in the offshore Ibhubesi gas project.
The Ibhubesi gas field, off the Northern Cape coast, is South Africa’s largest undeveloped gas field with about 540-billion cubic feet of gas. National oil company PetroSA is Sunbird’s joint venture partner in the project. In 2015, Sunbird signed a gas sales agreement term sheet with utilities provider Eskom for the supply of 30-billion cubic feet a year of gas for up to 15 years, with Sunbird at the time describing the agreement as a major step towards the commercialisation of the Ibhubesi gas field. Sunbird told shareholders on Monday that the conditional agreement with the South African consortium, which consisted of major shareholders and debt holders, included a cash consideration of A$1-million, the buy-back and cancellation of 55-million existing Sunbird shares and the assignment of Sunbird’s A$4.8-million outstanding debt to the purchaser.
The transaction was subject to a number of conditions, including shareholder approval. A general meeting of Sunbird shareholders would be called in late May. The transaction with the privately-held consortium comes months after an indicative takeover proposal from Glendal Power and Industries for Sunbird was withdrawn. Glendal in July last year offered Sunbird shareholders A$0.18 a share for their holding in the company, valuing Sunbird at around A$25-million. The offer was withdrawn in December.
JOHANNESBURG – According to the Water Efficiency Report released by ActionAid South Africa on Tuesday, big business should be taking the lead in helping to deal with the country’s water crisis. Because of the threat that water scarcity problems pose to both the social and economic stability of the republic, it urges industry to become involved, at least as much as government, in addressing the issue.
Perhaps there is even a space for a water innovation industry to sprout, much as the renewable energy industry has burgeoned in the face of policy uncertainty and pressing need.
“Companies, whether they are big, small or medium, are all going to be affected by the water crisis one way or another,” says water expert Anthony Thurton, who contributed to the report. “Some of them are going to be affected negatively and they’re going to either ignore it – in which case they will become victims of the situation – and others are going to be very progressive and very positive about it, and they are going to change their business model and tailor it to the new reality.”
Thurton is also the director of water technology company Gurumanzi, which provides ‘uninterrupted water supply solutions’ that address the water risk problem much like an uninterrupted power supply eases concerns over load-shedding and other power cuts. It provides a back-up water reserve that lasts up to 48 hours, that can be rented by households, schools, hospitals, and even residential or business estates.
“There are many examples of solutions that companies are working on and they are all disrupters, or game changers in their own right,” says Thurton, referring to one company that is in the process of developing a solution that treats borehole water to improve its quality.
Privatisation is controversial
Johann Boonzaaier is the chief executive manager of the Impala Water Users Association, which owns a dam in KwaZulu-Natal in the only area that has not been affected by the drought because Impala was able to sell water to the municipality. The dam is about the same size as Hartbeespoort Dam and, according to Boonzaaier, would cost around R600 million to build at today’s prices.
But he says privatising water is a controversial topic because access to water is a basic human right. He believes there is much to be done with regard to regulations in such a scenario. He points out how Eskom’s price increases have had a dire impact on the economy and that this would be magnified if the price of water were to rise to match its scarcity.
“The danger of that is that many peoples’ livelihood depends on that water,” says Boonzaaier, “and if you get industries that can pay the highest price, then what will happen to the majority of farmers who farm for subsistence and cannot afford to pay that price?”
The report also notes that making agricultural irrigation systems more efficient could save up to 40% of current water use.
“Another question is, what is the value of water? For us, the value of the water use is the total cost of maintaining the resource. But in the Western Cape, the price is four times what ours is. So how do you decide? You must remember that, you can get along without food for a while, but if you go two days without water you’re bound to perish.”
Thurton says the National Water Act and the Water Services Act are under review, and that there is a drive to have them amalgamated into one piece of legislation to address the changes that are necessary to improve water efficiency. One of the suggested changes would see residential estates being regarded as water service providers: they buy bulk water from the local authority and distribute it to their users.
South Africans use 235 litres of water per day, while an average world citizen uses 173 litres of water per day. If municipalities could reduce the per capita consumption to the world average, the demand-supply gap would be reduced by almost half – SA Water Efficiency Report 2016
“In effect, what that will do is it will privatise a certain portion of the value chain, and that will open up a whole new way of doing things… On the one hand it presents new business opportunities but on the other it is completely uncharted territory,” Thurton says.
Providing water is government’s responsibility
The report states that, while there are already acute water shortages in 6 500 rural communities, the problem will spread to the metropolitan areas. It states that, by 2030 there will be a 17% supply deficit, with the large cities being the worst affected.
“Cape Town, which falls within the Berg Water Management Area, will need to close a gap of about 28% to meet demand,” reads the report.
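The report’s per-capita figures quoted above (235 litres a day against a world average of 173) and the projected 17% deficit can be combined in a rough sketch of the “almost half” claim. The share of total demand taken up by municipal or domestic use is an assumption chosen purely for illustration, not a number from the report.

```python
# Illustrative sketch of the claim that reducing per-capita use to the world
# average would close almost half of the demand-supply gap. The consumption
# figures and the 17% deficit come from the article; the municipal/domestic
# share of total demand is an ASSUMPTION for illustration only.

sa_use = 235.0          # litres per person per day (report)
world_avg = 173.0       # litres per person per day (report)
deficit = 0.17          # projected 2030 deficit, treated loosely as a share of demand
municipal_share = 1/3   # ASSUMPTION: municipal/domestic share of total demand

per_capita_cut = (sa_use - world_avg) / sa_use   # ~26% less per person
demand_cut = per_capita_cut * municipal_share    # resulting cut in total demand
gap_closed = demand_cut / deficit                # share of the gap closed

print(f"Per-capita reduction: {per_capita_cut:.0%}")      # ~26%
print(f"Reduction in total demand: {demand_cut:.0%}")     # ~9%
print(f"Share of the 17% gap closed: {gap_closed:.0%}")   # ~52%
```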
But Emily Craven from ActionAid South Africa says the intention of the report is not to start a dialogue on privatisation, but rather how to eradicate the inefficiencies within the country’s water eco-system. In some cases, this would lead to the companies that are directly responsible for those inefficiencies benefiting financially from perpetuating them.
Says Craven: “It would be a bit worrying if the first response from a report like this is a debate on water privatisation. Ultimately, it is government’s responsibility to ensure that people have access to clean, healthy water. That said, there is space for technology to be used to improve the system… We have seen it where mines have water purification plants that allow them to put water back into the system… What worries us is when the monetary value is put into the equation. Because mines are the biggest polluters of the water, essentially what you would have is municipalities buying their own water from the mines that polluted it in the first place”.
Pretoria — The R5 billion Bokpoort concentrated solar plant (CSP) has officially been launched in Groblershoop, Northern Cape.
Trade and Industry Minister Rob Davies welcomed the major investment by ACWA Power, a Saudi Arabian company.
“[The] project instils confidence in government’s long term infrastructure roll out, providing energy access, contributing to economic, community and sustainable development,” he said at the launch of the plant on Monday.
Minister Davies was joined at the launch by Saudi Arabian Trade and Commerce Minister, Dr Tawfiq Al Rabiah, who is also in South Africa for the 7th session of the South Africa-Saudi Arabia Joint Economic Commission (JEC).
The 50 MW Bokpoort plant forms part of South Africa’s Renewable Energy Independent Power Producers Procurement Program (REIPPP).
“This project marks a key milestone in South Africa’s electricity supply security and CO2 reduction. With its record 9.3 hours thermal energy storage capacity, the Bokpoort CSP project will provide electricity to approximately 21 000 households during the day as well as night time and save approximately 230 000 tons of CO2 equivalent emissions during every year of operation,” said Minister Davies.
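The household figure can be put in rough perspective with a simple calculation. The capacity factor below is an assumption (CSP plants with several hours of storage commonly run in the 30-40% range), not a figure from the article; with it, the implied output per household is in the range of a typical grid-connected home.

```python
# Rough plausibility check of the "approximately 21 000 households" figure for
# a 50 MW CSP plant. The capacity and household count come from the article;
# the capacity factor is an ASSUMPTION for illustration.

capacity_mw = 50            # from the article
households = 21_000         # from the article
capacity_factor = 0.35      # ASSUMPTION: typical for CSP with storage

annual_mwh = capacity_mw * 8760 * capacity_factor
kwh_per_household_year = annual_mwh * 1000 / households

print(f"Estimated annual output: {annual_mwh / 1000:.0f} GWh")                        # ~153
print(f"Implied consumption: {kwh_per_household_year:.0f} kWh per household a year")  # ~7300
print(f"                   = {kwh_per_household_year / 365:.0f} kWh a day")           # ~20
```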
Within five years, the REIPPP has attracted R194 billion of investment and is fast becoming a global model and blue print for other countries, providing policy certainty and transparency.
The Minister said the project has a major socio-economic development impact for the Northern Cape and South Africa. Over R2.4 billion was spent on local content, with 40% of the Bokpoort plant being sourced and manufactured locally. This includes the manufacturing and assembly of solar field collector steel structures and the supply of piping and cables.
During construction peak time, more than 1 200 people worked on site, while 70 permanent jobs have been created to operate and maintain the plant. The plant was constructed over 30 months.
“The operation of the plant will provide electricity to the Eskom grid to power communities and industry by ensuring a reliable source of renewable energy and increasing power supply.”
The Minister thanked the chairperson of ACWA Power, Mohamed Abunayyan, for his confidence to invest in South Africa.
ACWA Power aims to expand its Southern African portfolio to 5 000 MW by 2025. The group has identified South Africa, Namibia, Mozambique and Botswana as key growth markets in the region.
“To our visitors from Saudi Arabia, South Africa is indeed open for business. Investors enjoy robust protection in South Africa, comparable to the highest international standard,” said Minister Davies.
South Africa should optimise its funding model in the procurement of its 9 600 MW nuclear build programme to ensure cheaper electricity costs, according to Rosatom’s Nikolay Drosdov on Wednesday. Drosdov, director of international business for Rosatom, told Fin24 that South Africa should “optimise the model to decrease the price of electricity, because … the proportion between investments and loans depends on the … levelised cost of electricity”.“You have to pay interest rates from your price of electricity if you’re using a lot of debt money,” he said on the sidelines of the Nuclear Africa conference. “We can help to optimise the model, but it’s also the subject of commercial negotiations.” The Department of Energy will release its Request for Proposals by the end of March, after a year-long process of signing up vendor countries through inter-governmental agreements regarding the peaceful use of nuclear energy.
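Drosdov’s point, that the split between equity and interest-bearing debt feeds directly into the tariff, can be illustrated with a highly simplified levelised-cost-of-electricity sketch. Every number below is an invented placeholder; none comes from the article, from Rosatom or from any actual procurement document.

```python
# Highly simplified illustration of how the debt/equity mix feeds financing
# costs into a levelised cost of electricity (LCOE). All inputs are invented
# placeholders for illustration only.

def simple_lcoe(capex, debt_share, interest, equity_return,
                annual_mwh, opex_per_mwh, lifetime_years):
    """Crude LCOE: annualised financing cost plus O&M, per MWh produced."""
    # Weighted cost of capital implied by the funding mix.
    wacc = debt_share * interest + (1 - debt_share) * equity_return
    # Capital recovery factor: the annuity that repays capex over the lifetime.
    crf = wacc * (1 + wacc) ** lifetime_years / ((1 + wacc) ** lifetime_years - 1)
    return capex * crf / annual_mwh + opex_per_mwh

# Placeholder plant: capital cost, output and O&M are purely illustrative.
capex = 100e9          # currency units
annual_mwh = 7.5e6     # MWh generated per year
opex = 150             # currency units per MWh
life = 40              # years

# With these invented rates, debt is cheaper than equity, so a larger debt
# share lowers the tariff; with costlier debt the effect reverses. Either way,
# the funding proportions are a lever on the final price of electricity.
for debt_share in (0.3, 0.5, 0.8):
    lcoe = simple_lcoe(capex, debt_share, interest=0.10, equity_return=0.15,
                       annual_mwh=annual_mwh, opex_per_mwh=opex,
                       lifetime_years=life)
    print(f"Debt share {debt_share:.0%}: LCOE ~ {lcoe:,.0f} per MWh")
```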
To speed up the programme, South Africa should follow the engineering, procurement and construction (EPC) procurement model, which would be signed by the existing state-owned company (Eskom) or a newly created company, according to Drosdov. “This company shall sign an EPC contract … with the scope closed to a turnkey base with one of the global nuclear vendor selected based on the competitive and transparent procedure during the procurement process,” he said. “In nuclear, we have different financial models that you can invest some money in the equity, you can attract some money from the market, from the government resources, from export credit agencies/banks (entities that provide government-backed loans),” he said. While many economists have spoken about the value add to South Africa’s economy with large nuclear localisation, Drosdov said the exact size will depend on the interest local business has for the nuclear procurement programme.
“If local business is interested to participate in the nuclear programme, we can increase it (localisation),” he said. “If not, we can supply 100% by our sources, but economically it’s not efficient. We are trying to use local partners … (due to) lower costs, but it’s the subject for negotiation.” “What could be a solution is a global partnership,” he said. “For example, you would take a Russian nuclear island (the heart of the nuclear plant) and we will integrate your local competencies and local technologies.” “We can have for example Russian/South African technology that can be exported to other countries in Africa.” Several media outlets challenged Drosdov this week over the programme, asking him whether a secret deal had been signed with South Africa. Fin24 has on numerous occasions asked this question to Rosatom officials including Drosdov, with the usual reaction that no such deal had been signed. | 1 | 29 |