Hackers are always trying to find ways to hack into your Google account and steal your information. Luckily, Google has many tools that you can use to help keep your account secure. This wikiHow article will teach you how to keep your Google account safe from hackers.
Method 1 of 6:
Protecting Your Password
1. Create a strong password. Don't use your name, birth date, pets' or kids' names, or the name of your street as your password: make it hard to guess.
- A strong password will be at least 10 characters in length, but the more the better. The longer your password is, the more time it will take a hacker to crack it (the rough arithmetic in the sketch after this list shows why).
- A strong password should contain at least one of each of the following characters: lower-case letters, upper-case letters, numbers, and special characters.
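To get a feel for why length and a mixed character set matter, here is a rough back-of-the-envelope sketch in Python. The guessing rate is an assumed, illustrative figure (real cracking speeds vary enormously with the attacker's hardware and with how a site stores passwords), and the alphabet sizes are approximations.

```python
# Rough brute-force estimate: an attacker who tries every combination of a
# character set must cover alphabet_size ** length guesses in the worst case.
GUESSES_PER_SECOND = 1e10  # assumed cracking speed, for illustration only


def worst_case_years(length: int, alphabet_size: int) -> float:
    """Worst-case time (in years) to exhaust the whole search space."""
    combinations = alphabet_size ** length
    seconds = combinations / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)


# 10 characters drawn from lower case, upper case, digits and symbols (~94 characters)
print(f"10 chars, mixed set:       {worst_case_years(10, 94):12.1f} years")
# 10 characters drawn from lower-case letters only (26 characters)
print(f"10 chars, lower-case only: {worst_case_years(10, 26):12.4f} years")
```

Even against this very fast assumed attacker, the mixed 10-character password takes more than a century to exhaust, while the lower-case-only one falls within hours, which is why the bullet points above recommend both length and variety.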
2. Do not use your Google password anywhere else. Create a different password for every website that you use.
- It's not enough to use the same password with different numbers at the end (e.g., password1, password2 …).
- Consider downloading the Password Alert extension if you use Google Chrome. Password Alert will warn you whenever you enter your Google password on a non-Google site, which can help protect you from phishing and from accidentally using your Google password on another site. To use Password Alert, simply download it from the Chrome store, and then follow the onscreen directions.
3. Consider using a password manager. As you create more accounts and passwords, it'll likely be difficult to remember them all. There are many good password managers available that will encrypt and safely store your passwords, such as 1Password, LastPass, and KeePass.
- You might have a password manager built into your operating system — for example, Mac users have keychain available to them for free.
- If you don't want to use a password manager, consider using a passphrase, for example: “I like big butts and I cannot lie!” might become iLbBaIcL!
4. Avoid sharing your Google password with anyone. Even people you trust, like your friends and family, might accidentally share your password with someone you don't trust.
5. Only log in on trusted computers. If you are using a computer that you don't know or trust, then don't even log into your account. Hackers commonly use key loggers on computer systems that record everything you type, including passwords.
- If it's not possible for you to avoid typing a password into a computer you don't trust, then change your password once you're back at your own computer.
Method 2 of 6:
Accessing Your Security Settings
1. Visit myaccount.google.com. You may be asked to sign in with your Google account if you aren't already.
2. Click the "Security" tab. It's on the left side of the page.
Method 3 of 6:
Making Use of Google's Security Settings
1. Enable two-step verification. Two-step verification makes sure that even if a hacker guesses your password, your account will still be safe. Every time you log in from a new device, you will get a code or notification from Google that you will have to enter or approve in order for the sign-in to be successful.
- Google prompt is the most secure method of two-step verification, an authenticator app is somewhere in the middle, and voice or text message is the least secure (although any of these methods is more secure than not having two-step verification at all). A sketch of how authenticator-app codes are derived follows below.
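For the curious, authenticator apps generally implement the open TOTP standard (RFC 6238): when you scan the setup QR code, the app and Google share a secret, and the six-digit codes are then derived locally from that secret and the current time, so nothing has to be sent over the phone network. Below is a minimal sketch of that derivation in Python; the Base32 secret is a throwaway example value, not a real account key.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                # time-based moving factor
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HMAC-SHA1 as in RFC 4226
    offset = digest[-1] & 0x0F                          # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)


# Example secret only; never hard-code or share a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is computed on your device rather than delivered by SMS, it is much harder for an attacker to intercept than a texted code.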
2. Regularly check your account activity. Google keeps a log of all major security events on your account and allows you to view them. The log shows each change and the location it was made from. If you click on an event, you can see more information about it, such as the IP address of the computer that made the change, the device that was used, and a map of the location.
- If you see something that you don't recognize, then you should change your password immediately.
3. Review your app passwords. Delete app passwords that you no longer use to make it harder to hack into your account. If you use an app that requires an app password, then you should look into other services or apps that don't require app passwords, as app passwords can allow hackers to bypass two-step verification.
- If you don't have any app passwords, then you can skip this step.
4. Choose a secure PIN. Some Google services, like Google Pay, allow you to set a PIN that you can use to verify your identity. When you choose a PIN, use a completely random number (see the sketch after this list for one way to generate one). Don't use your birth date, home address, part of a phone number, or any other number that can be linked back to you.
- Your account may not have an option to set a PIN.
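If you want a PIN with no personal meaning at all, you can let your computer pick one for you. This is a minimal sketch using Python's standard secrets module; the six-digit length is only an assumption, since different Google services ask for different PIN lengths.

```python
import secrets

# Generate a cryptographically random 6-digit PIN instead of reusing a
# birthday, address or phone-number fragment.
pin = "".join(str(secrets.randbelow(10)) for _ in range(6))
print(pin)
```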
5. Add a recovery phone and email. Adding a recovery phone number or email address allows you to regain access to your account if you ever forget your password. It can also allow you to take back control of your account from a hacker.
- Make sure that you only use an email address or phone number that you control; don't use those of friends or family. Even if you trust your friends or family, their account could be hacked or their phone stolen, which would then put your account at risk.
6. Review the devices that are signed into your account and check third-party app access. Reviewing these areas on your account will allow you to make sure that only your current devices and services have access to your account. Make sure to remove any old devices and accounts that you don't use anymore. If you see something that you don't recognize, then you should immediately remove it and change your password.
Method 4 of 6:
Using Security Checkup
1. Go to myaccount.google.com. You may be asked to sign in with your Google account if you aren't already.
2. Navigate to the "We keep your account protected" header. Click on the “Get started” link.
- You can directly access this page by visiting myaccount.google.com/security-checkup on your browser.
3. Wait for the results. If your account is safe, then you will see a “No issues found” message.
4. Review the results. You can review the Recent security events, Sign-in & recovery, Third-party access and Your devices sections from there. Click on each option to view more details.
- If any issues are found, then follow the recommended action to secure your account.
Method 5 of 6:
Taking Advantage of Other Security Settings
1. Disable POP3 and IMAP access if you don't use them. POP3 and IMAP are communication protocols that some email programs use to access your email. However, these methods of accessing your account can create a security risk because they bypass two-step verification. If you don't use an app that requires IMAP or POP, then you should disable them.
- To disable POP3 and IMAP access, navigate to Gmail, click on the settings gear in the upper right corner, click "Settings", and then select the "Forwarding and POP/IMAP" tab. Once there, select the disable option for both services, and then click "Save Changes".
- The Mail app on Windows 10 and the Gmail app on your phone should continue to work even if POP3 and IMAP are disabled.
2. Set up Inactive Account Manager. Inactive Account Manager is a feature that makes sure that your Google account will be deleted, or that access will be given to somebody you trust, if you are ever unexpectedly unable to access your account. It's a good idea to set up Inactive Account Manager so that if you are unable to access your account, or if you forget about it, your account will still be taken care of and your data will be safe.
3. Avoid spam emails. Spam emails are annoying, but they can also be dangerous. Don't click on any links in spam emails and avoid even opening emails in your spam folder.
- Gmail also allows you to block emails from specific email addresses that you do not trust or want to hear from.
- Know how to spot a scam. If you suspect a phishing email, then report it. To avoid getting phished, beware of the following:
- Messages with poor grammar, spelling, and typos.
- Messages asking for your personal information such as your credit card info, driver's license, social insurance number, date of birth, etc.
- Messages claiming that your account will be deleted unless you give out your password.
Method 6 of 6:
Protecting Your Computer/Device
1. Use up-to-date anti-virus software. Anti-virus software helps keep your computer secure by preventing, detecting and removing malware. There are several free anti-virus programs available online (popular ones include AVG Antivirus and Sophos). If you don't already have one, download one now, ensure that it's kept up to date, and run scans regularly.
2. Keep all software up to date. In particular, ensure that your browser and operating system are updated.
3. Set a device password or screen lock. Setting a password on your device will help make sure that your Google account will stay safe even if your device is stolen.
Question: How do you know POP3 gives data to a hacker?
Answer (Top Answerer): POP3 can allow a hacker to access your Google account because it bypasses two-step verification. However, it will only allow hackers to access your email, and a hacker still needs the account password to access data through POP3. If you use a program that requires POP3, then just make sure that you have a secure password and you should be fine.
- Consider changing your password and PIN every 6-12 months.
- If using a public computer (for example, a library computer), make sure to sign out each time you are finished with your session.
- It's a good idea to do the security checkup at least once a year.
- Always make sure that your browser is up to date, and update it promptly if it isn't.
The French Concession is the area of Shanghai that the French government administered from 1849 until 1946. The tree-lined avenues and the many fine old houses in the area still retain an air of the "Paris of the East". In particular, the many wrought iron fences and stair railings will look familiar to anyone who knows Paris or Montreal.
This has been a fashionable area for well over a century and is now very developed as well. There are plenty of large buildings, mainly upmarket residential and office towers, quite a few hotels and a number of enormous shopping malls. At the same time, many of the picturesque older buildings — even whole neighbourhoods — have been renovated. There are a huge number of boutiques, galleries, bars and cafes scattered through the area.
For many years after the French left, the area was administered by the Chinese as two districts, Xuhui (徐汇区 Xúhuì Qū) to the west and Luwan (卢湾区 Lúwān Qū) to the east. In 2011, Luwan was merged into the Huangpu district, but we cover it here because for the traveller it has more in common with the rest of the old French Concession.
In addition to the official administrative districts Xuhui and Luwan, the French concession area has some well-known streets and neighborhoods.
Xujiahui (old spelling Zi-ka-wei or Siccawei) is an area to the south-west of the French Concession. It was technically not part of it, but largely owned by the Catholic Church and effectively an extension of the Concession. The area has many buildings built by the Church during the French period and thereafter. The most prominent of these are St. Ignatius Cathedral; the neighbouring Bibliotheca Zi-ka-wei, a library built by the Jesuits; a number of preserved convent and school buildings; the Jesuit observatory; the T'ou-se-we Museum, housed in part of a former Jesuit orphanage, with interesting displays on the history of Xujiahui, the orphanage and its workshop famed for producing works of Chinese and Western art; and the tomb of Xu Guangqi, an imperial official and famous Catholic convert whose family donated much of the land in Xujiahui to the church. This collection of buildings from Xujiahui's Catholic past is promoted as a themed walking tour called "Origin of Xujiahui", and boards with maps can be found near any of them with directions to visit the others.
Today, its central area is an enormous road intersection with a metro station (lines 1, 9 and 11) under it and much shopping around it. There is a large underground shopping area right in the station and at least half a dozen large malls or department stores nearby. From the station, you can get to most of them without going outdoors. Among other things, Xujiahui has Shanghai's largest cluster of consumer electronics vendors. It also has a lot of high-end residential and office space, and many restaurants.
There is a large road which starts by the cathedral and becomes an elevated road just beyond it. It leads to the Xinzhuang interchange and beyond that to Humin Road, the main route South into Minhang District for cars and buses.
When the French controlled the area, this street was Avenue Joffre. Today it is the main street of the Luwan area, and one of Shanghai's main shopping streets. In fact, Shanghai people seeking upmarket goods are at least as likely to look here as on Nanjing Road, which attracts more visitors from other parts of China than locals.
Many of the smaller streets nearby are also worth a look, especially when you want to get away from the busy streets. Explore the area between Julu Rd to the north, Huaihai Rd running through the center, and Jianguo Rd to the south. Pleasant tree-lined streets and local Shanghainese bustle, combined with a growing number of trendy boutiques and restaurants. Changle Rd and Xinle Rd are rapidly becoming the places to find small designer clothing shops. Interesting architecture built with French and Belgian money and showing mixed Chinese-European styles.
The trendy areas Xintiandi and Sinan Mansions described below are both near Huaihai Road (5-10 minutes walk away); Tianzifang is further afield, about 30 minutes walk, or a 15-minute taxi ride in normal traffic conditions.
Metro line 1 runs under Huaihai Road through the main shopping area — stations, listed east-to-west, are South Huangpi Road, South Shaanxi Road and Changshu Road. Line 7 also comes to Changshu Road, and lines 10 and 12 to South Shaanxi Road.
Further west, Huaihai Road becomes mainly residential. Line 10's Shanghai Library and Jiaotong University stations are on the street in this area.
Hengshan Road (old name Avenue Pétain) and nearby streets have what used to be Shanghai's largest cluster of dining and nightlife spots. It has since been surpassed by areas like Found 158 and Xintiandi. It is an upmarket area with few real bargains, but food and drink here are generally somewhat cheaper than in trendier and more touristy areas like Xintiandi. There are also a number of hotels and quite a bit of boutique shopping. For those interested in the history, the main points of interest are the former American College (no. 10), and the (still active) Community Church (no. 53), reminders of the large English-speaking community that also lived in the French Concession.
From Changshu Road, line 1 swings south; the next two stops are Hengshan Road and Xujiahui. Hengshan Road and the smaller streets off it have mainly older two-storey buildings, many of them now bars and restaurants, though nearby areas such as Xujiahui and Zhaojiabang Road are largely highrise.
You can reach this area on foot starting from Changshu Road station (line 1 or 7). At the cross street on the west side of the station, head south past the Starbucks. The first couple of blocks of this street are called Baoqing Road, but the name soon changes to Hengshan Road. Oscar's Pub, a block along on the left (corner of Fuxing Road) is a popular expat hangout.
Turning right at Hengshan Lu / Dongping Lu junction brings you to the terminus for the #816 bus to Minhang. Dongping Lu ends in a T junction; the US Consulate is across the top of the T. If you turn left instead of right at Hengshan Lu / Dongping Lu, you will pass a popular restaurant / upmarket grocery called Green & Safe. If you continue along Dongping Lu, you will reach a mostly-expats sports bar called The Camel. Continuing beyond those leads onto the west end of Fuxing Road, and into the area of smaller streets described under #Huaihai Road above.
Staying on Hengshan Road and walking towards Hengshan Road Metro Station, you will get to a new (2018) complex of restaurants on the south side of the street - Polish, South American and Chinese. Some have patios out front which are quite pleasant. There are no bars in that complex but most of the restaurants serve drinks. There are some large disco-style bars and nightclubs on the north side of the street. Phebe is probably the biggest club. There is also a busy bowling alley next to Phebe nightclub. Going south from Phebe brings you to another cluster of bars near the junction of Hengshan Lu and Gao'an Lu. These bars are quite tacky and are best avoided. Continuing south, there are also at least two high-class hotels within a few blocks.
Continuing several blocks south on Hengshan Road gets you to Xujiahui.
Xintiandi is an area of old shikumen houses, two-storey buildings on narrow lanes. It has been extensively redeveloped and now has new shopping malls, trendy bars and restaurants, and much tourism. It is sometimes considered a sanitized, touristy and upscale "Disneyland" version of the original old neighborhoods it displaced. It is certainly rather pretty, worth at least a look for any first-time visitor to Shanghai. Prices are generally on the high side, but there are some good deals to be had at off-peak times such as lunch specials in some restaurants and happy hour in bars. Although there are many shops here, most are international or Hong Kong-based chains.
Attitudes to Xintiandi among Shanghai's large expatriate community are quite mixed. The area certainly has many expat customers, and many consider some of its live music venues and dance clubs as among the best in the city. Others dismiss most or all of them as "poseurs' pubs", suitable only for a more-money-than-sense crowd.
There is a Xintiandi station on metro lines 10 and 13; walk north from there to reach the center of the area. Walking south from South Huangpi Road station on line 1 is roughly the same distance.
Tianzifang is another area of shikumen housing that has been redeveloped. It is newer than Xintiandi and emphasizes arts, crafts and boutique shopping where Xintiandi has more stress on brand-name goods and entertainment. Unlike Xintiandi, the shikumen residences in Tianzifang have been preserved, rather than knocked down and rebuilt. Slightly further from the central part of the French Concession, Tianzifang first gained fame when several prominent artists took up residence there, taking advantage of the cheap rent. There are still galleries and artists' studios here, although handicraft, souvenirs and cafes now dominate.
Exit 1 of Dapuqiao Station (line 9) is just across the street from Tianzifang.
Sinan Mansions is another redeveloped quarter, based around a dozen European-style villas dating from the early 20th century. This area is bounded by Fuxing Road to the north, Si'nan Road to the west, and Chongqing Road to the east. The villas have been renovated, and their front and back yards knocked through and paved over to become paths. The revamped villas now mostly house restaurants and bars. Shaded by tall plane trees planted by the French authorities 100 years ago, this is a pleasant area to stroll and perhaps stop for a coffee and some cake.
A group of buildings once belonging to the Catholic Church are near the Chongqing Road end, the largest of which is now an upmarket restaurant (Aux Jardins Massenet). Some of the villas in the same group are now an upmarket hotel (Hotel Massenet). (Sinan Road was Route Massenet in the French period.)
Only one villa, the former residence of Zhou Enlai, is preserved (as a museum) in a form which shows what these villas might have looked like when they were in residential use.
Wukang Road (old name Route Ferguson) is one of the best preserved residential streets of the French Concession. Still lined by ornate villas and grand apartment buildings, it is a favourite for visitors interested in Shanghai's diverse architectural heritage. The road connects Huashan Road to the north with the western part of Huaihai Road in the south. The narrow road is lined with plane trees, and is popular in autumn when the golden leaves cover the ground. Unfortunately, very few of the historic houses can be visited. There is one small area of the road which has been developed to house cafes, bars and restaurants, and is popular with expatriate residents of the area.
Longhua, formerly a suburban township and now part of urban Shanghai, was not part of the French Concession and is about a 30-minute walk or 10-minute cab ride further out from Xujiahui. Although close to the French and Catholic areas, until quite recently Longhua retained the look of a Chinese town. Much of the area has now been rebuilt in a fantasy-Chinese style.
Line 7 comes in from the north, crosses line 2 at Jing'an Temple, intersects line 1 at Changshu Road, makes three more stops further south in the Concession, then crosses the river into Pudong.
Line 10 runs west from Laoximen (the 'Old West Gate' of the old town), stopping in the French Concession at Xintiandi, S. Shaanxi Road, Shanghai Library and Jiaotong University. Further west, it goes to Changning and Hongqiao Airport. Going east, it crosses the Old Town, swings north via Nanjing Road East, and ends up in Yangpu.
Line 11 comes in from Nanhui across the river, goes to several stations in the French Concession, intersects lines 1 and 9 at Xujiahui, and goes off to Changning, Putuo and Jiading to the north. At its western extreme it already (late 2017) extends outside Shanghai as far as Kunshan, and plans call for it to eventually link up to the metro systems of Suzhou and Wuxi.
In general this is a pleasant area to wander about in. Explore the sylvan streets and admire Shanghai's Art Deco residential architecture, reputedly the world's largest group of such homes, although not the most well-kept. Most historic buildings have a bronze plaque that details their original use. The area sandwiched between Fuxing and Huaihai Roads is particularly interesting with a sprinkling of tucked-away shops and discreet cafes, a refreshing alternative to the city's generally manic streetscape.
Around Fuxing Road
- 1 Fuxing Road (复兴路) (parallel to Huaihai Road, one block south; exit 6 of the metro station at S Shaanxi Road). Walk along Fuxing Rd to see classical old buildings and much boutique shopping. The Shanghai Music Conservatory is near Fuxing Road. Some blocks west of Shaanxi Road, here and there, are a number of shops specialising in musical instruments, especially orchestral stringed instruments, and several shops for classical or jazz recordings.
- 2 Fuxing Park (复兴公园; Fùxīnggōngyuán), 105 Fuxing Zhong Rd (卢湾区复兴中路105号; Lúwānqū Fùxīngzhōnglù), ☏ . 06:00-18:00. This European-style park, formerly known as French Park, has gardens, open spaces and restaurants and clubs dotted throughout. Early in the morning, the park is filled with dancers (some Chinese styles, but mainly Western ballroom), players of various games (cards, mahjong, Chinese chess and Go), Tai Chi artists, and singing groups. Free.
- 3 Shanghai Former Provisional Government Site of the Republic of Korea, 306 Madang Road, Huangpu District 上海市黄浦区 马当路306弄4号, ☏ . During the Japanese occupation of Korea, it was the site of the Korean government in exile from 1919. Now owned and preserved as a museum by South Korea. Everything is explained in Mandarin and Korean, with nothing in English.
- 4 Shanghai Museum of Arts and Crafts (上海工艺美术博物馆), Fenyang Road 79 (Metro: Changshu Road, lines 1 or 7, then walk 1 km), ☏ . 09:00 - 16:30. Inside this Renaissance-style building you can see ivory, jade and wood carvings, art on textiles, paper cutting, and folk crafts, as well as artisans at work. ¥8.
Around Huaihai Road
Besides the large shopping area on the east of Huaihai Road, there are other attractions on and around the road:
- 5 Shanghai Propaganda Poster and Art Centre (PPAC), RM. BOC 868 Huashan Rd, Shanghai 上海华山路868号BOC室 (Go north from Jiaotong University metro station (lines 10 and 11) or take a taxi to 868 Huashan Road. The museum is inside the apartment complex here. With any luck, the complex guard will point you in the right direction. The museum is found in the basement of building 4 (B).). Daily 10:00-17:00. This private collection is one of the most relevant and uncensored exhibits available to visitors interested in a glimpse of the politics and art of Mao-era China. Posters, memorabilia, photos, and even "大字报" (dazibao: big character posters) can be found in rotating exhibitions. Due to the controversial nature of the historical items stored here, the museum is quite difficult to find, and unlabeled from the outside. Well worth the hunt, the museum boasts a wide array of art and political relics from 20th-century China. ¥20 admission.
- Yongping Lane. One of the secret gems on Hengshan Road is a lane open to the public. It includes 'Colca Peru Spanish Restaurant', other western food and drinks, a temporary art & fashion market along the zig-zagged lane, and the 'Yongjia Road 690 exhibition hall'. The lane goes from behind exit 4 of Hengshan Road station (line 1), and exits at no. 690, Yongjia Road.
- 6 K11 Sky Garden, 300 Middle Huaihai Road (Metro: Line 1, South Huangpi Road, exit 3, across the street). 10:00-22:00. If you are on a low budget or you want to have some free fun, buy some cheap drinks in a grocery and go up by elevator to the 6th floor of the 61-floor K11 skyscraper. Then you turn right and right again to find one of the two sky gardens (the other one is left and left), with empty tables and chairs and a nice view of the city. The best time to visit this secret spot is in the evening with your partner, ideally under a full moon. The gardens host special Saturday parties from time to time, but most of the time they are just free to visit.
Many of the consulates of foreign governments are also in this area; see the list in the main Shanghai article.
It seems a bit ironic that various militantly anti-imperialist Chinese lived in the French Concession and the Communist Party had its first national meeting here, but there were good reasons for this. For one thing, this area has always been one of the most pleasant and prestigious in Shanghai. Also, for revolutionaries — whether republicans opposing the Qing Dynasty before 1911 or Communists opposing the Kuomintang government later — an area under foreign law was considerably safer than one with Chinese law and police.
- 7 Site of the First National Congress of the CPC, 76 Xingye Lu (E side of Xintiandi). The building where the Communist Party of China (CPC) had their first national meeting in 1921. Free.
- 8 Zhou Enlai's Former Residence, 73 Sinan Rd, ☏ . This was the former Shanghai Office of the Delegation of the Communist Party in China from June 1946. It is now a museum telling the story of the Communist revolution in China and particularly Shanghai. See #Sinan Mansions above. Free.
- 9 Sun Yat-sen's Former Residence, 7 Xiangshan Rd, ☏ . Sun Yat-sen (Sun Zhong Shan in Mandarin) was the first leader of the Republic of China after the 1911 revolution that overthrew the Qing Dynasty. He is one of the very few political figures regarded favorably by the current governments of both China and Taiwan. The house was the Shanghai residence of Sun Yat-sen and his wife Soong Ching Ling. It was converted into a museum in 1961. ¥20.
- 10 Soong Ching Ling's Former Residence (上海宋庆龄故居), 1843 Huaihai Middle Rd (Walk a couple of blocks west from Shanghai Library station, line 10), ☏ . 09:00-16:30. Like her father, brothers and sisters, she went to an American university (Wesleyan College in Macon, Georgia), spoke excellent English, and was deeply involved in Kuomintang politics prior to the Kuomintang defeat in 1949. Unlike her siblings (a sister married Chiang Kai Shek and a brother served as his Finance Minister), she remained in China and worked with Mao's government after that. Much state business took place at the residence. Today the house is a museum with many artifacts and photos. The grounds are very well maintained and there's a garage with a few formerly state-used cars as well. Gift shop. ¥20 (adults), ¥10 (students and persons aged between 60 and 70); children, disabled people and persons aged over 70 may enter for free.
Chairman Mao's Shanghai house is now also a museum; it is outside this area, in Jing'an District in what was the British concession. The Kuomintang leader Chiang Kai Shek, who lost the civil war in 1949 and was driven off to Taiwan, also had a house in the French Concession (9 Dongping Road) but that has not been made a museum; it is now used as a Middle School attached to the Shanghai Conservatory of Music.
- 11 Xujiahui junction. A large road junction with Xujiahui metro station (lines 1, 9 and 11) underneath it.
- 12 Shanghai Jiaotong University (SJTU or "Jiaoda") (Jiaotong University Station, lines 10 and 11). This is one of China's top technical schools. The new main campus is in Minhang, but the original campus in the French Concession is still used; it now does mainly continuing education courses such as Chinese-for-foreigners or MBA-for-executives.
- 13 Guangqi Park and Memorial Hall, Nandan Road 17 (Xujiahui station, exit 1), ☏ . 05:00-18:00. The park was built around the tomb of Xu Guangqi, a prominent scholar and China's most notable Catholic convert, after whom the Xujiahui area and the Xuhui district are ultimately named. The tomb, recently restored according to its original set-up, is a curious combination of a large Christian cross as the grave marker, with a traditional Chinese "spirit way" lined with stone animals. The park features a lake, a bridge, a carved stone gate and statues. Another free entrance attraction in the park is the Xu Guangqi Memorial Hall in the southwest corner, a traditional house that showcases the scholar's activity and inventions and a vintage puzzle game. Another statue of the scholar is nearby, at metro exit 1. Free.
- 14 St. Ignatius Cathedral (exit 3 from the metro station). The area was under renovation in 2018, but the cathedral is open. Don't wear slippers or anything resembling them; the dress code is strictly enforced. Free.
- 15 Xujiahui Park. Always open. On the location of a former brick factory now stands the Xujiahui Park, which contains a man-made meandering brook (modelling in miniature the course of the Huangpu River), a lake with two black swans, basketball courts, and a children's playground.
- 16 Former site of Pathé, 811 Hengshan Road. The Red House, which once housed the Pathé China record company, is at the edge of Xujiahui Park.
- 17 Metro station exhibition. During metro service. Small, free museum exhibition at Shanghai Xujiahui station between exits 5 and 7. The exhibits are changed several times a year and sometimes English explanations are included. The history or tradition-related items are brought from nearby museums.
- 18 Shanghai Film Museum (Shanghai Film Group), North Caoxi Road 595 (Metro lines 1 or 4, Shanghai Indoor Stadium, exit 4), ☏ . 09:00 - 16:30 (last exit at 17:00). Starting from 1896, Shanghai has had a prominent role in Chinese movie making. German architect Tilman Thürmer transformed a film studio into an exciting experience of film evolution through over 70 interactive installations and thousands of historic exhibits. The museum contains 4 main exhibition units (floors) and an art cinema. (A very small free permanent exhibition can be seen in Shanghai Indoor Stadium metro station's concourse paid area above metro line 1: portraits of Shanghai movie actors, with descriptions in Chinese.)
- 19 Longhua Temple (metro lines 11 and 12, Longhua). One of the city's less-visited temples among foreign visitors, but an important one and one of the most popular with Shanghai residents during religious festivals. It is one of the largest ancient Buddhist temples in Shanghai. The first temple was built in 247 CE, but it was destroyed and rebuilt under the Song dynasty, 977 CE, and several times since then. It is a temple of the Ch'an sect of Buddhism, better known in the West by the Japanese name Zen. Longhua Pagoda, in front of the temple, is from the 10th century and one of the oldest standing buildings in the city.
- 20 Longhua Martyrs' Cemetery, 180 Longhua Lu (metro lines 11 and 12, Longhua). Longhua Temple's old garden and orchard has now been converted into this large green space. Very few people are buried here, so it's more of a memorial garden and museum, but the acreage is beautiful and large, akin to Luxun Park. The temple grounds became a "cemetery" because, during the civil war, Kuomintang troops executed Communists on these grounds, and the cemetery, poetry, fountains, steles, and sculptures commemorate those who were shot. The cemetery is an interesting example of Socialist Realist landscaping, but the peach blossoms - for which the temple's gardens were famous - are still there, and attract many visitors when they flower in spring. Free.
Longhua Cemetery and Temple are at the southern edge of the former French Concession, about a half hour walk or a ¥20 taxi ride from Xujiahui. The easiest way to reach them is by metro.
In Middle and South Xuhui
- 21 Shanghai Botanical Garden (上海植物园), 997 Longwu Lu (Metro: Lines 1 or 3, Shanghai South Railway Station, or Line 3, Shilong Road, both still far from it. Bus: Lines 56, 824 and others will take you directly there.), ☏ . 07:00-17:00. Covering 81.86 hectares, the garden has a diverse collection of Chinese plants. Its sections are the penjing (bonsai) garden, penjing museum, tropicarium, magnolia, peony, bamboo and conifer gardens. ¥40.
- 22 Guilin Park (桂林公园), 188 Caobao Lu (Metro: Guilin Park), ☏ . 06:00-18:00. A relatively small park with traditional Chinese gates, curved paths and pavilions. Good park for a walk if you happen to be nearby, but for genuine historical gardens you can consider travelling to Suzhou. ¥2.
- 23 Kangjian Park (Just across the street from Guilin Park). Larger and more modern than Guilin Park.
- 24 Shanghai Normal University, 100 Guilin Road. The Shanghai Normal University Xuhui Campus lies on both sides of Guilin Road, not far from Guilin Park. The east side features a lake and the west side is also pleasant for a walk.
- 25 Chinese "Comfort Women" History Museum (中国“慰安妇”历史博物馆), Level 2, Wenyuan Building, Shanghai Normal University Xuhui Campus East Side (上海师范大学徐汇校区东部文苑楼2楼). 09:00-16:00, closed on Mondays. A museum about the women who were forced into sexual slavery by the Japanese army during the Second World War. Free.
- 26 Long Museum West Bund (Dragon Art Museum West Bund Branch, 龙美术馆西岸馆), 3398 Longteng Avenue (龙腾大道3398号) (At the intersection with Ruining Road, about 750 meters east of Middle Longhua Road Station on Metro lines 7 and 12), ☏ , ✉ [email protected]. Tu-Su 10:00-18:00. The West Bund branch of the private art museum established by the wealthy collectors Liu Yiqian and his wife Wang Wei. The museum also has branches in Pudong District and the western city of Chongqing. Groups of 26 people or more are required to make a reservation at least 2 days before their visit. ¥100 (including tour guide).
The original campus of Shanghai Jiaotong University (Jiaoda) is in this area, though there are now several other campuses and the new main campus is in Minhang. The old campus has Chinese language courses for foreigners and an MBA program that is taught in the evenings, mainly for Chinese business people.
There are several schools offering Chinese cooking courses for visitors:
- The Kitchen At..., 383 Xiangyang S Rd, ☏ . A cooking school with various courses.
- Chinese Cooking Workshop. There is also a location in Pudong.
- Cook in Shanghai. Courses include a market tour to get your ingredients. The company also offers courses in crafts such as paper cutting and Chinese painting.
Lots of additions to this district, on a seemingly weekly basis. Check out the entire Xujiahui area and Times Square Huaihai Road for some of the larger malls. Creative boutiques can be found on Julu, Changle, Anfu and Xinle Roads throughout the French Concession, in addition to a high concentration of one-of-a-kind buys for sale in Tian Zi Fang northwest of the Luwan Stadium.
If you are looking for anything electronic, Xujiahui is the place to start. The Metro station is under the intersection of five roads (see photo). It has shops (mostly food or clothing) and there is at least one shopping mall on each of the five corners.
- Pacific Digital Plaza Phase 2 (red building in lower right of photo, exit 10 of the metro station), has all sorts of consumer electronics — computers, digital cameras, game consoles, MP3 players, cell phones, memory cards, and computer accessories.
- Pacific Digital Plaza Phase 1 (exit 9) is better for computer parts and for repairs or services like printer cartridge refills. The idea of shopping at "PDP-1" may appeal to hackers; that was the model designation of DEC's first computer. If you look like a foreigner then at quiet times you will probably find you are constantly called out to by shop owners, which makes browsing quite challenging.
- Grand Gateway Mall (green dome between office towers, back and left in photo, exit 12) is the most upscale of these malls, and also the best-air conditioned in the summertime. The 5th and 6th floors offer a good selection of restaurants, and there are several more at ground level behind the mall. The 5th floor also has a large bookstore with a good selection of books in English. This mall is a lot larger than it looks in the photo; that dome starts above the sixth floor, and everything below it plus corresponding floors of both towers is shopping. The basement level has more shops including a large supermarket with a good, though pricey, selection of Western groceries.
Ruijin Second Road is a tree-lined boulevard in the heart of the French Concession, where you can experience the real Shanghai longtang (a narrow alley from house to house, which is a distinctive Shanghai architecture style). Don't forget to walk down Taikang Lu into Tian Zi Fang and burrow your way into the in process gentrification of the back alleys here. Old men air their magpies in spotless, tiny cages next to top flight restaurants and cafes. Shanghai T is a great place to buy a high quality T-shirt with a smart logo, "What recession?" Tian Zi Fang's renovation is still evolving and interesting shops and restaurants are opening and closing every day. The trendy stores exist side by side with the rhythms of "old school" Shanghai life -- and any time you can catch a glimpse of that, you should feel lucky.
- Diva Life Nail & Beauty Lounge, Ruijin Er Lu (Near Jianguoxi). A 2,700 ft² house mixing Chinese and European styles. Established in 1933, this three-story complex was once the home of the Jewish wine merchant H.L. Menken.
- Ferguson Lane (in a narrow alley off Wukang Rd). A 1930s building filled with restaurants and boutique shops.
- 1 Garden Books (Near junction of Changle and Shanxi S Rds). Good selection of Chinese travel guides, aromatic coffee, and flavorful ice cream. Their monthly bric-a-brac sales are a popular local social event.
- 2 Madame Mao's Dowry, Fu Min Lu. Cultural Revolution nostalgia. Prices are stiff enough that buying here is recommended only if cheaper places do not have what you want.
- Silk qipao shops. A row of shops along Chang Le Lu, between Mao Ming Lu and Shan Xi Nan Lu specializes in silk Qi Paos (traditional Shanghai-style silk dresses), which can be made to measure. The shops are especially popular with Japanese visitors staying at the nearby Okura Garden Hotel. An alternative destination for qipao in Shanghai is along North Shaanxi Road, near the junction with West Nanjing Road, where the most famous qipao workshops of Shanghai are located, including Long Feng. There are also some other qipao stores on Maoming Road, near Huaihai Road.
- Spin Ceramics, 360 Kangding Rd (at the corner of N. Shaanxi Rd). Designer ceramics by Chinese artists but with a Japanese flavor, in a stylish minimalist space.
- Torana House (164 Anfu Rd (just west of Wulumuqi Rd)). Has Tibetan and Chinese carpets and Tibetan furniture in a contemporary gallery.
- Pottery Workshop Shanghai, 176 Fumin Lu (Between Julu Lu and Changle Lu), ☏ . 09:00-17:00. Second location for this shop. ¥50-5000.
- 3 IAPM Mall, Huaihai Middle Road 999 (Metro: Lines 1, 10 and 12, South Shaanxi Road), ☏ . 10:00-23:00. This upscale mall houses big-name boutiques, restaurants and cafes. It features an atrium famous for its large-scale decoration exhibits during (Chinese and western) holidays.
Other nearby areas:
Along Maoming South Road by the Jin Jiang Hotel there are designer shops and art galleries. Don't forget your platinum credit card.
- Vegetarian Lifestyle, 77 Song Shan Rd (In an alleyway just S of Huaihai Rd), ☏ . until 22:00 daily. A beautifully appointed modern restaurant where everything is vegetarian. You will not find much in the way of fake meat that pervades most of the other vegetarian places. Instead, you will enjoy beautifully cooked dishes from all over the country in addition to a juice bar. Beer ¥30, ¥18 lunch special.
- 1 Spicy Joint, 3/F, K. Wah Center, 1028 Middle Huaihai Rd, near Donghu Rd (淮海中路1028号, 嘉华中心嘉华坊3楼, 近东湖路), ☏ . Ridiculously popular for their cheap stylized Sichuan food, in a young and fashionable environment. The waits for a table are notorious. ¥8-30.
- Foodrepublic Food Court (Sixth floor of Metro City mall, Xujiahui). Food court with many different restaurants. Most stalls do not accept cash, so you will have to put down a deposit and add money to a Foodrepublic stored-value card; the restaurants that have their own separate seating areas take both cash and cards. You can see plastic models of all the dishes before you order them, so you will have a decent idea of what you are getting. Copious MSG (avoidable depending on what you're after; the CoCo Ichibanya Japanese curry place is safe in this regard) but mostly tasty and moderately priced.
- 2 Ah Da's Spring Onion Pancakes (阿大葱油饼), 120 Ruijin 2nd Road. Th-Tu 06:00-15:00. This humble stall moved here from another part of the French Concession. It sells hot cong you bing, and is a Shanghai establishment. Owner Wu Gencheng has been selling his creations for over 30 years. Queues can be notoriously long (up to 3 hours), and they frequently run out before closing time, so come early. ¥8.
- Amokka, Anfu Rd (W of Wulumuqi Road). Coffee bar and restaurant good for lunch stops.
- Cafe Dan, No. 41, Lane 248 Taikang Lu, near Sinan Lu 泰康路248弄41号, 近思南路 (leave Dapuqiao Metro Station (Line 9) by exit 1, and cross Taikang Street. Cafe Dan is at the back of the lane opposite the station exit). Japanese-style coffee house with very good coffee and a very nice Hokkaido cheesecake. Coffee starts around ¥40.
- 3 Cantina Agave, 291 Fumin Road, corner of Changle Road (Changshu Road metro, exit 1, turn left, walk a block, right, then another block), ☏ . Good Mexican food, beer and cocktails. They have another location in Pudong.
- Di Shui Dong (滴水洞), 56 Maoming Rd (Near Changle Rd, Metro: Shaanxi S Rd), ☏ . Fiery cuisine from Hunan, the birthplace of Mao Zedong, thus a menu full of "Mao's Shrimp", "Mao's Chicken" and such. Very popular with foreigners.
- Enoterra, 53-57 Anfu Rd (5 min walk from Changshu Road station (lines 1 and 7), 5-10 min walk from Shanghai Library station (line 10)). 10:00-02:00. Excellent wine bar specializing in Argentinean and South African wines. Food is fairly basic with cheese and meat plates and salads, but the wine is outstanding.
- Hailaogui (海老亀; Hǎilǎogūi), 41 Yandang Rd (雁蕩路41号). Cafe specialising in sweet Chinese desserts claimed to have all sorts of beneficial effects for your health, particularly dishes made with turtle. No English menu. Milk pudding with ginger is ¥10.
- Secret Garden, Changle Rd (A short distance west from Garden Books). Serving Cantonese food in pleasant surroundings. The veranda-like space near to Secret Garden has been home to a succession of restaurants, the latest incarnation being a Greek restaurant.
- Shanghai Uncle, 211 Tianyaoqiao Rd, ☏ . A famous chain of three restaurants known for Shanghai flair with some Western accents. Known for their spare ribs, smoked fish and fatty pork with garlic.
- Uighur Restaurant (维吾尔餐厅), 280 Yi Shan Rd., ☏ . Claims to be the "original" Xinjiang cuisine restaurant in the city, and probably the best-known. The typical main dishes are moderately spicy preparations of lamb, but the menu also features some rather adventurous items like camel feet, sheep's eyeballs, and a bull's you-know-what. Also music and dance shows with the obligatory audience participation.
- 4 Glo London, No. 1 Wulumuqi South Road (on the corner of Dongping Road and Wulumuqi Road, opposite the United States Consulate), ☏ , ✉ [email protected]. An English style restaurant that does familiar (for westerners) style brunch. Rather expensive, but the food tastes pretty close to what you would get in a good restaurant in England. Coffee shop and a good bakery on the ground floor (half price on many items after 21:00), a disco-ish bar on the next floor, then a grill restaurant, and finally a rooftop barbeque. Nearby metro stops: go north along Wulumuqi for Line 10, Shanghai Library, north on Hengshan Road (a short block east) for Lines 1 or 7, Changshu Road, or south on Hengshan for the Hengshan Road Line 1 stop. ¥70.
- 5 ASE Mall (Sun Moon Light Center), 33 Caobao Road (Metro: Line 1, Caobao Road. The station lies below the mall, so there is access without leaving the building). 10:00 - 22:00. A huge mall finished in 2017, hosting mostly upscale, elaborately decorated restaurants, Will's Fitness and a cinema. The building features a panoramic elevator, an inner yard, and neon signs that resemble old shopping streets.
- Casanova, far western end of Julu Rd. Very competent Italian fare.
- Le Saleya, 长乐路570号 (Between Xiangyang Rd and Shaanxi S Rd), ☏ . Closed 13:00-18:00. Neighborhood French Restaurant. Unpretentious and you can close your eyes and imagine you are on the Ile de la Cite. Prix fixe menu is ¥220 per person.
- Mesa, Julu Rd (W of Xiangyang Rd junction). Excellent and flavorful Western and fusion food accented by a fine wine list. The stunning view from the balcony is at its best in the spring and fall. Quite pricey.
- Otto, 85 Fumin Rd. A sophisticated Italian restaurant and wine bar.
- 6 Sasha's, No.11 Dongping Road (corner of Hengshan Road), ☏ . European-style food in a lovely old mansion. Large patio area. ¥200-400.
- Shintori, Julu Road. Japanese design restaurant with stunning features. Serving traditional and fusion cuisine.
- 7 Southern Belle, 433 Changle Lu. Good American food including some Tex-Mex dishes. The biscuits are disappointing, the mint juleps just fine.
Many bars, restaurants and nightclubs are clustered together in the Found 158 complex on Julu Lu. Xintiandi is another large bar / dining complex. It's possible to find good bars throughout the French Concession area.
- Wooden Paradise, No. 3, Lane 63 Fuxing Xi Lu, near Yongfu Lu (At the junction of Fuxing Lu & Yongfu Lu), ☏ . Great little cocktail bar; the entrance is at the back of the bar. Starting price ¥60.
- Boxing Cat Brewery (Boxing Cat Brewery (Yongfu Lu)), 82 Fuxing Xi Lu, near Yongfu Lu. This popular bar & restaurant provides pub grub & a decent selection of beers, including house beers. It also has an outdoor area that's popular. ¥50 for a beer.
- Senator Saloon, 98 Wuyuan Lu, near Wulumuqi Zhong Lu, ☏ . 17:00 - 02:00. A craft cocktail bar from the same people as Sichuan Citizen. Lots of choices with a particular focus on bourbon. Cocktails from around ¥70.
- 1 Jenny's Blue Bar, 7 Donghu Rd (Metro: Shanxi Rd S), ☏ . A friendly bar, run by Jenny in its present location since 2000. A wide choice of drinks, good music and sports coverage mean that it is popular both with expats and with visitors. Look out for the two cats that make themselves at home behind the bar, frequently sleeping on top of the television or the display units.
- 2 Oscar's Pub, 1377 Fuxing Lu at Baoqing Lu (Metro: Changshu Road (lines 1 or 7) and walk south past Starbucks). Daily, 11:00 to late. Mainly an expat bar, in an area with several others. Dartboard, chess on Sundays, reasonable food and a range of drinks, a Filipino band. Cheap Tiger in happy hour, 11:00-20:00.
- Shanghai Brewery (Shanghai Brewery (Donghu Lu)), 20 Donghu Lu, near Huaihai Lu, ☏ . A brew pub with western food, big-screen TVs and English-speaking staff.
- Le Café des Stagiaires (Found 158), Found 158, B1/F Julu Lu, ☏ . 12:00 until late. Popular French bar & restaurant in the sprawling Found 158 complex. Complete with French comfort food. ¥50-70.
- Daga Brewpub, 100 Fuxing Lu, near Wukang Lu. Daily 10:00 - 01:00. Busy brewpub with a wide selection of beers. The pub places an emphasis on good quality Chinese beers but also has an impressive selection from the US & elsewhere. Serves food too. From ¥45 for a beer.
- The Refinery, 181 Taicang Lu, near Huangpi Nan Lu, ☏ . 10:00-02:00. This craft beer bar & restaurant in the upmarket Xintiandi bar and dining complex is a popular choice for expats and tourists. It's just south of the fountain in Xintiandi. From ¥50 for a drink.
- Asset Hotel Shanghai, 590 S Wanping Rd, Xuhui District. The hotel is close to well-known places such as Xujiahui and Zhaojiabang Rd. All rooms are air-conditioned and have their own private bathroom. Services include 24-hour front desk and room services, and free shuttle bus is offered to guests who want to explore the rest of the district. ¥250-600.
- Blue Mountain Youth Hostel (上海蓝山国际青年旅舍), 2F Bldg #1, 1072 Nong, Quxi Rd, Luwan District (卢湾区瞿溪路1072弄1号甲二楼) (Just opposite Luban Rd Metro Stn (Line 4)), ☏ , fax: . On the street just outside you will find several restaurants, small supermarkets, and fruitshops. The staff is helpful and speaks good English. Dorms 8/4-6 persons ¥40/¥50, Double/triple with bath ¥160/¥220.
- Shanghai Bin Guan (上海宾馆), 柳州路2号 (Across the street from Shanghai South Station), ☏ . If you took a bus to Shanghai South Station, this is a decent hotel within walking distance; you have to go through the underpass. In-room massage is available if you want it. It is very far south of the French Concession: two subway stops or a ¥30 taxi from Xujiahui. ¥200-300.
- Anting Villa Hotel, 46 Anting Rd, Xuhui District, ☏ . Offers guestrooms equipped with air-conditioning, cable television (with lots of English channels), and an IDD phone. It is a must to dine at their very own Maple Leaves Restaurant, which serves Shanghainese and Cantonese cuisines all day. It is in a beautiful, sleepy Concession neighborhood, down the road from the popular bar Cotton's, and close to subway lines 1, 7 and 9. The hotel's design also sees it used for many special occasions, such as weddings in its gardens.
- Hua Ting Hotel, 1200 N Caoxi Rd. A 5-star hotel that is also not far away: take Metro Line 1 for one stop and get off at Shanghai Stadium; the hotel is close to the metro station. It is not in the French Concession, but one stop or ¥20 by taxi away.
- Jianguo Hotel, 439 Caoxi Rd, Xuhui District, ☏ . Offers 454 air-conditioned guestrooms, all of which have high-speed Internet, color TV, mini-bar, and room safe. Some of its amenities include fitness center/gym, sauna, and an indoor pool. Restaurants include the CTC Restaurant that is known for its shark's fin, abalone, and other upscale Chinese delicacies; Shanghai Restaurant that serves authentic Shanghai dishes during lunch and dinner; and Yuliu Restaurant that serves Korean fare. It is in Xujiahui but several blocks south of the actual former Concession.
- Jin Jiang Tian Cheng Hotel, 585 Xujiahui Rd, The East Building, Jin Yu Lan Sq. Four-star business hotel with 148 guestrooms. Business and event facilities include a business center and event and conference venues of varying sizes. Leisure offerings include indoor swimming pool, disco and karaoke center. It is just south of the French Concession.
- Patina Court, No.68 Luban Road,Luwan District, ☏ . All rooms equipped with air-conditioning, cable TV and internet connection. Some of its facilities and services are Swimming pool, sauna, massage service, Room service, bar, cafe, Meeting facilities and business center. From ¥998.
- San Want Hotel, 650 Yishan Rd, Xuhui District, ☏ . Four-star hotel with 383 rooms with LCD TV, Japanese-style bathtub, refrigerator, and high-speed Internet access. Conference rooms, banquet hall and business center available. It is to the far southwest of the French Concession.
- 1 Somerset Xu Hui Shanghai (上海徐汇盛捷服务公寓), No 888 Shaanxi Nan Road, Xu Hui district, ☏ , ✉ [email protected]. Each of the 167 residences at Somerset Xu Hui is furnished with fully-equipped kitchens, home entertainment system and broadband Internet access.
- Sports Hotel, No 15 Nandan Rd, Xuhui District, ☏ . Four-stars. Outside the hotel are nearby tourist spots such as Shanghai Museum, People’s Square, and Yu Garden. Their guestrooms are equipped with air conditioning, a 25-inch cable TV, and electronic door locks. Some of their offered facilities includes an indoor swimming pool, gymnasium, sauna, and function and business rooms. It is halfway between Shanghai Stadium and Xujiahui Metro Station. Best rates on official website start at ¥553+.
- Andaz Shanghai, 88 Songshan Road (Xintiandi), ☏ , ✉ [email protected]. Andaz are Hyatt's boutique hotels; this is the first one in Asia.
- 2 Ascott Huai Hai Road Shanghai (上海雅诗阁淮海路服务公寓), No 282, Huaihai Road Central, Luwan district, ☏ , ✉ [email protected]. The serviced residence is in the Luwan District near Xintiandi. It offers studio, one-bedroom and two-bedroom apartments, each with its own kitchen, washing machine, home entertainment system and modern furnishings.
- Garden Avenue Hotel, 689 Old Humin Rd, Xuhui District. 5-star business hotel nowhere near Shanghai’s central business district. It has 218 rooms, fully-equipped banquet halls, and dining and recreational facilities. It is next to the South Railway Station and a ¥30 taxi ride away from Xujiahui.
- Hengshan Moller Villa, No 30 Shanxi Rd S Jing. Housed in a stunning piece of 1936 architecture. Rooms are frequently booked out so be sure to try and get one of the main rooms which contain wood panelling or business rooms that are much larger and come with a huge balcony that overlooks the garden.
- Hengshan Picardie Hotel, 534 Hengshan Road, Xuhui District, ☏ , fax: . A classic Art Deco-inspired hotel in the heart of French Concession. The hotel offers 254 rooms and dining and leisure facilities. Business amenities include a fully-equipped business center and venues such as Kaixuan Palace and Blossom Hall.
- 3 The Kunlun Jing An (previously Hilton Shanghai), 250 Huashan Lu, ☏ , fax: . Check-in: 14:00, check-out: 12:00. While still popular with business travelers for its recognizable name, the Hilton Shanghai has lost some of its luster, although it's still considered a top-end hotel in this area. Rooms are spacious and modern, and the staff are friendly, but the Hilton needs to step up its game.
- Jin Jiang Hotel, Luwan District, ☏ . Offers 434 luxurious rooms, all of which have individually controlled air-conditioning, mini-bar, color TV with cable channels, DVD player, and refrigerator. Some of its amenities include fitness center, beauty salon, spa, sauna, and massage service. Best rates on official website start at ¥1280.
- Jin Jiang Tower, 161 Changle Rd, Luwan District, ☏ , fax: , (reservations), ✉ [email protected]. Five star hotel on Changle Road. ¥1025.
- Lanson Place Jinlin Tiandi Residences Shanghai, No 3, Lane 168 Xingye Rd, Luwan District, Shanghai 200020, China, ☏ . Offers 3-bedroom residences with engaging views, either over the alleyways of the Xintiandi quarter or a tranquil lake. All apartments are equipped with air-conditioning, cable TV, fully equipped kitchen, and Internet connection. Some of its amenities include a home theater system, sauna and steam rooms, and a fully equipped gymnasium. Best rates on official website start at ¥2200.
- Mason Hotel, 935 Central Huaihai Rd, Luwan, French Concession, ☏ . This boutique hotel is cozy and comfortable and great for quiet nights sleep. All rooms have internet access and there's a 2nd floor lounge, a rooftop beer garden, a variety of restaurants and a Starbucks on the ground floor.
- Okura Garden Hotel, 58 Maoming Rd S, Luwan District, ☏ , fax: , ✉ [email protected]. Check-in: 14:00, check-out: 12:00. 5-star hotel built around the former French Club and its garden from 1926. Many art-deco decorations from that time are still visible in the hotel, especially in the Ball Room. Very central location along Huai Hai Road, with good accessibility and a subway stop (South Shaanxi Road, Line 1) next to the hotel grounds.
- Old House Inn, No 16, Lane 351, Huashan Rd, ☏ . With only a dozen rooms, it's advisable to book ahead to secure one of these charming rooms, furnished in traditional Chinese style. All rooms have internet access, and breakfast (included) is served in an outdoor courtyard.
- Royal Court Hotel Shanghai, Lane 622, 7 Middle Huaihai Rd, Luwan District. A five star business hotel. Air-conditioned rooms, conference halls, restaurant and bar, gym and sauna. Easily accessible from S Shanxi Rd Stn and Nanjing Rd.
- Intercontinental Shanghai Ruijin (上海瑞金洲际酒店), 118 Rui Jin 2 Rd. Villa style hotel with picturesque surroundings. All of the villas here, 5 in total, are designed in French style, some harking back to the start of the 20th century. The main building is particularly nice with a grand colonial vista. The hotel is situated in a park-like compound with small bridges, ponds, pavilions and marble fountains.
- Shanghai New-Westlake Hotel, No 22, Lane 133, Mao Ming South Road, Luwan District, ☏ . It is 5 km from the railway station. It offers 20 air-conditioned rooms with cable TV, free broadband Internet access, a private toilet, and a room safe.
- 4 Pullman Shanghai Skyway, 15 Dapu Rd, Luwan District (Subway station: Dapuqiao), ☏ , toll-free: , ✉ [email protected]. Check-in: 14:00, check-out: 12:00. The hotel impresses the most with its sheer size - over 50 floors, crowned by a gargantuan canopy, make it a visible landmark from much of southern Shanghai. The smallest rooms in the hotel are almost 50 sqm in size. On the inside, it is getting quite long in the tooth and gradual renovations slowly bring it in line with more modern Pullmans, but all of the features you'd expect of one are here, including a swimming pool, spa, gym, and health club. The upper room categories come with access to a generous Executive Lounge. Starting from ¥730.
- 5 Langham Xintiandi, 99 Madang Road, Xintiandi, Huangpu, ☏ . Luxury hotel right next to the Xintiandi old town. ¥3000.
With the seemingly constant increase of nuclear threats, be they accidental [Fukushima] or deliberate [North Korea], there has been a resurgence in Geiger counter and radiation detector sales. Let's take a look at how classic Geiger counters compare to a modern radiation detector.
Radiation and Radioactivity
In the event of a nuclear weapon attack, there will be a brief period of extremely high thermal radiation [heat] that is reduced greatly within seconds following the blast. This is accompanied by high levels [thousands of REMs] of cell-destroying ionizing radiation [X-rays, γ-rays, α-radiation, β-radiation, or neutrons] that can last hours, days and weeks following the blast. In the event of a nuclear accident, the radioactive fallout can last much longer [decades, or more] since the meltdown is usually at or near ground level, and radioactive fuels are not expended in a high-efficiency, weaponized air burst.
Ionizing radiation is essentially the emission of high-energy particles (such as electrons, neutrons, and alpha particles) or photons by a radioactive source. These move at up to nearly the speed of light, and act like tiny cannon balls that pierce your body and permanently destroy cell structures and DNA strands. Since ionizing radiation is invisible, and it can take days, weeks or even years to “feel” its harmful effects, we need special instruments to detect it. These radiation detectors are loosely referred to as “Geiger counters”, named for the Geiger–Müller tube or G-M tube that is part of the detection circuitry on devices such as the Civil Defense instrument below.
Old “Civil Defense” Geiger counters from the Cold War can be found online and in mil-surp shops for relatively cheap prices. They are certainly better than nothing, but there are a few things you need to know before you buy one.
First, the true Geiger counters are models CDV-700 and CDV-718. The CDV-700 does not work well in a high-level scenario as the G-M tube becomes saturated and gives false, low readings. The CDV-718 is a great, all-around Geiger counter with a wider detection range and rugged, durable construction.
There are also ionization chamber survey meters such as models CDV-715, CDV-717 or CDV-720 that detect higher levels of radiation. These units are sensitive to moisture, so varying storage conditions over the decades will weigh heavily on how accurate these units are and how often they must be re-calibrated. Also, they must be calibrated by a certified entity, as they do not have a simple self-calibration function like modern, digital meters.
Overall, I feel that these devices are better used as collectors items rather than for a real-world nuke/SHTF scenario. They are bulky, heavy and unwieldy for a bug out situation. Also, they are somewhat limited in their detection spectrum, so you will need more than one to cover all of your bases. The difficulties of calibration are also an area of concern. They do have one thing going for them though, their military roots mean that they are built like a Soviet tank and can take a beating.
Modern Radiation Detectors
There is also a multitude of newer consumer-grade radiation detectors with a wide variance in quality and performance. GQ Electronics has been producing high-quality, affordable radiation detectors for years now, and the GMC-500+ [Plus] is no exception, so this was the unit we chose for our comparison.
The GMC-500+ is a dual-tube detector, and has a wide detection range that reads beta, gamma and X-ray radiation. It has tremendous battery life with a rechargeable and removable 2600 mAh 18650 lithium cell [this unit stayed on for 7 continuous days before it died]. It also has built-in Wi-Fi and a data logger for storing and capturing field test data for later analysis with the included data viewer software.
Here are some more specs from the manufacturer:
- Radiation Detection: Beta, Gamma and X-Ray
- Detection Dose Range: 0 ~ 42500 uSv/h
- Detection CPM Range: 0 ~ 982980 CPM
- LCD Display: Dot matrix with back light
- Power: 3.6V/3.7V battery / USB power
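To put the CPM and dose-rate ranges listed above in perspective, here is a minimal sketch of how a raw counts-per-minute reading is turned into an approximate dose rate. The conversion factor is tube- and isotope-dependent; the value used below (0.0065 µSv/h per CPM, roughly typical for an M4011-class tube calibrated against Cs-137) is an illustrative assumption, not the GMC-500+ factory calibration.

```python
# Illustrative CPM -> dose-rate conversion. The factor below is an assumption
# for this sketch, not the manufacturer's calibration for the GMC-500+.
CPM_TO_USV_H = 0.0065  # assumed µSv/h per CPM

def dose_rate_usv_h(cpm: float, factor: float = CPM_TO_USV_H) -> float:
    """Approximate ambient dose rate in µSv/h from a raw CPM reading."""
    return cpm * factor

if __name__ == "__main__":
    for cpm in (25, 100, 1000):  # typical background up to an elevated reading
        print(f"{cpm:>5} CPM  ->  {dose_rate_usv_h(cpm):6.2f} µSv/h")
```

At around 25 CPM this gives roughly 0.16 µSv/h, which is in the ballpark of normal background for a tube of this type.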
The compact size and weight of this unit makes it notably better suited for carrying around. It will likely require a small carry case as the unit is not as robust as the old Civil Defense models. You may also want to include a couple of Ziploc bags for use in the rain, as it is not water resistant either. However, with a modern battery and USB power, you can use it in combination with portable solar chargers and power banks for greatly extended usage.
Electromagnetic Pulse [EMP]
Bear in mind that whatever unit you buy should be stored in a Faraday Cage to protect it from an Electromagnetic Pulse [EMP]. Powerful and wide-reaching EMP’s are emitted during high-altitude nuclear blasts. You can protect your critical devices by placing them in a fully-enclosed conductive metal container with a fully insulated interior [top, bottom and sides] to prevent physical contact with the metal container. This can be done cheaply using a steel pot, aluminum foil, cardboard and some small bungees. | 1 | 2 |
The globalisation of mental illness
The burden (mortality and disability) caused by mental disorders across the globe is on the rise. Psychiatric services for treating mental health difficulties are well established in high-income countries such as the US and UK; and the World Health Organization has supported the setting up of similar services in low- and middle-income countries (LMIC). But is the globalising of psychiatric systems of diagnosis and treatment the most appropriate line of action? This article critically reflects on biomedical explanations of mental health difficulties; highlights concerns about the dearth of research into mental health difficulties in LMIC; discusses the lack of emphasis that psychiatry places on cultural factors; and raises the possibility that globalising notions of psychiatric illness may cause more harm than good.
There are huge inequalities in the availability of resources to support mental health needs across the globe; it is estimated that greater than 90 per cent of global mental health resources are located in high-income countries (WHO, 2005). This is all the more alarming when we consider that around 80 per cent of the world’s population live in low- and middle-income countries (LMIC: Saxena et al., 2006). In countries in Africa, Latin America, and south/south-east Asia under 2 per cent (and often less than 1 per cent) of expenditure on health tends to go to services for psychiatric conditions (compared to over 10 per cent in the USA) (Kleinman, 2009). And there is a gathering mental health storm: it is projected that by 2030, depression will be the second biggest cause of disease burden across the globe (Mathers & Loncar, 2006), second only to HIV/AIDS.
When four out of five people in LMIC who need services for mental, neurological and/or substance-use disorders do not receive them (WHO, 2008), we have a clear ‘treatment gap’ – the difference between the levels of mental health services required by LMIC populations and what is actually available on the ground. Prominent clinicians and academics, as well as international organisations such as the World Health Organization (WHO, 2008, 2010), have called for the ‘scaling-up’ of services for mental health in LMIC. Scaling-up involves increasing the number of people receiving services; increasing the range of services offered; ensuring these services are evidence-based, using models of service delivery that have been found to be effective in a similar contexts; and sustaining these services through effective policy, implementation and financing (Eaton et al., 2011). Yet in light of the limited resources available to support mental health, it is pertinent to ask whether it makes sense to try to export systems of service delivery that have been developed in high-income countries to LMIC. Will this venture be sustainable in the longer term? More importantly, will these systems actually deliver added value for the increase in budgetary expenditure that will be required?
The seductive allure of biological psychiatry
In high-income countries mental health services tend to gravitate around psychiatry; the branch of medicine that is concerned with the study and treatment of mental illness, emotional disturbance, and abnormal behaviour. Biological psychiatry is an approach to psychiatry that aims to understand mental illness in terms of the biological function of the nervous system.
The rise of biological psychiatry promised great things. Biological explanations of mental illness permeated the public consciousness, and the hunt was on to discover the magical compounds that could redress the chemical imbalances that were purported to cause mental illness. Various different medications have been developed and the marketed. In the past 40 years the sales of psychotropic medications have increased dramatically. Yet despite the exponential rise in sales of these medications, the evidence for biological causes for mental illnesses such as depression and schizophrenia remain fairly weak (Nestler et al., 2002; Stahl, 2000). The continued absence of definitive evidence to support biological processes that are causal in mental illness has led to the suggestion that biological psychiatry is ‘a practice in search of a science’ (Wyatt & Midkiff, 2006). Despite these concerns, biological psychiatry continues to exert a strong influence on the delivery of mental health services in high-income countries such as the UK and America.
The seductive allure of the rationale underlying biological psychiatry is plain to see. If mental illnesses were to have universal biological causes, then standard treatments could be readily applied across the world irrespective of local differences and associated cultural differences. If evidence-based practices lead to positive outcomes in high-income countries, then similar positive outcomes will be observed in LMIC. Right?
This is where the picture gets a bit more complicated. Before we can answer this question we need to be clear on what we mean by: (1) ‘evidence-based practices’ and (2) ‘positive outcomes’. What is considered to be ‘evidence-based practice’ can serve powerful economic and political interests (Kirmayer & Minas, 2000). In 2007, US citizens alone spent £25 billion on antidepressants and antipsychotics (Whitaker, 2010). All this in spite of the fact that claims about drug effectiveness are at times overstated, and that pharmaceutical companies have been found to employ questionable research methodologies (Glenmullen, 2002; Valenstein, 1998; Whitaker, 2010). Professor David Healy (Psychiatrist, University of Cardiff) has stated that a ‘large number of clinical trials done are not reported if the results don’t suit the companies’ sponsoring (the) study’ (tinyurl.com/dxlh55w). The evidencebase is heavily skewed towards research conducted in high-income countries.
Since producing hard evidence dependson the costly standards of psychiatric epidemiology and randomised clinical trials, it can be difficult for clinicians or researchers in LMIC to contribute to the accumulation of knowledge (Kirmayer, 2006). The lack of mental health related research being conducted in LMIC countries is evident in the finding that over 90 per cent of papers published in a three-year period in six leading psychiatric journals came from Euro-American countries (Patel & Sumathipala, 2001). An inductive, bottom-up approach to research emphasising the importance of local conceptualisations of mental health difficulties and focusing on local priorities in different LMIC is required.
Even if the research capacity in LMIC can be increased, difficulties remain. The issue of what constitutes ‘positive outcomes’ in relation to mental illness has plagued clinical practice and research for many years. There is currently no accepted consensus on what constitutes positive outcome for individuals with mental illness. Traditionally, psychiatry has been concerned with eradicating symptoms of mental illness. However, it is important to appreciate that clinical symptoms do not improve in parallel with social or functional aspects of service users’ presentation (Liberman et al., 2002). Functional outcome relates to variables such as cognitive impairment, residential independence, vocational outcomes, and/or social functions (Harvey & Bellack, 2009). In this sense, using symptomatic remission as an indicator of recovery can yield better rates of good outcome than using indicators of functional recovery (Robinson et al., 2005).
Another important consideration relating to outcome in mental illness relates to the extent to which particular outcomes are culturally sensitive and inclusive (Vaillant, 2012). Marked disparities have been highlighted between ethnic minority groups and white people in outcome, service usage and service satisfaction (Sashidharan, 2001). The lack of culturally inclusive understandings of positive outcome in mental illness is compounded by the underrepresentation of black and minority ethnic groups in mental health related research. This has led to some concluding that there is a lack of adequate evidence supporting the use of ‘evidenced-based’ psychological therapies with individuals from black and minority ethnic populations (Hall, 2001). Considering these issues, it seems that the jury is in no position to deliver a verdict on whether ‘evidence-based’ practices for mental illness developed in high-income countries deliver positive outcomes in LMIC.
Diagnosis and culture
Despite the question marks that remain about the causes of mental illness, the veracity of the evidence base, what constitutes good outcome, and how inclusive mental health services are to cultural diversity within the population, the psychiatry-heavy perspective has a powerful say in how mental health difficulties are understood in LMIC. Dissenting voices have questioned the wisdom of this approach. One particular source of dissention relates to the process of psychiatric diagnosis. The international classification systems for diagnosing mental illnesses (such as depression and schizophrenia) have been criticised for making unwarranted assumptions that these diagnostic categories have the same meaning when carried over to a new cultural context (Kleinman, 1977, 1987). This issue has potentially been obscured by the fact that the panels that finalise these diagnostic categories have been criticised for being unrepresentative of the global population. Of the 47 psychiatrists who contributed to the initial draft of the most recent World Health Organization diagnostic system (ICD-10: WHO, 1992), only two were from Africa, and none of the 14 field trial centres were located in sub-Saharan Africa. Inevitably this led to the omission of conditions that had been described for many years in Africa (Patel & Winston, 1994), such as ‘brain fag syndrome’. (This was initially a term used almost exclusively in West Africa, generally manifesting as vague somatic symptoms, depression and difficulty concentrating, often in male students.)
ICD-10 does at least acknowledge that there are exceptions to the apparent universality of psychiatric diagnoses by including what are called culture-specific disorders. One such example is koro; a form of genital retraction anxiety which presents in parts of Asia. Prior to ICD-10 symptom presentations such as koro tended to be subsumed into existing diagnoses such as delusional disorder (Crozier, 2011). But the inclusion of culture-specific disorders only serves to perpetuate a skewed view of the impact of culture on mental health; ‘cultural’ explanations seem to be reserved for non-Western patients/populations that show koro(-like) syndromes, and not for diagnoses that are more prevalent in high-income countries (e.g. anorexia nervosa). Indeed it has been suggested that many psychiatric conditions described in these diagnostic manuals (such as anorexia nervosa, chronic fatigue syndrome) might actually be largely culture-bound to Euro-American populations (Kleinman, 2000; Lopez & Guernaccia, 2000). Because people living in ‘Western’ countries tend to see the world through a cultural lens that has been tinted by psychiatric conceptualisations of mental illness, they are blind to how specific to ‘Western’ countries these conceptualisations actually are.
Culture has been defined as ‘a set of institutional settings, formal and informal practices, explicit and tacit rules, ways of making sense and presenting one’s experience in forms that will influence others’ (Kirmayer, 2006, p.133). Interest in the potential interplay between culture and mental illness first arose in colonial times as psychiatrists and anthropologists surveyed the phenomenology and prevalence of mental illnesses in newly colonised parts of the world. This led to the development of a new discipline called transcultural psychiatry, a branch of psychiatry that is concerned with the cultural and ethnic context of mental illness.
In its early incarnation, transcultural psychiatry was blighted by the racist attitudes that prevailed at that time about the notion of naive ‘native’ minds. However, over time this began to change as people began to understand that psychiatry was itself a cultural construct. In 1977 Arthur Kleinman proposed a ‘new cross-cultural psychiatry’ that promised a revitalised tradition that gave due respect to cultural difference and did not export psychiatric theories that were themselves culture-bound. Transcultural (or cross-cultural) psychiatry is now understood to be concerned with the ways in which a medical symptom, diagnosis or practice reflects social, cultural and moral concerns (Kirmayer, 2006).
Tensions exist in transcultural psychiatry. Clinicians, who are motivated to produce good outcomes for service users, may work from the premise that there is cross-cultural portability of psychiatric or psychological theory and practice. Although well intended, this approach can be met with disapproval from social scientists who are focused on advancing medical anthropology as a scholarly discipline. However, it is becoming clear that in this era of rapid globalisation, mental health practitioners, social scientists and anthropologists need to come together and engage in constructive dialogue aimed at developing cross-cultural understanding about how best to meet the mental health needs of people across the globe.
The need for interdisciplinary working in promoting improved understanding about the interplay between culture and mental illness has been demonstrated by a growing body of evidence indicating that exporting Western conceptualisations of mental health difficulties into LMIC can have a detrimental impact on local populations. Ethan Watters’ book Crazy Like Us cites examples from different parts of the world (including China, Japan, Peru, Sri Lanka and Tanzania) where the introduction of psychiatric conceptualisations of mental illness has potentially changed how distress is manifested, or introduced barriers to recovery (e.g. the emergence of expressed emotion in the families of individuals with psychosis in Tanzania). Watters (2010) cites the work of Gaithri Fernando, who has written extensively about the aftermath of the tsunami that struck Sri Lanka in December 2004. Fernando claims that ‘Western’ conceptualisations of trauma and the diagnostic criteria for post-traumatic stress disorder (PTSD) were not appropriate for a Sri Lankan context. Fernando found that Sri Lankan people were much more likely to report physical symptoms following distressing events. This was attributed to the observation that the notion of a mind–body disconnect is less pronounced in Sri Lanka. Sri Lankans were also more likely to see the negative consequences of the tsunami in terms of the impact it had on social relationships. Because Sri Lankan people tended not to report problematic reactions relating to internal emotional states (e.g. fear or anxiety), the rates of PTSD following the tsunami were considerably lower than had been anticipated. Fernando concluded that Western techniques for conceptualising, assessing and treating the distress that people were experiencing were inadequate.
Watters also explores the way in which understanding about depression has changed in Japan over the last 20 years. This sobering tale allows Watters to explore how the interplay between cultural factors and notions of mental illness can be manipulated for financial gain. In the 1960s Hubert Tellenbach had introduced the notion of a personality type called Typus melancholicus. This idea heavily influenced psychiatric thinking in Japan. Typus melancholicus had substantial congruence with a respected personality type in Japan; ‘those who were serious, diligent and thoughtful and expressed great concern for the welfare of others… prone to feeling overwhelming sadness when cultural upheaval disordered their lives and threatened the welfare of others’ (Watters, 2010; p.228). Although at the end of the 20th century there had been a psychiatric term in the Japanese language for depression (utsubyô), this tended to relate to a rare and very debilitating condition. Prior to 2000 there had been no real market for prescribing antidepressant medications in Japan. However, shifting public perception about Typus melancholicus closer toward the Western conceptualisation of depression would have huge implications for antidepressant prescribing in Japan. Watters (2010) claims that GlaxoSmithKline’s enthusiasm to build a market for its new antidepressant medication in Japan dovetailed conveniently with a GlaxoSmithKline-sponsored ‘international consensus group’ of experts on cultural psychiatry, which discussed cross-cultural variations in depression (Ballenger et al., 2001) and concluded that depression was vastly underestimated in Japan. Depression is now conceptualised in Japan as affecting individuals (particularly men) who are too hard-working and have over-internalised the Japanese ethic of productivity and corporate loyalty. In the last few years, the market for antidepressants in Japan has grown exponentially. An important consequence of this ‘aggressive pharmaceuticalisation’ is that psychological and social treatments for depression are being ditched (Kitanaka, 2011).
Globalisation of mental health
There is a growing willingness to explore ways of addressing inequalities in the provision made for mental illness across the globe, but translating this willingness into effective action is fraught with potential danger. We must guard against assumptions that indigenous concepts of mental health difficulties in LMIC and the strategies used in these contexts to deal with them are based on ignorance (Summerfield, 2008). Despite the apparent sophistication of laws, policies, services and treatments for mental illness in high-income countries, outcomes for individuals with mental health problems may not actually be any better than in LMIC. Research has failed to conclusively show that outcomes for complex mental illnesses (such as psychosis) in high-income countries are superior to outcomes in LMIC (where populations may not have had access to medication-based treatments) (Alem et al., 2009; Cohen et al., 2008; Hopper et al., 2007). The lack of academic and political engagement with alternative non-Western perspectives means that ‘Western’ narratives about ‘mental illness’ continue to dominate over local understanding (Timimi, 2010), yet we in high-income countries have much to learn about mental health provision; particularly in relation to promoting inclusion of black and ethnic minority members of the population.
To conclude, I would like to come back to the title. Rather than the globalisation of mental illness, perhaps what we should be aiming for is the globalisation of mental health. This is an immensely more inclusive aspiration. By promoting global mental health there is the potential for clinicians, academics, service users and policy makers from across the world to work together with a shared purpose. By exchanging knowledge, LMIC can benefit from hard lessons learned in high-income countries, and high-income countries can look afresh at how mental health difficulties are understood and treated. It will be important for clinicians and academics working in high-income countries to critically reflect on their own practice and question the accepted wisdom about mental health provision.
To assist with this knowledge exchange, a new MSc Global Mental Health programme has been launched at the University of Glasgow. Global mental health has been defined as the ‘area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide’ (Patel & Prince, 2010). The programme seeks to develop leaders in mental health who can design, implement and evaluate sustainable services, policies and treatments to promote mental health in culturally appropriate ways across the globe. Global mental health is an emergent area of study. Momentum is building. Although the challenges are both numerous and complex, the prize is a worthy one. The cost of not acting can be counted in the ever-increasing number of people whose lives are being affected by mental health problems across the globe.
is in the Institute of Health and Wellbeing at the University of Glasgow
Alem, A., Kebede, D., Fekadu, A. et al. (2009). Clinical course and outcome of schizophrenia in a predominantly treatment-naïve cohort in rural Ethiopia. Schizophrenia Bulletin, 35, 646–654.
Ballenger, J.C., Davidson, J.R.T., Lecrubier, T. et al. (2001) Consensus statement on transcultural issues in depression and anxiety from the International Consensus Group on Depression and Anxiety. Journal of Clinical Psychiatry, 62, 47–55.
Cohen, A., Patel, V., Thara, R. et al. (2008). Questioning an axiom: Better prognosis for schizophrenia in the developing world? Schizophrenia Bulletin, 34, 229–244.
Crozier, I. (2011). Making up koro: Multiplicity, psychiatry, culture, and penis-shrinking anxieties. Journal of the History of Medicine and Allied Sciences, 67, 36–70.
Eaton, J., McCay, L., Semrau, M. et al. (2011). Scale up of services for mental health in low-income and middle-income countries. Lancet. doi:10.1016/S0140-6736(11)60891-X.
Glenmullen, J. (2002). Prozac backlash: Overcoming the dangers of Prozac, Zoloft, Paxil and other antidepressants with safe, effective alternatives. New York: Simon & Schuster.
Hall, G.C. (2001). Psychotherapy research with ethnic minorities: Empirical, ethical and conceptual issues. Journal of Consulting and Clinical Psychology, 69, 502–510.
Harvey, P.D. & Bellack, A.S. (2009). Toward a terminology for functional recovery in schizophrenia: Is functional remission a viable concept? Schizophrenia Bulletin, 35, 300–306.
Hopper, K., Harrison, G., Janka, A. & Sartorius, N. (Eds.) (2007). Recovery from schizophrenia: An international perspective. Oxford: Oxford University Press.
Kitanaka J. (2011). Depression in Japan: Psychiatric cures for a society in distress. Princeton, NJ: Princeton University Press.
Kirmayer, L.J. (2006). Beyond the ‘new cross-cultural psychiatry’: Cultural biology, discursive psychology and the ironies of globalization. Transcultural Psychiatry, 43, 126–144.
Kirmayer, L.J. & Minas, I.H. (2000). The future of cultural psychiatry: An international perspective. Canadian Journal of Psychiatry, 45, 438–446.
Kleinman, A.M. (1977). Depression, somatization and the ‘new cross-cultural psychiatry’. Social Science and Medicine, 11, 3–10.
Kleinman, A.M. (1987). Anthropology and psychiatry: The role of culture in cross-cultural research on illness. British Journal of Psychiatry, 151, 447–454.
Kleinman A. (2000). Social and cultural anthropology: Salience for psychiatry. In M.G. Gelder, J.J. Lopez-Ibor & N.C. Andreasen (Eds). New Oxford textbook of psychiatry. Oxford: Oxford University Press.
Kleinman A. (2009). Global mental health: A failure of humanity. Lancet, 374, 603–604.
Liberman, R.P., Kopelowicz, A., Ventura, J. & Gutkind, D. (2002). Operational criteria and factors related to recovery from Schizophrenia. International Review of Psychiatry, 14, 256–272.
Lopez, S.R. & Guernaccia, P.J. (2000). Cultural psychopathology: Uncovering the social world of mental illness. Annual Review of Psychology, 51, 571–598.
Mathers, C.D. & Loncar, D. (2006). Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med, 3, e442.
Nestler, E.J., Barrot, M., DiLeone, R.J. et al (2002). Neurobiology of depression. Neuron, 34, 13–25.
Patel, V. & Prince, M. (2010) Global mental health – A new global health field comes of age. JAMA, 303, 1976–1977.
Patel, V. & Sumathipala, A. (2001) International representation in psychiatric literature: Survey of six leading journals. British Journal of Psychiatry, 178, 406–409.
Patel, V. & Winston M. (1994). The ‘universality’ of mental disorder revisited: Assumptions, artifacts and new directions. British Journal of Psychiatry, 165, 437–440.
Robinson, D.G., Woerner, M.G., Delman, H.M. & Kane, J.M. (2005). Pharmacological treatments for first-episode schizophrenia. Schizophrenia Bulletin, 31, 705–722.
Sashidharan, S.P. (2001). Institutional racism in British psychiatry. Psychiatric Bulletin, 25, 244–247.
Saxena, S., Paraje, G., Sharan, P. et al. (2006). The 10/90 divide in mental health research: Trends over a 10-year period. British Journal of Psychiatry, 188, 81–82.
Stahl, S.M. (2000). Four key neurotransmitter systems. In S.M. Stahl (Ed.) Psychopharmacology of antipsychotics (pp.3–13). London: Martin Dunitz.
Summerfield, D. (2008). How scientifically valid is the knowledge base of global mental health? British Medical Journal, 336, 992–994.
Timimi, S. (2010). The McDonaldization of childhood: Children’s mental health in neo-liberal market cultures. Transcultural Psychiatry, 47, 686–706.
Vaillant, G.E. (2012). Positive mental health: Is there a cross-cultural definition? World Psychiatry, 11, 93–99.
Valenstein, E.S. (1998). Blaming the brain: The truth about drugs and mental health. New York: Free Press.
Watters, E. (2010). Crazy like us: The globalization of the American psyche. New York: Free Press.
Whitaker, R. (2010). Anatomy of an epidemic. New York: Crown.
World Health Organization (1992). International statistical classification of diseases and related health problems (ICD-10). Geneva: WHO.
World Health Organization (2005). Mental health atlas 2005. Geneva: WHO.
World Health Organization. (2008). Mental health Gap Action Programme (mhGAP): Scaling up care for mental, neurological and substance abuse disorders. Geneva: WHO.
World Health Organization (2010). mhGAP intervention guide for mental, neurological and substance use disorders in non-specialized health settings: Mental health Gap Action Programme (mhGAP). Geneva: WHO.
Wyatt, W.J. & Midkiff, D.M. (2006). Biological psychiatry: A practice in search of a science. Behaviour and Social Issues, 15, 132–151.
TARANIS (Tool for the Analysis of RAdiations from lightNIngs and Sprites)
TARANIS is a CNES microsatellite mission, proposed by LPCE (Laboratoire de Physique et de Chimie de l'Environnement), and CEA (Commissariat a l'Energie Atomique) of France in collaboration with institutions from USA, Denmark, Japan, the Czech Republic and Poland. The TARANIS mission is devoted to the study of transient event energetic mechanisms that generate transient luminous emissions and gamma ray flashes in the terrestrial atmosphere above the thunderstorm areas. These emissions are a manifestation of a coupling between atmosphere, ionosphere and magnetosphere via intense transitory processes implying avalanching relativistic electrons with energies up to 30 MeV.
Note: In ancient mythology, TARANIS was the Gallic god of thunder and lightning.
• Identification and characterization of all possible emissions:
- TLEs (Transient Luminous Events) including sprites, jets, elves, and halos
- TGFs (Terrestrial Gamma-ray Flashes)
• Global mapping and occurrence rates, relation of TLEs, TGFs, associated electromagnetic emissions and high energy electrons
• Characterization of the parent lightning that causes TLEs and TGFs and precipitates electrons, investigation of WPIs (Wave-Particle Interactions) leading to precipitated [LEP (Lightning-induced Electron Precipitation)] and accelerated (runaway) electrons, effects on the radiation belt of low altitude sources, tracking of the variability of the radiation belts from electron and wave measurements, effects on thermospheric parameters (ionization rate, NOx, O3).
• Determination of the nature of the triggering processes and the source mechanisms
• Study of the explosive dissipation of energy in the ionosphere and magnetosphere.
The project is in Phase B as of 2009. The PDR (Preliminary Design Review) took place in June 2009.
The TARANIS Project received the go-ahead from the CNES Board of Directors on Dec. 9, 2010 (Ref. 11).
The CDR (Critical Design Review) is scheduled for April 2013.
Figure 1: Illustration of upper atmospheric TLE (Transient Luminous Event) phenomena of sprites, jets and elves (image credit: CNES) 11)
TARANIS is a minisatellite of the CNES Myriade series, which started with the DEMETER microsatellite, launched in June 2004. The structure consists of four vertical struts and four lateral panels made of alloy honeycomb. The general design concept allows having systematically one satellite face in the shadow. This face supports low temperature equipment and the star sensor, which requires not only low temperature, but also a FOV (Field of View) free of parasitic sun illumination. The thermal subsystem is partly passive, relying on the use of paints, MLI and SSM, and partly active using heaters. 12)
The TARANIS spacecraft is 3-axis stabilized requiring nadir pointing. The AOCS (Attitude Orbit and Control Subsystem) uses reaction wheels which are unloaded using magneto torquers. Electric power (200 W) is provided by two foldable panels. Each panel is covered with 7 strings of 18 cells (triple junction AsGa) having an efficiency of 26%. The solar array is articulated by a micro-step drive mechanism with a resolution of 1/64º. The Li-ion battery has a capacity of 15 Ah, consisting of 10 x 8 cells. A PCDU (Power Control and Distribution Unit) provides regulation and distribution.
Figure 2: Illustration of the TARANIS spacecraft (image credit: APC Laboratory of the University of Paris) 13)
Figure 3: Artist's view of the TARANIS spacecraft (image credit: CNES)
The data handling concept uses a central OBC (Onboard Computer), an in-house development with low power consumption and use of COTS parts. The integrated OBC provides the following features:
- Processor: transputer T805 @ 20 MHz with four high-speed data exchange links @ 5 Mbit/s, and a processing capacity of 2 MIPS
- Radiation hardened: 15 krad, SEL and SEU immune
- Telemetry and telecommand data coding and decoding according to CCSDS standards
- 1 Gbit RAM protected by EDAC
- Input and output configurable with micro software and addition of boards. A standard configuration includes 10 serial links (RS-422, RS-485) and the acquisition of 16 analog inputs coded on 12 bits.
- The OBC power consumption is 6 W max, the mass is 3 kg. The OBC handles all S/C and payload functions.
The propulsion module is a blow-down system using hydrazine (4.5 kg corresponding to a ΔV of 80 m/s). It uses 4 thrusters with a 1 N thrust each. The propulsion module is being used for orbital maintenance as well as for EOL orbit maneuvers to allow a spacecraft reentry within a period of 25 years. The spacecraft has a mass of 152 kg, size: 98 cm x 107 cm x 115 cm. The design life is 2 years. The mass of the payload is ~ 35 kg.
RF communications: The source data are stored onboard in mass memory of 16 Gbit capacity. The communication links are in X-band to the CNES control station at Toulouse. The downlink data rate is 16.8 Mbit/s. The receivers are in hot redundancy, the OBC being locked on the first one from which a signal is received. The transmitters are in cold redundancy. The data volume is ~ 4 GB/day.
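As a back-of-the-envelope check on the figures quoted above (16.8 Mbit/s X-band downlink, ~4 GB of data per day), the sketch below computes the daily contact time needed to empty the mass memory. The ~10 min usable pass duration is an assumption for a 700 km orbit over a single ground station, not a mission specification.

```python
# Rough downlink budget using the figures quoted in the text.
downlink_rate_bps = 16.8e6        # X-band downlink rate (bit/s)
daily_volume_bits = 4 * 8e9       # ~4 GB/day expressed in bits
pass_duration_s   = 10 * 60       # assumed usable contact time per pass (s)

required_s = daily_volume_bits / downlink_rate_bps
print(f"Required downlink time: {required_s / 60:.1f} min/day")
print(f"Passes needed at ~{pass_duration_s / 60:.0f} min each: "
      f"{required_s / pass_duration_s:.1f}")
```

This works out to roughly half an hour of downlink per day, i.e. on the order of three passes over the Toulouse station.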
Figure 4: View of the deployed TARANIS spacecraft (CNES)
Launch: A launch of the TARANIS spacecraft as a secondary payload is planned for late 2015 from Kourou on a Soyuz or Vega vehicle (Ref. 12). A launch contract with Arianespace was signed in July 2012. 14)
Orbit: Sun-synchronous orbit with a drift of 2 hours/year, altitude 700 km, inclination = 98º, LTAN (Local Time on Ascending Node) at 13:30 hours.
Sensor complement: (MCP, XGRE, IDEE, IMM, IME-BF, IME-HF)
The payload is constituted by electric and magnetic antennas, 2 cameras, 4 photometers, and X-ray and gamma-ray (γ-ray) sensors. The onboard measurements will be complemented by ground-based observations and dedicated measurement campaigns. The sensor complement has a mass of ~35 kg and a power consumption of ~35 W (37 W on the nightside, 21 W on the dayside of the orbit).
• Advance physical understanding of the links between TLEs and TGFs, in their source regions, and the environmental conditions (lightning activity, variations in the thermal plasma, occurrence of extensive atmospheric showers, etc)
• Identify the generation mechanisms for TLEs and TGFs and, in particular, the particle and wave field events which are involved in the generation processes or which are produced by the generation processes
• Evaluate the potential effects of TLEs, TGFs, and bursts of precipitated and accelerated electrons (in particular lightning induced electron precipitation and runaway electron beams) on the Earth atmosphere or on the radiation belts.
To maximize the scientific return of the data collected by TARANIS, the scientific payload is operated as a single instrument. The strategy adopted is twofold: a continuous monitoring of low resolution optical, field and particle data is performed and transmitted (the survey mode). Under alert, all TARANIS instrument analyzers will initiate a synchronized high resolution data mode (the event mode). Each analyzer includes memory to store high resolution data for a time interval including the event detection time (Ref. 16).
The event data are recorded on board on triggering alerts only. The alert signals will be generated by the photometers (MCP-PH), the gamma detectors (XGRE), the electron detectors (IDEE), or the HF wave sensor (IME-HF). All the event management is handled by MEXIC. To identify an electromagnetic signature associated with TGFs and TLEs, the parent lightning, a possible relationship between TGFs and TLEs, or an associated relativistic runaway electron beam, it is mandatory to collect high resolution data simultaneously from all TARANIS instruments. The relative time accuracy between the TARANIS instruments will be less than 10 ns, allowing meaningful intercomparison of the data sets. To allow comparison with ground-based and balloon measurements, the absolute time accuracy on board will be less than 1 ms. The available onboard mass memory and telemetry are such that up to about 2 GB of high-resolution data (event mode) will be downloaded per day.
Table 1: Summary of TARANIS scientific payloads
Figure 5: Scientific payload accommodation onboard TARANIS (image credit: CEA, CNRS)
MCP (Micro Cameras and Photometers):
MCP is an instrument of CEA (Commissariat à l'Energie Atomique, the French Atomic Energy Commission) + Tohoku University and RIKEN (Japan). The objectives of the MCP experiment are: 18)
- To identify and characterize the TLEs (sprites, halos, elves, etc.), that is, to determine their duration, their brightness at different wavelengths, their size, and their location relative to their parent lightning
- To locate the source regions of TLEs over the world
- To identify and characterize the strongest lightning flashes
- And to trigger the other TARANIS instruments, which may detect associated events.
The MCP instrument is comprised of two MicroCameras (MCP-MC) and four photometers (MCP-PH). MCP-MC will be used to locate lightning flashes and TLEs and to classify TLEs in their different categories (column or carrot sprites, elves, jets...). MCP-PH will be used to detect on board TLEs and strong lightning flashes and to characterize them temporally and spectrally.
MCP-MC: The concept of discriminating TLEs from images taken at the nadir has been validated by the LSO (Lightning and Sprites Observations) experiment onboard the ISS, where it was suggested to distinguish TLEs from lightning by their spectral content. The concept is to observe the same scene simultaneously with two cameras. One of them, the “lightning camera”, measures the lightning flashes in a broad spectral band. The other camera, the “sprite camera”, is especially designed to maximize the contrast between the TLE and lightning events.
For TARANIS, the “sprite camera”, named MCS, will use a narrowband filter of 10 nm FWHM (Full Width at Half Maximum) centered at 762 nm, while the “lightning camera”, named MCE, will use a narrowband filter at 777 nm of 10 nm FWHM instead of the broadband filter used by LSO. This narrow band includes one of the strongest emission bands of a lightning flash, due to atomic oxygen excitation. It was used in the last three spaceborne instruments dedicated to lightning detection and location (OTD, LIS, and FORTE). The lightning brightness in this band is thus well documented. The TLE-producing thunderstorm size is hundreds of kilometers. A disk of ~500 km diameter is then a good compromise to detect TLEs. A sampling of about 1 km at nadir is convenient for space sprite observations.
The distance between TLEs and parent lightning flashes ranges from a few km to ~50 km. This 1 km resolution will make it possible to capture both TLEs and their parent lightning flashes in the same image or in successive images. Measurements of the structures inside TLEs, such as sprite tendrils, are performed from the ground. Recent ground-based observations use a very rapid camera (more than 10,000 frames/s) to describe the streamer physics inside sprites.
The space observations are adapted to the measurement of different emissions (including gamma emissions) and to obtaining statistics, which are more difficult to acquire from the ground. Such studies do not need high spatial and temporal resolutions. Standard video cameras (30 frames/s) are used by most of the observers all around the world. A frame rate of 10/s is acceptable to detect sprites from space and to separate the different strokes which make up a lightning flash.
The sources and the cameras have been modeled in order to evaluate the performances of MCP-MC. The sources are lightning flashes and TLEs. Lightning flashes are modeled as radiating surfaces located at the height of the cloud top (i.e. from 10 to 20 km altitude). Realistic shapes of lightning flashes are given by LSO observations. The radiance range for MCE is given by LIS measurements. The MCS lightning radiance range is deduced from the calculated ratio between wideband and narrowband measurements performed by LSO. For TLEs, a 3D finite element model of a sprite has been developed. Column sprites are represented by 3 km diameter columns from 55 to 85 km altitude. Carrot sprites are modeled as truncated cones with a diameter of 3 km at the cone base (45 km altitude) and 15 km at its top (85 km altitude). A sprite volume radiance of 1.1 x 10^12 photons/s/m3/sr at 762 nm (~9 x 10^-7 W/m2 on the entrance pupil) has been calculated from ISUAL observations. Elves are modeled as radiating surfaces located at altitudes of ~90 km. Their shape is like a ring centered on the lightning flash. Their radiance is deduced from observations by the ISUAL (Imager of Sprites and Upper Atmospheric Lightning) instrument flown on FormoSat-2 (launch May 20, 2004).
The cameras are modeled taking into account the optics design (distortion, PSF, focal length, aperture, transmission, ...), the relative misalignment between cameras, and the CCD characteristics (quantum efficiency, dark current and its specific noises). They are intended to be on board the TARANIS spacecraft at an orbital altitude of ~700 km. The photon number arriving at the pupil level is calculated taking into account the source radiance and its duration, the characteristics of the optics and the filter, and the distance from the source to the satellite. The number of photons is then converted into photoelectrons measured by each pixel of the CCD using its quantum efficiency.
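The sketch below illustrates this radiometric chain, going from the in-band irradiance at the entrance pupil to detected photoelectrons over one frame. The pupil diameter, optics/filter transmission, frame time and CCD quantum efficiency used here are illustrative assumptions, not the MCS flight design; only the irradiance figure (~9 x 10^-7 W/m2 for a sprite at 762 nm) is taken from the text above.

```python
# Simplified pupil-irradiance -> photoelectron chain for the sprite camera.
# All instrument parameters below are assumptions for illustration.
import math

h, c = 6.626e-34, 3.0e8          # Planck constant (J s), speed of light (m/s)
wavelength   = 762e-9            # MCS "sprite" band centre (m)
irradiance   = 9e-7              # W/m2 at the pupil (sprite figure quoted above)
pupil_diam   = 0.015             # assumed entrance-pupil diameter (m)
transmission = 0.5               # assumed optics x filter transmission
frame_time   = 0.1               # 10 frames/s -> 0.1 s integration
qe           = 0.4               # assumed CCD quantum efficiency at 762 nm

photon_energy  = h * c / wavelength                       # J per photon
pupil_area     = math.pi * (pupil_diam / 2) ** 2          # m2
photon_rate    = irradiance / photon_energy * pupil_area  # photons/s at the pupil
photoelectrons = photon_rate * transmission * frame_time * qe

print(f"~{photoelectrons:.2e} photoelectrons per frame from the source "
      "(to be divided over the pixels covered by the sprite image)")
```

With these assumptions the sprite yields on the order of 10^7 photoelectrons per frame in total, which is then spread over the pixels its image occupies.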
MCP-PH: Most TLEs are triggered by a lightning flash. The delay between the lightning flash and the TLE varies from 0.5 ms to several tens of milliseconds. This delay depends on the physical mechanism at the origin of the different TLEs. An elve event generally occurs ~0.5 ms after the lightning flash. This time is slightly longer, from 0.5 to 1 ms, for halos, and varies from 0.5 ms to a few milliseconds for sprite columns. Carrot sprites can be delayed by several tens of milliseconds. The duration of an optical lightning flash observed from space is several hundred µs while elves and sprites last a few ms. Jets exhibit the longest events, with durations exceeding hundreds of milliseconds. Hence, photometers need a better time resolution than the cameras for observing these time differences. MCP has to alert the TARANIS payload of a TLE occurrence. Triggering with cameras is more difficult than with photometers because the resolution in time is lower and the whole image processing is only possible on the ground.
TLEs are mainly due to the excitation of vibrational levels of molecular nitrogen. These different levels require different electron energies. Photometric measurements at different wavelengths can then give important information on the physical mechanisms which produce TLEs.
MCP-PH will measure the irradiance in four different spectral bands:
• PH1: inside the N2 Lyman-Birge-Hopfield (LBH) UV band system from 160 to 260 nm
• PH2: the most intense line of the N2 second positive (UV) band at 337 ± 5 nm for TLEs
• PH3: the most intense line of the N2 first positive (NIR) band at 762 ± 5 nm for TLEs
• PH4: from 600 to 900 nm. This last spectral band will be dedicated to lightning flash measurements. This band has been used by the FORTE satellite to detect lightning from a similar orbit.
The FOV of PH1, PH2 and PH3 has to be the same as that of the camera. The overlap between the photometer and camera FOVs must be very high (more than 95%). PH4 will have a much larger FOV (a disk of 700 km radius) than the other photometers, to be similar to the FOV of the other TARANIS instruments. The irradiance range requirement for PH1, PH2 and PH3 is derived from the ISUAL observations. This instrument is the first one to provide calibrated photometric observations over a wide spectral range. For the PH4 design, the statistics of the FORTE measurements have been used. The dynamics of these phenomena require a sampling frequency of at least 20 kHz. The four photometers must be synchronized to be able to compare the recorded waveforms.
To detect TLEs on board, special attention has been paid to finding a way to discriminate TLEs efficiently from lightning flashes. One important ISUAL result is the confirmation that TLEs radiate in the LBH band system. At these wavelengths, the atmosphere fully absorbs the radiation coming from altitudes lower than 20 km, where lightning events occur. The upper limit of the PH1 filter bandwidth (260 nm) is therefore chosen so that atmospheric filtering removes all lightning radiation which could reach TARANIS. With such a filter, PH1 will then measure only TLEs. The triggering strategy is to search for the occurrence of a sharp peak inside each photometer waveform. A peak is detected when the signal level exceeds a fixed threshold over a determined time window. Threshold and time window will be adjustable during the flight. To send an alert, the TLE detection must be confirmed simultaneously by two or more photometers including PH1. These precautions are taken to avoid false alerts due to high energy particle interactions with the photometers.
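A minimal sketch of this trigger logic is given below: a channel "fires" when its signal stays above a threshold over a short window, and an alert is raised only if two or more photometers fire together, one of them being PH1 (the 160-260 nm channel that lightning cannot reach). The thresholds and window length are placeholders; per the text above they are adjustable in flight, and the on-board implementation may differ.

```python
# Sketch of the photometer TLE trigger described above (illustrative only).
import numpy as np

def channel_fires(signal: np.ndarray, threshold: float, window: int) -> bool:
    """True if the signal stays above threshold over any run of `window` samples."""
    above = signal > threshold
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= window:
            return True
    return False

def tle_alert(waveforms: dict, thresholds: dict, window: int = 4) -> bool:
    """waveforms/thresholds keyed by 'PH1'..'PH4'; alert needs >=2 channels incl. PH1."""
    fired = {ch: channel_fires(w, thresholds[ch], window)
             for ch, w in waveforms.items()}
    return fired.get("PH1", False) and sum(fired.values()) >= 2
```

Requiring PH1 plus at least one other channel mirrors the anti-false-alarm precaution mentioned above, since a single energetic particle hit tends to show up in only one detector.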
The MCP-MC instrument is comprised of the following elements:
• An optical module MC-U (MicroCamera-Unit) containing:
- Two optical heads, each one dedicated to a specific wavelength (762 nm for sprite and 777 nm for lightning with 10 nm FWHM)
- Two detection sub-assemblies, each based on a cooled CCD
- A CCD proximity electronic board, used for data quantization of the signals
- A mechanical structure.
• An electronics module, allocated 1m away from the optical one, containing:
- An electronics board for data compression and data formatting
- A mechanical structure allowing integration into a rack with the other electronics modules of the payload.
Table 2: Radiance ranges to be observed in the two MCP-MC instruments
The MC-U device has dimensions of about 125 mm x 180 mm x 165 mm. The mass is estimated to be ~2 kg and its power consumption is < 4.5 W.
Detection subassembly: The detection subassembly is based on a qualified technology, used on many EADS SODERN star-trackers of the SED-6 and HYDRA family. The subassembly is based on a CCD 4720 from E2V, front illuminated, NIMO (Non Inverted Mode Operation). It’s a frame transfer CCD, with a circular mask on the image zone, which produces circular pictures of the observed scenes. The detector is cooled by a two-stage TEC (Thermoelectric Cooler).
Figure 6: Illustration of the MicroCamera Units (image credit: CEA, CNES)
The photometer instruments are composed of a sensor module PH-U and an “analyzer” module, i.e. an electronic board that is plugged into the MEXIC module.
1) The sensor module (PH-U):
The main choice was to dedicate one optical path per spectral band. Indeed, sharing optics between several bands would have saved space, but it would have led to more complex focal planes and optical coatings, at the limit of feasibility. Thus, the sensor is composed of four optical and electronic chains as depicted in Figure 7. The optics is composed of two stages, in front of and behind the spectral filter.
Figure 7: Optical & electrical chain for one channel (PH4 uses no PMT and HV module), image credit: CEA, CNES
The filter of each band defines the narrow spectral band, and has to be placed in a telecentric configuration so that each point of the FOV sees the same spectral band. The filters present one technical difficulty due to their narrowness. Moreover, for PH1 the wavelengths are in the UV, and strong rejection of visible light is mandatory.
Given the very low irradiance produced by the optical phenomena to be observed (down to a few tenths of a µW/m2), the detection module will be a photomultiplier, except for PH4 where the signal is 100 times higher. For that reason the focal planes include HV (High Voltage) modules to drive the cathodes of the photomultipliers. These HV modules provide a voltage of up to several hundred volts from the 5 V power supply. The high voltage is defined on the ground for each orbit in order to adapt the detection gain to the radiance of the TLEs and lightning. Furthermore, the amplifier module will have four possible gain factors that will be chosen, for each orbit, to fit the dynamic range of the ADC in the analyzer. The detailed definition of the photomultiplier (material of the window and the photocathode) will be chosen with respect to the spectral performance.
A further issue has to be taken into account: the vibration levels that the module has to endure. Its most fragile components are the photomultipliers. Special care will have to be paid in the choice of the model, or in its accommodation. A damping mechanical system is currently under consideration.
The module has a size of 205 mm x 125 mm x 200 mm and a mass of ~ 2.5 kg with a power consumption of < 3.5 W.
Figure 8: Schematic view of the photometer sensor module (image credit: CEA, CNES)
2) The analyzer module (PH-A):
The PH-A is a PCB (Printed Circuit Board) to be inserted into the MEXIC module. It is dedicated to the photometer instrument. It is in charge of the following main functions. In interface with the sensor:
• Driving the HVs of the photomultipliers
• Driving the power supply of the calibration sources
• Providing the necessary power lines (+5V, ±12 V) and switching them on and off
• Acquiring (including data quantization) signals from the sensor.
The board is composed of power drivers, a DAC and an ADC in interface with the sensor, an FPGA to process all the data (it has about 130 I/O ports and its clock frequency is 20 MHz), a second FPGA for the interface with MEXIC, and a rolling memory. This 32 Mbit memory stores the high definition data in a rolling stack. In case of an “event” alert, the data are formatted and sent to MEXIC, with a time range starting before the event. If no event occurs, the high definition data are erased and only compressed survey data are sent. - The size of the PCB is 115 mm x 190 mm.
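The rolling-memory behaviour described above is essentially a circular pre-trigger buffer: high-resolution samples are written continuously, and on an alert the analyzer returns a window that starts before the trigger. The sketch below illustrates this; the buffer depth and pre-trigger split are placeholders, not the actual 32 Mbit sizing.

```python
# Illustrative "rolling memory" with pre-trigger capture (sizes are placeholders).
from collections import deque

class RollingMemory:
    def __init__(self, depth: int = 4096, pre_trigger: int = 1024):
        self.buffer = deque(maxlen=depth)   # oldest samples fall out automatically
        self.pre_trigger = pre_trigger

    def push(self, sample) -> None:
        """Continuously record high-resolution samples."""
        self.buffer.append(sample)

    def on_alert(self, post_samples: list) -> list:
        """Return pre-trigger history plus the samples acquired after the alert."""
        history = list(self.buffer)[-self.pre_trigger:]
        return history + post_samples
```

If no alert arrives, the oldest samples simply drop out of the deque, which corresponds to the high-definition data being discarded while only the compressed survey stream is kept.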
Calibration: A coarse calibration, or good-health test, can be performed on board thanks to calibration sources inserted into the sensor module. The architecture is as simple as possible: in each optical tube, a LED is mounted beside the detector and directed outward, under the optical filter. Its light, when powered on, is therefore reflected by the filter back to the detector. The uniformity of this beam is not an issue; the only goal is to check that the signals delivered by the detectors are similar to those measured on the ground prior to launch. The four LEDs will be driven by the same power supply in parallel.
Fine calibrations will use vicarious targets, such as dark oceans to obtain the dark level, and well-known desert areas under moonlight to obtain the gain of each channel. Several orbits will be dedicated to the calibration of each instrument during the life of TARANIS, including MCP.
XGRE (X-ray, γ-ray and Relativistic Electron experiment):
XGRE is an instrument of IRAP (Institut de Recherche en Astrophysique et Planétologie) of the University of Toulouse, and of APC (Astro Particule et Cosmologie) of the University of Paris, France. The objectives are:
• To provide measurements that can determine unambiguously the mechanism(s) that generate transient TGF (Terrestrial Gamma-ray Flash) events.
• To quantify:
- The total energy released per event
- The atmospheric altitude at which the burst is initiated
- the geophysical parameters that control the evolution of TGFs.
Instrument: The XGRE instrument is comprised of: 3 sensors for the detection of X-rays, γ-rays, and relativistic electrons
- Photon energy range: 20 keV-10 MeV
- Measurement of rise time: 10-100 µs; event duration: 10-100 ms; (1 µs relative time resolution)
- Range of relativistic electrons: 1 MeV - 10 MeV
- NaI scintillators: 900 cm² detector area
- BC400 plastic scintillator in anti-coincidence (relativistic electron)
- Real-time alert for triggering event mode.
With a total area of 900 cm², the instrument has to distinguish electrons and photons and measure their energy deposits in the range 20 keV - 10 MeV. It will time-tag each TGF with an accuracy of about one microsecond and will be able to trigger the other instruments. In addition, it must locate the emitting zone coarsely (30°) and measure as precisely as possible the position of the low energy cutoff. The instrument comprises 3 sensors, each a sandwich made up of 2 plastic scintillators enclosing a lanthanum bromide scintillator (LaBr3). The plastic scintillators are well adapted to the detection of the electrons, and LaBr3 is a new scintillator material that makes a good and very fast gamma-ray spectrometer. From the relative counts between these 3 detectors, the direction of propagation of the particles can be deduced. The 3 sensors, of 300 cm² area each, are planar and are tilted by angles of about 40° with respect to each other. From their relative counts, one will deduce the direction of the emitting zone. 19)
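As a rough illustration of how relative counts on tilted planar detectors can constrain an arrival direction (the actual TARANIS analysis is certainly more elaborate), one very simple estimator weights each sensor's normal vector by its count rate; the geometry and counts below are invented.

```python
import numpy as np

# Assumed geometry: three planar sensors whose normals are tilted ~40 degrees
# apart around nadir (made-up vectors, not the real TARANIS accommodation).
normals = np.array([
    [ 0.64,  0.00, -0.77],
    [-0.32,  0.55, -0.77],
    [-0.32, -0.55, -0.77],
])
counts = np.array([5200.0, 3100.0, 1500.0])   # relative counts per sensor (made up)

# A planar detector roughly sees a flux proportional to cos(incidence angle),
# so a count-weighted sum of the normals gives a crude arrival direction.
direction = counts @ normals
direction /= np.linalg.norm(direction)
print(direction)   # unit vector pointing roughly towards the emitting zone
```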
XGRE measurement requirements:
• High count rate per cm² (10⁴ – 10⁵ cm⁻² s⁻¹)
- Dead time < 300 ns (10⁶ s⁻¹; see the worked example after this list)
- triggers with 300 ns relative time accuracy
- events (TGFs) with 1 µs relative time accuracy
• Separation between γ-rays and electrons
- Energy range: 20 keV-12 MeV
- 30 % accuracy at 20 keV
- 9 % accuracy at 511 keV
- Energy range: < 1 MeV to >10 MeV
• Minimize risk of missing TGF
- Storage capacity of 200 000 photons
- Burst and survey algorithms
• Ascending or descending particle motion
• Zenithal and azimuthal direction (~30° resolution).
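To see why the dead-time requirement in the list above matters at these count rates, here is the standard non-paralyzable dead-time estimate, evaluated with the requirement values (a textbook relation, not XGRE's actual electronics model).

```python
# Non-paralyzable dead-time model (standard textbook relation):
# measured_rate = true_rate / (1 + true_rate * dead_time)

def measured_rate(true_rate_hz, dead_time_s):
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

true_rate = 1e6          # 10^6 counts/s, the upper end of the requirement
dead_time = 300e-9       # 300 ns

m = measured_rate(true_rate, dead_time)
loss = 1.0 - m / true_rate
print(f"measured ~{m:.3g} counts/s, ~{loss:.0%} of events lost")
# Even with 300 ns at 10^6 s^-1, about 23% of events are already lost;
# a slower detector chain would miss far more of a bright TGF.
```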
Figure 9: Schematic view of the XGRE instrument (image credit: IRAP, APC, Ref. 10)
Three sensors on the platform are oriented towards Earth + 1 oriented towards space.
IDEE (Instrument for the Detection of high Energy Electrons):
IDEE is a device of IRAP (Research Institute in Astrophysics and Planetology, CNRS/University of Toulouse) and the University of Prague (Czech Republic). The objectives are to detect and characterize impulsive electron beams: 20)
• Up-going: runaway electron beam
• Down-going: lightning/TLE-induced electron precipitation
The instrumentation is comprised of 2 spectrometers for the measurement of high energy electrons. One spectrometer is accommodated with its sight axis making an angle of 60° with nadir; the second makes an angle of 30° with the anti-nadir direction.
- IME-BF 1 component, DC - 1 MHz
- IME HF 1 component, 100 kHz - 30 MHz
Measurement of bursts:
- from ~ 150 keV to 4 MeV (higher energies are covered by XGRE)
- along B0 (2 detection modules) and with discrimination in pitch angles
- with time resolution of the order of 1 ms (or less).
IMM (Instrument for Magnetic Measurements):
IMM is a device of LPC2E (Laboratoire de Physique et de Chimie de l'Environnement, Orleans) of CNRS, France and of the Stanford Very Low Frequency (VLF) Group, Stanford University, Stanford, CA, USA. The objectives are to build an FPGA (Field Programmable Gate Array) to automatically detect the occurrence of a special kind of electromagnetic wave known as 0+ whistlers. The information provided by the 0+ whistler detector will allow for the unambiguous association of lightning with TLEs and TGFs, which in turn will offer a better understanding of the physical mechanisms behind them. Furthermore, variation in TLEs and TGFs is most likely driven by variations in the parent lightning strike that also generates a 0+ whistler. 21)
Note: “Whistlers” and “sferics” are both terms for Earth's audible radio emissions, but they are not identical phenomena: the term “sferics” refers to radio waves produced by a lightning discharge (an abbreviation of “atmospherics”). Lightning pulses that travel all the way to the magnetosphere and back are highly dispersed, much more so than tweeks; they are called 'whistlers' because they sound like slowly descending tones. Whistlers are dispersed, not because of the waveguide cutoff effect, but rather because they travel great distances through magnetized plasmas, which are strongly dispersive media for VLF signals. 22)
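The "slowly descending tone" can be pictured with the classic Eckersley approximation, in which a whistler component at frequency f arrives at a time proportional to 1/sqrt(f); the dispersion constant used below is only a typical illustrative value.

```python
import numpy as np

# Eckersley approximation: t(f) = t0 + D / sqrt(f)
# D (the dispersion, in s*sqrt(Hz)) grows with the path length through the plasma.
D = 40.0          # illustrative dispersion constant
t0 = 0.0          # time of the causative lightning stroke

freqs_khz = np.array([10, 8, 6, 4, 2])            # VLF tones, high to low
arrival_s = t0 + D / np.sqrt(freqs_khz * 1e3)

for f, t in zip(freqs_khz.tolist(), arrival_s):
    print(f"{f:2d} kHz arrives at {t:.2f} s")
# Higher frequencies arrive first, lower ones later: a descending whistling tone.
```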
The objectives are to:
• Identify heating sources
- Sferics associated with TLEs and TGFs.
- Perturbations of VLF source signals associated with TLEs and TGFs
• Identify ES or/and EM wave fields generated by electron beams and other processes
The instrumentation is a compound triaxial system of search-coil magnetometers to measure the alternating magnetic field in the low and medium frequency ranges, together with the medium frequency wave analyzer (electric and magnetic). IMM is comprised of:
• 2 magnetic antennas (IMM): LF (DC-20 kHz), and MF (10 kHz-1 MHz)
• 3 electric antennas called IME (Instrument for Electric Measurements): 2 LF/MF (10 Hz-20 kHz), and 1 HF (100 kHz - 30 MHz)
- Detect and characterize transient signals in a wide frequency band
- Monitor ES or/and EM wave fields which could be used to trace electron beams
IME-BF (Instrument for Electric field Measurements-Low Frequency)
The IME-BF instrument is developed by the CNRS/LATMOS (Laboratoire Atmospheres, Milieux, Observations Spatiales) team in collaboration with LPCE (Laboratoire de Physique et de Chimie de l'Environment) of Orleans, and NASA. The objective of IME-BF (IME-BF stands for “Instrument de Mesure du champ Electrique Basse Fréquence”) is to measure the electric field in the low frequency range; the instrument also includes the low frequency wave analyzer (electric and magnetic).
The instrument consists of two components: 23)
• IME: Electric field antenna of the dipole double-sphere type, which measures the electric field from DC to 3.3 MHz. Two sphere sensors are boom-mounted to measure:
- 1 component of the electric field
- from DC to 3.3 MHz
- ULF frequency range: 0 – 64 Hz
- VLF frequency range from a few Hz to 20 kHz (with a frequency resolution of 94 Hz)
• BF-Analyzer: on-board data processing of three instruments, i.e.
- electric field antenna (IME/LATMOS)
- magnetic field antenna (IMM/LPCE) and
- Ion probe (SI/NASA)
Figure 10: IME mounted on the boom (image credit: LATMOS)
IME-HF (Instrument for Electric field Measurements-High Frequency)
The IME-HF instrument (IME-HF stands for "Instrument de Mesure du champ Electrique Haute Fréquence") has the objective to record waveform measurements of fluctuating electric fields in the frequency range from a few kHz up to 37 MHz, with the following scientific goals:
- Identification of possible wave signatures associated with transient luminous phenomena during storms
- Characterization of lightning flashes from their HF electromagnetic signatures
- Identification of possible HF electromagnetic or/and electrostatic signatures of precipitated and accelerated particles
- Determination of characteristic frequencies of the medium using natural waves properties
- Global mapping of the natural and artificial waves in the HF frequency range, with an emphasis on the transient events.
The IME-HF instrument consists of a double-wire Hertz dipole (the sensor) and the HF-A analyzer circuit. The sensor is intended to measure one component of the electric field from 100 kHz to 30 MHz along the axis of the two aligned wire antennas. This allows events like TIPPs to be measured and, unlike ALEXIS or FORTE, the frequency range contains the plasma frequency of the ionosphere. Each antenna is equipped with a preamplifier. The antenna is shown in Figure 11 and its connection to the HF-A is shown in Figure 12. 24)
Figure 11: Schematic view of the IME-HF sensor antenna (image credit: LPC2E)
Figure 12: Connection of the antenna to the HF-A analyzer (image credit: LPC2E)
The analyzer is mainly dedicated to the data processing of the HF signal, with 12 bit depth at a sampling frequency of 80 MHz. The block diagram of the HF-A is presented in Figure 13. First, the IME HF-A takes the difference between the potentials measured by the antennas. This difference is fed through the 10th order bandpass anti-aliasing input filter. The signal is then processed in two ways.
Firstly, there is the “fast” serial A/D converter (fADC) with the sampling frequency of 80 MHz and 14 bit depth. Only 12 bits are transmitted further and processed. There is a gain control to select the desired 12 bit range. The 12 bit data from the fADC are stored in the circular RAM buffer for further processing and event detection by the FPGA.
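One way to read "only 12 bits are transmitted, with a gain control to select the desired 12-bit range" is as a selectable bit window on the 14-bit samples. The sketch below shows just that reading; it is an assumption for illustration, not the actual FPGA logic.

```python
def select_12bit_window(sample_14bit, shift):
    """Keep 12 of the 14 ADC bits.
    shift = 0 keeps the 12 most significant bits (largest signals),
    shift = 2 keeps the 12 least significant bits (highest sensitivity)."""
    assert 0 <= shift <= 2
    return (sample_14bit >> (2 - shift)) & 0xFFF

# Example: a mid-scale 14-bit sample under the two extreme settings.
s = 0x1A5C                        # 14-bit value 6748
print(select_12bit_window(s, 0))  # MSB-aligned window
print(select_12bit_window(s, 2))  # LSB-aligned window
```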
Secondly, there is the “slow” signal path. The signal is divided by 9th order bandpass filters into 12 channels across the whole frequency range. The center frequencies of these bandpass filters are: 1.5, 4.5, 7.5, 10.5, 13.5, 16.5, 19.5, 22.5, 25.5, 28.5, 31.5 and 34.5 MHz. Each channel further contains its own amplifier and a 12 bit “slow” serial A/D converter (sADC) with a 12 µs time interval between samples. Data from the 12 channels are fed to the FPGA and processed there.
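The slow path effectively produces a coarse 12-channel spectrum, with channels roughly 3 MHz wide centred on the listed frequencies. A simple way to picture what those channels report is an FFT-based band-power computation on a short piece of the 80 MHz waveform; the test signal and the assumed channel bandwidth are made up for illustration.

```python
import numpy as np

FS = 80e6                                     # fast-ADC sampling rate (Hz)
CENTERS_MHZ = [1.5 + 3*k for k in range(12)]  # 1.5, 4.5, ..., 34.5 MHz
BANDWIDTH = 3e6                               # assumed channel width

# Test signal: a 10 µs record containing tones at 4.5 MHz and 22.5 MHz plus noise.
t = np.arange(int(10e-6 * FS)) / FS
x = np.sin(2*np.pi*4.5e6*t) + 0.5*np.sin(2*np.pi*22.5e6*t)
x += 0.05 * np.random.default_rng(0).normal(size=t.size)

spec = np.abs(np.fft.rfft(x))**2
freqs = np.fft.rfftfreq(t.size, d=1/FS)

for c in CENTERS_MHZ:
    lo, hi = c*1e6 - BANDWIDTH/2, c*1e6 + BANDWIDTH/2
    power = spec[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{c:5.1f} MHz channel: {power:.1f}")
# The 4.5 and 22.5 MHz channels dominate, mimicking what the 12 sADC outputs report.
```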
Figure 13: IME-HF block diagram (image credit: LPC2E)
Communication of the IME-HF with the outside world is provided by the MEXIC interface connected to the FPGA. Besides the telecommand/telemetry (TC/TM), the FPGA is responsible for data formatting, event detection and driving the other hardware peripherals.
MEXIC (Multi Experiment Interface Controller):
MEXIC is a device of LPCE + CBK (Poland) used for payload management. The instrument is a set of 2 electronic boxes comprising the 8 analyzers associated with the instruments, providing electrical power to the instruments, management of the payload modes, and the interface with the mass storage and the platform computer. MEXIC also ensures the synchronization of the instruments from event detections by the photometers. 25)
The main functions of MEXIC are:
- provide secondary voltage to the various instruments
- receive data from the platform computer and decode the time tagged commands from the PL workplan established by the CMS, commands processed one-by-one (without having to manage due dates)
- receive and decode immediate commands sent by the platform (SHM, for example)
- manage the instruments (ON/OFF, configuration, modes)
- monitor correct operation of PL (Temperatures, consumption, etc.)
- implement the trigger strategy for instruments according to the event
- manage the acquisition sequence triggered by an event
- format and time-stamp data, both scientific and housekeeping, in X-band
- format and time-stamp housekeeping data in S-band
- manage the transfer of data to the mass storage
- serve as a host structure for instrumentation analyzers
- make it possible to make in-flight modifications to analyzer software
- make it possible to make in-flight modifications to the MEXIC software.
1) E. Blanc , F. Lefeuvre, “TARANIS: a microsatellite project dedicated to the study of impulsive transfers of energy between the Earth atmosphere - ionosphere and magnetosphere,” 36th COSPAR Scientific Assembly Beijing, China, July 16-23, 2006
2) F. Lefeuvre, E. Blanc, R. Roussel-Dupre, J. Sauvaud, “The Taranis project: Scientific objectives and instrumentation,” American Geophysical Union, AGU Fall Meeting 2005, San Francisco, Dec. 5-9, 2005, paper: AE13A-08
3) C. Bastien-Thiry, F. Lefeuvre, E. Blanc, D. Lagoutte, “TARANIS a Myriade mission dedicated to sprites characterization,” Proceedings of the 4S Symposium: `Small Satellite Systems and Services,' Chia Laguna Sardinia, Italy, Sept. 25-29, 2006, ESA SP-618
4) F. Lefeuvre, E. Blanc, “The TARANIS project: Complementarity and coordination with the ASIM mission,” ASIM (Atmosphere-Space Interactions Monitor) Science Workshop June 26-27, 2006, Noordwijk, The Netherlands
5) T. Neubert, “On Sprites and Their Exotic Kin,” Science, Vol. 300, May 2, 2003, URL: http://elearning-phys.uni-sofia.bg/~blagoev/Atmosphere_electricity/rumi/neubert03.pdf
6) E. Blanc , F. Lefeuvre, R. Roussel-Dupré, J. A. Sauvaud, “TARANIS: a microsatellite project dedicated to the study of impulsive transfers of energy between the Earth atmosphere, the ionosphere, and the magnetosphere,” Advances in Space Research, Vol. 40, Issue 8, 2007, pp. 1268-1275 , URL: http://nova.stanford.edu/~vlf/IHY_Test/Tutorials/TLEs/Papers/Blanc2007.pdf
7) Christophe Bastien-Thiry, “TARANIS a CNES Scientific Mission,” Workshop on Coupling of Thunderstorms and Lightning Discharges to Near-Earth Space, June 23-27, 2008, Corte, France, URL: http://www.oma.be/TLE2008Workshop/Session6/Bastien-Thiry.ppt
8) J.-L. Pinçon, “TARANIS - A microsatellite dedicated to the study of impulsive transfers of energy between the Earth atmosphere, the ionosphere, and the magnetosphere,” SuperDARN 2009 Workshop, May 11-15, 2009, Cargese, Corsica, France
9) F. Lefeuvre, E. Blanc, J. L. Pinçon, and the TARANIS team, “TARANIS - a satellite project dedicated to the physics of TLEs and TGFs,” Workshop on Coupling of Thunderstorms and Lightning Discharges to Near-Earth Space, June 23-27, 2008, Corte, France, URL: http://www.oma.be/TLE2008Workshop/Session6/Lefeuvre.ppt
10) J-L. Pinçon, E. Blanc, P-L. Blelly, F. Lebrun, M. Parro, J-L. Rauch, J-A. Sauvaud, E. Seran, “The TARANIS mission,” URL: http://www.asdc.asi.it/10thagilemeeting/dwl.php?file=workshop_files/10.Blelly_AGILE_10th_WS.pdf
11) “TARANIS satellite to study lightning,” CNES, March 1, 2011, URL:http://www.cnes.fr/web/CNES-en/9128-gp-taranis-satellite-to-study-lightning.php
13) APC (AstroParticle et Cosmologie) Laboratory, http://www.apc.univ-paris7.fr/APC_CS/en/experience/taranis/presentation
14) “Arianespace to launch Taranis satellite for CNES,” Space Travel, July 13, 2012, URL: http://www.space-travel.com/reports/Arianespace_to_launch_Taranis_satellite_for_CNES_999.html
16) J.-L. Pinçon, E. Blanc, P-L Blelly, M. Parrot, J-L Rauch, J-A Savaud, E. Séran, “TARANIS - scientific payload and mission strategy,” General Assembly and Scientific Symposium, 2011 , 30th URSI, Orleans, France, Aug. 13-21, 2011, URL: http://www.ursi.org/proceedings/procGA11/ursi/GHE1-5.pdf
17) F. Lefeuvre, E. Blanc, J. L. Pinçon and the TARANIS team, “TARANIS - a satellite project dedicated to the physics of TLEs and TGFs,” URL: http://www.oma.be/TLE2008Workshop/Session6/Lefeuvre.ppt
18) Elisabeth Blanc, Thomas Farges, Augustin Jehl, Renaud Binet, Philippe Hébert,Fanny Le Mer-Dachard ,Karen Ravel ,Mitsuteru Sato, “Taranis MCP: a joint instrument for accurate monitoring of Transient Luminous Event in the upper atmosphere,” Proceedings of the ICSO (International Conference on Space Optics), Ajaccio, Corse, France, Oct. 9-12, 2012, paper: ICSO-134
20) J.-A. Sauvaud, A. Fedorov, P. Devoto, C. Jacquey, L. Prech, Z. Nemecek, F. Lefeuvre, “IDEE, The Electron Spectrometer of the Taranis Mission,” Workshop on Coupling of Thunderstorms and Lightning Discharges to Near-Earth Space, June 23-27, 2008, Corte, France, URL: http://www.oma.be/TLE2008Workshop/Session5/Sauvaud.ppt
21) “Automated Detection of Whistlers for the TARANIS Spacecraft Overview of the Project,” Stanford VLF Group, URL: http://vlf.stanford.edu/research/automated-detection-whistlers-taranis-spacecraft
22) “Earth Songs,” NASA, URL: http://science.nasa.gov/science-news/science-at-nasa/2001/ast19jan_1/
24) Petr Vá?a, “Modelling the electromagnetic storms and simulation of the IME-HF Analyser onboard the TARANIS satellite,” Master's Thesis, Université Paul Sabatier, Toulouse, France, 2009, URL: http://epubl.ltu.se/1653-0187/2009/102/LTU-PB-EX-09102-SE.pdf
The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: ”Observation of the Earth and Its Environment: Survey of Missions and Sensors” (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates. | 2 | 3 |
People who engage in self-harm deliberately hurt their bodies. The term 'self-harm' (also referred to as 'deliberate self-injury' or parasuicide) refers to a range of behaviours, not a mental disorder or illness (1). The most common methods of self-harm among young people are cutting and deliberately overdosing on medication (self-poisoning). Other methods include burning the body, pinching or scratching oneself, hitting or banging body parts, hanging, and interfering with wound healing (2).
In many cases self-harm is not intended to be fatal, but should still be taken seriously. While it might seem counter-intuitive, in many cases, people use self-harm as a coping mechanism to continue to live rather than end their life (3). For many young people, the function of self-harm is a way to alleviate intense emotional pain or distress, or overwhelming negative feelings, thoughts, or memories. Other reasons include self-punishment, to end experiences of dissociation or numbness, or as a way to show others how bad they feel (2,3).
Many young people might try to hide their self-harming behaviour, and only approximately 50% of young people who engage in self-harm seek help (4). Often, this is through informal sources such as friends and family, rather than professionals.
While every person is different, there are some warning signs that someone might be self-harming. Aside from obvious signs such as exposed cuts or an overdose requiring intervention, some less obvious signs could include (5):
- Dramatic changes in mood
- Changes in sleeping and eating patterns
- Losing interest and pleasure in activities that were once enjoyed
- Social withdrawal - decreased participation and poor communication with friends and family
- Hiding or washing their own clothes separately
- Avoiding situations where their arms or legs are exposed (eg, swimming)
- Dramatic drop in performance and interactions at school, work, or home
- Strange excuses provided for injuries
- Unexplained injuries, such as scratches or cigarette burns
- Unexplained physical complaints such as headaches or stomach pains
- Wearing clothes that are inappropriate to weather conditions (e.g. long sleeves and pants in very hot weather)
- Hiding objects such as razor blades or lighters in unusual places (e.g. at the back of drawers)
Onset, prevalence, and burden of suicide and self-harm in young people
The most recent 'Causes of death' publication from the Australian Bureau of Statistics (ABS) indicates that in 2012, suicide was the leading cause of death for young people aged 15-24, followed closely by road traffic accidents (6). In 2012, 70 males aged 15-19 years and 144 males aged 20-24 years died by suicide (6). For young females, 59 aged 15-19 years and 51 aged 20-24 years died by suicide (6). (The number of reported suicide deaths is likely to be underestimated for young people. These figures should be interpreted with caution as they are subject to an ABS revision process which could see them change, see Explanatory note 92 and 94 (7) for further information).
The number of young people who die by suicide in Australia each year is relatively low compared with the number who self-harm. It is difficult to estimate the rate of self-harm as evidence suggests that less than 13% of young people who self-harm will present for hospital treatment (4). Evidence from Australian studies suggest that 6-8% of young people aged 15-24 years engage in self-harm in any 12-month period (8,9). Lifetime prevalence rates are higher, with 17% of Australian females and 12% of males aged 15-19 years, and 24% of females and 18% of males aged 20-24 years reporting self-harm at some point in their life (10). The mean age of onset is approximately 17 years (10). While suicide is more common among young men, self-harm is more common among young women.
Taken together, suicide and self-harm account for a considerable portion of the burden of disability and mortality among young people. In those aged 10-24 years, self-harm is the seventh leading contributor to the burden of disease in both males and females (11). It is estimated that 21% of "years life lost" due to premature death among Australian youth was due to suicide and self-inflicted injury (12). In addition, non-fatal suicidal behaviour and self-harm are associated with substantial disability and loss of years of healthy life (12).
In adolescents, the risk factors for self-harm are similar to those for suicide. These include (2):
- Sex (female for self-harm and male for suicide)
- Low socioeconomic status
- Lesbian, gay, bisexual, or transgender sexual orientation
Significant life events and family adversity
- Parental separation
- Adverse childhood experiences
- History of physical or sexual abuse
- Family history of mental disorder or suicidal behaviour
- Interpersonal difficulties
Psychiatric and psychological factors
- Mental disorder (in particular, depression, anxiety, and ADHD)
- Misuse of drugs and alcohol
- Low self-esteem
- Poor social problem-solving skills
Experiencing a mental health problem is a risk factor for both self-harm and suicide. Evidence suggests that the majority of people who present to hospital following an act of self-harm will meet diagnostic criteria for one or more psychiatric diagnoses at the time of assessment (1). Of these, more than two-thirds would be diagnosed as having depression. While not all young people who self-harm or contemplate suicide have a mental health problem, these behaviours do suggest the experience of psychological distress.
Personality disorders are commonly associated with self-harm in young people, and self-harm is a diagnostic feature of borderline personality disorder (2). However, most people who self-harm do not meet the diagnostic criteria for a personality disorder and it is unhelpful to assume that someone has a personality disorder based on self-harming behavior alone without conducting a thorough assessment (1).
2. Hawton, K., Saunders, K. E., & O'Connor, R. C. (2012). Self-harm and suicide in adolescents. The Lancet, 379(9834), 2373-2382.
3. Klonsky, E. D. (2007). The functions of deliberate self-injury: A review of the evidence. Clinical Psychology Review, 27(2), 226-239.
4. Rowe, S. L., French, R. S., Henderson, C., Ougrin, D., Slade, M., & Moran, P. (2014). Help-seeking behaviour and adolescent self-harm: A systematic review. Australian and New Zealand Journal of Psychiatry, 48(12), 1083-1095.
6. Australian Bureau of Statistics (2014). Causes of death, Australia 2012 Cat. no. 3303.0. ABS: Canberra.
7. Australian Bureau of Statistics. Causes of death, Australia, 2012 Explanatory notes, ABS: Canberra.
8. De Leo, D., & Heller, T. S. (2004). Who are the kids who self-harm? An Australian self-report school survey. Medical Journal of Australia, 181(3), 140-144.
9. Moran, P., Coffey, C., Romaniuk, H., Olsson, C., Borschmann, R., Carlin, J. B., & Patton, G. C. (2012). The natural history of self-harm from adolescence to young adulthood: a population-based cohort study. The Lancet, 379(9812), 236-243.
10. Martin, G., Swannell, S. V., Hazell, P. L., Harrison, J. E., & Taylor, A. W. (2010). Self-injury in Australia: a community survey. Medical Journal of Australia, 193(9), 506.
11. Gore, F. M., Bloem, P. J., Patton, G. C., Ferguson, J., Joseph, V., Coffey, C., ... & Mathers, C. D. (2011). Global burden of disease in young people aged 10-24 years: a systematic analysis. The Lancet, 377(9783), 2093-2102.
12. Australian Institute of Health and Welfare. Young Australians: their health and wellbeing 2007. Cat. no. PHE 87. Canberra: AIHW
Self-harm and suicide are behaviours, not psychiatric disorders, therefore neither is classified in the DSM-5 (1) or the ICD-10 (2). Similarly, suicidal ideation is relatively common and in itself is not a psychiatric disorder and therefore, is also not classified in diagnostic systems. However, while self-harm and suicidal behaviour do not constitute psychiatric diagnoses in and of themselves, it is widely recognised that they often occur in the context of a diagnosable mental disorder.
Studies consistently report that young people who suicide or make a serious suicide attempt often have a recognisable mental disorder at the time, such as depression, anxiety, conduct disorder or substance misuse (3,4).
While a number of tools/checklists/scales for risk assessment and management are available, these have poor predictive ability and should not be used in isolation to make treatment decisions (5). To assess whether a young person is engaging in self-harm or suicidal behaviour, a comprehensive clinical interview by a mental health professional is required.
General principles during an assessment (5,6):
- Initiate a therapeutic relationship by demonstrating acceptance of the person and empathy
- Engender hope when possible
- Explore the meaning of self-harm for that person
- Clarify current difficulties
- Observe their mental state (both verbal and non-verbal features)
A psychosocial assessment should include an assessment of needs and risks. These could include questions about the person's (5,6):
- Social and family circumstances
- Significant relationships that might be supportive or might represent a threat
- History of mental health difficulties
- Current mental health difficulties
- Use of drugs or alcohol
- Past suicidal intent or self-harm (e.g. methods, frequency)
- Current self-harm (e.g. methods, frequency)
- Current desire to die
- Current suicidal ideas
- Current suicidal plans
- Current suicidal intent
- Access to means to end their life
- Coping mechanisms and strengths (e.g. things that the person has used successfully in the past to cope with other difficult situations)
1. American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders: DSM-5. Washington, D.C: American Psychiatric Association.
2. World Health Organization. (1992). The ICD-10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. Geneva: World Health Organization.
3. Wagner, B. M. (2009). Suicidal behavior in children and adolescents. Yale University Press.
4. Fleischmann, A., Bertolote, J. M., Belfer, M., & Beautrais, A. (2005). Completed suicide and psychiatric diagnoses in young people: a critical examination of the evidence. American Journal of Orthopsychiatry, 75(4), 676.
5. National Institute for Health and Clinical Excellence. Self-harm: longer-term management. (Clinical guideline CG133). 2012.
6. Morriss, R., Kapur, N., & Byng, R. (2013). Assessing risk of suicide or self harm in adults. BMJ, 347, f4572.
Before deciding upon the most appropriate treatment for a young person who is self-harming or engaging in suicidal behaviours, the management plan should address the young person's immediate safety.
A safety plan is an agreement made between you and the young person who is suicidal that involves actions to keep them safe. It consists of a written list of coping techniques and sources of support the person can use to alleviate the crisis (1).
The young person should be engaged as much as possible in making decisions about a safety plan. When developing the plan, focus on what the young person should be doing, rather than what they shouldn't. The plan should also be for a length of time that the young person feels they can cope with, so that they can feel able to fulfill the agreement and have a sense of achievement (2).
As part of the development of a safety plan, a decision needs to be made as to whether hospitalisation is required, or if the young person can utilise existing support networks, such as family and friends, in carrying out their safety plan. A safety plan should include (1):
- The young person's early warning signs
- Coping techniques that might help them feel better
- People and social settings that provide a distraction
- People they can contact for help
- Professionals or agencies they can contact for help, and
- How they can make the environment safe
A template of a safety plan is available here.
A recent systematic review and meta-analysis (3) found that dialectical behaviour therapy (DBT), cognitive behavioural therapy (CBT), and mentalization-based therapy (MBT) were the most effective interventions for young people who had made a suicide attempt or had self-harm behaviours. Cognitive behavioural therapy with an integrated problem-solving component has also been found to help with underlying factors that might maintain self-harm, such as depression, hopelessness, and problem-solving skills (4). However, these findings must be taken with caution as they are from single trials, and replication of these results is a research priority.
UK Guidelines for self-harm (5,6) suggest the following aims and objectives in the treatment of self-harm:
- Rapid assessment of physical and psychological need
- Effective measures to minimise pain and discomfort
- Timely initiation of treatment, irrespective of the cause of self-harm
- Harm reduction (from injury and treatment; short-term and longer-term)
- Rapid and supportive psychosocial assessment (including risk assessment and comordibity)
- Prompt referral for further psychological, social and psychiatric assessment and treatment when necessary
- Prompt and effective psychological and psychiatric treatment when necessary
- An integrated and planned approach to the problems of people who self-harm, involving primary and secondary care, mental and physical healthcare personnel and services, and appropriate voluntary organisations
- Ensuring that the special issues that apply to children and young people who have self-harmed are properly addressed, such as child protection issues, confidentiality, consent and competence.
1. Stanley, B., & Brown, G. K. (2012). Safety planning intervention: a brief intervention to mitigate suicide risk. Cognitive and Behavioral Practice, 19(2), 256-264.
2. Mental Health First Aid Australia. Suicidal thoughts and behaviours: first aid guidelines (Revised 2014). Melbourne: Mental Health First Aid Australia; 2014.
3. Ougrin, D., Tranah, T., Stahl, D., Moran, P., & Asarnow, J. R. (2015). Therapeutic Interventions for Suicide Attempts and Self-Harm in Adolescents: Systematic Review and Meta-Analysis. Journal of the American Academy of Child & Adolescent Psychiatry, 54(2), 97-107.
4. Brausch, A. M., & Girresch, S. K. (2012). A review of empirical treatment studies for adolescent nonsuicidal self-injury. Journal of Cognitive Psychotherapy, 26(1), 3-18.
5. National Institute for Health and Clinical Excellence. Self-harm. (Clinical guideline CG16). 2004.
6. National Institute for Health and Clinical Excellence. Self-harm: longer-term management. (Clinical guideline CG133). 2012.
The following authoritative guidelines provide evidence-based information about the practical treatment of self-harm and suicidal behaviours:
Mental Health First Aid Australia. Suicidal thoughts and behaviours: first aid guidelines (Revised 2014). Melbourne: Mental Health First Aid Australia; 2014.
Mental Health First Aid Australia. Non-suicidal self-injury: first aid guidelines (Revised 2014). Melbourne: Mental Health First Aid Australia; 2014.
National Institute for Health and Clinical Excellence. Self-harm: longer-term management. (Clinical guideline CG133). 2012.
Royal Australian and New Zealand College of Psychiatrists. (2009). Self-harm: Australian treatment guide for consumers and carers.
Birmaher, B., Brent, D., & AACAP Work Group on Quality Issues. (2007). Practice parameter for the assessment and treatment of children and adolescents with depressive disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 46(11), 1503-1526.
National Institute for Health and Clinical Excellence. Self-harm. (Clinical guideline CG16). 2004.
Australasian College of Emergency Medicine and Royal Australian and New Zealand College of Psychiatrists. (2000). Guidelines for the management of deliberate self harm in young people.
Centre of Excellence in Youth Mental Health. MythBuster: Sorting fact from fiction on self-harm. 2010. Melbourne: Orygen Youth Health Research Centre.
Centre of Excellence in Youth Mental Health. MythBuster: Suicidal Ideation. 2009. Melbourne: Orygen Youth Health Research Centre.
Robinson, J., Hetrick, S. E., & Martin, C. (2011). Preventing suicide in young people: systematic review. Australian and New Zealand Journal of Psychiatry, 45(1), 3-26.
Hawton, K. K., Townsend, E., Arensman, E., Gunnell, D., Hazell, P., House, A., & Van Heeringen, K. (2009). Psychosocial and pharmacological treatments for deliberate self harm. The Cochrane Library.
Laye-Gindhu, A., & Schonert-Reichl, K. A. (2005). Nonsuicidal self-harm among community adolescents: Understanding the "whats" and "whys" of self-harm. Journal of Youth and Adolescence, 34(5), 447-457 | 1 | 3 |
A wireless router is a device that works like a regular router combined with a wireless access point. It allows internet access, or a network of computers, without a cabled connection. Wireless routers are now probably the most common kind of communications device and are so widely used that you can find them nearly anywhere.
A router is a hardware device that carries packets between networks and is attached to at least two of them. Serving as gateways, routers are the means by which two or more networks are connected. The aim of these wireless routers is simply to create a link over which data can easily be sent to the right places.
One function of a wireless router is to maintain routing information; this stored configuration is known as a routing table. Routers are also able to filter traffic, whether incoming or outgoing, using the Internet Protocol (IP) address. The router acts like a central hub, and traffic is switched in the right direction based on lookups in the routing table.
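To make the routing-table idea concrete, here is a toy longest-prefix-match lookup written with Python's standard ipaddress module; the prefixes and next hops are invented for illustration and are not tied to any particular router.

```python
import ipaddress

# Toy routing table: (destination prefix, next hop). Invented values.
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),       "ISP gateway"),
    (ipaddress.ip_network("192.168.1.0/24"),  "LAN switch"),
    (ipaddress.ip_network("192.168.1.64/26"), "Wi-Fi access point"),
]

def route(dst):
    """Pick the matching entry with the longest prefix, as a router would."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("192.168.1.70"))   # -> Wi-Fi access point (most specific match)
print(route("8.8.8.8"))        # -> ISP gateway (default route)
```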
There are two kinds of wireless router, each well suited to its designated functions and uses. The first type lets you connect personal computers within a single household, provided the computers are within a specific range of the wireless router; it also lets users access the web wirelessly. The second kind of wireless router can cover a much wider area and is often used in office workplaces.
The wireless router has many flexible functions that can make work easier. If you are going to search for a good wireless router, some factors must be considered. In deciding on the router to use, you need to make certain it supports the right protocol. You also need to weigh factors such as the speed of your internet connection, where the router will be set up, and whether the router and the modem are well matched. The more you know about the uses of a router, the better use you can make of it.
#1 ASUS Black Diamond Dual Band Wireless-N 600 Router (RT-N56U)
My Experience with the ASUS Black Diamond Dual Band Wireless-N 600 Router (RT-N56U)
The ASUS is a better router than a handful of the high-quality routers available on the market. I had bought a NetGear 750 and, after a few days of struggle, I wanted to return it because of hard-to-rely-on connections. I tried various channels for the Wi-Fi and none of them helped. After reading the high reviews for the ASUS, I opted for this model and I am very glad I made a choice that is paying off now.
Excellent coverage (even on my 3rd floor, and my router is on the first floor)
Very reliable: it has not dropped even once since the start (over 10 days now)
Better speed than other routers
Excellent UI (I disagree with others on this point: I liked this UI much better than NetGear's. Maybe it's because I am fairly technical, which might be the reason too.)
Wired systems run at a great speed using the Gigabit ports; the NetGear router performed pretty much the same on wired network speed.
Media servers and print servers work well too, without issues.
Two USB ports are very useful
The built-in BitTorrent client and FTP server are nice feature additions and are very useful. I have not looked into them much yet.
Static IPs are allowed only for 8 devices, which is a concern for a few. However, I only have a few servers running which require internal dedicated IPs, and it should be sufficient for the moment. It is a limitation, though not one affecting me at this time.
Overall, this is a great router for the money compared with a few other routers on the market that are similar in price with features lesser than or similar to this ASUS model.
#2 Cisco-Linksys E4200 Dual-Band Wireless-N Router
My Experience with the Cisco-Linksys E4200 Dual-Band Wireless-N Router
I am a very long-time Linksys user and I personally used a WRT54G for over 5 years. It worked perfectly until a couple of days ago, when I realized that it was dropping wireless devices 1-2 times each week. So, I decided it was time for a serious upgrade. My choice was the Linksys E4200.
Since I have AT&T UVERSE, the 2Wire box from UVERSE is essentially the cable modem/router. Initially, I believed the setup of the E4200 would be just like the way I configured the WRT54G behind it. Wrong . . . it is quite different. So, I share the setup steps below; I hope this is helpful for other UVERSE customers.
First, you need to set the LAN IP address of the E4200 so that it sits on a different subnet from the 2Wire (see the short subnet check after these steps), i.e.,
Connect the E4200 to its power supply. Don't connect it to the 2Wire yet. Wait until the 'cisco' LED is ON and stable
Connect your PC using a LAN cable to a LAN port on the E4200 (Port 1-4)
Open a browser, enter 192.168.1.1, then ENTER. You should see the 'cisco' router setup. If it is the normal blue 'cisco' screen, look at the bottom and select go to setup
Now, you should see the 'cisco' SETUP screen
Choose Setup - Basic Setup - IP address
Change the IP address from 192.168.1.1 to 192.168.2.1
Make sure your internet setup - connection type = Automatic Configuration - DHCP
Set the time zone of the router according to where you are
If you have multiple routers, I would suggest that you change the router name from Cisco53324 to whatever you prefer. This way you can identify it better later (this is not mandatory, just a nice thing to do)
Click SAVE (this button is at the bottom of the page)
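The reason for changing the LAN address in the steps above is that the E4200's LAN must not overlap the subnet the 2Wire already hands out. A tiny check with Python's standard ipaddress module (addresses assumed to be the ones used in these steps) makes the conflict explicit.

```python
import ipaddress

two_wire_lan  = ipaddress.ip_network("192.168.1.0/24")  # subnet handed out by the 2Wire
e4200_lan_ok  = ipaddress.ip_network("192.168.2.0/24")  # after the change above
e4200_lan_bad = ipaddress.ip_network("192.168.1.0/24")  # factory default

print(two_wire_lan.overlaps(e4200_lan_bad))  # True  -> address conflict, routing breaks
print(two_wire_lan.overlaps(e4200_lan_ok))   # False -> the two networks can coexist
```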
Now, connect the E4200 to the 2Wire.
Leave the laptop connected to the E4200 as mentioned before; you will use it again to check
With another LAN cable, connect one end to a port 1-4 on the 2Wire box and the other end to the E4200 Internet (yellow) port. Within a few seconds, the E4200 should auto-detect and set itself up. You should be able to see the two LEDs on the E4200 yellow port: the green one is ON and the yellow one blinks
Back on the laptop, click STATUS (top right corner)
Look for the Internet IP address. It will display something like 192.168.something
If it still shows …, you have to click the RENEW IP address button. It will fetch itself a fresh new IP address from the 2Wire
At this point, you can open another browser window and go to Google to check that it can access the internet
Technically, at this point your E4200 should be working OK behind the 2Wire. Any PC attached to the E4200 LAN ports is OK.
Next, set up your new wireless network. At this point, you don't have to follow exactly what I do; it all depends on which type of wireless devices you have, meaning whether they are the old wireless A/B/G or the new N standard. In my case, nearly all my devices are B/G and my wife and kids aren't so technical, so I do the following to make sure the wireless settings are fine without needing to reconfigure all of their devices once more
On the 'cisco' Setup screen, click Wireless - Configuration View
Change from Auto to Manual
Network Mode = Mixed
Enter the Network Name (SSID); in my case I use the same name as before for the 2.4 GHz network and a new name for 5 GHz
Click Wireless Security and set Security Mode = WPA2/WPA Mixed Mode for the 2.4 GHz network and enter the Pass Phrase (I use the same one as before). For the 5 GHz network, choose what you want based on the options your N devices support. In my case, I simply set it so that it is the same as my 2.4 GHz network
Click SAVE once again
At this point, you are ready. Incidentally, make sure to turn off the wireless on your 2Wire box
I asked my wife and kids to reconnect their devices, which was easy. You just need to find the available network, connect to the 2.4 GHz or 5 GHz one, and re-enter the same passcode. Then everything works fine.
As for the quality and performance of the E4200, here is what I have seen so far.
Using a LAN cable directly to the E4200: excellent. It runs at 1 Gbps rather than the 100 Mbps of my WRT54G, so access to my external storage is definitely faster (since it is attached to another port of the E4200)
For wireless, the signal for 2.4 GHz is very strong. I get full bars everywhere inside as well as in the backyard. But 5 GHz is a little disappointing: just one room downstairs and I get only 2-3 bars. However, I am less worried (yet), since most of my devices aren't N standard.
It seems they provide other cool features in the E4200 like parental controls, game control, traffic control, etc. But in truth, I don't rely on them at this time.
Another really good feature is that it includes IPv6. This is really the next big thing in IP addressing. I don't use it yet (I just stay with IPv4), but I understand that the E4200 will serve me well for the next several years as I progressively upgrade.
#3 NETGEAR Home Theater Internet Connection Kit
My Experience with the NETGEAR Home Theater Internet Connection Kit
It seems like today virtually every electronic component requires internet connectivity for one reason or another. In my family room, as I type this, there are 4 video components (Blu-ray player, Roku, cable box, and Wii) and 4 laptops. We do have a robust wireless router network and it has worked well for us.
However, the router is located in the basement and, since I was planning to move the Wii and a computer two floors up to the second floor, I was concerned about the quality of the wireless signal.
This kit means you can connect to a high-quality, high-speed network anywhere in your home. Setup is a breeze and is about as easy as hooking your laptop up to a docking station: there's practically only one place each plug can go.
I have had powerline adapters before, but you only got one adapter (connection) per package, and those adapters cost about the same as a wireless router, so I never really considered powerline adapters an affordable solution. However, this kit is much more economical because it provides, essentially, 4 connections for about the same price as a single powerline adapter package.
As for the quality of video components attached via powerline versus connecting through a wireless router: in truth, I really can't see a difference one way or the other. I suspect this has to do with the fact that the limiting factor in data speed is the transmission speed to your house, not the transmission within the house once the data has entered your home. Think of it as having a 1/2-inch pipe coming to your house and a 1-inch pipe indoors: just because there is a bigger pipe inside, the limit on how much can flow through the pipe is still set by the smaller 1/2-inch pipe. That is the bottleneck.
These kits will be a real benefit for older houses with "real plaster" walls and more solid building materials. They are not going to replace your wireless solution, but they give a simple way of extending the capability of your home network. And as more products require internet connectivity, the Netgear powerline adapter kit will end up essential if you want to maintain the integrity and bandwidth of your home network.
#4 Netgear N900 Wireless Dual Band Gigabit Router
My Experience with the Netgear N900 Wireless Dual Band Gigabit Router
This is hands down the best router you can buy today. It is an updated version of the WNDR3700 that I also reviewed. While the setup is not as easy as you might hope, if you have set up other routers not made by Apple, it is virtually straightforward and on par with others.
Is it fast? Yes! Not 900 Mbps fast as the name might make you believe, but it is as fast if not faster than other Wireless-N routers. The Gigabit LAN ports are great, but on par with the previous version.
Where this one stands out is the radios (in 2.4 and 5 GHz). They are much more powerful than last year's model, enabling a truly pervasive wireless solution in almost any home. Really, I could get a good signal two houses down from mine. In addition, I have had this constantly on for at least 2 months and have yet to need to reboot it.
*Blazing fast wired connection (Gigabit)
*Fast wireless (I've peaked at 270 Mbps), though they say you can get 450 Mbps (I have not seen any devices that can do that).
*Range – I could be at one of my close neighbors' homes and still get a solid connection
*Reliable – no restarting needed yet (over 2 months)
*2 USB ports (share a printer and a hard disk)
*Marketing of the "900" – this bad boy cannot produce 900 Mbps wireless, nor are there consumer devices capable of receiving that – it's unfortunate they felt they had to do that
*Cost – a little pricey; however, you get what you pay for
5/5 stars – You won't find a better router
#5 Linksys E3200 High-Performance Simultaneous Dual-Band Wireless-N Router
My Experience with the Linksys E3200 High-Performance Simultaneous Dual-Band Wireless-N Router
I've had Linksys (Cisco) routers before and have been pleased with them. I've recommended them to family members too. The only problem is that prior routers were not easy to set up for novice users, and for that reason I wound up setting up the routers I recommended to family members myself. Earlier models tried to offer wizard-based interfaces, but they often didn't work very well, so I usually set them up through the administrator page. The brand-new E3200 may be the easiest wireless router I have ever set up. The Linksys setup software is excellent and makes creating a wireless network much simpler for the average user, which should reduce returns and help-desk calls for Linksys. I installed the program on a MacBook Pro and it was quite simple to install and use. The interface is very easy to use for setup as well as for modifying the router. Advanced users can still go to the admin page to tweak settings for their particular needs; the admin site is the classic-style Linksys administrator page. Personally, it took me basically a few minutes to type in the wireless SSID name and password during setup and let the software complete the setup. Linksys sure hit a home run with the setup interface. After installation, the program provides a connection manager that lets you edit settings through an easy-to-understand interface. Using the interface you can add computers and devices to your network (manually or via a USB-based Easy Connect that transfers settings to another computer), and set up parental controls, a guest network, and basic router settings.
In addition to the easy setup, the brand-new Linksys E3200 router has several nice features, like the following:
Dual-band wireless N (2.4 GHz and 5 GHz)
Gigabit Ethernet ports, four in total.
USB port for shared storage.
Quality of Service to prioritize network traffic.
If you are looking for a brand-new top-end dual-band wireless N network, I would definitely look at the new Cisco (Linksys) E-series wireless routers. You'll find a number of routers for different needs and price points, so you'll be able to choose the one that suits your particular needs. I don't think you'll find an easier wireless router to set up if you are not comfortable with network hardware and concepts.
#6 Airport Extreme 802.11N (5TH GEN)
My Experience with the Airport Extreme 802.11N (5TH GEN)
Although setup is quick, it is the top-end 802.11n dual RF bands and the creation of my own personal cloud storage (an HD on the USB port) that make the AirPort Extreme a best-in-class choice!
After reading the other reviews, I understood this would be quick and easy. I started a pot of coffee thinking I could have a cup while plugging in the AirPort Extreme Base Station (AEBS) and setting it up. Listed below are the steps:
Attached an Ethernet cable from the AEBS to my ISP connection. Plugged in the AC adapter and cord. The AEBS powered up. The status light showed green for about a second, glowed amber for several seconds, then stayed amber until the AEBS was set up from the computer.
On the MacBook Pro (wireless access works fine for this), the AirPort Utility application had already launched and was waiting for me (otherwise, go to Applications > Utilities > AirPort Utility.app). I followed instructions that included typing in the router name and a pair of passwords. The default AEBS configuration selects channels and RF bands automatically to optimize speed.
Plugged an additional hard disk drive (in my case: a Mac OS Extended (Journaled) formatted 1 TB HD) into an AC outlet and the USB port. As soon as the HD had spun up, it showed up as a network drive on the MBP in 'Finder'. I quickly created a folder, moved data, and read it back.
At this point, the coffee machine beeped to tell me my coffee was ready. I was done just before the coffee was ready – three minutes from opening the box to being operational! Gotta luv it.
Basic Performance Testing:
Not wanting to let the coffee go to waste, I proceeded with a couple of performance tests. I completed some very basic data throughput tests by moving files from the MBP through the AEBS to the HD. This test arrangement kept my ISP download and upload data rates out of the equation. For the wired tests, the MBP was attached to one of the three AEBS Gigabit ports.
Test 1 (a control test configuration between MBP and HD via USB on MBP):
Email HD: 33.8 MBytes/sec
Read from HD: 34.3 MBytes/sec
Test 2 (wired data)
From MBP to AEBS via Gigabit port, then from AEBS to HD via USB): 13.6 MBytes/sec
From HD to AEBS via USB, then from AEBS to MBP via Gigabit port): 18.3 MBytes/sec
Test 3 (wireless data – 5 GHz RF band)
From MBP to AEBS, then from AEBS to HD via USB): 7.8 MBytes/sec
From HD to AEBS via USB, then from AEBS to MBP): 12.6 MBytes/sec
Test 4 (range test, 5 GHz RF band between MBP and AEBS with a max capacity of 300 Mbits/sec):
3 feet, devices in close proximity: 300 Mbits/sec
50 feet, inside, no ext walls in path: 243 Mbits/sec
70 feet, outdoors, one ext wall in path: 144 Mbits/sec
80 feet, outdoors, one ext wall in path: 104 Mbits/sec
The default settings seem to provide high bit-rate connections. Using 'manual setup' in AirPort Utility.app, I tried several variations of the configuration without improving the speed or range of the 802.11n wireless provided by the default settings of the AEBS. Reading data from the HD back through the AEBS to the MBP was always faster than writing data to the HD. Adding Ethernet to the data path (Test 2 compared to Test 1) reduced data rates by roughly half. Adding Wi-Fi to the data path (Test 3 compared to Test 2) reduced data rates to about two-thirds. Range test performance was good for distances within 50 feet.
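One thing to keep in mind when reading these numbers is that the file-transfer figures are in megabytes per second while Wi-Fi link rates are quoted in megabits per second. A quick conversion, reusing the figures from the tests above, shows roughly how much of the nominal link rate the transfers actually achieved (a rough sketch, not a rigorous benchmark).

```python
# Rough comparison of measured file-transfer speeds (MB/s) against nominal link rates (Mbit/s).
# Figures are the ones quoted in the tests above; 1 MB/s = 8 Mbit/s.

tests = {
    "USB direct (read)":     (34.3, None),    # no network link involved
    "Gigabit wired (read)":  (18.3, 1000),
    "5 GHz wireless (read)": (12.6, 300),
}

for name, (mb_per_s, link_mbps) in tests.items():
    mbps = mb_per_s * 8
    if link_mbps:
        print(f"{name}: {mbps:.0f} Mbit/s (~{mbps/link_mbps:.0%} of the {link_mbps} Mbit/s link)")
    else:
        print(f"{name}: {mbps:.0f} Mbit/s")
```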
My own personal storage cloud:
Initially, I imagined simply using the HD (on the AEBS USB port) as a network drive for an SVN (i.e., software version control) repository for software development on my MBP. But I realized this drive is a kind of storage spot for all my devices (MBP, iPad, iPhone, iPod touch) that's easily accessible on my local Wi-Fi and, with a VPN connection, from anywhere with Wi-Fi internet access. Simply put, I have my very own personal cloud! We're not talking about a paltry handful of GB either, but an entire TB of dedicated, exclusive, mine-only cloud. Now, that's cool.
Why buy the fifth generation AEBS:
Top end 802.11n wireless performance
Quick and easy setup
USB port for attaching a hard disk or printer
Ability to create my own personal AEBS Wi-Fi storage cloud for all my devices
Guest access to the ISP connection without access to other devices or the attached USB drive
Sleek clean stylish look
I am so thrilled with this purchase.
#7 Cisco Linksys EA3500 App-Enabled N750 Dual-Band Wireless-N Router with Gigabit and USB
You probably think that the Linksys EA3500 is just another router with so-so performance, but you should really check out the device and take a closer look at it. Not only will this device deliver satisfying results, you will also enjoy the various perks and benefits it offers. When you buy the Linksys EA3500 wireless router, you get ease and flexibility in your home or professional setup.
The Important Elements – The reason this Linksys EA3500 wireless router is so popular is its fast internet connection. You can get wireless transfer rates of up to 300 Mbps, without extra charge or tweaking. You can also enjoy the dual-band system that makes your internet connection smooth and flawless, with as little network interference as possible. Since the router also comes with MIMO antennas for an enhanced array, you can certainly widen your connection and get bigger network coverage. The router is equipped with Gigabit Ethernet ports for better speed, up to 10 times faster than standard Ethernet. Thanks to the USB port, you can also connect the router to other devices, such as printers or computers.
Flexibility and Uses – Not only is this router able to provide a reliable and fast connection, it is also able to deliver ease and flexibility for you as an active user. For example, this router can provide solid online performance for home theaters and household needs. It is also able to stream high-quality music or video. If you want to enjoy multiplayer gaming, this router can deliver the best performance. Setting up the device and making changes to the settings is very easy and simple. So, what more could you expect from such a powerful device?
Cisco Linksys EA3500 General Specs and Features
There are also several basic benefits that you can enjoy from this Linksys EA3500, such as:
Dual-band support to boost power and performance, giving you smooth video streaming and easy file sharing
Multiple radios for the wireless technology, to enhance coverage and avoid dead spots
Separate spatial streams for the 2.4 GHz band and the 5 GHz one.
Most users say that this Linksys EA3500 wireless router is a very special and handy device. It is definitely fast, the network coverage is quite impressive, and they really like how easy it is to set up and operate. However, there are several concerns from users, especially about the privacy settings and the network coverage. For some users the coverage isn't very impressive or wide; despite the fast connection, the range is underwhelming, so they don't really like it. Some people also have problems with the privacy settings, feeling that the company somehow manages to change settings without their permission, so their privacy is being 'attacked' and trespassed upon. If you want to think it through, this Linksys EA3500 has a lot to offer, but you need to weigh the other aspects carefully too.
#8 Apple Airport Extreme 802.11N 5TH GEN
The Apple Airport 5th Gen was made public in the summer of 2011 and its sales have been increasing ever since. Among everything else, what earns the router such enthusiasm is the strength of its Wi-Fi connection, a feature that Apple alone can get away with calling Gigabit Wi-Fi. Apple has a long record of being a front-runner in wireless standards, maintaining a high level of network performance even while the router connects several devices at the same time, as is also the case for this Apple Airport Extreme.
The Gigabit Wi-Fi – The exceptional Wi-Fi connection of the Apple Airport 5th Gen comes mainly from the dual-band feature, working simultaneously on 5 GHz and 2.4 GHz for maximum compatibility and range. Consequently, all devices within network range can automatically use the band with the best efficiency, hassle-free and without manual selection. You will have no difficulty connecting, for example, your iPod or iPhone on the 2.4 GHz band and your Apple TV or Mac computer on the 5 GHz spectrum. In addition to the dual-band feature, the router uses the latest wireless technology, 802.11n, letting you enjoy Wi-Fi performance up to five times better and a wireless range up to twice as wide as an 802.11g network. The 802.11n standard also allows for multiple-input, multiple-output (MIMO) data stream transmission, so you can transfer your data much faster.
Secure Connectivity – The Apple Airport Extreme is a strong wireless base station for business, school, and home because of its secure connection as well as its multiple functions, including printing, backups, entertainment, and many more. The Guest Networking feature lets you share the internet connection with up to 50 guest users, and the AirPort Disk feature lets you share your hard drive. The multi-function connectivity is well supported by a built-in NAT firewall for maximum security, not to mention 40-bit and 128-bit WEP encryption, WPA/WPA2 protected access, and MAC address filtering.
Apple Airport Extreme 5th Gen Detailed Specifications:
The speed and security are just two main internet connection features of the router. Here are further specifications for you to check.
Dimension 6.5×6.5×1.3 inches
Weight 1.66 pounds
Compatible with the Wi-Fi certified 802.11n , 802.11g, 802.11a, or 802.11b and DHCP, PPPoE, NAT, VPN Passthrough (L2TP, IPSec, and PPTP), SNMP, DNS Proxy, IPv6 (manual tunnels and 6to4)
802.1X, LEAP, PEAP, TTLS, Fast, and TLS RADIUS authentication
1 Gigabit Ethernet WAN port for a cable or DSL modem and 3 Gigabit Ethernet LAN ports for computers or network devices
USB 2.0 port for an external drive or printer
Operating temperature and humidity: 32-95 F (0-35 C), 20%-80%; storage temperature and humidity: -13-140 F (-25-60 C), 10%-90%
Internal antenna integrated
The base station, a printed booklet, and the power supply come with the Apple Airport Extreme purchase package. Flaws include the lack of a web-based admin interface, the absence of 40 MHz channel support, only three LAN ports, and limited routing features. Still, if you need a router with easy setup, Gigabit ports, and routing speed of more than 400 Mbps, the Apple Airport 5th Gen is what you need.
#9 D-Link DIR-826L Wireless-N600 Dual-Band Gigabit Cloud Router
The growing use of computers in daily activities has created higher demand for various computer-related devices. Local area networks are now common for supporting business activities as well as home-based computer activities like online gaming, and the demand for simpler local networking gave rise to wireless networking technology. A wireless router is one solution for building a simple network at home or in a commercial building, and the D-Link DIR-826L N600 wireless cloud router is one of the products available on the wireless router market.
D-Link DIR826L N600 Wireless Router Specification – This small and compact wireless router weighs 10.9 ounces and measures 4.5 x 3.8 x 5.5 inches. The D-Link N600 wireless router can provide a 600 Mbps wireless connection, a speed that is very suitable for online gaming and video streaming. Its secure and reliable connection can be integrated with a network amplifier, and the dual-band technology provides a simple, interference-free connection with bigger bandwidth. The 2.4 GHz band provides a proper foundation for supporting high-definition video streaming. It is compatible with Windows XP SP2 or later as well as Mac OS X 10.4 or newer. Four Gigabit Ethernet ports and a USB 2.0 port are available to connect various devices, and the router comes with a one-year warranty.
D-Link DIR826L N600 Wireless Router Features – The dual-band technology on the D-Link N600 wireless router is one of its most important features. It provides an interference-free connection for multiple computers and devices and shares the available connection with simple bandwidth management. The high-powered amplification on this wireless router provides bigger coverage, which makes it very suitable for home use or small to medium business users. The wider coverage, combined with the SharePort Mobile feature, provides simple access to local files: users can access photo, music, and video collections wirelessly from a workstation, a mobile phone, or another mobile gadget like a tablet.
D-Link DIR826L N600 Wireless Router Setup – One of the most useful aspects of the D-Link N600 wireless router is the simplicity of its installation and setup. The zero-configuration technology it introduces provides simple setup and connection when adding new devices to the network, and the router can be connected and configured as a D-Link Cloud Router to provide a secure and reliable wireless connection. The simple plug-and-play approach to adding devices means anyone can use this wireless network easily, and the network setup can be managed from Android gadgets or an iPhone, using 128-bit encryption for security.
The D-Link DIR-826L wireless router is also designed with a greener approach. It is affordable and delivers a nature- and environment-friendly design: it conserves energy, is made of harmless substances and materials, and ships in recyclable packaging intended to ensure the device does not add harmful substances as waste.
#10 Cisco Linksys EA4500-NP N900 Dual-Band Wireless Router
Computers have become a necessity in our daily lives, with many activities depending on them, and their widespread use across various fields is one reason computer technology has developed so quickly. Networked computers are an important development for supporting activities that involve sharing data between machines, and the wireless router was created to simplify those networking needs: connecting workstations is simpler and easier with one. The Cisco Linksys EA4500-NP N900 Dual-Band Wireless Router is one of the wireless routers available on the market; it is small, light, and very easy to use for solving computer networking requirements.
Cisco Linksys EA4500-NP Product Specification – This wireless router uses Wireless-N technology and can transmit and receive over a 3x3 antenna configuration. The Linksys EA4500-NP uses six internal antennas and offers four Gigabit-speed ports. It has native IPv6 support and is app-enabled for the Cisco software. The router is compatible with Windows and Mac operating systems: Windows requires XP SP3 or later with Wi-Fi and a DVD or CD drive, while Mac requires OS X Leopard 10.5 or later with Wi-Fi and an available DVD or CD drive for the driver installation. The package comes with a two-year warranty from the manufacturer.
Cisco Linksys EA4500-NP Product Features – The Cisco Linksys EA4500-NP is a suitable choice for providing networking in a large house or a high-density residential complex. It can connect all of your wireless-enabled devices as well as USB storage or printing devices. The ultra-fast wireless connection supports transfer rates of up to 450 Mbps, and the simultaneous dual-band design prevents network interference and maximizes data flow to the connected devices. The Gigabit Ethernet ports provide connections up to ten times faster than standard Ethernet ports, and the 3x3 MIMO technology in the internal antennas provides larger coverage and a longer, more reliable connection range.
The Cisco Linksys EA4500-NP N900 wireless router offers other features as well, such as Cisco Cloud Connect for simpler and easier access to the network from mobile gadgets. This feature lets the network owner monitor the broadband connection and all of the connected devices. Child protection for internet browsing is available through the parental control application, which provides better monitoring and protection for younger users on the wireless network. A manual setting to give certain connections more bandwidth can prevent delay or buffering during important events, which makes this wireless router very suitable for video streaming and online gaming.
#11 Netgear WNDR3700 Review
Who should buy this: Customers who need a high-performance, high-speed router for their home or small office.
The Netgear N600 Wireless Dual-Band Router offers excellent speed and performance for your small office or home. Its processor is the most powerful of any other consumer wireless router on the market today. It offers dual-band Wireless-N, USB storage access, gigabit Ethernet ports, and several other options which allow you a fully customizable experience with your Wi-Fi network. If you require a high level of throughput and unrivaled performance from your wireless network, the Netgear N600 is an excellent choice.
Speed and performance: The Netgear N600 has a 680 MHz processor which gives you power and flexibility for your network. There are also four gigabit Ethernet ports to enable wired speeds of up to 1000 Mbps. There are also eight internal antennas, with dedicated antennas for each band; the power amplifiers give you the maximum possible wireless range.
Dual-band performance at 2.4 GHz and 5 GHz frequencies. This gives the fastest throughput performance available, giving you a combined speed of up to 600 Mbps with a real throughput of over 350 Mbps on the network.
Easily add an external USB storage device and give everyone on the network access to it. Utilize the built in USB port on the router and with the Netgear ReadyShare feature, the storage device will be available to everyone on the network to share files, documents, music and more. ReadyShare supports a wide range of file formats, and has a simple plug-and-play performance.
When using the USB share option with the protect feature enabled, you can only access protected files on the shared device if you log in using either Internet Explorer or Firefox.
The Netgear N600 is an excellent choice if you need high speed and great performance. For small businesses which want to be able to easily share files across a network, the USB port on the router makes it quick and easy to set up file sharing and keeps your shared files stored in a centralized location. The router setup is quick and easy and walks you through the process from start to finish simply by running the Smart Wizard Push n Connect disk.
#12 Cisco-Linksys E3000 Review
Who should buy this: Customers who want state of the art performance and security in a quick and easy to install wireless router which puts you in control of your Internet experience.
The Linksys E3000 Wireless-N Router is a great choice for users who want a highly-customizable WiFi router which gives them a lot of options, security, and speed without the high-dollar price tag of most high-end routers. The E3000 is currently on sale at Amazon.com for only $146.66 which is over 15% off. The router comes with a built-in USB port and UPnP AV media server to allow you to effortlessly share your files over your network.
You can also stream media content to a PS3, Xbox 360, or any other compatible device. The Wireless-N Router also comes with Cisco Connect software to help you be able to manage your wireless network, and will get you set up in a few simple steps. Blu-ray players, DVRs, gaming consoles can all be connected through the Linksys E3000 and it will give you optimal game performance with its 2.4 GHz and 5 GHz speeds.
Simultaneous Dual-Band: Experience smoother and faster video streaming, gaming and file transfers. The Linksys E3000 gives you the ability to use the 2.4 GHz band for emails or web surfing. Connect other computers or devices to each other and to the Internet using both the 2.4 GHz and 5 GHz bands.
Easy to use network customization in the easy to access Advanced Settings of the WiFi Router.
Peace of mind by having the ability to restrict Internet access during certain hours, or to block certain websites.
Cisco Connect software to give you a highly personalized experience. The Linksys E3000 Wireless Router with Cisco Connect gives you the ability to quickly add multiple Internet-enabled devices. You can also add password-protected Internet access to guests and visitors quickly and easily without having to give access to your computer, which helps to keep your family’s information private and secure.
The Linksys E3000 WiFi Router does not come with an instruction booklet in the box. There are full set up instructions on the startup CD, however, which walks you through the set up process from start to finish, and will test your new router's functionality and help with troubleshooting should you need assistance.
For a savvy consumer who wants a great product at an excellent value, the Linksys E3000 Wireless-N Router is a must. From set up to management, this is a very user-friendly router which gives you the power to control your Internet experience. The E3000 gives you the ability to add multiple computers, gaming consoles, and media players to the network easily and quickly, as well as giving you the power to control who has access to your files, and your Wi-Fi connection.
#13 Cisco-Linksys E2000 Review
Who should buy this: Customers who want an affordable Wi-Fi router which gives them a secure, extended range for their home network.
The Linksys E2000 Advanced Wireless-N Router is a great choice for users who want a customizable Wi-Fi router which gives them parental controls and password-protected Internet access on a separate network to allow visitors the ability to access the Internet without having access to your computers or data.
The Cisco Connect software allows you to quickly set up the wireless router in a few simple steps. The extended range is perfect for larger homes or customers who need to be able to cover a lot of square footage.
Selectable Dual Band technology to connect wireless devices such as computers, Internet-capable HDTV’s or Blu-ray players, gaming consoles at 2.4 GHz or 5 GHz.
Cisco Connect software allows you to set up your wireless network in a few simple steps.
Block specific websites or restrict Internet access during certain hours with Cisco Connect’s Parental Controls.
Four Gigabit Ethernet ports (10/100/1000 Mbps) allows you to share files more quickly with other Gigabit-enabled devices such as computers, hard drives and servers.
The E2000 gives you the ability to quickly and easily customize your network, and to add other Internet-capable devices to the network.
When using the default set up, the router creates a guest network which can slow down its performance speed. Either during set up or after set up, go into the customization and delete the guest account until it is needed.
The Linksys E2000 Advanced Wireless-N router is an excellent choice for users who want to be able to cover a large home or a lot of square footage, but still have customization and security options available to them. The selectable dual-band gives you the ability to connect gaming consoles, computers, and other wireless capable devices for the freedom of file transfers and media sharing between devices. The four Gigabit Ethernet ports allow you to connect other Gigabit Ethernet-enabled devices in order to quickly transfer data at speeds up to 1000 Mbps.
#14 Apple AirPort Extreme Base Station Review
Who should buy this: Customers who want an easy to use router with dual-band support and excellent performance. Mac users may also favor this wireless router over other viable alternatives available on the market.
The Apple AirPort Extreme Base Station comes with simultaneous dual-band support, making it a perfect wireless access point for your school, home, or small business, with 802.11n Wi-Fi access for Mac computers as well as PCs and Wi-Fi devices such as your iPod touch, Apple TV, and iPhone.
Simultaneous dual-band base station supports 802.11b/g as well as 802.11n.
An included USB 2.0 port to connect a printer and/or hard drive to share across the network.
Configure a network to easily share your internet connection with temporary visitors without giving them access to your shared files and folders.
Three Gigabit Ethernet ports.
The Apple AirPort Base Station allows for quick and easy network set up, even for networking novices.
Because the router is an Apple one, if you connect an external USB hard drive to it, the drive needs to be Mac-formatted or the Windows machines accessing it need to be running the MacDrive software.
Conclusion – The Apple AirPort Extreme Base Station is an excellent router for security, stability, and the ability to quickly add new devices and share files between computers. It works well for helping Macs and PC’s to be able to communicate and share files between them on a network, and allows for external file/print sharing within a home or small office network. Amazon has it in stock and on sale.
#15 D-Link DIR-655 Review
D-Link has released a respectable contender into the field with the introduction of its DIR-655 Xtreme N Gigabit Router. With a slew of features and better performance than most routers of the same price, we stand amazed at how much oomph this little thing has. Certainly worth every penny, the DIR-655 is a router that you need to have if you’re looking for a solution for a home or small office network. Let’s see how this router stacks up.
Design and Appearance – The DIR-655 doesn’t exactly stand out as far as appearance goes. It looks pretty similar to other routers, boasting three antennae and the typical LEDs you’d come to expect from a gigabit router. It’s all white, except for the front panel, which has a black band. A white strip is inside of it, which houses the blue LEDs you see in most electronics today. Not exactly something we’d write home about, but since it’ll be tucked away and out of sight anyway, this isn’t a big problem.
Features – Here’s where the DIR-655 distinguishes itself from the rest of the pack. To begin with, features can be added to this device as they’re released via firmware, which is an easy process that only takes a minute or two. This is done with the USB port on the back. Of special note is the Wi-Fi guest zone on the DIR-655, which separates your friends from the systems that use the network regularly — they can’t access files or change settings. Like any other network, the guest zone can be password-protected. This model also supports drive-sharing, and enjoys faster throughput than most of its competitors in this area. As far as ports go, it has the usual 4 LAN ethernet ports, and 1 WAN ethernet port for the modem.
Performance – The DIR-655 Extreme N Gigabit Wireless Router really amazed us with its speed and overall performance. According to various comprehensive tests, this D-Link model performed at 112.56Mbps throughput at short range, which is a seriously impressive number compared to the next-highest router, at a much smaller 83.3Mbps. We obviously recommend using this router at short range (in a house or medium office setting) for the best performance.
At Draft-N speeds, we enjoyed zipping around the Web without losing any time. We didn’t wait for websites to load, and streaming video was a breeze.
Conclusion – We can’t expect any router to be perfect, but the DIR-655 came pretty close to perfection. Overall, we give the DIR-655 very high marks, and highly recommend it for those who seek speed and performance without compromise. Amazon has it on sale (an incredible deal if you ask us).
Wireless Router Review’s guide to buying a Wi-Fi router
A wireless, or Wi-Fi, router is a device which allows you to connect multiple devices to the internet all at the same time. Wi-Fi routers also give you a way to move data, or even stream media between devices across your home or small office. There are a lot of wireless routers, and it can seem a bit daunting and overwhelming to try and choose your first wireless router or even to move on to bigger and better things when you outgrow your old router. Wireless routers offer a lot of easy to use security and personalization features. They are quick and easy to set up, and give you the control over who has access to your computers, your files, and even your internet connection.
There are a number of things to consider when purchasing a wireless router. What is it that you use your internet for? Do you just have one or two computers, or are you in a small office which could have a large number of computers connected during the day? What type of warranty is best for you? How much do you want to spend? What do all these wireless router-related acronyms mean?
This Wireless Router Review guide will help you decide. Feel free to take a look at our wireless router reviews as well.
What do you currently use your Internet for?
If you use the internet to surf the web and check email, then the latest and greatest gaming router is not going to be necessary for you. On the other hand, if you play a lot of graphics-intensive, web-based games, stream media over the internet, and share files between several computers, you definitely need to look into a high-end model.
All I do is browse the web and reply to email, what’s going to work for me?
If all you are doing is surfing the internet and replying to an occasional email, you don’t need to spend a lot of money on all the bells and whistles to get a great wireless router. Look for a router that has a set up CD and a good, solid warranty. Many of the well-known brand names offer a basic router which is high-quality but made to be affordable for consumers.
For a simple, easy to use product which will get you connected wirelessly in no time at all, the Cisco-Linksys WRT54GL Wireless-G Broadband Router is an excellent choice. Set up and connection is simple and does not require any know-how to make your connection secure. This Wi-Fi router also gives you the ability to plug up to four wired computers in, or wire in another hub to increase the size of your network. This router is an amazing deal for less than $100, but you can currently find it on Amazon.com at a great price.
I stream a lot of movies/TV/music and play bandwidth-reliant games. What’s my best option?
For media streaming or games that require a lot of bandwidth, look for a router with a built-in media server or USB port to connect a media server or file-sharing drive where media could be stored in a centralized location. Having the ability to directly plug devices in to a wired port is also a nice feature to have available. Wireless-N is optimal, and some of these routers allow for extremely high speeds across the network and Internet.
If you need a router that comes with a built-in USB port and UPnP AV media server to allow you to effortlessly share your files over your network, then might I recommend the Cisco-Linksys E3000 High-Performance Wireless-N Router. Attach your Blu-Ray, HDTV, gaming consoles and PCs to the router in minutes, and experience optimal gaming experience with simultaneous dual-band 2.4 GHz and 5 GHz speeds. No matter what you connect to the E3000, you can enjoy transfer speeds of up to 300 Mbps. This router sells for $180-220 but Amazon.com has a great deal on this wireless router.
If speed, security and the ability to customize your network without breaking the bank is important to you, look for a wireless router that comes with a software suite to help you get started. These are much easier to use than a user manual, because they walk you through the set up process step by step, and tell you when to plug which components in for maximum results. They will also assist you with setting up a network password, and adding devices.
I own an Apple TV and other Macs, what’s the best wireless router for me?
Any wireless router will work with your Apple TV and Mac-based laptops. Nevertheless, why not use a wireless router that is specifically designed for Apple-based hardware? At the current price, it is an admittedly high-end, but still affordable solution for your household or office network. The router is also small, classy, and beautiful which may or may not matter to you.
I’ve never set up a network. How do I customize it?
Many routers these days come with software which allows you to fully customize the browsing experience of each and every device on the network. Do you have guests coming in from out of town that you'd like to have access to the internet, but not your computers or files? Do you have children you worry about getting onto sites you haven't approved? Are you concerned they're getting online when they should be sleeping or doing their homework? Look for a customized security suite that allows you to set up guest networks, browsing hours, or to block certain sites or types of sites from one or more computers without restricting every computer on the network. An example of this type of software would be the Cisco Connect software.
The Cisco Connect software is available on the Cisco-Linksys E3000 router, as well as the E2000 Advanced Wireless-N Router. The E2000 is a great choice for users who want a customizable Wi-Fi router which gives them parental controls and password-protected Internet access on a separate network to allow visitors the ability to access the Internet without having access to your computers or data. This is where the Cisco Connect software comes in handy, because it gives you that kind of control over your network and the access different users have to the internet. It offers extremely high speeds and a wide connectivity range at an affordable price. This router sells for less than $100 and is currently available on Amazon for even less than that.
What is the difference between the Wireless-G and Wireless-N Routers?
Wireless-A, Wireless-B, Wireless-G and Wireless-N are terms you might read a lot when you’re looking for a new wireless router. Wireless-A and –B are older standards which are not used much any longer. Wireless-G has been the standard and does offer a decent connection at an affordable price.
The newest wireless standard is Wireless-N, and almost all new wireless-ready devices these days come with Wireless-N capable network adapters. Wireless-N routers are backwards compatible with -A, -B, and -G network cards, so you'll always be able to wirelessly connect to them from your PC, MacBook, iPad, iPhone, iPod touch, mobile device, or even older devices (provided they have a wireless network adapter).
What is the best gaming router?
For an awesome gaming experience, you want a router that gives you the ability to use super-fast speeds to stream video or other media, as well as to be able to transfer data quickly between the Internet and your PC or Mac in order to give the best possible, lag-free gaming experience.
A router with a Gigabit connection for high speed wired connection is ideal, especially if you want to be able to quickly transfer large HD Video files. Look for a router with the ability to have a USB drive media player in order to be able to quickly stream content from a media server to other devices within the network. Also, a great router will have a video mode built in to help optimize HD streaming without any stutters or lags.
A dual-band wireless N connection gives better connection and less interference when trying to move large amounts of data, such as while playing a multi-player game or streaming media. Also a great dual-band wireless router will have one network dedicated to streaming media while the other is dedicated to surfing the Internet.
The D-Link DGL-4500 Xtreme N Selectable Dual Band Draft 802.11n Gaming Router is an excellent choice to get all of those features and more. This router retails for $210.00 but is available on Amazon.com for a whole lot less (a great buy).
What is the fastest router?
Fastest router? That is an excellent question which has no real easy answer. There are a number of factors which could play a part in the effective speed of a wireless router: the number of walls surrounding the router, how many other electronics there are in the building, whether or not there are competing radio frequencies in the building, etc. All of these things can have an impact on the speed that a router will actually run at, regardless of how fast a given router is on paper. But all things equal, under the same controlled testing conditions, what's the fastest wireless router available to consumers and small businesses?
According to recent tests, the D-Link DIR-685 appears to be the fastest wireless router available in the consumer category. The DIR-685 has a lot of amazing features that are unique to this world class router. It’s also a very stylish and functional router with a number of options to make it work amazingly well for any home or small office settings. Despite its uncompromised features and speed, the DIR-685 is available on Amazon.com for much less than the market retail price of $249.99.
What is the best home router?
What you want to have in your home (or small office) is a router that uses stable, new technology to connect you and all of your devices to the internet. No muss, no fuss, simple and easy to use Wi-Fi routers that you can connect effortlessly even if you have no experience with creating a network. Of course these days it’s important to get an amazingly good deal, as well but keep in mind that you do get what you pay for.
Some things to look for are the latest technology: while Wireless-G is good for most home-based users, Wireless-N is the newest and most up-to-date technology, and it is what almost all devices such as HDTVs, gaming consoles, and laptops are being sold with these days. You also want to make sure that the router is going to be easily and effortlessly installed, even if you have no experience with setting up a wireless network.
For Mac users, the Apple AirPort Extreme Base Station is without a doubt the best home router on the market and one we feel confident in recommending. It offers a lot of excellent features that will suit almost any home or small office needs and retails for $179.99 but is available on Amazon.com on sale. For PC users, we would recommend the Cisco-Linksys E3000 High-Performance Wireless-N Router as it gives superior performance, blazing high speeds, and offers a quick and easy set up. It is arguably the best home router money can buy (if you want a router that is better than this one you’ll need to spend a fortune). The E3000 also retails for $179.99 but is available on Amazon.com for much less.
If you can’t afford the best home routers on the market, then the cheapest home router that works like a charm is the Cisco-Linksys WRT54G2 Wireless-G Broadband Router. We wouldn’t blame you for buying it at such a ridiculous price.
What are the best routers?
We have a few recommendations for the best routers, but there are some things to look at when deciding what makes a possible router great. If you are going to be using your router for gaming and/or media streaming, you want a simultaneous dual-band router with Wireless-N connections. This will allow you to have one band dedicated to either gaming or media streaming, while the others are there for web browsing, checking email, etc.
A dedicated media server port, or the ability to have a media server on the router is another great feature to have available if you plan on streaming media.
Having an easy-to-use set up is another criterion to look for. Routers that come with step by step installation CDs which allow you to set up your own unique network name and customize the features on the network are good choices. If you have a need to have certain sites or types of sites disabled from one or more computer, or if you want the freedom to be able to cut off Internet access from specific devices during certain hours, that is another feature to look for.
To give you access to many of these great features, the Apple AirPort Extreme Base Station for Mac users is an excellent choice. For PC users we would recommend the Cisco-Linksys E3000 High-Performance Wireless-N Router. There are also a number of other great choices to fit in anyone’s budget. Take a look at our wireless router reviews to see a few examples.
Wireless Router Booster
Purchasing a wireless booster may be a good idea if you already have a wireless router and are experiencing problems. You may need a wireless router booster when:
Your computer connects to the wireless router, but the signal strength indicator (usually bars) shows that you are not getting full range/coverage.
Your internet download and upload speed is reduced compared to the speed you are supposed to receive from your ISP. You can tell this is the case if connecting your computer directly to your ADSL modem (or equivalent) shows a significant increase in your connection speed (a quick way to put numbers on this is sketched just after this list).
Your data transfer speed (for example when streaming HD video or gaming) is severely inferior to that of the specifications of your modem.
Your house or office is large, and you are getting low or bad signal from your wireless router in certain parts of your house/office.
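If you want to put actual numbers on the second point above rather than going on gut feeling, a tiny sketch like the one below can help. It is my own illustration, not something from this guide: the 25% threshold and the example measurements are made up, and you would plug in speed-test results you measured yourself over Wi-Fi and over a wired connection to the same modem.

```python
# Hypothetical helper (not from the guide): compare a speed test taken over Wi-Fi
# with one taken while plugged straight into the modem, and flag a big enough gap
# to justify a booster or extender. The 25% threshold is an arbitrary illustration.

def wireless_penalty(wired_mbps: float, wifi_mbps: float, threshold: float = 0.25) -> str:
    """Report how much slower Wi-Fi is than a wired connection to the same modem."""
    loss = (wired_mbps - wifi_mbps) / wired_mbps
    if loss >= threshold:
        return f"Wi-Fi is {loss:.0%} slower than wired - a booster or extender may help."
    return f"Wi-Fi is only {loss:.0%} slower than wired - a booster probably won't change much."

# Example measurements (made up): 48 Mbps over Ethernet, 19 Mbps over Wi-Fi.
print(wireless_penalty(48.0, 19.0))
```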
Why buy a wireless router booster?
A wireless booster is normally connected to your wireless router and can help you to boost your internet and intranet speed without any muss or fuss. There are even some wireless boosters on the market which you can plug directly into a USB port of your laptop or computing device. If your laptop has a wireless card, for example, you might be able to connect your wireless router booster to it via a USB port.
A few things to look for when purchasing a wireless signal booster are that it is compatible with your router, and that it will give you enough coverage for your home or office. For example, if you wish to connect to your wireless router from your backyard or the pool, you may need a serious wireless router booster to increase the range and strength of your wireless signal.
Some of the advantages to wireless signal boosters are the ability to transmit wireless signals through walls or doors. If there are areas in your home which do not have good wireless coverage but you can’t move the router to reach those places more easily, then this is the solution you’re looking for. A wireless booster will give you that extra range that you were looking for and increase the internet signal in those areas. Additionally, since many wireless routers already come with an external antenna on them, if you want to increase your signal all you have to do is purchase a signal booster and replace one of the antennas with it in many cases.
Types of wireless router boosters
Many of the signal boosters, especially those which are just replacement antennas, are specific to a small number of makes/models of routers. Also, determine whether you need an omni-directional antenna or a simple high-gain antenna. The main difference between them is whether or not you have to aim the antenna to get the boosting power. An omni-directional antenna will not require you to aim it in a specific direction, but will instead boost the signal in all directions.
Wireless router boosters come in a wide range of varieties. For instance, there is the Alfa 9 dBi SMA Omni-Directional High-Gain Screw-on Swivel Antenna, which fits a number of different Linksys and Netgear routers. This is an external antenna which simply attaches to the Wi-Fi router itself. Whatever direction you need additional coverage in, you simply swivel the antenna and point it in that direction. This is an excellent solution for when you already have all the ports on your wireless router filled but still need to be able to boost the signal.
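As a small aside of my own (not from this guide): a gain figure like "9 dBi" converts to a linear power ratio with the standard decibel formula, and the sketch below does the conversion for the gain figures mentioned around here. Keep in mind that gain focuses the signal in a favoured direction rather than adding transmit power.

```python
# Antenna gain in dBi converts to a linear power ratio (relative to an ideal
# isotropic antenna) as 10 ** (dBi / 10). The gain is focused in a favoured
# direction rather than added power, which is why high-gain antennas help range.

def dbi_to_linear(gain_dbi: float) -> float:
    return 10 ** (gain_dbi / 10)

for dbi in (2.2, 5.0, 9.0):   # gain figures mentioned for antennas in this guide
    print(f"{dbi:>4} dBi  ->  about {dbi_to_linear(dbi):.1f}x in the favoured direction")
```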
There are also stand-alone wireless boosters such as the Alfa AWUS036H, which is a small, compact device with a swivel antenna. This is wired into one of your router's ports, and then the swivel antenna gets turned and pointed in the direction where you need the extra coverage. If you have an open USB port on your router, there are also USB boosters such as the Alfa AWUS036NEH.
The GSI High-Powered Ultra-Secure 500mW is another example of a USB booster which gives an amazing range away from your router, and boasts roaming distances up to 10 times the normal output of the router. This device attaches to your router and can give extremely high-speed boosts to whatever device needs the extra power from the router.
Recommended wireless router boosters
Alfa 9dBi SMA Omni-Directional Antenna: for a number of different Linksys and Netgear routers, available on Amazon.com for just a few dollars.
Wireless USB 2.0 LAN 802.11G Adapter with High Gain 5dBi Antenna: available on Amazon.com for less than 20-30 bucks.
Alfa AWUS036H USB Wireless Long-Range WiFi Network Adapter: available on Amazon.com starting as low as $30-40.
GSI High-Powered Ultra-Secure 500mW: for blazing fast speeds is retailed at $79.99 but is currently on sale at Amazon.com for about one third the price (amazing deal at the time of writing).
Palm Plus High Power 802.11 b, g and n High Gain USB Wireless B, G and N adapter: which gives up to 300 Mbps is available on Amazon.com for $20-30.
Cisco Aironet 2.2 dBi Dipole Antenna: is marked at $49.99 but is currently available on Amazon.com for much less.
Hawking HWDN2A High-Gain Wireless-150N USB Network Dish Adapter: retails at $141.92, but is available on Amazon.com for less than half price.
Smarthome 4827 Boosterlinc Plug-In PLC Signal Booster: is a good choice to help boost the signal and amplify it even in a small space with a lot of signal-absorbing devices (televisions, stereos, computer gear, etc) and is available on Amazon.com for less than $100 (at the time of writing).
Hawking HWREN1 Hi-Gain Wireless-300N Range extender: re-broadcasts your wireless signal to help increase the range of the wireless network in your home or office. This device retails for $79.99 but is currently on sale on Amazon.com.
"Never was anything great achieved without danger."
- – Niccolo Machiavelli
- – Sergeant Schlock plasma gun
Plasma is the fourth state of matter, beyond gas. It is a sea of ions: charged particles exposed to so much energy, such as nuclear fusion or lightning, that the bond between electrons and nuclei becomes weakened and the electrons flow freely through the plasma as a current, just like through a metal. Plasma emits truly heroic amounts of energy, both as heat and light, because the free electrons within the mass are frequently deflected by the attractive pull of the nuclei; that acceleration forces each electron to shed a photon to conserve energy (Bremsstrahlung, or "braking radiation"). This means plasma is incredibly hot and incandescent, though unless it is optically dense (thick enough to reabsorb the vast majority of emitted photons), it cools rapidly. Seeing as it is a mass of charged particles, it is also highly susceptible to influence from electromagnetic fields. Plasma is the most common state of (regular) matter in the universe; this is because intergalactic space consists mostly of low-density plasma, and stars are giant balls of high-density plasma. Nuclear fusion requires hydrogen gas to be heated to the plasma state to overcome the Coulomb barrier between nuclei.
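As a hedged aside (a textbook scaling, not something this article derives): the power an optically thin plasma radiates away as Bremsstrahlung is usually quoted as

$$P_{\text{brem}} \propto Z^{2}\, n_e\, n_i\, \sqrt{T_e}\,,$$

i.e. it grows with the square of the ion charge, the product of electron and ion densities, and the square root of the electron temperature. That is why a thin plasma that cannot reabsorb its own photons sheds its heat so quickly.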
Plasma is produced by applying a very strong electromagnetic field to a gas, such as hydrogen. The plasma is then contained within an electromagnetic field, keeping it out of contact with the weapon's components, which it would almost certainly damage. The tokamak, the most promising candidate for a working fusion reactor, exploits this by confining and heating plasma in an intense toroidal magnetic field until it becomes hot and dense enough that fusion is achieved, although in the current generation of the technology the power it consumes still outweighs the electricity it could produce.
In fantasy sci-fi, "plasma" weapons fire insubstantial ammunition that imparts extreme heat on impact, or, just as with gauss weapons, plasma guns can be anything that sounds cool. Hard sci-fi plasma weapons come in several types: "flamethrowers" or cannons that expel stored plasma; fusion cannons; melta weapons, where so much energy is expelled from a weapon that it turns the surrounding air into plasma; lightning guns such as Tesla coils, where a large pulse of energy sends a huge electrical charge that produces plasma conduits across the air; or Ion Cannons like the ones GDI uses as its signature superweapon.
In the grim, dark, dark, horrible, grim, dark grimdark future of Warhammer 40,000, plasma weapons are a type of heavy weapon used by the Imperium, Chaos forces, the Tyranids, and the Tau. A few variants are also used by the Eldar and Dark Eldar, and the Orks have a few that they no doubt stole from the Imperium. They operate by firing a bolt of superheated energy that's hot enough to melt virtually anything in proportion to its size (pistols and guns can take out heavies while cannons can take out super-heavies and titans), with its primary drawback being the massive amount of heat generated per shot. Imperial and Chaos players like to use plasma weaponry for its raw, balls-out power and point efficiency.
Plasma technology is relatively rare in the Imperium as it is incredibly hard to produce and maintain properly in large quantities, and is thus scarce in both Imperial and Chaos forces. However, its power means that these forces use it every chance they get. The Tau are so advanced, their basic weapons are plasma-based. Because of their high firepower, plasma weapons are the go-to choice of most players when dealing with heavily-armored infantry such as Terminators or Tyranid Warriors.
There is also "blood plasma," which may be of interest to Vampire players, but the idea of a blood-plasma rifle is kinda gross, but can be made exceptionally awesome if done right such as sucking the life force of an individual. It is however possible that it's just another Bloodstrike Missile-esque thing.Pus-rifles on the other hand? Ew.
- 1 A Small Notice to all Devou/tg/uardsmen on Plasma Weaponry
- 2 Imperial Plasma Weapons
- 2.1 Sollex-Aegis Energy Blade
- 2.2 Plasma Pistol
- 2.3 Plasma-caster
- 2.4 Plasma Exterminator
- 2.5 Plasma Talon
- 2.6 Proteus Plasma Projector
- 2.7 Plasma Gun
- 2.8 Plasma Caliver
- 2.9 Plasma Repeater
- 2.10 Plasma Burner
- 2.11 Plasma Blaster
- 2.12 Phased Plasma-Fusil
- 2.13 Plasma Incinerator
- 2.14 Plasma Cannon
- 2.15 Plasma Culverin
- 2.16 Plasma Destroyer
- 2.17 Plasma Storm Battery
- 2.18 Hellfire Plasma Cannonade
- 2.19 Plasma Eradicator
- 2.20 Macro Plasma Incinerator
- 2.21 Hellex Plasma Mortar
- 2.22 Plasma Decimator
- 2.23 Plasma Blastgun
- 2.24 Plasma Destructor
- 2.25 Plasma Obliterator
- 2.26 Plasma Annihilator
- 3 Chaos Plasma Weapons
- 4 Tau Plasma Weapons
- 5 Tau Pulse Weapons
- 5.1 Pulse Pistol
- 5.2 Pulse Blaster
- 5.3 Pulse Carbine
- 5.4 Pulse Rifle
- 5.5 Burst Cannon
- 5.6 Pulse Submunitions Rifle
- 5.7 Long-Barrelled Burst Cannon
- 5.8 Heavy Burst Cannon
- 5.9 Pulse Submunitions Cannon
- 5.10 Pulse Bomb Generator
- 5.11 Pulse Blastcannon
- 5.12 Pulse Driver Cannon
- 5.13 Pulse Ordnance Multi-Driver
- 6 Tau Ion Weapons
- 7 Eldar Plasma Weapons
- 8 Ork Plasma Weapons
- 9 Tyranid Plasma Weapons
- 10 A Weapon that Beats Spess Mehreens? NEVAR!!
A Small Notice to all Devou/tg/uardsmen on Plasma Weaponry
Due to a recent modeling discrepancy with man-portable Imperial plasma weaponry (and GW making the errors canon), two things need to be said:
First, the Plasma Pistol, Gun, and Cannon only have one barrel. The bottom "barrel" is actually the access port to the main accelerator coil of the weapon. The accelerator coil is a ferromagnetic electromagnet, and thus loses magnetic potency as its molecules shift due to the powerful charges and pulses it experiences. This means it wears out with frequent use and needs to be replaced every so often, or at least often enough for the designers to add an access hatch.
Second, handheld plasma weaponry has a very small opening for the plasma to escape from. The large space that everybody seems so fond of drilling out to make an enormous-caliber barrel is in fact a magnetic plate shaped like a parabolic dish. The actual barrel hole, inside that plate, is small enough to get drilled with an in-universe pin vice(!) so you can see why no one tries to mould it (soft sci-fi weaponry is SRS BZNESS!) This is because the plasma, when it is being charged, is compressed to what is almost a dense gas in order to increase heat retention. The tiny plasma reactor within the weapon is breached for a quick moment, and the spill is immediately drawn along the length and through a narrow tube in a large, ferrometalic block. The block is supercooled to avoid catastrophic burnout, but it is mainly a focusing and directing mechanism. Through the tiny pinhole runs the plasma charge, but it does not touch the sides due to the block's strong magnetic field. When it reaches the muzzle, it is going so fast and is under so much pressure that the parabolic disk shape is necessary in order to focus the plasma as it nearly explodes out of the barrel. The disk face is also magnetized to avoid touching the plasma. The one exception to this is the latest Pattern of Plasma Cannon, which borrows design elements heavily from the Mars-pattern Plasma Destroyer mounted upon the Leman Russ Executioner. It should be noted that all Mars Pattern vehicle-mounted plasma weapons use the "newer" huge-caliber devices, while Ryza Patterns use the pinhole-barrel devices. This can be seen in the Mars-pattern Plasma Blastguns, and conversely on the Ryza-pattern Plasma Annihilator mounted on the Emperor-class Titan. Perhaps this is part of the reason why Ryza plasma technology does not overheat nearly as easily as Mars-pattern plasma tech?
This is basically the ONE downside of plasma weapons. They have a chance to overheat to the point where they explode in the user's face. If any of a plasma weapon's shooting dice come up 1, the model has to make a save (armor or invulnerable) or take a wound. To multi-wound models, this is a noticeable dent, and to most models, which only have one wound, it's potentially deadly. Vehicles were immune to this rule until 6th Edition; with the advent of Hull Points, they now lose one Hull Point unless they save on a 4+.
Still, a lot of players aren't really concerned with this because of the simple fact that plasma is one of the most easily-accessible anti-armor weapons there is for IG, SM, and CSM players. Besides, it's not like you're rolling a 1 every turn, right? (The rough odds are sketched below.) In any event, if your plasma gunner(s) are power armored, you usually won't care, although bad luck can cause a pretty nasty waste of points sometimes. If they are guardsmen, you have reserves, so, again, you won't care much, other than about the gun itself a bit.
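For the curious, here is a quick back-of-the-envelope sketch of those odds. It is my own illustration of the pre-8th "Gets Hot" rule described above; the shot counts and save values are example choices, not quotes from any codex.

```python
# A quick, unofficial estimate of the old "Gets Hot" risk: each shooting die that
# comes up 1 forces a save, and a failed save costs the bearer a wound.

def gets_hot_loss_chance(shots: int, save: int) -> float:
    """Chance a one-wound model kills itself this shooting phase.

    shots -- shooting dice rolled (e.g. 2 when rapid-firing a plasma gun)
    save  -- save needed, e.g. 3 for a 3+ armour save; use 7 for no save at all
    """
    p_hot = 1 / 6                                  # natural 1 on a d6
    p_fail_save = min(max(save - 1, 0), 6) / 6     # a 3+ save fails on a 1 or 2
    p_die_per_shot = p_hot * p_fail_save
    return 1 - (1 - p_die_per_shot) ** shots

print(round(gets_hot_loss_chance(shots=2, save=3), 3))  # rapid-firing Marine: ~0.108
print(round(gets_hot_loss_chance(shots=2, save=5), 3))  # rapid-firing Guardsman: ~0.21
```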
It's a common belief that when a plasma weapon gets hot it always explodes, and like many common beliefs, it's totally wrong. What happens most of the time is that the weapon's fail-safe mechanism makes an emergency heat dump in order to avoid a containment breach and explosion (sometimes, albeit rarely, plasma guns do experience a full meltdown and explosion). Unfortunately for the wielder, this "heat dumping" takes the form of a cloud of superheated steam, hot enough to boil the unfortunate wielder's flesh alive. While such occurrences are almost guaranteed to be lethal to unarmored shooters, a Space Marine's power armour can provide some measure of protection from it.
Note that all of the previous paragraph is nullified in 8th edition; see the section below:
8th edition crunch
The 8th edition rulebook makes a rather radical departure in how Imperial plasma guns work in the tabletop crunch (per the fluff, the overheating issue of most Imperial plasma guns, barring some experimental models, remains as it was). Now a player can choose to fire his plasma weapons in normal or supercharged mode. Essentially, the ordinary overheat is not simulated for the sake of gameplay, but the overcharge fuck-ups are, and you are out of the battle (or the tank takes some bad damage) if that happens.
- In normal firing mode the gun functions normally, has no equivalent of the "Gets Hot" rule, and is safe to fire.
- In supercharged mode, however, the gun gains +1 Strength and +1 Damage, but on a "to hit" roll of 1 it suffers a catastrophic meltdown that either slays the shooting model with no armor or invulnerable save allowed, or inflicts mortal wounds, depending on the unit and the gun. Because of how modifiers work in 8th edition this happens more often at night, and sometimes cannot happen at all (the rough odds are sketched below).
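A minimal sketch of those odds, assuming (as described above) that the meltdown triggers on a modified hit roll of 1. The shot counts and hit modifiers below are my own illustrative choices, not rules text.

```python
# Unofficial sketch: chance of at least one fatal overheat across a number of
# supercharged shots, assuming the meltdown triggers on a *modified* hit roll of 1.

def meltdown_chance(shots: int, hit_modifier: int = 0) -> float:
    # Number of die faces that end up as a modified 1 (clamped to 0..6).
    bad_faces = min(max(1 - hit_modifier, 0), 6)
    p_per_shot = bad_faces / 6
    return 1 - (1 - p_per_shot) ** shots

print(round(meltdown_chance(shots=2, hit_modifier=0), 3))   # rapid fire, no modifier: ~0.306
print(round(meltdown_chance(shots=2, hit_modifier=-1), 3))  # shooting at -1 to hit:  ~0.556
print(round(meltdown_chance(shots=2, hit_modifier=1), 3))   # at +1 to hit: 0.0, perfectly safe
```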
Interestingly, the idea of an overheat-less normal mode appears in the fluff before 8th Edition. The Fantasy Flight RPGs included the option for Plasma Guns to use an alternate fuel type that would stop them from Overheating, at the cost of not being able to Overcharge at all. On a similar note, Dawn of War 2 portrays some plasma guns that can be temporarily overcharged at the cost of overheating (though there the overheating drawback doesn't risk killing the user).
Imperial Plasma Weapons
Imperial (and by extension, Chaos) Plasma weapons are powerful and point-efficient. They are easily capable of taking out light and medium tanks without the need to flank and target rear armor, and they can go through any infantry armor as well as being rapid-firing. However, Imperial/Chaos players are not terribly fond of Plasma weapons due to the "Gets Hot" rule. The overheating issue is likely caused by unreliable electromagnetic field generators failing to properly contain the Plasma, or perhaps by the fact that the Plasma is several thousand kelvin in temperature and can easily melt the operator's face off whenever the weapon needs to vent excess heat. According to the fluff, Imperial Plasma weapons are extremely rare and most are at least a few centuries old. This is due to the difficulty in mass-producing Plasma weapons on any Forge World other than Ryza, whose hat is that they Plasma the shit out of everything, and the difficulty in properly maintaining/using Plasma weaponry in light of the massive amounts of heat it gives off.
However, since ALL Space Marine Chapters, many Imperial Guard Regiments, and even Underhive Gangs seem to have access to them, this admonition comes across a little flat. They might just be extremely expensive, and the Adeptus Mechanicus just says that they are "relics" so people will take better care of them. The official reason seems to be that they are "rare" in terms of numbers: a million Plasma Guns made per year would be "small" compared to the billions of Lasguns made per year. Fun fact: the Adeptus Mechanicus does have Plasma weapon patterns that do not overheat, but bogarts the good shit as usual. Seriously, even their Skitarii forces have to deal with Gets Hot. Then again, given how frequently Imperial forces turn traitor, ranging from said Chapters and Regiments to even Gangs, taking whatever Post-Heresy tech they had with them, the Mechanicus is understandably paranoid about how and what tech they choose to share with their allies (or more specifically, how they refuse to share), since their allies are notoriously unreliable and volatile, and their main enemy is a malevolent bunch of crazies who are more than happy to reverse-engineer and mass-produce their venerable machines with reckless abandon.
Note that prior to 3rd Edition of 40k, there was a split between Good Guy–Bad Guy Plasma technology, with team Imperium having the safer MkII Plasma weapons that required recharging between firing, and team Chaos being stuck with the older MkI Plasma weapons which were bulkier, covered in pipes, and able to spit out up to three shots per turn compared with the MkII's one every other turn, but with the chance of overloading and leading you to roll on a random table to see whether it just gets hot, vents plasma over the bearer, or blows up and takes out everyone in the squad. Even Dreadnoughts were not immune to this, with the MkI Heavy Plasma Gun able to spit out three smaller (coin sized) blast markers, but runs the risk of the weapon taking out the arm and possibly the entire chassis. When the ludicrous volumes of bookkeeping were scrubbed for 3rd Edition, the recharging mechanic was removed along with the stacks of counters, and all Plasma weapons then developed the Gets Hot rule as below, playing into that Edition's general theme of larger armies with more expendable and easy to kill models.
And 8th edition shakes things up again. Plasma can now choose between the Standard profile, which does not have the Gets Hot rule, and the Supercharge profile, which does – and that Gets Hot instantly kills the shooter, no saving rolls or multi-wound shenanigans allowed, save for specific units and weapons (and even those take mortal wounds should Gets Hot trigger). Worth noting, however, that Supercharge also means you are getting Strength 8 (wounding most tanks on 4+ and most MEQs on 2+) and dealing 2 Damage per successful attack (enough to take down a Terminator with a single shot or wreck a Leman Russ battle tank with just six unsaved wounds). Between that and the heavy nerfs the Grav-weapons got, Plasma has re-established itself as a viable option in the metagame.
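To put that in rough perspective, here is a sketch of how many supercharged shots that actually takes on average. The 12-wound target, the BS 3+, the 3+ save, and the AP-3 of the gun are assumptions I am supplying for the example, not figures stated above.

```python
# Rough averages only; the 12-wound target, BS 3+, 3+ save and AP-3 are my own
# assumptions for this illustration, not figures given in the article.

def expected_shots_to_kill(target_wounds=12, damage=2, p_hit=2/3, p_wound=1/2, p_unsaved=5/6):
    """Average number of supercharged shots needed to strip all the target's wounds."""
    expected_damage_per_shot = p_hit * p_wound * p_unsaved * damage
    return target_wounds / expected_damage_per_shot

# BS 3+ (hits on 3+), wounding on 4+, a 3+ save pushed to 6+ by AP-3:
print(round(expected_shots_to_kill(), 1))  # ~21.6 supercharged shots on average
```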
Sollex-Aegis Energy Blade
Although classed as a power weapon in-game, the energy blade's fluff has it using hot plasma, rather than a power field, to cut things up.
A product of information obtained from the Aegis Data Fragment – an STC database discovered in the Calixis Sector, the setting of Dark Heresy – and utilizing the properties of the Sollex focusing crystals, this is one of the rarest types of weapons in the Imperium – a blade of coherent high-energy plasma which materializes from the armored hilt as a blazing, roaring column of blue-white fire. Sounds familiar? Devastatingly powerful, only a few Sollex-Aegis energy blades are held by individuals outside of the Mechanicus, and their secrets are little understood even by their makers in the mysterious Tech-Priest sect of Sollex. Although potent beyond even most power weapons, they can also prove treacherous to the unwary as the energy blade can fluctuate, the laser containment fail or the insubstantial blade slip unexpectedly. While Power and Force weapons do not have the novelty of a Plasma blade, they are at the very least more reliable and won't stop working for arbitrary reasons. In the tabletop RPG, if your character uses an energy blade to parry, and with a hint of luck, you may destroy the enemy weapon like a Jedi would. Speaking of space wizards with laser (plasma, but whatever) swords, you can totally play as a Jedi in Dark Heresy by playing as an Imperial Psyker, and they most likely knew it too when they decided to add the weapon into The Inquisitor's Handbook. Back to the topic, if you fail at parrying with this energy blade, your character takes damage from the lightsaber.
Owing to the harsh actinic light and furnace-blast roar that the blade produces, the wielder cannot attempt to be stealthy while it is switched on. Additionally, the blade consumes canister fuel, just as a plasma pistol below does, with each canister worn on the belt and fed via a cable to the hilt. Note that this is an almost exact description of a Protosaber from Star Wars.
The Plasma Pistol: theoretically doubles as a Plasma Grenade provided that it's your last fuel cell and/or you really need that explosion.
Now, despite the significantly reduced size, the Plasma Pistol still has all the punch of the regular-sized Plasma Gun. What's the trade-off for a nuclear reactor that fits your holster, you ask? None... is what we'd like to say. You just have to cope with shooting slower and at a reduced range. Plasma Pistols are commonly carried by officers on both the Space Marines side and the Imperial Guard side, and some specialized assault units such as Assault Marines may also at times be armed with them, likely as an alternative to Inferno Pistols for shooting at tanks. Logically speaking, the Plasma Pistol should be the safest of the bunch, since the gun is (should be) held away from your body when firing instead of being held up to your face while looking down the sights. Even if the gun were to actually melt down, you're likely to lose just your hand. Unless it explosively ruptured with the force of a grenade, you are unlikely to lose anything critical. One of the latest patterns of Plasma Pistols available to the armed forces of the Imperium of Man is the Mark III "Sunfury," but only a handful of Chapters including the Novamarines are capable of manufacturing the new model. On the other side, Chaos Space Marines love to live dangerously by using hydrogen fuel in a higher quantum state. No, we have no idea what that really means. It's probably just hotter.
Also, Techmarines have access to the Plasma Cutter, which seems to be an allusion to our real-life tool. This Plasma Cutter, however, is better than ours since it also doubles as a rather effective weapon. Appearance-wise, it looks just like two Plasma Pistols attached together. Finally, there is Cypher's. His Plasma Pistol seems to be an ancient (as in, really good) kind since it doesn't get hot in 7th, and it uses Supercharge stats in 8th without the blowing up part.
Proof that you don't need to make your hand gigantic with a Power Fist to have a free hand while still having a side weapon like a Plasma Pistol.
The Plasma-caster is a vambrace fitted with what seems to be a Plasma Pistol with a bayonet attached to it, though it might be a bit more powerful since these things here are only for the best in the First Legion wearing Terminator Armor. So unlike the Bolt Caster with two Bolters on a Power Sword, the similarly-named Plasma-caster has an armor piece and a combat blade (if not powered) complementing the Plasma part. As stated, it is equipped by the Inner Circle Knights of the Dark Angels during the Legion days, specifically the Cenobium whose members are composed of Dark Angels who have attained the rank of cenobite. These cenobites are temporarily chosen to wreak havoc in their Terminator Armor for a deployment that requires their unique skills, whatever they may be. They are elite, and the Plasma-caster is their signature weapon. The cenobites all carry one on their left arm, while their right arm carries a Terranic greatsword. Although it is a vambrace, it does not replace the regular Terminator sleeve. No, it is worn over the gauntlet the Inner Circle Knight wears. So while in the artwork it certainly looks sleek as if the Knight himself wears a different sleeve on that side, the models of those said Knights definitely look like they have a shield that has a knife and a Plasma Pistol welded on.
Overall, the Plasma-caster gives you a free hand to possibly carry something else in addition to giving you the ability to fire superheated plasma, and a melee advantage through a sharp blade over the back of your wrist.
For a name with a word like that, you'd think it's a huge cannon or something. This gun is an "Exterminator" on a more local scale, like pest control.
The Plasma Exterminator is another Primaris-only gun, this time for the Inceptors. It is likely another variant of the Plasma Incinerator. Hellblasters don't use these and stick with their regular Plasma Pistols. Like the more standard twin Assault Bolters that usually comprise their weapons loadout, Inceptor Marines may wield two Plasma Exterminators together as an alternative to the maximization of ass-rape. While rather short-ranged because the rule of common sense states that smaller guns don't shoot very far, the Plasma Exterminator packs as much power as a Plasma Cannon despite that compact size. This may have something to do with additional safety features in order to make room for a much more destructive blast. The wiring adjacent to the feeding cable connected to the backpack is indicative of that. Perhaps the power requirement of this weapon makes it exclusive to Gravis Armor, which would allow non-Inceptors who happen to wear one to use one, unless the Plasma Exterminator completely relies on some system which is only present within the Inceptors' variant of Gravis Armor. Also, the part at the back may either be a spare hydrogen flask just in case the backpack runs out, or it's a high-speed autoloader crucial in maintaining the firepower of this overclocked Plasma Pistol.
Because it has a cabling which will likely take some time to disconnect, it is not recommended to use this weapon as a Plasma Grenade.
This weapon is the Plasma cherry garnishing the top of the Dark Angels' Ravenwing smoothie.
While Space Marine Bikes usually have either Bolters or Grenade Launchers as their primary armament, the upper echelons of the Dark Angels have access to something more... esoteric. And by the upper echelons, we specifically mean the Black Knights of the Ravenwing and its associated officers such as Apothecaries, who may pimp their bikes out with the Plasma Talon, which is essentially a pair of sawn-off Plasma Guns instead of the usual twin Boltguns. Indeed, the Plasma Talon seems to be an archaic weapon, since only a notable few in a Plasma-heavy Chapter may attach one to their bikes. Given the size and firepower, it seems to be a Plasma "carbine" if there ever was one. In fact, its effectiveness is in about the same ballpark as the Plasma Exterminator of the Inceptors, and it might as well be the same in design concept if the Black Knights/Bikers can tear off those two sawn-off Plasma Guns and use them when they crash their bikes. By the looks of it, Cawl or his colleagues might have been responsible for a few "missing" Ravenwing bikes. In battle, the Ravenwing Black Knights drive at top speed upon their Mark IV Raven Pattern Assault Bikes towards the foe. On their approach, their Plasma Talons tear holes in the enemy lines before they ride over their quarry, cracking armor and sundering flesh with their Corvus Hammers as they go.
Speculations aside, the fact is that the Plasma Talon is a couple of short-ranged Plasma Guns used by the Black Knights of the Ravenwing.
Proteus Plasma Projector
Unlike most Plasma weaponry – or usual Flamers for that matter – the Proteus Plasma Projector doesn't shoot out bolts of plasma; instead it sprays the hot stuff out like a Flamer.
Hell, it even acts like a Flamer; it just uses Plasma as fuel. And who gets to use this awesome weapon? It's the Custodes, and not just some random Custodians either: you've got to be that guy who got mangled enough to warrant a Dreadnought internment first, and even that's not good enough. You've got to be that guy who's good enough to probably have his own novel to warrant the expensive Telemon Heavy Dreadnought before you have access to this thing; that is what it takes to have a couple of plasma-throwers attached to your special Telemon Caestus (Dreadnought Power Fist) and create a recipe of absolute Rape. Now, the Proteus name can be read a couple of ways. Proteus itself is likely a Forge World/Moon by Neptune that our Forge World uses to designate a number of retro weapons and vehicles, which opens up the possibility that this thing is also another blast from the past, assuming there actually was a plasma-thrower back then. This Proteus also shares the last two parts of its name with the kind that is found in a starship's weapons battery. That big Plasma Projector does a lot of damage but loses potency as the range increases; a reasonable explanation is that as the giant plasma ball travels through cold space it cools down, and cooler plasma is not as effective as hotter plasma.
With instances such as the Melta Cannon proving that two different weapons may be designated the same name, it is likely that these Plasma Projectors are another such case.
Everyone's favorite walking death trap.
An uncommon weapon even in the ranks of the Imperial Guard or Space Marines as mentioned above, most extant Plasma Guns are hundreds if not thousands of years old because, well, they tend to blow up without proper handling/luck. As a testament to their design, they remain as deadly today as the day of their fabrication. Aesthetically speaking, they look stubby and also rather clunky, with a ribbed back and flared cooling vents along the front. Hydrogen flasks stick out of the butt and bottom of the weapon. These generally contain enough fuel for at least ten shots before you need to swap one out for a new one, just like a magazine. If you don't like to carry lots of hydrogen canisters because you don't have Power Armor that can mag-lock stuff as well as handle the weight, an alternative is a backpack that can hold more hydrogen with a feed line to the gun. Plasma guns can generally be fired in two modes: the Standard mode which is not only normal, but also the safest and practically a flat-out upgrade over the conventional Boltgun; or the higher-powered Supercharge mode with a longer range and higher temperature, which is definitely not the safest, possibly lethal, and powerful enough to potentially one-shot a Terminator or Primaris Marine. The latter requires a short time to replenish the plasma back to firing levels after firing. Ryza plasma weapons don't have a safety issue if you don't over-charge. So, follow the instructions on your death gun and you won't get death gunned by your death gun.
But here's the twist for not following safety procedures and why it is possibly lethal – firing in such a high-power mode may result in catastrophic plasma containment breach, that is, crunch-wise, represented by the user being automatically slain on a hit roll of 1. No saves of any kind are allowed and it doesn't matter how many wounds they have left, the wielder gets removed from the table.
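For a feel of how quickly that hit-roll-of-1 rule catches up with a squad, here's a minimal sketch of the dice math; it assumes plain, unmodified hit rolls with no re-rolls, which is the simplest possible case:

```python
# How fast the hit-roll-of-1 rule catches up with you, assuming plain,
# unmodified hit rolls and no re-rolls: each supercharged shot has a
# 1-in-6 chance of removing the shooter from the table.

def survival_chance(shots: int, p_gets_hot: float = 1 / 6) -> float:
    """Probability the wielder is still alive after this many shots."""
    return (1 - p_gets_hot) ** shots

for shots in (1, 3, 6, 12):
    print(f"{shots:>2} supercharged shots -> {survival_chance(shots):.0%} survival")
# Roughly 83%, 58%, 33% and 11% respectively: luck runs out fast.
```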
On the Plasma side, the Skitarii do not use the same Plasma Guns as the Space Marines and Imperial Guard. They have something arguably better.
For your information, "Caliver" is not a made-up word. It is a corruption of the word "Calibre/Caliber" which literally refers to the bore size of a firearm. Indeed, the Plasma Caliver here doesn't seem to have anything to do with bore size at all; its bore is pretty so-so compared to the larger sizes available, in fact. Historically speaking, the Caliver is a standardized form of the arquebus, an early form of the firearms we know of. This definition fits very well with the whole steampunk vibe the Mechanicus maintains for their Skitarii. As volatile as it is deadly, the Plasma Caliver exchanges range for a truly terrifying rate of fire, making it a Plasma machine gun. A squad of Skitarii armed with several Plasma Calivers lights up the night sky with each volley, which should say a lot about how much Plasma this thing spews out. The Plasma Caliver seems to use three fuel cells attached together at a time, which may allude to how it shoots three times on the board. If we assume as much, then the Plasma Caliver probably shoots pretty damn fast, emptying all three fuel cells in a given moment. To say they risk life and limb in the process is a grave understatement, yet to their Tech-Priest masters, such collateral damage matters not at all. Either the Plasma Calivers are cheap enough to produce to be disposable, or the gun itself may be salvaged somehow after blowing up.
One would assume that the Iron Hands and Salamanders, being the tech masters of the Astartes, along with the Dark Angels, the experts on Plasma weaponry among the Space Marines, would have a few Plasma Calivers in their arsenals by the end of the Indomitus Crusade, or at least be authorized to build their own. But sharing would make things less GrimDark, and the Mechanicus themselves are known for hoarding the good shit. Emperor forbid that humanity have the ability to wipe out huge swathes of their enemies at once. Calivers are basically slightly worse Plasma Incinerators, and the same can be said of Plasma Blasters. The slightly lower range compared to a standard-issue Plasma Gun would be inconsequential and not much of a loss, since "Plasma sniping" is what Astartes have Hellblasters and Devastators for.
For some reason in the current edition, the Plasma Caliver is identical to the Plasma Blaster where effectiveness is concerned, with two shots to speak of.
Dark Angels are well-known for their array of Plasma weaponry in 40k. Wind the clock back about ten thousand years or so, and this is in their arsenal.
The Plasma Repeater is a 30K weapon for the Dark Angels, the smallest Plasma weapon to apply the principle of More Dakka, spewing enough firepower at Melta range to make Loyalist/Traitor Terminators regret their life choices up to that point. As the First, the Dark Angels have access to a panoply of war not from the forges of Mars, but from their equipment cache dating back to the Unification Wars on our Ancient Terra. This means the Plasma Repeater is one of those relics that are often difficult to replicate and mass produce, and/or possess too much firepower to be reliable to handle. Thankfully, the Plasma Repeater isn't too much to handle, and it is one of the few weapons from the Age of Strife which the Dark Angels were exclusively sanctioned to deploy. In fact, there were certainly a lot of these around back then too, since despite being relics which you'd expect only a notable few such as commanders to carry (like the Plasma Blaster with the Tartaros Terminator Sergeant), the Dark Angels may fit entire squads with Plasma Repeaters instead of the usual Plasma Guns. We do not know what the Plasma Repeater actually looks like, but judging from the strength, rate of fire and weapon type it'll most likely resemble a scaled-down version of the Phased Plasma-Fusil, albeit with a second barrel making it a double-barrelled Plasma submachine gun; sweet mother of cheese does that sound fucking rapetastic.
Sadly, the crunch doesn't match that rapetastic prospect. Their primary failings include short range, comparable to what the Plasma Pistol can pull off, but since it's salvo it lowers to a meager 6" if the unit moves. Their second failing is that only two units can take them (for the moment) since you can only give them to units who can take Plasma Guns (and no, Twin-linked Plasma Guns don't count because Forge World says so). Legion Veterans have better weapons and rules, so that should leave you with Support Squads as your only choice. On the upside, Twin-linked makes Gets Hot a non-issue 90% of the time, though it doesn't make up for the downsides. There are two ways to use them: the first is to spend a few points and cash on a Rhino and hope that your opponent is blind or stupid enough to let them get near whatever's toughest/priciest; the second way is to put them in a Drop Pod and hope they drop in close enough to be near the priciest/scariest thing your opponent has, and also hope your opponent forgot to buy Augury Scanners.
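To see why Twin-linked takes most of the sting out of Gets Hot, here's a rough sketch of the dice math; it assumes the 7th edition interaction where the twin-linked weapon re-rolls the 1 and only suffers Gets Hot if the re-roll also comes up a 1, so treat it as a ballpark rather than a rules citation:

```python
# Gets Hot odds with and without Twin-linked, assuming the 7th edition
# interaction where the twin-linked weapon re-rolls the 1 and only
# suffers Gets Hot if the re-roll also comes up a 1.

p_plain = 1 / 6               # ordinary Gets Hot chance per shot (~16.7%)
p_twin_linked = (1 / 6) ** 2  # needs two 1s in a row (~2.8%)

print(f"Plain plasma: {p_plain:.1%} per shot")
print(f"Twin-linked:  {p_twin_linked:.1%} per shot")
```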
For a distinctive tabletop representation, you will probably have to make do with painting your Plasma Guns differently or something. You may have the marines carry Plasma Incinerators to differentiate, or you may get these Plasma Repeating Shotguns made by White Wyvern Designs.
Exclusive to the Dreadwing Interemptors (interēmptus = abolished, destroyed, killed), the Plasma Burners were used during the legion days of the Great Crusade and Horus Heresy. As the name suggests, it's a plasma flamethrower. While not a new concept – with the Custodians strapping a couple on a fist attached to a behemoth of a Dreadnought called Telemon – it is likely produced in a much greater volume since the Interemptors we see here all carry one. They're kinda like Destroyers, and a flamethrower that throws out star fuel certainly sounds like it has a decent amount of destruction to it. Appearance-wise, it looks a lot like the Plasma Repeater bit we got from Google above there. However, the fuel canister and the tube feeding said fuel to the mouth of the barrel aside, the barrel of the Plasma Burner does seem similar to the Phased Plasma-Fusil, specifically how the bottom coil is exposed. The only other weapon that is similar to the Plasma Burner here is the Splash Burna used by the Orks, and do note that the thing's from the Eternal Crusade game. Now, back to the fuel, we can infer that the Plasma Burner is actually a Combi-Plasma-Flamer. The fuel from the canister goes to the end of the barrel where the plasma comes out. And by now the reader should have already noticed that this thing does not have the usual pilot light to light the fuel up. Instead, it seems to be the plasma that's going to light the fuel up. Oh, and do note as well that the Splash Burna does not work that way. The Orks just used their plasma coils to heat up their fuel before discharge. The Plasma Burner here actually discharges plasma with the fire.
For all intents and purposes, it is the SPESS MEHREEN's equivalent of the Ork's Skorcha with the blowtorch mode permanently enabled, only you know, far more fucking reliable. As you can imagine, having the flamer template mix in with the AP bonus of a plasma gun sounds fucking ridiculous, just imagine having a squad of these facing off against horde armies; you can kiss those hordes goodbye.
At that time, the Plasma Blaster was essentially a Combi-weapon: two Plasma Guns joined together. It is not a linked weapon, which means you do not fire both of them at the same time, but one at a time. And because of the massive power drain, the Terminator Armor with its bigger power pack is the perfect candidate to carry it. A Plasma Blaster was wielded by Saul Invictus, Captain of the Ultramarines 1st Company, before he was nommed by the Tyranids (Rest in Pieces) and Severus Agemman took his place, but sadly not his Plasma Blaster. The Plasma Blaster was later revamped into a Great Crusade / Horus Heresy era weapon-relic depending on which millennium you're in. During the Horus Heresy, one may find Terminators carrying them – mostly Tartaros – and Contemptor Dreadnoughts having one installed in their claw. Though the range is rather short compared to standard Plasma Guns, the higher rate of fire the Plasma Blaster affords gives it a tactical edge when engaging in assaults. What differentiates this "Combi-Plasma" from other Combi-Plasmas is that this thing is literally two fully-functional Plasma Guns that can last the entire battle or more if you still have flasks left, while the Plasma Guns fitted to general Combi-weapons are only worth a single shot. Naturally, you are wondering why the combi-plasma doesn't just take a cue from the plasma-blaster (or combi-bolter or stormbolter) and replace one of the boltguns with an actual plasmagun or even a plasma pistol, or even just put the ammo outside of the attachment instead of inside so you can reload a combi-plasma in combat easily. The reason is that GW has to gimp the Imperium as much as possible with sheer stupidity or else they wouldn't have a grimdark setting and their itchy ball sacks would never get that sadistic scratch they crave.
Fusil is a type of light flintlock musket, hence the term for a person using one, "Fusilier". The word "Fusil" itself is derived from "Foisil" which means "Flint".
Unlike the previous Plasma Caliver, the Phased Plasma-Fusil is the proper Plasma Gun equivalent of the Mechanicum in terms of application beyond infantry. The 30k Mechanicum's favorite weapon in the 40-watt range, the Phased Plasma-Fusil is tremendously effective against Marines as you might expect. While the Plasma Caliver is only carried by the Skitarii, the Phased Plasma-Fusil is also mounted on vehicles apart from being an option for Thallaxii. The most distinctive features of this gun are the fact that it has two barrels and the fact that it is slender. You can probably tell that it looks pretty unremarkable when not carried by the relatively smaller Thallaxii and Tech-Priests (and their Servitors). So, vehicles: the Minotaur artillery tank of the Ordo Reductor may use the Phased Plasma-Fusil as its side weapon; and although technically not exclusively Mechanicum, the Magaera variant of the Questoris Knights has one in place of where the Heavy Stubber or Meltagun should be. As one of the few Plasma weapons that do not Get Hot at all, it is typically treated as an Assault 3 weapon because most troops equipped with the Phased Plasma-Fusil are Relentless for some reason or another. An extremely solid choice to increase the fire rate of Thallaxii or to make a death star of Myrmidons roll up in a Triaros and start tearing into Marines with two of these Fusils each.
Also shares the name with a Hrud (Warp) Plasma weapon favored by mercenaries and Inquisitors, although this is likely coincidence or owed to naming conventions, as the two weapons are quite different.
Because, apart from the Dark Angels, Space Marines are rather lacking in terms of Plasma weaponry, Primaris Marines have their own dedicated squad composition with its very own Plasma guns.
Imperium-produced Plasma weapons all fire searing bursts of energy, but the Mark III Belisarius-pattern Plasma Incinerator primarily used by the Primaris Hellblaster Marines is the most advanced of its kind, firing a potent armor-melting blast with no risk of overload for the first time since the days of the Horus Heresy. The same cannot be said when the Plasma Incinerator is fired on its deadly overcharged setting, however. So, this Plasma Incinerator here can fire at more or less the same potency as an overcharged Plasma Gun, with a little extra range, without any safety concerns to trouble you, but it still has an Overcharged mode just in case that Hellblaster Marine is feeling a little lucky for that extra punch. The Plasma Incinerator distinctively features a part which looks to be a high-speed autoloader similar to the Assault Bolter as well as the Auto Bolt Rifle. Unlike most Plasma Guns out there, the Plasma Incinerator uses two hydrogen flasks at a time instead of the conventional one at a time. Perhaps the high-speed autoloader is there to push the extra plasma out. On the other hand, one of the two might be the actual flask in use, while the other is a spare attached close by to improve reloading speed. Additionally, the gun comes with a handle for you to carry it with convenience while you wait for the thing to cool down.
Possibly either Mark I or Mark II of the Belisarius-pattern, the Assault Plasma Incinerator is more at home with the regular Plasma Gun where range is concerned. It is, however, a little bit different in terms of capability when compared to the regular Plasma Incinerator. The fuel flask intake is tilted clockwise, possibly for a better grip. The Assault Plasma Incinerator is also fitted with what are presumably laser and other sophisticated targeting sights because, with the reduced rate of fire, you've got to be doubly sure of making that hot shot count.
Possibly either Mark I or Mark II of the Belisarius-pattern, the Heavy Plasma Incinerator is at about the same level as a Plasma Cannon in terms of punch. Presumably, the extra range comes from the juices of the power pack which is connected to the gun. Shunning magazines altogether, this gun/cannon uses a feed line connected to a backpack/fuel tank that is stuck right on top of the power pack. The backpack also features three extra ports. These are possibly additional hydrogen flasks that let the Hellblaster Marine run the Heavy Plasma Incinerator like a regular Plasma Incinerator when the backpack runs out of fuel.
Even this early in the list, our next Plasma gun is already a full-fledged vehicle-mounted weapon because it's too large to carry around.
The exceptions to that rule are the Space Marines because they're big and wear Power Armor. Although the Sisters of Battle do wear Power Armor as well, we can probably infer that the power packs carried by the Sisters are not enough to charge the Plasma Cannon up, while the larger Space Marines with larger power packs may afford to do so (also because Space Marines are genetically engineered super soldiers, while the Sororitas are comprised of MOSTLY mortals). Outside of the Power Armor ring, the Plasma Cannon remains a weapon to be installed on vehicles. Apart from the Servitors and influential/fortunate Hive World gangs, you won't see a Heavy Weapons Team with Plasma Cannons because you have to remember that this technology is pretty rare, and it shouldn't be installed on something less protective than the hull of a Leman Russ Battle Tank and, strangely enough, an Armored Sentinel. Of course, Dreadnoughts may use this also because they're Space Marines made even larger. By being a larger Plasma Gun, the Plasma Cannon delivers destruction over a much larger area; several individuals may be caught in a single blast. The blast itself is also a little different from the usual Plasma Guns. Instead of a small "pulse," the Plasma Cannon fires out a larger "ball" of boiling nuclear energy which looks more or less like a miniature star, hence the "Sun Gun" moniker.
8th Edition introduces a couple more Plasma Cannon variants, mostly for representing a different effect when things go wrong just because the safety is turned off.
Continuing the early firearms theme at the size of what we would call cannons for the Mechanicus is the Plasma Culverin.
There seems to be a running theme between the Mechanicus and rapid-firing Plasma weaponry, because the Plasma Culverin is also the Dakka-friendly kind on a bigger scale. In fact, the whole thing looks just like the Plasma Caliver in design concept. Since it is much bigger, it will be too unwieldy for an average Skitarius to carry around. Unless said Skitarius happens to be an Ogryn or something, the Plasma Culverin currently sees sole usage with Servitors, specifically Kataphron Battle Servitors, more specifically the Destroyer kind that actually carries around the big guns like this or the Heavy Grav Cannon. Plasma Culverins sacrifice the range of their cannon-pattern equivalents in exchange for a higher rate of fire. Only the Adeptus Mechanicus dare coax such rampant destruction from their Plasma weaponry, yet to the adepts of Ryza in particular, the scars they leave on wielder and war zone alike are considered quite normal. Since it is essentially a larger Plasma Caliver, and Skitarii usually handle weapons which are outright lethal to people who aren't half-machine, the Plasma Culverin might actually be that much more dangerous to the wielder, and so it is carried by Servitors, who are half-dead on top of already being half-machine.
Because of its low damage but high rate of fire, the Plasma Culverin is applied as a Plasma Gatling to "accommodate" horde armies.
What makes the Plasma Destroyer different from your regular Plasma Cannons is that it is big enough to be considered a main gun.
Certainly, the Plasma Destroyer is too big to be carried by Space Marines or mounted on flimsy walkers like Sentinels. The only pattern we know so far for this class of Plasma weaponry is the Executioner-pattern Plasma Destroyer mounted on the eponymous Leman Russ loadout; Space Marines used to install this same type of cannon on their Deimos Predators too, but those tanks have been reduced to relics by the present age. The following is why plasma tanks are so hard to come by. The Plasma Destroyer is produced at Ryza, a Forge World that is the capital of plasma and magnetic containment field technology, both of which are essential components when you want to make Plasma weapons. Concerning performance, the Plasma Destroyer is rather volatile for a cannon, with only 12 shots' worth of plasma before running out of juice; to recharge, it uses photonic fuel cells, one at a time. Heat management is a critical factor when handling this weapon. In fact, the Leman Russ needs to be modified with heat vents so as not to melt its insides out. There is also an emergency coolant system by the rear of the cannon, but since the damn tube is outside and vulnerable to damage by stray fire or something, the fail-safe usually doesn't work, so most tank crews just ditch the tank when it starts to get too hot and the Commissar isn't watching.
There seems to be a distinct difference between a couple of Plasma Destroyers: the Executioner Plasma Cannon that is mounted on the Leman Russ chassis deployed by the Imperial Guard (which we have taken the liberty of referring to as "Plasma Destroyer" since it is at about the same size anyway), and the Plasma Destroyer that is mounted on the Predator chassis used by Space Marines. We can see this by appearance alone: where the Executioner Plasma Cannon looks to be a giant Plasma Cannon, the Plasma Destroyer seems to be more like the next size up from the Plasma Blaster. What makes the Plasma Destroyer more reliable? It doesn't get hot. Recently, this translates to not being allowed to take your chances and overcharge its shots.
Plasma Storm Battery
Just like the Plasma Talon, the Plasma Storm Battery is also a twin-linked Plasma weapon of sorts.
Unlike the Plasma Talon, this thing is a much bigger talon used by the Ravenwing on their much bigger Land Speeder Vengeance for an occasion where there is a much bigger prey that needs a much bigger Plasma bombardment. Yes, complementing the Heavy Bolter or Assault Cannon of the Vengeance is a couple of Plasma Cannons. Like a supernova born amid the fire of battle, the blast of the Plasma Storm Battery annihilates anything it touches. Whether spitting multiple bolts of energy or loosing a single, monstrous blast, this weapon spells death to all before it. A Plasma Storm Battery is a relic weapon long held in the armory of The Rock. Normally a Land Speeder shouldn't be able to bear the weight of two Plasma Cannons and another Astartes controlling the turret, but not only is the chassis of the Land Speeder Vengeance much bigger, it also has superior lift-engines to carry the heavy weaponry. By service records, the Plasma Storm Battery is the senior equipment compared to the Land Speeder it is mounted on, which was discovered recently at around M36. The Dark Angels just noticed that the new big Land Speeder can carry the Storm Battery and promptly installed it. The original purpose of the Plasma Storm Battery is unknown (we're guessing it was for killing stuff), though we can perhaps suppose that it used to be mounted on larger relic vehicles such as Deimos Predators.
Do note that it is not recommended to fire this weapon on Supercharge; since all Imperial armies ran out of vehicle weapon-coolant somewhere during 4th Edition, firing this baby in that mode will likely result in multiple mortal wounds, quickly killing the land speeder itself.
Hellfire Plasma Cannonade
As the Devastator of Dreadnoughts, of course, the Deredeo chassis has access to a true Plasma Cannon equivalent, and it shoots like a machine gun.
Alright, that is probably an exaggeration. The Hellfire Plasma Cannonade is a development from existing Plasma-based weapons technology, and it was still in its experimental, field-testing stage by the onset of the Horus Heresy. The Hellfire is clearly the Imperium's attempt at making Plasma cannons that can fire a lot of miniature stars at a time. It sacrifices range (something of a standard quality for the Deredeo's armaments) for increased armor penetration as well as the aforementioned rate of fire. As a bonus, it may also fire one huge blast like a Plasma weapon should be able to. Fast-forward to 40k with 8th Edition, however, and the Hellfire Plasma Cannonade has lost its ability to fire one huge blast, which isn't really as bad as it seems since the Hellfire is supposed to be a Plasma cannon battery that shoots out a lot of Plasma in a short period of time. And for some reason in the present age, it is referred to as a Carronade instead of a Cannonade. The present rendition is likely a typo, since a Carronade is a short but powerful cannon mounted on 18th–19th century warships, while a Cannonade refers to a continuous discharge of heavy gunfire. Clearly, despite being short-ranged like a Carronade, the Hellfire is a Cannonade with its high rate of fire.
Another Primaris-exclusive Plasma weapon, since a couple of these can be mounted on a Primaris-exclusive vehicle.
Possibly a derivative of the Plasma Incinerator family along with the Plasma Exterminator, the Plasma Eradicator with its similarly intimidating name is perhaps a cousin to the Macro- variant. The Plasma Eradicator may be installed on both side turrets of the Astraeus Super-heavy Tank favored by the Primaris Space Marines as an alternative to the Las-rippers. Concerning firepower, the Plasma Eradicator is more or less similar to, but not the same as, the similarly sized Macro Plasma Incinerator, but the super-heavy tank makes up for that by being heavy enough to mount two on its sponsons. Since the design of the Astraeus was actually based on a recovered STC with some additions here and there from Cawl's part, it is likely that the Plasma Eradicator was one such addition, modified a little to become a weapon fitting for a heavy hovering tank and not a heavy Redemptor Dreadnought. Since the Plasma Eradicators are not only cheaper than the Las-rippers but also provide a whopping extra 12" of range, they are the better choice to live with 90% of the time; being within the 24" range would put the Astraeus in danger of being chopped up by enemy Knights or other equivalents, not to mention making it a prime target for most Plasma weapons from the other side.
You'd think a tank that big would be able to feed more juice into its Plasma Eradicators.
Macro Plasma Incinerator
The Plasma Cannon-sized member of the Plasma Incinerator family alongside the Heavy variant; notice the "Macro" prefix if the look of it is not obvious enough for you.
In a similar fashion to how the Heavy Plasma Incinerator is kept relatively small for better handling by the Primaris Hellblasters, the Macro Plasma Incinerator is simply a bit too large to carry alone. In their army, if Space Marines can't carry the weapon, then the Dreadnoughts will. For this particular cannon, not just any Dreadnought will do. It has to be the new Redemptor chassis, which is also exclusive to the new Primaris Marines, possibly due to being designed exclusively for it; regular Dreadnoughts can probably carry it, but likely don't have enough juice to fire the thing. For now, only Redemptor Dreadnoughts are compatible with the Macro Plasma Incinerator by virtue of having a better engine. To highlight the exclusiveness, it might have been tailored specifically for the Redemptor Dreadnought, which is a good reason why we don't see this thing used elsewhere. Appearance-wise, it has a total of five flask-like objects attached to it. The three big ones are likely the fuel cells, which are also likely used one at a time. The other two flasks might either be spares or possibly some sort of coolant. Since Primaris Hellblasters tend to handle Plasma weaponry for the most part, the Macro Plasma Incinerator is a great familiar option for a Redemptor Dreadnought who was a Hellblaster back when he still had more limbs to speak of.
The Macro Plasma Incinerator is essentially a giant plasma gun made (possibly exclusively) for a pattern of giant Dreadnoughts. With a little adjustment, other big Dreadnoughts of the present age might be able to use it.
Hellex Plasma Mortar
Instead of the usual cannon, the Hellex is a mortar, which means it lobs the Plasma up in an arc instead of firing it straight ahead.
Now, Plasma Mortars are certainly not new. Starforts for one may have them as part of their collection of batteries. In fact, it isn't that strange for an actual mortar or two to be used on naval ships of the past – the bomb ketch is a good example. As shown in Eternal Crusade, the Plasma Cannon may also double as a Plasma Mortar by aiming significantly higher, which is probably why there isn't a Plasma Mortar proper available. What makes the Hellex much more in line as a mortar is its payload. Deployed by Siege-Automata of the Legio Cybernetica under the Thanatar class, the Plasma coming from the Hellex they carry comes in a shell, but not the regular kind. Shunning the crude shells favored by most siege guns, the Hellex Plasma Mortar fires high-density charges of burning Plasma in programmed trajectory arcs timed to detonate over their targets. These airbursts create rolling waves of incinerating energy which engulf the surrounding area, burning through anything they encounter. The Hellex doesn't fire out raw Plasma. It shoots out a shell filled with raw Plasma and detonates that over its targets – it fires out a star that promptly explodes. This results in a Plasma Wave that will definitely do a lot of damage over the perimeter. Though debris might be able to cover you from regular mortar shells, you might need to pray to your god(s) for this one.
There are some additional points for the Hellex Plasma Mortar. On the tabletop, the Plasma Wave essentially means you (or your opponent if you're the one who fired it) have to re-roll your successful cover saves. Because being stationary means more time to properly balance itself, the Thanatar will be able to shoot the Hellex twice as far if it didn't move that turn. Finally, when assembling the model, the end of the barrel may be flared out if you prefer the sight of a blooming flower that shoots out miniature stars a few moments before going supernova over the poor sods at the other side.
8th Edition brought out a lot of new Plasma weapons, and the Plasma Decimator is another one of them.
The Plasma Decimator is mounted as one of the two primary weapons of the Knight Castellan, Dominus-class chassis. It is a huge and obliterative weapon capable of bathing swathes of the battlefield in searing energies and reducing the enemy to glowing ashes. Because the Castellan is a Dominus Knight, it possesses dual plasma cores, which translates to more energy to feed into the more and larger weapons these Knights carry to battle. Appearance-wise, this cannon seems to be the next size after the Plasma Culverin. It is one of the few instances where a rather Mechanicum-exclusive weapon makes its way to other Imperium forces, Imperial-aligned Castellan Knights in this case. Pilots of the Knight Castellan are adept at regulating the flow of that abundance of plasmic energy to this potent weapon, even if it risks rustling the Machine Spirits' jimmies in order to unleash an especially ferocious blast should the situation demand it. Simply put, it's just a fancy description for the signature Supercharge affiliated with most Plasma weaponry. Fundamentally speaking, the Plasma Decimator is one of the smaller, pseudo-Titan-sized middle fingers graciously signed to an entire horde of MEQs charging their way. With the Plasma Decimator, you will be shit-firing plasma from your Knight Castellan at up to 48" across the board.
Of course and as stated, like all Plasma should be, even Knights have to deal with the volatility of that class of weaponry, so supercharge this gun at your own risk and volition.
Unlike the usual kind, this Plasma Decimator is exclusive to Mechanicum-aligned Knights Castellan.
So, why is it named after a certain Tech-Priest we know? You see, while our illustrious Archmagos Belisarius Cawl was working on some advanced Plasma weapons technology that ended up with the Primaris Marines – particularly to reduce the size to increase portability – somewhere in the centuries that the work took, he made this singular Plasma Decimator. Its enhanced containment fields and Machine Spirit data-shackles allow it to generate even more lethal volumes of energy than a typical example of such a weapon. The cannon itself was likely assembled as a testing bed for the containment fields which are crucial to Plasma weaponry, and/or the data-shackles that supposedly keep the Machine Spirit in line so that the system doesn't break down and explode or something. After confirming his results with a session or two of test firing, Cawl probably went back to his labs and told the Knight Castellan to take Cawl's Wrath home in a show of generosity, or the Knight just went home with Cawl's Wrath because his/her House sponsored that part of the project for the better Plasma Decimator. Since Cawl spent most of his working hours over the thousands of years on Mars, the Knight House in question is likely House Taranis, or some other unknown House that just so happens to be on Mars.
On the tabletop, it is simply a more potent Plasma Decimator available to Knights Castellan whose loyalties lie to the Machine God.
With a seemingly less intimidating name, the Plasma Blastgun is where we jump to the super-heavy class of this kind of weaponry. It also seems to be the next size after the Plasma Destroyer.
The Plasma Blastgun is, by all means, sized for a Titan. In fact, Warhound Titans may carry these as one – or both – of their primary armaments, and the larger sizes such as the Reaver and the Warlord may mount them as secondary weapons by what we anatomically call the shoulders. Continuing the tradition, the Imperium unsurprisingly straps this gun on tanks as well. The Stormblade, a Shadowsword variant, distinctively uses it as its main gun. The Macharius chassis, though smaller in size, also mounts the gun in a similar manner, though the variant in question is a little inferior. It was discovered relatively recently as a Plasma Blastgun that is more compact and easier to maintain. The prospect of churning out more tanks with guns that fire out hot plasma came with a drawback – only about three Forge Worlds know how to make the damn thing. Speaking of variants, the Plasma Blastgun on the Stormblade was exactly the same Plasma Blastgun used by Titans. Recently, a distinction has been made since it is obvious that the colossal Titans have more room to accommodate support systems such as power and cooling than a super-heavy tank. The Plasma Blastgun on Titans is obviously much better to shoot with. Despite the Stormblade version being less powerful overall, the difference between the two is, surprisingly, range when they are compared on overcharge.
It is not recommended to stand near a Plasma Blastgun unprotected since when this thing vents out heat with you standing nearby, fourth-degree burns will probably be the least of your worries.
The Plasma Destructor is an even bigger Plasma cannon that seems to have been forgotten and mothballed with the Plasma Annihilators.
While the smaller Titan classes such as the Warhound and the Reaver are stuck with the Plasma Blastgun, Warlord Battle Titans have the Plasma Destructor here as an option. Fun fact: way back in the ancient days with Adeptus Titanicus, this gun was called Plasma Cannon, and the smaller Plasma Blastgun was just Plasma Gun. Later on, in the 2nd Edition of Epic with Codex Titanicus, they acquired the new names we have here. The Plasma Destructor is so powerful that even a Titan cannot do anything else for a while after firing; it takes that much power to fire the Plasma Destructor. As of late, this quality isn't mentioned in its Apocalypse incarnation, though the high power demand should secure its place with nothing less than the Warlord size. And since the Plasma Destructor is considered to be sized for Warlord Titans, some have suspected that the modern Sunfury Plasma Annihilator equipped as a primary weapon on the modern Warlord Titan is in fact a Plasma Destructor. But since there hasn't been an official model for this distinctively-shaped Plasma cannon since the Epic age, the Plasma Destructor is merely a fading memory while other Plasma weapons have moved on with models we can actually recognize and say, "Yeah, that's definitely a Plasma."
Footnote: unlike its incarnation in Apocalypse, the Plasma Destructor back in those distant days could also be shoulder-mounted.
This is the point where the gun is big enough to be worth mounting as an emplacement on a bastion.
The Plasma Obliterator is just about that size. It may be mounted on Aquila Strongpoints. These things are really, really rare. Although Plasma weapons are established to be pretty rare on their own, this one has a special garnish on its cake. Aquila Strongpoints with Plasma Obliterators installed were designed by Rogal Dorn during his fortification spree for the Horus Heresy. Because Plasma weapons are pretty good at killing traitor Space Marines, the man himself made sure to pop out some huge varieties on defense lines. Sadly, ten thousand years is a heck of a long time, and the Mechanicus doesn't seem to be building new ones to replace the old ones which were destroyed. Those cogboys are probably more at peace with reserving giant Plasma guns for their God-Machine Titans. Since the Plasma Obliterator is only about second to the Plasma Annihilator below in terms of sheer power, the size of this cannon might be that of a Plasma Destructor. Plasma Destructors are also kind of short for something that big, and this relatively short cannon fits that criterion as well. Although Dorn is a Primarch, we don't really hear that much about his engineering prowess; he probably is not at the same level as Manus and Vulkan. The Plasma Obliterator is less likely a new weapon designed entirely by Dorn and more likely a Plasma Destructor appropriated for fortification duty by Dorn.
Since Aquila Strongpoints are reserved for things that are big and bold like Macrocannons, the Plasma Obliterator is perhaps akin to the Plasma Projectors mounted on voidships alongside said Macrocannons.
Walking cathedrals deserve no less than a gigantic cannon that fires out giant gouts of Plasma on one arm (or both).
A weapon of terrifying and indiscriminate power, the Plasma Annihilator is capable of incinerating entire cityscapes and rendering the strongest armor into steaming vapor. Developed from designs intended for the broadside batteries of void warships, Plasma Annihilators are able to be mounted only on the largest of the Imperium's war machines. Since the Plasma Blastgun is already the limit for super-heavies such as the Shadowsword chassis, the Plasma Annihilator seems to be reserved for nothing less than a building, or the kind that moves such as the bigger Battle Titans, as they have both the reactor-strength required to charge them and the motive power to wield them into the fighting. The blatantly intimidating name is also not just for show: the Plasma Annihilator annihilates. Not only is its payload big and bold, creating humongous craters as well as turning surrounding sands into glass, it also has a high rate of fire to keep that payload coming. Indeed, the weapon is so loud that the glass might as well shatter a few moments after being made by the previous shot. Emperor-class Battle Titans can afford that much destructive capability simply because they have a "dynamic" reactor, which essentially means they have much more power than lower-tier Titans.
The Sunfury pattern is a recent rendition of the Plasma Annihilator mounted as the primary gun of Warlord Titans.
Chaos Plasma Weapons
Chaos Plasma Weapons are basically the same as the Imperium's, but with more spikes and Chaos decorations, and they fire crimson red plasma rather than the Imperium's blue. However, they do have a few Plasma weapons exclusively designed by Chaos, for Chaos.
A Plasma-ish weapon that doesn't seem to settle on whether it wants to be called "Æther-Fire" or "Æther-Flame".
Either way, the Æther-Fire Cannon is an option for the Castellax-Achea Battle-Automata to take instead of the usual Mauler Bolt Cannon loaded with psychoreactive toxins (also known as nerve agents). While the Castellax-Achea itself was developed by a Forge World with a Chaos Dwarfs-ish name, Zhao-Arkhad, the Æther-Fire Cannon is a later development by the Techmarine artificers of the Pyrae Cult – Bright Wizards of the Thousand Sons. This obviously modified Plasma weapon was developed with occult lore as well as the unique "techno-arcana" of the aforementioned Zhao-Arkhad. Aside from the psychic robots, Thousand Sons who happen to carry a Plasma Cannon may switch to this Æther-Fire Cannon instead. On the tabletop, the Æther-Fire Cannon basically adds ten points of Soul Blaze to a Plasma Cannon. If this is bought, all such guns in the unit must be upgraded in the same manner. Although it is nice of them to push the under-used Soul Blaze rule, a whole squad of 5+ Plasma Cannons is likely going to erase or cripple anything remotely susceptible to Soul Blaze anyway. Given that a whole squad of Plasma-Cannoneers will fork out 100pts for what is basically one to three Bolter hits a turn... you can see why Soul Blaze is under-used in the first place. If you've got a Castellax-Achea rolling around with 15pts to spare, you might want to consider this.
It is unknown whether the present Thousand Sons still have this psychic-Plasma Cannon at their disposal, or what kind of work thousands of years in the Warp have done to it.
GW wanted to give Chaos Plasma a distinctive name. Thankfully, there's already a "Plasma" in "Ectoplasma".
For the most part, an Ectoplasma Cannon can be found as a weapon on the Forgefiend Daemon Engine when the guys who make it want to try something else instead of the usual Hades Autocannons. The pulsing energies of a Forgefiend’s furnace are not always employed to produce solid ammunition. Some sport flex-sheathed Plasma weapons of ancient design, weapons so large they would look more at home on a light aircraft than a land-bound walker. Those Daemon-beasts the Imperium have nicknamed Cerberites bear no less than three of these Ectoplasma Cannons, one mounted on each weapon-limb and one jutting out from their maws. Gargoyle-mouthed and drizzling balefire, the searing energies these devastating weapons hurl outwards are a hybrid of Plasma and burning ectoplasm channeled straight from the Forgefiend’s tainted heart, consisting of raw daemonic energy from the Warp and the fire of tortured, screaming souls. Presumably, the Forgefiend eats stuff to make bullets for the Hades, so it wouldn't be too far-fetched to say that Forgefiends with Ectoplasma Cannons eat, well, people. The Ectoplasma Cannons were once prized artifacts, dating back to before the Heresy, but the Warpsmiths have perverted them into something far worse. In practice, it has certainly become better as of late in regard to not getting hot.
The Hellwrights themselves seem to also have access to Ectoplasma technology. A couple of variants, both of which are notable weapons hardly found in the present age, have been corrupted into Ectoplasma alternatives. Although we do not know for sure what exactly the Ectoplasma Cannon used to be, we are certainly sure about the following two. The Ectoplasma Blaster is just that: corrupted Plasma Blasters mounted in the fists of Contemptor Dreadnoughts. There is also the Ectoplasma Battery, which is a warped variant of the Hellfire Plasma Cannonade mounted on Deredeo Dreadnoughts. Unlike the Ectoplasma Cannon, these two will still overheat if you're not careful/lucky with them. Perhaps it has something to do with how Dreadnoughts are not Daemon Engines.
And yeah, yeah, cue the Ghostbusters theme.
Tau Plasma Weapons
The Tau make extensive use of Plasma technology. In fact, almost all of their weapons are plasma-based in one form or another. Tau plasma weapons generally shoot the same globes of magnetically-shielded overheated plasma as Imperial ones, but do not suffer from the "Gets Hot" rule, as they are less powerful than Imperial models and are somewhat lighter. Unlike Imperial plasma weapons, Tau ones rely only on energy to fire, rather than a combination of energy supply and chemical fuel.
Fun With Fluff
Thanks to 8th's new Overcharge mechanics, T'au plasma weapons got a substantial nerf. The lower power was a reflection of the fact that T'au weapons didn't overheat. Now that people can choose whether or not their weapons overheat through the Overcharge profile, T'au having a lower Strength is both meaningless and a secondary nerf to Crisis and Broadside suits, who have already suffered nerfs via cost increases. Plasma weapons just aren't that worth it anymore.
The Plasma Rifle is essentially a Plasma Gun with its safety engaged. If you want their "Plasma" weapon that does blow up like a Plasma should, check out their Ion arsenal below.
Indeed, the Plasma Rifle is the Plasma Gun equivalent for the Battlesuits. The only difference it has from our usual Plasma Gun is that this thing is actually safe, relatively speaking, which translates to a lower power output compared to the Plasma weaponry of their opposition. In practice, this is the only proper Plasma weapon widely available to the Tau, since their Plasma Cannon below was still in its field-testing stage by the time of the Taros Campaign. Although it is called a "Rifle", the range of this gun does not surpass our usual Plasma Gun at all. The lack of punch compared to what the Imperium has to offer is probably why the Tau upscaled it into their Plasma Cannon. For the XV8 Crisis Battlesuit, the Plasma Rifle is available as a primary weapon option. A couple of Plasma Rifles may be installed instead of the Smart Missile System over the shoulder of the XV88 Broadside Battlesuit – the same can be said for the XV104 Riptide Battlesuit as well. Commander Longknife also has his own experimental variant to complement the experimental Battlesuit, and his comes with two barrels. Recently, the Plasma Rifle used by Commander Farsight now has "High-intensity" attached to it, no doubt to differentiate it from the rest of those used by the other Battlesuits.
The High-intensity Plasma Rifle is a much better variant overall: more powerful, and with the same range as the Plasma Incinerator as well.
Plasma Accelerator Rifle
The lack of "Plasma" weaponry with the Tau does not mean they neglect working on this branch of their tech tree. The Plasma Accelerator Rifle is one of the recent developments available on the field.
In fact, the Plasma Accelerator Rifle is also one of the deadliest inventions from Bork’an’s renowned science divisions, blending pulse-induction technology with a high-yield plasma generator. No, the gun does not have a metal detector mounted on it, probably. "Pulse-induction" most likely refers to their Pulse technology below which also shoots Plasma, though indirectly since it's not Plasma that comes out of the barrel. The Plasma Accelerator Rifle, on the other hand, uses Pulse technology in conjunction with proper Plasma technology. The result is a long-range armor-piercing weapon that is highly effective against both infantry and light vehicles. Now, plasma acceleration is actually a thing, and it works through an electric field, which we may correlate to the embedded Pulse tech. This E-field in particular is also special in its own way, since it also must be associated with the electron plasma wave (plasma oscillation) or other kinds with "high-gradient plasma structures" as well. Do not actually take any of our word for this, though, since the closest thing most of us get to being scientists is using a search engine. Relatively speaking, since the Tau practically steamroll through achievements, it really took the Earth Caste quite a while to figure out that they should try combining two similarly "Plasma" weapon systems together.
It seems to be only a matter of time before those nerds figure out how to incorporate their Ion technology as well and unite the Plasma trinity in a single frame. Right now, the Plasma Accelerator Rifle is probably still in the early stages of field-testing – a single Battlesuit may be found with one attached in a Bork'an force, and its effectiveness is at about the same level as Farsight's High-intensity Plasma Rifle with only minor differences.
Tau Plasma Cannon
This is basically an enlarged Plasma Rifle, a Plasma Rifle grown a tad bit too big.
Because of this, it is more powerful than its little kin and also fires more rapidly. However, the Plasma Cannon sacrifices some of its armor penetration for this advantage, making it similar in damage output to an Ion Cannon. Unlike Imperial Plasma Cannons, this Plasma Cannon does not cover a small (or large) area with a Plasma blast; it fires in bursts, which means it is more suited for vaporizing singular big targets rather than groups of heavy infantry stupid enough to clump up. Since the Tau Plasma Cannon is essentially a main gun for a tank, namely the TX7 Hammerhead Gunship, a more proportionate comparison on the Imperium's side should be a Plasma cannon that is also worthy of their tanks – the Plasma Destroyer. No matter the weapon, the Tau Plasma Cannon is clearly better than any of the Gorilla counterparts in one factor other than not getting hot from playing it safe: for its size, it shoots pretty far. The smallest gun on the Imperium's side that can shoot as far as their Plasma Cannon is the Plasma Decimator, which is fitted on the larger Dominus-class Knights. Along with the Fusion Cannon, their Plasma Cannon saw first blood during the Taros Campaign, where it was 'coincidentally' being field-tested. This gave the Tau a much-needed boost in their little Weeaboo hands to counteract the heavy Imperial firepower at the time.
Forge World used to produce a variant turret for the Hammerhead Gunship for the twin-linked Plasma Cannon option. But since it "used to," your only ways of getting one are finding people who sell it, finding alternative bits that look just close enough, whipping out your conversion expertise, or the universal solution.
Tau Pulse Weapons
Most Tau plasma weapons are "pulse weapons," which shoot a particle that superheats the air in its immediate vicinity due to its extreme level of excitement. Unlike what many people think, it does not fire plasma. It shoots individual, energized particles – the same thing as a Disruptor Macrocannon, except it fires a single particle instead of a shell of particles. The glow is from the particle hitting the air, effectively turning it into plasma. Pulse weapons are as much of a "basic" weapon as shuriken weapons are basic Eldar guns, or bolt weapons are basic non-cannon-fodder Imperium guns, but they are superior to both in terms of power and effective range, with the only balancing point being that they are carried by rather human-like Fire Warriors. (And Bladestorm.)
Pulse Pistol

Predictably a pistol-sized Pulse weapon, the Pulse Pistol is a far smaller version of the Tau Pulse Rifle.
As a pistol-sized Pulse weapon, it is only used by certain Tau personnel as a basic defense and hold-out weapon for desperate situations. The only time you'll ever see one of these on the board is if a Battlesuit equips an ejection system and the pilot bails out (5th Edition), on a Sniper Drone spotter (6th Edition), or carried by a Fire Warrior. Even Drones are armed with Pulse Pistols, specifically the Escort Drones used by the Water Caste. That one is understandable, since you can't really make a pacifistic impression when a bodyguard robot with two cannons is right beside you. Hilariously, this thing manages to retain the same shot-for-shot power of a full-sized Pulse Rifle. Just as hilarious is a part of 40k fluff (depending on whether you find a dead Fishface turned into a Fishcake hilarious): a lone Tau Water Caste envoy was trying to secretly kill a Smurf by pretending to talk the angry blueman down through the use of Diplomacy. This included speaking to him of Ultramarine culture and whether it would be considered honorable to kill an 'unarmed' woman and what Papa Smurf would think of this dishonor. Her weapon of choice? A Pulse Pistol. Yes, you heard us right – a Tau bred to be a diplomat first and a soldier later actually thought it was a good idea to kill a supersoldier with lightning-fast reflexes at close range with what counts as a really powerful flashlight.
Suffice to say, in a rare stroke of Awesome, Cato Sicarius of ALL people saw through her Bullshit and proceeded to flatten and crush her to death with his hueg augmented blueberry muscles.
Pulse Blaster

Unlike the other handheld variants we've seen so far, the Pulse Blaster is actually a later development in that area, which came along with the Breacher Teams.
Back when the Tau were starting up their expansion game, most fights they encountered had them on the defense, since their enemies were more on the aggressive side, such as the Orks. Everything changed when they attacked the Imperium. Hive Worlds and bunkers are usually narrow and at times not quite accommodating for Battlesuits, and Space Marines are really good at boarding ships, which similarly have little room for maneuvering. This is where the suicidal Breacher Teams and their Pulse Blasters come in. As their signature weapon, the Pulse Blaster exchanges the reach and consistency of its rifle equivalent for sheer destructive punch. It uses a two-stage process to enhance the lethality of its Plasma-based ammunition. When the trigger is halfway depressed, an invisible volley of negatively charged particles paints the target, priming it for the killing shot to come. Such victims glow with a ghostly light for a moment before their doom is delivered. A twitch of the trigger, and the Plasma payload is shot out, dragged unerringly to its destination with shocking force – far more than a conventional infantry-borne weapon could hope to achieve. At close quarters the Pulse Blaster's every shot is powerful enough to slam a gory hole through an enemy's chest or even punch clean through the side of a transport vehicle.
Crunch-wise, it is a recent Pulse tool for the Fire Caste to play with, and it resembles nothing more than an over-under shotgun married to an assault rifle. With the same rate of fire as the Pulse Carbine, this weapon is the Tau's answer to close-quarters combat, acting as a Pulse scattergun that grows more powerful the closer the target is. At its longest range, it won't even put a dent in Ork or Kroot "armor", while at point-blank range it'll tear through a Space Marine's armor along with the Space Marine inside it.
Pulse Carbine

Now this is the smallest Pulse gun you can get trigger-happy with (at least without melting down the Pistol, or the Blaster breaking your shoulder).
The Pulse Carbine exchanges the range enjoyed by the Pulse Rifle for more Dakka. Aside from the reduced range, which is almost halved, it also looks like a sawn-off Pulse Rifle; as the name suggests, the Pulse Carbine is more compact – a shorter barrel goes a long way for portability. The Pulse Carbine seems to be used exclusively by the more suicidal section of Tau infantry that has to close in to use this gun. Tau personnel who are not infantry, such as vehicle crew, do not seem to have access to the Pulse Carbine, being stuck with a small (but still deadly) sidearm in the Pulse Pistol above. Available as an alternative for Strike Teams, for Pathfinder Teams, and also Darkstrider, the size of the Pulse Carbine makes it perfect for their mode of operation, whose keyword is scouting. The smaller size also makes room for a Markerlight or an underslung grenade launcher for Photon Grenades. Outside of the Fire Warriors, the Pulse Carbine also sees plenty of use on Gun Drones in a pair, while Exploratory Drones are one Pulse Carbine short, probably because they need more juice diverted to additional exploration equipment such as sensors. After all, it is supposed to look at stuff like Space Hulks, not shoot stuff up like mean and green football hooligans.
In the All Guardsman Party, Pulse Carbines are a favored weapon due to the punch and full auto. They were disguised as lasguns, but as of the latest entries, the party has lost them.
Pulse Rifle

The Pulse Rifle used by Cadre Fireblades in particular has a larger scope, while Kroot Shapers forgo the scope entirely. The Tetra Scout Speeder used by Pathfinders also mounts two of these as a means of self-defense, or perhaps something more offensive. In terms of crunch, these things have better range and power than Lasguns and even regular Bolters. The fluff tends to respect this too, and the first Ravenor novel mentions a Guardsman who (rightly) ditched his own weapon in favor of a scavenged Pulse Rifle. Whilst it can fire on automatic, the Pulse Rifle is not that good at it. It is not something like an assault rifle that anyone can go trigger-happy with, so the gunner is expected to pick and choose his shots rather than fill the air with energy rounds, despite one power cell being worth fifty shots before needing to reload. The Pulse Rifle also gives us the explanation for its increased range: it appears to be due in part to the built-in gyro-stabilizer allowing the gunner to shoot further and more accurately – that little spherical gizmo at the end of the barrel (at least, according to Fantasy Flight Games' Rogue Trader RPG). The Longshot variant used by the MV71 Sniper Drones, on the other hand, puts further emphasis on the sniping part, along with having the range increased even further.
So, recoil is less likely to be the reason the Pulse Rifle is ineffective in automatic fire; the issue might be the rate of fire itself.
Burst Cannon

Application of the Burst Cannon is prolific among the armies of the Tau. Having been developed during their steamrolling through the tech tree, these things are EVERYWHERE.
Shortly after the Tau perfected their Pulse induction technology, which is what makes Pulse weapons work, the Fire Caste was provided with early prototypes of Pulse Rifles; though excellent weapons, they lacked the rate of fire to be repurposed as support weapons, apart from the Sniper Drones that use a variant with increased range. The Ethereals tasked the Earth Caste with developing a weapon that would enable Tau warriors to engage foes many times their number. The result was the creation of the Burst Cannon, which produced impressive results in combat trials, where the subjects on the receiving end might have been Orks green with jealousy at the amount of Dakka the gun spews out, tearing apart enemy infantry and light armor with equal ease. Now, almost every Tau vehicle has a mounting for a Burst Cannon, and it remains a favored weapon system for all marks of Stealth and Crisis Battlesuits. Appearance-wise, it is that of a quad-barreled Gatling gun – XV9 Hazard Battlesuits as well as Ghostkeel Battlesuits may carry double that number over the shoulders, since they use a variant with two barrels. Burst Cannons may also have an armored sheathing over the barrels as shown in the picture. This extra bit in particular is widely attached to Stealth Battlesuits.
Enemies of the Tau have developed a healthy proportion of respect and fear for the Burst Cannon. The weapon makes a distinctive sound when fired, the induction chamber crackling with energy as a stream of charged particles hisses across the battlefield. Imperial Guardsmen who come under fire from Burst Cannons often speak of the sound, each shot cleaving the air with a distinctive whiz-hum, not unlike the sound of solid bullets buzzing past their ear, but multiplied a hundredfold as dozens of bursts are fired each second, just like our Minigun. Unlike the Heavy Bolters and Stubbers of the Imperium with their rhythmic thumping fire, a Burst Cannon makes a continuous tearing wail, with individual shots nearly indistinguishable from each other. With heavy mountings and large plasma coils powering them, Burst Cannons are a degree more deadly than smaller Pulse weapons. Like those smaller weapons, a Burst Cannon can punch straight through most infantry armor as if it wasn't there. Given that a Burst Cannon fires a great many of these shots each minute, it is fully capable of clearing an area of infantry in the space of a few bloody seconds. Stealth suits especially favor this weapon, its sudden storm of fire making it an ideal weapon for short-ranged ambushes. The XV86 Coldstar Battlesuits used by Tau Commanders have access to a High-output variant, which has double the firepower, as if there were another Burst Cannon bolted on next to it.
Burst Cannons are also well suited to combating light armor, which cannot defend against the powerful hail of high-speed plasma. Walkers, armored transports and recon vehicles all stray within range of a burst cannon at the risk of having their thin armor plating torn from them bit by bit. Even Imperial battle tanks are not immune to the burst cannon: a sustained burst directed at weaker rear armor is capable of rupturing an engine or setting off ammunition stores. Burst Cannons are found on many Tau vehicles, from the Devilfish APC to the fearsome Hammerhead Gunship. Often slaved to Drone control, a Burst Cannon makes for an ideal point-defense weapon, keeping infantry and light vehicles away from a tank hunter like the Hammerhead, or giving a squad of Fire Warriors cover as they deploy from their transport. This is where the Burst Cannon trades the range and accuracy of weapons like the pulse rifle for the ability to saturate an area with Plasma fire. For this reason, Burst Cannons are used extensively as part of established defenses. When guided by Drones, they make effective sentry guns, silently tracking back and forth, scanning for threats. Again, this thing is everywhere, sometimes in conjunction with a variant with two extra barrels below.
Pulse Submunitions Rifle
Because it'd be rather unfair for the Fire Warriors to get all the shotgun fun with their Pulse Blasters, there is the Pulse Submunitions Rifle.
Hell, since the Pulse Blaster is the relatively newer gun, it might have been based on this cannon-sized rifle here. And despite being a rifle, its range is not that great, but it is impressive nonetheless considering that it is more of a shotgun than a rifle in the first place. The Pulse Submunitions Rifle is a primary weapon used by the XV9 Hazard Battlesuits, and it fires what can be described as a "Pulse shrapnel" shot that explodes above the enemy targets, covering quite an area with cover-ignoring pulse shots. While it does have decent range as well as killing power, the trade-off for spreading shots is its lack of armor-piercing capability, as even the Flak Armor used by the Imperial Guard has a decent chance of brushing off a Pulse submunitions shot. Nevertheless, it has proven to be an awesome tool for clearing out trenches and buildings of infantry, and wiping out tightly packed Ork and Tyranid hordes swarming at the position. The XV9 Hazard Battlesuit of Shas’O Vesu’r Ra’lai in particular sports an experimental rendition of the Pulse Submunitions Rifle as its primary armament. It sacrifices a degree of damage and blast radius in favor of a higher rate of fire, and the ability to be loaded with several potent, limited-issue warheads. Whilst powerful, these warheads are currently equally experimental, as their barely-contained power can damage the Battlesuit itself.
Shas'o R'alai is also your only source for a Pulse Submunitions Rifle model if you want to get one for your Hazard team.
Long-Barrelled Burst Cannon
Continuing their tradition of making a bigger variant of a weapon by making it longer, the Long-barrelled Burst Cannon is a Burst Cannon given a metaphorical penis extension.
Unlike their similarly-sized Fusion Cannon and Plasma Cannon, the Tau's Long-barrelled version of the Burst Cannon sees healthy application on a number of vehicles beyond a paired turret for the TX7 Hammerhead Gunship. In fact, all of these vehicles are atmospheric aircraft, which probably says a lot about the primary reason this weapon was developed. The firepower given by the Burst Cannon is so phenomenal that the Tau attach the guns to many, many of their war engines, such as Battlesuits and other ground-based craft. The problem, however, lies in the short range of the Burst Cannon, which does not sit well with air-based craft that would have to fly dangerously close to the ground/target to unleash the regular Burst Cannon. Along with being more powerful, the Long-Barrelled Burst Cannon has double the range of the usual Burst Cannon, giving an aircraft equipped with one or two the room it needs to deliver the payload. The smallest aircraft to mount this gun so far, in a pair, is the DX-6 Remora, a Drone stealth fighter which provides fire support for vanguard units such as Pathfinders and Stealthsuit Teams. The Barracuda AX-5-2, the Tau's air superiority fighter, mounts two of these on its wings, while the Orca and the Manta also possess these for self-defense, with the Manta sporting sixteen of them compared to the Orca's couple on a single turret.
The Hammerhead itself used to have the Long-barrelled variant as a main gun alternative, but recent rules have that said main gun being the Heavy Burst Cannon below instead.
Heavy Burst Cannon
With the evolution of larger and more powerful armored vehicles and Battlesuits, it was inevitable that larger versions of the Burst Cannon would be created.
Currently the biggest and most powerful weapon in the Burst Cannon line, the Heavy Burst Cannon is capable of a ferocious rate of fire that can mow down hordes of enemies like a neckbeard on a diet consisting of cheez whiz and crackers. Even in broad daylight, this giant Gatling gun produces a blinding muzzle flash, the barrels a blur of motion, with the only sound the enemy is capable of hearing being a constant "BRRRRRRRRRRRT!" rather than the more soothing "DAKKA DAKKA DAKKA DAKKA!". Developed during the Tau Empire's Third Sphere Expansion, it can primarily be found on Riptide Battlesuits as well as a couple of aircraft, namely the Barracuda and the Tiger Shark. The Heavy Burst Cannon incorporates multiple induction chambers, and six rather than four barrels, allowing it to fire even faster than its smaller variants. The Heavy Burst Cannon is able to project Plasma rounds much further and with greater force, making it substantially more dangerous to face in the battlesphere. By opening up the in-built Pulse induction inhibitors, a Heavy Burst Cannon can also draw upon the XV104's Dark Matter Nova Reactor to go Super Saiyan, making possible a devastating storm of Pulse rounds so ridiculously huge that it can crack even the toughest of armor with ease and make reinforced ceramite cave in and bend over.
On the tabletop, the Heavy Burst Cannon fires more powerful rounds at the same rate of fire and range as the Long-barrelled one, and it was also able to overcharge with the Nova Reactor to double its firing rate and gain Rending at the cost of overheating and potentially damaging the suit. As of 8th, it has lost the ability to do so. The Heavy Burst Cannon is now a Heavy 12 S6 AP-1 2 D weapon, making it much better at wiping out two-wound infantry and giving it a secondary role of killing light vehicles with the sheer number of shots.
Pulse Submunitions Cannon
Imagine going Rambo on a kill streak with an automatic shotgun in each hand. The Pulse Submunitions Cannon is the shotgun, and a rather big one to boot.
A deadly experimental weapons system, the Pulse Submunitions Cannon fires micro-cluster projectiles capable of saturating the target area in a storm of Plasma pulses, just like the smaller Pulse Submunitions Rifle. Additionally, larger targets such as bulky infantry, monstrous creatures, and vehicles inevitably suffer proportionally greater harm from the Pulse Submunitions Cannon, as they can be struck with a wave of near-simultaneous detonations, amplifying the blast and turning them inside out. Exceptionally handy when it comes to fending off Tyranids, some have even considered calling it the Tyranid Bug Spray, because it kills them that easily. This plays out on the tabletop as firing considerably more powerful shots at considerably longer range with a Cluster Fire effect, which means the bulkier the target is, the harder it gets hit. While the Pulse Submunitions Rifle's role is anti-infantry, the Pulse Submunitions Cannon's primary purpose is killing light to medium vehicles as well as monstrous creatures, and as we have stated, it is ridiculously good at it. Like the Heavy Burst Cannon, the Pulse Submunitions Cannon can also nova-charge itself via the Dark Matter Nova Reactor and double its rate of fire. This is applied when you absolutely must murder-fuck an entire horde army, up to and including hordes of gargantuan creatures.
It's a giant shotgun that shoots Plasma. Any Ork who happens to loot this boy will be a very happy Ork.
Pulse Bomb Generator
Though it doesn't sound like one, the Pulse Bomb Generator is the Tau's general solution to artillery strikes.
That is, they rely on air support to carry out the explosions, and their bombs of choice come from the Pulse Bomb Generator. As the name suggests, a pulse induction field is used to generate a single bomb in the form of a ball of Plasma. This ball is then dropped onto whatever is supposed to explode. It is the primary weapon system of the Sun Shark Bomber, mounted underneath the aircraft’s rear hull. The Pulse Bomb Generator gives the Sun Shark a fairly potent level of firepower, capable of blasting ground targets and possibly really unlucky airborne targets. As you might have guessed from the appearance, the Pulse Bomb Generator works by rapidly spinning, building up Pulse energy until a ball of plasma can be unleashed, leaving nothing but glowing craters and drifting ashes at the point of impact. Oh, and this thing does not just drop a bomb and recharge. It keeps the bombs coming almost indefinitely, though at times minor malfunctions can occur due to battlefield debris or some such, in which case reconfiguration is required to restore optimal alignment and recommence Plasma ball production. Until then, the Sun Shark Bomber is forced to maneuver and use its rocket systems, Markerlight, and a pair of nimble Interceptor Drones as its means of self-defense to avoid enemy fire until the reactor recharging is completed.
How does the Pulse Bomb work exactly? Well, for each model in an enemy unit, up to a maximum of ten, you get to roll a six-sided die. For each roll of 5+, the target unit suffers a mortal wound. Infantry add one to each roll, because they are not tanks.
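For the mathematically inclined, here's a minimal back-of-the-envelope sketch of what that averages out to, assuming the mechanic works exactly as described above (one d6 per model capped at ten, a mortal wound on a 5+, and reading "an extra one" as +1 to each roll against infantry). The function name is ours, not anything official.

    # Rough expected-damage math for the Pulse Bomb as described above.
    # Assumptions: one d6 per model (capped at ten), a mortal wound on a 5+,
    # and infantry adding 1 to each roll (so they are effectively wounded on a 4+).

    def expected_mortal_wounds(models: int, infantry: bool) -> float:
        dice = min(models, 10)                  # one die per model, capped at ten
        p_wound = 3 / 6 if infantry else 2 / 6  # 4+ against infantry, 5+ otherwise
        return dice * p_wound

    print(expected_mortal_wounds(10, infantry=True))   # ~5.0 wounds into a full squad
    print(expected_mortal_wounds(10, infantry=False))  # ~3.3 wounds into non-infantry

In other words, dropping it on a full ten-model infantry mob nets about five mortal wounds on average, while smaller units only hand you proportionally fewer dice to roll.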
Pulse Blastcannon

You think the Pulse shotgun range ends at the Pulse Submunitions Cannon? Think again, and say hello to the Pulse Blastcannon, the proper big brother to the Pulse Blaster.
Also known as the Pulse ARC Cannon on the store page for some reason – another "Pulse Arc Cannon" shows up in the Tau's fluff, where it is heavily modified to combine its beam with five others like the Death Star and tear a hole into a Gargant – the Pulse Blastcannon is one of the largest Pulse weapons at the Tau's disposal, employing superheated Plasma on a scale that leaves only glowing craters to mark where its targets once stood. Pioneered by the finest of the Tau's Earth Caste weapons scientists, the Pulse Blastcannon is a weapon so massive in size and power that it can only be borne into battle by a dedicated artillery Battlesuit with another generator attached exclusively to power the cannon, such as the KV128 Stormsurge. As one of the two options for the Stormsurge's primary armament, the Pulse Blastcannon is simply the version with more Dakka compared to the Pulse Driver Cannon and its emphasis on range. The Pulse Blastcannon uses Aggressive Reactive Charge technology (A.R.C.) to hyper-accelerate plasma energy. A stream of negatively-charged particles is fired from the gun milliseconds before the main plasma charge, accelerating the plasma blast into the target to explosive effect; a shot that disperses over a wider area the further away the target is.
It is common for some people to confuse the Blastcannon with the Pulse Driver Cannon due to their similarities in appearance. The most distinct feature you may pick out of it is that the Blastcannon has two barrels at the end compared to the Driver's one. Only slightly smaller than the alternative option for the Stormsurge, the Pulse Blastcannon is an upscaled Pulse Blaster in practice. It even shares the relatively short range, rapid rate of fire, and "the closer the target, the more powerful it is" shtick with its smaller cousin; all are turned up to eleven save perhaps the range part, with the "weakest" Long Range mode being an S9, AP5 large blast, and the "strongest" being the almighty D.
Pulse Driver Cannon
Pioneered by the finest of Tau's Earth Caste weapons scientists, this XBAWXHUEG Pulse Driver Cannon mounted on the KV128 Stormsurge Battlesuit is a weapon so massive in size and power consumption, it can only be borne to battle by a dedicated artillery suit that carries another generator dedicated to the damn thing.
Emitting a deep bass thrum that rises in pitch as its particle accelerator relays spool up – that is, the circular thing by the base of the cannon – the cannon builds up force enough to level a building before the volley is released. Once the pent-up energies are let fly, an incandescent beam of blinding intensity stabs out with a buzzing roar. Anything in its path is obliterated in a column of superheated Plasma, cast into its component atoms as if a solar flare had lashed from the heart of a sun to scour them from history in one terrifying second. And do not disregard the name, this thing drives its Pulses pretty far with its Pulse accelerators, really far. The phenomenal amount of heat generated in that process is emitted through huge heat sinks built into the weapon. The Pulse Driver Cannon was originally developed in response to the gargantuan war engines of the Orks. In the latter stages of the Arkunasha War a stationary prototype mounted on the Argap Plateau sent searing columns of white force through the guts and torsos of Ork war effigies time and time again. In recent decades the weapon's deadly power has been matched instead against the Titans of the Imperium; though these boast far more sophisticated technologies than the walkers used by the Ork race, the result of a sustained barrage from a Pulse Driver Cannon has proven to be much the same.
On the tabletop, it goes way overboard with its firepower, launching S10 AP2 pie-plates with enough range to hit anywhere on the board unless you play Apocalypse. On top of that, as it is mounted on top of a very high platform, it always has the high ground advantage.
Pulse Ordnance Multi-Driver
Though the Tau rely on the Sun Shark and their Pulse bomb delivery as their means of artillery, this thing comes pretty close to conventional artillery.
Hey you! The Pulse Driver Cannon not doing enough Shoop Da Woop for you? Is it not killing super-heavies fast enough for you? Do you need to compensate even further for your lack of manly parts? Fear no longer, for the Tau have got your back! Take a regular Pulse Driver Cannon and then triple the amount whilst supercharging it at the same time. You want a Weeaboo Turbo-Laser? You got one. You want a Weeaboo Turbo-Laser Destructor? Well, you already have it! The Pulse Ordnance Multi-Driver is a set of three extra-long Pulse Drivers mounted on the back of the Ta'unar Supremacy Armor. Judging from appearance alone, they are not exactly the same breed as the ones on the back of Stormsurges, but the range and name make them close enough to be at least related to the regular Pulse Driver Cannon. This triple-barreled system is designed to combat super-heavy enemies such as Imperial Knights and Tyranid beasts. It can fire volleys of high-explosive and kinetic projectiles that will reduce entire city blocks to dust with a ridiculously accurate bombardment, concentrate the fire of all three Drivers into one ridiculously powerful titan-killing volley to saturate a single target, or fire in arcing trajectories to shower a wider area to equally devastating effect, because there's no kill like overkill.
Unlike other Titan weapons we've seen, this one takes the cake in looking like someone just tore the thing off from a starship and installed it on a giant walker.
Tau Ion Weapons
Ion weapons are not specifically called plasma guns, but seeing as how they shoot a stream of ions at their targets, they meet the scientific definition of "plasma". The fluff says they don't so much fire Plasma as shoot a stream of ionising particles that reduce whatever the stream hits to Plasma, like a bastard hate-child of laser and plasma weapons, and it's pretty horrific if you think about it. The original Ion weapon technology came from one of the Tau Empire's member races known as the Demiurge, but then the Earth Caste tinkered with it for a bit and replaced the power source with a more compact one that could also now be "overcharged" by removing its radiation shielding – this results in the weapon firing single, powerful blast shots instead of a torrent of weaker ones, but also runs the risk of wounding the weapon operator or damaging his suit/vehicle with a burst of ionizing radiation from the power source. Opposite to Pulse weapons, which started as small arms and then evolved into bigger and bigger weapons, Ion weapons started from the biggest of the big (spaceship cannons, analogues to Imperial lances), and then went the way of miniaturizing down to vehicle weapons, to Battlesuit weapons, and finally to compact rifles.
Ion Rifle

The aforementioned new power source allowed Earth caste engineers to create a portable Ion weapon.
The resulting weapon, the Ion Rifle, combines the man-portable size of a Pulse Rifle and firepower equivalent to an Imperial Autocannon in one deadly package. Now, Tau Ion weapons are typically found on heavy vehicles and void craft, and this lightweight man-portable variant is still considered highly experimental. The weapon operates by creating a stream of highly ionized particles and using advanced electromagnetic fields to direct them at a target. Where the particles strike the target, they react explosively, causing massive trauma. Currently, the Ion Rifle is carried by Pathfinder Teams as a variant sniper/anti-materiel weapon to vaporize heavy infantry and light vehicles alike. As an Ion weapon should be capable of, the Ion Rifle’s safety measures can be temporarily shut off, allowing the ion stream to reach dangerous levels of energy before being released. If done correctly, the resulting damage can be spectacular. However, if left active for too long, the ion stream will become unstable and cause severe damage to the wielder. Aside from the Pathfinders, the Interceptor Drones used by Sun Shark Bombers are also armed with a set of twin-linked Ion Rifles as their standard armament, which they use in conjunction with their advanced targeting systems to blow incoming aircraft looking to do harm to the bomber out of the sky.
The Ion Rifle is basically the Tau's quintessential Lascannon or Rocket Launcher equivalent; and like its bigger brothers, the Ion Rifle can be overcharged for an S8 AP4 small blast, but in order to do so a soldier must expose the gun's power source to air. This is not a good idea, considering that said power source is highly radioactive. As a result, Pathfinders equipped with this weapon usually die from cancer, unless they get killed in action before that happens (and considering the high risk of their job, there is a good chance they will).
Quad Ion Turret
While the Interceptor Drone carries two Ion Rifles, the Quad Ion Turret carries four, as its name implies, serving as the primary weapon of the Razorshark Strike Fighter.
If you know how anti-tank a single Ion Rifle can be, imagine four of them synchronizing their shots for maximum FUCK. For some reason, it is never given to ground vehicles or the larger Battlesuits despite its ludicrous anti-tank capabilities. While it lacks the range and armor-piercing qualities of the Ion Cannon mounted on the Barracuda, it packs pretty much the same power per shot and even a slightly higher rate of fire. Due to its rather compact size, it may be mounted on a 360° turret, making the Razorshark an excellent ground-attack aircraft, as opposed to the Barracuda, which is more of a dogfighter and can only lay hate on Groundpounders in strafing runs. This recently translates to an extra +1 to hit rolls against the poor sods that can't fly. As with all Tau Ion weapons, the Quad Ion Turret can be overcharged by exposing the reactive Mor'tonium power source to the environment, though this also exposes the vehicle to the dangerous ionizing radiation emitted by the Mor'tonium, not to mention the risk of overloading its primary power cells. Overcharging the Quad Ion Turret allows it to generate a massive and explosive blast with increased damage compared to the normal firing mode, 'cause why limit yourself in the overkill department? Just don't get too surprised when all your little Tau pilots turn out to have cancer at the end of every battle.
By the transition to 8th Edition, the Quad Ion Turret was hit with an equally massive nerf bat, basically reduced to a Reaper Autocannon, which isn't even any good in the CSM army, though Overcharge helps it out a bit, granting +1 S, D3 damage, and D6 shots.
Cyclic Ion Blaster
The smallest Ion weapon mounted on Battlesuits – that is, the Crisis – is the Cyclic Ion Blaster here.
The Cyclic Ion Blaster is an Ion weapon that combines the designs of the Burst Cannon and the Ion Rifle. This quad-barreled weapon fires a rapid stream of ion radiation capable of annihilating lightly armored troops at pretty much the same power as the Ion Rifle, but with a considerably reduced effective range. The design, however, is not perfect. Although the rate of fire is stable, the ion radiation is still quite unpredictable, resulting in uneven performance when deployed against armor. The original design has a static block of four barrels firing in cycles, while the modern rendition uses a new power source with a rotating three-barrel block in an even smaller package. As a result, the Cyclic Ion Blaster's role changed to one more similar to the more stable Ion Rifle design, and it now features a moderately high rate of fire with immensely more damage output per shot. For quite a while it was stuck in development hell as the Earth Caste constantly worked to improve it and then reworked it with a new power source, so the number of working models available to field personnel was rather scarce, but as of recently (999.M41-recently, aka five minutes till the storyline advances) the Cyclic Ion Blaster has finally made it out of the experimental stage and gone into mass production.
As of the Damocles rematch, it is one of the standard weapons of Crisis Battlesuits, and it is also available to the Barracuda AX-5-2.
Cyclic Ion Raker
Because the Ghostkeel is a much bigger Battlesuit, it would look silly with just the regular Cyclic Ion Blaster.
As the fancy new Ion weapon for the new XV95 Ghostkeel Suit, the Cyclic Ion Raker acts as an advanced version of the Cyclic Ion Blaster (along with looking like a bigger version) of the Crisis Battlesuits. Just like the smaller Ion Rifles used by Pathfinder Teams, the Cyclic Ion Raker uses a slab of Mor'tonium as its power source. However, due to its role as the Ghostkeel's principal weapon system, the Raker is calibrated to fire with a complete lack of sound. Yes, you read that right. This is a weapon that rakes through a group of enemies, and neither will they see it coming because the Ghostkeel is a ninja, nor will they hear it coming because the gun has a built-in suppressor. Those caught in its intense high-energy streams suddenly melt away, their flesh vaporizing with such suddenness they often do not even have time to scream. As with all Tau Ion weapons, the Ion Raker can be overcharged by exposing the unstable alloy inside to the atmosphere. The resultant burst of radiation is so powerful it can destroy entire platoons of the foe – though regrettably the pilot may also sign his own death warrant in the process... yeah, they're getting cancer for free! Not a particularly good trade off, mind you, but maybe just enough if you can look at the enemy's face once you blow up his precious super-heavy with this cheesemonger of a gun.
The Cyclic Ion Raker features one of the highest rates of fire of any Ion weapon in its normal firing mode (R 24", S7, AP4, Assault 6), which can overcharge into a devastating explosion of ionizing radiation (R 24", S8, AP4, Heavy 1, Large Blast, Gets Hot!).
Phased Ion Gun
The Ion option for the experimental XV9 Hazard Battlesuit is the Phased Ion Gun, which you probably couldn't tell from the looks of it, because it doesn't really look like one, or act like one for that matter.
This equally experimental weapon seeks to develop the rapid-firing technology of the Cyclic Ion Blaster and the vehicle-mounted Ion Cannon into a stable Battlesuit system. That's right: despite not having the Cyclic prefix, this gun is on the Dakka side of the spectrum. The Earth Caste probably tried to miniaturize the Ion Cannon and incorporate the Cyclic part without the rotation. The result seems to be "Phased Ion", which might be related to the stuff used by the Gatling flamethrower of the XV109 Y'Vahra. The long accelerators of the Phased Ion Gun allow the pilot to engage the enemy at range with a storm of projectiles that react explosively on an atomic level, decimating heavily-armored infantry as well as vehicles. It is a prototype weapon system for use with the XV9 Hazard Battlesuit, because that suit's advanced power cores allow it to use these weapons most effectively. While the rate of fire is stable and very high, the ionization effect remains unpredictable and varies due to the developing nature of this still-experimental technology. At its peak, a Phased Ion Gun's energy streams are able to bypass virtually all forms of armor. Whilst a Phased Ion Gun is similar in its tactical role to a Cyclic Ion Blaster, it sacrifices some of the latter's rate of fire for extra damage per shot.
Strangely for an Ion weapon, the Phased Ion Gun doesn't get hot – that is, it cannot Overcharge. Perhaps "Phased Ion" is already considered overcharged, or it is a different breed entirely that cannot overcharge in the first place.
Ion Cannon

Listen up, Shas'saals. This point is where things become blurry, because the sizes from here on are all considered Ion Cannons.
The Ion Cannon itself operates by generating a stream of high-energy ionized particles and launching them at a target using an electromagnetic field. These particles react explosively with the target, transferring tremendous energy at the atomic level. The resulting vaporization only magnifies the amount of damage dealt, with starship-based models comparable to Imperial Lances. You see that main gun on the Hammerhead? That's an Ion Cannon. You see those huge guns on the Custodian Battleship? Ion Cannons. Indeed, for an Ion weapon, the Ion Cannon sees fairly wide application across the Tau Empire's military; that is, more than two different vehicles use it. Aside from being an alternative cannon for the Hammerhead Gunship, Ion Cannons may also be found on their aircraft. The Barracuda may sport one as its big gun, while the bigger Tiger Shark has double that number, which is two. The huge Manta in particular does not sport the regular Ion Cannons since it is just that big, but a total of six of the Long-barrelled variant to complement its sixteen Burst Cannons. Originally, this variant was just a better version with no Overcharge, like the other Long-barrelled renditions, but recently it gained one, with the extra effect of being more lethal against numerically large units, just in case the Burst Cannons are not enough or out of reach.
Another notable fact about the Ion Cannon is its range, which is a whopping double of what their rifles are capable of, and is likely the reason why their aircraft use it.
Ion Accelerator

While the Tau are still working on accelerating Plasma, as with the Plasma Accelerator Rifle above, they seem to already have a knack for accelerating Ion streams.
And while the Long-barrelled variant maintains its superiority in range by about ten inches, the Ion Accelerator overshadows that cannon in all other aspects. The Ion Accelerator's blast is more akin to a torrent, melting its targets entirely. This massive weapon rips through armored fortifications with ease, and can amplify its killing power even further by drawing extra power from an XV104's Nova Reactor. Indeed, available as an alternative primary armament to the Heavy Burst Cannon, XV104 Riptide Battlesuits may choose to instead sport an Ion weapon that fires a wide stream of volatile energy capable of rupturing the thickest armor-plating and tearing great chunks out of reinforced bunkers and fortress walls. The cannon is massive, and capable of turning entire squads of heavily armored soldiers into molten slag in mere moments. It would be a fool's errand to attempt to mount such a weapon on any frame or chassis smaller than an XV104 Riptide. Appearance-wise, it looks to be a double-barreled Ion Cannon without much change length-wise. While it does have double the barrels, it does not have double the rate of fire despite what the fluff seems to suggest, and neither does it count as Twin-linked – both barrels fire simultaneously, combining their streams to increase the weapon's armor-piercing capability (to AP2).
The Nova Reactor can empower its Overcharged blast shot even further, turning it into an S9 Ordnance blast, a potential that could rival the almighty Railgun in its ability to bring down heavy armor.
Ionic Discharge Cannon
Another Ion weapon capable of tapping into a Nova Reactor, but this one is mounted on the XV109, not the XV104.
Also known as the EMP Discharge Cannon on the store page, for a reason we will get to right now, the massive Ionic Discharge Cannon is designed to incapacitate enemy war engines. For the most relatable example, imagine the Ion Cannons from Star Wars. Yes, the Discharge Cannon is exactly that, but in a smaller package, at a size just right for the Y’vahra. Appearance-wise, it is little more than a sawn-off Ion Cannon. Possibly due to the lack of barrel length, the Discharge Cannon has one of the lowest effective ranges among Ion weapons, on top of not being able to Overcharge. On the bright side, its shots have the aforementioned EMP effect, as they likely ionize vehicles at close range with a Haywire hit on top of the damage they do, making it extremely deadly against said vehicles. The alternate name might suggest this cannon to be a Pulse weapon (of the electromagnetic breed). Indeed, unlike its relative the Ion Accelerator, the Discharge Cannon does not possess the distinctive Overcharge profile alongside the Nova Reactor profile. At its size, it would be one of the only Pulse cannons that is not rapid-firing, along with the Pulse Submunitions family. Nevertheless, since the official prefix according to its profile is "Ionic," we are sticking with it and assume that it does fire Ions.
While the usual Ion Cannons may not be found on usual Battlesuits, the Ionic Discharge Cannon fills in that gap nicely in an exclusive manner.
Tri-axis Ion Cannon
You can probably tell that this Ion Cannon is a big gun because it comes with three barrels.
As the biggest of the big Ion Cannons you are likely to find on the ground, because spaceships get most of the big guns, this multi-chambered – specifically triple-barreled – possibly Gatling Ion Cannon is a rapid-firing variant of the standard Ion Cannon at a much bigger scale, whose firepower is unmatched for its size in the Tau arsenal. The Tri-axis Ion Cannon is particularly designed to reap through enemy formations, though that doesn't stop it from doing good damage to something more specific by making its beams actually coherent with precisely controlled lethality after some power adjustments. Seriously, this thing is just like the more-Dakka version of the Pulse Ordnance Multi-Driver above. But whereas the Pulse Ordnance Multi-Driver is the Tau's Turbo-laser equivalent, the Tri-axis Ion Cannon is the Tau's Gatling Blaster equivalent... Dakka indeed. Mounted only on the Ta'unar Supremacy Armor as its standard armament over the arms, this... thing can fire either an unholy barrage of standard Ion Cannon shots, or a slightly less rapid-firing torrent of powered-up shots that, surprisingly for Ion technology, are not blasts but a single-target, tank-busting coherent beam that may rival the bigger Imperial Lascannons in its potential.
This is the weapon you want to fuck that Stompa formation quick and clean. This is the weapon you want to give a Warhound Scout Titan a good pause in its merry run on stomping your forces. This is the weapon you want to shower those pesky Nid hordes in a good bath of irradiated plasma. Ladies and gentlemen, this is the Tri-axis Ion Cannon.
Eldar Plasma Weapons
Though the signature feature of Eldar Plasma technology is its reliability to the point of being cool to the touch, that does not mean they don't appreciate exploding Plasma.
Plasma Grenade

Also known as Sunburst Grenades, possibly to differentiate them, the Plasma Grenades of the Eldar kind are deployed by all three factions of their kind: Craftworlders throw them; Commorraghites toss them; Harlequins juggle them. For Craftworlders, Plasma Grenades are issued to Guardian Defenders as well as some Aspect Warriors – Dire Avengers, Striking Scorpions, and Shadow Spectres – and by extension, Autarchs. For their darker cousins, usage is limited to Wyches and Scourges, and also Corsairs. Harlequin Troupes, along with the accompanying Masters, are equipped with Plasma Grenades. Though Plasma Grenades are represented on the tabletop only by these Eldar factions, that does not mean the notion itself has not struck the other factions of the grimdark future. The Imperium of Man actually uses Plasma Grenades as well, seeing how Rogue Traders as well as Deathwatch Astartes, for one, may have them in their possession. If the Orks in general do toss Plasma Grenades, they are likely limited to the Meks who have access to the higher tiers of their tech tree. On the other hand, the Tau seem to be content with just Photon Grenades, though Battlesuits tossing grenades might be an interesting prospect. The Necrons and the Tyranids do not seem to deploy Plasma Grenades, or any grenade for that matter.
While the Imperium relies on deliberate Plasma containment breach for their Plasma Grenades, those of the Eldar apply a different design principle like having an actual miniaturized star in there ready to go nova by the grenade pin or something.
Starcannon

It may not look like one, but the Starcannon is the smallest Plasma weapon deployed by Craftworld Eldar.
The Starcannon fires Plasma in many smaller pulses as opposed to single stronger ones, using sophisticated electromagnetic pulses to guide bolts of destructive plasma to a target. Though its Plasma core produces the incandescent heat of a star, unlike its crude and clumsy Imperial counterparts the Starcannon uses sophisticated containment fields that not only prevent the weapon from overheating, but go as far as to keep it cool to the touch. Unless you count their Sunburst Grenades, this is as small as it gets for the category. Though its shape is somewhat similar to a few other weapons, there are a couple of ways for you to tell a Starcannon apart. Most distinctive is the Plasma coil, but since that part is sometimes covered by a shroud, the second most distinctive part you may look for is the trinity of canisters at the very back of the weapon. As practically the Plasma Cannon of the Craftworld Eldar, the Starcannon sees healthy militaristic application. Guardian Defenders may mount one on their Heavy Weapons Platform, and Wraith constructs may similarly do so on their shoulders. Smaller vehicles such as Vypers and War Walkers may also have the gun attached. Larger vehicles such as the Wave Serpent and the Falcon may have one as part of their turrets. Finally, Crimson Hunter Exarchs may swap the usual pair of Bright Lances for Starcannons.
Since the Starcannon doesn't blow up ever, the fact that humanity uses a weapon that could maim and/or kill the wielder is seen by the Eldar as a testament to our idiocy. That said, if they're so reliable, it begs the question as to why the Eldar never created a handheld version of the Starcannon for use by their infantry like the Imperium and the Tau.
Disintegrator Cannon

Plasma weapons are often described as striking with the force of a sun. Because of the pathological need to make everything dark and edgy, Disintegrator Cannons of the Dark Eldar are powered by a stolen sun.
To be specific, the Disintegrator Cannon fires particles of unstable matter harnessed from a stolen sun, each shot capable of atomizing the most heavily armored warrior. Far more sophisticated than the conventional breed, it maintains a high rate of fire and always remains cool to the touch even in the fiercest battles, despite the ravening energies housed within. Indeed, unlike the crude Plasma weaponry of more primitive races, Dark Eldar Disintegrators are able to maintain a phenomenal rate of fire without suffering an overload or reducing themselves to slag through excess heat. The Raider combat skimmer may use one as its turret. Likewise, the Ravager variant with its two additional side turrets may mount up to three Disintegrator Cannons. While the Starcannon is limited to only the Exarchs of the Crimson Hunters, Razorwing Jetfighters all have access to Disintegrator Cannons. Another large vehicle with Disintegrator Cannons is the Tantalus. The two cannons mounted on the nose of the Tantalus are actually stock parts, commonly found on Commorraghite starships. In a space-borne environment, they were considered somewhat low-powered. In the confines of the Dark City, however, they are ridiculously strong. Anyone struck by the Tantalus’s Disintegrators is killed instantly, exploding as the water in their cells turns to superheated steam.
That starship-grade breed of Disintegrator Cannons on the Tantalus is called Pulse-disintegrator, by the way.
Suncannon

While the Starcannon is at just the right size to be viable for many installations, 40K being 40K, bigger vehicles need bigger guns.
For this particular case, the vehicle in question is the Wraithknight, and only the Wraithknight so far. The Suncannon is the souped-up version of the Starcannon above with notably more oomph. It is a larger and also noticeably longer Eldar Plasma weapon. A Suncannon is a rapid-firing weapon, and its wide and potent blasts are able to pierce all but the heaviest types of armor worn by enemy infantry – that is, this thing wipes out whole platoons. It is also hauntingly accurate due to the use of a sophisticated electromagnetic pulse to guide its lethal blasts to enemy targets. Really, it has all the properties of a Starcannon, only on a much bigger scale. As with all Eldar Plasma weapons, it is far superior to its Imperial counterparts – which for now is only the Plasma Decimator, since the Imperial Knights are kinda lacking in variety of weaponry on the Plasma spectrum – as the core of the weapon is protected by a sophisticated series of electromagnetic containment fields to ensure the weapon will never overheat in the hands of its wielder, despite producing the incandescent heat of a sun. Unlike the Starcannon, the Suncannon does not come with an exposed Plasma coil for us to identify. But do not worry, because the Suncannon retains the distinctive triple canisters on the top side.
The Suncannon is identical in pretty much every way to the Starcannon except that it dispenses pie plates instead of ho-hum boring-old hits.
Ork Plasma Weapons
The Orks have only three plasma weapons, but that's all they need, and they come in a few different shapes and sizes.
Kustom Mega-slugga

This is the weapon for a Mekboy to use when he finds himself in a pinch and wants to have a weapon that can tear Terminators a new one, and yet still has a convenient enough size to be wielded with one hand.
While the "rare" Plasma Pistols used by the Imperium may be found in the hands of a lot of people, the Kustom Mega-slugga may only be found in the hands of a Mekaniak, specifically not the ones wearing Mega Armor because those have outgrown the compact size given by a Slugga. Appearance-wise, the cooling vent looks exactly like it was ripped off from a Sunfury pattern of Plasma Pistols. Accompanying that is the pilot flame-looking thing that certainly makes the Slugga look like a Hand Flamer on top of a Plasma Pistol. The spherical Plasma canister is attached to the back, which raises the question to what the rectangular object attached to where a magazine might be might be. Considering the existence of what appears to be an ejection, it wouldn't be too far-fetched to say that the Kustom Mega-slugga doubles as an actual Slugga. But if this is actually the case, it probably is not represented on tabletop. Since it is a "Kustom" gun, there should exist at least some variants made by some Meks that can do just that. The Kustom Mega-slugga has a couple wiring attached to the power pack carried by the same Mek. The wires end right at the handle where a tube seems to combine whatever came from those two and send the mixture to the Slugga proper.
Despite a lesser range, it is a Pistol, meaning that you can fire whilst in melee. So, happy hunting!
Kustom Mega-blasta

A while back, some random Mek took a plasma gun from some dead Imperial and thought to himself "We needs us sum ov dese."
Perhaps because Ork Meks generally prefer the "Go big or go home" approach, the Kustom Mega- family skips the comfortably handheld Shoota generation and goes straight to the slightly more cumbersome Blasta. In practice, the Kustom Mega-blasta is essentially a Plasma Gun equivalent for the Orks; though whatever comes out of the barrel can pretty much be anything, the common feature that binds these guns together is the fact that it can and will explode at some point. Big Meks clad in Mega Armour tend to carry a Kustom Mega-blasta instead of their usual smaller Mega-slugga. But since "Kustom" gear is pretty much exclusive to the Meks who make it, beyond their workshops the Kustom Mega-blasta is typically found on those Orks who can afford one. Orks abundant with material wealth, such as Lootas, may be found sporting a Kustom Mega-blasta. Likewise, for some reason, some Burna Boyz may also be found using one of the same breed. Slightly bigger machines such as Deff Dreads on the good side of the Meks may also have a Kustom Mega-blasta on the side, as may Morkanauts. Grot Tanks, along with the bigger Mega Tanks, may also have a Mega-blasta turret. Understandably, as a rather curious piece of machinery, the "Chinork" Warcopta has a Kustom Mega-blasta as its big gun option.
As far as the "Gets Hot!" rule is concerned in 8th Edition, Orks naturally don't give a fuck – firing at maximum power is their default setting, so they also always have the danger of the weapon blowing up (which is also more fun for everybody involved).
Kustom Mega-kannon

Ork philosophy dictates that any weapon inevitably gets bigger for the sake of Dakka – no exceptions, not even for the Kustom Mega- family.
The Meks seem to have again skipped a size and gone straight to the vehicle-grade weight of a Kannon. Despite the increase in load, the Kustom Mega-kannon is typically seen on only two things. First is a Mek Gun, which we all know is a giant gun on wheels manned by Grots. In terms of appearance, at least for the specimen we have, it looks just like an Inferno Cannon, particularly the breed mounted on the Malcador Infernus, but in a much smaller package and not fueled by Promethium, of course; also, nothing screams "bigga and betta" better than having two barrels in addition to being much bigger. Unlike the smaller Kustom Mega- weapons we've seen so far, the Kustom Mega-kannon does not seem to have a distinct Plasma sphere. We can probably speculate that the Plasma in question is built into the gun, if it's there at all. Another vehicle that has the Mega-kannon is the Morkanaut. Now, the one on the Morkanaut is obviously special, since recently it has become a Kustom Mega-zappa. Of course, looking at it, the Kustom Mega-zappa does have a few notable differences from the kind mounted on wheels. Morkanauts obviously have more space to install auxiliary equipment to make the Mega-zappa fire better. The armor provided by the Morkanaut should also give the volatile Mega-zappa some protection as well.
Practically, as in tabletop-speaking, the Mega-zappa is simply a more powerful version of the Mega-kannon in all aspects but range, where it remains the same.
Tyranid Plasma Weapons
Since everything they field revolves around being organic, Plasma weapons among the Tyranids are few. The Bio-plasma Biomorph is exclusive to Carnifexes, its shots equivalent in deadliness to those of a Plasma Cannon.
To do so, the Carnifex with this Biomorph regurgitates an unstable series of chemicals from its innards, which it then spews from its mouth, ignited by electrical impulses along the Tyranid's throat – much in the same way that a Venom Cannon works. Once launched, the glob of biochemicals goes off in an incandescent explosion of Plasma. Because it does not actually burn until passing the throat, it will not overheat inside the user, negating any heat-related issues that would otherwise surface. Massive Bio-plasma-emitting spines can also be found on Tyranid Bio-ships; these spines serve as effective short-range weapons batteries in void combat, emitting gargantuan balls of Bio-plasma that can devastate enemy voidcraft. Recently, Bio-plasma-specialized Carnifexes have begun to surface with an additional suffix and different stats – the Bio-plasmic Scream possessed by the Screamer Killer brood. That name was originally used to designate Carnifexes with Bio-plasma because of their terrible ululating shriek, caused by rasping plates in their esophagus that were used to energize the Bio-plasma. These Screamer Killers, on the other hand, are reported to use an electrical field around their claws to build up an incandescent Bio-plasma ball before launching it at a target over longer ranges – imagine a Rasengan, but over the mouth.
Monstrous Creatures can only fire two of their ranged weapons per Shooting Phase, you should already have two sets of twin-linked Brainleech Devourers on your 'Fex, AND it has only a 12-inch range – so you'll never get the chance to use this Biomorph and shouldn't waste the points to begin with. While 8th Edition lifts that restriction, it is still held back by its short range, limited number of shots (Assault D3 12" S7 AP-3 1 D), and exclusivity with the more useful Enhanced Senses and Tusks. The option is not terribly popular as far as upgrades go, since players in general prefer Venom Cannons, which are both safer and longer-ranged for their anti-armor purposes, though Bio-plasma can prove quite useful against heavily-armored infantry. The Scream, on the other hand, is an Assault D6 with an 18" range and AP-4, which makes it a much better weapon overall than the standard Bio-plasma, allowing the Screamer Killer to soften things up before it charges.
Of course, when it comes to Bio-plasma, the Tyranids aren't limited to puking it out like some feral beast.
The Bio-plasmic Cannon is pretty much the only weapon of note on the Exocrine apart from its Powerful Limbs. This massive gun is pretty much the brain that controls the Exocrine, because that hulk pretty much serves only as its transport. Only when the larger beast remains still can the symbiote focus all of its mental resources into targeting and destroying its prey. This great weapon can channel Bio-plasma through a series of different ventricles to ensure the destruction of its prey. Now don't let the fancy word fool you – a "ventricle" is simply a cavity inside an organ. Get it? 'Cause the Bio-plasmic Cannon is a Biomorph: it has organs that make Bio-plasma as well as launch the hot stuff. Initially, the Bio-plasmic Cannon was essentially the Tyranids' equivalent of the Plasma Gun, but instead of the chance to explode, it had two modes of fire – either a vast ball of roaring energy to cover a small area with Marine-melting goop, or several focused streams of death from its multiple barrels. Appearance-wise, the Bio-plasmic Cannon looks pretty much like a scaled-up version of the Spore Mine Launcher on the back of the Biovore. It has a total of six eyes, one for each of the smaller barrels surrounding the big one. The chimneys over the back probably have something to do with brewing up the Bio-plasma it shoots.
8th Edition morphed it into a practical cannon with only a single mode of fire, just like a lot of other weapons out there – specifically a heavier stream.
Bio-Weapons of the Tyranids
- Small Bio-Weapons: Devourer - Fleshborer - Spike Rifle - Spinefists - Strangleweb - Blinding Venom
- Medium Bio-Weapons: Barbed Strangler - Deathspitter - Tyranid Flamespurt Cannon - Impaler Cannon - Shock Cannon - Spore Mine Launcher
- Large Bio-Weapons: Acid Spray - Bio-Plasmic Cannon - Brainleech Devourer - Drool Cannon - Fleshborer Hive - Heavy Venom Cannon - Rupture Cannon - Stranglethorn Cannon - Tentaclids - Bio-Cannon
- Inbuilt Bio-Weapons: Bio-Electric Pulse - Bio-Plasma - Cluster Spines - Flesh Hooks - Grasping Tongue - Ripper Tentacles - Spore Mine Cysts - Stinger Salvo - Thorax Swarm
- Melee Weapons: Tyranid Close Combat Weapons
A Weapon that Beats Spess Mehreens? NEVAR!!
|This article or section involves Matthew Ward, Spiritual Liege, who is universally-reviled on /tg/. Because this article or section covers Ward's copious amounts of derp and rage, fans of the 40K series are advised that if they proceed onward, they will see fluff and crunch violation of a level rarely seen.|
When you have a weapon that's good at killing Marines, it's only a matter of time before you draw the ire of the spiritual liege. Infuriated that there were units in the 41st Millennium which actually had the firepower and weaponry to pop asshole units like Terminators, Matt decided it would be a lovely idea to give the Grey Knights a piece of Wargear for the Ordos Xenos inquisitor that functionally renders any unit using a Plasma weapon within 12" of that unit completely worthless, as its plasma weaponry is treated as BS1. The Ulumeathi Plasma Syphon is supposedly hindered by its short range, but in practice, /tg/ has found it quite easy to get the Inquisitor carrying it into the enemy's midst (often with allied units protecting it) in order to keep high-threat units like nests of Fire Warriors or a Leman Russ Executioner completely locked down. It's particularly nasty against Chaos (which as a rule lacks high-AP weapons except for the Vindicator's Demolisher Cannon and Plasma Weaponry) and Tau, who rely extensively on plasma weapons as described above. Apparently Ward forgot that there is a difference between fluff making Marines unstoppable juggernauts and actually making them that way on the tabletop. 6th Edition ameliorated this somewhat with the advent of the Inquisition Codex giving more Imperial Armies (and a handful of xenos armies as well) access to this broken tool and making the thing only affect Plasma weapons as described in the BRB, thus giving the Tau a fighting chance. Everyone else (Chaos as a whole, basically), though, is still right fucked.
But still grav-guns.
In 8th, it is no longer a thing, as the Syphon has been replaced in favor of Inquisitors in Terminator Armor and the Telethesia psychic discipline. And since there's now a way to use one without risking being blown up, Plasma Weapons are arguably better than before.
A popular notion among leftists in the United States is that the 1999 repeal of the Glass-Steagall Act set the stage for unbridled greed and gambling in the financial sector which led to the crash of 2008. The theory was articulated two weeks ago in an episode of HBO's drama series The Newsroom.
A character in the show would have us believe that Glass-Steagall led “to the largest sustained period of economic growth in American history, a sixty years expansion of the middle class, the largest increase in productivity and the largest increase in median income. We also won World War II, put a man on the Moon and a computer in everyone’s lap.” It’s amazing what a single law can do.
In fact, there was little expansion of the middle class in the fifteen years that followed the bill’s enactment in 1933. An increase in productivity really only occurred after the United States entered World War II while putting a man on the moon probably had little to do with financial regulation.
The larger point though is that Glass-Steagall inaugurated an era of financial sanity that continued into the postwar period up to the end of the last century; that the separation between banks’ commercial and investment activities was critical to that.
The libertarian Cato Institute's Mark A. Calabria takes issue with that statement, pointing out that the financial institutions that were most affected by the 2008 meltdown, including Bear Stearns, Lehman Brothers and Merrill Lynch, were all stand-alone investment banks. "They didn't take deposits. And of course, Fannie and Freddie weren't even banks."
The two government-sponsored enterprises affectionately known as Fannie Mae and Freddie Mac securitized and purchased millions of subprime mortgages before the crisis, providing a false sense of security with their implicit government guarantee which turned out to be an explicit one when push came to shove. Both entities were nationalized in 2008 at the expense of billions of dollars.
As a result, the federal government today assumes or underwrites all of the credit risk on practically every new mortgage that is originated. With regard to outstanding mortgages, the government is responsible for 100 percent of the default risk on about $6 trillion of the roughly $10 trillion market.
It was the artificially propped up housing market that got banks that did combine commercial and investment activities, like Citibank, Wachovia and WaMu, into trouble.
The most “bizarre” assumption behind Glass-Steagall, however, according to Calabria, is “that somehow commercial banking is risk free.”
Anyone ever hear of the savings and loan crisis of the late 1980s and early 1990s? No investment banking angle there. How about the 400+ small and medium banks that failed in the recent crisis? According to the FDIC, not one of them was brought down by proprietary trading.
Contrary to what proponents of reinstating Glass-Steagall seem to believe, "diversification generally reduces risk," writes Calabria. What the financial industry needs is not more government intervention or regulation but less of both.
DISCLAIMER: This post is information and science heavy and probably a little overwhelming. However, interspersed are pictures of baby goats being adorable, as well as some great Photoshops (if I do say so myself), a la my wonderful Lebrongoat. So take a minute to scroll through and check out the pictures and, if you are intrigued, take a read too.
Ah, milk. So vital to cheese making.
I found pages and pages of information on milk’s mythological and symbolic significance. First and foremost, it was “the essence of the mother goddess” (Andrews, 147). The mother goddess is depicted in many forms, most commonly a tree, woman, or cow. In each one she nourished kings, gods, and even the land with her milk.
The tree form often has female attributes, i.e. numerous breasts. In an African legend, the tree provided milk to a tribal chief’s daughter so that she could feed her brother. A Scottish-Gaelic tale tells of a milk-giving tree that provided the Milk of Wisdom.
Trees providing milk became a theme in world myth because several fruits, such as fig and coconut, produce a milky juice. The Egyptian goddess Hathor was a fig tree and the Aztec goddess Mayahuel was a maguey (agave) tree.
In Hindu myth, milk trees were alongside fruit trees in paradise. In addition, a Hindu creation myth says the world was created when Surahbi, the sacred cow, released a stream of sacred milk into the cosmos and filled it up. Hence the worship of cows.
The trees' location in paradise meant that milk was the Elixir of Life. It was fed to gods, goddesses, kings, and pharaohs. It's like the story of Amalthea in my previous post, the she-goat/goat nymph that suckled Zeus and got her own constellation. Similarly, vulture goddesses fed the pharaohs of Egypt with milk and a wolf fed the Roman princes Romulus and Remus. In East Africa and the Sudan, milk was not consumed because it was likened to urine, but it was used in rituals for its power.
To the ancient Hebrews, milk was a symbol for pure, simple, and wholesome truth. It was associated with wealth and prosperity. The Hebrews believed the heavenly region was a land flowing with milk and honey.
Milk was so important that King Solomon declared “thou shalt have goat’s milk enough for the food, for the food of thy household, and for the maintenance of thy maidens.” (Soyer, 168). One of the four libations offered to the gods was milk. The other three were honey, oil, and water.
In Serbia, milk was poured into the ground to ease the gods during thunderstorms and it was formerly used in baptismal rights. Milk is the most elementary and heavenly nourishment. The ancient Greeks believed the Milky Way was the milk river from which all earthly rivers and oceans were created.
So I guess you could say that milk was important to ancient peoples.
And it’s important to us too, especially nutritionally. Everyone needs what milk provides. Plus, milk makes cheese. And cheese is divine.
And by a little, I mean a lot. You may succumb to a disease that I like to call “Information Overload” or “Head Asplode.”
Goat Milk Composition – Goat’s milk is a thinner milk that is lower in fat, making it easier for humans to digest. In most cases, lactose intolerant people can eat it because of its slight differences from cow’s milk. It has fewer calories and less cholesterol, and goat cheese has more calcium than cow cheese. Goat milk is generally sold as whole milk, processed cheese, and evaporated or dried milk products.
When it comes to the composition, it’s difficult to get definite readings on any of the components. No set compositional profile has been created, making the comparisons between goat’s milk and cow’s milk shaky at best. Problems also arise when trying to prove the nutritional value of goat’s milk. An organization called the Dairy Herd Improvement program is working hard to put together a set of guidelines for testing procedures that all manufacturers of goat’s milk would have to follow.
There are several components that determine the composition of goat’s milk; lipid (fat) fraction, cholesterol, protein, lactose (sugar), ash, enzymes, and vitamins.
Lipids – The average milk fat, or lipid count, in goat's milk is 3.9%. Milk fat will fluctuate depending on the genetics of the goat, season, stage of lactation, and quality and quantity of feed. Cow's milk lipid count will range from 3.0 – 6%. The percent of unsaturated fat [oleic (a monounsaturated omega-9 fatty acid) and linolenic (a polyunsaturated omega-6 fatty acid) acids] is no different from cow's milk. There is no advantage to choosing goat's milk over cow's milk in diets restricting saturated fat intake. Goats do have a higher proportion of capric, caprylic, and caproic acids – fatty acids which are responsible for the flavor and odor of goat's milk.
Cholesterol – According to some studies, goat's milk has about 11 to 25mg of cholesterol per 100g of milk. Cow's milk has been shown to have 14 to 17mg per 100g of milk. However, it is difficult to determine for sure whether or not these numbers are accurate due to the inconsistent testing methods used on goat's milk. There is a claim that goat's milk is naturally homogenized. This is untrue and stems mostly from observations that goat's milk does not cream quickly. The extended time it takes to cream goat's milk is often attributed to a belief that fat globules in goat's milk are much smaller than those in cow's milk. In actuality, the fat globules are only slightly smaller. The reason for the difference in creaming is the lack of a protein called agglutinating euglobulins, which cause fat globules to cluster and rise to the top. They are, however, found in cow's milk. It is also said that the "small" fat globules in goat's milk are what make the fat easier to digest, but there has been no scientific evidence to prove that.
Protein – The protein in goat’s milk is incredibly similar to cow’s milk despite claims that goat’s milk is lower in protein. This is likely the result of the wide range of reported values when goat’s milk is tested. The range is caused by the lack of standardization in testing methods and differences between breeds.
However, the structure of the protein casein (phosphorous proteins commonly found in mammalian milk) is different enough from bovine milk to be easily differentiated in a lab setting. The casein micelles (what makes milk insoluble in water) are either much smaller or much larger than those found in cow’s milk. It has been suggested that, while the quantity and distribution of amino acids is similar in most mammalian milks, the assembly sequence is almost certainly different.
There is a similar difference in the lactalbumin (heat soluble protein) portion of the two milks as well. The lactalbumin in cow milk triggers an allergic response from many people, while goat’s milk does not. The reaction is attributed to the differences in the two proteins structures.
The proteins in cow’s milk are huge, fit for an animal that will one day weigh over 500lbs. The proteins in humans, sheep, and goats are quite short, which is why babies, the infirm, and arthritics will often thrive on goat’s milk. However, this can become problematic as infants and the elderly can develop anemia from lack of certain minerals that are not found in goat’s milk.
Lactose – Lactose is the main sugar in goat’s milk, but there are also small amounts of inositol (a carbohydrate with a small amount of sweetness). The lactose concentration is usually lower than in cow’s milk but the difference is hard to determine because of the variation in testing methods. There has been no consensus on whether or not the lactose should be analyzed in non-hydrated milk form or mono-hydrated form. The water of hydration is capable of introducing a five percent discrepancy in the reported concentration of the same actual amount of lactose.
Ash – Goat’s milk ash percentage is slightly higher than that of cow’s milk.
Ash is what the minerals in milk are called. So “ash” would consist of any minerals that might wander into milk through the feed or water the animal is ingesting. The ash fraction will differ in response to different points in lactation but also on a daily basis. Accurate evaluation relies on averaging the values of a single animal over an extended period of time or using an average from samples taken from different animals in the same herd on the same day.
So basically, it’s tricky.
Milk's nutritional value is often determined in terms of calcium and phosphorus. The difference between goat and cow milk is not significant enough to declare one more nutritional than the other. There are also similar amounts of potassium, sodium, and magnesium. Trace minerals in both types of milk are nearly the same, with only a slight variation in the levels of cobalt (vitamin B12) and molybdenum (promotes growth in animals), as well as vitamin B and xanthine (an intermediate in the metabolic breakdown of nucleic acids to uric acid).
There is a small variation in the citrate level, which affects the flavor components in cultured dairy food. The largest variation is in the chloride concentration. It is higher in goats, which may be the cause of infectious mastitis (an inflammation of mammary glands caused by disease-producing microorganisms). It causes the salt sodium chloride concentration to increase and has become endemic in small goat herds. The association of cow and goat milk with infantile anemia comes from the low levels of copper and iron in both milks. It can be easily reversed by adding trace minerals into the diet.
Enzymes – Following the trend of “goat’s milk isn’t so different from cow’s milk but there are some differences,” we come to enzymes. Again, they are similar. And again, there are some specific differences. The alkaline phosphatase (used as an indicator of proper pasteurization) is slightly slower than that found in dairy cows, but the enzyme has the same degree of heat susceptibility and works just as well as a pasteurization marker. Peroxidase (another indicator of proper pasteurization) activity in bovine and caprine milk is the same, but the xanthine oxidase (promotes oxidation of hypoxanthine and xanthine to uric acid) level is lower in goat milk. Levels of activity for ribonuclease (acts as a catalyst for ribonucleic acid hydrolysis) and lysozyme (enzyme present in body fluids that acts as an anti-bacterial agent) are higher in goat’s milk.
Vitamins – There is a lower level of B12 and B6 in goat’s milk, the meaning of which is not clear. Despite the fact that the concentrations of B12 and B6 are equal to or exceed the concentrations found in human milk, anemia developed in infants and experimental animals is often attributed to deficiencies in these vitamins. As mentioned, anemic conditions can be eliminated by adding cooper and iron to the diet. The anemia could also be a result of low levels of folic acid in goat’s milk. However, the concentration does not vary from that in cow’s milk and both are thought to cause anemia in infants.
Vitamin A potency comes directly from the vitamin itself, rather than the precursor carotenoid pigments (color pigments as well as chemicals responsible for the flavor of foods) in cow’s milk. This makes goat’s milk and milk fat to be whiter in color.
Milk Production – Seasonal variations will change the concentrations of nutrients within goat milk, the same as cow milk. However, the fluctuations are greater with goat’s milk, meaning that goat cheese will be even more seasonal than cow cheese. Changes in the amount of fat, SNF (solids-not-fat), and minerals (like calcium and phosphorous) alter the taste of the milk as well.
There are significant daily variations as well. If you make a chèvre on Tuesday, May 10th, it’ll be slightly different than a chèvre made on Wednesday, May 11th. These fluctuations are especially pronounced during the fourth month of lactation.
Again, the quality and quantity of feed will change the quality of the milk, but it will not change the amount of milk produced. The nutritive value will stay constant over a wide range of feeding conditions as well. The major controlling element in a goat's diet is the energy content of the feed.
Goat’s Milk Products – Goat’s milk will produce the same type of products as cow’s milk. Major amounts are being made into dried milk, evaporated milk, cheese and yogurt, as well as being bottled and sold as whole milk. It has become widely used in France due to the ability of goat curd to be frozen. Some believe that frozen curd produces a superior cheese than fresh curd. It can be made without using salt and can keep for up to 6 months at 5°F.
Despite goat’s milk usefulness, many small operators have been forced to shut down because it is so difficult, time consuming, and expensive to meet the government set sanitation standards.
Dunno about you but I just got a case of the Head Asplode’s.
One more thing before I finish up. In 2002, a herd of goats in Canada were implanted with a spider gene. The milk they procured was skimmed and the protein extracted was used to produce a fiber that was identical to spider silk. It was patented as BioSteel by Nexia Biotechnologies.
What is BioSteel?
It’s a high performance sports drink.
Jaykay (but not really). It’s a biopolymer strong enough to potentially be used to create bulletproof vests. It’s a unique fiber that is strong, tough, durable, lightweight, and biodegradable. Research is being conducted in how to use BioSteel in the medical, military, and industrial fields.
Move over, Lebrongoat. Here comes Spidergoat.
That’s the end of this day’s lesson. Till next time, when I fiiiiinnnnalllly approach the subject of goat cheese.
Keep eating and asking, my friends.
Photos, in order of appearance:
Jump Directly To The Section
- Famous Speeders
- Radar Before Sports
- World War II
- Police Radar 1946
- Police Radar 1947
- Rollout '48-'72
- 55mph speed limit '74
- Baseball Radar 1972
- Hall of Fame
- 'Fast' Guns vs. 'Slow' Guns
- How Radar Works
- Speeds Before Radar

Updates
- Dec 14 on, doing more research as I have time, compiling lists of old sports gun models and specs for the 'Products' section, and the 'Fast Guns vs Slow Guns' section. Will publish details in the months ahead.
- During Oct/Nov 2014, I completely revamped and added to this page, and converted to a more narrative form with wiki style references - this project is turning out to be massive. So, I would not rely on it quite yet, but it is interesting reading. (There is much misinformation out there on this topic.)
- 9/19/2014 added Tom Verducci's account on Danny Litwhiler
- 9/13/2014 initial page
We will explore why radar was needed, how radar came about, and how it was used over the years.
Back in the first days of the automobile at the turn of the 20th century, speeders were such a problem that citizen vigilante groups would often track down or injure offending drivers. So, legal speed limits were rolled out across the country, and police officers successfully used bicycles, then automobiles or motorcycles (cheaper) to “pace” speeders, following behind at a constant distance. The invention of radar, however, changed the speeding game. One radar device could take the place of 20-100 motorcycles, with much less chase risk involved, and with greater accuracy.
Today, radar guns initially adapted to measure vehicle speeds for traffic purposes, have been further adapted for everything from measuring the speed of pitched and hit baseballs/softballs, runners, bicycles, tennis balls, bowling balls, locomotive speeds, and race car speeds.
In the Beginning, There Were Speeders
Police Used Bicycles to Enforce Speed Limits
The first traffic arrest was made on 5/20/1899, when a taxi driver, Jacob German, operating an electric vehicle in NYC, was stopped by 26-year-old New York City police officer, Bicycle Roundsman John Schuessler. Mr. German did not have the luxury of a paper ticket; he was jailed. This vehicle was a terror on the streets of New York, blazing along Lexington Avenue at an estimated 12 mph. The posted speed limit was 8 mph.
Roundsman Schuessler had been promoted during then Colonel Theodore Roosevelt's tenure on the Board of Police Commissioners.
Mr. German drove for the Electric Vehicle Company. In 1897, the company began leasing its cabs in New York City, and by 1899, there were 60 of these electric taxis called "Electrobats" in the city. The drivers would return the cars each night to a battery station on Broadway, to swap for newly charged batteries.
Hemmings had this to say about the early 'Electrobats':
they used 800 pounds of lead-acid batteries, steered with the rear wheels, drove through the front, had a top speed of about 15 MPH and took eight hours to recharge. About 200 were on the streets of Manhattan in 1900, but they seem to have gone extinct by about 1910.
Before Detroit, New England was the center of automobile manufacturing at the turn of the 20th century, the home of Pawtucket Steamboat Co (RI), Lane and Dailey (VT), Frank Duryea (MA), Holyoke Automobile's Tourist Surrey and Tourer (MA), Loomis Runabout (MA), Columbia Auto (CT), Locomobile Co (CT/MA), Kidder (MA), Clark Auto (MA), Eclipse Auto (MA), Overman Auto (MA), and Keene Auto (NH).
So it made sense that in 1901, CT became the first state to post speed limits - 12 mph in cities, 15 mph outside.
Coincidentally, 1901 was the first year for an automobile to have an optional speedometer.
In 1904, the first paper speeding ticket, without jail time, was issued by Dayton, Ohio police to Mr. Harry Myers for going twelve miles per hour on West Third Street, Dayton.
Faster Cars Required Speedy, Maneuverable Motorcycles
In 1908, Ford's new inexpensive, gasoline-powered Model T could speed along at 40-45 mph, which was well above the previous speed limits.
By 1910, the electric taxi had essentially vanished from NYC, and wouldn't return until 2012 with a fleet of six Nissan Leaf's in 2012.
Police departments began to buy motorcycles to pace faster and faster cars.
Yonkers NY police bought their first motorcycle in 1906, an Indian for $217 which came with the newly invented speedometer.
In 1907, NYC bought two Indian camelback motorcycles (they looked more like bicycles) that could reach 25-30 mph, to form their first Motorcycle Squad.
Here is a nice article with a couple of stories about motorcycle adventures to catch speeders in Des Moines, Iowa, in 1906 and 1911, involving a borrowed Belgian-made Fabrique Nationale and a rented 1911 Harley.
In 1940, when MLB wanted to test the speed of Bob Feller's fastball, radar wasn't available to the public yet, so it made sense that they used the most common way that people were caught speeding - the police motorcycle and it's ever-present speedometer.
Famous Sports Speeder
Babe Ruth was arrested for speeding twice in NYC in 1921. The first time in May or June 1921 for driving 27 mph.
His famous second arrest came on June 8, 1921, when Traffic Policeman Henry Yost caught him speeding through Manhattan at 26 miles per hour in his maroon 12-cylinder Packard; he was sentenced to a 'day' in jail. The 'day' ended at 4pm, which was after the 3:15 start of the Yankees-Indians game that afternoon. Since he was going to miss the start of the game, he called someone to bring his uniform, which he put on in jail underneath his suit, then raced the 9 miles via police motorcycle escort to the Polo Grounds in time to replace hitter Chicken Hawk in the 6th inning.
Ruth was arrested for speeding again January 1924 in Massachusetts, where it was discovered that he hadn't had a valid driver's license for 10 years. Then he was arrested for speeding again in May 1928.
Invention of Radar and World War II
Important People in Development of Radar
1842 Christian Andreas Doppler (1801-1853)
Doppler RADAR is named after Christian Andreas Doppler.
Doppler was an Austrian physicist who first described in 1842,
how the observed frequency of light and sound waves was affected by the relative motion of the source and the detector.
This phenomenon became known as the Doppler effect.
Example of Doppler Effect
This is most often demonstrated by the change in the sound wave of a passing train.
The sound of the train whistle will become "higher" in pitch as it approaches and "lower" in pitch as it moves away.
This is explained as follows: the number of sound waves reaching the ear in a given amount of time
(this is called the frequency) determines the tone, or pitch, perceived.
The tone remains the same as long as you are not moving.
As the train moves closer to you the number of sound waves reaching your ear in a given amount of time increases.
Thus, the pitch increases. As the train moves away from you the opposite happens.
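In symbols, the standard textbook relation behind the train example (a general physics formula, not something stated in the original article) for a source moving directly toward or away from a stationary listener is:

$$ f_{observed} = f_{source} \cdot \frac{v_{sound}}{v_{sound} \mp v_{source}} $$

The minus sign applies to an approaching source (the pitch rises) and the plus sign to a receding one (the pitch falls).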
1865 James Clerk Maxwell (1831-1879) - Theory of Radio Waves
In 1865 Maxwell, a Scottish mathematical physicist, developed the theory of electromagnetic waves in his paper
'A Dynamical Theory of the Electromagnetic Field'.
For the first time there was a unified understanding of light (UV, visible, infrared) and radio waves as expressions of the same phenomenon: electromagnetic radiation.
His paper demonstrated that electric and magnetic fields travel through space as waves moving at the speed of light.
The unification of light and electrical phenomena led to the prediction of the existence of radio waves.
1886 Heinrich Rudolf Hertz (1857-1894) Sends Radio Waves and They Reflect From Solid Objects
Proves Maxwell's Radio Wave Theory
In 1886, Heinrich Hertz, a physicist in Germany, first proved the existence of electromagnetic waves,
after working 8 years on Maxwell's electromagnetic theory of light.
Hertz proved the theory by creating devices in his lab to transmit and receive radio pulses,
demonstrating and proving that such electromagnetic waves could be reflected from solid objects (like planes, cars, and ships).
He calculated that an electric current swinging very rapidly back and forth in a conducting wire would radiate electromagnetic waves into the surrounding space
(today we would call such a wire an "antenna").
With such a wire he created and detected such oscillations in his lab, using an electric spark,
in which the current oscillates (cycles) rapidly (that is how lightning creates its characteristic crackling noise on the radio!).
Today we call such waves "radio waves". At first however they were "Hertzian waves",
and even today we honor their discoverer by measuring frequencies in Hertz (Hz),
cycles per second--and at radio frequencies, in megahertz (MHz).
1904 Christian Huelsmeyer (1881-1957) Radar Can Locate (but not Range) Targets
Built on the Hertz Principles
In 1904, Christian Huelsmeyer, a German inventor, invented and patented the 'telemobiloscope'
which could detect the presence of faraway ships (to avoid collisions in the fog),
but could not determine their distance (German patent 165546).
The Telemobiloscope wasn't technically radar since it couldn't measure distance, but was the first patented device using radio waves for detecting the presence of distant objects.
1935 Sir Robert Alexander Watson-Watt (1892-1973) Radar Can Locate and Range Targets
In 1935, Watson-Watt, a Scottish physicist working in England, developed a radar device to locate (is it there?) and range (how far away?) aircraft.
His radar invention was patented (British patent) on April 2, 1935.
The device was capable of locating and ranging aircraft using pulses of radio waves rather than continuous waves.
He had proposed instead that it should be possible to develop a system to locate incoming enemy aircraft to provide an early warning, even at night and through cloud cover (since clouds are transparent to radio waves, and longer wavelength microwaves). Watson-Watt was given a team of scientists and engineers to develop the system and by the outbreak of war in Europe in 1939 a series of tall towers with radio transmitters and receivers, known as “Chain Home”, had been constructed along the Eastern coastline of Britain.
Watt was born in Brechin, Angus, Scotland, educated at St Andrews University in Scotland, and taught at Dundee University.
In 1917, he worked at the British Meteorological Office, where he designed devices to locate thunderstorms.
Watson-Watt coined the phrase "ionosphere" in 1926.
He was appointed as the director of radio research at the British National Physical Laboratory in 1935,
where he completed his research into aircraft locating devices.
1935 Mystery Radar
Contrary to current popular belief, the public did know about radar for detecting airplanes - at least in 1935.
Modern Mechanix published an article 'MYSTERY RAYS “SEE” Enemy Aircraft' in October 1935 outlining the basic principles.
Notice they never use the term 'radar' - they are 'rays' or 'beams'.
The veil was finally lifted by the Army in a Mechanix Illustrated article in Sep 1945 'At Last - The Story of Radar'.
Allied Radar Development During WWII
Radar was secretly developed by several nations before and during World War II.
On 2/21/1940, at Professor Oliphant's laboratory at Birmingham University,
young physicists John Randall and Harry Boot
produced a working prototype of a cavity magnetron.
(Germany's Hollmann had invented a flawed version in 1935 with serious frequency stability issues - see Axis 1935 below.)
But Randall and Boot added liquid cooling and a stronger cavity,
one with 8 concentric cavities - the 450 W, 9.8 cm cavity magnetron (20 times more powerful).
By 1941, they had fully solved the frequency instability issues.
This new cavity magnetron, which enabled microwave radar, was probably the second largest leap of technology during the war, after the atomic bomb.
In September 1940 during the Battle of Britain, Britain's secret (Henry) Tizard mission was dispatched to Washington, DC,
to hand over her war-time secrets to the USA in exchange for research and productive capacity.
Two of these secrets was the 10 centimeter cavity magnetron, and work on the 'VT fuze' radar.
Beginning in late 1940 and continuing through World War II, large-scale research at the MIT Radiation Laboratory (another 'decoy' name),
which operated as part of Division 14, Radar, of the National Defense Research Committee (NDRC)
and was sited at the Massachusetts Institute of Technology, was devoted to the rapid development of microwave radar.
The "Rad Lab" designed almost half of the radar deployed in World War II, created over 100 different radar systems, and constructed radar systems on several continents.
12/7/1941 - It is a little-known fact that an hour before the attack on Pearl Harbor, radar detected the incoming planes, though nothing was done with the information.
In Aug 1942, the VT proximity fuze, after two years of top secret development,
was successfully tested on the USS Cleveland.
This 'fuze' was a battery-powered radio transceiver (radar unit) inside an artillery shell.
On January 5, 1943, U.S.S. Helena shot down a Japanese bomber in the first combat use.
The Navy did not allow these shells to be shot over land for fear the Japanese would copy the technology.
The proximity fuze was one of the most important developments in the war.
These fuzes were instrumental in downing many Kamikaze planes.
22 million fuzes were delivered during the war.
The British military began production of the 'Chain Home' radar units in 1936. The first, completed in May 1937 and operated by the RAF,
could spot aircraft at 10,000 ft or 80 miles away in good weather.
Britain had an operational air defense system, with 20 CH stations, in place at the start of the war in September 1939.
Axis Radar Development During WWII
It is a myth that the British had radar and the Germans did not.
Telefunken - Luftwaffe/Wehrmacht - Wurzburg
In 1934/35, Telefunken headed by Dr. Wilhelm Runge, began work on 50 cm radar bouncing waves off of planes.
They issued a press release of their findings, and Electronics magazine (Sep 35) and Modern Mechanix (Oct 35) posted detailed articles with photos of their devices.
In 1938, the Luftwaffe (air) had them develop land-based Darmstadt/Wurzburg radar units with klystron microwave tubes in the 53cm range.
They later worked with the Wehrmacht (army). The first units were ready in 1940, eventually 4,000 were produced.
The British invented the cavity magnetron in 1940,
but that remained a secret from the Germans until February 1943
when they recovered an undamaged H2S radar set from a crashed British bomber at Rotterdam.
This discovery caused an uproar in Germany. They created a special 'Rotterdam' Commission to study the magnetron.
The Germans responded with what was essentially a radar detector called 'Naxos',
and by Spring 1944 they had several in fighters.
All of the patents filed by Telefunken in Germany were also filed in the USA.
These were most of H. E. Hollmann's patents (Telefunken used his work), W. Runge, director of Telefunken,
patents and tens of thousands of other relating radar patents.
These patents were available to US based General Electric (GEC).
Telefunken owned the German company, AEG which was allied with GEC and traded all patents with GEC.
In this way, most of the German radar secrets, were available to the Allies.
The Allies, England and America primarily, used these patents to develop their radar systems.
GEMA - Kriegsmarine - Freya/Seetakt
In 1935 GEMA and Dr. Hans E. Hollmann, working with Rudolf Kuhnold head of German Navy Research, contracted with the Kriegsmarine (navy) to develop working 50cm pulse radar waves.
They could only detect ships/airplanes - no distance.
Later they worked with the Lorenz company to develop Freya (land, 125m range) and Seetakt (sea) radar systems.
Germany installed eight 'Freya' radar units on their western border in 1938.
By the end of the war, the GEMA company installed over 6,000 of these radar units.
Also in 1935, Dr. Hollmann in Berlin developed and patented
(Germany 11/29/1935 #112977, US 1936 #2123728)
the first multi-cavity resonant magnetron.
However, the German military considered the frequency drift of Hollman's device to be undesirable,
and based their radar systems on the much less powerful klystron instead.
1940 Where did the Term 'Radar' Come From?
Until November 1940 the word 'radar' did not exist.
Albert (A.P.) Rowe, Robert Watson-Watt's successor as Superintendent of the Bawdsey Research Station where the Chain Home RDF system was developed,
named the technology 'RDF' (radio direction finding).
The Signal Corps called it 'RPF' (radio position finding).
The Air Corps, overcome with secrecy, called it 'derax'.
The Navy, like the other branches and governments, was developing the ranging (distance/direction) effort in absolute secrecy,
and sometimes the devices were simply called radio direction finding equipment to hide them within an unclassified equipment category that already existed.
Any name giving a clue that they could detect as well as measure the position of a target was classified secret, causing many routine letters and dispatches to be classified secret for no other reason than the reference to radio target location.
Lt. Commanders F.R. Furth and S.M. Tucker, the Navy officers in charge of new radar technology, devised the acronym 'radar' (radio detection and ranging), as a way to refer to the secret tech in unclassified messages and letters.
On 11/18/1940, the Navy sent a secret letter authorizing the new term could be used in unclassified messages and conversations.
The British made the official switch from 'RDF' to 'radar' on 7/1/1943. Australians referred to radar stations as 'doovers' during the war.
1941 First US Warfare Radar - Pearl Harbor
In December 1936, the Signal Corps tested its first radar equipment at the Newark,
NJ airport where it detected an airplane seven miles away.
By May 1937, Signal Corps demonstrated the SCR-268, a short-range radar set (the 268 had a long wavelength and was not 'microwave'),
and started development of a long-range version for use as an early warning device for coastal defense.
Their radar was built for defensive purposes, therefore the Signal Corps developed the SCR-268 (205 MHz),
designed to control searchlights and anti-aircraft guns,
and subsequently designed for the Air Corps two sets for long-range aircraft detection:
SCR-270, mobile set with a range of 120 miles,
and the SCR-271, a fixed-radar with similar capabilities.
These units did not have a cavity magnetron.
They operated at 106 MHz, using a pulse width from 10 to 25 microseconds,
and a pulse repetition frequency of 621 Hz.
With a wavelength of about nine feet, the SRC-270 was comparable to Britain's Chain Home
system, but not to the more advanced microwave systems in Germany.
By early December 1941 the aircraft warning system on Oahu had not yet been fully operational.
The Signal Corps had delivered mobile SCR-270 and fixed SCR-271 radar sets earlier in the year,
but construction of 3 mountain-top SCR-271 fixed sites had been delayed,
and radar protection was limited to six mobile stations operating on a part-time basis to test crews and equipment.
The sixth and final SCR-270 mobile radar station, s/n 012, at Opana Point began operation on November 27, 1941.
At 7:02 AM Dec 7th, 1941, Privates George Elliot and Joe Lockard, mobile SCR-270 radar operators at the Opana Station,
detected the Japanese attack planes 130 miles away.
At 7:20 AM they reported "Large number of planes coming in from the north, three points east"
to the Ft. Shafter Aircraft Warning Information Center officer on duty,
Lt. Kermit Tyler, who had been at the center for only two days, and had no training in radar.
Tyler believed the radar blip was a flight of US B-17 Flying Fortresses inbound from California that was expected that day,
so he responded "Don't worry about it".
Lockard and Elliot continued tracking the aircraft until they were about 22 miles away, then disappeared behind mountains, and then they stopped looking.
The first wave of attacks happened near 8:00 AM.
1942 Wartime Speed Limit
On Dec 7, 1941 the Japanese not only attacked Pearl Harbor, but also the Dutch East Indies (now Indonesia),
thereby preventing the US from obtaining 90% of its rubber. This immediately led to a crisis, including shortages of auto tires.
So from 1942 to 1945 (depending on the state), the War Department ordered a nationwide speed limit (aka Victory Speed Limit, Patriotic Speed Limit, War Speed Limit) of 35 mph to conserve rubber and gasoline for the war effort.
They were not kidding around. Those caught speeding would appear before the local War Price and Rationing Board to have their fuel ration coupons suspended/taken.
What's The Frequency, Kenneth?
The letters in "radar" stand for "Radio Detection and Ranging." Radar works on the principle of bouncing radio waves at the speed of light — 186,282.4 miles per second — off of a reflective object at a specific frequency. If the reflective object is moving, the radio waves return at a different frequency than that at which they were transmitted, and this difference is called Doppler Shift, or the Doppler Effect (see below). The radar gun's computer tabulates the speed based on the difference in transmitted and returned frequency.
Frequencies used by law enforcement for radar guns are established and maintained by the Federal Communications Commission (FCC).
All radio waves have three distinguishable characteristics:
The signal speed (speed of propagation) - constant
Every RADAR signal travels at the same speed, the speed of light, or about 186,000 miles per second. Both the transmitted and received RADAR signals always travel at that constant speed.
The wave length - variable
Literally, the physical distance, or length from the beginning of the peak to the end of the valley. Most RADAR signals have wave lengths of about 3 centimeters (about 1-1/5 inches).
The frequency - variable
That is, the number of waves transmitted in one second of time. Police RADAR signals have frequencies of more than ten billion waves per second.
Note: Frequency usually is measured in cycles per second. One cycle is the same thing as one wave. Scientists and engineers use the term Hertz (abbreviated Hz.) instead of cycles per second.
All of these terms have exactly the same meaning:
one Hertz equals one cycle per second, which equals one wave per second.
Early S-Band
The first traffic radars transmitted at 2.455 GHz in the S band (2 - 4 GHz), which is basically the same as many microwave ovens.
S band radar antenna beamwidths varied from 15 to 20 degrees depending on model. These radars operated from a stationary position only and measured receding as well as approaching targets to an accuracy of about ± 2 mph. The maximum detection range was an unimpressive 150 to 500 feet (45 to 150 meters); vacuum-tube receivers do not have the sensitivity of solid-state receivers. A radar with a 150 foot detection range would have less than 1.5 seconds to measure a target traveling 68 mph (100 feet/second or 109 kmh)
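As a quick back-of-envelope check of that timing claim, here is a small Python sketch (illustrative only, not taken from any radar documentation):

```python
# How long does a 68 mph target stay inside a 150-foot detection range?
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 68
speed_fps = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR  # ~99.7 ft/s, i.e. about 100 ft/s
detection_range_ft = 150

time_in_range_s = detection_range_ft / speed_fps          # ~1.5 seconds
print(f"{speed_mph} mph is {speed_fps:.1f} ft/s; "
      f"time inside a {detection_range_ft} ft range is about {time_in_range_s:.2f} s")
```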
Better Bands
In the 1960s smaller, more powerful X-band units made the S band units obsolete.
Frequencies presently used by radar guns are: X band at 10.525 GHz, K band at 24.150 GHz, and Ka band at 33.4–36 GHz.
One GHz is equal to one billion cycles per second meaning that X band, for example, sends 10,525,000,000 radio microwaves per second, which then bounce back to the detection unit.
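Since wavelength is simply the speed of light divided by frequency, the band frequencies above can be sanity-checked against the "about 3 centimeters" wavelength quoted earlier. A small Python sketch (the band list just restates the FCC frequencies given above):

```python
# Wavelength = speed of light / frequency.
C = 299_792_458  # speed of light, meters per second

bands_hz = {
    "S band (early units)": 2.455e9,
    "X band": 10.525e9,
    "K band": 24.150e9,
    "Ka band (low end)": 33.4e9,
}

for name, freq_hz in bands_hz.items():
    wavelength_cm = C / freq_hz * 100
    print(f"{name}: {freq_hz / 1e9:.3f} GHz -> {wavelength_cm:.2f} cm")

# X band works out to roughly 2.85 cm - the "about 3 centimeters" figure quoted earlier.
```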
Will Radar Tell Me the Weather? Who is this Doppler Guy?
Remember that the difference in the out/in frequency is called Doppler Shift. And it has nothing to do with the weather.
The Doppler Principle was first described by the Austrian physicist, Christian Doppler in 1842.
The Doppler Principle is a principle of physics which states:
"When an energy, be it light, radio, or sound energy, is transmitted from a moving object and reflected from a stationary object or transmitted from a stationary object and reflected from a moving object, or both, it increases or decreases in frequency in direct proportion to the speed of the object."
At X-band radar frequencies, that Doppler shift occurs at a rate of 31.38 cycles per second for each mile per hour of target speed. The Doppler shift for K-band is 72.02 cycles per second per mile per hour.
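Those per-band constants follow from the standard two-way Doppler relation for a signal reflected off a moving target (a textbook formula, included here for clarity rather than quoted from any radar manual):

$$ f_d = \frac{2 \, v \, f_t}{c} $$

where $f_d$ is the Doppler shift, $v$ the target speed, $f_t$ the transmitted frequency, and $c$ the speed of light. Plugging in $v$ = 1 mph (0.447 m/s) gives roughly 31.4 Hz for X band (10.525 GHz) and roughly 72.0 Hz for K band (24.150 GHz), matching the figures above.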
In the case of a K band radar device, the Doppler shift constant for 1 mile per hour is about 72 cycles per second (Hz). Examples for the K Band (a small conversion sketch follows the list below):
- 72 cycles/second is 1mph
- 2880 cycles/second is 40mph
- 5040 cycles/second is 70mph
- 7200 cycles/second is 100mph
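Here is a minimal Python sketch of the conversion a radar gun's computer performs, derived from the two-way Doppler relation above (the names and structure are illustrative only - real guns do this in dedicated signal-processing hardware):

```python
# Convert a measured Doppler shift (Hz) into target speed (mph) for a given band.
C_M_PER_S = 299_792_458   # speed of light
MPH_TO_M_PER_S = 0.44704  # 1 mph in meters per second

BAND_FREQ_HZ = {
    "X": 10.525e9,   # X band
    "K": 24.150e9,   # K band
}

def hz_per_mph(band: str) -> float:
    """Two-way Doppler shift produced by 1 mph of target speed (~31.4 Hz for X, ~72.0 Hz for K)."""
    return 2 * MPH_TO_M_PER_S * BAND_FREQ_HZ[band] / C_M_PER_S

def speed_mph(doppler_shift_hz: float, band: str) -> float:
    return doppler_shift_hz / hz_per_mph(band)

print(round(speed_mph(2880, "K")))  # ~40 mph, matching the example list above
print(round(speed_mph(7200, "K")))  # ~100 mph
```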
1946-1949 First Use of Police Radar - John Barker, CT State Police, Garden City
In the 1930s, John L. Barker Sr., an engineer at Automatic Signal Company, had been working on traffic lights.
During World War II, Automated Signal was redirected to the war effort by developing radar for the military. They worked for Grumman Aircraft Corporation to solve the specific problem of landing gear damage on PBY Catalina amphibious aircraft. John Barker and Bernard (Ben) Midlock fashioned a Doppler radar unit from coffee cans soldered shut to make microwave resonators. The radar unit was installed at the end of the runway (at Grumman's Bethpage, NY facility), and aimed at landing PBYs.
After the war ended, Barker and Midlock (for Automated Signal, based in Norwalk, Connecticut) adapted this radar device for auto traffic use; Mr. Barker then filed his patent #2629865 as inventor on 6/13/1946, and worked with the nearby Connecticut State Police no later than March 1947.
The NY Times' interviewed Mr. Barker's son in 2013:
“He kept the lights off, because he didn’t want anyone to see in,” says John L. Barker of the rooms his father rented to test a radar system he was making for the U.S. military. John L. Barker Sr. — then an engineer at Automatic Signal Company — had been working on traffic lights in the 1930s, but during World War II, he dedicated himself to military research. “He pointed the machine out a window, and he tested it on cars that drove through an intersection.” When the war ended, Barker Sr. experimented with a new, peacetime application for the technology. He would pack the radar equipment in the trunk of his car and play cop on the Merritt Parkway. “He would pull off the road and open the trunk so that the equipment faced traffic,” his son says. At the time, police officers had no precise way to clock a car; Barker knew that his device could change the rules of the road. In 1947, the town of Glastonbury, Conn., deployed Barker’s machine on Route 2, creating what was perhaps the world’s first speed trap. “This is the latest scientific method,” a police captain named Ralph Buckley told a reporter in 1949. “It removes the possibility of human error.” And he added, “Any speeder who gets caught will have to argue with a little black box.” ...the most important part of the speed detector may have nothing to do with its hardware. As John Barker Sr. himself pointed out in a 1948 issue of The Traffic Quarterly, drivers change their behavior when they think they’re being watched. As soon as Connecticut State Police first parked their radar truck beside the highway, the traffic slowed down — even though the troopers were handing out “courtesy cards,” not speeding tickets.
John L. Barker Sr. (1912-1982)
Mr. Barker received an electrical engineering degree from Johns Hopkins University in 1933. He began his career at Automated Signal (AS) in 1933, and retired 44 years later in 1977. His first position was week-to-week during the depression in the White Plains, NY office, then in 1935 he moved with the company to Norwalk, CT. In 1937, he was awarded the first of his 160 U.S. and foreign patents.
During WWII, AS suspended its traffic control operations and converted to defense work.
In 1943, Barker became director of the research and development group. After the war, AS consolidated with Eastern Industries, Inc. With Detroit turning out faster and faster cars, speeding on the highways became a hazard. Of course, he patented his famous radar speed detection device in 1946. In 1961, LFE (Lab for Electronics) Corp. merged with Eastern Industries/AS, and Barker became vice president/GM of Automated Signal. In 1966, he became President of the AS division of LFE Corp., then in 1968 VP of LFE Corp.
1947-1949 Early Adopters in Connecticut and Garden City, NY
Sometime between March and May 1947, the CT State Police were testing the Automated Signal Co. radar on the Merritt Parkway in Fairfield, Connecticut. At least by June 1947, the device was called the Electro-Matic Radar Speed Meter by Automated Signal Co. For almost two years the CT State Police performed only traffic surveys and issued warning tickets.
Popular Science described how it worked:
Behind its front panel are two antennas. One sends out a cone-shaped stream of radar waves in whatever direction it is pointed... Whenever a moving object gets in their way, some of the waves bounce back. They rebound at a different frequency than the one they started out with. This change in their frequency varies directly with the speed of the object that reflects them. [This] difference in frequency is amplified and translated into miles per hour on a .. speedometer.
From no later than May 8, 1948, through February 1949 or later, Garden City NY police were testing the new Speed Meter radar.
On 2/12/1949, Matthew Dutka became the first speeder caught by radar, on Route 2 in Glastonbury, CT by CT State Policemen Vernon Gedney and Albert Kimball. Mr. Dutka was doing 55 in a 30 mph zone. He was the unluckiest man in CT as 3,000 speeders had been clocked and not stopped that week by CSP.
Sgt. Albert Kimball operated one of the two devices through 1952 that caught 3,000 CT speeders, 450 resulting in penalties or arrests.
Note: In 2006, Capt. David A. Carson, of the Glastonbury (CT) Police Department donated what is believed to be one of the nation's first radar units to the National Law Enforcement Officers Memorial Fund Museum in Washington, D.C. The radar unit was reported to be the first used by Glastonbury police in 1948. This is probably not the same unit discussed above owned by the CT State Police.
1948-1972 Rollout of Police Radar Devices
These early radar units could fit in a single vehicle, but they were bulky. Most had components that fit inside the trunk of the patrol car.
Police departments all over the country were using radar by the mid 1950s. They were wildly popular because experience proved that they could replace as many as 30 traffic motorcycles with one radar unit, they eliminated the inherent danger of following speeders, and hazardous chases, and they could be used in any weather, day or night.
By November 1949, Electro-Matic Radar Speed Meters had been tried out either state-wide or locally in 12 states - California, Colorado, Connecticut, Kentucky, Maine, Maryland, Mississippi, New Jersey, North Carolina, Ohio, Rhode Island, and Tennessee. Plus Garden City of course.
Then by Sep 1952 its use exploded to 31 states.
By April 1953, 56,000 radar arrests had been made, and only 318 failed to convict.
By Jan 1955, 43 states, plus Hawaii (which was not a state at the time) and DC, had some 600 machines, Ohio with 107 devices. By May 1956, 48 states had some 1600 radar devices.
Here is a partial rollout:
- 1948 - Columbus OH PD (testing)
- 1949 - Columbus OH PD (May 1949 enforcement), Illinois Hwy Dept (tests)
- 1950 - Michigan State Hwy Dept (testing)
- 1951 - South Bend (IN) Police, Washington State Patrol
- 1952 - Delaware State Police (limited locations) (on 3/13/1952 charged nine people with speeding on the DuPont Parkway) , Virginia State Police (speed surveys) , Ohio Highway Patrol (enforcement), Dayton Ohio (June 13, Ptl. Harold Murphy and Ptl. James Hopkins stopped a car traveling 45 mph in front of Carillon Historical Park on S. Patterson Blvd.)
- 1953 - St. Louis PD (testing began Oct 29, first ticket on Nov 4), Memphis PD (Mar 20 enforcement), New Jersey State Police (enforcement)
- 1954 - Texas Highway Patrol (using the Electro-Matic Radar Speed Meter, manufactured by the Automatic Signal Division of Eastern Industries, Inc), California Highway Patrol (tests radar but can't use to issue tickets for decades), Vermont State Police, Virginia State Police, Hammond IL police (April), Illinois State Police (June 11), Chicago PD (testing Sep 1st through Nov 54), Tennessee Highway Patrol (enforcement), Wyoming State Patrol (testing), Idaho State Patrol (enforcement)
- 1955 - Delaware State Police (full scale) (new smaller units were no larger than a suitcase and were permanently installed in the trunks of 1955 Ford Interceptors), Louisville (KY) PD (Jan 7 enforcement), Wyoming State Patrol (enforcement)
- 1960 - Orem UT police
- 1961 - Pennsylvania State Police, Shrewsbury MA Police, Huntsville AL police
- 1962 - South Carolina Highway Patrol (enforcement)
- 1967 - Rehoboth Beach (DE) Police
- 1972 - Lexington SC Police
- 1973 - Alabama Highway Patrol (bought 54 Speed Gun II radar units used to enforce the 55 mile-per-hour speed limit)
- 1988 - California Highway Patrol can finally use radar
Radar Products 1949-1972 (g9)
Forgotten Brands
None of today's radar companies were around during the very early days of police radar. Some company names from the beginning that are no longer active in this industry include: Eastern Industries' Automatic Signal division (aka LFE), Electrolert, Smith and Wesson, Dominator, Stephenson, West Bend (Autotronics) Radar Systems Inc. of West Bend, Wisconsin (lighter plug-in and battery operated, 1969-1977?, now dissolved), TriBar, and Sentry.
Stephenson
In 1968, Stephenson Corp was acquired by and became a subsidiary of Bangor Punta Corp, a large conglomerate. In 1971, Robert P. Falconer became President, and the company was part of the Smith & Wesson Public Security Group inside Bangor Punta. (S&W had been acquired by Bangor Punta in '65.)
Stephenson Company. William H. Stephenson, President; Eatontown/Red Bank, New Jersey. Radar speed measuring devices (Speedalyzer®), resuscitators, alcohol testing equipment (Breathalyzer® and Drunkometer®), rescue and first aid equipment.
I see products from 1960-1971.
Products: a two-piece unit with needle gauge - the TN-62 (transistorized, '61/62), TN-63 ('63), 'Transistor Speedalyzer' (1964), T-63?, 700 (1966), 700C (1966), and the Mark V/VI/VI-A 'Radar Speedalyzer'. The 1960 'Safety' mentions the Speedalyzer. I have a 1971 sales brochure for the Mark VI-A.
TriBar (Ontario, Canada)
TriBar introduced the MuniQuip Tribar around 1977; I see them in 1979? (They bought the MuniQuip name from Decatur in 1966.)
Products: Muni-Quip Tribar T3 (handheld, lighter plug-in), Muni-Quip Track Radar (two-piece dash/window moving radar), SPEED CHEK SCX-01 (sports screen, 10.25 GHz, battery powered)
1955 - Electro-Matic Radar Speed Meter, Models S2 and S-2A
Decatur Electronics 1966 on (formerly Muni-Quip 1955-1966)
Decatur Electronics is the oldest company still producing speed measurement radar. Bryce K. Brown studied math and physics at Southwestern College, Kansas. During WWII, he worked on the Manhattan Project.
After the war, Brown taught at Millikin University in Decatur, Illinois from 1946 to 1964.
He also had an interest in business, and in 1947 he started a soft water service.
In 1955, Brown started Muniquip (short for Municipal Equipment). He made speed timers by stretching two hoses across a road, and soon branched out into radar.
By 1963, the radar portion of Muniquip had annual sales of a million dollars.
In 1964, Brown left Millikin University to focus on the radar company.
In 1966, a Toronto firm (TriBar? Duncan Parking?) bought the Muni-Quip name and the non-radar business. Brown kept the radar portion of the business, and renamed the company Decatur Electronics, Inc. (still based in Decatur, IL)
As Brown developed the business, he developed new products, including speed timers to control traffic and a radar gun specifically for use in baseball.
In the following years, Brown developed the first directional radar. While the sales were good internationally, the directional product was "too advanced" for the domestic market and did not sell well in the United States. Brown had success with the RaGun, the MVR, and the Hunter radar guns.
By 1976, the radar business was approx. $2 million, in 1977 $3 million with 80 employees. Sales were booming because of federal grants issued to police departments for radar equipment, so they could keep the national speed limit at 55 mph to conserve fuel.
By 1986, Decatur Electronics was the primary supplier for major league baseball teams.
In the mid to late-1980s, Decatur fell on hard times and filed for Chapter 11 bankruptcy, and Bob Sanner bought the business in 1989, with his son Randall who worked there from 1994-2005.
He revitalized the company, introducing in 1991 the then-revolutionary, patented "miniature" Genesis I with its square "patch" antenna, soon followed by the even smaller Genesis II.
In 1997, Decatur introduced an innovative handheld radar that utilizes the Black & Decker VersaPak battery system. In 2000, Decatur introduced the SI-2, its newest high-tech microwave sensor. In late 2004, Decatur released its best-selling GHD handheld directional radar as the lowest priced radar in the United States.
In 2009, they were acquired by UK construction firm Bowmer & Kirkland, which formed a public safety division, Soncell North America. They remained in Decatur, IL until Dec 2010 and relocated to Phoenix, AZ in Jan 2011.
Kustom Signals, Inc., headquartered in Lenexa, Kansas, is a subsidiary of Public Safety Equipment, Inc., of St. Louis, Missouri. Dedicated to serving the public safety equipment needs of law enforcement since 1965, Kustom designs, manufactures and markets traffic speed radar, lidar, in-car video systems and mobile roadside speed monitoring trailers.
History: We introduced the first digital readout radar, the industry's first moving radar, and the first hand-held option. In 1979, we introduced an instant-on function that added to an officer's ability to track speeds on compulsion and, in 2008, we introduced the Raptor, the first two-piece radar with graphical display, target tracking bar and Dura Trak™. 1990 was a year full of firsts for Kustom Signals in laser advancements: the first heads-up display, the first LIDAR (Laser Imaging Detection and Ranging) with continuous tracking history, and the first with a settable range. In 2006, we introduced the first binocular-style speed enforcement laser, weighing only 19 ounces. 2011 brought the rock star of industry recognition, the Innovation Award, for our ProLaser 4.
CMI, Inc., Minturn, Colorado (now MPH) (Not to be confused with Cincinnati Microwave Inc., who produced radar detectors)
CMI was formed around 1970. Its first product was the Speedgun One, also known as the JF 100 for its designer Jack Fritzlen.
The Speedgun One, model JF-100, was the first handheld radar gun. The Speedgun One was also the first speed measurement radar to use a microprocessor. The Speedgun Two was the first handheld that could also be used as a moving radar.
In 1988, the radar operations of CMI were transferred to MPH. Today CMI only produces breath-alcohol test instruments at AlcoholTest.com.
MPH was founded in 1976. The first product was the K-55, an X-band radar with a much smaller antenna than the Kustom Signals MR-7 that could be mounted inside the vehicle. The K-55 was the first radar to have "instant on" activation using an attached accessory called the ECM that was introduced in 1977. The ECM held a violator's locked speed so that the radar's target window could continue to track a signal. The ECM also had its own internal batteries and screen so a violator's locked speed could be carried to them and displayed.
In 1984, MPH introduced several new products that brought new features to the market. These included the S-80, which was the first moving radar with three windows built into the radar itself. Others had done the tracking through lock with an accessory attachment. This new feature allowed the user to lock the target speed and continue to monitor the target for verification purposes. The S-80 was the first radar designed for use on a motorcycle and was the first with Radio Frequency Interference, Low Voltage and Harmonic detection. The BEE 36 radar was also introduced in 1984 and was the first unit to have the display separate from the counting unit, allowing flexibility in mounting the unit into police cars that were being downsized due to fuel economy concerns. These also fit nicely into the compact Ford Mustangs used by many Highway Patrols at the time. MPH also introduced the first dual antenna units this same year.
In 1986, MPH introduced the K-15 II, the first handheld radar with a separate lock window for continuous tracking of target vehicles once the speed was locked. While MPH remained competitive with the introduction of the Python, the next big breakthrough for MPH was in 1998 with the introduction of the POP technology on the Z-25 and Z-35 handhelds. This mode allows the radar to return a speed more quickly than a driver or radar detector can generally react. While not a substitute for a good continuous clock with tracking history, this mode allows an operator to confirm his visual estimate without alerting possible violators of the presence of radar speed enforcement in the area. In 2001, MPH released the BEE III and the Enforcer, which are claimed to be the first fully upgradeable radars.
Applied Concepts, Inc. aka Stalker Radar
During the late-1980s, the radar industry went into a small depression. While many cut back staffs of engineers and other employees to save money, a new company was being formed with these experienced personnel. That company, originally founded in 1970 as Applied Concepts, Inc., introduced its first radar, the STALKER, in 1990. This was the first Ka-band radar on the market. Applied Concepts took these experienced personnel and began to introduce some of the best performing radars of the 1990s. They followed the STALKER with the STALKER DUAL in 1994, which easily outperformed anything on the market at the time and had the best fastest-mode performance. In 1998 Applied Concepts introduced the first direction-sensing moving radar, the STALKER DSR.
Applied Concepts also revitalized the speedometer interface by providing the DSR with the ability to automatically switch from moving to stationary mode. While Applied Concepts was not the first to introduce the speedometer interface, this interface is now more widely used because of the ability to reduce the effects of Batching, Shadowing and Harmonic Errors, and the fact that the new digital speedometers in patrol cars make tapping into the system easier.
The latest innovation introduced to the market by Applied Concepts occurred at this year's IACP convention in Philadelphia when it introduced the STALKER 2X. This radar can clock on the front and rear antennas at the same time with two sets of windows for target and fastest speeds. The rear antenna can also be used as a collision alert system by sensing the differential speed between the patrol car and vehicles approaching from the rear. Interestingly, MPH claims to have built a dual window radar similar to the 2X many years ago and never brought it to market.
ICOP Digital, Inc. aka PoliceRadar.com
ICOP was founded in 2003 when Ken McCoy joined with Bud Ross. McCoy started McCoy’s Law Line in 2001 after leaving Applied Concepts. Ross and McCoy have a long association going back to 1969 when they both worked at Kustom Signals. ICOP Digital, Inc. continues to sell the units produced under the McCoy’s Law Line brand, the Speed Trak Elite K, Ka and KD. One of the most experienced people still active in the radar industry, McCoy has worked at or with virtually every current supplier of Traffic Radar, working at CMI after Kustom Signals and being a co-founder of MPH. While not listing Decatur as an employer on his resume, the first units sold by McCoy’s Law Line used some components produced by Decatur including the antenna.
Sports Radar, Ltd., Homosassa, FL - Tracer SRA-3000 radar gun
1972 Handheld Police Radar Guns
The first police radar units in 1946 took two officers to operate. One officer had the device in or near his patrol car; usually the "antenna" was in the trunk or mounted outside, and the gauge display was inside. This patrolman usually radioed an officer in a separate vehicle to stop the speeder.
The first single-person "radar gun" with a convenient "instant-on" trigger was introduced in 1972.
Big Oil Helped the Radar Industry? (g11)
The Oil Embargo of 1973 brought about the national speed limit law in 1974, which made 55 mph the maximum speed on the nation's highways.
The federal government began to issue grants to police departments around the country in order to buy new radar equipment to ensure that the public was lowering their speeds to conserve fuel.
Portable Radar Used for Sports 1970s
The introduction of the handheld, trigger-operated police radar in 1972 led a forward-thinking baseball man to develop a version for measuring the speed of baseball pitches in 1974.
Danny Litwhiler - Sports Radar Pioneer 1974
Daniel (Danny) Webster Litwhiler played college baseball, graduating in 1939 with a science degree. He was an MLB player from 1940 to 1951, playing in the 1942 All-Star game and the 1944 World Series, and was the first major leaguer to have an error-free season.
He coached FSU from 1955 to 1963, and led them to three College World Series appearances.
He coached Michigan State University from 1964 to 1982, and holds the record for wins. Among his former players are Steve Garvey and Kirk Gibson.
Coach Litwhiler was also a prodigious inventor:
- In 1942, he was the first MLB player to stitch together the fingers of his glove
- In 1956, developed Diamond Grit field drying agent
- On Nov 6, 1962, he received a patent for the baseball batting cage
- Most famously, in 1974, he invented the hand-held radar gun for baseball, together with John Paulson of the JUGs company
He was also a prolific creator of baseball products, and is credited with conceiving the radar gun as a tool to measure the speed of pitches. In the early 1970s, he saw a photo in the Michigan State student newspaper about radar guns used by campus police to catch speeders. "He said, 'I wonder if that could be used to time a baseball,' " said Patricia Litwhiler, his wife. Mr. Litwhiler enlisted the help of a technically savvy friend, John Paulson, developer of the JUGS pitching machine, to create a prototype. Pitch tracking by radar soon spread through baseball. JUGS Inc. continued to pay Mr. Litwhiler a royalty for his role in conceiving the radar device.
An undated SABR article by Glen Vasey titled simply Danny Litwhiler indicates that this radar gun is in the HOF.
One of his gloves, perhaps the first to ever have its fingers tied together, is on display in the Hall of Records in the Baseball Hall of Fame and Museum in Cooperstown, NY. That same museum owns, as another gift from Litwhiler, the prototype of the JUGS Speed Gun, the first radar gun developed for use as a baseball-teaching tool by Litwhiler and a friend.
Lou Pavlovich, Jr. Editor of Collegiate Baseball, in his 1/6/2012 article, Baseball’s Great Inventor Of All Time credits Danny Litwhiler with the invention of the radar gun for baseball/sports.
Danny [Litwhiler] came up with the concept of a baseball radar gun in 1974. He had read in the Michigan St. student newspaper about a campus policeman pointing a new device at cars called a radar gun to catch speeders. Suddenly, an intriguing idea popped in his mind. Why not use such a radar gun to check the velocity of pitched balls? He immediately sprang into action and contacted a local police officer about coming to his baseball field to run an experiment. The officer drove over near the field and then was instructed to point the radar gun at one of his pitchers to determine the velocity of his pitches. Initially the gun registered 75 mph. But the reading was on a flat portion of the field. So Litwhiler asked the police officer to drive his car near the pitcher's mound since the radar gun was attached to the lighter in the car. This time, the pitches registered 85 mph. The gun caught about 75 percent of pitches thrown. Litwhiler immediately wrote to Commissioner Bowie Kuhn about the discovery to let every Major League club know so one team would not have an advantage over the other. Litwhiler then contacted John Paulson, inventor of the JUGS Pitching Machine, to see if he would be interested in making a radar gun for baseball. "I told him of the radar gun idea for baseball, and he was interested," said Litwhiler. "It took us several months to get the radar gun to the point where it would track a baseball every time. The radar gun had to be re-tuned. So it went back and forth until it was perfected. John came up with a portable gun that could be used any place on the field or indoors. It operated on a rechargeable battery." Litwhiler said the radar gun was initially used to chart the speeds of different pitches (fastballs, sliders, curves, changeups, etc.) to see if they complemented each other. He found that the speed differential of pitches was important in getting batters out. Also, the gun was used to check how quickly an infielder could throw a ball to first base, a catcher fire a ball to second or outfielders throwing balls to home. Over the years, the radar gun has become a staple of scouting prospects. The portable prototype, developed in 1975, picked up 99 out of 100 pitches. Thousands and thousands have been sold across the world since that time.
Danny Knobler of The Bleacher Report wrote an in-depth article on 9/10/2014 The Radar Gun Revolution saying:
Danny Litwhiler is generally credited with adapting the modern radar gun to baseball. Litwhiler was the coach at Michigan State in 1973, and when he saw campus police using radar to time speeding cars, he quickly understood that the devices might be applied to baseball. Litwhiler saw it mostly as a teaching tool, one that would allow his pitchers to measure the velocity difference between their fastballs and changeups. He contacted John Paulson, whose JUGS company made pitching machines that were already in regular use. Litwhiler paid the MSU police for one of their early guns, which he sent to Paulson to be adapted for use in timing baseball pitches. The original JUGS gun is now on display at the Hall of Fame. Litwhiler understood almost immediately that the radar gun could be revolutionary. He wrote to MLB Commissioner Bowie Kuhn in hopes of alerting all major league teams, and he traveled to spring training in 1975 to show it off to big league managers, coaches and executives. Orioles manager Earl Weaver was an early adopter, but like Litwhiler, he saw the gun as most valuable for making sure there was a big enough differential between a pitcher's fastball and his changeup. He also saw it as a useful tool to help determine whether a pitcher was tiring. Radar guns were expensive then, as much as $1,500 each (while a professional model may still cost that much, cheaper versions are available online today for less than $100). In his book Weaver on Strategy, the Orioles manager wrote that it took him six years to convince the front office to provide them to the clubs' minor league teams. And in the days before velocities were listed on every scoreboard, he couldn't convince the Orioles to send someone on the road with the big league team to operate a gun and signal its reading to the dugout. The early versions of the gun would also offer wildly different readings. For many years, you had to specify whether a reading came from the "fast gun" made by JUGS or the "slow gun" made by Decatur. Some teams and scouts used one, some the other, with the difference in readings said to result from whether the pitch speed was measured right out of the pitcher's hand or when it crossed the plate. Since the teams only cared about comparing one pitcher to another, the difference hardly mattered as long as each of their scouts used the same model. But if you're trying to compare pitchers from different eras, those small differences can make all the difference in the world.
Litwhiler's Son's Recollection of Events
Frank Fitzpatrick, Philly.com Inquirer Staff Writer, wrote in his October 24, 2013 article A look back at the man who created the radar gun:
Born in Ringtown, Pa., the seventh son of a seventh son, Litwhiler died in 2011 at 95. With the Phillies in 1942, Litwhiler played every inning of every game in the outfield without committing an error, the first major-leaguer to achieve the feat. But his innovations, and in particular his introduction of the radar gun, have won him a special immortality. That funny-looking glove he used while setting the record is in the Hall of Fame. And, looking as odd as an iPad among the rustic baseball artifacts, so is the cumbersome radar gun that changed baseball forever. Brainstorm strikes Litwhiler had his eureka moment in the early '70s, when the then-Michigan State baseball coach was visiting his oldest son, Daniel, an officer at the Air Force Academy in Colorado Springs. "He was driving me around campus," Daniel Litwhiler, a retired brigadier general who still lives in Colorado Springs, recalled last week, "and I said, 'Dad, be careful. The police here are equipped with radar. If you're speeding, they're going to catch you.' He looked at me and I could see that put a bug in his head." The anti-speeding application had grown out of radar's widespread use in World War II. By the late 1940s, highway police in Connecticut and New York were employing radar devices. In the ensuing decades, they became traffic-enforcement mainstays. Returning to East Lansing, Litwhiler asked campus police whether they, too, used the contraptions. Informed that they did, he borrowed one. "He said, 'Let's see if they work on a baseball,' " his son said. A baseball lifer, Litwhiler knew velocity was perhaps a pitcher's greatest asset. By accurately measuring it, managers, coaches, scouts, and pitchers themselves could gain an advantage. He employed the gun during a Spartans practice and was delighted to find it could time pitches as easily as it measured how fast a 1970 Torino was traveling on University Drive. "Wow," Litwhiler thought. "I can use this." Soon he was working with JUGS Sports, an Oregon company that specialized in pitching machines, to develop a mechanism better-suited to baseball. The result, introduced in 1974, was a bulky, wired device that looked like a cross of a pistol, a bullhorn, and a blow-dryer. Litwhiler never got a patent since what he devised was merely an alternative use for an existing technology. But for the rest of his life, according to his son, he got monthly royalty checks from JUGS. The JUGS Gun has long been the baseball standard. A 2013 version, according to the company's website, sells for $1,095 and can accurately measure a pitch's velocity within a foot of the pitcher's hand. After his discovery, Litwhiler contacted baseball commissioner Bowie Kuhn. "Once he knew it could work, he didn't think it would be fair for one team to be able to use it against another," Daniel Litwhiler said. "So he called up the commissioner and said, 'Listen, I've got this thing that can change baseball.' " Backers and detractors With Kuhn's blessing, Litwhiler toured spring-training facilities. Some teams took to the gadget quicker than others. "For some reason, Earl Weaver at Baltimore was one of the first to really appreciate it," Daniel Litwhiler said. "I know Jim Palmer [a Cy Young Award-winning pitcher] really loved having it. He's said it helped him a great deal." But many in baseball's conservative establishment, particularly old-school, seat-of-the-pants scouts, initially rejected it. 
Scouts, who then graded velocity on a scale of 1 to 6, felt the modern contrivance was no match for a well-honed instinct. The mechanism wouldn't gain a broader acceptance until later in the 1970s, after Texas Rangers scout Hal Keller, based on what a radar gun was telling him, persuaded skeptical team officials to sign pitcher Danny Darwin. Darwin won 171 games in a 21-year career. "He was skeptical, as all older people are of new inventions," Keller's widow, Carol, told the Washington Post last year. "Like a lot of scouts those days, he had a radar gun in his mind. He just knew how fast a pitcher was throwing. But then he sat behind a scout who was using one and he was convinced." Litwhiler played 11 major-league seasons with the Phillies, Cardinals, Braves, and Reds. As a longtime college coach, he led Michigan State and Florida State to 10 NCAA tournaments. And for those who knew him, his early acceptance of radar was no surprise. "Dad was just one of those people who always had a lot of ideas," his son said. "He had a tremendous curiosity."
Mike Marshall's Recollection at MSU
On April 3, 2011, MSU alumnus and former Cy Young winner Mike Marshall offered this version of how the now-ubiquitous radar gun got its start in baseball on his website Q&A 2011:
In 1967, at Michigan State University, I took my first high-speed film. In 1971, I took the high-speed film that launched my major league career. Shortly thereafter, Professor Bill Heusner had me present my findings to the College of Education monthly symposium. Shortly thereafter, for his Master's thesis project, Professor Wayne Van Huss helped Mickey Sinks, a Michigan State University baseball pitcher, design a device that measures release velocity. Professor Van Huss taught me undergraduate and graduate Exercise Physiology. He asked me to help monitor Mr. Sinks' work. After seeing Professor Van Huss's device, I told him and Mr. Sinks that they should use the radar gun that the police use. Michigan State University Head Baseball Coach, Danny Litwhiler, had a friend in the Michigan State Police Department that headquartered about two blocks from the MSU baseball field. The rest is history.
Tom Verducci gave a slightly different account on Litwhiler
In Tom Verducci's 4/4/2011 Sports Illustrated article Radar Love, he had this to say about Danny Litwhiler:
Michigan State coach Danny Litwhiler, a former major league outfielder, borrowed the radar gun used by campus cops and clocked his pitchers from inside a car parked behind the backstop. Police had been using the guns since at least the 1950s. Aimed at a moving object, they send a beam of electromagnetic waves that bounce off an object and back to the gun, which measures the frequency shift in the waves that return to calculate the speed of the object. Litwhiler found the gun did not always give a reading on pitched balls, so he called CMI Inc., the Colorado company that manufactured it, and learned that it could be recalibrated to read smaller objects. Litwhiler made the adjustment, and that prototype is now in the Hall of Fame. Litwhiler made one more change. Radar guns at the time were powered through the cigarette lighters in cars. He asked the JUGS company, which produced pitching machines, to develop a battery-powered radar gun. Within a decade JUGS would become synonymous with pitch tracking, the guns standard issue for big league scouts. In the spring of 1975, Michigan State played a tournament in Florida, and Litwhiler brought his gun. He called up Earl Weaver, the manager of the Orioles, who trained in Miami, and said, "I've got something to show you." Weaver loved the device. He used it on his pitchers, his outfielders and even a plane as it descended. "All of our scouts," Weaver recalls, "no matter who it was, they would always say, 'This guy throws as hard as [Jim] Palmer.' I once put the radar gun on one of them, and he threw about eight or nine miles an hour slower than Palmer." Weaver used the gun as a managing tool. He liked knowing if one of his starters was losing velocity late in a game and might need to be pulled. He especially liked using the radar gun to ensure that Baltimore's pitchers kept a wide gap in speed between their fastball and their breaking pitches. "When [Mike] Cuellar threw his screwball a little too hard and it didn't break," Weaver says, "you could tell right away on the radar gun." The Orioles and the Dodgers were two of the early radar gunslingers. In 1975, in a unique intersection of two of the most important developments in pitching, Tommy John operated a radar gun behind home plate at Dodger Stadium while recuperating from the first elbow-reconstruction surgery, the procedure that would come to bear his name. He would predict rallies when he saw a pitcher's velocity drop late in a game. By '78 nine teams were using radar guns, and by the early '80s the tool had become essential, especially for scouts. (John was replaced by Mike Brito, a scout who sported a Panama hat and smoked a cigar in his field-level box behind home plate in L.A., lending panache to the job.)
Early Radar Guns in the Baseball Hall of Fame 1970s-1990s
The JUGS SPEEDGUN Radar Gun 1975 on (g10)
Scout Bob Fontaine
The Baseball Hall of Fame - scout section includes a radar gun used by scout Bob Fontaine Jr. during his tenure with the Angels in the 1990s.
Earl Weaver - Orioles Manager
The Baseball Hall of Fame says that Earl Weaver, longtime manager of the Baltimore Orioles, first used a radar gun in 1975:
during 1975 spring training in Miami, Florida, Earl Weaver pioneered the use of radar guns in professional baseball to track the speed of pitches
Ralph Avila - Dodgers Scout
The Baseball Hall of Fame says that Ralph Avila, a scout for the Dodgers, donated the Prospeed radar gun he used in the 1970s:
The Dodgers hired Cuban-born Ralph Avila as a scout in 1970. Two years later, Los Angeles assigned him to the Dominican Republic, where he became a leader in creating the modern academy system.
[they show pictures of the] radar gun and stopwatch used by Ralph Avila beginning in the early 1970s
Baseball Radar - Fast Guns vs. Slow Guns (g12)
Stalker explains 'fast' guns vs. 'slow' guns:
Many are familiar with the Decatur Ragun and the JUGS Gun and how they read on pitches. The reason different radar guns read different speeds is because they are taking readings at different places during the pitch. Target Acquisition Time is what determines how quickly a radar can lock onto a target speed. The JUGS Gun responds relatively quickly, taking the ball speed at about 7 feet after release. The Decatur Ragun responds very slowly, taking its reading between 30 and 50 feet after release. With the STALKER's extremely fast target acquisition, it can get the ball speed at about 7 inches, displaying that speed in the peak display and then freezing the true ending plate speed on the lower display.
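To make the quoted numbers concrete: a pitched ball sheds speed to air drag the whole way to the plate, so where a gun samples the pitch largely determines the number it shows. The sketch below is my own rough illustration, not from Stalker's material; the baseball mass, diameter, drag coefficient and air density are typical textbook values, and a simple quadratic-drag model is assumed.

```python
# Rough sketch (assumed values) of why "fast" and "slow" guns disagree: a pitch
# decelerates under air drag, so a reading 7 feet after release is higher than
# one taken 30-50 feet downrange.
import math

MASS_KG = 0.145          # regulation baseball
DIAMETER_M = 0.074
CD = 0.35                # assumed drag coefficient
RHO = 1.2                # air density, kg/m^3
AREA = math.pi * (DIAMETER_M / 2) ** 2
K = RHO * CD * AREA / (2 * MASS_KG)   # per metre; dv/dx = -K*v  =>  v(x) = v0*exp(-K*x)

def speed_at_distance(release_mph: float, feet_from_release: float) -> float:
    """Speed after travelling the given distance, quadratic-drag model."""
    metres = feet_from_release * 0.3048
    return release_mph * math.exp(-K * metres)

if __name__ == "__main__":
    for feet in (7, 40, 55):
        print(f"95 mph at release -> {speed_at_distance(95.0, feet):.1f} mph at {feet} ft")
```

With those assumptions, a 95 mph release reads roughly 94 mph at 7 feet (about where the JUGS Gun samples) but only about 88 mph at 40 feet, which is consistent with the 'fast gun' vs. 'slow gun' gap scouts argued about.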
Baseball Radar Products (g13)
Stalker Radar 1989 on
The Stalker Company profile says:
Applied Concepts, Inc., formed in 1977, introduced the first Stalker radar to the law enforcement industry in 1989. Stalker Radar has become the dominant Doppler radar system. For the past 36 years, Applied Concepts, Inc. (dba Stalker Radar) has been designing and manufacturing high quality electronics from our facility in Plano, Texas, a northern suburb of Dallas. Stalker Radar is now the nation's largest manufacturer of speed radar. In 1989, Stalker Radar pioneered the use of digital signal processing (DSP) with Doppler speed radar with the revolutionary Stalker ATR Ka band police radar. Since the ATR radar, Stalker Radar has continued to lead the industry with the development of digital antenna communication, microstrip antenna design, double balanced mixers, and most recently, digital direction sensing Doppler radar. Stalker radars are clearly the most sophisticated and advanced available, boasting the highest level of performance and accuracy.
2000s
From the Baseball Hall of Fame article Diamond Mines - Scouting History:
Because scouting is so labor intensive, teams experimented with pooling their resources in the 1960s, then created the Major League Scouting Bureau in 1974. Today the bureau employs nearly 50 full- and part-time scouts across North and South America, and its reports are made available to all 30 clubs. This combination video camera, radar gun and stopwatch was used by bureau scouts to film amateur, pro and international players from 2009 to 2012.
Pete Dougherty, in his 5/2/2013 TimesUnion article Scouts getting due in Hall - New exhibit highlights work of those who found the top baseball talent, has a picture of a scout's gear, including a radar gun, in the new Hall of Fame Diamond Mines exhibit.
Radar for the Masses (Dads)
Bushnell Radar Guns 2002
The Bushnell Speedster was introduced April 5, 2002 (it was listed on their website as early as Feb 2002). Dusty Baker was involved in the launch.
USPTO Patent April 16, 2002
OVERLAND PARK, Kan., April 5 /PRNewswire/ -- Bushnell Performance Optics is introducing the Speedster, a sleek and versatile speed gun designed to track pitching speeds, cars at the racetrack, serves on the tennis court or wherever your desire to track speed takes you. The Speedster uses digital technology and DSP (Digital Signal Processing) to provide instantaneous and real-time speed measurements within one MPH. This 13-ounce speed gun does much more than track speed. Its stat-tracking feature allows you to record pitch counts, tennis serves and other statistics. Additional features include a highly legible four-row LCD graphics display and a two-way button pad to scroll through the features. "The Speedster is easy to use and perfect for coaches, parents and sports fans alike," said Phil Gyori, Bushnell vice president of marketing. "It's ideal for the ballpark, racetrack or in your backyard." Athletes and coaches will find the Speedster an invaluable tool to gauge improvement and development. "There are a lot of factors that go into evaluating talent, but speed is certainly a major component," said Dusty Baker, manager for Major League Baseball's San Francisco Giants. "The Speedster makes it easy for scouts and coaches of all levels to accurately gauge a pitcher's development." For more information about the Speedster, visit www.bushnell.com , or contact Derek Hall at 816.960.3125 or by e-mail at [email protected].
A Bushnell Speedster product description from 2003
The Bushnell® Speedster™ is a handy, multi-functional speed gun for all kinds of sports enthusiasts. Tracks miles (or kilometers) per hour of everything from pitching speeds, tennis serves and downhill skiers to cars at the racetrack. Measures the speed of a baseball at 6-110 mph from over 75 feet away and the speed of a race car from 6-200 mph at over 1300 feet away. Also lets you keep statistics for baseball and softball. Features highly legible 4-row LCD graphics display, trigger and 2-way button pad. The Bushnell Speedster uses digital technology and DSP (Digital Signal Processing) to provide instantaneous and real-time speed measurements of +/- 1.0 mph speed accuracy.
Product specs for the 10-1907 Speedster
Specifications for Bushnell Speedster Radar Speed Gun 10-1907:
- Compact & ergonomic radar gun design
- Accurately measures speed; displays speed on an LCD graphics display
- Radar gun accuracy: 1 MPH
- Detects speed of baseball / softball / tennis: 6-110 MPH (10-176 KPH)
- Auto racing speed: 6-200 MPH (10-320 KPH)
- Statistics mode: baseball / softball
- Speed gun size (in/mm): 3.5x7.2x4.3 / 89x182x110
- Radar gun wt. (oz/g): 13/369
- Battery type: AA (6)
Our Bushnell Speedster radar guns 101907 come with a carrying case. It is a nylon case that fits tight to the Bushnell Speedster. It has a velcro lid that shuts. The Bushnell Speedster logo is on the front of it. There is an adjustable strap with it as well. Take it with you anywhere! These are brand new Bushnell Speedster radar guns (Bushnell Part # 10-1907) with full 2-year warranty from Bushnell.
A quote from Dusty Baker on their 2003 website:
“I took the Bushnell Speedster to a mini-camp with some of my scouts and coaches; all I can say is we had to order more because everyone was fighting over who was going to keep it.” Dusty Baker Manager - San Francisco Giants [Model #10-1907 Baseball Stats Mode]
Pocket Radar 2010
The Pocket Radar was introduced in Jan 2010 as the first palm-held radar device. The inventor describes the product launch:
It has been a very exciting time since we first officially launched the product in January at the 2010 Consumer Electronics Show. As a new start-up company, we were honored to be named an Innovation’s Honoree by the Consumer Electronics Association in 2010, and we were swarmed by throngs of national and international media throughout the entire four day show. It was quite an experience and served to validate that we definitely had a hit on our hands — correction, we had a hit in the “palm” of our hand. Still, my personal favorite was that Pocket Radar was awarded the 2010 “Popular Mechanics Editor’s Choice Award”.
Here is the product description as of Sep 2014.
Pocket Radar™ is the world's smallest full performance radar gun. This award-winning device delivers the same great accuracy and performance as other professional radar guns, at a fraction of the size and cost. Perfect for a wide variety of applications including traffic safety, R/C hobbies and running sports. Manual trigger: you time what you want to measure and when you want to measure it. (If you want to measure Ball Speeds then the Ball Coach™ radar is a better choice). Accurate to within +/- 1 mph, it gives over 10,000 readings on a single set of batteries. Includes a hardshell case, wrist strap, 2 AAA alkaline batteries, and an illustrated Quick Start Guide.
Mobile Phone Apps
Athla Velocity for iPhone 2014
Per the 9/8/2014 TechCrunch article Athla's Velocity Mimics $1,200 Radar Equipment For The Price Of A Fancy Coffee, you can now buy an app for your iPhone for $6.99 that measures speed.
We’ve all seen the speed of a pitch in baseball recorded, either in person or on TV, and most have probably seen the radar gun used to clock the ball’s velocity. That tech is expensive, however, with systems ranging to around $1,200 to measure the speed of smashes, hits and kicks in sports ranging from baseball, to tennis to soccer. Athla, launching today at Disrupt SF, is a startup that uses existing hardware standard on any iPhone to replicate the functions of these expensive radar guns, using only a piece of software called ‘Velocity’, with sport-specific in-app purchases that cost only $6.99 apiece to unlock. The man behind Athla is Michael Gillam, a medical doctor who has worked in tech for much of his career, following his residency in emergency medicine at Northwestern University. Gillam was Director of Research at the National Institute of Medical Informatics, took on graduate studies at Singularity University, the academic institution co-founded by futurist Ray Kurzweil. After that, he spent time as the Director of Healthcare Innovation at Microsoft’s lab designed for that purpose, and he acted as a judge for Nokia’s Xprise challenge in the category of personal health sensing technology. Velocity is the project Gillam came up with, which took two years of bootstrapped development to get where it is today. During that time, Gillam saw Moore’s Law in action – the original iPhone 4S camera could only read speeds of up to around 50MPH, while the iPhone 5 can manage up to 120MPH. The next iPhone, Gillam predicts, will probably be able to manage nearly twice that.
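The article doesn't spell out how Velocity extracts a speed from video, so the following is only my own hedged sketch of the basic frame-to-frame geometry any camera-based approach has to work with; the ball size, pixel measurements and frame rate are hypothetical example values. It also illustrates why the camera's frame rate caps the measurable speed, as Gillam describes.

```python
# Hypothetical sketch of frame-to-frame speed estimation from video (NOT Athla's
# published algorithm). If the object's real size is known, its apparent size in
# pixels gives a metres-per-pixel scale; displacement between consecutive frames
# divided by the frame interval then gives speed.

BALL_DIAMETER_M = 0.074   # a baseball is roughly 74 mm across

def speed_mph(ball_px: float, displacement_px: float, fps: float) -> float:
    metres_per_px = BALL_DIAMETER_M / ball_px        # scale from apparent ball size
    metres_per_frame = displacement_px * metres_per_px
    return metres_per_frame * fps / 0.44704          # m/s -> mph

if __name__ == "__main__":
    # e.g. ball appears 12 px wide and moves 55 px between frames at 120 fps
    print(f"~{speed_mph(12, 55, 120):.0f} mph")
```

The faster the object, the farther it moves between frames (and the more motion blur it picks up), so a higher frame rate is what lets newer phones resolve faster pitches.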
Sources
- (g1) first ticket 1899 1904 -
In a 5/20/2014 Yahoo Auto article May 20: The first U.S. speeding ticket was written on this date in 1899 by Justin Hyde
Per The Tammany Times 1/15/1900 p.30
The men behind the Electric Vehicle Co. thought they had the 20th century by the scruff of the neck. After two inventors had fashioned a working electric vehicle in 1894, they formed an electric taxicab company in New York that grew enough to win backing from a wealthy industrialist. As of 1899, there were 60-odd electric taxicabs dubbed "Electrobats" in the city, and while they were heavy and slow by modern standards, they were a marvel of luxury compared to staring at a horse's behind over cobblestone streets. On this day in 1899, a taxi driver named Jacob German was flagged down by a New York City police officer — Bicycle Roundsman Schuessler — who found his cab to be traveling at an unacceptable speed, which the officer estimated at 12 mph. German would receive America's first citation for speeding in a car, a historical note that has outlived the Electric Vehicle company, which foundered and collapsed a couple of years later. (Today, there are only two electric cars in use as cabs in all of New York City.) You can catch a glimpse of an EV taxi at work — and see why 12 mph would have been so fast — in this famous clip shot by Thomas Edison of 23rd Street in 1901. youtube:uqLTzZvKdOU
Per Cars Yeah 6/2014 article Jacob German and Patience
BICYCLE ROUNDSMAN SCHEUSSLER Conspicuous among the Police Bicycle Squad is John Scheussler, who was one of the first to be appointed to the force in 1896. Previous to his appointment to cycle duty, he did patrol service on the regular police force, and was made Roundsman in 1897. Since his appointment to the Bicycle Squad he has been active in stopping runaways and saving many lives, for which service he has received several medals by the Police Board and resolutions of Honorable Mention.
Per Examiner 5/20/2009 article May 20, 1899 New York cabbie first to be arrested for speeding by Marguerite Dunbar
On May 20, 1899 the first American ever arrested for speeding was Jacob German. Jacob was a 26 year old New York City taxi driver who worked for the Electric Vehicle Company. On that day, he was hauled off to jail for speeding down Lexington Street in Manhattan, for going 12 miles an hour. The posted speed limit was 8 mph.
In Hemmings 6/30/2010 article Electric Avenue, 1905
Wednesday, May 20th of this year marks the 110th anniversary of the first known arrest for speeding. Jacob German, a New York taxi driver, was arrested in 1899 after being caught doing 12 mph on Lexington Avenue. German drove for the Electric Vehicle Company. The company had been founded by Henry G. Morris and Pedro G. Salom. In 1897, the company leased its cabs in New York City, according to John B. Rae, Associate Professor of History at MIT in his essay The Electric Vehicle Company: A Monopoly that Missed. The first taximeter was invented in 1891, and the name "taxicab" was coined using "taxi" from "taximeter" and "cab" from "cabriolet," a horse-drawn carriage in which the driver stands in the back.
In Lit Zippo 9/14/2014 article Henry H. Bliss & Mary Ward: The Unlucky Automotive Firsts
As for the electric cab, it appears to be an evolution of the Electrobat, considered to be the first successful electric vehicle. The builder was likely the Electric Vehicle Company of New York City, which bought the rights to the Electrobat in 1897 and was eventually bought by Columbia. However, we see that Electric Vehicle Company contracted Specialty Electric from Cincinnati to build its cabs, so they likely contracted with other companies as well. Ben Merkel and Chris Monier’s book, “The American Taxi: A Century of Service,” has a couple additional photos of these cabs and notes that they used 800 pounds of lead-acid batteries, steered with the rear wheels, drove through the front, had a top speed of about 15 MPH and took eight hours to recharge. About 200 were on the streets of Manhattan in 1900, but they seem to have gone extinct by about 1910.
In Wired 5/21/2008 article May 21, 1901: Connecticut Sets First Speed Limit at 12 MPH
The first run of electric taxi cabs in New York was Samuel's Electric Carriage and Wagon Company, which ran 12 "hansom cabs" starting summer 1897. In 1898, the company had reformed into the Electric Vehicle Company, and had begun building the Electrobat Electric Car, the first successful electric automobile. Running up to 100 of these cabs by 1899, it's quite possible that one of these much larger four-wheeled taxicabs was the offending vehicle that struck down Henry Bliss 115 years ago today.
In 1/25/2012 Jalopnik article How A New York Taxi Company Killed The Electric Car In 1900
1901: Connecticut passes the first U.S. state law regulating motor vehicles. It sets a speed limit of 12 mph in cities and a whopping 15 mph outside. Arrests for speeding in motor vehicles also precede the Connecticut law. Cabbie Jacob German was arrested and jailed in New York City May 20, 1899, for driving his electric taxi at the "breakneck speed" of 12 mph. The first known auto manufactured with a speedometer was the curved-dash 1901 Oldsmobile, from the very year of the Connecticut speed limit. But the fancy device remained a luxury option at least through the first decade of the 20th century.
In LA Times 12/11/2011 article Back to an electric future for cars
New York City is proud of its six upcoming Nissan Leaf cabs, but more than a hundred years ago an all-electric fleet of taxis served the city using technology that even today would still be considered cutting edge. In the early 20th century, electric cars were actually mainstream. In 1900, there were more electric automobiles on New York City streets than cars powered by gasoline. True, there were only 4,192 cars sold in the United States that year, but 1,575 of them were electric. The advantages were obvious — electrics were quiet, clean, and easy to use. Battery power looked like the ideal choice for personal urban transportation. (For what it's worth, electrics and gas-engined cars were both beaten in sales by steam-powered cars — 1,681 of them.) Street car tycoon, playboy and former Secretary of the Navy William C. Whitney saw a business opportunity in using electric vehicles as taxis, and bought up a short-on-cash New York City electric cab company run by two engineers, Morris & Salom. With $200 million in assets, Whitney renamed the company the Electric Vehicle Company and dreamt of a taxi cab monopoly in every major American city. He hoped that New York would be his first success. Whitney thought he had found a solution to the key obstacle of battery-powered electric cars, limited range. Instead of stopping every few hours to charge an electric car's massive lead-acid batteries, cars would swap out empty batteries for charged ones, not unlike Shai Agassi's Project Better Place battery-swap concept. At the end of every shift, the taxi driver would return to the central battery storage facility on Broadway and switch his spent battery for a rested, recharged one, much like a horse-drawn taxi driver would return to a central stable. The company could keep the cabs running around the clock since they only had to rest the batteries and not the automobiles themselves.
In Ohio History article World's First Speeding Ticket
In 1900, more battery-powered electric cars ran on the streets of New York City than cars with internal combustion engines, and over the next few years there was a fierce race for supremacy between them. But the arrival in 1908 of Henry Ford's Model T turned the gasoline-powered car into an affordable mass-market product and made the electric car a historical curiosity.
12 mph speed limit sign per Florida Memories in Tallahassee, FL
The world's first speeding ticket was issued in Dayton, Ohio in 1904. Throughout most of the twentieth century, the city of Detroit, Michigan, was synonymous with American automobile manufacturing. In the late nineteenth and early twentieth centuries, that was not the case. Instead, Ohio innovators in Cleveland and elsewhere were at the forefront of this new form of transportation technology. Because of Ohio's important role in the early automobile industry, the state was the site of numerous firsts in automobile history. Among these firsts was the first speeding ticket for an automobile driver. In 1904, Dayton, Ohio, police ticketed Harry Myers for going twelve miles per hour on West Third Street.
Picture of NY Times article 5/21/1899 Automobile Driver Arrested
Per the NE Historical Society Flashback Photo: Connecticut Cracks Down on the Horseless Carriage With the 1st Speed Limit
On May 21, 1901, the Connecticut Legislature passed a speed limit law aimed at mitigating a brand-new menace on the roads: the automobile. It was the first numerical speed limit in the United States, and it was first enacted in Connecticut for a reason. Horseless carriage manufacturers were springing up all across turn-of-the-century New England, and New Englanders were buying – and driving — their products. A 1917 trade publication refers to New England as 'the great manufacturing center' of the early days of the industry. By 1901, new cars were made from the Pawtucket Steamboat Co. in Pawtucket, R.I., to the Lane and Dailey Motor Co. in Barre, Vt. Western Massachusetts and Connecticut were especially fertile territory for ambitious automotive startups. Frank Duryea built automobiles with internal combustion engines in several western Massachusetts factories, while The Holyoke Automobile Co. was making the Tourist Surrey and the Tourer in Holyoke, Mass. The Loomis Auto Car Co. in Westfield, Mass., produced the Loomis Runabout. Hartford's leading auto manufacturer, the Columbia Automobile Company, was pumping out hundreds of touring cars, runabouts and 'gasoline electric vehicles.' Steamer manufacturers proliferated as steam engines had been around for a while and their technology was well understood. By 1901, the Locomobile Company had just moved from Watertown, Mass., to Bridgeport, Conn., to mass-produce the finicky steam cars. Steamers were also made in New Haven, Conn., by the Kidder Motor Vehicle Co.; in Melrose, Mass., by the Clark Automobile Co.; in Easton, Mass., by the Eclipse Automobile Co.; in Chicopee Falls, Mass., by the Overman Automobile Co.; and in Keene, N.H., by the Keene Automobile Co.
- (g2) motorcycles 1906 on -
Per Indian History and Indian History Timeline
Per Yonkers NY Police History
1907 New York Police Department selects Indian Motorcycle® for first motorcycle police unit. The New York City Police Department buys two Indian Twins to chase down runaway horses. New York Police Department selects Indians for first motorcycle police unit.
On July 12, 1906 the Yonkers Board of Police Commissioners reported they had purchased our first motorcycle, an "Indian" for $217.00 which included tools and a new device called a speedometer. We had been patrolling on foot, horseback and bicycle. However, with the surge of automobile ownership, speeding became a big problem. The speed limit in the city was 8 MPH, outlying streets allowed 15 MPH. Police bicycles could not catch the speeders but motorcycles could travel 50 MPH on level ground, and 30 MPH uphill. The motorcycle could corner without slowing down, whereas autos had to slow down in the turns or they would simply turn over. Our first Motorcycle Officer was Patrolman Joseph Vansteenburgh who was appointed December 1, 1901. He worked 9am to 5pm and could always catch a speeder on Warburton Ave. Also, with his speedometer he never lost in court.
- (g3) Invention of radar WW2 -
Per Radar World Radar World Germany
Per Radar World Christian Hulsmeyer the Inventor
FuMG, "Funkmessgeraet," "Freya" radar. Eight Freya radar units were deployed along the western German border in 1938. These were the first operational radar systems. The Freya and Seetakt radars were built by the GEMA company and over 6,000 units were used during WWII. Harry von Kroge has written an excellent book titled "GEMA: Birthplace of German Radar and Sonar."
Per Oct 1935 Modern Mechanix magazine Mystery Rays See Enemy Aircraft p.55
Per Sep 1935 Electronics magazine Microwaves to Protect Aircraft pp.284-285
Per Radar World Hollman US patent cavity magnetron and Radar Tutorials US patent 2123728 applied 11/27/1936, granted on July 12, 1938
German patent 11/29/35
Patent trading GE/Telefunken - Per Radar World England radar
Per German Museum in Munich Marine Technology
Mechanix Illustrated article in Sep 1945 'At Last - The Story of Radar'
In Sep 2005 National Highway Traffic Safety Administration article Federal Role in Speed Management
First radar device ("Telemobiloscope") by Christian Hülsmeyer, 1904
Per the Baltimore Afro-American 7/27/1943 article 'Fine 3 Violators of Victory Speed Limit'
From 1942 to 1945, the War Department ordered a nationwide speed limit of 35 miles per hour (mph) to conserve rubber and gasoline for the war effort.
Per NY Times 11/27/1942 Full 'Gas' Rationing Dec. 1 Ordered by the President
BALTIMORE- Three persons were fined Monday night on charges of exceeding the victory speed limit of 35 miles per hour at hearings conducted by the OPA hearing panel attorneys at War Price and Rationing Board 1, 1400 Charles Street. The gasoline rations of Samuel Latham, 719 1/2 Saratoga Street, were suspended for thirty days and six C coupons were taken from him when he admitted driving sixty miles an hour on July 7.
1940 Tucker and Furth 'radar' - per book: When Computers Went to Sea: The Digitization of the United States Navy, 1991, by David L. Boslaugh p. 15 The Baby Gets a Name
FDR statement: Following submission of the Baruch Rubber Report to me in September, I asked that mileage rationing be extended throughout the nation. Certain printing and transportation problems made it necessary to delay the program until Dec. 1. With every day that passes, our need for this rubber conservation measure grows more acute. It is the Army's need and the Navy's need. They must have rubber. We, as civilians, must conserve our tires. "The Baruch Committee said: 'We find the existing situation to be so dangerous that unless corrective measures are taken immediately this country will face both a military and civilian collapse. In rubber we are a have-not nation.' "Since then the situation has become more acute, not less. Since then our military requirements for rubber have become greater, not smaller. Since then many tons of precious rubber have been lost through driving not essential to the war effort. We must keep every pound we can on our wheels to maintain our wartime transportation system. The facts," Mr. Jeffers said, "are simple. With only a trickle of new rubber coming in, with our synthetic rubber plants still in construction, we are going to have to get along on the rubber we have. That means that the vast majority of our 27,000,000 passenger cars and 5,000,000 trucks are going to have to run from now until mid-1944 on the tires now in use. "That's the reason, and the only reason, for the entire rubber conservation programs. That's the reason nationwide gasoline rationing will go into effect Dec. 1. That's the reason for the thirty-five-mile speed limit and for periodic tire inspection."
Terms 'RDF' and 'radar', plus war development, plus Telfunken press release 1935, per book: Technical and Military Imperatives: A Radar History of World War 2, 1999, by Louis Brown, pp.79,82,83
PreWWII radar per book: A Century of Electrical Engineering and Computer Science at MIT, 1882-1982 By Karl L. Wildes, Nilo A. Lindgren, pp.192-198
Rotterdam Naxos 1943/44 per book: Echoes of War: The Story of H2S Radar By Lovell Sir Ber, p.234
Per US Army Signal Corp History
Per WW2HQ Pearl Harbor Radar
In December 1936, Signal Corps engineers conducted the first field test of the radar equipment at the Newark, New Jersey, airport where it detected an airplane seven miles away. By May 1937, Signal Corps demonstrated its still crude radar, the SCR-268, a short-range radar set, for Secretary of War Harry H. Woodring; BG Hap Arnold, Assistant Chief of the Army Air Corps; and others. The Secretary and BG Arnold were impressed and the latter urged development of a long-range version for use as an early warning device. With high-level support, the Signal Corps received money needed to continue its developmental program. The Signal Corps application of radar to coastal defense was an extension of its long-standing work in the development of electrical systems for that purpose, which began in the 1890s. Because the National policy remained one of isolationism, American military planners envisioned any future war as defensive. Hence the Signal Corps developed the SCR-268, designed to control searchlights and anti-aircraft guns, and subsequently designed for the Air Corps two sets for long-range aircraft detection: SCR-270, mobile set with a range of 120 miles, and the SCR-271, a fixed-radar with similar capabilities. By early December 1941 the aircraft warning system on Oahu had not yet been fully operational. The Signal Corps had provided SCR-270 and SCR-271 radar sets earlier in the year, but construction of fixed sites had been delayed and radar protection was limited to six mobile stations operating on a part-time basis to test crews and equipment.
Per Army History The Reinforcement of Oahu
SCR-270 Early Warning Radar at Pearl Harbor
Private George Elliot and Private Joe Lockard were the radar operators at the Opana Station on that day, and at 7:00 A.M. were preparing to shut down the SCR-270 radar system. The truck had not arrived to return them to base, so they kept the radar operating for additional training. The radar operators detected a large echo on the SCR-270 radar oscilloscope at 7:02 A.M. on December 7, 1941, which later proved to be the Japanese attack force heading for Pearl Harbor.
Early Radar Warning Ignored
[Photo captions: Museum display of SCR-279 radar inside trailer. Model of a portable SCR-270 unit. U.S. Army photo]
The blip was so large that Lockard thought the radar was malfunctioning; however, Elliot insisted on contacting the Aircraft Warning Information Center. The Center was virtually empty due to early morning training and monitoring exercises. The officer on duty, Lt. Kermit Tyler, had been at the center for only two days and had no training in radar. He did know that a flight of US B-17 Flying Fortresses was expected that day and believed this is what the operators had detected. He relayed back to the radar operators, "Don't worry about it." Lockard and Elliot continued tracking the aircraft until they were about 22 miles from Oahu, when the planes disappeared behind the distortions caused by surrounding mountains. The two radar operators then returned to base. A little after 8:00 A.M., Lockard and Elliott learned Japanese aircraft were attacking the base at Pearl Harbor and realized that what they had previously tracked with the radar was the Japanese attack force.
SCR-270 Radar, National Electronics Museum website
In early December 1941 the Army did have an aircraft warning system nearing completion in Hawaii, but it was not yet in operation. This system depended for its information on the long-range radar machines developed by the Signal Corps in the late 1930's, the SCR-270 (mobile) and SCR-271 (fixed). The Signal Corps in Washington drafted the first plan for installing some of this equipment in Hawaii in November 1939, but before 1941 not much actually was done to prepare for its installation.46 As of February 1941 the War Department expected to deliver radars to Hawaii in June and hoped they could be operated as soon as they were delivered. The first mobile sets actually reached Hawaii in July, delivery having been delayed by about a month because of a temporary diversion of equipment to an emergency force being prepared for occupation of the Azores. In September five mobile sets began operating at temporary locations around Oahu, and a sixth, the Opana station at the northern tip of Oahu, joined the circuit on 27 November. Three fixed sets also arrived during November, but their mountain-top sites were not ready to receive them. The radars in operation on Oahu in late 1941 had a dependable range of from 75 to 125 miles seaward. An exercise in early November demonstrated their ability to detect a group of carrier planes before daylight 80 miles away, far enough out to alert Army pursuit planes in time for the latter to intercept incoming "enemy" bombers about 30 miles from Pearl Harbor. But this test in no way indicated the readiness of radar to do its job a month later. The sets were being operated solely for training; a shortage of spare parts and of a dependable power supply made it impracticable to operate them for more than three or four hours a day; the organization for using their information was a partly manned makeshift operating for training only; and defending pursuit, even if they could have been informed, would have had to keep warmed up and ready to take off in order to intercept enemy planes before they reached their targets. The radars were not supposed to function except for training purposes until the Signal Corps turned them over to an air defense or interceptor command, to be operated by the Army pursuit commander through an information center which would receive data from the radar stations, warn the defending pursuit, control the movement of friendly planes, and control the firing of all antiaircraft guns.
- (g4) John Barker Invents Traffic Radar 1947 -
Per Wiki Radar Gun
Per Google Patents US #2629865 Radio echo apparatus for detecting and measuring the speed of moving objects
The radar speed gun was invented by John L. Barker Sr., and Ben Midlock, who developed radar for the military while working for the Automatic Signal Company (later Automatic Signal Division of LFE Corporation) in Norwalk, CT during World War II. Originally, Automatic Signal was approached by Grumman Aircraft Corporation to solve the specific problem of terrestrial landing gear damage on the now-legendary PBY Catalina amphibious aircraft. Barker and Midlock cobbled a Doppler radar unit from coffee cans soldered shut to make microwave resonators. The unit was installed at the end of the runway (at Grumman's Bethpage, NY facility), and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut, initially for traffic surveys and issuing warnings to drivers for excessive speed. Starting in February 1949, the state police began to issue speeding tickets based on the speed recorded by the radar device. In 1948, radar was also used in Garden City, New York.
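The speed reading in such a Doppler unit follows from the standard Doppler-shift relation. The sketch below is a general illustration only (it is not taken from the sources cited here, and the operating frequency used in the example is an assumed, typical value, not a figure from Barker's device):

$$ f_d \approx \frac{2 v f_0}{c}, \qquad\text{so}\qquad v \approx \frac{c\, f_d}{2 f_0} $$

where $f_0$ is the transmitted microwave frequency, $f_d$ is the measured frequency shift of the echo returned by the moving vehicle, $v$ is the vehicle's radial speed, and $c$ is the speed of light. For example, assuming $f_0 = 10.525\ \mathrm{GHz}$ (a band commonly used by later traffic radar), a car moving at $26.8\ \mathrm{m/s}$ (about 60 mph) shifts the echo by roughly $f_d \approx 1.9\ \mathrm{kHz}$.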
Per 8/20/2013 NY Times article by Pagan Kennedy Who Made That Traffic Radar? (text in body)
Publication number US2629865 A Publication type Grant Publication date 24 Feb 1953 Filing date 13 Jun 1946 Priority date 13 Jun 1946 Inventors Barker John L Original Assignee Eastern Ind Inc
- (g5) Famous Speeder Babe Ruth 1921 -
Per 1921 - This Day in Baseball History
Per book: Young Babe Ruth 2001 by Brother Gilbert C.F.X.
Jail Stripes at Night, Pinstripes in the Daytime Babe Ruth is arrested for speeding in New York City. He is given a fine of $100 and thrown in the slammer overnight and into the next day—missing the first part of the New York Yankees’ late afternoon affair with the Cleveland Indians on June 8. He might have missed the entire game had his uniform not been delivered to him in his cell and received a police escort to the Polo Grounds. The Yanks provide a come-from-behind 4-3 win, though Ruth’s presence has little to do with it.
Per New York Times June 9, 1921 front page abstract 'Babe Ruth Plays After Day In Jail' and here
Per book: Amazing Tales from the New York Yankees Dugout: A Collection of the Greatest... By Ken McMillan
'BABE' RUTH PLAYS AFTER DAY IN JAIL; Sentenced as Speeder, He Spends 4 Hours in Cell and Then Speeds to Ball Game. FREE AT 4 P.M., BATS AT 4:40 Advised to Remember Law, He Is Fingerprinted and Put Behind Bars With Five Others. Worries About Ball Game. Wants "Babe" Ruth's "Real" Name. Advises Him to Remember Law. 'BABE' RUTH PLAYS AFTER DAY IN JAIL Conversations in His Cell.
Per 3/30/2014 Chicago Tribune article 10 things you might not know about traffic tickets By Mark Jacob and Stephan Benzkofer
During his second season as a Yankee, a police officer - unconvinced or unimpressed with who Ruth was - arrested Ruth for speeding on Riverside Drive. Not only did Ruth pay the $100 fine, he was sentenced to a day in jail. Back then, 'a day' ended at 4 p.m., so Ruth was happy he wouldn't have to miss all of that afternoon's 3:15 start. Ruth had his uniform delivered to the jailhouse in lower Manhattan and put it on underneath his fine suit. Not heeding any advice from the judge, Ruth told someone in his jail cell, "I'm going to have to go like hell to get to the game. Keeping you late like this makes you into a speeder." At four o'clock Ruth was released, and a crowd greeted him at the rear of the jail. This time, utilizing a police escort, Ruth made it to the upper half of Manhattan in 18 minutes and was inserted into the lineup.
Babe Ruth got two speeding tickets in New York City in 1921, for driving 27 mph and 26 mph. After the second violation, a magistrate threw him in jail for a few hours. He was released 30 minutes after the start of the Yankees game against the Cleveland Indians and drove his maroon Packard 9 miles in 19 minutes — speeding again — to the Polo Grounds. Only then did he walk — to lead off the sixth inning.
- (g6) John Barker bio -
Per Bridgeport (CT)Post 10/2/1977 article 'Traffic Control Inventor Retires After 44 Years with Signal Firm'
- (g7) Early Adopters CT 1947-1949 -
Popular Science Jun 1947 'Radar Clocks The Traffic' by Devon Francis pp.98-99
Popular Science Sep 1952 'Little Black Box Now Catches Speeders' pp.94,280
The Day, New London, CT, newspaper 2/5/1949 front page article Radar Speed Trap Will Be Setup
The Day, New London, CT, newspaper 2/12/1949 front page article 'First Arrest Made in Radar Speed Check'
HARTFORD - The dubious distinction of becoming the first speeder ever summoned into a Connecticut court on radar evidence will go to some hapless motorist in Glastonbury next week. Captain Ralph J. Buckley, commander of the state police traffic safety division, said the war-born detection equipment will be used at the request of Police Chief George C. Hall of Glastonbury. "It will not be used secretly. Warning signs will be posted in all zones where the device is in use. Any speeder who gets caught will have to argue with a little black box. The accuracy of the little black box, I might add, is uncanny. The speeder won't have a leg to stand on if he tries to talk his way out of a ticket." Followed Recent Survey - The request of Chief Hall for the use of radar equipment in coping with Glastonbury's traffic situation, Captain Buckley said, followed a recent radar speeding survey made in the town by state police. This in turn, he said, was requested because the Connecticut highway safety commission reported that between January and October, 1948, in Glastonbury, there were 106 motor vehicle accidents involving property damage, 61 involving personal injuries, and two with fatalities, an increase over the same period in the previous year. The equipment has been used since May, 1947, to make traffic surveys and as a speeding deterrent, Captain Buckley said. Up to now only warnings have been issued to speeders nabbed with its aid, however, he added. "This is the latest scientific method of gathering evidence against speeding motorists," Captain Buckley said. "Far more accurate than a speedometer, it removes the possibility of human error." In using radar to catch speeders, Captain Buckley explained, one officer operates the equipment by the roadside. Its "little black box" makes a visual and printed record of the speed of all cars passing. When a speeder goes by, the officer simply notes down the car number and radios it on to a fellow officer waiting farther along the highway, who stops the offender and gives him a summons.
Per Hartford Courant 10/9/2006 article Vintage Radar Unit Donated To Museum
First Arrest Made in Radar Speed Check HARTFORD - A man late for a business appointment was arrested as a speeder in Glastonbury this morning and thereby became the first automobile driver ever held in Connecticut on the basis of radar evidence. He was Matthew Dutka, who was driving from his home in Norwich to Hartford along Route 2. He was stopped by State Policemen Vernon Gedney and Albert Kimball as he approached Glastonbury center. Captain Ralph J. Buckley, head of the state police traffic safety division, said Dutka was doing 55 miles an hour in a 30-mile-an-hour zone. Approximately 3,000 cars were clocked by radar in Glastonbury this week before the first speeding arrest was made, according to Capt. Buckley. About 35 warnings had been issued up to this morning, he said. Dutka was given a summons to appear Monday night before Judge J. Ronald Regnier in the Glastonbury town court.
Per United States Municipal News, Volumes 14-16, United States Conference of Mayors, 1947, p.72 (partial)
GLASTONBURY — The Glastonbury Police Department has donated what is believed to be one of the nation's first radar units to the National Law Enforcement Officers Memorial Fund Museum in Washington, D.C. The radar unit was first used by Glastonbury police in 1948 to monitor the speed of cars. Capt. David A. Caron said the unit made the life of a police officer much easier, because it was simpler to track speed. The radar unit became obsolete as newer models were introduced, and Caron had to save it from the trash a couple of times. ``In previous years, people seemed to dispose of it,'' he said. ``I thought it was pretty special, so I saved it from the garbage can twice.'' Caron then stored the unit in a safe place so it could be preserved for history, and waited for an opportunity to share it. He noticed a bulletin that the museum put out for law enforcement articles and decided that the unit would be the perfect addition. He said he was happy to donate the unit in September to museum officials, who could not be reached for comment. ``I'm glad it got into a worthwhile cause and into this museum,'' he said. The National Law Enforcement Officers Memorial Fund Museum is expected to open in 2009.
Per undated nationwide article first located in 6/3/1948 The Amarillo Globe-Times, and again 6/9/1948 Schenectady (NY) Gazette Radar to Trap Speeders
Catching Speeders By Radar — An ingenious radar traffic beam, especially designed to indicate when motor vehicles are exceeding the speed limit, has been built for the Connecticut State Police and will be put into use on a statewide scale as soon as the police force completes an instructional course. The device is called an electromatic... The device is rectangular in shape, weighs 45 pounds, and is easily transported from place to place. A single patrolman is required to operate it. A moving vehicle reflects a microwave signal sent out by the device. As the vehicle goes down the road, the signal beam follows it. ... the speed of the moving car. A pointer stops momentarily on the device's speedometer at the speed the vehicle is traveling, and at the same time it records it on a graph. If the speed is found to be beyond the limit, the vehicle's license plate number can be noted by the patrolman, and radioed down the road to a waiting patrol car. -Highway Research Abstracts, July
Per New York Times 2/9/1949, Business, p.53 'RADAR WORKS ON SPEEDERS; Year's Test Made on Long Island Shows System Is Costly'
GARDEN CITY, N. Y., (NP)— Radar will be used to trap speeders in this Long Island town if tests prove successful. Police are experimenting with an electro-matic speed meter — a radar device set up in a patrol car. The machine bounces radar waves off passing automobiles and records their speed on a graph.
Photo of Garden City radar 1949 per Garden City Public Library, Village of Garden City archives Radar Car and 1948/1949 per radar
GARDEN CITY, L. I., Feb. 8 -- A year's test of radar detection of speeding motorists indicated today that the system works, but at a cost.
Per the May 8, 1948 entry in the Weekly Underwriter covering January 1948 through June 1948, p.1391
Radar Checks Speeders Radar is being used by the police of Garden City, L.I., to clock the speed of passing automobiles. The machine, which cost $1,000, picks up an approaching car from a distance of 75 feet and registers the speed for a ..
- (g8) Radar Rollout 1949-1972 -
TX Highway Patrol 1954 - Highway patrolmen getting radar to help combat speeders
Popular Science Sep 1952 'Little Black Box Now Catches Speeders' pp.94,280
Kiplinger Magazine Jan 1955 'Can they really check your speed by radar?', p.34
The meter, which is manufactured by Automatic Signal Division, Eastern Industries, Inc. East Norwalk, Conn., has helped police catch traffic violators in 31 states- and even in Bermuda, where the speed limit is a flat 25, and no fooling.
Per Shrewsbury, MA Police Department history History
Radar was first used in traffic work half a dozen years ago, and now 43 states, Hawaii and the District of Columbia have some 600 machines. Ohio has the greatest number of machines, 107. Some states have only 1 or 2 each.
Per Rehoboth Beach (DE) Police Department History of the Rehoboth Beach Police
1961:New radar equipment was purchased for traffic enforcement and used very effectively. The compact radar transistor unit may be mounted inside a cruiser, outside on the roof or fenders, on a stand or even off the road with an extension cord. Effective range of the unit is 2,000 feet.
Per Alabama Highway Patrol Department of Public Safety History: 1935-1990
During the mid 1960s there was a noted problem with speeding in the city. However, the RBPD did not have any radar or other speed detection device, or training, to help curb the problem of speeding drivers. The city intended to ask the Delaware State Police for troopers to run radar on city streets when available. However, one year later on July 14, 1967 the RBPD obtained its first radar unit. The Delaware State Police was tasked with training RBPD officers in its use. Today every RBPD patrol car has a radar unit installed.
Per CHP California Highway Patrol Milestones of the CHP
Federal grants received in 1973 allowed Public Safety to equip all patrol cars with protective shields, roll bars, spotlights, electronic sirens and public address systems. A separate grant, awarded through the Office of Highway and Traffic Safety, was used to purchase 54 Speed Gun II radar units used to enforce the 55 mile-per-hour speed limit effective nationwide that year. As a result, arrests for speeding violations increased 18 percent.
Per Orem, UT Police The History of Orem Law Enforcement
1954 CHP tests radar for speed enforcement. 1988 The CHP is authorized to use radar to enforce speed on roads in rural and unincorporated areas.
Per Lexington SC Police Police History
With increasing traffic problems, Chief Burgener gave permission to purchase their first radar unit. The radar unit was put to use on October 18th, 1960, and was found to be a very resourceful tool for officers.
Per unofficial Delaware State Police Delaware State Police History
1972 Police begin using Radar in the town for speed enforcement.
Per official Delaware State Police Delaware State Police History
On March 13, 1952, the state police first used RADAR devices specifically designed for traffic enforcement, charging nine people with speeding on the DuPont Parkway. The first set was borrowed from the highway department. The use of this technology was highly controversial; adding to the controversy was the use of unmarked police cruisers for enforcement. The issue became quite volatile when the chairman of the State Highway Commission was stopped for speeding. Near the end of 1955, the state police announced they would be utilizing three RADAR units around the clock. The new smaller units were no larger than a suitcase and were permanently installed in the trunks of 1955 Ford Interceptors. In the fall of the following year, the state police began using RADAR in all three counties.
Per Pennsylvania State Police Pennsylvania State Police History
Using radar for the first time, troopers arrested nine motorists on March 13, 1952. Initial units were cumbersome and brought immediate reaction from the public and the political arena. The year 1957 witnessed improvement of the radar traffic enforcement system. On October 31, radar units were placed in the trunks of patrol vehicles to assist with maintaining speed limits.
Per South Bend (IN) Police South Bend Police History
On Sept. 1, 1961, the State Police officially began radar speed checks.
Per Huntsville (AL) Police Huntsville (AL) Police History
use of radar in 1951
Per Virginia State Police Virginia State Police History
March 1961 - A radar speed checker was purchased from Janette & Company.
Per unofficial Ohio State Highway Patrol
1952: Radio detecting and ranging equipment--known as radar, was used for the first time as a speed surveying device throughout Virginia. 1954: Stationary radar was first used for traffic speed enforcement.
Per unofficial Dayton Police History
Use of radar for speed enforcement and "Intoximeters" for drunk driving offenses began in 1952
Per book: Health Instruction Yearbook 1950 p. 162
On June 13, 1952, Dayton’s first radar speeding arrest was made by Ptl. Harold Murphy and Ptl. James Hopkins when they stopped a car traveling 45 miles per hour in front of Carillon Historical Park on S. Patterson Blvd. Unlike the 1904 speeding ticket, the officers were able to register the driver’s speed using new technology and were able to chase down the driver in a patrol car.
Per Chicago Tribune 6/20/1954 Radar Eye Sees All - So Cars Slow Up
Michigan Studies Highway Speeds with Radar Charles M. Ziegler, of the Michigan State Highway Department, reports that in 1950 Michigan used radar for the first time to check vehicle speeds on the streets and highways of the state.
Per book: Louisville Division of Police: History & Personnel By Morton O. Childress
An all-seeing radar eye has wrought a remarkable slow down on the part of Hammond motorists in two months that it has been clocking them. John Mahoney, Hammond police traffic department captain, says radar is "marvelous." The city's single radar unit has proven ideal to clock speeders on side streets where rough pavement throws off the speedometer of a moving squad car. Erect Warning Signs / The threat of radar control caused a "genuine reduction" in speeding, Mahoney said, when warning signs were erected a week before radar was first used last April. Some $1,500 has been spent putting up 20 warning signs. They are posted so that no one can drive into Hammond without being cautioned, he said. Purpose of radar is not to trap tourists, or run up a large number of arrests, but to cause speed reduction, the captain explained. Mahoney attributes reduction in accidents and injuries to use of the electronic gadget. From June 1 thru 14 there were 83 accidents and 23 injuries, according to police records. During the same period in 1953 there were 116 accidents, and 34 injuries, including one fatality. So far, radar has not been used much on U.S. 41 or other main highways. For if several vehicles were before the radar scope at once, only the speed of the fastest would be registered. Catches 130 in Six Days Also, motorists on highways are more impressed by sight of a motorcycle policeman than fear of possible radar check, Capt. Mahoney said. Altho Hammond's radar car is plainly marked and has not operated from concealment, some 130 speeders were arrested in one six day period. Most of them exceeded the speed limit by at least 10 miles an hour. Tolerance on speeding will be reduced 5 miles an hour on streets with high accident frequency records, however. At school crossings, radar can be employed to arrest drivers only 1 or 2 miles an hour above the limit, the captain warned.
Per stltoday article A Look Back St. Louis police first aim radar at speeders in 1953 photo captions:
January 7, 1955 Radar to catch speeders
Per Reading (PA) Eagle 10/31/1954 Radar Withstands Legal Challenges excerpts:
St. Louis police installed warning signs on the Oakland Express Highway shortly before they began issuing speeding tickets on Nov. 4, 1953, with their first traffic-radar device. St. Louis Police Court Judge Robert G. Dowd (front left) observes a test of the city Police Department's first speed-radar device on the Oakland Express Highway on Oct. 29, 1953. Dowd handled his first radar docket on Nov. 18, imposing $60 in fines. Others are, from left, court official David Fitzgibbon, Police Board President I.A. Long, Police Judge Morris Rosenthal, Police Chief Jeremiah O'Connell, assistant chief Joseph Casey and (front right) Maj. William Cibulka, traffic division commander. The first St. Louis police traffic radar was a 45-pound device that was placed on the shoulder of the Oakland Express Highway at Forest Park in November 1953. The transmitter/receiver was attached by wire to a monitor in a police car. St. Louis Police Cpl. Elmer Kuhmann broadcasts the description of a speeding car in November 1953, when the department first began using traffic radar. Waiting up ahead on the Oakland Express Highway were motorcycle officers, who pulled over the speeders. A close-up of a radar monitoring device used by St. Louis police to catch speeders beginning in November 1953. The needle jumped across the moving roll of paper as the radar transmitter relayed information by cable. The speed marked on the paper was of a station wagon going 48 mph. St. Louis police motorcycle Officer Ed Schnelting writes a speeding ticket for Mildred Fabick, 31, of Ladue, on Nov. 4, 1953, the first day of the department's use of traffic radar. The scene is on the Oakland Express Highway. Police Court Judge Robert G. Dowd dropped the charge against her on Nov. 18, the first court day for radar tickets, because the ticket directed her to the wrong courtroom.
Per Washington State Patrol Looking Back
Chicago (U.P.) - Municipal Judge Thomas M. Powers of Akron, Ohio, addressing a session of the National Safety Council convention... said the public has more confidence in radar than it has in the speedometer method of apprehending speeders, which involves a hazardous chase by a police car. Powers said Columbus, Ohio, started using radar in 1948. Since then, he said, its use has spread through 45 states and thousands of communities.
Per Kansas Highway Patrol History
In 1951, the Patrol began using radar with a stationary type unit
Per book: Public Management, Volumes 30-31, International City Managers' Association, 1948, pp.349, 367
Patrol units did not have moving radar until 1972 or video cameras until the 1990s.
Per The Police Chief, Volumes 16-17, International Association of Chiefs of Police, 1949
p.349 - Radar is now being used by the police in Columbus in checking up on speeders (p.367) p.367 - Traffic violations: Checks auto speeders with radar, 367 [quantity]
Per Wilton CT Bulletin March 3, 1954 N.J. Turnpike Radar Curbs Speeders: Safety record Shows Improvement p.11
Columbus, Ohio - In May started using radar to apprehend speed violators. Well marked "Police Speed Control Zone" signs were erected at critical high speed areas, to avoid "speed trap" criticism. One unit, consisting of one car equipped with...
Per Chicago Tribune 6/6/1954 RADAR TO SPOT SPEEDING CARS ON EDENS ROAD by Hal Foust
During 1953, the first year of radar's use, the accident and fatality rate showed sharp reductions. Radar was responsible for the apprehension of twice as many reckless drivers in 1953 as were apprehended by the entire detachment of State Police assigned to the Turnpike in the previous year.
Per Chicago Tribune 9/16/1954 USE RADAR CAR TO CHECK SPEED IN EXPERIMENTS
Use of Device to Begin on Friday [June 11] A police radar for speed control will be installed on Edens expressway on Friday. This will be the state's first use of the electronic device, Joseph D. Bibb, Illinois director of public safety said yesterday. The location for the installation, just north of Chicago, is on a high speed super-highway with a bad accident record. Bibb said he was confident that the radar operation will be a wholesome influence on the conduct of Edens traffic. He cited the experience of Moline police. There the radar reduced the number of speed violations and cut accidents, he said. Weighs 45 Pounds The instrument is portable, weighing approximately 45 pounds with its carrying case. It has an operating zone of about 150 feet, with a beam width of 30 degrees at that distance. Speeds are measured with an accuracy of plus or minus 2 miles an hour. They are read on an indicator and also plotted on a graph for evidence admissible in courts. State police with training in radar work will operate the device. One will read the indicators. A squad car ahead in the path of traffic will apprehend the driver with a radar record for dangerous speed. The men will work under the direct supervision of Wilbur Kennedy, assistant superintendent of state police for this district.
Per unofficial Tennessee Lawman Motor Cars with photo
A radar equipped squad car has been touring Chicago's streets for speeders since Sept. 1, Traffic Chief Michael Ahern said yesterday. The experiment in checking speeds of cars will continue until November when the data will be used for a study of the relation between speed and high accident rates. No arrests are being made. Traffic Policemen Lester Beling and Leonard Baldy [who became a helicopter traffic reporter and died in a crash in 1960], who operate the car equipped with $1,100 worth of radar equipment, said they clocked 46 cars in 90 minutes going at least 10 miles over the limit in the 6400 block of S. Western av. The speeding vehicles, in a 20 mile an hour zone, included three suburban buses and a CTA street car, they said.
Per unofficial Vermont State Police State Police Series #45 - Vermont State Police - History
Memphis police began using radar units for traffic control on March 20, 1953. The radar unit was set on a tripod outside the car and connected to a monitor inside the car that displayed the speed of passing cars.
Per VT State Police Archives
The Field Force Division started using radar as a speed enforcement tool in 1954.
Per Mon, September 26, 1949 The Dixon (IL) Telegraph article 'Radar Being Used Here to Determine Speed of Autos' p.7
Per Freeport (IL) Journal-Standard, Thu. 9/22/1949, p.5 'Radar Speed Checker Used By Authorities On Highways In Area '
Radar is now being employed by ... the Dixon district department ... to be far more accurate in determining the speed of cars and trucks on highways ... than the method used ... The device, which is electrically operated either from the storage battery or 110 volts, has been given a thorough test on the highways in the Dixon district. District Traffic Engineer Dan Branigan of the Dixon district offices, and his assistant Arthur J. Mueller, have been in charge of the tests and within a few days plan to check the speed of all cars used by Sterling district, No. 1 of the state highway police. Traffic Engineer Branigan will address the National Safety Congress in Chicago next month on the operation of the new electro-matic radar speed meter, explaining its operation and efficiency. The meter is able to check cars driving from one to 100 miles per hour. Tests taken thus far, Branigan stated today, indicate that a few automobiles have been clocked at speeds five to 10 per cent faster than indicated on the speedometer.
Per Book: Troopers and Highway Patrolmen, by Marilyn Olsen
Rockford, Ill., Sept. 22.—It's getting harder all the time for motorists to know when the authorities are checking up on them. The latest wrinkle? Your car may be a pip on the motor cop's radar screen. The electronic traffic spy has just been given its first test at checking up on city traffic here. It's already had a workout on Illinois highways near Dixon, where many a speeder has been clocked without his knowledge. It is called the electro-matic speed meter. It comes, all ready to go to work, in a black box. Unlike traffic counters and speed checkers in general use, it requires no hose across the pavement or beam catcher. It can be focused on the road like a camera, and its short wave emissions are bounced back by vehicles. Connected with the radar are two meters and a marking arm which records speed of the vehicle on a graph. Arthur Mueller of Dixon, assistant traffic engineer for the state highway department, says the machine's record is invariably correct to within one mile an hour.
Per Saturday Evening Post article by Irving Leiberman, appearing in the Milwaukee Journal 11/12/1949 Slow Down Radar is Watching, excerpts:
Wyoming: Experimental use of radar began in 1954, with the first operations pursued on US 30 between Cheyenne and Laramie. The next year, the patrol obtained its first 'stationary' radar unit.
South Carolina: Radar was introduced in 1962 as a tool to apprehend speeders.
Idaho: In January of 1954 the old "prima facie" speed law was replaced with an enforceable recommended speed limit of 60 mph in the daytime and 55 mph at night on all primary U.S. and state highways. Also in 1954, the ISP purchased six radar sets to aid in controlling speeding motorists.
NY: In 1975 the state police acquired its first radar units capable of operating while the troop car was in motion.
Tennessee: In 1954, the acquisition of eight pieces of equipment changed forever the enforcement activities of the Patrol. Radar became a new word in the Patrol's vocabulary. It was used in posted zones throughout the state.
Virginia: Radar was first used as a speed surveying device in 1952. The speed limit on most primary highways was 60 mph.
Per California Highway Patrol budget document 1954 p.276/277 showing - 'Excerpts from Article "Radar Methods of Speed Control" by Carl, R. Finegan, Sheriff of Lorain County, Ohio, published in "The California Highway Patrolman" March, 1950 - and "Excerpts from Article Entitled "Slow Down, Mr. Speeder!" from the April, 1953, issue of "Service," a Publication of Cities Service, by Robert I. Marshall -NOW-IN 26 STATES-THE "RADAR COPS" ARE WATCHING, AND THEY USUALLY GET THEIR MAN"
A dozen states are using an electronic device to clock speeders, and violators find the results uncanny. The traffic radar, known as the "electromatic speed meter," was perfected by a Norwalk (Conn.) company in co-operation with Connecticut state troopers. It consists of a handy, car-borne apparatus, weighing only 45 pounds. A transmitter-receiver unit in a small black box is placed on a fender with its black glass front facing oncoming traffic. The power unit is plugged into the car's battery. A recorder containing a roll of graph paper and a visual speed indicator is hung near the car's steering wheel. The device has been tried out, either state-wide or locally, in California, Colorado, Connecticut, Kentucky, Maine, Maryland, Mississippi, New Jersey, North Carolina, Ohio, Rhode Island, Tennessee, and the list is growing.
Per Popular Electronics May 1956 Radar on the Highway
[Mar 1950] Recently I talked to Capt. Clem Owens of the Columbus, Ohio, Police Department. Here is what Captain Owens told me: "During the first 6 months of 1949 in Columbus, we had a force of 30 motorcycle policemen checking traffic. These 30 men made 946 arrests, and traveled 121,515 miles to do it. However, with the use of the speed meter, 853 arrests were made, and 783 warning tickets were given out. We used only two men. The thing to note here is the fact that with the use of radar our equipment traveled no miles and we eliminated the use of 28 men who then could be placed in other branches of law enforcement. The men we used to make our traffic arrests were policemen who had been placed on light duty. Here you will note that there has been a saving of manpower and the cost of operating equipment. Again with the use of radar, weather conditions do not have to be perfect. If it has been raining, and the streets are slippery, you are not subjecting your men to dangerous hazards such as you would have when using motorcycles or squad cars." [April 1953] Of 56,000 arrests made by enforcement agencies, utilizing mechanical devices, only 318 have failed of conviction.
Q. How many radar speed meters are there, and where are they located? A. Radars must be licensed by the FCC. At this writing, there are about 1600 radars in use. They are scattered throughout all 48 states. Most of the longer freeway, turnpike, or expressway police patrols have one or more radar speed meters in operation daily.
- (g9) Radar Products 1949-1972 -
Company historical and Product data per Hendon Publishing's Buyer's Guide to Radar by Jim Wells
NHTSA Radar CONSUMER PRODUCT LIST (CPL) January 2003
Per October 14, 1977 The Ottawa Journal, Canada, p.42
Electro-Matic Speed Meter dash meter c1968 from 1971 MD State Police car, per Retired MD police car photos: http://www.examiner.com/slideshow/jerry-scarborough-s-retired-maryland-state-police-patrol-cars#slide=8
Toronto Police chief Harold Adamson, accusing him of "misleading the public into thinking they (police) have a secret weapon." The "secret weapon" is the hand-held Muni-Quip Tri Bar radar gun, a new acquisition for Toronto police which, officials have predicted, will render the Fuzzbuster useless. The gun, police say, can be pointed at the ground until trained on an approaching vehicle, providing an accurate reading of a car's speed but denying the speeder time to slow down. Lastman, naturally, disagrees. "Besides misleading the public ... the police are using disgraceful tactics," she wrote.
Decatur history per 2008 archive of Decatur Electronics Company History
Per Decatur Elec. History of Radar
Product info per Archive.org Wayback May 1998, Dec 1998, Feb 1999
Police Radar Manufacture Begins In 1955, Bryce Brown, a university professor with experience working on the Manhattan Project, started Muniquip (short for Municipal Equipment). He made speed timers by stretching two hoses across a road, and soon began manufacturing the first law enforcement radar. In 1964, Brown left Millikin University to focus on his radar company. A Toronto firm was interested in his other products, and bought them along with the company name two years later. Brown kept the radar portion, and renamed the company Decatur Electronics, Inc. Police radar began as an analog system using a needle rather than a digital readout. Digital technology was not far behind though, and along with it came moving radar. Since then, radar technology has become firmly embedded as a law enforcement tool. Today radar is employed by the military, and in law enforcement, weather, aviation and sports.
- (g10) JUGS guns 1975 on -
JUGS Speed Gun picture from the JUGs about us page
- (g11) 1974 55 mph nationwide speed limit -
In Sep 2005 National Highway Traffic Safety Administration article Federal Role in Speed Management
In 1973 during the oil embargo, Congress enacted the National Maximum Speed Limit (NMSL), set at 55 mph, to conserve fuel. In addition to conserving fuel, the annual traffic fatality toll declined from 54,052 in 1973 to 45,196 in 1974, a drop of over 16 percent. As a result of the reduction in traffic fatalities, the Congress enacted Public Law 93-643 making the NMSL permanent. In 1995, Congress repealed the NMSL, ending the Federal sanctions for noncompliance and the requirement for States to submit speed compliance data.
- (g13) Slow Guns vs Fast Guns -
Per Stalker Pro manual
Per book: High Heat: The Secret History of the Fastball and the Improbable Search, By Tim Wendell p.108
Scouts say the JUGS gun, which Litwhiler helped popularize, measures the speed of the ball soon after it leaves the pitcher's hand. The Decatur RAGUN, which soon followed in development, is said to measure the speed of the ball closer to the batter and home plate. So the JUGS gun was soon known as the “fast gun” and the RAGUN as the “slow gun.” Routinely, there was a four-mile-per-hour difference.
Free Share Lead2pass CompTIA N10-006 VCE Dumps With New Update Exam Questions:
A technician wants to separate networks on a switch. Which of the following should be configured to allow this?
C. Spanning tree
D. Traffic filtering
A VLAN is a group of end stations in a switched network that is logically segmented by function, project team, or application, without regard to the physical locations of the users. VLANs have the same attributes as physical LANs, but you can group end stations even if they are not physically located on the same LAN segment.
A user does not have network connectivity. While testing the cable the technician receives the below reading on the cable tester:
Which of the following should the technician do NEXT?
A. Cable is a crossover, continue troubleshooting
B. Pin 3 is not used for data, continue troubleshooting
C. Pin 3 is not used for data, replace the NIC
D. Redo the cable’s connectors
A technician needs multiple networks, high speeds, and redundancy on a system. Which of the following configurations should be considered for these requirements? (Select TWO).
A. Routing table
B. Next hop
C. Port mirroring
D. Port monitoring
Port mirroring is used on a network switch to send a copy of network packets seen on one switch port (or an entire VLAN) to a network monitoring connection on another switch port. This is commonly used for network appliances that require monitoring of network traffic, such as an intrusion detection system, passive probe or real user monitoring (RUM) technology that is used to support application performance management (APM).
In computer networking, a single layer-2 network may be partitioned to create multiple distinct broadcast domains, which are mutually isolated so that packets can only pass between them via one or more routers; such a domain is referred to as a Virtual Local Area Network, Virtual LAN or VLAN.
A technician decides to upgrade a router before leaving for vacation. While away, users begin to report slow performance. Which of the following practices allows other technicians to quickly return the network to normal speeds?
A. Change management
C. Asset management
D. Cable management
Because the router upgrade was documented through change management, the other technicians can review the change record, see exactly what was modified, and quickly roll the change back to return the network to normal speeds.
Which of the following would a network administrator recommend to satisfy fault tolerance needs within the datacenter?
A. Multimode fiber
B. Setting up a new hot site
C. Central KVM system
D. Central UPS system
A central UPS (uninterruptible power supply) system provides fault tolerance for power: if utility power fails, the UPS keeps the datacenter equipment running, so a power issue does not take the systems down.
During a disaster recovery test, several billing representatives need to be temporarily setup to take payments from customers. It has been determined that this will need to occur over a wireless network, with security being enforced where possible. Which of the following configurations should be used in this scenario?
A. WPA2, SSID enabled, and 802.11n.
B. WEP, SSID enabled, and 802.11b.
C. WEP, SSID disabled, and 802.11g.
D. WPA2, SSID disabled, and 802.11a.
WPA2 is a security technology commonly used on Wi-Fi wireless networks. WPA2 (Wireless Protected Access 2) replaced the original WPA technology on all certified Wi-Fi hardware since 2006 and is based on the IEEE 802.11i technology standard for data encryption.
Which of the following wiring distribution types, often found in company closets, is used to connect wiring from individual offices to the main LAN cabling?
B. 66 block
D. Patch panel
A patch panel, patch bay, patch field or jack field is a number of circuits, usually of the same or similar type, which appear on jacks for monitoring, interconnecting, and testing circuits in a convenient, flexible manner.
Which of the following network access security methods ensures communication occurs over a secured, encrypted channel, even if the data uses the Internet?
A. MAC filtering
C. SSL VPN
SSL VPN consists of one or more VPN devices to which the user connects by using his Web browser. The traffic between the Web browser and the SSL VPN device is encrypted with the SSL protocol or its successor, the Transport Layer Security (TLS) protocol.
Which of the following is the difference between 802.11b and 802.11g?
D. Transmission power
802.11b has a maximum speed of 11Mbps whereas 802.11g has a speed of 54Mbps.
Users are reporting that some Internet websites are not accessible anymore. Which of the following will allow the network administrator to quickly isolate the remote router that is causing the network communication issue, so that the problem can be reported to the appropriate responsible party?
B. Protocol analyzer
The tracert (traceroute) command lists every router hop along the path to the destination, so the administrator can quickly see which remote router stops responding and report it to the responsible party.
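As a rough illustration of this troubleshooting step (a sketch only, not part of the exam material; it assumes a Unix-like host with the traceroute utility on the PATH, and the hostname shown is just a placeholder):

# Sketch: run traceroute to a destination and report the last hop that answered,
# i.e. a quick way to see roughly where along the path packets stop getting replies.
import subprocess
import sys

def last_responding_hop(host):
    result = subprocess.run(["traceroute", "-n", host],
                            capture_output=True, text=True, check=False)
    last = "no hop responded"
    for line in result.stdout.splitlines()[1:]:      # skip the header line
        fields = line.split()
        if len(fields) >= 2 and fields[1] != "*":    # hop replied with an address
            last = fields[1]
    return last

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "example.com"   # placeholder host
    print("last responding router:", last_responding_hop(target))

On Windows the equivalent command is tracert, whose output format differs, so the parsing above would need to be adjusted.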
N10-006 dumps full version (PDF&VCE): https://www.lead2pass.com/n10-006.html
Large amount of free N10-006 exam questions on Google Drive: https://drive.google.com/open?id=0B3Syig5i8gpDVzI0bUdJdU1ESkk
Network card
Infobox (Computer Hardware, Generic): Network Card
[Image caption: A 1990s Ethernet network interface controller card which connects to the motherboard via the now-obsolete ISA bus. This combination card features both a (now obsolete) bayonet cap BNC connector (left) for use in coaxial-based 10base2 networks and an RJ-45 connector (right) for use in twisted pair-based 10baseT networks. (The ports could not be used simultaneously.)]
Connects to: Motherboard (integrated, PCI connector, or ISA connector); Network
Speeds: 10 Mbit/s, 100 Mbit/s, 1000 Mbit/s, up to 160 Gbit/s
A Network card, Network Adapter, LAN Adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.
Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. No two cards ever manufactured share the same address. This is accomplished by the Institute of Electrical and Electronics Engineers (IEEE), which is responsible for assigning unique MAC addresses to the vendors of network interface controllers.
Whereas network cards used to be expansion cards that plug into a computer bus, the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset, or implemented via a low-cost dedicated Ethernet chip, connected through the PCI (or the newer PCI Express) bus. A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built in.
The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
There are four techniques used to transfer data; the NIC may use one or more of these techniques.
*Polling is where the microprocessor examines the status of the peripheral under program control.
*Programmed I/O is where the microprocessor alerts the designated peripheral by applying its address to the system's address bus.
*Interrupt-driven I/O is where the peripheral alerts the microprocessor that it is ready to transfer data.
*DMA is where the intelligent peripheral assumes control of the system bus to access memory directly. This removes load from the CPU but requires a separate processor on the card.
A network card typically has a twisted pair, BNC, or AUI socket where the network cable is connected, and a few LEDs to inform the user of whether the network is active, and whether or not there is data being transmitted on it. Network cards are typically available in 10/100/1000 Mbit/s varieties. This means they can support a transfer rate of 10, 100 or 1000 Megabits per second.
Network interface controller
A Network Interface Controller (NIC) is a hardware interface that handles and allows a network-capable device access to a computer network such as the internet. The NIC has a ROM chip that has a unique Media Access Control (MAC) Address burned into it. The MAC address identifies the vendor and the serial number of the NIC, which is unique to the card. Every NIC has a unique MAC address which identifies it on the LAN. The NIC exists on both the 'Physical Layer' (Layer 1) and the 'Data Link Layer' (Layer 2) of the OSI model.
Sometimes the words 'controller' and 'card' are used interchangeably when talking about networking, because the most common NIC is the Network Interface Card. Although 'card' is more commonly used, it is less encompassing. The 'controller' may take the form of a network card that is installed inside a computer, or it may refer to an embedded component on a computer motherboard, a router, an expansion card, a printer interface, or a USB device.
A MAC Address is a unique 48-bit network hardware identifier that is burned into a ROM chip on the NIC to identify that device on the network. The first 24 bits are called the Organizationally Unique Identifier (OUI) and are largely manufacturer-dependent. Each OUI allows for 16,777,216 unique NIC addresses.
Smaller manufacturers that do not have a need for over 4096 unique NIC addresses may opt to purchase an Individual Address Block (IAB) instead. An IAB consists of the 24-bit OUI plus a 12-bit extension (taken from the 'potential' NIC portion of the MAC address).
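As a minimal illustration of the split described above (a sketch only, not from the source article; the MAC address shown is a made-up example), the following Python snippet separates a MAC address into its OUI and device-specific parts:

# Split a 48-bit MAC address into its OUI (first 24 bits) and
# device-specific (last 24 bits) parts. The address below is a made-up example.
def split_mac(mac):
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("expected six octets, e.g. 00:1A:2B:3C:4D:5E")
    oui = ":".join(octets[:3]).upper()        # assigned to the vendor by the IEEE
    device = ":".join(octets[3:]).upper()     # chosen by the vendor for each card
    return oui, device

oui, device = split_mac("00:1a:2b:3c:4d:5e")
print("OUI:", oui, "device part:", device)
print("addresses per OUI:", 2 ** 24)          # 16,777,216, as stated above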
TCP Offload Engine (TOE)
Host bus adapter (HBA)
Wireless network interface card (WNIC)
* [http://www.examcram2.com/articles/article.asp?p=438038&seqNum=2&rl=1 CCNA Exam Prep: Data Link Networking Concepts]
* [http://standards.ieee.org/regauth/oui/index.shtml IEEE Registration Authority - IEEE OUI and Company_id Assignments]
* [http://standards.ieee.org/faqs/OUI.html IEEE Registration Authority - FAQ]
<urn:uuid:66d42cba-e4a2-4e69-bc0a-d8928808686c> | A family of microcomputers produced by Acorn Computers, Cambridge, UK. The Archimedes, launched in June 1987, was the first RISC based personal computer (predating Apple Computer's Power Mac by some seven years). It uses the Advanced RISC Machine (ARM) processor and includes Acorn's multitasking operating system and graphical user interface, RISC OS on ROM, along with an interpreter for Acorn's enhanced BASIC, BASIC V.The Archimedes was designed as the successor to Acorn's sucessful BBC Microcomputer series and includes some backward compatibility and a 6502 emulator. Several utilities are included free on disk (later in ROM) such as a text editor, paint and draw programs. Software emulators are also available for the IBM PC as well as add-on Intel processor cards. There have been several series of Archimedes: A300, A400, A3000, A5000, A4000 and RISC PC. Usenet FAQ. Archive site list. HENSA archive. Stuttgart archive. See also Crisis Software, Warm Silence Software.
Last updated: 1998-04-03
The Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network.
VNC-based connections are usually faster (they require less network bandwidth) than X11 applications forwarded directly through SSH.
In this chapter we show how to create an underlying SSH tunnel from your client machine to one of our login nodes, then how to start your own VNC server on the login node, and finally how to connect to your VNC server via the encrypted SSH tunnel.
Create VNC Password
Local VNC password should be set before the first login. Do use a strong password.
[username@login2 ~]$ vncpasswd
Password:
Verify:
To access VNC, a local vncserver must be started first, and a tunnel using SSH port forwarding must also be established.
See below for the details on SSH tunnels.
You should start by choosing your display number. To choose a free one, check the currently occupied display numbers by listing them with this command:
[username@login2 ~]$ ps aux | grep Xvnc | sed -rn 's/(\s) .*Xvnc (\:[0-9]+) .*/\1 \2/p'
username :79
username :60
.....
As you can see above, displays ":79" and ":60" are already occupied. Generally, you can choose any display number except the occupied ones. Also remember that the display number should be less than or equal to 99. Based on this, we have chosen display number 61, which is used in the examples below.
Your situation may be different, so your choice of number may differ as well. Choose and use your own display number accordingly!
Start your VNC server on the chosen display number (61):
[username@login2 ~]$ vncserver :61 -geometry 1600x900 -depth 16
New 'login2:1 (username)' desktop is login2:1
Starting applications specified in /home/username/.vnc/xstartup
Log file is /home/username/.vnc/login2:1.log
Check whether the VNC server is running on the chosen display number (61):
[username@login2 .vnc]$ vncserver -list
TigerVNC server sessions:
X DISPLAY #     PROCESS ID
:61             18437
Another way to check it:
[username@login2 .vnc]$ ps aux | grep Xvnc | sed -rn 's/(\s) .*Xvnc (\:[0-9]+) .*/\1 \2/p'
username :61
username :102
The VNC server runs on port 59xx, where xx is the display number. You get your port number simply as 5900 + display number; in our example, 5900 + 61 = 5961. For display number 102 the TCP port would be 5900 + 102 = 6002, but be aware that TCP ports above 6000 are often used by X11. Calculate your own port number and use it instead of 5961 in the examples below!
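A tiny convenience sketch of the same arithmetic (assuming Python 3 on your local machine; the login address in the printed command is only a placeholder, substitute your own):

# Compute the VNC TCP port for a display number and print a matching tunnel command.
# "username@login-node" is a placeholder, not a real address.
def vnc_port(display):
    return 5900 + display

display = 61
port = vnc_port(display)
print("VNC port:", port)                      # 5961 for display :61
print("ssh -TN -f username@login-node -L %d:localhost:%d" % (port, port))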
To access the VNC server you have to create a tunnel between the login node (using TCP port 5961) and your machine (using a free TCP port, for simplicity the very same) in the next step. See examples for Linux/Mac OS and Windows.
The tunnel must point to the same login node where you launched the VNC server, eg. login2. If you use just cluster-name.it4i.cz, the tunnel might point to a different node due to DNS round robin.
Linux/Mac OS Example of Creating a Tunnel
At your machine, create the tunnel:
local $ ssh -TN -f [email protected] -L 5961:localhost:5961
Issue the following command to check the tunnel is established (note the PID 2022 in the last column, you'll need it for closing the tunnel):
local $ netstat -natp | grep 5961
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:5961          0.0.0.0:*               LISTEN      2022/ssh
tcp6       0      0 ::1:5961                :::*                    LISTEN      2022/ssh
Or on Mac OS use this command:
local-mac $ lsof -n -i4TCP:5961 | grep LISTEN
ssh 75890 sta545 7u IPv4 0xfb062b5c15a56a3b 0t0 TCP 127.0.0.1:5961 (LISTEN)
Connect with the VNC client:
local $ vncviewer 127.0.0.1:5961
In this example, we connect to the VNC server on port 5961 via the SSH tunnel. The connection is encrypted and secured. The VNC server listening on port 5961 provides a screen of 1600x900 pixels.
You have to destroy the SSH tunnel, which is still running in the background after you finish your work. Use the following command (PID 2022 in this case; see the netstat command above):
local $ kill 2022
Windows Example of Creating a Tunnel
Start the VNC server using the vncserver command described above.
Search for the localhost and port number (in this case 127.0.0.1:5961).
[username@login2 .vnc]$ netstat -tanp | grep Xvnc
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:5961          0.0.0.0:*               LISTEN      24031/Xvnc
On the PuTTY Configuration screen go to Connection->SSH->Tunnels to set up the tunnel.
Fill the Source port and Destination fields. Do not forget to click the Add button.
WSL (Bash on Windows)
Windows Subsystem for Linux is another way to run Linux software in a Windows environment.
At your machine, create the tunnel:
local $ ssh [email protected] -L 5961:localhost:5961
Example of Starting VNC Client¶
Run the VNC client of your choice, select VNC server 127.0.0.1, port 5961 and connect using VNC password.
In this example, we connect to the VNC server on port 5961 via the SSH tunnel, using the TigerVNC viewer. The connection is encrypted and secure. The VNC server listening on port 5961 provides a screen of 1600x900 pixels.
Use your VNC password to log in with the TightVNC Viewer and start a Gnome session on the login node.
After a successful login, you should see the Gnome desktop environment.
Disable Your Gnome Session Screensaver¶
Open Screensaver preferences dialog:
Uncheck both options below the slider:
Kill Screensaver if Locked Screen¶
If the screen gets locked, you have to kill the screensaver. Do not forget to disable the screensaver afterwards.
[username@login2 .vnc]$ ps aux | grep screen
username  1503  0.0  0.0 103244   892 pts/4    S+   14:37   0:00 grep screen
username 24316  0.0  0.0 270564  3528 ?        Ss   14:12   0:00 gnome-screensaver

[username@login2 .vnc]$ kill 24316
Kill Vncserver After Finished Work¶
You should kill your VNC server using command:
[username@login2 .vnc]$ vncserver -kill :61

Killing Xvnc process ID 7074
Xvnc process ID 7074 already killed
Or this way:
[username@login2 .vnc]$ pkill vnc
Do not forget to also terminate the SSH tunnel, if one was used; see the tunnelling instructions above for details.
GUI Applications on Compute Nodes Over VNC¶
The very same methods as described above may be used to run GUI applications on the compute nodes. However, for maximum performance, proceed with the following steps:
Open a Terminal (Applications -> System Tools -> Terminal). Run all the next commands in the terminal.
Allow incoming X11 graphics from the compute nodes at the login node:
$ xhost +
Get an interactive session on a compute node (for more detailed info look here). Use the -v DISPLAY option to propagate the DISPLAY on the compute node. In this example, we want a complete node (16 cores in this example) from the production queue:
$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A PROJECT_ID -q qprod -l select=1:ncpus=16
Test that the DISPLAY redirection into your VNC session works by running an X11 application (e.g. xterm) on the assigned compute node:
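$ xterm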
Example described above:
By Martin Feldman, M.D., and Gary Null, Ph.D.
FUNCTION AND DIAGNOSIS
An under-active thyroid system is a common health problem, causing symptoms such as low energy, weight gain, hair loss and depression in its many victims. Depression is an especially tricky symptom for these people because it is likely to be diagnosed as a psychological disorder. A patient could end up taking an antidepressant such as Prozac, which poses certain health risks, at a time when he or she really needs to repair a faulty thyroid system.
Low thyroid may be the most commonly misdiagnosed health problem in the United States. Some physicians estimate that as many as one in five Americans may suffer from an unsuspected low thyroid state. In Thyroid Power: Ten Steps to Total Health, Richard Shames, M.D., and Karilee Shames, R.N., Ph.D., call low thyroid a “large-scale epidemic that has been inadequately addressed.” They note that more than half of people with low-grade hypothyroidism remain undiagnosed at any given time, and that Synthroid, the well-known thyroid hormone medication, became the best-selling prescription drug in 1999.[i]
One conventional survey found that approximately 10 percent of the general population and up to 20 percent of older women have mild thyroid failure.[ii] And according to doctors on the thyroid service at Harvard Medical School, one out of 12 women under the age of 50 and one out of six by age 60 have the disorder.[iii] Another researcher suggests that “thyroiditis is a commonly overlooked problem in perhaps 10% of chronically ill patients.”[iv]
The source of this hidden health problem is the complex thyroid system. The thyroid gland is a small, butterfly-shaped organ located in the front of the neck, just below your Adam’s apple. It secretes about a teaspoon of hormone a year and is responsible for the speed of the body’s metabolism. The thyroid affects every organ and cell in the body—from our hair follicles to our toenails—so most body functions become sluggish if it does not work properly.
In addition to stimulating oxidative metabolism, thyroid hormone promotes the synthesis of protein from amino acids. The body needs protein to replace worn-out cells and make enzymes, which moderate the speed of biochemical reactions in the cells. Thyroid hormone also potentiates the effect of other hormones, is needed for the secretion of sex-activating hormones, and is partially responsible for controlling the rate of nutrient absorption in the gastrointestinal tract.[v]
Hypothyroidism is a disorder that occurs when the thyroid system is underactive in one of two ways: 1) the thyroid gland itself does not produce enough hormone, or 2) the liver, kidney and other tissues do not properly convert the output of the thyroid gland to the active form of thyroid hormone that works in the body’s cells.
A variety of factors can lead to a low thyroid condition. They include excess stress, mineral deficiencies, exposure to toxins, prolonged illness and autoimmune disorders. Many cases of thyroid system dysfunction occur because the immune system is overvigilant and mistakenly attacks the thyroid gland.[vi] If the antibody condition is severe enough, there may be a blockage at the cellular receptors.
An underproduction of thyroid hormone or a faulty conversion process outside of the thyroid gland may affect men and women of all ages, but women and older people seem especially susceptible. Hypothyroidism significantly increases their risk of osteoporosis and cardiovascular disease. A dysfunctional thyroid system accelerates bone loss, and an underproduction of thyroid hormones can alter the body’s cholesterol by decreasing the “good” type (HDL) and increasing the “bad” type (LDL).
Considering the potential consequences, the failure to detect and treat this condition in many people is troubling. In one study of more than 1,100 women, subclinical hypothyroidism was a strong indicator of a risk for atherosclerosis and myocardial infarction in elderly women.[vii]
A FAILURE TO DIAGNOSE
Why do most thyroid conditions remain unrecognized? Because of the method of diagnosis. Conventional medicine tests thyroid functioning almost exclusively through the blood levels of three hormones. They are TSH (thyroid stimulating hormone), total T4, and T3 uptake, which comprise the thyroid panel recommended by the AMA.
A low thyroid condition is easily diagnosed by these conventional blood tests if 1) a patient’s TSH is above the normal reference range, 2) total T4 is below the reference range, or 3) T3 uptake is below the reference range. TSH directs the production of T4, and it is the most sensitive of the three measures. An elevated TSH level means the pituitary gland is directing the thyroid to produce more T4 hormone, but the thyroid is not responding. Thus, a high TSH indirectly reflects a diminished thyroid output. Most testing laboratories put the upper limit of the normal range for TSH at about 4.0 to 4.5 mU/L; any level over that amount would be considered primary hypothyroidism.[viii]
The problem with these conventional measures is that a person’s blood levels may test within the normal reference ranges even when the thyroid system as a whole is seriously underactive. That’s why it is necessary to test the second stage of thyroid functioning by measuring how well the body converts T4 hormone to T3.
The superior measure of this conversion process is “free T3,” not the T3 uptake tested by conventional medicine. Free T3 is the final, active form of thyroid hormone that enters the cells of the body and instructs them to speed up metabolism (the remainder of the hormone is bound to protein). While free T3 is a very small component of the total, it does all of the work at the body’s cellular level. A person’s free T3 may not be functioning optimally even though his or her blood levels of TSH, total T4 and T3 uptake—the conventional measures—are within normal ranges.
Hypothyroidism also can be difficult to diagnose because the condition develops subtly, according to Stephen Langer, M.D., and James Scheer, authors of Solved: The Riddle of Illness. They point to an article in Diagnosis that identified three grades of hypothyroidism:[ix]
- Grade three (subclinical form), with decreased energy and depressed mood.
- Grade two (mild hypothyroidism), with fatigue, dry skin and constipation; blood levels of thyroid hormone usually are still normal.
- Grade one (overt hypothyroidism), with a measurable decrease in circulating thyroid hormone, extreme weakness, dry skin, coarsening of hair, constipation, lethargy, memory impairment, a sensation of cold, slowed speech and weight gain.
Conventional physicians will diagnose central hypothyroidism easily when the thyroid gland itself is diseased enough to become enlarged. This condition, called a goiter, is observable and therefore easy to recognize. In addition, a thyroid nodule occasionally may be palpated. Mainstream doctors also may diagnose hypothyroidism when the hormone levels in the conventional thyroid blood tests are out of range, indicating the thyroid is underactive.
However, millions of people may have a less severe pathology than that recognized by orthodox medicine and still have a suboptimal thyroid system. This nonconventional thyroid condition has been described in the medical literature in three ways: euthyroid sick syndrome (ESS), Wilson’s syndrome, and low T3 syndrome. We hope that more research and information on this condition will expand its acceptance in traditional medicine and lead doctors to consider whether patients are suffering from a suboptimal thyroid condition.
Fortunately, there is a lot that you and an enlightened, complementary physician can do to help diagnose and optimize a faulty thyroid system. First, let’s look at the symptoms of a low thyroid condition.
DO YOU HAVE THESE PROBLEMS?
Dr. Feldman, co-author of this article, says that a diagnosis of an underactive thyroid system is probable when a patient has any of the following hallmark symptoms:
LOW ENERGY. This is a common health problem in American society. Low energy seems to be increasing and affecting younger people more often today than it did a decade ago. Lack of energy was a major issue for many of the participants in Gary Null’s “Anti-Aging Support Groups,” for whom Dr. Feldman served as medical reviewer and analyst. (Many of these people were able to rebalance their thyroid system with health-optimizing protocols.) In addition, approximately 25 percent of the patients in his private practice complain of mild, moderate or severe low-energy conditions.
Low-energy states can be very distressing for their victims. Millions of people may suffer needlessly if the true cause of their energy deficit—such as a thyroid problem—is not identified. This is especially true if people are treated for psychological conditions such as depression when their problem is much more biochemical in nature, caused by a low thyroid condition with or without other hormonal imbalances.
The fatigue caused by a low thyroid condition generally is not significant in the morning but gradually worsens throughout the day or starts later in the day. Conventional medicine recognizes that a thyroid malfunction may result in fatigue of varying degrees, including a profound and persistent exhaustion.[x]
When evaluating low-energy states, it is the pattern of energy and time of day during which the deficit occurs that help define the specific health problem a patient is facing. Consider:
- When low energy is diffuse, occurring throughout the day, statistically there is a probability of a malfunction of the thyroid system.
- Similarly, when the lowest energy is in the evening after 6 p.m., an underactive thyroid system may be the cause.
- However, when low energy is most severe upon awakening, even after restful sleep, and in the early morning, it usually implies a low adrenal state.
- When low energy is most severe at 3 p.m. or 4 p.m., often there is a blood sugar or food allergy problem. This possibility is amplified when the patient craves sugar or binges on sugar or refined carbohydrates, indicating he or she has a glucose instability condition or a hypoglycemic condition.
- When the energy deficit is very severe and occurs throughout the day, there tends to be both an adrenal and thyroid suboptimal state.
With this last pattern, the severity of the low-energy state is related to the poor functioning of both systems at once. Energy problems tend to be proportional—when any two or three of the systems are suboptimal at the same time, your energy level will be worse than with either problem alone. For example, if your adrenals or your thyroid system is faulty and you’re also hypoglycemic, your energy deficit will be more severe.
DEPRESSION. When depression is generalized rather than a reaction to a particular stressor, it is very commonly due to the following factors: malfunctions of the thyroid system, adrenals, or glucose-control mechanism, deficiencies of B12 and B complex vitamins, or food allergies that affect the brain (cerebral allergy). According to some studies, low thyroid is a cause of depression in more than half the people treated for the condition.[xi]
Depression typically is considered a psychological problem, and a diagnosis and treatment as such is valid in cases of obvious reactive depression (the loss of a loved one, etc.). But in many other instances, the so-called depression is actually a defective biochemical state. The thyroid problem may be part of the biochemistry that is draining the body and depleting its energy, causing what is described as depression.
In such cases, the depressive condition is not so much a psychological defect as a biochemical one. Malfunctions of the thyroid system can lead to depression, anxiety, panic attacks and bipolar disorders because this system affects the metabolism of the nervous system.[xii]
Psychiatrists have known for some time that a low level of free T3 is a factor in depression, according to an article by Joseph Mercola, D.O., of the Optimal Wellness Center in Schaumburg, Ill. (www.mercola.com). Often, patients’ antidepressant medication must be augmented with T3 for the depressive state to respond to the pharmaceutical. The action of antidepressants may be better enhanced by a T3/T4 treatment than by T4 alone.[xiii]
Indeed, many depressed hypothyroid patients require a treatment combining T3 and T4 rather than one containing T4 alone, according to a 1993 letter published in the Journal of Clinical Psychiatry by John V. Dommisse, M.D., FRCPC, in response to an article on T3 augmentation of antidepressant treatment. Dr. Dommisse suggests that the blood levels of T4 and T3 in all hypothyroid patients should be brought into the mid- to high-normal ranges. If these levels cannot be achieved with T4-only medication, T3 should be added to the treatment. He explains that T4-only medication is an adequate source of T3 for a certain percentage of patients who convert T4 to T3 at a sufficient rate. But for a substantial proportion of patients, a T3/T4 combination is needed.[xiv],[xv]
It should be noted that the severity of a person’s depression often correlates to the number of malfunctions he or she is experiencing, based on Dr. Feldman’s testing and statistical analysis of many patients. The malfunctions that may be involved in this energy-depleting process are listed in the table below.
In addition to a suboptimal thyroid or adrenal system, the following imbalances may contribute to low energy and “depression”:
- Deficiency of B complex vitamins
- Deficiency of vitamin B12
- Malabsorption condition leading to multiple nutritional deficiencies
- Hypoglycemia or glucose instability syndrome
- Hidden infections (viral, bacterial, parasitic, yeast)
- Cerebral allergy
- Autoimmune conditions
When the energy robbers are corrected, the depression seems to improve in relation to the number of imbalances treated. The more components that are rebalanced, the greater the improvement. For example, if a patient had five problems and two or three of them are addressed, he or she feels better. If all five are fixed, the “depression” lifts.
The good news is that some of these malfunctions are easy to treat. For example, supplements can be taken to balance a B complex or B12 deficiency and to correct a faulty glucose thermostat. This thermostat controls the blood glucose level, and its malfunctioning often correlates to low levels of chromium, zinc and manganese. It’s also easy to test for and eliminate foods that cause a cerebral allergy. As for addressing toxicity, more information on how to eliminate toxins will be presented in Part 2 of this article. For the moment, one useful resource is the video “Detoxification: A Natural Approach,” which can be obtained through this Web site.
OVERWEIGHT. A strong hallmark of a hypothyroid problem is being overweight and having difficulty losing weight even when you reduce your caloric intake and exercise vigorously. This symptom stems from the thyroid’s effect on metabolism: Recall that an underactive thyroid system causes the metabolism to slow down. As a result, you don’t burn as many calories and your body is not burning fat from storage. Because the calorie-burning mechanism is sluggish in hypothyroid conditions, it is difficult to lose weight.
HAIR LOSS. A loss of scalp hair is another symptom, especially when it occurs in women who are under 50 and have no family history of hair loss. The classic picture of a low thyroid state is a loss of hair in the outer one-third of the eyebrows, but some people with the condition have a diffuse loss of scalp hair only and do not lose hair from the eyebrows.
All women with hair loss should have their thyroid system properly evaluated (not just through the conventional thyroid blood tests). In Dr. Feldman’s experience, most women with diffuse head hair loss have a suboptimal thyroid system. Also, these women’s hair may be dry and brittle.
COLD INTOLERANCE. If a person complains of being cold, especially in the hands and feet, a low thyroid condition often is indicated. You may recognize this symptom if you usually require extra clothing, socks, hats, etc. In addition, if your body temperature tests low, a diagnostic method discussed later in this article, a diagnosis of hypothyroidism is likely.
In addition to those five symptoms, other health factors related to a suboptimal thyroid include the following:
IMMUNE SYSTEM PROBLEMS. Whenever a patient has a known, diagnosed imbalance of the immune system, the possibility of impaired thyroid functioning due to an autoimmune process increases statistically. These immune imbalances include lupus, rheumatoid arthritis, severe allergy states, multiple sclerosis, uveitis, scleroderma and Sjögren’s syndrome, among others.
POSTPARTUM DEPRESSION. In many cases, this condition is partly caused by a low thyroid condition. An immune malfunction after the delivery can cause thyroid dysfunction.[xvi]
The development of a fetus requires a complex orchestration of hormones that may put stress on the woman’s entire hormonal apparatus to nurture the baby. This process is an immune stressor and may lead to suboptimal thyroid functioning during or after pregnancy, sometimes occurring weeks or even months after the delivery.
INFERTILITY. Women with no other medical complications that account for infertility, such as anatomical problems with the fallopian tubes or ovaries, may have infertility due to a thyroid system malfunction.
OTHER PROBLEMS IN WOMEN. Two other signs that may be associated with a hypothyroid condition are irregular menstrual cycles and a loss of menstrual periods. These problems reflect an underlying hormonal imbalance of the pituitary/ovarian axis.
Hypothyroidism can cause or aggravate many female problems, including miscarriage, fibrocystic breast disease, ovarian fibroids, cystic ovaries, endometriosis, PMS and menopausal symptoms.[xvii] Severe menopause symptoms can be a reflection of borderline low thyroid conditions.[xviii]
Although the exact relationship between thyroid dysfunction and female problems is not understood, it is known that hypothyroidism is linked to hormonal imbalances, especially an excess of estrogen over progesterone. Too much estrogen is bad for the thyroid because it inhibits thyroid production while progesterone promotes it.[xix]
Researchers at the Mayo Clinic found that many gynecological conditions improved when fatigued hypothyroid women with menstrual problems took thyroid hormone. Excessive blood flow improved in 73%, loss of menstrual cycles in 72%, and deficient menstruation in 55%.[xx]
CANDIDA ALBICANS OVERGROWTH. Recurrent Candida (yeast) infections may lead to an imbalanced immune system. Over time this imbalance can cause an autoimmune process that affects the thyroid as the body mistakenly attacks its own thyroid tissue. Other autoimmune processes may accompany a chronic overgrowth of Candida as well.
UNHEALTHY SKIN. In addition to causing dry skin, hypothyroidism can lead to acne, in part because the circulation is reduced and the skin does not get the blood supply it needs. This deficit means the skin cells are deprived of oxygen and fuel and the waste products of the cells are not properly removed.[xxi]
THE THYROID HORMONAL SYSTEM
People suffering from such symptoms should understand that the thyroid mechanism is not simply a gland that secretes hormones, but rather a system that depends on tissues outside of the thyroid to produce needed hormones and on “receptor” cells in the body. If any part of this system malfunctions—and some problems are easier to detect than others—the thyroid mechanism can become underactive.
As noted, the fundamental hormones of this system are T3, T4, and TSH. When the thyroid gland is functioning optimally, it uses the amino acid tyrosine as well as iodine to produce T4 (thyroxine). However, the major active form of thyroid hormone is T3 (triiodothyronine), and the majority of T3 is produced outside of the thyroid gland itself. T3 is converted from T4 in the peripheral tissues of the liver, lung, kidneys and elsewhere. Free T3 is the final product that influences the functioning of the cells.
With that in mind, the thyroid system may become imbalanced in the following ways:
CENTRAL THYROID DYSREGULATION. The production of thyroid hormones is centrally regulated by the hypothalamus-pituitary-thyroid axis. If your TSH level is elevated or your T4 level is low, the problem lies with the central thyroid mechanism.
Here’s how this mechanism works: The hypothalamus and pituitary stimulate the thyroid function by producing a neurotransmitter called thyrotrophin releasing hormone. This hormone causes the anterior pituitary to release TSH, which directs the production and secretion of T4 by the thyroid gland.
The thyroid hormones operate on a feedback loop. If the level of T4 in the bloodstream is low, more TSH is released in order to boost the thyroid’s production of T4. When T4 levels rise, they provide feedback to the pituitary to slow down TSH secretion. The T4 binds to receptors in the anterior pituitary and thereby prevents the release of TSH.
If this central mechanism is faulty, however, the thyroid may not produce as much T4 as the body needs, resulting in the “low thyroid” condition. Consequently, the TSH level will rise as the pituitary provides louder and louder instructions, so to speak, to produce T4 hormone. In such cases, the TSH is attempting to direct the production of T4, but the mechanism is sluggish and does not respond properly.
TSH is the most sensitive of the three thyroid hormones, and an abnormal level of TSH will be the first to manifest itself when the thyroid system is off balance. Be aware, however, that while TSH is the best of the three conventional blood tests, it provides only an indirect measure of central thyroid functioning.
PERIPHERAL THYROID IMBALANCES. Even when the central regulation is working properly, the thyroid system may still be dysfunctional at the peripheral level, where T4 is converted to T3. If this conversion process does not work properly, a low thyroid condition also may result because the level of free T3 is insufficient. The thyroid effect at the cellular level is inadequate.
You may produce sufficient amounts of T4 but still have the symptoms of underactivity because the conversion aspect of the thyroid mechanism has faltered. Factors that can reduce the conversion of T4 to T3 include a restricted intake of carbohydrates, nutrient shortages, enzyme deficiencies, chronic illness, heavy metal exposure, increased glucocorticoids or high-stress states, and imbalanced estrogens. Stress can inhibit this conversion process by elevating cortisol.
Blood tests of “free T3” can indicate subclinical problems with the conversion of T4 to T3. As noted earlier, the free fraction of a hormone is the active portion that can enter the body’s cells to do its work. The amount of “reverse T3” in a person’s blood is a second way to evaluate the conversion process. Because reverse T3 is an inactive, improper byproduct of the conversion process, an elevated level indicates a malfunction in the peripheral aspect of the thyroid system.
Blood tests of free T4, free T3 and reverse T3 are available from most conventional laboratories and from specialty laboratories such as Great Smokies Diagnostic Laboratory and Quest Diagnostics Nichols Institute (for more information, see “Resources” at the end of Part 2 of this article).
As a fine point, Pharmasan Labs is in the process of correlating T4 and T3 levels in the blood with levels in the saliva. Saliva testing of free T4 and free T3 holds great promise because it would make the diagnostic process more convenient for the patient. Rather than having blood drawn at a medical facility, the patient could provide the saliva sample from home and mail it to the lab for testing. One study shows that saliva is a good marker for the level of free T4 in a person’s body. It correlates well with the amount of free T4 in the blood.[xxii]
THYROID ANTIBODIES. A third malfunction of the thyroid system may occur when the immune system produces antibodies that interfere with thyroid functioning. Two such antibodies are anti-thyroidal peroxidase (anti-TPO) and anti-thyroglobulin (anti-TG).
At high levels, thyroid antibodies may interfere with the ability of thyroid hormones to function and attach to receptor cells. An elevated level of antibodies indicates that an autoimmune process is active. The immune system is imbalanced and is wrongly attacking the body’s own thyroid tissue or other components of the thyroid system. One such autoimmune process is called Hashimoto’s thyroiditis.
Anything that creates stress on the immune system can lead to this misguided attack. Problems such as trauma, dysbiosis and inflammation may cause the levels of anti-TG and anti-TPO in the body to rise. According to Pharmasan Labs, TPO antibodies, which inhibit thyroid hormone synthesis, are positive in 95% of patients with autoimmune thyroiditis and 10% of American adults. Their prevalence in women increases with age.
Antithyroid antibodies may be a concern for women who are trying to conceive as well. In a study of 69 women with a history of early pregnancy loss, fetal death and preeclampsia, the results seemed to confirm an association between thyroid autoimmunity and obstetric complications. The researchers have called for more studies to evaluate the reproductive outcome of women with a history of these three disorders and the presence of antithyroid antibodies.[xxiii]
Many conventional laboratories measure anti-TG and anti-TPO. However, they are not the only antibodies that attack components of the thyroid system. One lab that does an excellent job of testing for more comprehensive antibodies is Specialty Laboratories (see “Resources” at the end of Part 2 of this article for more information).
We believe that an array of more comprehensive tests, such as those described above, may help increase the diagnosis of thyroid conditions by mainstream medicine. Many conventional physicians have not used such tests to date simply because the approach to thyroid functioning presented in this article is not part of their medical model. They overlook the peripheral, T4-to-T3 aspect of the thyroid mechanism.
The conventional model has taught health-care professionals to rely on mainstream blood tests—especially the AMA thyroid panel consisting of thyroid stimulating hormone, total T4 and T3 uptake—in evaluating the thyroid function. We hope this narrow focus will begin to change as the newer tests of free T4, free T3, reverse T3 and comprehensive antithyroid antibodies gain acceptance. As laboratory data better defines specific defects in the peripheral T3 aspects of the thyroid mechanism, conventional physicians may be more willing to recognize and treat the malfunctions.
THE OTHER SIDE: HYPERTHYROIDISM
Before we move on to complementary medicine’s view of low thyroid conditions, readers should be aware that there is another disorder called hyperthyroidism. This condition is the flip side of hypothyroidism: The body produces too much thyroid hormone, and as a result the metabolism speeds up significantly.
The symptoms of hyperthyroidism—often the opposite of those associated with low thyroid—include weight loss, warm, moist skin, feeling keyed up and restless, and feeling hot all the time. Hyperthyroid patients usually have Graves disease, an autoimmune process that creates an overactive output of hormones in reaction to inflammation in the central thyroid mechanism.
In addition, Graves hyperthyroidism may induce protruding eyes, high blood pressure, nervousness, insomnia, increased appetite, bowel hyperactivity, increased sweating and palpitations or arrhythmia. In older people the fast metabolism of hyperthyroidism may heighten conditions such as a weak heart, causing an irregular heart beat or even heart failure.
THE COMPLEMENTARY APPROACH
If your conventional thyroid blood tests are normal, most conventional physicians will tell you that your thyroid is fine no matter how many symptoms of a low thyroid condition you have. But complementary and holistic physicians generally know from experience that you can't count on the usual blood tests to accurately diagnose many cases of an underfunctioning thyroid system.
Complementary physicians usually rely on more comprehensive diagnostic methods to identify thyroid malfunctions: 1) a measurement of the patient’s basal body temperature, which is your temperature upon awakening; 2) an analysis of the patient’s symptoms; and 3) the results of the advanced blood tests measuring free T4 and free T3.
A low basal body temperature is a strong indicator of low thyroid. Researched decades ago by Dr. Broda O. Barnes, the temperature test is still one of the physician’s most important diagnostic tools. For patients who take their waking temperature, here’s how the process works:
- Use an oral thermometer (not the digital kind). Shake it down before going to bed and leave it on your bedside table within easy reach.
- Upon awakening, remain horizontal and move as little as possible as you place the thermometer in your armpit next to your skin. Leave it in place for 10 minutes.
- Record your temperature readings for five consecutive days. Women who still menstruate will get slightly better data on the second, third, fourth and fifth day of menses, but it is not essential to take your temperature only on those days. Males, prepubertal girls and postmenopausal or non-menstruating women may take the basal temperatures any day of the month. Women taking progesterone should not take the hormone the day before and the days that the basal temperatures are taken.
- Discuss the readings with your physician, preferably a complementary or holistic doctor who understands the importance and efficacy of this test. An average temperature between 97.8º F and 98.2º F is considered normal. If the average temperature is below 97.8º F, then the diagnosis of a suboptimal thyroid system—or hypothyroid condition—is likely.
Along with a low temperature, a variety of symptoms strongly suggest the diagnosis of hypothyroidism. These symptoms—some of which were discussed earlier in this article—include the following:
- Low energy
- Fatigue
- Depression and anxiety
- Weight gain and/or difficulty losing weight
- Intolerance to cold
- Hair loss, thinning scalp hair
- Dry skin or hair
- Thin eyebrows on outer one-third
- Thin or brittle nails
- Infertility and menstrual irregularities
- Poor memory or difficulty concentrating
- Constipation
- Slowing of thought processes and reactions
- Slow pulse rate even if you are not a well-trained athlete
- Immune system problems
As a precautionary measure, you can also check yourself for signs of an enlarged or irregular thyroid gland. The American Association of Clinical Endocrinologists encourages patients with menopausal symptoms to take this self-test to determine if they need to see a doctor about their thyroid functioning:[xxiv]
- Focus a hand mirror on your neck, just beneath the Adam’s apple and right above the collarbone.
- Tilt your head back.
- Swallow some water from a glass.
- While swallowing, observe your neck for bulges or protrusions. Repeat a few times to make certain your observation is correct.
- If you detect bulges or protrusions, see your doctor immediately. You may have an enlarged gland or a thyroid nodule.
If you do have a nodule, please keep in mind that the vast majority are not cancerous. The Thyroid Foundation of America reports that probably less than 5 percent contain cancer, and 90 percent of that small group are curable when they are treated properly. An excellent discussion of this topic, titled “Management of a Thyroid Nodule,” is available on the foundation’s Web site at www.tsh.org.
Coming in Part 2: Now that you understand the functioning of the thyroid system and the symptoms of a thyroid problem, read Part 2 of this article for a detailed discussion of how to rebalance a suboptimal thyroid mechanism. You’ll learn about natural protocols and lifestyle changes that assist the thyroid system, as well as the different types of thyroid hormone medications available to those who need them.
[i] Shames, Richard L, M.D., and Shames, Karilee H, R.N., Ph.D. Thyroid Power: Ten Steps to Total Health. New York, NY: HarperCollins Publishers Inc., 2001, p. 4.
[ii] Ridgeway, EC. Hypothyroidism: The Hidden Challenge, monograph. University of Colorado School of Medicine, December 1996 (as cited in Thyroid Power).
[iii] Wood, Lawrence C. Your Thyroid. New York, NY: Ballantine Books, 1995, p. 26 (as cited in Thyroid Power).
[iv] Wilkinson R, M.D., “Thyroid dysfunction and treatment,” CME monograph, Tucson, University of Arizona School of Medicine, 1997 (as cited in Thyroid Power).
[v] Langer, Stephen E, M.D., and Scheer, James F. Solved: The Riddle of Illness, Third Edition. Los Angeles, CA: Keats Publishing, 2000, p. 224.
[vi] Galofre, J.C. et al., “Incidence of different forms of thyroid dysfunction and its degrees in an iodine sufficient area,” Thyroidology 6, no. 2 (1994): 49-54 (as cited in Thyroid Power).
[vii] Hak AE, Pols HA, Visser TJ, Drexhage HA, Hofman A, Witteman JC, “Subclinical hypothyroidism is an independent risk factor for atherosclerosis and myocardial infarction in elderly women: the Rotterdam Study,” Ann Intern Med 2000 Feb 15; 132(4): 270-8.
[viii] Mercola, Joseph, D.O., “Hypothyroidism Part II: Hypothyroidism: sensitive diagnosis and optimized treatment—a review and comprehensive hypothesis,” Optimal Wellness Center, Schaumburg, IL, www.mercola.com.
[ix] Gold MS, Pearsall HR, and Pottash AC, “Hypothyroidism and depression: the causal connection,” Diagnosis Dec 1983: 77-80.
[x] Barsano, CP, Other forms of primary hypothyroidism, The Thyroid: A Fundamental Clinical Text, 6th Edition, L.E. Braverman and R.D. Utiger, eds. Philadelphia, PA: J.B. Lippincott, 1991, p. 956-967 (as cited in Thyroid Power).
[xi] Shames, p. 16
[xii] Null, Gary. The Food-Mood-Body Connection. New York, NY: Seven Stories Press, 2000, p. 324.
[xiii] Mercola, Part II.
[xiv] Dommisse, John V, M.D., FRCPC, Letter, the Journal of Clinical Psychiatry July 1993.
[xv] Cooke RG, Joffe RT and Levitt AJ, “T3 augmentation of antidepressant treatment in T4-replaced thyroid patients,” J Clin Psychi 1992; 53, 1 (Jan): 16-18.
[xvi] Hidaka Y, “Post-partum depression or post-partum thyroiditis?” Department of Laboratory Medicine, Osaka University Medical School. Rinsho Byori; 43, no. 11 Nov 1995: 1107-1109 (as cited in Thyroid Power).
[xvii] Null, Gary and Seaman, Barbara. For Women Only! New York, NY: Seven Stories Press, 1999, p. 533.
[xviii] Feit H, “Thyroid function in the elderly,” Clin Ger Med 4 (1988): 151-161 (as cited in Thyroid Power).
[xix] Null, p. 533.
[xx] The Thyroid Gland. Armour Laboratories, Chicago, 1945, p. 71 (as cited in Solved: The Riddle of Illness).
[xxi] Langer, p. 65-66.
[xxii] Putz Z, Vanuga A and Veleminsky J, “Radioimmunoassay of thyroxine in saliva,” Exp. Clin. Endocrinol. No. 2 1985; Vol. 85:199-203.
[xxiii] Mecacci F, Parretti E, Cioni R, Lucchetti R, Magrini A, La Torre P, Mignosa M, Acanfora L, Mello G, “Thyroid autoimmunity and its association with non-organ-specific antibodies and subclinical alterations of thyroid function in women with a history of pregnancy loss or preeclampsia,” J Reprod Immunol 2000 Feb; 46(1):39-50.
[xxiv] Langer, p. 189.
Incredibly, the first domes date back to people living in the Mediterranean region 4,000 years BC. Since then, artists have created a fascinating variety of them all over the world. Still today, they are an essential part of modern architecture, as shown for example by Calatrava’s spectacular glass dome of the library of the Institute of Law in Zurich, Switzerland.
Unfortunately, most domes do not get the attention they really deserve. One reason is that many buildings, especially churches, are not well illuminated and the works of art can hardly be seen in the semidarkness. Another reason is that some domes, particularly those from the Renaissance and Baroque periods, are crowned by a lantern with separate windows which cause sharp contrasts. Furthermore, in bigger domes the details are far away from the observer on the ground, making it virtually impossible to study the subtle details of paintings. Finally – no surprise! – domes are located above you and looking upwards becomes strenuous for the cervical spine soon. The photographic technique described below helps to overcome some of these difficulties.
I switched to full format photography a few years ago when the Nikon D800 came out. At the same time, Carl Zeiss started to sell their super wide angle lens with a focal distance of 15mm, the Zeiss Distagon T* 15mm f/2.8. With a focus on architecture, this combination is really a dream for me. It gives you razor sharp details I have never seen before.
First, I took pictures of domes as I did of other parts of buildings. They were not really in the center of my interest. This changed when I had a close look at some photos I took more or less accidentally, especially when changing the exposure in Lightroom. I became fascinated by the geometry of domes, their colors and their decoration. So I decided to focus on this subject a little bit closer. I added a notebook to my equipment on which ControlMyNikon serves as remote control software.
The setup is simple: Without a tripod, I put the camera directly on the floor. The display is protected by a thin rubber mat. The lens is positioned directly under the middle of the dome and points directly to its center. As I cannot use the viewfinder or the LCD anymore with this setup, the camera is linked to the computer with a standard USB cable. The 15-inch monitor gives me a reasonably sized image. LiveView allows me to find the exact center of the dome, which can be a challenging task. The zoom function is extremely helpful in this context. Most of the images are taken with an aperture of f/8 or f/10. ISO is always set to 50. I take 16 pictures with shutter speeds ranging from 1/800 to 30 seconds. These settings are saved in a profile. I start the program with a click of a button and then the computer and the camera do their work. The photos are developed in Lightroom, then transferred to Photomatix as HDR software and finally prepared for web presentation in Photoshop by exactly following the workflow Nasim describes on this website.
Some situations do not allow taking photos in the way described above, mainly for two reasons: you must be really fast (e.g. in a church which is open to the public only for the mass, so you don’t have enough time to install your computer after the priest has finished his service and before the building is locked again a few minutes later), or your setup is classified as professional equipment (e.g. by staff members of a museum who think that you take photos for commercial purposes). In these cases, I use the Haehnel Giga T Pro II (B&H) wireless remote control and the built-in bracketing function of the D800 with 9 images. This requires some experience, because you have no visual control and especially in larger buildings finding the exact center of the dome might be even more difficult than with Live View on the notebook.
I am a professional radiation oncologist, not a photographer. But medical science means travelling from time to time. Furthermore, my wife and my three kids really like being on tour, so I see a lot of places. Before I start, I have a close look at good guidebooks, do some web search and scan sites like 500px.com. A perfect preparation is essential! Quite often you can identify problematic locations in advance. St. Peter’s Basilica in Rome has definitely a perfect dome, but unfortunately, the Pope’s throne is situated where the camera should be positioned… Taking photos is usually not a problem in Roman Catholic churches which are generally open to the public. With Jewish synagogues the situation is completely different: without hardly any exception, taking photos is strictly forbidden, mainly for security reasons. In other locations, like courts, written requests and formal permissions are mandatory. However, despite good research in advance, you don’t get every shot you long for: some churches are open only once a week for two hours, some domes are covered because they are getting restored etc. Nevertheless, I am able to realize about 80% of the planned projects (a good rate compared to wildlife photography, isn’t it?).
The above technique for photographing interior domes has been quite effective for capturing the beauty of popular architectural landmarks. Setting the camera on the floor allows for the widest perspective without having to worry about setting up and aligning a tripod. Long exposure times brighten the darkness. HDR image fusion is effective in avoiding overexposure and/or underexposure of relevant parts of the final image. The combination of a full format sensor with 36 MP and a tack sharp lens reveals the finest of details – when scrolling through images on a monitor at 100% or at even higher magnifications, I am always fascinated by a perspective comparable with an artist’s or a restorer’s point of view. And, last but not least, the presentation on a monitor or as a print offers an ergonomic way of looking at the photo. It’s much more comfortable to admire works of art without a stiff neck.
A selection of photos from this series will be shown at the Photokina show in Cologne, Germany, September 16th-21st, 2014. For this purpose, the images are face-mounted to coated, highly transparent museum glass which gives them a crisp and vibrant look with an almost three-dimensional depth effect, as shown in this PDF document.
This guest post was written by Prof. Dr. Johannes Lutterbach, a professional radiation oncologist based out of Singen, Germany. Please visit his 500px page for more examples of stunning photographs of interior domes.
In the last months and years we have seen multiple DDoS attacks based on amplification techniques (DNS, NTP, Chargen, SSDP).
A new amplification attack was spotted in the last week of February (25th – 27th of February).
It is, by far, the strongest amplification attack we have seen, and it is based on the Memcached protocol running on UDP port 11211.
Sources at CloudFlare state the attack reached 257Gbps.
Why the Memcached Protocol?
The answer is simple: it supports UDP, which is stateless (a prerequisite for amplification attacks), it lacks any form of authentication, and it provides an excellent amplification ratio (the ratio between the size of the response and the size of the trigger packet).
The amplification ratio in this attack was around 10,000x, but the protocol itself is capable of up to 51,200x.
The attack statistics from CloudFlare show UDP datagrams of 1400 bytes. The packet rate peaked at 23 Mpps, which works out to the reported total of 257 Gbps of bandwidth (23,000,000 packets/s x 1400 bytes x 8 bits ≈ 257 Gbps). And that is a lot: it can cause very serious outages.
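As a quick defensive check, you can verify whether one of your own memcached instances answers over UDP at all, and could therefore be abused as a reflector. This is a minimal sketch; the host name is a placeholder, and the leading 8 bytes are the memcached UDP frame header placed in front of a plain "stats" command:

echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -u -w1 memcached.example.internal 11211

If the command prints a flood of statistics back, the instance is reachable over UDP and should be locked down as described below.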
How does an amplification attack work and how it can be prevented?
To successfully launch an amplification attack you need 3 components:
- The capability to spoof IP packets, meaning access to a high-bandwidth pipe at an ISP that does not enforce anti-spoofing filtering
- An application/protocol that is amplification-friendly: UDP-based, with no authentication, and allowing large responses to be generated from small requests
- Reflector servers running a suitable protocol; these are servers that are reachable from the Internet and will respond to the requests
How does the attack work?
The attackers send a large number of very small requests from a high-bandwidth pipe behind ISP(s) that allow IP spoofing, destined to a large list of publicly accessible application servers. The attacker spoofs the source IP on all these requests to be the target's public IP address. All the servers then respond to the requests with much larger packets, unwittingly directing all that traffic towards the unsuspecting target. The idea is to either cripple the target server/device or congest its Internet pipe, both causing a Denial of Service.
How can Amp Attacks be prevented?
If any of the three components outlined above is not available, then there is no way to perform a successful Amplification attack.
Simple steps can make a big difference.
- ISPs should always adhere to strict anti-spoofing rules and allow outbound traffic only from source addresses belonging to their own IP ranges.
- Developers should think about security when creating new applications and protocols. UDP should be avoided unless low latency is needed, and if UDP is used, the protocol should have some form of authentication and should never allow a reply-to-request ratio bigger than 1, meaning all replies should be smaller than or equal to the requests that generate them.
- Administrators should correctly “firewall” their servers and allow access to services only to those who need them, not to the whole Internet. Certain types of responses can also be blocked within the application or at the firewall level; see the sketch after this list.
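For memcached specifically, a minimal hardening sketch could look like the following. This assumes a typical Linux deployment with a Debian-style /etc/memcached.conf and uses 10.0.0.0/8 as a stand-in for your trusted internal range; newer memcached releases already ship with UDP disabled by default.

# /etc/memcached.conf - disable the UDP listener entirely
-U 0

# If the cache is only used locally, also bind it to localhost
-l 127.0.0.1

# Otherwise, drop UDP 11211 from untrusted sources at the firewall
iptables -A INPUT -p udp --dport 11211 ! -s 10.0.0.0/8 -j DROP

Either measure removes the server from the pool of usable reflectors, while internal clients can keep talking to it over TCP.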
Malware is evolving constantly. The threat landscape is so dynamic that yesterday's news is not news today. The malware business is a full-blown industry that can easily rival the IT security industry in size.
Recent major security breaches:
NiceHash, the largest Bitcoin mining marketplace, has been hacked, which resulted in the theft of more than 4,700 Bitcoins worth over $57 million at the time of the breach – more than $70 million now. The breach is reported to have happened via a vulnerability on their website.

TeamViewer vulnerability – a critical vulnerability was discovered in the software that could allow a user sharing a desktop session to gain complete control of the other party's PC without permission.

In addition, by using naked inline hooking and direct memory modification, the proof of concept allows users to take control of the mouse without altering settings and permissions.
Uber – Uber's October 2016 data breach affected some 2.7 million UK users, it has now been revealed. Uber did not disclose the breach until now and paid the attackers a ransom of 100,000 USD. Lawsuits are expected to follow. Information held by a third-party cloud service provider used by Uber was accessed by the two hackers.

PayPal subsidiary breach – identity theft affecting 1.6 million customers. PayPal Holdings Inc. said that a review of its recently acquired company TIO Networks showed evidence of unauthorized access to the company's network, including some confidential parts where the personal information of TIO's customers was stored.

Numerous previously unidentified security vulnerabilities were found in the platform (bugs that lead to security-related weaknesses). Evidence of a breach was discovered, and forensics are under way.
Equifax – the breach exposed 15.2 million UK records (and 145 million US records). The bad guys used a known vulnerability in an Internet-accessible service for the initial penetration.

Recent Apple root vulnerability – any Mac running macOS High Sierra 10.13.1 or the 10.13.2 beta was vulnerable. No real exploit was needed: you simply typed "root" as the username, left the password empty, and after pressing enter a few times you were logged in with root rights. The cause was a logic error in the validation of credentials – simply put, a bug.
Making malware has become easier than ever. The malware development process does not differ much from any other software development: people use publicly available sources for much of the code and combine the pieces to suit their own purposes. Many of the bad guys also release the code for their creations, which can later be changed and further modified (for example Petya and NotPetya). Even code stolen from government cyber agencies is now used in modern malware (for example, the EternalBlue exploit is used in multiple malware families as an effective means of horizontal spread – notably in WannaCry).

Another typical trend is for malware to be modular: after the initial infection it installs and runs multiple components on the infected host in a specific order.
1st stage – the initial infection. The usual methods are exploiting an unpatched vulnerability in a running service or, in the case of more advanced malware, a zero-day vulnerability. An example is the EternalBlue exploit of the SMBv1 service. The exploit is usually delivered over the Internet against accessible services or, once inside the organization, horizontally within its internal networks. This stage ends with temporary access to the system and dropping the malware in question.

2nd stage – privilege escalation. The malware tries to gather credentials from the infected device in different ways: cracking the files on the system that hold the accounts, locating account information on the local drives, or even brute-forcing credentials. These credentials are then leveraged either for privilege escalation on that machine or for accessing and infecting other, similar machines on the network.

3rd stage – installing a backdoor, making sure the access is permanent.

4th stage – doing the job: downloading all the components necessary to finish the job. If it is a crypto virus, it will download the tools to encrypt the sensitive files, change the desktop or download an application to show the user the ransom note, a tool to clean up keys and traces of the encryption, and so on.

5th stage – spread. This can again be done by exploiting vulnerable services within the organization, or by leveraging any credentials discovered during the privilege-escalation stage and using legitimate sysadmin management channels such as WMI and PsExec. Sometimes the spread happens before or simultaneously with the 4th stage so as not to alert the organization before multiple systems have been infected.
Types of malware:
It is very hard to categorize malware these days. Traditional classifications such as virus, worm, trojan and backdoor do not really cut it anymore, as most modern malware shares the features of all of them (again take WannaCry: it is a virus, it is a worm because it spreads itself, it is a backdoor because it installs a hidden unauthorized way into the compromised system, and on top of that it performs encryption).
Ransomware – attacks aimed at making money by forcing victims to pay for accessing again their personal files
DDoS attacks – attacks aimed at crippling or disabling services at the victim
Attacks aimed at stealing sensitive information – attacks aimed at spying on users and gathering sensitive data – credentials, S/N, banking details, impersonating info (DOB etc.), private communications etc
Zombie/Botnet – attacks that rely on the collective resources of multiple compromised hosts managed by a central C&C (command and control). They can be used for multiple purposes: DDoS, spam relay, or stealing sensitive information from users.

APT attacks – Advanced Persistent Threats: specially crafted attacks, usually used in nation-state cyber activities. An example is the attack against the Iranian nuclear program (Stuxnet).

IoT-related attacks – again these blur with the other categories, as the compromised IoT devices are normally used for other kinds of attacks (DDoS). Such attacks are very common these days: IoT devices are cheap, network-connected devices that were not designed with security in mind. The Mirai botnet was a shining example of how powerful attacks can be executed using compromised IoT devices (the Dyn case). Furthermore, the number of IoT devices will continue to grow.

Mobile devices – attacks specific to mobile devices. The most dangerous ones are compromised apps that slip under the radar and give away sensitive information from the smartphone (identity theft, selling personal info to ad companies, or stealing financial data such as credit card details). There is no such thing as a free app: they harvest data from you and monetize it, often in illegal ways.

Phishing / spear phishing – becoming more and more popular; bad actors now put in the effort to get to know the victim so they can deliver the malicious content in a shape and form that is interesting to the target.
Some top Cyber Security Trends:
- Fewer security breaches are reported globally (due to more investment in IT security), but each breach has more impact.
- More time is needed to detect breaches (the average was 80.6 days in 2016 and is 92.2 days in 2017)
- Cybercrime damage costs are predicted to skyrocket to 6 trillion USD within the next 3 years (by 2021)
- Successful phishing and ransomware attacks are climbing
- Global ransomware damage costs are estimated to exceed 5 billion USD by the end of 2017
Data was gathered from the CSO 2017 Cyber Security report (csoonline.com)
Summary of the evolution of Security Controls
- Intrusion Prevention (Advanced Network Threat Detection) becomes a must
Advanced IPS systems have replaced traditional stateful firewalls. They incorporate multiple security technologies (signatures, behavior analytics, heuristics, sandboxing, central intelligence feeds, etc.) to successfully detect intrusion events and malware.
- Logging and Alerting platforms more important than ever
Logging and alerting are hugely important for every organization, both to proactively secure the network and, in case of a breach, to reactively perform forensics.
- Data Loss Prevention is gaining momentum
DLP is becoming more popular, as numerous breaches this year were connected to leaked sensitive information (identity theft in the Equifax and Uber cases).
- Endpoint security/malware is again in the front lines of combating malware
The focus of security has shifted in recent years from the network to the endpoint. Network and endpoint security controls should collaborate to create a strong security posture for your organization.
- Systemwide threat defense is becoming necessary to adequately protect your organization
Security has become closely connected to intelligence. All major security vendors siphon off as much data from the Internet as they can, so they can filter through it in an effort to find zero-day exploits first and be the first to provide adequate protection for their customers. All parts of the network infrastructure can be used as sensors that deliver intelligence data to a centralized place for analysis (big data).
We are experiencing a new phase in our vision of network security. There is currently no quick-fix solution, no 100%-proof network security protection/prevention tool or product. There is always zero-day or purpose-built (very focused, low-spread) APT malware that current vendors are unable to detect at the time of the breach.
Hence total prevention is a myth.
Most current network security solutions offer only point-in-time detection/prevention: they inspect traffic as it passes through the firewall, and if they deem it clean or unknown at that exact moment, they allow it and forget about it. That can lead to malware passing through and going undetected for long periods of time. All vendors rely on intercepting the C&C communication to the botnet servers, but not all malware uses such a centralized operating model, so this cannot be considered a proven detection method. That is why most vendors apply their own sandboxing solution: all files of unknown type are sent to the cloud, where they are detonated in a controlled environment and the result of their execution is judged malicious or not by machines, or sometimes humans. Upon discovery of malicious behavior, the file is marked as malware and an update is pushed to all the vendor's appliances so they can intercept and drop such files. That process, however, takes time (typically more than 8 hours) and usually stops more than 96% of the malware's spread (depending on how quickly the vendor discovers the file is malicious and how quickly the update goes out); that percentage was long deemed high enough for most companies.
What about that 4% though? I am sure any business owner would not like to be in this position and would like greater protection and value for their money. When a mere 4% can cause 100% of your security problems, you’re not protected.
Cisco is the only vendor in the NGFW market that currently also sets its sights on the retrospective side of network security, the so-called after-the-attack phase. Cisco uses the combination of Firepower and AMP, for both network and endpoint, to provide threat context and to pinpoint the progress and spread of malware back through time, so you know exactly when and how the malware moved in your network and which hosts were infected, and can immediately deploy mitigation: first restrict the malware, then block its effects, and finally remove the malware that has already breached your network. Without this continuous analysis, an attack can run rampant on the network, and it becomes extremely difficult to determine the scope of the outbreak and the root cause, or to provide a timely, adequate response. Here is an example of such an event and how Firepower and AMP deal with it.
The following four simple steps show how Firepower and AMP handle zero-day malware files (a simplified sketch of this tracking logic follows the list):
- An unknown file is downloaded to a client IP (188.8.131.52, for example) over HTTP with Firefox, and the file is allowed to reach the endpoint. The unknown file is sent to the cloud to be detonated and given a verdict.
- Firepower tracks the movement/copying of the file within the network, so it sees the file being propagated via any protocol at any time. For example, the file gets copied to another host, 184.108.40.206, via SMB at 12:41 AM on 1 December 2016.
- Within 30 minutes the same file is replicated to five more devices within the internal range, all via SMB. Firepower now has a map of the file's trajectory, with the hosts involved and the timing of each movement.
- Two hours after the file was first seen, the Cisco Security Intelligence Cloud reaches the verdict that the file is in fact malicious. From then on, all Cisco AMP- and Firepower-enabled devices will drop that file on sight and alarm/log. Here comes the difference between Cisco and other vendors, namely the retrospective part: in our example, all future transfers of the file are blocked and the file itself is quarantined on all endpoints that hold it (requires AMP for Endpoints); moreover, administrators can use the trajectory map to verify that the malicious file has been quarantined/removed and that the hosts have been remediated.
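To make the retrospective logic concrete, here is a minimal sketch of the kind of bookkeeping described above: record every transfer of a still-unknown file, then, once the cloud verdict flips to malicious, use the recorded trajectory to list every host that needs remediation. This is an illustrative toy in Python, not the actual Firepower/AMP API; the class, hashes and IP addresses are invented for the example.

```python
# Toy sketch of retrospective file tracking (not the Cisco API).
from collections import defaultdict
from datetime import datetime

class FileTrajectoryTracker:
    def __init__(self):
        self.transfers = defaultdict(list)  # sha256 -> list of (time, src, dst, protocol)
        self.verdicts = {}                  # sha256 -> "unknown" | "clean" | "malicious"

    def record_transfer(self, sha256, src_ip, dst_ip, protocol, when):
        # Log the movement even though the file's verdict is still unknown.
        self.verdicts.setdefault(sha256, "unknown")
        self.transfers[sha256].append((when, src_ip, dst_ip, protocol))

    def update_verdict(self, sha256, verdict):
        # Called when the sandbox/cloud reaches a verdict, possibly hours later.
        self.verdicts[sha256] = verdict
        if verdict == "malicious":
            return self.affected_hosts(sha256)  # hosts to quarantine/remediate
        return set()

    def affected_hosts(self, sha256):
        hosts = set()
        for _, src, dst, _ in self.transfers[sha256]:
            hosts.update((src, dst))
        return hosts

# Hypothetical hash and internal IPs, mirroring the steps above:
tracker = FileTrajectoryTracker()
tracker.record_transfer("deadbeef", "10.0.0.5", "10.0.0.9", "HTTP", datetime(2016, 12, 1, 0, 30))
tracker.record_transfer("deadbeef", "10.0.0.9", "10.0.0.12", "SMB", datetime(2016, 12, 1, 0, 41))
to_remediate = tracker.update_verdict("deadbeef", "malicious")
print(sorted(to_remediate))  # every host that ever touched the file
```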
APT – Advanced Persistent Threats
C&C – Command and Control
NGFW – Next-Generation Firewall
The EU should impose a special Europe-wide tax on meat consumption to help save the planet
Last updated: April 5, 2019
We on team proposition like the taste of meat. But we cannot deny that excessive meat consumption is a threat to the planet.
In fact we are going to show that there is ample scientific evidence to support this fear. Meat production increases global warming, abuses the water supply and leads to a loss of biodiversity.
To address the harms of consuming meat a tax needs to be imposed on it. Let us first outline how the EU tax on meat would work. We propose that the EU mandate a minimum tax on meat consumption and that the EU member states be responsible for enacting the tax. The tax would be an excise tax, similar to the excise taxes on alcohol and tobacco. This system would be similar to the set-up of excise taxes on gasoline, where the EU has also mandated a minimum tax for its member states. We will in our arguments explain why this kind of tax is the appropriate remedy to the issue.
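As a rough illustration of the mechanics (not part of the proposal itself), the sketch below shows how a per-kilogram excise with an EU-mandated floor would feed into the shelf price; the producer price, excise rates and VAT rate are hypothetical placeholders.

```python
# Illustrative only: how a per-kilogram excise with an EU-mandated minimum
# would stack onto the shelf price. All rates here are hypothetical.
def shelf_price(producer_price_per_kg, national_excise_per_kg,
                eu_minimum_excise_per_kg=0.50, vat_rate=0.20):
    excise = max(national_excise_per_kg, eu_minimum_excise_per_kg)  # states may set a higher rate
    pre_vat = producer_price_per_kg + excise                        # excise enters the VAT base
    return round(pre_vat * (1 + vat_rate), 2)

print(shelf_price(8.00, 0.30))  # EU floor applies        -> 10.20
print(shelf_price(8.00, 1.00))  # higher national excise  -> 10.80
```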
Excessive meat consumption threatens the planet by contributing to global warming
Greenhouse gases. The UN's Food and Agriculture Organization says livestock production is one of the major causes of environmental problems, including global warming, land degradation, air and water pollution, and loss of biodiversity. Using a methodology that considers the entire commodity chain, the FAO estimates that livestock are responsible for 18 percent of greenhouse gas emissions, a bigger share than that of transport.
There are three mechanisms through which meat production contributes to global warming: deforestation, manure and flatulence.
Deforestation. Deforestation happens because of increasing demand for meat. As people become richer, they tend to (be able to) buy more meat products. For example, pork imports to China have skyrocketed, rising more than 900% within the first four months of 2008.[[http://www.time.com/time/health/article/0,8599,1839995,00.html]] This means that more livestock has to be raised to keep up with demand, which in turn means the cattle require more land to be raised on. Already today grazing occupies 26 percent of the Earth's terrestrial surface, and this number is set to grow even bigger. To get more farmland, forests are cut down, and as the trees are felled the CO2 they have absorbed is released back into the atmosphere.
Experts believe that the way to tackle these issues is to cut down on meat consumption. Dr Rajendra Pachauri, chair of the United Nations Intergovernmental Panel on Climate Change, which in 2007 earned a joint share of the Nobel Peace Prize, has suggested that people should reduce their meat consumption to help combat global warming. [[http://www.guardian.co.uk/environment/2008/sep/07/food.foodanddrink]] Gidon Eshel, a geophysicist at the Bard Center, and Pamela A. Martin, an assistant professor of geophysics at the University of Chicago, calculated that if Americans were to reduce meat consumption by just 20 percent it would be as if we all switched from a standard sedan (a Camry, say) to the ultra-efficient Prius. Similarly, a study last year by the National Institute of Livestock and Grassland Science in Japan estimated that 2.2 pounds of beef is responsible for the equivalent amount of carbon dioxide emitted by the average European car every 155 miles, and burns enough energy to light a 100-watt bulb for nearly 20 days. [[http://www.nytimes.com/2008/01/27/weekinreview/27bittman.html?_r=2&ex=1202187600&en=a1087de0ce76df87&ei=5070]]
The second source of greenhouse gases is manure. Animal manure generates nitrous oxide, which is firstly a very strong greenhouse gas, some 300 times stronger than CO2, and secondly one of the biggest ozone-depleting substances.
The third source is flatulence: as livestock digest grass, they produce flatulence consisting of methane, carbon dioxide, nitrogen and other gases, up to 200 liters a day per animal. As there are already about 1.3 billion cows and 1 billion sheep in the world, the overall amount of gas produced is considerable, especially given that methane has 23 times the warming impact of CO2. Under this argument we have shown that livestock produce significant amounts of greenhouse gases, which we believe directly contribute to global climate change.
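For a sense of scale, the following back-of-envelope calculation combines the figures quoted above (1.3 billion cattle, up to 200 liters of gas a day, methane 23 times as warming as CO2); the methane share of that gas and its density are our own assumptions, so the result is only an order-of-magnitude illustration.

```python
# Back-of-envelope estimate using the figures quoted above; the methane share
# of the gas volume and its density are assumptions for illustration only.
cattle            = 1.3e9      # head (from the text)
litres_per_day    = 200        # total gas per animal per day (from the text)
methane_fraction  = 0.5        # assumed share of the gas that is methane
ch4_density_kg_m3 = 0.72       # approximate density of methane at ambient conditions
gwp_ch4           = 23         # warming impact relative to CO2 (from the text)

m3_per_year   = cattle * litres_per_day * 365 / 1000
ch4_tonnes    = m3_per_year * methane_fraction * ch4_density_kg_m3 / 1000
co2_eq_tonnes = ch4_tonnes * gwp_ch4
print(f"~{co2_eq_tonnes / 1e9:.1f} billion tonnes CO2-equivalent per year")
# -> roughly 0.8 billion tonnes CO2e per year from cattle flatulence alone,
#    before counting manure or deforestation
```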
Livestock production threatens the world’s water supply
Unsustainable consumption. Livestock production accounts for more than 8 percent of global human water use, mainly for the irrigation of feed crops. In Botswana, livestock accounts for 23% of the country’s water use.[[ftp://ftp.fao.org/docrep/fao/010/a0701e/a0701e04.pdf]] As meat consumption increases globally, the amount of water needed for raising livestock will not be sustainable, according to the organizers of World Water Week. [[http://news.bbc.co.uk/2/hi/science/nature/3559542.stm]]
Pollution. Evidence suggests livestock production is the largest sectoral source of water pollutants, principally animal wastes, antibiotics, hormones, chemicals from tanneries, fertilizers and pesticides used for feed crops, and sediments from eroded pastures. While global figures are unavailable, it is estimated that in the USA livestock and feed crop agriculture are responsible for 37 percent of pesticide use, 50 percent of antibiotic use, and a third of the nitrogen and phosphorus loads in freshwater resources. The sector also generates almost two-thirds of anthropogenic ammonia, which contributes significantly to acid rain and acidification of ecosystems. [[http://www.fao.org/ag/magazine/0612sp1.htm]]
Growing livestock wastes precious grain.
First off: growing livestock wastes grain. Currently more grain is used to feed animals than to feed humans. If the grain used to raise pork, for example, were used to feed humans directly, more people would be fed, because swine use much of the energy in their feed to breathe, keep themselves warm and so on, all of which is energy lost to us.
To explain why this inefficiency is important enough to justify limiting it, we would like to consider two things: firstly, the global population is constantly rising, and secondly, some of that population is already living in a food crisis.
We say that, from a utilitarian point of view, if some people can get more food simply because other people change what they eat, then we should make the latter change what they eat. The people eating meat today will still have their stomachs full of good food when they stop eating meat, while their stopping will allow other people, who are currently starving, access to food they do not have now. Substituting one type of food for another gives us more food overall, which is good because it lessens the suffering of those who are currently starving.
The cause of food shortages in the developing world isn't insufficient capacity in the West. It is a detestable fact that there are food surpluses in Europe while children in Africa and elsewhere go hungry. Making our grain mountains higher or our milk lakes deeper won't of itself solve this problem.
Excessive meat production threatens the world’s biodiversity
The sheer quantity of animals being raised for human consumption also poses a threat to the Earth's biodiversity. Livestock account for about 20 percent of the total terrestrial animal biomass, and the land area they now occupy was once habitat for wildlife. In 306 of the 825 terrestrial eco-regions identified by the Worldwide Fund for Nature, livestock are identified as "a current threat", while 23 of Conservation International's 35 "global hotspots for biodiversity" - characterized by serious levels of habitat loss - are affected by livestock production. [[http://www.fao.org/ag/magazine/0612sp1.htm]]
To recap, we have presented ample scientific evidence that meat production is a threat to the planet: it increases global warming, abuses the water supply and leads to a loss of biodiversity. For further documentation we recommend the UN's report "Livestock's Long Shadow", available here: [[http://www.fao.org/docrep/010/a0701e/a0701e00.HTM]].
That report, as well as the scientists and experts working in this field, advocates curbing global meat consumption to battle this threat. We propose that one of the ways to reduce this threat is for the EU to impose a tax on meat consumption.
A special EU wide tax on meat consumption would reduce the threat to the planet
The EU has made a broad commitment to cut its greenhouse gas emissions by 20% by 2020. Responsibility for how to reach this goal mostly falls on the EU member states. However, in some cases, for example the phasing out of incandescent light bulbs or the aforementioned gasoline taxes, the EU has also mandated steps that member countries have to take [[http://www.nytimes.com/2009/09/01/business/energy-environment/01iht-bulb.html]]. We believe meat consumption is one of the areas where a concrete mandate for action is needed, because it is a field where we need to influence all consumers’ behavior. In an EU-wide free market it would be impossible for individual member states to enact a tax on meat consumption on their own, as this would put that state’s meat producers at a competitive disadvantage. Therefore an EU-wide tax is needed.
An EU-wide tax would also send a helpful message to the rest of the world, just as the commitment to cut greenhouse gas emissions has. The EU, as one of the more developed regions in the world, needs to step up and take responsibility for combating the threats to the planet, as we have the resources to do so. The president of the European Commission, José Manuel Barroso, said as much recently, pointing out that the developed world needs to step up and offer help to the developing world. [[http://www.reuters.com/article/GCA-GreenBusiness/idUSTRE58J1XZ20090920]]
Finally, let us point out that we do not believe an EU-wide tax would solve the problem entirely, but it would be a first step in a process moving towards improvement. So it won't be enough for the opposition to point out that the problem still remains; they need to show how this EU-wide tax would make the situation worse. We also believe that this plan is not mutually exclusive with consumer education campaigns and other government actions that would help combat excessive meat consumption. On the contrary, we believe the funds raised by the tax would help governments organize such activities. For the opposition to win this debate they will need to prove that
a) Meat consumption is not a threat to the planet or
b) The EU wide tax would not decrease meat consumption or
c) It will have negative effects that outweigh the positives that we have outlined in our case.
We hope they'll have fun doing that and look forward to their replies.
‘For the opposition to win… they will either need to prove that a) Meat consumption is not a threat to the planet or b) The EU wide tax would not decrease meat consumption or c) It will have more negative effects that outweigh the positives that we have outlined in our case.’
Unfortunately they forgot to mention option d) Opps will need to prove that much more efficient means exist to achieve Prop’s aims.
We choose option d.
Thus, we approve of the aim of the proposition to save the planet (it’s where we keep all our stuff) but we strongly disapprove of the means. There are numerous more direct and efficient actions the EU could take to better solve the problems Props identify, and they don’t have any of the negative consequences their prescription entails.
How do we know this? Well, to make their case, and not being experts in this field, Props have quite sensibly turned to those who are. In so doing they were very fortunate to discover a recent comprehensive report on precisely this topic published by, of all bodies, the UN itself – ‘Livestock’s Long Shadow’ [their ref] – which they commended to us. We found it showed, in a balanced and persuasive manner, the damaging impact of the meat industry on climate change, water supply and biodiversity, just as the Props reported. (Indeed, where Props depart from the UN case, in point three, their argument temporarily breaks down, as we have shown.)
However, having been satisfied to accept the UN's presentation of the case against the meat industry in exactly the terms it sets out, they then choose to depart from this august body's proposed solutions. Nowhere in this 300-page report is a consumption tax even mentioned.
Instead, the very same report Props commend to us, (and on which they lean so heavily in presenting the environmental damage caused by meat farming), contains a carefully calibrated set of answers both technical and strategic, designed to mitigate these problems on a case by case basis.
To give a sense of how thorough these technical solutions are, listed here are some of the measures meat farmers are exhorted to take to successfully reduce greenhouse gas emissions: 'sequestering carbon and mitigating CO2 emissions', 'reversing soil organic carbon losses from degraded pastures', 'reducing CH4 emissions from enteric fermentation through improved efficiency and diets', 'mitigating CH4 emissions through improved manure management and biogas', 'deploying technical options for mitigating N2O emissions and deploying NH3 volatilization.' [Their ref. p115-123]
Each of the problems the report (and Proposition) identify is dealt with on a technical level in this way, based on the central conclusion that 'resource efficiency is the key to shrinking livestock's long shadow'. [Their ref. p 276] We are happy to accept the UN's analysis and to assume these technical solutions are well-judged.
Finally, the report recommends a strategic framework to enable these technical solutions to be put into effect. It is based on two interlocking principles.
1. 'Prices of land, water and feed resources used for livestock production do not reflect true scarcities.' According to the UN, 'this leads to an overuse of these resources by the livestock sector and to major inefficiencies in the production process. Any future policy to protect the environment will, therefore have to introduce adequate market pricing for the main INPUTS'. (Our capitals)
2. 'Environmental externalities, both negative and positive, need to be explicitly factored into the policy framework, through the application of the “provider gets – POLLUTER PAYS” principle'. (Our capitals) [Their ref. p276-277]
We will elaborate on the superiority of the UN's approach over the one advocated by the Proposition in our subsequent more detailed critique.
However for now we leave you with the following question: is it possible that in place of the remedy proposed by dedicated UN scientists, the Props have been able in the time it took them to read the report, to discover a single magic bullet that optimally hits all these targets at once, but which nevertheless eluded the experts? Or is their prescription, as we will show, a blunt instrument that will do more damage than it will good?
This is the battleground on which this debate will be fought, in our view.
We want to start with the second one. One goal of taxation is to re-price certain goods so that “all social costs and benefits of production or consumption of a particular good are reflected in the market price” (same ref.). This is, for example, one reason why tobacco is taxed: by increasing the price of tobacco products we want to discourage people from smoking; and if they do not stop smoking, they will at least be paying money to the government, which uses some of it to support a health service that treats people for illnesses caused by smoking (like cancer).
In the case of meat, the price that consumers currently pay does not reflect the damage done to the environment as a result of their consumption practices. We see this as a harm.
As the opposition points out, it is true that it is easier to tax alcohol and tobacco, because it is easier to see the harms they cause: a violent drunk person is more immediately visible to us than loss of biodiversity or global warming. The fact that these harms are not evident to people is not an argument that nothing should be done.
Moreover, we have pointed out that heavy reliance on meat as a source of food is simply inefficient. It takes about 7 kg of grain to produce 1 kg of beef, yet the nutritional value of 1 kg of beef is less than that of the 7 kg of grain [[http://www.worldwatch.org/node/1626]]. If any other industry were so wastefully inefficient it would be taxed to death (like a car manufacturer using 7 kg of aluminum to produce 1 kg of car).
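A quick worked comparison of the energy behind that 7:1 figure is sketched below; the per-kilogram calorie values are approximate reference numbers we have assumed for illustration, not figures from the cited source.

```python
# Rough calorie comparison behind the 7:1 claim; the per-kilogram calorie
# figures are approximate assumed values, not from the cited source.
grain_per_kg_beef = 7          # kg of feed grain per kg of beef (from the text)
kcal_per_kg_grain = 3400       # roughly typical for cereal grain
kcal_per_kg_beef  = 2500       # roughly typical for beef, averaged across cuts

calories_in  = grain_per_kg_beef * kcal_per_kg_grain
calories_out = kcal_per_kg_beef
print(f"{calories_out / calories_in:.0%} of the feed energy ends up as food")
# -> about 11%, i.e. roughly a tenth of the feed energy becomes food;
#    the rest is spent on the animal's own metabolism
```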
As for revenue: righting the wrongs caused by excessive meat consumption costs money. The government has limited resources. Putting a tax on meat will produce revenue for the government, some or all of which can be used to mitigate the harms caused by excessive meat production.
Pricing inputs and externalities realistically, on the other hand, achieves exactly the Proposition's desired effect: farmers who pollute or are profligate with water, for example, face higher costs. In this way efficiency is rewarded and waste penalised.
A key difference between taxing meat and taxing tobacco and alcohol is public support. We agree the inevitable resistance and outcry over any such proposed meat tax is not an argument that 'nothing should be done' - it is an argument that something else should be done, as we propose.
The point about inefficiency is irrelevant. For this to help their case proposition would need to show how increasing the efficiency of farms in the West would benefit the poor in the developing world, and then how this equates to 'saving the planet'.
The case with respect to revenue is, at best, uncertain for the reasons we've outlined. Meat has an important social and cultural dimension, and much of the price rise will be offset by a rise in demand for cheaper meat and meat products. The scheme will cost money, take an enormous amount of time to ratify and require extensive bureaucracy and policing, costs that need to be borne in advance.
Again, contrast this with the UN's suggested plan. By removing subsidies we raise billions of dollars in revenue immediately that can be put to work at once. Half the EU's total budget, or about $50bn, is spent on farming subsidies, and agricultural subsidies amount to ‘an average of 32% of total farm income in OECD countries, with livestock products (dairy and beef, in particular) regularly figuring among the most heavily subsidized products.’ [Their ref. P222] And remember the extraordinary environmental benefits of removing them in New Zealand. In addition, by pricing inputs realistically we raise immediate revenue, and by pricing externalities realistically we increase immediate revenue. And these costs fall proportionately on the farmers responsible for the damage we are endeavouring to repair.
In summary of our debate, let us first point out that the opposition conceded our point that excessive meat consumption is a danger to the planet. Therefore we can conclude that some action should be taken to alleviate this issue, and we proposed an EU-wide tax on meat products, which, coincidentally, is also what the motion called for.
The opposition offered two main areas of contention:
1. That the tax itself would be inefficient, discriminatory and hard to implement and therefore should be scrapped
2. That the alternatives proposed by them (originating from the much referenced UN report) lead to a better outcome and should be implemented instead.
In this summary we'll show how they've failed on both counts and therefore our case still stands.
1. Issues with the tax itself
Firstly we would like to say that despite the excessive nitpicking on the opposition’s part (and this format does lend itself to picking a ton of nits) we believe that they haven’t managed to rebut our contention that this tax would be feasible to implement and would all in all bring more good than harm. In our substantive case we pointed out two examples where the EU has implemented similar measures despite there also being similar general issues: those were an EU wide mandated minimum gasoline excise tax and a ban on incandescent light bulbs. The main philosophy behind both those actions is the same: we find that excessive consumption of a particular product is harmful to the environment and therefore needs to be regulated on an EU wide basis. We believe the opposition have not pointed out why this also wouldn’t apply to a tax on meat.
Let us now address their issues one by one.
- the tax wouldn’t reduce consumption
We pointed to evidence that showed a correlation between higher prices and lower consumption. We also pointed out that the demand for meat is more elastic than demand for alcohol; in other words, meat can be fairly easily replaced with other produce (as opposed to alcohol). Therefore we believe our point stands and the consumption of meat would fall, which, as we both agreed, would be a good thing for the planet.
- Consumers would switch to lower quality or less environmentally conscious products
The evidence in the previous point shows that while some of this might happen, there still remains an overall effect of people consuming less of the product whose price increases. We also showed that the subsidies and incentives for environmentally conscious farming would remain, and therefore it shouldn't experience a drop in demand.
- An indirect tax would be bad for poorer people
Surely this is a problem with all indirect taxes (VAT, excise duties), which isn't a reason for not having any indirect taxes at all. We want to incentivize people to come off meat, so, yes, there needs to be an economic impact on them because of the tax. If we alleviated the tax burden on poorer people, the tax wouldn't have much effect.
- The EU shouldn’t tax and subsidize the same thing simultaneously
There are a ton of cases where this already happens, most notably gasoline production or VAT.
- The EU can’t get anything done and people will start hating it
According to this logic the EU couldn’t do anything and there would be no point to this debate. We pointed out two examples in the same field – light bulbs and gas tax – where the EU has managed to get things done.
In summary we showed that an EU wide tax would reduce meat consumption and raise revenue for counteracting the damage done to the environment. All in all the tax would do more good than harm and therefore should be implemented.
The alternative is better
The opposition contends that the alternative, as outlined by the UN, would lead to more efficient results.
Firstly we believe that the solution provided by the opposition isn’t mutually exclusive to our solution. We said from the very beginning that our plan does not exclude the possibility of other actions, nor will it solve the problem outright. Therefore we think it’s pointless to compare the two plans, but rather actions should be taken to implement them both.
The solutions offered by the opposition – regulating inputs, subsidies and polluters – all deal with the supply side of meat consumption. This is all well and good, but we believe a robust solution dealing with the demand side of the equation is also needed, and we have therefore proposed a tax. If we look, for example, at the field of fossil fuels, we also see solutions regulating both the supply and demand sides of the equation.
Let us also point out a contradiction in the opposition's case. On the one hand they want to eliminate subsidies going to meat producers, but on the other hand they support the CAP because it encourages environmentally sound practices. Yet the CAP is precisely the kind of subsidy the UN is talking about eliminating.
The model will not reduce consumption.
To address the first of these points first: if an excise duty is placed on meat, there are two ways a consumer might respond:
a) Cut meat consumption
b) Downgrade to lower quality, cheaper meat.
The Proposition side appears to assume the former will result; we are less sure. It only takes simple economic logic to work out why.
The demand for a larger category of goods is always less elastic than that of its sub-categories, the category being less substitutable. So, for example, demand for 'vegetables' as a whole will be much less responsive to a price change (less elastic) than demand for 'carrots', it being easy to substitute, say, parsnips for carrots in response to a price rise. Wal-Mart's carrots, as a further subcategory, will be more elastic still.
By this token, rather than switch categories when faced with a price rise, consumers are more likely to move to a sub-category, in this case 'cheaper meat', where possible.
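A toy numeric illustration of this substitution effect is given below; the elasticity values are invented purely to show the shape of the argument, not estimates of real-world demand.

```python
# Toy illustration of the substitution argument: the broad category ("meat")
# is less price-elastic than a narrow one ("premium beef"). Elasticities are
# invented for illustration only.
def demand_change(price_change_pct, own_price_elasticity):
    return price_change_pct * own_price_elasticity

price_rise = 10  # a 10% excise-driven price rise
print(demand_change(price_rise, -0.4))   # "meat" overall:       -4% consumption
print(demand_change(price_rise, -1.5))   # "premium beef" alone: -15%, mostly shifted
                                         # into cheaper cuts rather than out of meat
```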
This argument is compounded by the notion that products that are part of people's day-to-day lives, like alcohol in this respect, are notoriously insensitive to excise duty. Your alcohol example was based on a study that looked at the purchasing patterns of literally 112 people. That is incapable of supporting any argument. Increases in duty on alcohol in the UK, for example, have been accompanied by a rise in consumption of epidemic proportions.
This is our first rebuttal point: this model will not significantly cut meat consumption.
It is likely that some people will downgrade to lower-quality meat; as we also said, we do not think a tax will remove the problem entirely. But downgrading cannot account for the whole effect of the tax on people. In the case of alcohol there is also the option of downgrading instead of limiting consumption - instead of buying cognac you could just get vodka and get just as drunk.
So of course some people will downgrade, but they will also reduce consumption.
Some negative implications of 'trading down'.
We see two ways in which consumers could obtain cheaper meat, they are to:
a) Downgrade to lower quality meat, ie. Non-organic and battery farmed meat products.
b) Import meat from where it can be produced more cheaply.
Unfortunately, the lack of clarity in the motion prevents us from understanding how viable the second option would be to a consumer, but if it were viable, then we contend the freight costs involved in shipping or flying meat around the world are completely counter-productive to their case.
As far as the former goes, we see this to be unequivocally the most likely outcome of a tax imposition that raises the cost of meat across the board. Organic meat would be the first to go, as we have seen demand for it fall already in times of recession. The same is true of free-range and ethically produced meat products.
We return the reader now to proposition's argument number 2 which we flagged earlier. They lamented the pollution caused by meat production by way of the "antibiotics, hormones... fertilizers and pesticides" involved in the industry.
We know then, from this statement, that Prop will be equally horrified at the prospect of meat consumers switching away from organically produced meat which avoids all these to lower quality, more polluting meat producers. By their definition of saving the planet, this effect is counter to their objective.
Not all meat is or will be produced organically. Some consumers still buy such irresponsibly produced meat. Taxing them will give us revenue that we can use to mitigate the harms of meat production. One of the ways we can do this is to encourage responsible production of meat.
The UN disagrees.
We will now elaborate briefly on those principles: why the Prop's case undermines them, and why they represent better solutions.
1. 'Prices of land, water and feed resources used for livestock production do not reflect true scarcities.' According to the UN, 'this leads to an overuse of these resources by the livestock sector and to major inefficiencies in the production process. Any future policy to protect the environment will, therefore have to introduce adequate market pricing for the main INPUTS'. (Our capitals) [their ref. p 276]
The UN's suggested methods of introducing 'adequate pricing' for the main inputs are clearly stated.
Water, for example is especially under-priced in most countries. To solve this problem the UN recommends the creation of 'water markets and different types of cost recovery' that 'have been identified as suitable mechanisms to correct the situation.'
The true cost of land can be reflected by 'instruments' which 'include the introduction and adjustment of grazing fees and lease rates, and improved institutional arrangements for controlled and equitable access'. [their ref. p 277]
Another key suggestion, which flies directly in the face of the Prop's idea, is 'the removal of price support at product level (i.e. the production subsidies for livestock products in the majority of industrialized countries)' The UN cites the case of New Zealand where the simple act of removing these subsidies resulted in ‘what has become one of the most efficient and environmentally benign ruminant livestock industries!’ [their ref. p 277]
The UN's second prescription to mitigate the environmental damage caused by meat farming was as follows:
2. 'Environmental externalities, both negative and positive, need to be explicitly factored into the policy framework, through the application of the “provider gets – POLLUTER PAYS” principle'. (Our capitals) [Their ref. p276-277]
Presumably we do not need to defend this principle too strongly. The UN itself is content to say that: 'correcting for externalities, both positive and negative, will lead livestock producers into management choices that are less costly to the environment.‘
By adopting these two principles, the UN report concludes:
'Correcting for distortions and externalities will bring us a step closer to prices for both inputs and outputs that reflect the true scarcities of production factors and natural resources used. These changed prices will induce technological change that will make better use of resources, and limit pollution and waste. Producers have shown their ability to respond quickly and decisively when such price signals are sent consistently.’
[their ref. p277]
We agree with the UN's solution, not the Proposition's.
The opposition argues for something they call "provider gets – POLLUTER PAYS". They never clearly explain what they mean by this. People buying meat are the reason producers produce meat, and the production of meat leads to pollution. We think it makes sense to tax the consumer, in the same way there is a tax on buying gasoline, because burning gasoline produces pollution. We think it makes sense that the people who enjoy the result of the polluting action should pay the tax.
Moreover, many of the methods the opposition outlines here and in other points - from the technical solutions they brought up in rebuttal (like 'sequestering carbon and mitigating CO2 emissions') to giving subsidies to responsible farmers - all require money. They are all means of correcting the results of the irresponsible acts of some people, so it makes sense that the people who enjoy the results of those acts also pay the bill, in the same way that road taxes are paid by people who use the roads and are used to repair and maintain them.
The inequality of an indirect tax
The proposition may argue that if governments do not find the inequality which arises from excise duties on alcohol and cigarettes a large enough issue to avoid this form of taxation, then surely the same can be said for meat. However, the difference with meat is that, as we on the opposition shall be discussing, there are methods of reducing the negative effects of meat consumption on our planet without simultaneously generating an additional harm, that of greater social inequality. In the cases of alcohol and cigarettes, this greater social inequality is perceived as justified, because the products themselves are viewed as harmful not only to the consumers themselves but also, through the consumer's own direct practices, to others. It is not the direct effect of consuming the meat product itself that we wish to reduce (ignoring the fact that yes, excessive amounts of any food are bad for your health), for no one fears anti-social behaviour in someone with a penchant for meat as they do in an alcohol abuser, but rather the indirect harms, such as the soil erosion of land used for livestock farming.
For many, meat is a necessity in terms of a balanced diet, unless we are to expect a total upheaval of consumption patterns right away. Therefore, we can see that the problem lies mainly in the practices exercised on the side of supply, rather than in the demand itself, and so approaching the situation from an angle which wishes to influence demand is not the most efficient.
In the previous point the opposition argued for "adequate market pricing for the main INPUTS" - land, water and feed, which are currently too cheap. If the inputs to production are made more expensive, the producers need to adjust the price of the product to still make a profit. So either way the price of meat will increase and either way, people with lower incomes will be "adversely affected".
The tax would disincentivise environmentally sensitive farming.
Were the EU to adopt the Prop's plan therefore, this would put them in the extraordinary position of simultaneously paying farmers huge subsidies and taxing their product!
Furthermore, the proposition’s tax on meat would surely be in direct opposition to CAP’s current stance of making “payments conditional on farmers meeting environmental and animal welfare standards and keeping their land in good condition” [[http://news.bbc.co.uk/1/hi/world/europe/4407792.stm]].
CAP wishes to encourage farmers to engage in practices which are environmentally friendly by subsidising farmers who do so to the point they can compete in the industry with farmers who choose cheaper, more environmentally harmful methods of production.
It does this by providing more generous subsidies to those who go into organic production; as of 2000 “funds can be given to…encourage organic farming” [[http://www.economicshelp.org/europe/cap.html]]. This is because the EU recognises that it is the intensive breeding of livestock and poultry that leads to the harms associated with meat consumption (‘intensive’ being the word to note). The blunt tool that is a blanket tax on meat would render wasted not only a large amount of the money spent on European farming but also the extra subsidies provided to organic farmers, because the price increase the tax would impose on organically farmed products would see organic farming return to an uncompetitive status.
The tax the proposition endorses would make it no longer viable for countries within the EU to produce their meat organically and compete with the economies-of-scale production of cheap meat imported from, for instance, Brazil or Argentina. With the EU even expressing the view that “It is important to ensure that trade does not undermine the EU’s efforts to improve the protection of the welfare of animals for example” [[http://ec.europa.eu/agriculture/publi/fact/meat/2004_en.pdf]], we ought to realise that the proposition's scheme would be counter-productive in steering methods of meat production towards environmentally friendly practice, something the EU is actually taking great strides to do.
What we are seeing here, is the fundamental unsuitability of using a consumption tax to achieve environmental aims.
By removing subsidies and pricing externalities and inputs realistically, farmers who adopt more efficient environmentally sensitive practices win in the market place. In this way the market aligns the aims of consumers with the aims of the environment, precisely as the UN recommends.
But by placing a tax on consumption across the board, you remove the means and incentive for consumers to seek out more responsibly farmed meat (and very possibly create an incentive for them to increase consumption of less-responsibly produced livestock) while at the same time removing any incentive for producers to clean up their act.
But secondly, we do not see why it follows that environmentally sound farming techniques will die out. The CAP will remain in place. And as we said earlier, we do not think that the tax is the only thing we need to do to address the problems we have outlined. Subsidizing sound farming practices so that they can be competitive is a good thing to do. Money from the CAP can still be given for this.
Regarding cheaply produced meat imported from Brazil and Argentina. Producing meat in these countries is already cheaper at the moment. The EU can do nothing to make production of meat in these countries more expensive. Taxing the consumption of meat will make it more expensive regardless of where it was produced.
In answer to Prop's rebuttal of Point 1.
In answer to Prop's rebuttal of our second point.
To this Props say the revenue the tax raises and that creates this incentive can be used to pay the government to clean up the mess (which the tax itself helped create) and to encourage responsible production of meat!
But if it's within our power to encourage responsible production of meat, why not just do so in the first place, as we advocate, without first making the problem worse? Particularly as our model enables us to raise more money, more surely, more promptly and in a fashion that automatically bears down on bad farming practices and rewards responsible ones?
Rebutting Prop's argument against our Point 3.
'a concept where manufacturers and importers of products should bear a significant degree of responsibility for the environmental impacts of their products throughout the product life-cycle, including upstream impacts inherent in the selection of materials for the products, impacts from manufacturers’ production process itself, and downstream impacts from the use and disposal of the products. Producers accept their responsibility when designing their products to minimise life-cycle environmental impacts, and when accepting legal, physical or socio-economic responsibility for environmental impacts that cannot be eliminated by design.'
Opps say that 'people buying meat are the reason producers produce meat. The production of meat leads to pollution.' But we would seek to refine this by saying that not all meat production leads to pollution and some does more than others. This is why the principle that the polluter should pay is so important, it ensures price properly reflects environmental damage caused. In the Prop's model however, even if I am extremely careful to buy only organic, responsibly farmed meat, and therefore am not enjoying 'the result of the polluting action' I still 'need to pay the tax'. While if I care not a jot for how my meat was farmed, I pay the same.
In the second part of their rebuttal, Props say that our proposals 'all require money to do'. This is simply not true.
We say (for the umpteenth time) inputs should be properly priced, externalities charged for and subsidies removed. All three of these measures simultaneously tackle the problems Props wish to solve and raise revenue.
Props then contend that since these measures are necessary to 'correct the results of the irresponsible acts of some people. It makes sense that these some people who enjoy the results of their acts also pay the bill.' But as we show above this will not happen in their plan and does happen in ours.
Rebutting Prop's rebuttal to our fourth point.
Farmers will find ways to reduce these costs or go out of business, and in the process they will reduce their impact on the environment in a way they are not incentivised to do in the Prop's model.
In answer to Prop's rebuttal to our point 5.
Taking the comparison Props are so fond of making with Tobacco and Alcohol, this is equivalent to paying tobacco manufacturers and distillers subsidies at the same time as charging them excise duties, something of a mixed message.
For people so happy to tax their own farmers indiscriminately, Prop show strange reluctance to extend this principle to the only place where it might actually do some good.
They concede our argument that raising the price of meat in the EU might lead to an increase in demand for cheaper imported meat with all the enormous environmental damage this brings with it. But then go on to say: 'the EU can do nothing to make production of meat in these countries more expensive'.
Well, the EU may not be able to make production of foreign meat more expensive, but that is irrelevant to the case, because the EU can, very simply, make the price of it more expensive by, you've guessed it, placing an excise duty on meat produced OUTSIDE THE EU. This, at least, would make some environmental sense since imported meat attracts immeasurably higher environmental costs as we will show in our final point.
Proposition plan will take too long to implement
Props began their argument with strong words about the urgency of the situation the planet is in. Just contemplate for a moment the length of time the plan they propose to combat this perilous threat would take to put into effect.
We begin with a commission charged to produce a report, a year or two later this leads to a consultation document. After this has done the rounds and all the various vested interests have had their say, maybe, a tentative draft proposal could be produced... You get the picture. We wonder if the Props are lawyers, as only a lawyer could view this prospect with any satisfaction.
(By the way, we would love to be a fly on the wall when Props put their plan to that most reasonable and law-abiding community: the French farmers. And we don't suppose for a moment it would occur to British farmers to do anything so silly as protest this irrational and indiscriminate tax by blockading channel ports? Not to be outdone we notice that last May thousands of German farmers protested against increased taxes on diesel by driving their tractors through Berlin. How much more animated will farmers' ire be when it is directed against those already-resented 'unelected fat cat MEPs' as they are styled in the popular press.)
Even the agreement (if it could ever be reached) is not the end of the affair, of course. An enormous bureaucracy would need to be created, new powers granted to customs officers to prevent evasion, and so on.
And all for what? To possibly create a marginal reduction in demand for meat, at some point in the future, not even conditional on improving environmental practice, likely to incentivise further environmental damage, in the expectation that this will ameliorate the destruction of the planet that is taking place at an accelerating rate today.
Our planet is on fire, let's not send a letter to the fire brigade.
Proposition plan will undermine the EU
The only way Britain became a signatory to the Maastricht and Lisbon treaties was by negotiating a series of 'opt-out' clauses that prevented the EU being able to mandate precisely the kind of interference in local affairs Props plan represents.
What tenuous mandate the EU has to affect the lives of the citizens of its member states is, in short, fragile, and dependent on the principle of 'subsidiarity', which Prop's plan clearly violates. Given our earlier experience, we define this principle below:
Subsidiarity 'is presently best known as a fundamental principle of European Union law. According to this principle, the EU may only act (i.e. make laws) where action of individual countries is insufficient. The principle was established in the 1992 Treaty of Maastricht, and was contained within the failed Treaty establishing a constitution for Europe. However, at the local level it was already a key element of the European Charter of Local Self-Government, an instrument of the Council of Europe promulgated in 1985.' (see Article 4, Paragraph 3 of the Charter) [http://en.wikipedia.org/wiki/Subsidiarity]
In this political and economic climate, faced with the prospect of more expensive meat mandated by the EU (that subsidises farmers to the tune of $50bn a year) isn't it certain that most EU citizens would prefer to keep their cheaper meat and do without the EU instead?
Fiddling while the Amazon burns.
Let us try to put these threats into some kind of global context. (We quote from the UN report unless stated otherwise.)
The first thing to note is that expansion of livestock farming and expansion of pasture areas into natural eco-systems 'has essentially come to an end in most parts of the world, except for Latin America..' '..and central Africa.' [p256]
Secondly, 'compared to the amounts of carbon released from changes in land use and land-degradation, emissions from the food chain are small'. [p115]
This leads the UN to conclude that 'creating incentives for forest conservation and decreased deforestation, in Amazonia and other tropical areas, can offer a unique opportunity for climate change mitigation given the..’ ‘..relative low costs.’ [p116]
It's also true that these same tropical forests while they account for only '8% of the world’s land surface...'. also '..hold more than 50% of the world’s species. Tropical regions support two-thirds of the estimated 250 000 plant species and 30% of bird species’ [p183]
The UN points out that Latin America is the continent where livestock production accounts most directly for deforestation, estimating that over the past decade 24 million hectares of neotropical land have been given over to grazing. [p187]
In contrast, in Europe today ‘traditional grazing is seen as having positively affected biodiversity in pastures, by creating and maintaining sward structural heterogeneity, particularly as a result of dietary choice.’ [Rook et al 2004]
The UN agrees: 'in some contexts (e.g. Europe) extensive grazing may provide a tool to maintain a threatened but ecologically valuable level of landscape heterogeneity.’ [p188]
The last piece of the puzzle we need to finish the picture is demand. Here again the problem is not with Europe, where demand for meat has remained relatively static, but in the developing world, especially China, where, as the Props have pointed out demand for meat is expected to grow exponentially.
In this context, circuitously reducing demand for meat and meat products in Europe, in a manner that risks actually increasing our appetite for cheaper imported meat as per Props' plan, will exacerbate rather than mitigate the impending environmental catastrophe awaiting us.
Proposition is barking up the wrong tree, with the wrong dog.
We accepted the first three threats were real and the aim to correct them was desirable, but we disagreed that taxing meat consumption would solve the problem.
We didn’t believe the fourth threat was genuine at all, so perhaps we can get this out of the way before summarising our objections to using taxation as an instrument to solve the others.
Their chain of reasoning went as follows: ‘growing livestock wastes grain…’ ‘…if the grain that is used to grow pork, for an example, would be used to feed humans more humans would get fed…’
We were happy to concede the first point: that growing livestock is comparatively wasteful of grain, but we couldn’t see how, nor did Proposition ever establish, this argued for a European tax on meat consumption. As we put this in our rebuttal, the cause of people going hungry in the developing world isn’t a lack of capacity in the West. To make this point stick Proposition would need to show how making European farming less wasteful of grain (even if this was the outcome of their proposal) would assist the cause of those going without food elsewhere. They were not able to do so.
This said, the main focus of the debate became the suitability or otherwise of an excise tax as a way of tackling the three genuine threats to the environment caused by meat production.
Proposition advocated this approach on two grounds, to reduce demand and to raise revenue ‘some or all of which can be used to mitigate the harms caused by excessive meat production.’
Again, we felt the second purpose of the tax could be dismissed relatively easily. This is because at present farming in general and livestock farming in particular, benefits from enormous financial support from the EU. Props would maintain these subsidies and simultaneously tax the product they help subsidise. We saw this is as economic madness when the simple expedient of dispensing with them achieved everything Props desired in a way that was guaranteed to provide revenue in a much more timely fashion. This is the approach preferred by the UN, which cites the case of New Zealand where the simple act of removing these subsidies resulted in ‘what has become one of the most efficient and environmentally benign ruminant livestock industries!’ [their ref. p 277]
Having dismissed Prop’s plan to raise revenue as economically flawed, it wasn't even clear, in our view, that Prop’s plan would raise money, or that its impact on consumption would be without serious negative consequences. This was the third area of contention to which we now turn.
We believe that rather than simply stop eating meat altogether when faced with a price rise consumers would seek out cheaper cuts or sources of meat, a point Props conceded. Both actions have the potential to exacerbate the damage done to the environment by meat production. We have seen, for example, how the recession has led to a fall in demand of organically produced meat. And since the tax is indiscriminate, i.e. it does not favour farmers who employ these better environmental practices, it will simply add to this pressure on organic or otherwise more ethical farmers. Meanwhile those farmers who use every means at their disposal to lower costs, including "antibiotics, hormones... fertilizers and pesticides" will be rewarded for these bad practices.
Another beneficiary of the Props plan will be farmers operating in parts of the world with a comparative economic advantage over the EU, such as Latin America. Props shrugged their shoulders at this prospect, saying simply that their meat is ‘already cheaper at the moment’ and there is nothing the EU can do about it.
We found this attitude extraordinary in the context of their overall economic approach and the global context of the problem. The real battleground in the fight against climate change and to preserve biodiversity is the tropical rainforests, and Latin America is the continent where these precious cradles of life are most under threat from the encroachment of livestock farmers. Surely what tenuous grounds Props have for their attempt to reduce European demand for meat is dependent upon directing consumption away from meat produced in these vital habitats, not towards them?
Their plan will have the same impact on costs 'regardless of where it was produced'; this is precisely the problem. We seek to make the cost of meat reflect the environmental harm caused in its production. Thus meat produced on land reclaimed from tropical rain forests would in our plan be priced out of the market, while in theirs it becomes relatively more attractive to consume.
What makes their defeatism even harder to understand is their preferred mechanism, excise tax, is particularly well-suited to combating this problem, by making imported meat more expensive.
We ended our critique of Prop’s plan by pointing out that the glacial speed at which the EU could institute it was entirely out of step with their presentation of the crisis. We further doubted whether the EU had the level of support among its citizens or member states to pass it into law, and we bemoaned it as a distraction from the far more pressing problem of tackling the explosion of demand in the developing world and its likely devastating impact on tropical habitats.
Throughout our critique we were able to show that far more effective means were available to the EU to combat the threats Props identified, that did not suffer from the critical deficiencies of Props’ plan. We based this on the UN’s own analysis that the key to reducing livestock’s long shadow was resource efficiency. And that the most effective way to achieve this was by ensuring inputs and externalities were priced realistically and subsidies removed.
Between them, these three measures would ensure the price of meat reflected the true environmental cost of its production, wherever it took place, elegantly aligning the interests of consumers and the environment in a manner Props' plan so singularly, and disastrously, fails to do.
Great white shark
The great white shark (Carcharodon carcharias), also known as the great white, white shark or "white pointer", is a species of large mackerel shark which can be found in the coastal surface waters of all the major oceans. The great white shark is notable for its size, with larger female individuals growing to 6.1 m (20 ft) in length and 1,905–2,268 kg (4,200–5,000 lb) in weight at maturity. However, most are smaller; males measure 3.4 to 4.0 m (11 to 13 ft), and females measure 4.6 to 4.9 m (15 to 16 ft) on average. According to a 2014 study, the lifespan of great white sharks is estimated to be as long as 70 years or more, well above previous estimates, making it one of the longest lived cartilaginous fish currently known. According to the same study, male great white sharks take 26 years to reach sexual maturity, while the females take 33 years to be ready to produce offspring. Great white sharks can swim at speeds of over 56 km/h (35 mph), and can swim to depths of 1,200 m (3,900 ft).
The great white shark has no known natural predators other than, on very rare occasions, the killer whale. The great white shark is arguably the world's largest known extant macropredatory fish, and is one of the primary predators of marine mammals. It is also known to prey upon a variety of other marine animals, including fish and seabirds. It is the only known surviving species of its genus Carcharodon, and is responsible for more recorded human bite incidents than any other shark.
The species faces numerous ecological challenges which have resulted in international protection. The IUCN lists the great white shark as a vulnerable species, and it is included in Appendix II of CITES. It is also protected by several national governments, such as Australia (as of 2018).
The novel Jaws by Peter Benchley and its subsequent film adaptation by Steven Spielberg depicted the great white shark as a "ferocious man eater". Humans are not the preferred prey of the great white shark, but the great white is nevertheless responsible for the largest number of reported and identified fatal unprovoked shark attacks on humans.
Taxonomy

The great white shark was one of the many species originally described by Linnaeus (who classified it among the Amphibia) in the landmark 1758 10th edition of his Systema Naturae, under its first scientific name, Squalus carcharias. Later, Sir Andrew Smith gave it Carcharodon as its generic name in 1833, and in 1873 the generic name was identified with Linnaeus' specific name and the current scientific name, Carcharodon carcharias, was finalized. Carcharodon comes from the Ancient Greek words κάρχαρος (kárkharos, 'sharp' or 'jagged') and ὀδούς (odoús), ὀδών (odṓn, 'tooth').
Ancestry and fossil record
The earliest known fossils of the great white shark are about 16 million years old, from the mid-Miocene epoch. However, the phylogeny of the great white is still in dispute. The original hypothesis for the great white's origins is that it shares a common ancestor with a prehistoric shark such as C. megalodon. C. megalodon had teeth that were superficially not too dissimilar to those of great white sharks, but its teeth were far larger. Although cartilaginous skeletons do not fossilize, C. megalodon is estimated to have been considerably larger than the great white shark, estimated at up to 17 m (56 ft) and 59,413 kg (130,983 lb). Similarities among the physical remains and the extreme size of both the great white and C. megalodon led many scientists to believe these sharks were closely related, and the name Carcharodon megalodon was applied to the latter. However, a new hypothesis proposes that C. megalodon and the great white are distant relatives (albeit sharing the family Lamnidae). The great white is also more closely related to an ancient mako shark, Isurus hastalis, than to C. megalodon, a theory that seems to be supported by the discovery of a complete set of jaws with 222 teeth and 45 vertebrae of the extinct transitional species Carcharodon hubbelli in 1988, published on 14 November 2012. In addition, the new hypothesis assigns C. megalodon to the genus Carcharocles, which also comprises the other megatoothed sharks; Otodus obliquus is the ancient representative of the extinct Carcharocles lineage.
Distribution and habitat
Great white sharks live in almost all coastal and offshore waters which have water temperatures between 12 and 24 °C (54 and 75 °F), with greater concentrations in the United States (Northeast and California), South Africa, Japan, Oceania, Chile, and the Mediterranean, including the Sea of Marmara and the Bosphorus. One of the densest known populations is found around Dyer Island, South Africa.
The great white is an epipelagic fish, observed mostly in the presence of rich game, such as fur seals (Arctocephalus spp.), sea lions, cetaceans, other sharks, and large bony fish species. In the open ocean, it has been recorded at depths as great as 1,200 m (3,900 ft). These findings challenge the traditional notion that the great white is a coastal species.
According to a recent study, California great whites have migrated to an area between the Baja California Peninsula and Hawaii known as the White Shark Café to spend at least 100 days before migrating back to Baja. On the journey out, they swim slowly and dive down to around 900 m (3,000 ft). After they arrive, they change behavior and do short dives to about 300 m (980 ft) for up to ten minutes. Another white shark that was tagged off the South African coast swam to the southern coast of Australia and back within the year. A similar study tracked a different great white shark from South Africa swimming to Australia's northwestern coast and back, a journey of 20,000 km (12,000 mi; 11,000 nmi) in under nine months. These observations argue against traditional theories that white sharks are coastal territorial predators, and open up the possibility of interaction between shark populations that were previously thought to have been discrete. The reasons for their migration and what they do at their destination are still unknown. Possibilities include seasonal feeding or mating.
In the Northwest Atlantic the white shark populations off the New England coast were nearly eradicated due to over-fishing. However, in recent years the populations have begun to grow greatly, largely due to the increase in seal populations on Cape Cod, Massachusetts since the enactment of the Marine Mammal Protection Act in 1972. Currently very little is known about the hunting and movement patterns of great whites off Cape Cod, but ongoing studies hope to offer insight into this growing shark population.
A 2018 study indicated that white sharks prefer to congregate deep in anticyclonic eddies in the North Atlantic Ocean. The sharks studied tended to favor the warm water eddies, spending the daytime hours at 450 meters and coming to the surface at night.
Anatomy and appearance
The great white shark has a robust, large, conical snout. The upper and lower lobes on the tail fin are approximately the same size, as in some other mackerel sharks. A great white displays countershading, having a white underside and a grey dorsal area (sometimes in a brown or blue shade) that gives an overall mottled appearance. The coloration makes it difficult for prey to spot the shark because it breaks up the shark's outline when seen from the side. From above, the darker shade blends with the sea and from below it exposes a minimal silhouette against the sunlight. Leucism is extremely rare in this species, but has been documented in one great white shark (a pup that washed ashore in Australia and died). Great white sharks, like many other sharks, have rows of serrated teeth behind the main ones, ready to replace any that break off. When the shark bites, it shakes its head side-to-side, helping the teeth saw off large chunks of flesh. Great white sharks, like other mackerel sharks, have larger eyes than other shark species in proportion to their body size. The iris of the eye is a deep blue instead of black.
In great white sharks, sexual dimorphism is present, and females are generally larger than males. Male great whites on average measure 3.4 to 4.0 m (11 to 13 ft) long, while females average 4.6 to 4.9 m (15 to 16 ft). Adults of this species weigh 522–771 kg (1,151–1,700 lb) on average; however, mature females can have an average mass of 680–1,110 kg (1,500–2,450 lb). The largest females have been verified up to 6.1 m (20 ft) in length and an estimated 1,905 kg (4,200 lb) in weight, perhaps up to 2,268 kg (5,000 lb). The maximum size is subject to debate because some reports are rough estimations or speculations performed under questionable circumstances. Among living cartilaginous fish, only the whale shark (Rhincodon typus), the basking shark (Cetorhinus maximus) and the giant manta ray (Manta birostris), in that order, are on average larger and heavier. These three species are generally quite docile in disposition and given to passively filter-feeding on very small organisms. This makes the great white shark the largest extant macropredatory fish. Great white sharks are around 1.2 m (3.9 ft) long at birth, and grow about 25 cm (9.8 in) each year.
According to J. E. Randall, the largest white shark reliably measured was a 6.0 m (19.7 ft) individual reported from Ledge Point, Western Australia, in 1987. Another great white specimen of similar size has been verified by the Canadian Shark Research Center: a female caught by David McKendrick of Alberton, Prince Edward Island, in August 1988 in the Gulf of St. Lawrence off Prince Edward Island. This female great white was 6.1 m (20 ft) long. There was also a report, considered reliable by some experts in the past, of a larger great white shark specimen from Cuba in 1945. This specimen was reportedly 6.4 m (21 ft) long and had a body mass estimated at 3,324 kg (7,328 lb). Later studies, however, revealed that this particular specimen was actually around 4.9 m (16 ft) in length, within the average maximum size range.
The largest great white recognized by the International Game Fish Association (IGFA) is one caught by Alf Dean in south Australian waters in 1959, weighing 1,208 kg (2,663 lb). Several larger great whites caught by anglers have since been verified, but were later disallowed from formal recognition by IGFA monitors for rules violations.
Examples of large unconfirmed great whites
A number of very large unconfirmed great white shark specimens have been recorded. For decades, many ichthyological works, as well as the Guinness Book of World Records, listed two great white sharks as the largest individuals: In the 1870s, a 10.9 m (36 ft) great white captured in southern Australian waters, near Port Fairy, and an 11.3 m (37 ft) shark trapped in a herring weir in New Brunswick, Canada, in the 1930s. However, these measurements were not obtained in a rigorous, scientifically valid manner, and researchers have questioned the reliability of these measurements for a long time, noting they were much larger than any other accurately reported sighting. Later studies proved these doubts to be well founded. This New Brunswick shark may have been a misidentified basking shark, as the two have similar body shapes. The question of the Port Fairy shark was settled in the 1970s when J. E. Randall examined the shark's jaws and "found that the Port Fairy shark was of the order of 5.0 m (16.4 ft) in length and suggested that a mistake had been made in the original record, in 1870, of the shark's length". These wrong measurements would make the alleged shark more than five times heavier than it really was.
While these measurements have not been confirmed, some great white sharks caught in modern times have been estimated to be more than 7 m (23 ft) long, but these claims have received some criticism. However, J. E. Randall believed that the great white shark may exceed 6.1 m (20 ft) in length. A great white shark was captured near Kangaroo Island in Australia on 1 April 1987. This shark was estimated to be more than 6.9 m (23 ft) long by Peter Resiley, and has been designated as KANGA. Another great white shark was caught in Malta by Alfredo Cutajar on 16 April 1987. This shark was also estimated to be around 7.13 m (23.4 ft) long by John Abela and has been designated as MALTA. However, Cappo drew criticism because he used shark size estimation methods proposed by J. E. Randall to suggest that the KANGA specimen was 5.8–6.4 m (19–21 ft) long. In a similar fashion, I. K. Fergusson also used shark size estimation methods proposed by J. E. Randall to suggest that the MALTA specimen was 5.3–5.7 m (17–19 ft) long. However, photographic evidence suggested that these specimens were larger than the size estimations yielded through Randall's methods. A team of scientists (H. F. Mollet, G. M. Cailliet, A. P. Klimley, D. A. Ebert, A. D. Testi, and L. J. V. Compagno) therefore reviewed the cases of the KANGA and MALTA specimens in 1996 to resolve the dispute, conducting a comprehensive morphometric analysis of the remains of these sharks and a re-examination of the photographic evidence in an attempt to validate the original size estimations; their findings were consistent with them, indicating that the estimations by P. Resiley and J. Abela are reasonable and could not be ruled out.

A particularly large female great white nicknamed "Deep Blue", estimated at 6.1 m (20 ft), was filmed off Guadalupe during shooting for the 2014 episode of Shark Week "Jaws Strikes Back". Deep Blue would later gain significant attention when she was filmed interacting with researcher Mauricio Hoyas Pallida in a viral video that Mauricio posted on Facebook on 11 June 2015. Deep Blue was seen again off Oahu in January 2019 while scavenging a sperm whale carcass, whereupon she was filmed swimming beside divers, including dive tourism operator and model Ocean Ramsey, in open water. In July 2019, a fisherman, J. B. Currell, was on a trip to Cape Cod from Bermuda with Tom Brownell when they saw a large shark about 40 mi (64 km) southeast of Martha's Vineyard. Recording it on video, he said that it weighed about 5,000 lb (2,300 kg) and measured 25–30 ft (7.6–9.1 m), evoking a comparison with the fictional shark Jaws. The video was shared with the page "Troy Dando Fishing" on Facebook.

A particularly infamous great white shark, supposedly of record proportions, once patrolled the area that comprises False Bay, South Africa, and was said to be well over 7 m (23 ft) during the early 1980s. This shark, known locally as the "Submarine", had a legendary reputation that was supposedly well founded. Though rumors have stated this shark was exaggerated in size or non-existent altogether, witness accounts by the then young Craig Anthony Ferreira, a notable shark expert in South Africa, and his father indicate an unusually large animal of considerable size and power (though it remains uncertain just how massive the shark was, as it escaped capture each time it was hooked). Ferreira describes the four encounters with the giant shark in which he participated in great detail in his book "Great White Sharks On Their Best Behavior".
One contender in maximum size among the predatory sharks is the tiger shark (Galeocerdo cuvier). While tiger sharks are typically both a few feet smaller and of a leaner, less heavy body structure than white sharks, they have been confirmed to reach at least 5.5 m (18 ft) in length, and an unverified specimen was reported to have measured 7.4 m (24 ft) in length and weighed 3,110 kg (6,860 lb), more than two times heavier than the largest confirmed specimen at 1,524 kg (3,360 lb). Some other macropredatory sharks, such as the Greenland shark (Somniosus microcephalus) and the Pacific sleeper shark (S. pacificus), are also reported to rival these sharks in length in exceptional cases (but probably weigh a bit less, since they are more slender in build than a great white). The question of maximum weight is complicated by the unresolved question of whether or not to include the shark's stomach contents when weighing the shark. With a single bite a great white can take in up to 14 kg (31 lb) of flesh and can also consume several hundred kilograms of food.
Great white sharks, like all other sharks, have an extra sense given by the ampullae of Lorenzini which enables them to detect the electromagnetic field emitted by the movement of living animals. Great whites are so sensitive they can detect variations of half a billionth of a volt. At close range, this allows the shark to locate even immobile animals by detecting their heartbeat. Most fish have a less-developed but similar sense using their body's lateral line.
To more successfully hunt fast and agile prey such as sea lions, the great white has adapted to maintain a body temperature warmer than the surrounding water. One of these adaptations is a "rete mirabile" (Latin for "wonderful net"). This close web-like structure of veins and arteries, located along each lateral side of the shark, conserves heat by warming the cooler arterial blood with the venous blood that has been warmed by the working muscles. This keeps certain parts of the body (particularly the stomach) at temperatures up to 14 °C (25 °F) above that of the surrounding water, while the heart and gills remain at sea temperature. When conserving energy, the core body temperature can drop to match the surroundings. A great white shark's success in raising its core temperature is an example of gigantothermy. Therefore, the great white shark can be considered an endothermic poikilotherm or mesotherm because its body temperature is not constant but is internally regulated. Great whites also rely on the fat and oils stored within their livers for long-distance migrations across nutrient-poor areas of the oceans. Studies by Stanford University and the Monterey Bay Aquarium published on 17 July 2013 revealed that in addition to controlling the sharks' buoyancy, the liver of great whites is essential in migration patterns. Sharks that sink faster during drift dives were revealed to use up their internal stores of energy quicker than those which sink in a dive at more leisurely rates.
Toxicity from heavy metals seems to have little negative effect on great white sharks. Blood samples taken from forty-three individuals of varying size, age and sex off the South African coast, in a 2012 study led by biologists from the University of Miami, indicate that despite high levels of mercury, lead, and arsenic, there was no sign of raised white blood cell counts or elevated granulocyte-to-lymphocyte ratios, indicating the sharks had healthy immune systems. This discovery suggests a previously unknown physiological defense against heavy metal poisoning. Great whites are known to have a propensity for "self-healing and avoiding age-related ailments".
A 2007 study from the University of New South Wales in Sydney, Australia, used CT scans of a shark's skull and computer models to measure the shark's maximum bite force. The study reveals the forces and behaviors its skull is adapted to handle and resolves competing theories about its feeding behavior. In 2008, a team of scientists led by Stephen Wroe conducted an experiment to determine the great white shark's jaw power and findings indicated that a specimen massing 3,324 kg (7,328 lb) could exert a bite force of 18,216 newtons (4,095 lbf).
Ecology and behavior
This shark's behavior and social structure is complex. In South Africa, white sharks have a dominance hierarchy depending on the size, sex and squatter's rights: Females dominate males, larger sharks dominate smaller sharks, and residents dominate newcomers. When hunting, great whites tend to separate and resolve conflicts with rituals and displays. White sharks rarely resort to combat although some individuals have been found with bite marks that match those of other white sharks. This suggests that when a great white approaches too closely to another, they react with a warning bite. Another possibility is that white sharks bite to show their dominance.
The great white shark is one of only a few sharks known to regularly lift its head above the sea surface to gaze at other objects such as prey. This is known as spy-hopping. This behavior has also been seen in at least one group of blacktip reef sharks, but this might be learned from interaction with humans (it is theorized that the shark may also be able to smell better this way, because smell travels through air faster than through water). White sharks are generally very curious animals, display intelligence and may also turn to socializing if the situation demands it. At Seal Island, white sharks have been observed arriving and departing in stable "clans" of two to six individuals on a yearly basis. Whether clan members are related is unknown, but they get along peacefully enough. In fact, the social structure of a clan is probably most aptly compared to that of a wolf pack, in that each member has a clearly established rank and each clan has an alpha leader. When members of different clans meet, they establish social rank nonviolently through any of a variety of interactions.
Great white sharks are carnivorous and prey upon fish (e.g. tuna, rays, other sharks), cetaceans (i.e., dolphins, porpoises, whales), pinnipeds (e.g. seals, fur seals, and sea lions), sea turtles, sea otters (Enhydra lutris) and seabirds. Great whites have also been known to eat objects that they are unable to digest. Juvenile white sharks predominantly prey on fish, including other elasmobranchs, as their jaws are not strong enough to withstand the forces required to attack larger prey such as pinnipeds and cetaceans until they reach a length of 3 m (9.8 ft) or more, at which point their jaw cartilage mineralizes enough to withstand the impact of biting into larger prey species. Upon approaching a length of nearly 4 m (13 ft), great white sharks begin to target predominantly marine mammals for food, though individual sharks seem to specialize in different types of prey depending on their preferences. They seem to be highly opportunistic. These sharks prefer prey with a high content of energy-rich fat. Shark expert Peter Klimley used a rod-and-reel rig and trolled carcasses of a seal, a pig, and a sheep from his boat in the South Farallons. The sharks attacked all three baits but rejected the sheep carcass.
Off California, sharks immobilize northern elephant seals (Mirounga angustirostris) with a large bite to the hindquarters (which is the main source of the seal's mobility) and wait for the seal to bleed to death. This technique is especially used on adult male elephant seals, which are typically larger than the shark, ranging between 1,500 and 2,000 kg (3,300 and 4,400 lb), and are potentially dangerous adversaries. Most commonly though, juvenile elephant seals are the most frequently eaten at elephant seal colonies. Prey is normally attacked sub-surface. Harbor seals (Phoca vitulina) are taken from the surface and dragged down until they stop struggling. They are then eaten near the bottom. California sea lions (Zalophus californianus) are ambushed from below and struck mid-body before being dragged and eaten.
In the Northwest Atlantic mature great whites are known to feed on both harbor and grey seals. Unlike adults, juvenile white sharks in the area feed on smaller fish species until they are large enough to prey on marine mammals such as seals.
White sharks also attack dolphins and porpoises from above, behind or below to avoid being detected by their echolocation. Targeted species include dusky dolphins (Lagenorhynchus obscurus), Risso's dolphins (Grampus griseus), bottlenose dolphins (Tursiops spp.), humpback dolphins (Sousa spp.), harbour porpoises (Phocoena phocoena), and Dall's porpoises (Phocoenoides dalli). Groups of dolphins have occasionally been observed defending themselves from sharks with mobbing behaviour. White shark predation on other species of small cetacean has also been observed. In August 1989, a 1.8 m (5.9 ft) juvenile male pygmy sperm whale (Kogia breviceps) was found stranded in central California with a bite mark on its caudal peduncle from a great white shark. In addition, white sharks attack and prey upon beaked whales. Cases where an adult Stejneger's beaked whale (Mesoplodon stejnegeri), with a mean mass of around 1,100 kg (2,400 lb), and a juvenile Cuvier's beaked whale (Ziphius cavirostris), an individual estimated at 3 m (9.8 ft), were hunted and killed by great white sharks have also been observed. When hunting sea turtles, they appear to simply bite through the carapace around a flipper, immobilizing the turtle. The heaviest species of bony fish, the oceanic sunfish (Mola mola), has been found in great white shark stomachs.
Off Seal Island, False Bay in South Africa, the sharks ambush brown fur seals (Arctocephalus pusillus) from below at high speeds, hitting the seal mid-body. They can go so fast that they completely leave the water. The peak burst speed is estimated to be above 40 km/h (25 mph). They have also been observed chasing prey after a missed attack. Prey is usually attacked at the surface. Shark attacks most often occur in the morning, within 2 hours of sunrise, when visibility is poor. Their success rate is 55% in the first 2 hours, falling to 40% in late morning after which hunting stops.
Whale carcasses comprise an important part of the diet of white sharks. However, this has rarely been observed due to whales dying in remote areas. It has been estimated that 30 kg (66 lb) of whale blubber could feed a 4.5 m (15 ft) white shark for 1.5 months. Detailed observations were made of four whale carcasses in False Bay between 2000 and 2010. Sharks were drawn to the carcass by chemical and odour detection, spread by strong winds. After initially feeding on the whale caudal peduncle and fluke, the sharks would investigate the carcass by slowly swimming around it and mouthing several parts before selecting a blubber-rich area. During feeding bouts of 15–20 seconds the sharks removed flesh with lateral headshakes, without the protective ocular rotation they employ when attacking live prey. The sharks were frequently observed regurgitating chunks of blubber and immediately returning to feed, possibly in order to replace low energy yield pieces with high energy yield pieces, using their teeth as mechanoreceptors to distinguish them. After feeding for several hours, the sharks appeared to become lethargic, no longer swimming to the surface; they were observed mouthing the carcass but apparently unable to bite hard enough to remove flesh, they would instead bounce off and slowly sink. Up to eight sharks were observed feeding simultaneously, bumping into each other without showing any signs of aggression; on one occasion a shark accidentally bit the head of a neighbouring shark, leaving two teeth embedded, but both continued to feed unperturbed. Smaller individuals hovered around the carcass eating chunks that drifted away. Unusually for the area, large numbers of sharks over five metres long were observed, suggesting that the largest sharks change their behaviour to search for whales as they lose the maneuverability required to hunt seals. The investigating team concluded that the importance of whale carcasses, particularly for the largest white sharks, has been underestimated. In another documented incident, white sharks were observed scavenging on a whale carcass alongside tiger sharks.
Stomach contents of great whites also indicate that whale sharks, both juvenile and adult, may be included on the animal's menu, though whether this is active hunting or scavenging is not known at present.
Great white sharks were previously thought to reach sexual maturity at around 15 years of age, but are now believed to take far longer; male great white sharks reach sexual maturity at age 26, while females take 33 years to reach sexual maturity. Maximum life span was originally believed to be more than 30 years, but a study by the Woods Hole Oceanographic Institution placed it at upwards of 70 years. Examinations of vertebral growth ring count gave a maximum male age of 73 years and a maximum female age of 40 years for the specimens studied. The shark's late sexual maturity, low reproductive rate, long gestation period of 11 months and slow growth make it vulnerable to pressures such as overfishing and environmental change.
Little is known about the great white shark's mating habits, and mating behavior has not yet been observed in this species. It is possible that whale carcasses are an important location for sexually mature sharks to meet for mating. Birth has never been observed, but pregnant females have been examined. Great white sharks are ovoviviparous, which means eggs develop and hatch in the uterus and continue to develop until birth. The great white has an 11-month gestation period. The shark pup's powerful jaws begin to develop in the first month. The unborn sharks participate in oophagy, in which they feed on ova produced by the mother. Delivery is in spring and summer. The largest number of pups recorded for this species is 14 pups from a single mother measuring 4.5 m (15 ft) that was killed incidentally off Taiwan in 2019. The Northern Pacific population of great whites is suspected to breed off the Sea of Cortez, as evidenced by local fishermen who say they have caught them, and by teeth found at dump sites for discarded parts from their catches.
A breach is the result of a high speed approach to the surface with the resulting momentum taking the shark partially or completely clear of the water. This is a hunting technique employed by great white sharks whilst hunting seals. This technique is often used on Cape fur seals at Seal Island in False Bay, South Africa. Because the behavior is unpredictable, it is very hard to document. It was first photographed by Chris Fallows and Rob Lawrence, who developed the technique of towing a slow-moving seal decoy to trick the sharks to breach. Between April and September, scientists may observe around 600 breaches. The seals swim on the surface and the great white sharks launch their predatory attack from the deeper water below. They can reach speeds of up to 40 km/h (25 mph) and can at times launch themselves more than 3.0 m (10 ft) into the air. Just under half of observed breach attacks are successful. In 2011, a 3 m (9.8 ft) long shark jumped onto a seven-person research vessel off Seal Island in Mossel Bay. The crew were undertaking a population study using sardines as bait, and the incident was judged not to be an attack on the boat but an accident.
Interspecific competition between the great white shark and the orca is probable in regions where dietary preferences of both species may overlap. An incident was documented on 4 October 1997, in the Farallon Islands off California in the United States. An estimated 4.7–5.3 m (15–17 ft) female orca immobilized an estimated 3–4 m (9.8–13.1 ft) great white shark. The orca held the shark upside down to induce tonic immobility and kept the shark still for fifteen minutes, causing it to suffocate. The orca then proceeded to eat the dead shark's liver. It is believed that the scent of the slain shark's carcass caused all the great whites in the region to flee, forfeiting an opportunity for a great seasonal feed. Another similar attack apparently occurred there in 2000, but its outcome is not clear. After both attacks, the local population of about 100 great whites vanished. Following the 2000 incident, a great white with a satellite tag was found to have immediately submerged to a depth of 500 m (1,600 ft) and swum to Hawaii. In 2015, a pod of orcas was recorded to have killed a great white shark off South Australia. In 2017, three great whites were found washed ashore near Gansbaai, South Africa, with their body cavities torn open and the livers removed by what is likely to have been killer whales. Killer whales also generally impact great white distribution. Studies published in 2019 of killer whale and great white shark distribution and interactions around the Farallon Islands indicate that the cetaceans impact the sharks negatively, with brief appearances by killer whales causing the sharks to seek out new feeding areas until the next season. Occasionally, however, great whites have been seen swimming near orcas without fear.
Conservation status

It is unclear how much of a concurrent increase in fishing for great white sharks has caused the decline of great white shark populations from the 1970s to the present. No accurate global population numbers are available, but the great white shark is now considered vulnerable. Sharks taken during the long interval between birth and sexual maturity never reproduce, making population recovery and growth difficult.
The IUCN notes that very little is known about the actual status of the great white shark, but as it appears uncommon compared to other widely distributed species, it is considered vulnerable. It is included in Appendix II of CITES, meaning that international trade in the species requires a permit. As of March 2010, it has also been included in Annex I of the CMS Migratory Sharks MoU, which strives for increased international understanding and coordination for the protection of certain migratory sharks. A February 2010 study by Barbara Block of Stanford University estimated the world population of great white sharks to be lower than 3,500 individuals, making the species more vulnerable to extinction than the tiger, whose population is in the same range. According to another study from 2014 by George H. Burgess, Florida Museum of Natural History, University of Florida, there are about 2,000 great white sharks near the California coast, which is 10 times higher than the previous estimate of 219 by Barbara Block.
Fishermen target many sharks for their jaws, teeth, and fins, and as game fish in general. The great white shark, however, is rarely an object of commercial fishing, although its flesh is considered valuable. If incidentally captured (as happens, for example, in some tonnare in the Mediterranean), it is misleadingly sold as smooth-hound shark.
In Australia

The great white shark was declared Vulnerable by the Australian Government in 1999 because of significant population decline and is currently protected under the Environment Protection and Biodiversity Conservation (EPBC) Act. The causes of decline prior to protection included mortality from sport fishing harvests as well as being caught in beach protection netting.
The national conservation status of the great white shark is reflected by all Australian states under their respective laws, granting the species full protection throughout Australia regardless of jurisdiction. Many states had prohibited the killing or possession of great white sharks prior to national legislation coming into effect. The great white shark is further listed as Threatened in Victoria under the Flora and Fauna Guarantee Act, and as rare or likely to become extinct under Schedule 5 of the Wildlife Conservation Act in Western Australia.
In 2002, the Australian government created the White Shark Recovery Plan, implementing government-mandated conservation research and monitoring for conservation in addition to federal protection and stronger regulation of shark-related trade and tourism activities. An updated recovery plan was published in 2013 to review progress, research findings, and to implement further conservation actions. A study in 2012 revealed that Australia's White Shark population was separated by Bass Strait into genetically distinct eastern and western populations, indicating a need for the development of regional conservation strategies.
Presently, human-caused shark mortality is continuing, primarily from accidental and illegal catching in commercial and recreational fishing as well as from being caught in beach protection netting, and the populations of great white shark in Australia are yet to recover.
In spite of official protections in Australia, great white sharks continue to be killed in state "shark control" programs within Australia. For example, the government of Queensland has a "shark control" program (shark culling) which kills great white sharks (as well as other marine life) using shark nets and drum lines with baited hooks. In Queensland, great white sharks that are found alive on the baited hooks are shot. The government of New South Wales also kills great white sharks in its "shark control" program. Partly because of these programs, shark numbers in eastern Australia have decreased.
The Australasian population of great white sharks is believed to be in excess of 8,000-10,000 individuals according to genetic research studies done by CSIRO, with an adult population estimated to be around 2,210 individuals in both Eastern and Western Australia. The annual survival rate for juveniles in these two separate populations was estimated in the same study to be close to 73 percent, while adult sharks had a 93 percent annual survival rate. Whether or not mortality rates in great white sharks have declined, or the population has increased as a result of the protection of this species in Australian waters is as yet unknown due to the slow growth rates of this species.
In New Zealand
As of April 2007, great white sharks were fully protected within 370 km (230 mi) of New Zealand and additionally from fishing by New Zealand-flagged boats outside this range. The maximum penalty is a $250,000 fine and up to six months in prison. In June 2018 the New Zealand Department of Conservation classified the great white shark under the New Zealand Threat Classification System as "Nationally Endangered". The species meets the criteria for this classification as there exists a moderate, stable population of between 1000 and 5000 mature individuals. This classification has the qualifiers "Data Poor" and "Threatened Overseas".
In North America
In 2013, great white sharks were added to California's Endangered Species Act. From data collected, the population of great whites in the North Pacific was estimated to be fewer than 340 individuals. Research also reveals these sharks are genetically distinct from other members of their species elsewhere in Africa, Australia, and the east coast of North America, having been isolated from other populations.
In 2015 Massachusetts banned catching, cage diving, feeding, towing decoys, or baiting and chumming for its significant and highly predictable migratory great white population without an appropriate research permit. The goal of these restrictions is to protect both the sharks and public health.
Relationship with humans
Shark bite incidents
Of all shark species, the great white shark is responsible for by far the largest number of recorded shark bite incidents on humans, with 272 documented unprovoked bite incidents on humans as of 2012.
More than any documented bite incident, Peter Benchley's best-selling novel Jaws and the subsequent 1975 film adaptation directed by Steven Spielberg provided the great white shark with the image of being a "man eater" in the public mind. While great white sharks have killed humans in at least 74 documented unprovoked bite incidents, they typically do not target them: for example, in the Mediterranean Sea there have been 31 confirmed bite incidents against humans in the last two centuries, most of which were non-fatal. Many of the incidents seemed to be "test-bites". Great white sharks also test-bite buoys, flotsam, and other unfamiliar objects, and they might grab a human or a surfboard to identify what it is.
Contrary to popular belief, great white sharks do not mistake humans for seals. Many bite incidents occur in waters with low visibility or other situations which impair the shark's senses. The species appears to not like the taste of humans, or at least finds the taste unfamiliar. Further research shows that they can tell in one bite whether or not the object is worth predating upon. Humans, for the most part, are too bony for their liking. They much prefer seals, which are fat and rich in protein.
Humans are not appropriate prey because the shark's digestion is too slow to cope with a human's high ratio of bone to muscle and fat. Accordingly, in most recorded shark bite incidents, great whites broke off contact after the first bite. Fatalities are usually caused by blood loss from the initial bite rather than from critical organ loss or from whole consumption. From 1990 to 2011 there have been a total of 139 unprovoked great white shark bite incidents, 29 of which were fatal.
However, some researchers have hypothesized that the reason the proportion of fatalities is low is not because sharks do not like human flesh, but because humans are often able to escape after the first bite. In the 1980s, John McCosker, Chair of Aquatic Biology at the California Academy of Sciences, noted that divers who dove solo and were bitten by great whites were generally at least partially consumed, while divers who followed the buddy system were generally rescued by their companion. McCosker and Timothy C. Tricas, an author and professor at the University of Hawaii, suggest that a standard pattern for great whites is to make an initial devastating attack and then wait for the prey to weaken before consuming the wounded animal. Humans' ability to move out of reach with the help of others, thus foiling the attack, is unusual for a great white's prey.
Shark culling is the deliberate killing of sharks by a government in an attempt to reduce shark attacks; shark culling is often called "shark control". These programs have been criticized by environmentalists and scientists — they say these programs harm the marine ecosystem; they also say such programs are "outdated, cruel, and ineffective". Many different species (dolphins, turtles, etc.) are also killed in these programs (because of their use of shark nets and drum lines) — 15,135 marine animals were killed in New South Wales' nets between 1950 and 2008, and 84,000 marine animals were killed by Queensland authorities from 1962 to 2015.
Great white sharks are currently killed in both Queensland and New South Wales in "shark control" (shark culling) programs. Queensland uses shark nets and drum lines with baited hooks, while New South Wales only uses nets. From 1962 to 2018, Queensland authorities killed about 50,000 sharks, many of which were great whites. From 2013 to 2014 alone, 667 sharks were killed by Queensland authorities, including great white sharks. In Queensland, great white sharks found alive on the drum lines are shot. In New South Wales, between 1950 and 2008, a total of 577 great white sharks were killed in nets. Between September 2017 and April 2018, 14 great white sharks were killed in New South Wales.
KwaZulu-Natal (an area of South Africa) also has a "shark control" program that kills great white sharks and other marine life. In a 30-year period, more than 33,000 sharks were killed in KwaZulu-Natal's shark-killing program, including great whites.
In 2014 the state government of Western Australia led by Premier Colin Barnett implemented a policy of killing large sharks. The policy, colloquially referred to as the Western Australian shark cull, was intended to protect users of the marine environment from shark bite incidents, following the deaths of seven people on the Western Australian coastline in the years 2010–2013. Baited drum lines were deployed near popular beaches using hooks designed to catch great white sharks, as well as bull and tiger sharks. Large sharks found hooked but still alive were shot and their bodies discarded at sea. The government claimed they were not culling the sharks, but were using a "targeted, localised, hazard mitigation strategy". Barnett described opposition as "ludicrous" and "extreme", and said that nothing could change his mind. This policy was met with widespread condemnation from the scientific community, which showed that species responsible for bite incidents were notoriously hard to identify, that the drum lines failed to capture white sharks, as intended, and that the government also failed to show any correlation between their drum line policy and a decrease in shark bite incidents in the region.
Attacks on boats
Great white sharks infrequently bite and sometimes even sink boats. Only five of the 108 authenticated unprovoked shark bite incidents reported from the Pacific Coast during the 20th century involved kayakers. In a few cases they have bitten boats up to 10 m (33 ft) in length. They have bumped or knocked people overboard, usually biting the boat from the stern. In one case in 1936, a large shark leapt completely into the South African fishing boat Lucky Jim, knocking a crewman into the sea. Tricas and McCosker's underwater observations suggest that sharks are attracted to boats by the electrical fields they generate, which are picked up by the ampullae of Lorenzini and confuse the shark about whether or not wounded prey might be nearby.
In captivity

Prior to August 1981, no great white shark in captivity lived longer than 11 days. In August 1981, a great white survived for 16 days at SeaWorld San Diego before being released. The idea of containing a live great white at SeaWorld Orlando was used in the 1983 film Jaws 3-D.
Monterey Bay Aquarium first attempted to display a great white in 1984, but the shark died after 11 days because it did not eat. In July 2003, Monterey researchers captured a small female and kept it in a large netted pen near Malibu for five days. They had the rare success of getting the shark to feed in captivity before its release. Not until September 2004 was the aquarium able to place a great white on long-term exhibit. A young female, which was caught off the coast of Ventura, was kept in the aquarium's 3,800,000 l (1,000,000 US gal) Outer Bay exhibit for 198 days before she was released in March 2005. She was tracked for 30 days after release. On the evening of 31 August 2006, the aquarium introduced a juvenile male caught outside Santa Monica Bay. His first meal as a captive was a large salmon steak on 8 September 2006, and as of that date, he was estimated to be 1.72 m (68 in) in length and to weigh approximately 47 kg (104 lb). He was released on 16 January 2007, after 137 days in captivity.
Monterey Bay Aquarium housed a third great white, a juvenile male, for 162 days between 27 August 2007, and 5 February 2008. On arrival, he was 1.4 m (4.6 ft) long and weighed 30.6 kg (67 lb). He grew to 1.8 m (5.9 ft) and 64 kg (141 lb) before release. A juvenile female came to the Outer Bay Exhibit on 27 August 2008. While she did swim well, the shark fed only one time during her stay and was tagged and released on 7 September 2008. Another juvenile female was captured near Malibu on 12 August 2009, introduced to the Outer Bay exhibit on 26 August 2009, and was successfully released into the wild on 4 November 2009. The Monterey Bay Aquarium added a 1.4 m (4.6 ft) long male into their redesigned "Open Sea" exhibit on 31 August 2011. The animal was captured in the waters off Malibu.
One of the largest adult great whites ever exhibited was at Japan's Okinawa Churaumi Aquarium in 2016, where a 3.5 m (11 ft) male was exhibited for three days before dying. Probably the most famous captive was a 2.4 m (7.9 ft) female named Sandy, which in August 1980 became the only great white to be housed at the California Academy of Sciences' Steinhart Aquarium in San Francisco, California. She was released because she would not eat and constantly bumped against the walls.
Cage diving is most common at sites where great whites are frequent, including the coast of South Africa, the Neptune Islands in South Australia, and Guadalupe Island in Baja California. The popularity of cage diving and swimming with sharks is the focus of a booming tourist industry. A common practice is to chum the water with pieces of fish to attract the sharks. These practices may make sharks more accustomed to people in their environment and lead them to associate human activity with food, a potentially dangerous situation. By drawing bait on a wire towards the cage, tour operators lure the shark to the cage, possibly striking it, exacerbating this problem. Other operators draw the bait away from the cage, causing the shark to swim past the divers.
At present, hang baits are illegal off Isla Guadalupe and reputable dive operators do not use them. Operators in South Africa and Australia continue to use hang baits and pinniped decoys. In South Australia, playing rock music recordings underwater, including the AC/DC album Back in Black has also been used experimentally to attract sharks.
Companies object to being blamed for shark bite incidents, pointing out that lightning tends to strike humans more often than sharks bite humans. Their position is that further research needs to be done before banning practices such as chumming, which may alter natural behavior. One compromise is to only use chum in areas where whites actively patrol anyway, well away from human leisure areas. Also, responsible dive operators do not feed sharks. Only sharks that are willing to scavenge follow the chum trail, and if they find no food at the end then the shark soon swims off and does not associate chum with a meal. It has been suggested that government licensing strategies may help enforce responsible tourism practices.
The shark tourist industry has some financial leverage in conserving this animal. A single set of great white jaws can fetch up to £20,000. That is a fraction of the tourism value of a live shark; tourism is a more sustainable economic activity than shark fishing. For example, the dive industry in Gansbaai, South Africa consists of six boat operators with each boat guiding 30 people each day. With fees between £50 and £150 per person, a single live shark that visits each boat can create anywhere between £9,000 and £27,000 of revenue daily.
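To make the arithmetic behind these revenue figures explicit, here is a minimal sketch based only on the numbers stated above (six boat operators, 30 guests per boat, fees of £50–£150 per person); the function and variable names are illustrative and not taken from any source.

```python
def daily_revenue(boats: int, guests_per_boat: int, fee_gbp: int) -> int:
    """Daily cage-diving revenue if a single live shark visits every boat."""
    return boats * guests_per_boat * fee_gbp

BOATS = 6     # boat operators at Gansbaai, as stated above
GUESTS = 30   # guests guided per boat each day

low = daily_revenue(BOATS, GUESTS, 50)    # fee of £50 per person
high = daily_revenue(BOATS, GUESTS, 150)  # fee of £150 per person

print(f"Daily revenue range: £{low:,} to £{high:,}")
# Prints: Daily revenue range: £9,000 to £27,000
```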
A great white shark approaches a cage
Tourists in a cage near Gansbaai
- The Devil's Teeth by Susan Casey.
- Close to Shore by Michael Capuzzo about the Jersey Shore shark attacks of 1916.
- Twelve Days of Terror by Richard Fernicola about the same events.
References

- Gottfried, M. D.; Fordyce, R. E. (2001). "An associated specimen of Carcharodon angustidens (Chondrichthyes, Lamnidae) from the Late Oligocene of New Zealand, with comments on Carcharodon interrelationships". Journal of Vertebrate Paleontology. 21 (4): 730–739. doi:10.1671/0272-4634(2001)021[0730:AASOCA]2.0.CO;2. JSTOR 20062013.
- Rigby, C.L.; Barreto, R.; Carlson, J.; Fernando, D.; Fordham, S.; Francis, M.P.; Herman, K.; Jabado, R.W.; Liu, K.M.; Lowe, C.G.; Marshall, A.; Pacoureau, N.; Romanov, E.; Sherley, R.B.; Winker, H. (2019). "Carcharodon carcharias". IUCN Red List of Threatened Species. IUCN. 2019: e.T3855A2878674. Retrieved 19 December 2019.
- "Great white sharks: 10 myths debunked". The Guardian. Retrieved 3 June 2016.
- Viegas, Jennifer. "Largest Great White Shark Don't Outweigh Whales, but They Hold Their Own". Discovery Channel. Retrieved 19 January 2010.
- "Just the Facts Please". GreatWhite.org. Retrieved 3 June 2016.
- Parrish, M. "How Big are Great White Sharks?". Smithsonian National Museum of Natural History Ocean Portal. Retrieved 3 June 2016.
- "Carcharodon carcharias". Animal Diversity Web. Retrieved 5 June 2016.
- "New study finds extreme longevity in white sharks". Science Daily. 9 January 2014.
- Ghose, Tia (19 February 2015). "Great White Sharks Are Late Bloomers". LiveScience.com.
- Wright, Bruce A. (2007) Alaska's Great White Sharks. Lulu.com. p. 27. ISBN 0-615-15595-2.
- Thomas, Pete (5 April 2010). "Great white shark amazes scientists with 4000-foot dive into abyss". GrindTV. Archived from the original on 17 August 2012.
- Currents of Contrast: Life in Southern Africa's Two Oceans. Struik. 2005. pp. 31–. ISBN 978-1-77007-086-8.
- Knickle, Craig. "Tiger Shark". Florida Museum of Natural History Ichthyology Department. Archived from the original on 7 July 2013. Retrieved 2 July 2009.
- "ISAF Statistics on Attacking Species of Shark". Florida Museum of Natural History University of Florida. Archived from the original on 24 April 2012. Retrieved 4 May 2008.CS1 maint: unfit url (link)
- "Carcharodon carcharias". UNEP-WCMC Species Database: CITES-Listed Species On the World Wide Web. Archived from the original on 16 June 2013. Retrieved 8 April 2010.
- Government of Australia Department of Sustainability, Environment, Water, Population and Communities (2013). Recovery Plan for the White Shark (Carcharodon carcharias) (Report).
- Hile, Jennifer (23 January 2004). "Great White Shark Attacks: Defanging the Myths". Marine Biology. National Geographic. Archived from the original on 26 April 2009. Retrieved 2 May 2010.
- ISAF Statistics on Attacking Species of Shark
- Linnaeus, Carl (1758). Systema Naturae per Regna Tria Naturae, Secundum Classes, Ordines, Genera, Species, cum Characteribus, Differentiis, Synonymis, Locis (in Latin). Vol. I (10th revised ed.). Holmiae: (Laurentii Salvii).
- "The Great White Shark". The Enviro Facts Project. Archived from the original on 13 June 2009. Retrieved 9 July 2007.
- Portell, R. W.; Hubbell, G.; Donovan, S. K.; Green, J. L.; Harper, D. A.; Pickerill, R. (2008). "Miocene sharks in the Kendeace and Grand Bay formations of Carriacou, The Grenadines, Lesser Antilles". Caribbean Journal of Science. 44 (3): 279–286. doi:10.18475/cjos.v44i3.a2.
- "New Ancient Shark Species Gives Insight Into Origin of Great White". sciencedaily.com. University of Florida. 14 November 2012.
- Kevin G. N.; Charles N. C.; Gregory A. W. (2006). "Tracing the Ancestry of the Great White Shark, Carcharodon carcharias, Using Morphometric Analyses of Fossil Teeth" (PDF). Journal of Vertebrate Paleontology. 26 (4): 806–814. doi:10.1671/0272-4634(2006)26[806:ttaotg]2.0.co;2. JSTOR 4524633. Archived from the original on 16 December 2008. Retrieved 25 December 2007.
- "Areal Distribution of the White Shark". National Capital Freenet. Retrieved 16 October 2010.
- Kabasakal, H. (2014). "The status of the great white shark (Carcharodon carcharias) in Turkey's waters" (PDF). Marine Biodiversity Records. 7 (e109): 1–8. doi:10.1017/S1755267214000980. Retrieved 4 September 2017.
- "Seabird predation by white shark, Carcharodon carcharias, and Cape fur seal, Arctocephalus pusillus pusillus, at Dyer Island". South African Journal of Wildlife Research. Retrieved 22 May 2017.
- "South Africa – Australia – South Africa". White Shark Trust.
- Thomas, Pete (29 September 2006). "The Great White Way". Los Angeles Times. Retrieved 1 October 2006.
- Curtis, Tobey; McCandless, Camilla; Carlson, John; Skomal, Gregory; Kohler, Nancy; Natanson, Lisa; Burgess, George; Hoey, John; Pratt, Harold (June 2014). "Seasonal Distribution and Historic Trends in Abundance of White Sharks, Carcharodon carcharias, in the Western North Atlantic Ocean" (PDF). PLOS ONE. 9: 12.
- Fieldstadt, Elisha (2 August 2019). "More than 150 great white sharks sightings logged off Cape Cod, Massachusetts, since June". NBC News. Retrieved 5 August 2019.
- Wasser, Miriam (2 August 2019). "Seals On Cape Cod Are More Than Just Shark Bait". WBUR. Retrieved 5 August 2019.
- Annear, Steve (18 June 2019). "Tracking great white sharks off Cape Cod could help protect beachgoers". The Boston Globe. Retrieved 5 August 2019.
- Finucane, Martin (23 June 2018). "Great white sharks like to hang out in ocean eddies, new study says". The Boston Globe. Retrieved 23 June 2018.
- "Confirmed: 'Albino' great white shark washes up in Australia | Sharks | Earth Touch News". Earth Touch News Network.
- "Great White Sharks, Carcharodon carcharias at MarineBio.org". Marine Bio. Retrieved 20 August 2012.
- "Great White Sharks Have Blue Eyes". The Atlantic White Shark Conservancy. Retrieved 2 July 2018.
- Echenique, E. J. "A Shark to Remember: The Story of a Great White Caught in 1945". Archived from the original on 27 April 2012. Retrieved 22 January 2013.CS1 maint: BOT: original-url status unknown (link) Home Page of Henry F. Mollet, Research Affiliate, Moss Landing Marine Laboratories.
- Tricas, T. C.; McCosker, J. E. (12 July 1984). "Predatory behaviour of the white shark (Carcharodon carcharias), with notes on its biology". Proceedings of the California Academy of Sciences. California Academy of Sciences. 43 (14): 221–238. Retrieved 22 January 2013.
- Wood, Gerald (1983). The Guinness Book of Animal Facts and Feats. ISBN 978-0-85112-235-9.
- Ellis, Richard and John E. McCosker. 1995. Great White Shark. Stanford University Press, ISBN 0-8047-2529-2
- "ADW: Carcharodon carcharias: INFORMATION". Animal Diversity Web. Retrieved 16 May 2016.
- Cappo, Michael (1988). "Size and age of the white pointer shark, Carcharodon carcharias (Linnaeus)". SAFISH. 13 (1): 11–13. Archived from the original on 6 January 2007.CS1 maint: BOT: original-url status unknown (link)
- Taylor, Leighton R. (1 January 1993). Sharks of Hawaii: Their Biology and Cultural Significance. University of Hawaii Press. p. 65. ISBN 978-0-8248-1562-2.
- Wroe, S.; Huber, D. R.; Lowry, M.; McHenry, C.; Moreno, K.; Clausen, P.; Ferrara, T. L.; Cunningham, E.; Dean, M. N.; Summers, A. P. (2008). "Three-dimensional computer analysis of white shark jaw mechanics: how hard can a great white bite?" (PDF). Journal of Zoology. 276 (4): 336–342. doi:10.1111/j.1469-7998.2008.00494.x.
- "Great White Shark". National Geographic. Retrieved 24 July 2010.
- Mollet, H. F. (2008), White Shark Summary Carcharodon carcharias (Linnaeus, 1758)], Home Page of Henry F. Mollet, Research Affiliate, Moss Landing Marine Laboratories, archived from the original on 31 May 2012
- Klimley, Peter; Ainley, David (1996). Great White Sharks: The Biology of Carcharodon carcharias. Academic Press. pp. 91–108. ISBN 978-0-12-415031-7.
- Jury, Ken (1987). "Huge 'White Pointer' Encounter". SAFISH. 2 (3): 12–13. Archived from the original on 17 April 2012.CS1 maint: BOT: original-url status unknown (link)
- "Learn More About Deep Blue, One of the Biggest Great White Sharks Ever Filmed". Discovery.
- Staff, H. N. N. "Biggest great white shark on record seen in Hawaii waters". Hawaii News.
- Ciaccia, Chris (16 January 2019). "Great white shark, called 'Deep Blue,' spotted near Hawaii". Fox News.
- "Deep Blue, perhaps the largest known great white shark, spotted off Hawaii". 16 January 2019.
- Armitage, Stefan (12 July 2019). "Fisherman encounters '30-foot' great white shark". Vt.co. Retrieved 28 November 2019.
- "'Jaws does exist' Monster '30ft' shark spotted near set of Steven Spielberg classic". The Daily Star. 12 July 2019. Retrieved 28 November 2019.
- Ferreira, Craig (2011). Great White Sharks On Their Best Behavior.
- Froese, Rainer and Pauly, Daniel, eds. (2011). "Galeocerdo cuvier" in FishBase. July 2011 version.
- "Summary of Large Tiger Sharks Galeocerdo cuvier (Peron & LeSueur, 1822)". Archived from the original on 10 April 2012. Retrieved 3 May 2010.CS1 maint: BOT: original-url status unknown (link)
- Eagle, Dane. "Greenland Shark". Florida Museum of Natural History. Retrieved 1 September 2012.
- Martin, R. Aidan. "Pacific Sleeper Shark". ReefQuest Centre for Shark Research. Biology of Sharks and Rays. Archived from the original on 20 April 2013. Retrieved 1 September 2012.
- "The physiology of the ampullae of Lorenzini in sharks". Biology Dept., Davidson College. Biology @ Davidson. Archived from the original on 24 November 2010. Retrieved 20 August 2012.
- Martin, R. Aidan. "Body Temperature of the Great white and Other Lamnoid Sharks". ReefQuest Centre for Shark research. Retrieved 16 October 2010.
- White Shark Biological Profile from the Florida Museum of Natural History
- Barber, Elizabeth (18 July 2013). "Great white shark packs its lunch in its liver before a big trip". The Christian Science Monitor. Archived from the original on 2 August 2013.
- Jordan, Rob (17 July 2013). "Great White Sharks' Fuel for Oceanic Voyages: Liver Oil". sciencedaily.com. Stanford University.
- Solly, Meilan (3 April 2019). "Great White Sharks Thrive Despite Heavy Metals Coursing Through Their Veins". Smithsonian Magazine. Archived from the original on 30 June 2019. Retrieved 30 June 2019.
- Medina, Samantha (27 July 2007). "Measuring the great white's bite". Cosmos Magazine. Archived from the original on 5 May 2012. Retrieved 1 September 2012.
- Martin, R. Aidan; Martin, Anne (October 2006). "Sociable Killers: New studies of the white shark (aka great white) show that its social life and hunting strategies are surprisingly complex". Natural History. Retrieved 24 November 2019. Cite magazine requires
- Martin, R. Aidan; Martin, Anne. "Sociable Killers". Natural History Magazine. Archived from the original on 15 May 2013. Retrieved 30 September 2006.
- Johnson, R. L.; Venter, A.; Bester, M.N.; Oosthuizen, W.H. (2006). "Seabird predation by white shark Carcharodon Carcharias and Cape fur seal Arctocephalus pusillus pusillus at Dyer Island" (PDF). South African Journal of Wildlife Research. South Africa. 36 (1): 23–32. Archived from the original (PDF) on 3 April 2012.
- Teenage great white sharks are awkward biters. Science Daily (2 December 2010)
- White shark diets show surprising variability, vary with age and among individuals. Science Daily (29 September 2012)
- Estrada, J. A.; Rice, Aaron N.; Natanson, Lisa J.; Skomal, Gregory B. (2006). "Use of isotopic analysis of vertebrae in reconstructing ontogenetic feeding ecology in white sharks". Ecology. 87 (4): 829–834. doi:10.1890/0012-9658(2006)87[829:UOIAOV]2.0.CO;2. PMID 16676526.
- Fergusson, I. K.; Compagno, L. J.; Marks, M. A. (2000). "Predation by white sharks Carcharodon carcharias (Chondrichthyes: Lamnidae) upon chelonians, with new records from the Mediterranean Sea and a first record of the ocean sunfish Mola mola (Osteichthyes: Molidae) as stomach contents". Environmental Biology of Fishes. 58 (4): 447–453. doi:10.1023/a:1007639324360.
- Hussey, N. E., McCann, H. M., Cliff, G., Dudley, S. F., Wintner, S. P., & Fisk, A. T. (2012). Size-based analysis of diet and trophic position of the white shark (Carcharodon carcharias) in South African waters. Global Perspectives on the Biology and Life History of the White Shark. (Ed. ML Domeier.) pp. 27–49.
- "Catch as Catch Can". ReefQuest Centre for Shark Research. Retrieved 16 October 2010.
- Le Boeuf, B. J.; Crocker, D. E.; Costa, D. P.; Blackwell, S. B.; Webb, P. M.; Houser, D. S. (2000). "Foraging ecology of northern elephant seals". Ecological Monographs. 70 (3): 353–382. doi:10.2307/2657207. JSTOR 2657207.
- Haley, M. P.; Deutsch, C. J.; Le Boeuf, B. J. (1994). "Size, dominance and copulatory success in male northern elephant seals, Mirounga angustirostris". Animal Behaviour. 48 (6): 1249–1260. doi:10.1006/anbe.1994.1361.
- Weng, K. C.; Boustany, A. M.; Pyle, P.; Anderson, S. D.; Brown, A.; Block, B. A. (2007). "Migration and habitat of white sharks (Carcharodon carcharias) in the eastern Pacific Ocean". Marine Biology. 152 (4): 877–894. doi:10.1007/s00227-007-0739-4.
- Martin, Rick. "Predatory Behavior of Pacific Coast White Sharks". Shark Research Committee. Archived from the original on 30 July 2012.
- Chewning, Dana; Hall, Matt. "Carcharodon carcharias (Great white shark)". Animal Diversity Web. Retrieved 15 August 2019.
- Heithaus, Michael (2001). "Predator–prey and competitive interactions between sharks (order Selachii) and dolphins (suborder Odontoceti): a review" (PDF). Journal of Zoology. 253: 53–68. CiteSeerX 10.1.1.404.130. doi:10.1017/S0952836901000061. Archived from the original (PDF) on 15 January 2016. Retrieved 26 February 2010.
- Long, Douglas (1991). "Apparent Predation by a White Shark Carcharodon carcharias on a Pygmy Sperm Whale Kogia breviceps" (PDF). Fishery Bulletin. 89: 538–540.
- Kays, R. W., & Wilson, D. E. (2009). Mammals of North America. Princeton University Press.
- Baird, R. W.; Webster, D. L.; Schorr, G. S.; McSweeney, D. J.; Barlow, J. (2008). "Diet variation in beaked whale diving behavior" (PDF). Marine Mammal Science. 24 (3): 630–642. doi:10.1111/j.1748-7692.2008.00211.x. hdl:10945/697.
- "How Fast Can a Shark Swim?". ReefQuest Centre for Shark Research.
- "White Shark Predatory Behavior at Seal Island". ReefQuest Centre for Shark Research.
- Krkosek, Martin; Fallows, Chris; Gallagher, Austin J.; Hammerschlag, Neil (2013). "White Sharks (Carcharodon carcharias) Scavenging on Whales and Its Potential Role in Further Shaping the Ecology of an Apex Predator". PLoS ONE. 8 (4): e60797. doi:10.1371/journal.pone.0060797. PMC 3621969. PMID 23585850.
- Dudley, Sheldon F. J.; Anderson-Reade, Michael D.; Thompson, Greg S.; McMullen, Paul B. (2000). "Concurrent scavenging off a whale carcass by great white sharks, Carcharodon carcharias, and tiger sharks, Galeocerdo cuvier" (PDF). Marine Biology. Fishery Bulletin. Archived from the original (PDF) on 27 May 2010. Retrieved 4 May 2010.
- Owens, Brian (12 February 2016). "White shark's diet may include biggest fish of all: whale shark". New Scientist.
- Moore, G. I.; Newbrey, M. G. (2015). "Whale shark on a white shark's menu". Marine Biodiversity. 46 (4): 745. doi:10.1007/s12526-015-0430-9.
- "Natural History of the White Shark". PRBO Conservation Science. 2 May 2010. Archived from the original on 3 July 2013.
- "Legendary Great White Shark Was Just A Teenager When Killed, New Research Reveals". The Inquisitr News.
- "Carcharodon carcharias, Great White Sharks". marinebio.org.
- "Brief Overview of the Great White Shark (Carcharodon carcharias)". Elasmo Research. Retrieved 20 August 2012.
- Animals, Kimberly Hickok 2019-03-22T19:03:40Z. "Enormous Great White Shark Pregnant with Record 14 Pups Was Caught and Sold in Taiwan". livescience.com.
- Martin, R. Aidan. "White Shark Breaching". ReefQuest Centre for Shark Research. Retrieved 18 April 2012.
- Martin, R. A.; Hammerschlag, N.; Collier, R. S.; Fallows, C. (2005). "Predatory behaviour of white sharks (Carcharodon carcharias) at Seal Island, South Africa". Journal of the Marine Biological Association of the UK. 85 (5): 1121. CiteSeerX 10.1.1.523.6178. doi:10.1017/S002531540501218X.
- Rice, Xan (19 July 2011). "Great white shark jumps from sea into research boat". The Guardian. London: Guardian Media Group. Retrieved 20 July 2011.
Marine researchers in South Africa had a narrow escape after a 3 metres (10 ft) long great white shark breached the surface of the sea and leapt into their boat, becoming trapped on deck for more than an hour. […] Enrico Gennari, an expert on great white sharks, [...] said it was almost certainly an accident rather than an attack on the boat.
- Pyle, Peter; Schramm, Mary Jane; Keiper, Carol; Anderson, Scot D. (26 August 2006). "Predation on a white shark (Carcharodon carcharias) by a killer whale (Orcinus orca) and a possible case of competitive displacement" (PDF). Marine Mammal Science. 15 (2): 563–568. doi:10.1111/j.1748-7692.1999.tb00822.x. Archived from the original (PDF) on 22 March 2012. Retrieved 8 May 2010.
- "Nature Shock Series Premiere: The Whale That Ate the Great White". Tvthrong.co.uk. 4 October 1997. Archived from the original on 6 April 2012. Retrieved 16 October 2010.
- "Killer Whale Documentary Part 4". youtube.com.
- Turner, Pamela S. (October–November 2004). "Showdown at Sea: What happens when great white sharks go fin-to-fin with killer whales?". National Wildlife. National Wildlife Federation. 42 (6). Archived from the original on 16 January 2011. Retrieved 21 November 2009.
- "Great white shark 'slammed' and killed by a pod of killer whales in South Australia". Australian Broadcasting Corporation. 3 February 2015. Retrieved 10 July 2015.
- Haden, Alexis (6 June 2017). "Killer whales have been killing great white sharks in Cape waters". The South African. Retrieved 27 June 2017.
- Jorgensen, S. J.; et al. (2019). "Killer whales redistribute white shark foraging pressure on seals". Scientific Reports. 9 (1): 6153. doi:10.1038/s41598-019-39356-2.
- Starr, Michell (11 November 2019). "Incredible Footage Reveals Orcas Chasing Off The Ocean's Most Terrifying Predator". Science Alert. Retrieved 24 November 2019.
- "Regulation of Trade in Specimens of Species Included in Appendix II". CITES (1973). Archived from the original on 17 July 2011. Retrieved 8 April 2012.
- "Memorandum of Understanding on the Conservation of Migratory Sharks" (PDF). Convention on Migratory Species. 12 February 2010. Archived from the original (PDF) on 20 April 2013. Retrieved 31 August 2012.
- Sample, Ian (19 February 2010). "Great white shark is more endangered than tiger, claims scientist". The Guardian. Retrieved 14 August 2013.
- Jenkins, P. Nash (24 June 2014). "Beachgoers Beware: The Great White Shark Population Is Growing Again". Time. Retrieved 29 October 2014.
- Gannon, Megan. "Great White Sharks Are Making a Comeback off US Coasts". livescience.com. Retrieved 29 October 2014.
- De Maddalena, Alessandro; Heim, Walter (2012). Mediterranean Great White Sharks: A Comprehensive Study Including All Recorded Sightings. McFarland. ISBN 978-0786458899.
- Government of Australia. "Species Profile and Threats Database – Carcharodon carcharias—Great White Shark". Retrieved 21 August 2013.
- Environment Australia (2002). White Shark (Carcharodon carcharias) Recovery Plan (Report).
- Blower, Dean C.; Pandolfi, John M.; Bruce, Barry D.; Gomez-Cabrera, Maria del C.; Ovenden, Jennifer R. (2012). "Population genetics of Australian white sharks reveals fine-scale spatial structure, transoceanic dispersal events and low effective population sizes". Marine Ecology Progress Series. 455: 229–244. doi:10.3354/meps09659.
- https://www.seashepherd.org.au/apex-harmony/overview/about-the-campaign.html "About the Campaign: Sea Shepherd Working Together With The Community To Establish Sustainable Solutions To Shark Bite Incidents". Retrieved August 29, 2019.
- https://web.archive.org/web/20181002102324/https://www.marineconservation.org.au/pages/shark-culling.html marineconservation.org.au. Shark culling (archived). Archived from the original on 2018-10-02. Retrieved August 30, 2019.
- http://www.onegreenplanet.org/news/brutal-lengths-australia-going-order-keep-sharks-away-tourists/ One Green Planet. Heartbreaking Photos Show the Brutal Lengths Australia Is Going to In Order to ‘Keep Sharks Away From Tourists’. Kelly Wang. Retrieved August 29, 2019.
- "Aussie shark population in staggering decline", NewsComAu, 14 December 2018, retrieved 30 August 2019
- Hillary, Rich; Bradford, Russ; Patterson, Toby. "World-first genetic analysis reveals Aussie white shark numbers". The Conversation.
- "Great white sharks to be protected". The New Zealand Herald. 30 November 2006. Retrieved 30 November 2006.
- Duffy, Clinton A. J.; Francis, Malcolm; Dunn, M. R.; Finucci, Brit; Ford, Richard; Hitchmough, Rod; Rolfe, Jeremy (2018). Conservation status of New Zealand chondrichthyans (chimaeras, sharks and rays), 2016 (PDF). Wellington, New Zealand: Department of Conservation. p. 9. ISBN 9781988514628. OCLC 1042901090.
- Quan, Kristene (4 March 2013). "Great White Sharks Are Now Protected under California Law". Time.
- Williams, Lauren (3 July 2014). "Shark numbers not tanking". Huntington Beach Wave. The Orange County Register. p. 12.
- Burgess, George H.; Bruce, Barry D.; Cailliet, Gregor M.; Goldman, Kenneth J.; Grubbs, R. Dean; Lowe, Christopher J.; MacNeil, M. Aaron; Mollet, Henry F.; Weng, Kevin C.; O'Sullivan, John B. (16 June 2014). "A Re-Evaluation of the Size of the White Shark (Carcharodon carcharias) Population off California, USA". PLoS ONE. 9 (6): e98078. doi:10.1371/journal.pone.0098078. PMC 4059630. PMID 24932483.
- New Regulations Affecting Activity around White Sharks. Commonwealth of Massachusetts, Division of Marine Fisheries (4 June 2015)
- Benchley, Peter (April 2000). "Great white sharks". National Geographic: 12. ISSN 0027-9358.
considering the knowledge accumulated about sharks in the last 25 years, I couldn't possibly write Jaws today … not in good conscience anyway … back then, it was OK to demonize an animal.
- "Great White Shark Attacks: Defanging the Myths". nationalgeographic.com.
- Martin, R. Aidan (2003). "White Shark Attacks: Mistaken Identity". Biology of Sharks and Rays. ReefQuest Centre for Shark Research. Retrieved 30 August 2016.
- "ISAF Statistics for Worldwide Unprovoked White Shark Attacks Since 1990". 10 February 2011. Retrieved 19 August 2011.
- Tricas, T.C.; McCosker, John (1984). "Predatory behavior of the white shark, Carcharodon carcharias, and notes on its biology". Proceedings of the California Academy of Sciences. Series 4. 43 (14): 221–238.
- Phillips, Jack (4 September 2018), Video: Endangered Hammerhead Sharks Dead on Drum Line in Great Barrier Reef, Ntd.tv, archived from the original on 19 September 2018, retrieved 30 August 2019
- Thom Mitchell (20 November 2015), Action for Dolphins. Queensland's Shark Control Program Has Snagged 84,000 Animals, retrieved 30 August 2019
- Mackenzie, Bruce (4 August 2018), Sydney Shark Nets Set to Stay Despite Drumline Success, Swellnet.com., retrieved 30 August 2019
- Shark Nets, Sharkangels.org, retrieved 30 August 2019
- "New measures to combat WA shark risks". Department of Fisheries, Western Australia. 10 December 2013. Retrieved 2 February 2014.
- Arup, Tom (21 January 2014), "Greg Hunt grants WA exemption for shark cull plan", The Sydney Morning Herald, Fairfax Media, archived from the original on 22 January 2014
- "Can governments protect people from killer sharks?". Australian Broadcasting Corporation. 22 December 2013. Retrieved 2 February 2014.
- Australia shark policy to stay, despite threats TVNZ, 20 January 2014.
- "More than 100 shark scientists, including me, oppose the cull in Western Australia". 23 December 2013. Retrieved 31 August 2016.
- "Unprovoked White Shark Attacks on Kayakers". Shark Research Committee. Retrieved 14 September 2008.
- Tricas, Timothy C.; McCosker, John E. (1984). "Predatory Behaviour of the White Shark (Carcharodon carcharias), with Notes on its Biology" (PDF). Proceedings of the California Academy of Sciences. 43 (14): 221–238.
- "Great white shark sets record at California aquarium". USA Today. 2 October 2004. Retrieved 27 September 2006.
- Hopkins, Christopher Dean (8 January 2016). "Great White Shark Dies After Just 3 Days In Captivity At Japan Aquarium". NPR. Archived from the original on 3 April 2017. Retrieved 21 December 2017.
- Gathright, Alan (16 September 2004). "Great white shark puts jaws on display in aquarium tank". San Francisco Chronicle. Retrieved 27 September 2006.
- "White Shark Research Project". Monterey Bay Aquarium. Archived from the original on 19 January 2013. Retrieved 27 September 2006.
- Squatriglia, Chuck (1 September 2003). "Great white shark introduced at Monterey Bay Aquarium". San Francisco Chronicle. Retrieved 27 September 2006.
- "Learn All About Our New White Shark". Monterey Bay Aquarium. Archived from the original on 20 November 2009. Retrieved 28 August 2009.
- Hongo, Jun. "Great White Shark Dies at Aquarium in Japan". WSJ Blogs – Japan Real Time. Retrieved 9 January 2016.
- "Great white shark dies after three days in Japanese aquarium". Telegraph.co.uk. Retrieved 9 January 2016.
- "Electroreception". Elasmo-research. Retrieved 27 September 2006.
- "Shark cage diving". Department of Environment, Water and Natural Resources. Archived from the original on 9 April 2013. Retrieved 11 March 2013.
- Squires, Nick (18 January 1999). "Swimming With Sharks". BBC. Archived from the original on 17 August 2003. Retrieved 21 January 2010.
- Simon, Bob (11 December 2005). "Swimming With Sharks". 60 Minutes. Retrieved 22 January 2010.
- "Blue Water Hunting Successfully". Blue Water Hunter. Archived from the original on 18 August 2012. Retrieved 20 August 2012.
- "A Great white shark's favorite tune? 'Back in Black'" Archived 16 April 2016 at the Wayback Machine Surfersvillage Global Surf News (3 June 2011). Retrieved 2014-01-30.
- "Shark Attacks Compared to Lightning". Florida Museum of Natural History. 18 July 2003. Retrieved 7 November 2006.
- Hamilton, Richard (15 April 2004). "SA shark attacks blamed on tourism". BBC. Archived from the original on 23 March 2012. Retrieved 24 October 2006.
- Great white feasting on a whale's carcass
- Great white survives attack by orcas (killer whales) near Neptune Islands, Australia
- Great white and orca fight off the coast of California
- Great whites with a pod of orcas off the coast of South Africa
- Ocean Ramsey Encounters GIANT 20 ft (6.1 m) Great White Shark, near Oahu, Hawaii
- Longest Shark Ever Recorded! | 1 | 15 |
<urn:uuid:bde363fe-dfc5-4fda-a252-1b08dc95bb9f> | The image is from Wikipedia Commons
Alba (Scottish Gaelic)
Recognised languages: English, Scots, Scottish Gaelic
Religion: Church of Scotland
Government: Devolved parliamentary legislature within a constitutional monarchy
Representation in the Parliament of the United Kingdom: Secretary of State Alister Jack; 59 MPs in the House of Commons (of 650)
Formation: 9th century (traditionally 843); Union with England: 1 May 1707; Scotland Act 1998: 19 November 1998
Area: 77,933 km2 (30,090 sq mi)
Population density: 67.5/km2 (174.8/sq mi)
GDP: £138 billion (total); £25,500 (per capita)
Currency: Pound sterling (GBP; £)
Time zone: UTC (Greenwich Mean Time); UTC+1 (British Summer Time) in summer (DST)
Date format: dd/mm/yyyy (AD)
ISO 3166 code: GB-SCT
Internet TLD: .scot
Scotland (Scots: Scotland, Scottish Gaelic: Alba [ˈal̪ˠapə]) is a country that is part of the United Kingdom. It covers the northern third of the island of Great Britain, with a border with England to the southeast, and is surrounded by the Atlantic Ocean to the north and west, the North Sea to the northeast and the Irish Sea to the south. It comprises more than 790 islands, including the Northern Isles and the Hebrides.
The Kingdom of Scotland emerged as an independent sovereign state in the European Early Middle Ages and continued to exist until 1707. By inheritance in 1603, King James VI of Scotland became king of England and Ireland, thus forming a personal union of the three kingdoms. Scotland subsequently entered into a political union with England on 1 May 1707 to create the new Kingdom of Great Britain. The union also created a new Parliament of Great Britain, which succeeded both the Parliament of Scotland and the Parliament of England. In 1801, Great Britain entered into a political union with Ireland to create the United Kingdom of Great Britain and Ireland (in 1922, the Irish Free State seceded from the United Kingdom, leading to the latter being renamed the United Kingdom of Great Britain and Northern Ireland in 1927).
Within Scotland, the monarchy of the United Kingdom has continued to use a variety of styles, titles and other royal symbols of statehood specific to the pre-union Kingdom of Scotland. The legal system within Scotland has also remained separate from those of England and Wales and Northern Ireland; Scotland constitutes a distinct jurisdiction in both public and private law. The continued existence of legal, educational, religious and other institutions distinct from those in the remainder of the UK has contributed to the continuation of Scottish culture and national identity since the 1707 union with England.
Following a referendum in 1997, a Scottish Parliament was re-established in 1999, in the form of a devolved unicameral legislature comprising 129 members with authority over many areas of domestic policy. The head of the Scottish Government is the first minister of Scotland, who is supported by the deputy first minister of Scotland. Scotland is represented in the United Kingdom Parliament by 59 MPs and in the European Parliament by 6 MEPs. Scotland is also a member of the British–Irish Council, and sends five members of the Scottish Parliament to the British–Irish Parliamentary Assembly.
Scotland is divided into 32 administrative subdivisions or local authorities, known as council areas. Glasgow City is the largest council area in terms of population, with Highland being the largest in terms of area. Limited self-governing power, covering matters such as education, social services and roads and transportation, is devolved from the Scottish Government to each subdivision.
"Scotland" comes from Scoti, the Latin name for the Gaels. Philip Freeman has speculated on the likelihood of a group of raiders adopting a name from an Indo-European root, *skot, citing the parallel in Greek skotos (σκότος), meaning "darkness, gloom". The Late Latin word Scotia ("land of the Gaels") was initially used to refer to Ireland. By the 11th century at the latest, Scotia was being used to refer to (Gaelic-speaking) Scotland north of the River Forth, alongside Albania or Albany, both derived from the Gaelic Alba. The use of the words Scots and Scotland to encompass all of what is now Scotland became common in the Late Middle Ages.
Repeated glaciations, which covered the entire land mass of modern Scotland, destroyed any traces of human habitation that may have existed before the Mesolithic period. It is believed the first post-glacial groups of hunter-gatherers arrived in Scotland around 12,800 years ago, as the ice sheet retreated after the last glaciation. At the time, Scotland was covered in forests, had more bog-land, and the main form of transport was by water. These settlers began building the first known permanent houses on Scottish soil around 9,500 years ago, and the first villages around 6,000 years ago. The well-preserved village of Skara Brae on the mainland of Orkney dates from this period. Neolithic habitation, burial, and ritual sites are particularly common and well preserved in the Northern Isles and Western Isles, where a lack of trees led to most structures being built of local stone. Evidence of sophisticated pre-Christian belief systems is demonstrated by sites such as the Callanish Stones on Lewis and the Maes Howe on Orkney, which were built in the third millennium BCE.
The first written reference to Scotland was in 320 BC by the Greek sailor Pytheas, who called the northern tip of Britain "Orcas", the source of the name of the Orkney islands. During the first millennium BCE, society changed dramatically to a chiefdom model, as consolidation of settlement led to the concentration of wealth and underground stores of surplus food. The first Roman incursion into Scotland occurred in 79 AD, when Agricola invaded Scotland; he defeated a Caledonian army at the Battle of Mons Graupius in 83 AD. After the Roman victory, Roman forts were briefly set along the Gask Ridge close to the Highland line, but within three years of the battle the Roman armies had withdrawn to the Southern Uplands. The Romans erected Hadrian's Wall in northern England, and the Limes Britannicus became the northern border of the Roman Empire. The Roman influence on the southern part of the country was considerable, and the Romans introduced Christianity to Scotland.
Beginning in the sixth century, the area that is now Scotland was divided into three parts: Pictland, a patchwork of small lordships in central Scotland; the Anglo-Saxon Kingdom of Northumbria, which had conquered southeastern Scotland; and Dál Riata, founded by settlers from Ireland, who brought Gaelic language and culture with them. These societies were based on the family unit and had sharp divisions in wealth, although the vast majority were poor and worked full-time in subsistence agriculture. The Picts kept slaves (mostly captured in war) through the ninth century.
Gaelic influence over Pictland and Northumbria was facilitated by the large number of Gaelic-speaking clerics working as missionaries. Operating in the sixth century on the island of Iona, Saint Columba was one of the earliest and best-known missionaries. The Vikings began to raid Scotland in the eighth century. Although the raiders sought slaves and luxury items, their main motivation was to acquire land. The oldest Norse settlements were in northwest Scotland, but they eventually conquered many areas along the coast. Old Norse entirely displaced Gaelic in the Northern Isles.
In the ninth century, the Norse threat allowed a Gael named Cináed mac Ailpín (Kenneth I) to seize power over Pictland, establishing a royal dynasty to which the modern monarchs trace their lineage, and marking the beginning of the end of Pictish culture. The kingdom of Cináed and his descendants, called Alba, was Gaelic in character but occupied the same area as Pictland. By the end of the tenth century, the Pictish language had gone extinct as its speakers shifted to Gaelic. From a base in eastern Scotland north of the River Forth and south of the River Spey, the kingdom expanded first southwards, into the former Northumbrian lands, and northwards into Moray. Around the turn of the millennium, there was a centralization of agricultural lands and the first towns began to be established.
In the twelfth and thirteenth centuries, with much of Scotland under the control of a single ruler and united by the Gaelic language, a modern nation-state first emerged, as did Scottish national consciousness. The domination of Gaelic was diminished during the reign of David I (1124–53), during which many English-speaking colonists settled in Scotland. David I and his successors centralized royal power and united mainland Scotland, capturing regions such as Moray, Galloway, and Caithness, although they did not succeed in extending their power over the Hebrides, which had been ruled by various Scottish clans following the death of Somerled in 1164. The system of feudalism was consolidated, with both Anglo-Norman incomers and native Gaelic chieftains being granted land in exchange for serving the king. The Scottish kings rejected English demands to subjugate themselves; in fact, England invaded Scotland several times to prevent Scotland's expansion into northern England.
The death of Alexander III in March 1286 broke the succession line of Scotland's kings. Edward I of England arbitrated between various claimants for the Scottish crown. In return for surrendering Scotland's nominal independence, John Balliol was pronounced king in 1292. In 1294, Balliol and other Scottish lords refused Edward's demands to serve in his army against the French. Scotland and France sealed a treaty on 23 October 1295, known as the Auld Alliance. War ensued, and John was deposed by Edward, who took personal control of Scotland. Andrew Moray and William Wallace initially emerged as the principal leaders of the resistance to English rule in the Wars of Scottish Independence, until Robert the Bruce was crowned king of Scotland in 1306. Victory at the Battle of Bannockburn in 1314 proved the Scots had regained control of their kingdom. In 1320 the world's first documented declaration of independence, the Declaration of Arbroath, won the support of Pope John XXII, leading to the legal recognition of Scottish sovereignty by the English Crown.
A civil war between the Bruce dynasty and their long-term Comyn-Balliol rivals lasted until the middle of the 14th century. Although the Bruce faction was successful, David II's lack of an heir allowed his half-nephew Robert II to come to the throne and establish the House of Stewart. The Stewarts ruled Scotland for the remainder of the Middle Ages. The country they ruled experienced greater prosperity from the end of the 14th century through the Scottish Renaissance to the Reformation, despite the effects of the Black Death in 1349 and increasing division between Highlands and Lowlands. Multiple truces reduced warfare on the southern border.
Early modern period
The Treaty of Perpetual Peace was signed in 1502 by James IV of Scotland and Henry VII of England. James married Henry's daughter, Margaret Tudor. James invaded England in support of France under the terms of the Auld Alliance and became the last British monarch to die in battle, at Flodden in 1513. In 1560, the Treaty of Edinburgh brought an end to the Anglo-French conflict and recognized the Protestant Elizabeth I as Queen of England. The Parliament of Scotland met and immediately adopted the Scots Confession, which signaled the Scottish Reformation's sharp break from papal authority and Catholic teaching. The Catholic Mary, Queen of Scots, was forced to abdicate in 1567.
In 1603, James VI, King of Scots, inherited the thrones of the Kingdom of England and the Kingdom of Ireland in the Union of the Crowns, and moved to London. The military was strengthened, allowing the imposition of royal authority on the western Highland clans. The 1609 Statutes of Iona compelled the cultural integration of Hebridean clan leaders. With the exception of a short period under the Protectorate, Scotland remained a separate state, but there was considerable conflict between the crown and the Covenanters over the form of church government. The Glorious Revolution of 1688–89 saw the overthrow of King James VII of Scotland and II of England by the English Parliament in favour of William III and Mary II.
The Battle of Altimarlach in 1680 was the last significant battle fought between Highland clans. In common with countries such as France, Norway, Sweden and Finland, Scotland experienced famines during the 1690s. Mortality, reduced childbirths and increased emigration reduced the population of parts of the country by about 10–15%.
In 1698, the Company of Scotland attempted a project to secure a trading colony on the Isthmus of Panama. Almost every Scottish landowner who had money to spare is said to have invested in the Darien scheme. Its failure bankrupted these landowners, but not the burghs. Nevertheless, the nobles' bankruptcy, along with the threat of an English invasion, played a leading role in convincing the Scots elite to back a union with England.
On 22 July 1706, the Treaty of Union was agreed between representatives of the Scots Parliament and the Parliament of England. The following year, twin Acts of Union were passed by both parliaments to create the united Kingdom of Great Britain with effect from 1 May 1707, despite popular opposition and anti-union riots in Edinburgh, Glasgow, and elsewhere.
With trade tariffs with England abolished, trade blossomed, especially with Colonial America. The clippers belonging to the Glasgow Tobacco Lords were the fastest ships on the route to Virginia. Until the American War of Independence in 1776, Glasgow was the world's premier tobacco port, dominating world trade. The disparity between the wealth of the merchant classes of the Scottish Lowlands and the ancient clans of the Scottish Highlands grew, amplifying centuries of division.
The deposed Jacobite Stuart claimants had remained popular in the Highlands and north-east, particularly amongst non-Presbyterians, including Roman Catholics and Episcopalian Protestants. However, two major Jacobite risings launched in 1715 and 1745 failed to remove the House of Hanover from the British throne. The threat of the Jacobite movement to the United Kingdom and its monarchs effectively ended at the Battle of Culloden, Great Britain's last pitched battle.
The Scottish Enlightenment and the Industrial Revolution turned Scotland into an intellectual, commercial and industrial powerhouse, so much so that Voltaire said "We look to Scotland for all our ideas of civilisation." With the demise of Jacobitism and the advent of the Union, thousands of Scots, mainly Lowlanders, took up numerous positions of power in politics, civil service, the army and navy, trade, economics, colonial enterprises and other areas across the nascent British Empire. Historian Neil Davidson notes that "after 1746 there was an entirely new level of participation by Scots in political life, particularly outside Scotland." Davidson also states that "far from being 'peripheral' to the British economy, Scotland – or more precisely, the Lowlands – lay at its core."
In the Highlands, clan chiefs gradually started to think of themselves more as commercial landlords than leaders of their people. These social and economic changes included the first phase of the Highland Clearances and, ultimately, the demise of clanship.
The Scottish Reform Act 1832 increased the number of Scottish MPs and widened the franchise to include more of the middle classes. From the mid-century, there were increasing calls for Home Rule for Scotland, and the post of Secretary of State for Scotland was revived. Towards the end of the century, Prime Ministers of Scottish descent included William Gladstone and the Earl of Rosebery. In the late 19th century, the growing importance of the working classes was marked by Keir Hardie's success in the 1888 Mid Lanarkshire by-election, leading to the foundation of the Scottish Labour Party, which was absorbed into the Independent Labour Party in 1895, with Hardie as its first leader.
Glasgow became one of the largest cities in the world and was known as "the Second City of the Empire" after London. After 1860 the Clydeside shipyards specialised in steamships made of iron (after 1870, made of steel), which rapidly replaced the wooden sailing vessels of both the merchant fleets and the battle fleets of the world. It became the world's pre-eminent shipbuilding centre. The industrial developments, while they brought work and wealth, were so rapid that housing, town-planning, and provision for public health did not keep pace with them, and for a time living conditions in some of the towns and cities were notoriously bad, with overcrowding, high infant mortality, and growing rates of tuberculosis.
While the Scottish Enlightenment is traditionally considered to have concluded toward the end of the 18th century, disproportionately large Scottish contributions to British science and letters continued for another 50 years or more, thanks to such figures as the physicists James Clerk Maxwell and Lord Kelvin, and the engineers and inventors James Watt and William Murdoch, whose work was critical to the technological developments of the Industrial Revolution throughout Britain. In literature, the most successful figure of the mid-19th century was Walter Scott. His first prose work, Waverley (1814), is often called the first historical novel. It launched a highly successful career that probably more than any other helped define and popularise Scottish cultural identity. In the late 19th century, a number of Scottish-born authors achieved international reputations, such as Robert Louis Stevenson, Arthur Conan Doyle, J. M. Barrie and George MacDonald. Scotland also played a major part in the development of art and architecture. The Glasgow School, which developed in the late 19th century and flourished in the early 20th century, produced a distinctive blend of influences including the Celtic Revival, the Arts and Crafts movement, and Japonism, which found favour throughout the modern art world of continental Europe and helped define the Art Nouveau style. Proponents included architect and artist Charles Rennie Mackintosh.
This period saw a process of rehabilitation for Highland culture. In the 1820s, as part of the Romantic revival, tartan and the kilt were adopted by members of the social elite, not just in Scotland, but across Europe, prompted by the popularity of Macpherson's Ossian cycle and then Walter Scott's Waverley novels. However, the Highlands remained poor, the only part of mainland Britain to continue to experience recurrent famine, with a limited range of products exported out of the region, negligible industrial production, and continued population growth that strained subsistence agriculture. These problems, and the desire to improve agriculture and profits, were the driving forces of the ongoing Highland Clearances, in which many of the population of the Highlands suffered eviction as lands were enclosed, principally so that they could be used for sheep farming. The first phase of the clearances followed patterns of agricultural change throughout Britain. The second phase was driven by overpopulation, the Highland Potato Famine and the collapse of industries that had relied on the wartime economy of the Napoleonic Wars. The population of Scotland grew steadily in the 19th century, from 1,608,000 in the census of 1801 to 2,889,000 in 1851 and 4,472,000 in 1901. Even with the development of industry, there were not enough good jobs. As a result, during the period 1841–1931, about 2 million Scots migrated to North America and Australia, and another 750,000 Scots relocated to England.
After prolonged years of struggle in the Kirk, in 1834 the Evangelicals gained control of the General Assembly and passed the Veto Act, which allowed congregations to reject unwanted "intrusive" presentations to livings by patrons. The following "Ten Years' Conflict" of legal and political wrangling ended in defeat for the non-intrusionists in the civil courts. The result was a schism from the church by some of the non-intrusionists led by Dr Thomas Chalmers, known as the Great Disruption of 1843. Roughly a third of the clergy, mainly from the North and Highlands, formed the separate Free Church of Scotland. In the late 19th century growing divisions between fundamentalist Calvinists and theological liberals resulted in a further split in the Free Church as the rigid Calvinists broke away to form the Free Presbyterian Church in 1893. Catholic emancipation in 1829 and the influx of large numbers of Irish immigrants, particularly after the famine years of the late 1840s, mainly to the growing lowland centres like Glasgow, led to a transformation in the fortunes of Catholicism. In 1878, despite opposition, a Roman Catholic ecclesiastical hierarchy was restored to the country, and Catholicism became a significant denomination within Scotland.
Industrialisation, urbanisation and the Disruption of 1843 all undermined the tradition of parish schools. From 1830 the state began to fund buildings with grants; then from 1846 it was funding schools by direct sponsorship; and in 1872 Scotland moved to a system like that in England of state-sponsored, largely free schools, run by local school boards. The historic University of Glasgow became a leader in British higher education by providing for the educational needs of youth from the urban and commercial classes, as opposed to the upper class. The University of St Andrews pioneered the admission of women to Scottish universities. From 1892, Scottish universities could admit and graduate women, and the numbers of women at Scottish universities steadily increased until the early 20th century.
The advent of refrigeration and imports of lamb, mutton and wool from overseas brought a collapse of sheep prices in the 1870s and an abrupt halt to the previous sheep-farming boom. Land prices subsequently plummeted, too, and accelerated the process of the so-called "Balmoralisation" of Scotland, an era in the second half of the 19th century that saw an increase in tourism and the establishment of large estates dedicated to field sports like deer stalking and grouse shooting, especially in the Scottish Highlands. The process was named after Balmoral Estate, purchased by Queen Victoria in 1848, which fuelled the romanticisation of upland Scotland and initiated an influx of the newly wealthy acquiring similar estates in the following decades. In the late 19th century, just 118 people owned half of Scotland, with nearly 60 per cent of the whole country being part of shooting estates. While their relative importance has somewhat declined due to changing recreational interests throughout the 20th century, deer stalking and grouse shooting remain of prime importance on many private estates in Scotland.
Early 20th century
Scotland played a major role in the British effort in the First World War. It especially provided manpower, ships, machinery, fish and money. With a population of 4.8 million in 1911, Scotland sent over half a million men to the war, of whom over a quarter died in combat or from disease, and 150,000 were seriously wounded. Field Marshal Sir Douglas Haig was Britain's commander on the Western Front.
The war saw the emergence of a radical movement called "Red Clydeside" led by militant trades unionists. Formerly a Liberal stronghold, the industrial districts switched to Labour by 1922, with a base in Irish Catholic working-class areas. Women were especially active in building neighbourhood solidarity on housing issues. However, the "Reds" operated within the Labour Party and had little influence in Parliament, and the mood changed to passive despair by the late 1920s.
The shipbuilding industry expanded by a third and expected renewed prosperity, but instead, a serious depression hit the economy by 1922 and it did not fully recover until 1939. The interwar years were marked by economic stagnation in rural and urban areas, and high unemployment. Indeed, the war brought with it deep social, cultural, economic, and political dislocations. Thoughtful Scots pondered their decline, as the main social indicators, such as poor health, bad housing, and long-term mass unemployment, pointed to terminal social and economic stagnation at best, or even a downward spiral. Service abroad on behalf of the Empire lost its allure to ambitious young people, who left Scotland permanently. The heavy dependence on obsolescent heavy industry and mining was a central problem, and no one offered workable solutions. The despair reflected what Finlay (1994) describes as a widespread sense of hopelessness that prepared local business and political leaders to accept a new orthodoxy of centralised government economic planning when it arrived during the Second World War.
During the Second World War, Scotland was targeted by Nazi Germany largely due to its factories, shipyards, and coal mines. Cities such as Glasgow and Edinburgh were attacked by German bombers, as were smaller towns mostly located in the central belt of the country. Perhaps the most significant air-raid in Scotland was the Clydebank Blitz of March 1941, which was intended to destroy naval shipbuilding in the area. A total of 528 people were killed and 4,000 homes were totally destroyed.
Perhaps Scotland's most unusual wartime episode occurred in 1941 when Rudolf Hess flew to Renfrewshire, possibly intending to broker a peace deal through the Duke of Hamilton. Before his departure from Germany, Hess had given his adjutant, Karlheinz Pintsch, a letter addressed to Hitler that detailed his intentions to open peace negotiations with the British. Pintsch delivered the letter to Hitler at the Berghof around noon on 11 May. Albert Speer later said Hitler described Hess's departure as one of the worst personal blows of his life, as he considered it a personal betrayal. Hitler worried that his allies, Italy and Japan, would perceive Hess's act as an attempt by Hitler to secretly open peace negotiations with the British.
As in World War I, Scapa Flow in Orkney served as an important Royal Navy base. Attacks on Scapa Flow and Rosyth gave RAF fighters their first successes in downing bombers in the Firth of Forth and East Lothian. The shipyards and heavy engineering factories in Glasgow and Clydeside played a key part in the war effort, and suffered attacks from the Luftwaffe, enduring great destruction and loss of life. As transatlantic voyages involved negotiating north-west Britain, Scotland played a key part in the battle of the North Atlantic. Shetland's relative proximity to occupied Norway resulted in the Shetland Bus, an operation in which fishing boats helped Norwegians flee the Nazis and carried expeditions across the North Sea to assist the resistance.
Scottish industry came out of the depression slump through a dramatic expansion of its industrial activity, absorbing unemployed men and many women as well. The shipyards were the centre of this activity, but many smaller industries produced the machinery needed by the British bombers, tanks and warships. Agriculture prospered, as did all sectors except for coal mining, which was operating mines near exhaustion. Real wages, adjusted for inflation, rose 25%, and unemployment temporarily vanished. Increased income and the more equal distribution of food, obtained through a tight rationing system, dramatically improved health and nutrition.
After 1945, Scotland's economic situation worsened due to overseas competition, inefficient industry, and industrial disputes. Only in recent decades has the country enjoyed something of a cultural and economic renaissance. Economic factors contributing to this recovery included a resurgent financial services industry, electronics manufacturing (see Silicon Glen), and the North Sea oil and gas industry. The introduction in 1989 by Margaret Thatcher's government of the Community Charge (widely known as the Poll Tax), one year before the rest of Great Britain, contributed to a growing movement for Scottish control over domestic affairs. Following a referendum on devolution proposals in 1997, the Scotland Act 1998 was passed by the UK Parliament, which established a devolved Scottish Parliament and Scottish Government with responsibility for most laws specific to Scotland. The Scottish Parliament was reconvened in Edinburgh on 4 July 1999. The first to hold the office of first minister of Scotland was Donald Dewar, who served until his sudden death in 2000.
The Scottish Parliament Building at Holyrood itself did not open until October 2004, after lengthy construction delays and running over budget. The Scottish Parliament has a form of proportional representation (the additional member system), which normally results in no one party having an overall majority. The pro-independence Scottish National Party, led by Alex Salmond, achieved an overall majority in the 2011 election, winning 69 of the 129 seats available. The success of the SNP in achieving a majority in the Scottish Parliament paved the way for the September 2014 referendum on Scottish independence. The majority voted against the proposition, with 55% voting no to independence. More powers, particularly in relation to taxation, were devolved to the Scottish Parliament after the referendum, following cross-party talks in the Smith Commission.
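The compensatory mechanics of the additional member system can be illustrated with a short worked example. The sketch below, in Python, allocates a region's list seats with the D'Hondt divisor rule used for the Scottish Parliament's regional seats; the party names, vote totals and constituency results are invented purely for illustration, and the code is a minimal sketch rather than a description of the actual electoral administration.

# Minimal sketch of additional-member-system list allocation using the
# D'Hondt divisor rule (the rule used for Scottish Parliament regional seats).
# All party names and numbers here are hypothetical.

def allocate_list_seats(list_votes, constituency_seats, num_list_seats):
    """Allocate regional list seats one at a time.

    Each round, a party's quotient is its regional list vote divided by one
    more than the total seats (constituency plus list) it already holds in
    the region; the highest quotient wins the next seat.
    """
    list_seats = {party: 0 for party in list_votes}
    for _ in range(num_list_seats):
        quotients = {
            party: votes / (constituency_seats.get(party, 0) + list_seats[party] + 1)
            for party, votes in list_votes.items()
        }
        winner = max(quotients, key=quotients.get)
        list_seats[winner] += 1
    return list_seats

# Hypothetical region: Party A wins six of nine constituencies on about 40%
# of the list vote, so the divisor rule steers most list seats to the other
# parties and the overall result is roughly proportional.
votes = {"Party A": 90_000, "Party B": 72_000, "Party C": 60_000}
constituency_wins = {"Party A": 6, "Party B": 3, "Party C": 0}
print(allocate_list_seats(votes, constituency_wins, 7))
# -> {'Party A': 1, 'Party B': 2, 'Party C': 4}, for regional totals of 7, 5 and 4 seats

Under these invented numbers, no party holds an outright majority of the sixteen regional seats, which is the kind of outcome that makes single-party majorities at Holyrood unusual.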
Geography and natural history
The mainland of Scotland comprises the northern third of the land mass of the island of Great Britain, which lies off the north-west coast of Continental Europe. The total area is 78,772 km2 (30,414 sq mi), comparable to the size of the Czech Republic. Scotland's only land border is with England, and runs for 96 kilometres (60 mi) between the basin of the River Tweed on the east coast and the Solway Firth in the west. The Atlantic Ocean borders the west coast and the North Sea is to the east. The island of Ireland lies only 21 kilometres (13 mi) from the south-western peninsula of Kintyre; Norway is 305 kilometres (190 mi) to the east and the Faroe Islands, 270 kilometres (168 mi) to the north.
The territorial extent of Scotland is generally that established by the 1237 Treaty of York between Scotland and the Kingdom of England and the 1266 Treaty of Perth between Scotland and Norway. Important exceptions include the Isle of Man, which, having been lost to England in the 14th century, is now a crown dependency outside of the United Kingdom; the island groups Orkney and Shetland, which were acquired from Norway in 1472; and Berwick-upon-Tweed, lost to England in 1482.
The geographical centre of Scotland lies a few miles from the village of Newtonmore in Badenoch. Rising to 1,344 metres (4,409 ft) above sea level, Scotland's highest point is the summit of Ben Nevis, in Lochaber, while Scotland's longest river, the River Tay, flows for a distance of 190 kilometres (118 mi).
Geology and geomorphology
The whole of Scotland was covered by ice sheets during the Pleistocene ice ages and the landscape is much affected by glaciation. From a geological perspective, the country has three main sub-divisions.
The Highlands and Islands lie to the north and west of the Highland Boundary Fault, which runs from Arran to Stonehaven. This part of Scotland largely comprises ancient rocks from the Cambrian and Precambrian, which were uplifted during the later Caledonian orogeny. It is interspersed with igneous intrusions of a more recent age, remnants of which formed mountain massifs such as the Cairngorms and Skye Cuillins.
Significant exceptions to the above are the fossil-bearing beds of Old Red Sandstone found principally along the Moray Firth coast. The Highlands are generally mountainous and the highest elevations in the British Isles are found here. Scotland has over 790 islands divided into four main groups: Shetland, Orkney, and the Inner Hebrides and Outer Hebrides. There are numerous bodies of freshwater, including Loch Lomond and Loch Ness. Some parts of the coastline consist of machair, a low-lying dune pasture land.
The Central Lowlands is a rift valley mainly comprising Paleozoic formations. Many of these sediments have economic significance, for it is here that the coal- and iron-bearing rocks that fuelled Scotland's industrial revolution are found. This area has also experienced intense volcanism, Arthur's Seat in Edinburgh being the remnant of a once much larger volcano. This area is relatively low-lying, although even here hills such as the Ochils and Campsie Fells are rarely far from view.
The Southern Uplands are a range of hills almost 200 kilometres (124 mi) long, interspersed with broad valleys. They lie south of a second fault line (the Southern Uplands fault) that runs from Girvan to Dunbar. The geological foundations largely comprise Silurian deposits laid down some 400–500 million years ago. The high point of the Southern Uplands is Merrick with an elevation of 843 m (2,766 ft). The Southern Uplands is home to Scotland's highest village, Wanlockhead (430 m or 1,411 ft above sea level).
The climate of most of Scotland is temperate and oceanic, and tends to be very changeable. As it is warmed by the Gulf Stream from the Atlantic, it has much milder winters (but cooler, wetter summers) than areas on similar latitudes, such as Labrador, southern Scandinavia, the Moscow region in Russia, and the Kamchatka Peninsula on the opposite side of Eurasia. However, temperatures are generally lower than in the rest of the UK, with the coldest ever UK temperature of −27.2 °C (−17.0 °F) recorded at Braemar in the Grampian Mountains, on 11 February 1895. Winter maxima average 6 °C (43 °F) in the Lowlands, with summer maxima averaging 18 °C (64 °F). The highest temperature recorded was 32.9 °C (91.2 °F) at Greycrook, Scottish Borders, on 9 August 2003.
The west of Scotland is usually warmer than the east, owing to the influence of Atlantic ocean currents and the colder surface temperatures of the North Sea. Tiree, in the Inner Hebrides, is one of the sunniest places in the country: it had more than 300 hours of sunshine in May 1975. Rainfall varies widely across Scotland. The western highlands of Scotland are the wettest, with annual rainfall in a few places exceeding 3,000 mm (120 in). In comparison, much of lowland Scotland receives less than 800 mm (31 in) annually. Heavy snowfall is not common in the lowlands, but becomes more common with altitude. Braemar has an average of 59 snow days per year, while many coastal areas average fewer than 10 days of lying snow per year.
Flora and fauna
Scotland's wildlife is typical of the north-west of Europe, although several of the larger mammals such as the lynx, brown bear, wolf, elk and walrus were hunted to extinction in historic times. There are important populations of seals and internationally significant nesting grounds for a variety of seabirds such as gannets. The golden eagle is something of a national icon.
On the high mountain tops, species including ptarmigan, mountain hare and stoat can be seen in their white colour phase during winter months. Remnants of the native Scots pine forest exist, and within these areas the Scottish crossbill, the UK's only endemic bird species and vertebrate, can be found alongside capercaillie, Scottish wildcat, red squirrel and pine marten. Various animals have been re-introduced, including the white-tailed sea eagle in 1975 and the red kite in the 1980s, and there have been experimental projects involving the beaver and wild boar. Today, much of the remaining native Caledonian Forest lies within the Cairngorms National Park, and remnants of the forest remain at 84 locations across Scotland. On the west coast, remnants of ancient Celtic rainforest survive, particularly on the Taynish peninsula in Argyll; these forests are particularly rare due to high rates of deforestation throughout Scottish history.
The flora of the country is varied, incorporating both deciduous and coniferous woodland as well as moorland and tundra species. However, large-scale commercial tree planting and the management of upland moorland habitat for sheep grazing and field sports such as deer stalking and driven grouse shooting affect the distribution of indigenous plants and animals. The UK's tallest tree is a grand fir planted beside Loch Fyne, Argyll, in the 1870s, and the Fortingall Yew may be 5,000 years old and is probably the oldest living thing in Europe. Although the number of native vascular plants is low by world standards, Scotland's substantial bryophyte flora is of global importance.
Although Edinburgh is the capital of Scotland, the largest city is Glasgow, which has just over 584,000 inhabitants. The Greater Glasgow conurbation, with a population of almost 1.2 million, is home to nearly a quarter of Scotland's population. The Central Belt is where most of the main towns and cities are located, including Glasgow, Edinburgh, Dundee, and Perth. Scotland's only major city outside the Central Belt is Aberdeen. The Scottish Lowlands host 80% of the total population, with the Central Belt alone accounting for 3.5 million people.
In general, only the more accessible and larger islands remain inhabited; currently, fewer than 90 are inhabited. The Southern Uplands are essentially rural in nature and dominated by agriculture and forestry. Because of housing problems in Glasgow and Edinburgh, five new towns were designated between 1947 and 1966. They are East Kilbride, Glenrothes, Cumbernauld, Livingston, and Irvine.
Immigration since World War II has given Glasgow, Edinburgh, and Dundee small South Asian communities. In 2011, there were an estimated 49,000 ethnically Pakistani people living in Scotland, making them the largest non-White ethnic group. Since the Enlargement of the European Union more people from Central and Eastern Europe have moved to Scotland, and the 2011 census indicated that 61,000 Poles live there.
Scotland has three officially recognised languages: English, Scots, and Scottish Gaelic. Scottish Standard English, a variety of English as spoken in Scotland, is at one end of a bipolar linguistic continuum, with broad Scots at the other. Scottish Standard English may have been influenced to varying degrees by Scots. The 2011 census indicated that 63% of the population had "no skills in Scots". Others speak Highland English. Gaelic is mostly spoken in the Western Isles, where a large proportion of people still speak it; however, nationally its use is confined to just 1% of the population. The number of Gaelic speakers in Scotland dropped from 250,000 in 1881 to 60,000 in 2008.
There are many more people with Scottish ancestry living abroad than the total population of Scotland. In the 2000 Census, 9.2 million Americans self-reported some degree of Scottish descent. Ulster's Protestant population is mainly of lowland Scottish descent, and it is estimated that there are more than 27 million descendants of the Scots-Irish migration now living in the US. In Canada, the Scottish-Canadian community accounts for 4.7 million people. About 20% of the original European settler population of New Zealand came from Scotland.
In August 2012, the Scottish population reached an all-time high of 5.25 million people. The reasons given were that births in Scotland were outnumbering deaths, and that immigrants were moving to Scotland from overseas. In 2011, 43,700 people moved from Wales, Northern Ireland or England to live in Scotland.
The total fertility rate (TFR) in Scotland is below the replacement rate of 2.1 (the TFR was 1.73 in 2011). The majority of births are to unmarried women (51.3% of births were outside of marriage in 2012).
In the 2011 census, just over half (54%) of the Scottish population reported being Christian, while nearly 37% reported having no religion. Since the Scottish Reformation of 1560, the national church (the Church of Scotland, also known as The Kirk) has been Protestant in classification and Reformed in theology. Since 1689 it has had a Presbyterian system of church government and enjoys independence from the state. Its membership is 398,389, about 7.5% of the total population, though according to the 2014 Scottish Annual Household Survey, 27.8%, or 1.5 million adherents, identified the Church of Scotland as the church of their religion. The Church operates a territorial parish structure, with every community in Scotland having a local congregation.
Scotland also has a significant Roman Catholic population, 19% professing that faith, particularly in Greater Glasgow and the north-west. After the Reformation, Roman Catholicism in Scotland continued in the Highlands and some western islands like Uist and Barra, and it was strengthened during the 19th century by immigration from Ireland. Other Christian denominations in Scotland include the Free Church of Scotland, and various other Presbyterian offshoots. Scotland's third largest church is the Scottish Episcopal Church.
Islam is the largest non-Christian religion (estimated at around 75,000, which is about 1.4% of the population), and there are also significant Jewish, Hindu and Sikh communities, especially in Glasgow. The Samyé Ling monastery near Eskdalemuir, which celebrated its 40th anniversary in 2007, is the first Buddhist monastery in western Europe.
Politics and government
The head of state of the United Kingdom is the monarch, currently Queen Elizabeth II (since 1952). The regnal numbering ("Elizabeth II") caused controversy around the time of her coronation because there had never been an Elizabeth I in Scotland. The British government stated in April 1953 that future British monarchs would be numbered according to either their English or their Scottish predecessors, whichever number would be higher. For instance, any future King James would be styled James VIII—since the last Scottish King James was James VII (also James II of England, etc.)—while the next King Henry would be King Henry IX throughout the UK even though there have been no Scottish kings of that name. A legal action, MacCormick v Lord Advocate (1953 SC 396), was brought in Scotland to contest the right of the Queen to entitle herself "Elizabeth II" within Scotland, but the Crown won the case.
The monarchy of the United Kingdom continues to use a variety of styles, titles and other royal symbols of statehood specific to pre-union Scotland, including: the Royal Standard of Scotland, the Royal coat of arms used in Scotland together with its associated Royal Standard, royal titles including that of Duke of Rothesay, certain Great Officers of State, the chivalric Order of the Thistle and, since 1999, reinstating a ceremonial role for the Crown of Scotland after a 292-year hiatus.
Scotland has limited self-government within the United Kingdom, as well as representation in the UK Parliament. Executive and legislative powers respectively have been devolved to the Scottish Government and the Scottish Parliament at Holyrood in Edinburgh since 1999. The UK Parliament retains control over reserved matters specified in the Scotland Act 1998, including UK taxes, social security, defence, international relations and broadcasting. The Scottish Parliament has legislative authority for all other areas relating to Scotland. It initially had only a limited power to vary income tax, but powers over taxation and social security were significantly expanded by the Scotland Acts of 2012 and 2016.
The Scottish Parliament can give legislative consent over devolved matters back to the UK Parliament by passing a Legislative Consent Motion if United Kingdom-wide legislation is considered more appropriate for a certain issue. The programmes of legislation enacted by the Scottish Parliament have seen a divergence in the provision of public services compared to the rest of the UK. For instance, university education and care services for the elderly are free at point of use in Scotland, while fees are paid in the rest of the UK. Scotland was the first country in the UK to ban smoking in enclosed public places.
The Scottish Parliament is a unicameral legislature with 129 members (MSPs): 73 of them represent individual constituencies and are elected on a first-past-the-post system; the other 56 are elected in eight different electoral regions by the additional member system. MSPs serve for a four-year period (exceptionally five years from 2011–16). The Parliament nominates one of its Members, who is then appointed by the monarch to serve as first minister. Other ministers are appointed by the first minister and serve at his or her discretion. Together they make up the Scottish Government, the executive arm of the devolved government. The Scottish Government is headed by the first minister, who is accountable to the Scottish Parliament and is the minister in charge of the Scottish Government. The first minister is also the political leader of Scotland. The Scottish Government also comprises the deputy first minister, currently John Swinney MSP, who deputises for the first minister during periods of absence or overseas visits. Alongside these deputising duties, the deputy first minister also holds a cabinet ministerial portfolio; Swinney is currently Cabinet Secretary for Education and Skills. The Scottish Government's cabinet comprises nine cabinet secretaries, who form the Cabinet of Scotland. There are also twelve other ministers, who work alongside the cabinet secretaries in their appointed areas.
In the 2016 election, the Scottish National Party (SNP) won 63 of the 129 seats available. Nicola Sturgeon, the leader of the SNP, has been the first minister since November 2014. The Conservative Party became the largest opposition party in the 2016 elections, with the Labour Party, Liberal Democrats and the Green Party also represented in the Parliament. The next Scottish Parliament election is due to be held on 6 May 2021.
Scotland is represented in the British House of Commons by 59 MPs elected from territory-based Scottish constituencies. In the 2019 general election, the SNP won 48 of the 59 seats. This represented a significant increase from the 2017 general election, when the SNP won 35 seats. Conservative, Labour and Liberal Democrat parties also represent Scottish constituencies in the House of Commons. The next United Kingdom general election is scheduled for 2 May 2024. The Scotland Office represents the UK government in Scotland on reserved matters and represents Scottish interests within the UK government. The Scotland Office is led by the Secretary of State for Scotland, who sits in the Cabinet of the United Kingdom. Conservative MP Alister Jack has held the position since July 2019.
Devolved government relations
The relationships between the central UK Government and the devolved governments of Scotland, Wales and Northern Ireland are based on extra-statutory principles and agreements, with the main elements set out in a Memorandum of Understanding between the UK government and the devolved governments of Scotland, Wales and Northern Ireland. The MOU lays emphasis on the principles of good communication, consultation and co-operation.
Since devolution in 1999, Scotland has developed stronger working relations with the two other devolved governments, the Welsh Government and the Northern Ireland Executive. Whilst there are no formal concordats between the Scottish Government, Welsh Government and Northern Ireland Executive, ministers from each devolved government meet at several points throughout the year, at events such as the British-Irish Council, and also meet to discuss matters and issues that are devolved to each government. Scotland, along with the Welsh Government, the British Government and the Northern Ireland Executive, participates in the Joint Ministerial Committee (JMC), which allows each government to discuss policy issues together and work across governments to find solutions. The Scottish Government considers the successful re-establishment of the Plenary, and establishment of the Domestic fora, to be important facets of the relationship with the UK Government and the other devolved administrations.
In the aftermath of the United Kingdom's decision to withdraw from the European Union in 2016, the Scottish Government called for a joint approach from each of the devolved governments. In early 2017, the devolved governments met to discuss Brexit and agree on Brexit strategies from each devolved government, which led Theresa May to issue a statement claiming that the devolved governments would not have a central role in the decision-making process for Brexit, but that the UK Government planned to "fully engage" Scotland in talks alongside the governments of Wales and Northern Ireland.
Whilst foreign policy remains a reserved matter, the Scottish Government still has the power to promote Scotland, its economy and Scottish interests on the world stage, and to encourage foreign businesses and international, devolved, regional and central governments to invest in Scotland. Whilst the first minister usually undertakes a number of foreign and international visits to promote Scotland, international, European and Commonwealth relations are also included within the portfolios of both the Cabinet Secretary for Culture, Tourism and External Affairs (responsible for international development) and the Minister for International Development and Europe (responsible for European Union relations and international relations).
During the G8 Summit in 2005, First Minister Jack McConnell welcomed each head of government of the G8 nations to the country's Glasgow Prestwick Airport on behalf of then UK Prime Minister Tony Blair. At the same time, McConnell and the then Scottish Executive pioneered what would become the Scotland Malawi Partnership, which co-ordinates Scottish activities to strengthen existing links with Malawi. During McConnell's time as first minister, several of Scotland's international relationships, including Scottish–Russian relations, strengthened following a visit by President of Russia Vladimir Putin to Edinburgh. McConnell, speaking at the end, highlighted that the visit by Putin was a "post-devolution" step towards "Scotland regaining its international identity".
Under the Salmond administration, Scotland pursued trade and investment deals with countries such as China and Canada; Salmond established the Canada Plan 2010–2015, which aimed to strengthen "the important historical, cultural and economic links" between Canada and Scotland. To promote Scotland's interests and Scottish businesses in North America, there is a Scottish Affairs Office located in Washington, D.C. with the aim of promoting Scotland in both the United States and Canada.
During a 2017 visit to the United States, First Minister Nicola Sturgeon met with Jerry Brown, Governor of California; the two signed an agreement committing the Government of California and the Scottish Government to work together to tackle climate change, and Sturgeon signed a £6.3 million deal for Scottish investment from American businesses and firms promoting trade, tourism and innovation. During an official visit to the Republic of Ireland in 2016, Sturgeon claimed that it is "important for Ireland and Scotland and the whole of the British Isles that Ireland has a strong ally in Scotland". During the same engagement, Sturgeon became the first head of government to address the Seanad Éireann, the Upper House of the Irish Parliament.
A policy of devolution had been advocated by the three main UK parties with varying enthusiasm during recent history. A previous Labour leader, John Smith, described the revival of a Scottish parliament as the "settled will of the Scottish people". The devolved Scottish Parliament was created after a referendum in 1997 found majority support for both creating the Parliament and granting it limited powers to vary income tax.
The Scottish National Party (SNP), which supports Scottish independence, was first elected to form the Scottish Government in 2007. The new government established a "National Conversation" on constitutional issues, proposing a number of options such as increasing the powers of the Scottish Parliament, federalism, or a referendum on Scottish independence from the United Kingdom. In rejecting the last option, the three main opposition parties in the Scottish Parliament created a commission to investigate the distribution of powers between devolved Scottish and UK-wide bodies. The Scotland Act 2012, based on proposals by the commission, was subsequently enacted devolving additional powers to the Scottish Parliament.
In August 2009 the SNP proposed a bill to hold a referendum on independence in November 2010. Opposition from all other major parties led to an expected defeat. After the 2011 elections gave the SNP an overall majority in the Scottish Parliament, a referendum on independence for Scotland was held on 18 September 2014. The referendum resulted in a rejection of independence, by 55.3% to 44.7%. During the campaign, the three main parties in the UK Parliament pledged to extend the powers of the Scottish Parliament. An all-party commission chaired by Lord Smith of Kelvin was formed, which led to a further devolution of powers through the Scotland Act 2016.
Following a referendum on the UK's membership of the European Union on 23 June 2016, where a UK-wide majority voted to withdraw from the EU whilst a majority within Scotland voted to remain, Scotland's first minister, Nicola Sturgeon, announced that as a result a new independence referendum was "highly likely".
Historical subdivisions of Scotland included the mormaerdom, stewartry, earldom, burgh, parish, county and regions and districts. Some of these names are still sometimes used as geographical descriptors.
Modern Scotland is subdivided in various ways depending on the purpose. In local government, there have been 32 single-tier council areas since 1996, whose councils are responsible for the provision of all local government services. Decisions are made by councillors who are elected at local elections every five years. The head of each council is usually the Lord Provost alongside the Leader of the Council, with a Chief Executive being appointed as director of the council area. Community Councils are informal organisations that represent specific sub-divisions within each council area.
In the Scottish Parliament, there are 73 constituencies and eight regions. For the Parliament of the United Kingdom, there are 59 constituencies. Until 2013, the Scottish fire brigades and police forces were based on a system of regions introduced in 1975. For healthcare and postal districts, and a number of other governmental and non-governmental organisations such as the churches, there are other long-standing methods of subdividing Scotland for the purposes of administration.
Law and criminal justice
Scots law has a basis derived from Roman law, combining features of both uncodified civil law, dating back to the Corpus Juris Civilis, and common law with medieval sources. The terms of the Treaty of Union with England in 1707 guaranteed the continued existence of a separate legal system in Scotland from that of England and Wales. Prior to 1611, there were several regional law systems in Scotland, most notably Udal law in Orkney and Shetland, based on old Norse law. Various other systems derived from common Celtic or Brehon laws survived in the Highlands until the 1800s.
Scots law provides for three types of courts responsible for the administration of justice: civil, criminal and heraldic. The supreme civil court is the Court of Session, although civil appeals can be taken to the Supreme Court of the United Kingdom (or before 1 October 2009, the House of Lords). The High Court of Justiciary is the supreme criminal court in Scotland. The Court of Session is housed at Parliament House, in Edinburgh, which was the home of the pre-Union Parliament of Scotland with the High Court of Justiciary and the Supreme Court of Appeal currently located at the Lawnmarket. The sheriff court is the main criminal and civil court, hearing most cases. There are 49 sheriff courts throughout the country. District courts were introduced in 1975 for minor offences and small claims. These were gradually replaced by Justice of the Peace Courts from 2008 to 2010. The Court of the Lord Lyon regulates heraldry.
For three centuries the Scots legal system was unique for being the only national legal system without a parliament. This ended with the advent of the Scottish Parliament in 1999, which legislates for Scotland. Many features within the system have been preserved. Within criminal law, the Scots legal system is unique in having three possible verdicts: "guilty", "not guilty" and "not proven". Both "not guilty" and "not proven" result in an acquittal, typically with no possibility of retrial in accordance with the rule of double jeopardy. There is, however, the possibility of a retrial where new evidence emerges at a later date that might have proven conclusive in the earlier trial at first instance, where the person acquitted subsequently admits the offence or where it can be proved that the acquittal was tainted by an attempt to pervert the course of justice – see the provisions of the Double Jeopardy (Scotland) Act 2011. Many laws differ between Scotland and the other parts of the United Kingdom, and many terms differ for certain legal concepts. Manslaughter, in England and Wales, is broadly similar to culpable homicide in Scotland, and arson is called wilful fire raising. Indeed, some acts considered crimes in England and Wales, such as forgery, are not so in Scotland. Procedure also differs. Scots juries, sitting in criminal cases, consist of fifteen jurors, which is three more than is typical in many countries.
The Scottish Prison Service (SPS) manages the prisons in Scotland, which collectively house over 8,500 prisoners. The Cabinet Secretary for Justice is responsible for the Scottish Prison Service within the Scottish Government.
Health care in Scotland is mainly provided by NHS Scotland, Scotland's public health care system. This was founded by the National Health Service (Scotland) Act 1947 (later repealed by the National Health Service (Scotland) Act 1978) that took effect on 5 July 1948 to coincide with the launch of the NHS in England and Wales. However, even prior to 1948, half of Scotland's landmass was already covered by state-funded health care, provided by the Highlands and Islands Medical Service. Healthcare policy and funding is the responsibility of the Scottish Government's Health Directorates. The current Cabinet Secretary for Health and Sport is Jeane Freeman and the Director-General (DG) Health and chief executive, NHS Scotland is Paul Gray.
In 2008, the NHS in Scotland had around 158,000 staff including more than 47,500 nurses, midwives and health visitors and over 3,800 consultants. There are also more than 12,000 doctors, family practitioners and allied health professionals, including dentists, opticians and community pharmacists, who operate as independent contractors providing a range of services within the NHS in return for fees and allowances. These fees and allowances were removed in May 2010, and prescriptions are entirely free, although dentists and opticians may charge if the patient's household earns over a certain amount, about £30,000 per annum.
Scotland has a Western-style open mixed economy closely linked with the rest of the UK and the wider world. Traditionally, the Scottish economy was dominated by heavy industry underpinned by shipbuilding in Glasgow, coal mining and steel industries. Petroleum related industries associated with the extraction of North Sea oil have also been important employers from the 1970s, especially in the north-east of Scotland. De-industrialisation during the 1970s and 1980s saw a shift from a manufacturing focus towards a more service-oriented economy.
Scotland's gross domestic product (GDP), including oil and gas produced in Scottish waters, was estimated at £150 billion for the calendar year 2012. In 2014, Scotland's per capita GDP was one of the highest in the EU. As of April 2019 the Scottish unemployment rate was 3.3%, below the UK rate of 3.8%, and the Scottish employment rate was 75.9%.
Edinburgh is the financial services centre of Scotland, with many large finance firms based there, including Lloyds Banking Group (owners of HBOS), the Government-owned Royal Bank of Scotland and Standard Life. Edinburgh was ranked 15th in the list of world financial centres in 2007, but fell to 37th in 2012, following damage to its reputation, and in 2016 was ranked 56th out of 86.
In 2014, total Scottish exports (excluding intra-UK trade) were estimated to be £27.5 billion. Scotland's primary exports include whisky, electronics and financial services. The United States, Netherlands, Germany, France, and Norway constitute the country's major export markets.
Whisky is one of Scotland's best-known economic products. Exports increased by 87% in the decade to 2012 and were valued at £4.3 billion in 2013, which was 85% of Scotland's food and drink exports. It supports around 10,000 jobs directly and 25,000 indirectly. It may contribute £400–682 million to Scotland, rather than several billion pounds, as more than 80% of whisky produced is owned by non-Scottish companies.
A briefing published in 2002 by the Scottish Parliament Information Centre (SPICe) for the Scottish Parliament's Enterprise and Life Long Learning Committee stated that tourism accounted for up to 5% of GDP and 7.5% of employment.
Although the Bank of England is the central bank for the UK, three Scottish clearing banks issue Sterling banknotes: the Bank of Scotland, the Royal Bank of Scotland and the Clydesdale Bank. The value of the Scottish banknotes in circulation in 2013 was £3.8 billion, underwritten by the Bank of England using funds deposited by each clearing bank, under the Banking Act 2009, in order to cover the total value of such notes in circulation.
Of the money spent on UK defence, about £3.3 billion can be attributed to Scotland as of 2013. Although Scotland has a long military tradition predating the Treaty of Union with England, its armed forces now form part of the British Armed Forces, with the exception of the Atholl Highlanders, Europe's only legal private army. In 2006, the infantry regiments of the Scottish Division were amalgamated to form the Royal Regiment of Scotland. Other distinctively Scottish regiments in the British Army include the Scots Guards, the Royal Scots Dragoon Guards and the 154 (Scottish) Regiment RLC, an Army Reserve Regiment of the Royal Logistic Corps.
Because of their topography and perceived remoteness, parts of Scotland have housed many sensitive defence establishments. Between 1960 and 1991, the Holy Loch was a base for the US fleet of Polaris ballistic missile submarines. Today, Her Majesty's Naval Base Clyde, 25 miles (40 kilometres) north-west of Glasgow, is the base for the four Trident-armed Vanguard-class ballistic missile submarines that comprise the UK's nuclear deterrent. Scapa Flow was the major Fleet base for the Royal Navy until 1956.
A single front-line Royal Air Force base is located in Scotland. RAF Lossiemouth, located in Moray, is the most northerly air defence fighter base in the United Kingdom and is home to three fast-jet squadrons equipped with the Eurofighter Typhoon.
The Scottish education system has always been distinct from the rest of the United Kingdom, with a characteristic emphasis on a broad education. In the 15th century, the Humanist emphasis on education culminated in the passing of the Education Act 1496, which decreed that all sons of barons and freeholders of substance should attend grammar schools to learn "perfyct Latyne", resulting in an increase in literacy among a male and wealthy elite. In the Reformation, the 1560 First Book of Discipline set out a plan for a school in every parish, but this proved financially impossible. In 1616 an act of the Privy Council commanded every parish to establish a school. By the late seventeenth century there was a largely complete network of parish schools in the lowlands, but in the Highlands basic education was still lacking in many areas. Education remained a matter for the church rather than the state until the Education (Scotland) Act 1872.
The Curriculum for Excellence, Scotland's national school curriculum, presently provides the curricular framework for children and young people from age 3 to 18. All 3- and 4-year-old children in Scotland are entitled to a free nursery place. Formal primary education begins at approximately 5 years old and lasts for 7 years (P1–P7); children in Scotland study Standard Grades or Intermediate qualifications between the ages of 14 and 16. These are being phased out and replaced by the National Qualifications of the Curriculum for Excellence. The school leaving age is 16, after which students may choose to remain at school and study for Access, Intermediate or Higher Grade and Advanced Higher qualifications. A small number of students at certain private, independent schools may follow the English system and study towards GCSEs and A and AS-Levels instead.
There are fifteen Scottish universities, some of which are amongst the oldest in the world. These include the University of St Andrews, the University of Glasgow, the University of Aberdeen and the University of Edinburgh—many of which are ranked amongst the best in the UK. Scotland had more universities per capita in QS' World University Rankings' top 100 in 2012 than any other nation. The country produces 1% of the world's published research with less than 0.1% of the world's population, and higher education institutions account for 9% of Scotland's service sector exports. Scotland's University Courts are the only bodies in Scotland authorised to award degrees.
Tuition is handled by the Student Awards Agency Scotland (SAAS), which does not charge fees to what it defines as "Young Students". Young Students are defined as those under 25, without children, marriage, civil partnership or cohabiting partner, who have not been outside of full-time education for more than three years. Fees exist for those outside the young student definition, typically from £1,200 to £1,800 for undergraduate courses, dependent on year of application and type of qualification. Postgraduate fees can be up to £3,400. The system has been in place since 2007 when graduate endowments were abolished. Labour's education spokesperson Rhona Brankin criticised the Scottish system for failing to address student poverty.
Scotland's universities are complemented in the provision of Further and Higher Education by 43 colleges. Colleges offer National Certificates, Higher National Certificates, and Higher National Diplomas. These Group Awards, alongside Scottish Vocational Qualifications, aim to ensure Scotland's population has the appropriate skills and knowledge to meet workplace needs. In 2014, research reported by the Office for National Statistics found that Scotland was the most highly educated country in Europe and among the most well-educated in the world in terms of tertiary education attainment, with roughly 40% of people in Scotland aged 16–64 educated to NVQ level 4 and above. Based on the original data for EU statistical regions, all four Scottish regions ranked significantly above the European average for completion of tertiary-level education by 25- to 64-year-olds.
Kilmarnock Academy in East Ayrshire is one of only two schools in the UK, and the only school in Scotland, to have educated two Nobel Prize Laureates – Alexander Fleming, discoverer of Penicillin, and John Boyd Orr, 1st Baron Boyd-Orr, for his scientific research into nutrition and his work as the first Director-General of the United Nations Food and Agriculture Organization (FAO).
Scottish music is a significant aspect of the nation's culture, with both traditional and modern influences. A famous traditional Scottish instrument is the Great Highland bagpipe, a wind instrument consisting of three drones and a melody pipe (called the chanter), which are fed continuously by a reservoir of air in a bag. Bagpipe bands, featuring bagpipes and various types of drums, and showcasing Scottish music styles while creating new ones, have spread throughout the world. The clàrsach (harp), fiddle and accordion are also traditional Scottish instruments, the latter two heavily featured in Scottish country dance bands. There are many successful Scottish bands and individual artists in varying styles including Annie Lennox, Amy Macdonald, Runrig, Belle and Sebastian, Boards of Canada, Camera Obscura, Cocteau Twins, Deacon Blue, Franz Ferdinand, Susan Boyle, Emeli Sandé, Texas, The View, The Fratellis, Twin Atlantic and Biffy Clyro. Other Scottish musicians include Shirley Manson, Paolo Nutini, Andy Stewart and Calvin Harris.
Scotland has a literary heritage dating back to the early Middle Ages. The earliest extant literature composed in what is now Scotland was in Brythonic speech in the 6th century, but is preserved as part of Welsh literature. Later medieval literature included works in Latin, Gaelic, Old English and French. The first surviving major text in Early Scots is the 14th-century poet John Barbour's epic Brus, focusing on the life of Robert I, and was soon followed by a series of vernacular romances and prose works. In the 16th century, the crown's patronage helped the development of Scots drama and poetry, but the accession of James VI to the English throne removed a major centre of literary patronage and Scots was sidelined as a literary language. Interest in Scots literature was revived in the 18th century by figures including James Macpherson, whose Ossian Cycle made him the first Scottish poet to gain an international reputation and was a major influence on the European Enlightenment. It was also a major influence on Robert Burns, whom many consider the national poet, and Walter Scott, whose Waverley Novels did much to define Scottish identity in the 19th century. Towards the end of the Victorian era a number of Scottish-born authors achieved international reputations as writers in English, including Robert Louis Stevenson, Arthur Conan Doyle, J. M. Barrie and George MacDonald. In the 20th century the Scottish Renaissance saw a surge of literary activity and attempts to reclaim the Scots language as a medium for serious literature. Members of the movement were followed by a new generation of post-war poets including Edwin Morgan, who would be appointed the first Scots Makar by the inaugural Scottish government in 2004. From the 1980s Scottish literature enjoyed another major revival, particularly associated with a group of writers including Irvine Welsh. Scottish poets who emerged in the same period included Carol Ann Duffy, who, in May 2009, was the first Scot named UK Poet Laureate.
As one of the Celtic nations, Scotland and Scottish culture are represented at interceltic events at home and around the world. Scotland hosts several music festivals including Celtic Connections (Glasgow), and the Hebridean Celtic Festival (Stornoway). Festivals celebrating Celtic culture, such as Festival Interceltique de Lorient (Brittany), the Pan Celtic Festival (Ireland), and the National Celtic Festival (Portarlington, Australia), feature elements of Scottish culture such as language, music and dance.
The image of St. Andrew, martyred while bound to an X-shaped cross, first appeared in the Kingdom of Scotland during the reign of William I. Following the death of King Alexander III in 1286 an image of Andrew was used on the seal of the Guardians of Scotland who assumed control of the kingdom during the subsequent interregnum. Use of a simplified symbol associated with Saint Andrew, the saltire, has its origins in the late 14th century; the Parliament of Scotland decreeing in 1385 that Scottish soldiers should wear a white Saint Andrew's Cross on the front and back of their tunics. Use of a blue background for the Saint Andrew's Cross is said to date from at least the 15th century. Since 1606 the saltire has also formed part of the design of the Union Flag. There are numerous other symbols and symbolic artefacts, both official and unofficial, including the thistle, the nation's floral emblem (celebrated in the song, The Thistle o' Scotland), the Declaration of Arbroath, incorporating a statement of political independence made on 6 April 1320, the textile pattern tartan that often signifies a particular Scottish clan and the royal Lion Rampant flag. Highlanders can thank James Graham, 3rd Duke of Montrose, for the repeal in 1782 of the Act of 1747 prohibiting the wearing of tartans.
Although there is no official national anthem of Scotland, Flower of Scotland is played on special occasions and sporting events such as football and rugby matches involving the Scotland national teams and since 2010 is also played at the Commonwealth Games after it was voted the overwhelming favourite by participating Scottish athletes. Other currently less popular candidates for the National Anthem of Scotland include Scotland the Brave, Highland Cathedral, Scots Wha Hae and A Man's A Man for A' That.
St Andrew's Day, 30 November, is the national day, although Burns' Night tends to be more widely observed, particularly outside Scotland. In 2006, the Scottish Parliament passed the St Andrew's Day Bank Holiday (Scotland) Act 2007, designating the day an official bank holiday. Tartan Day is a recent innovation from Canada.
Scottish cuisine has distinctive attributes and recipes of its own but shares much with wider British and European cuisine as a result of local and foreign influences, both ancient and modern. Traditional Scottish dishes exist alongside international foodstuffs brought about by migration. Scotland's natural larder of game, dairy products, fish, fruit, and vegetables is the chief factor in traditional Scots cooking, with a high reliance on simplicity and a lack of spices from abroad, as these were historically rare and expensive. Irn-Bru is the most common Scottish carbonated soft drink, often described as "Scotland's other national drink" (after whisky). During the Late Middle Ages and early modern era, French cuisine played a role in Scottish cookery due to cultural exchanges brought about by the "Auld Alliance", especially during the reign of Mary, Queen of Scots. Mary, on her return to Scotland, brought an entourage of French staff who are considered responsible for revolutionising Scots cooking and for some of Scotland's unique food terminology.
National newspapers such as the Daily Record, The Herald, The Scotsman and The National are all produced in Scotland. Important regional dailies include the Evening News in Edinburgh, The Courier in Dundee in the east, and The Press and Journal serving Aberdeen and the north. Scotland is represented at the Celtic Media Festival, which showcases film and television from the Celtic countries. Scottish entrants have won many awards since the festival began in 1980.
Television in Scotland is largely the same as UK-wide broadcasts; however, the national broadcaster is BBC Scotland, a constituent part of the British Broadcasting Corporation, the publicly funded broadcaster of the United Kingdom. It runs three national television stations, and the national radio stations, BBC Radio Scotland and BBC Radio nan Gàidheal, amongst others. Scotland also has some programming in the Gaelic language. BBC Alba is the national Gaelic-language channel. The main Scottish commercial television station is STV, which broadcasts on two of the three ITV regions of Scotland.
Scotland has a number of production companies which produce films and television programmes for Scottish, UK and international audiences. Popular films associated with Scotland through Scottish production or being filmed in Scotland include Braveheart (1995), Highlander (1986), Trainspotting (1996), Red Road (2006), Neds (2010), The Angel's Share (2012), Brave (2012) and Outlaw King (2018). Popular television programmes associated with Scotland include the long running BBC Scotland soap opera River City which has been broadcast since 2002, Still Game, a popular Scottish sitcom broadcast throughout the United Kingdom (2002–2007, revived in 2016), Rab C. Nesbitt, Two Doors Down and Take the High Road.
Wardpark Studios in Cumbernauld is one of Scotland's television and film production studios, where the television programme Outlander is produced. Dumbarton Studios, located in Dumbarton, is largely used for BBC Scotland programming, including the filming and production of television programmes such as Still Game, River City, Two Doors Down and Shetland.
Scotland hosts its own national sporting competitions and has independent representation at several international sporting events, including the FIFA World Cup, the Rugby Union World Cup, the Rugby League World Cup, the Cricket World Cup, the Netball World Cup and the Commonwealth Games. Scotland has its own national governing bodies, such as the Scottish Football Association (the second oldest national football association in the world) and the Scottish Rugby Union. Variations of football have been played in Scotland for centuries, with the earliest reference dating back to 1424.
The world's first official international association football match was held in 1872 and was the idea of C. W. Alcock of the Football Association which was seeking to promote Association Football in Scotland. The match took place at the West of Scotland Cricket Club's Hamilton Crescent ground in the Partick area of Glasgow. The match was between Scotland and England and resulted in a 0–0 draw. Following this, the newly developed football became the most popular sport in Scotland. The Scottish Cup was first contested in 1873. Queen's Park F.C., in Glasgow, is probably the oldest association football club in the world outside England.
The Scottish Football Association (SFA), the second-oldest national football association in the world, is the main governing body for Scottish association football, and a founding member of the International Football Association Board (IFAB), which governs the Laws of the Game. As a result of this key role in the development of the sport, Scotland is one of only four countries to have a permanent representative on the IFAB; the other four representatives are appointed for set periods by FIFA.
The SFA also has responsibility for the Scotland national football team, whose supporters are commonly known as the "Tartan Army". As of December 2019, Scotland are ranked as the 50th best national football team in the FIFA World Rankings. The national team last attended the World Cup in France in 1998, but finished last in their group stage. The Scotland women's team have achieved more recent success, qualifying for both Euro 2017 and the 2019 World Cup. As of December 2019, they were ranked as the 22nd best women's national team in the FIFA Rankings.
Scottish clubs have achieved some success in European competitions, with Celtic winning the European Cup in 1967, Rangers and Aberdeen winning the UEFA Cup Winners' Cup in 1972 and 1983 respectively, and Aberdeen also winning the UEFA Super Cup in 1983. Celtic, Rangers and Dundee United have also reached European finals, the most recent of these being Rangers in 2008.
With the modern game of golf originating in 15th-century Scotland, the country is promoted as the home of golf. To many golfers the Old Course in the Fife town of St Andrews, an ancient links course dating to before 1552, is considered a site of pilgrimage. In 1764, the standard 18-hole golf course was created at St Andrews when members modified the course from 22 to 18 holes. The world's oldest golf tournament, and golf's first major, is The Open Championship, which was first played on 17 October 1860 at Prestwick Golf Club, in Ayrshire, Scotland, with Scottish golfers winning the earliest majors. There are many other famous golf courses in Scotland, including Carnoustie, Gleneagles, Muirfield, and Royal Troon.
Other distinctive features of the national sporting culture include the Highland games, curling and shinty. In boxing, Scotland has had 13 world champions, including Ken Buchanan, Benny Lynch and Jim Watt.
Scotland has competed at every Commonwealth Games since 1930 and has won 356 medals in total—91 Gold, 104 Silver and 161 Bronze. Edinburgh played host to the Commonwealth Games in 1970 and 1986, and most recently Glasgow in 2014.
Scotland's primary sources of energy are renewable energy (42%), nuclear (35%) and fossil fuel generation (22%).
The Scottish Government has a target to have the equivalent of 50% of the energy for Scotland's heat, transport and electricity consumption to be supplied from renewable sources by 2030.
Scotland has five international airports operating scheduled services to Europe, North America and Asia, as well as domestic services to England, Northern Ireland and Wales.
Highlands and Islands Airports operates eleven airports across the Highlands, Orkney, Shetland and the Western Isles, which are primarily used for short distance, public service operations, although Inverness Airport has a number of scheduled flights to destinations across the UK and mainland Europe.
Edinburgh Airport is currently Scotland's busiest airport handling over 13 million passengers in 2017. It is also the UK's 6th busiest airport.
Four airlines are based in Scotland.
Network Rail owns and operates the fixed infrastructure assets of the railway system in Scotland, while the Scottish Government retains overall responsibility for rail strategy and funding in Scotland. Scotland's rail network has around 350 railway stations and 3,000 kilometres (1,900 mi) of track. Over 89.3 million passenger journeys are made each year.
The East Coast and West Coast main railway lines connect the major cities and towns of Scotland with each other and with the rail network in England. London North Eastern Railway provides inter-city rail journeys between Glasgow, Edinburgh, Aberdeen and Inverness to London. Domestic rail services within Scotland are operated by Abellio ScotRail. During the time of British Rail, the West Coast Main Line from London Euston to Glasgow Central was electrified in the early 1970s, followed by the East Coast Main Line in the late 1980s. British Rail created the ScotRail brand. When British Rail existed, many railway lines in Strathclyde were electrified. Strathclyde Passenger Transport Executive was at the forefront with the acclaimed "largest electrified rail network outside London". Some parts of the network are electrified, but there are no electrified lines in the Highlands, Angus, Aberdeenshire, the cities of Dundee or Aberdeen, or Perth & Kinross, and none of the islands has a rail link (although the railheads at Kyle of Lochalsh and Mallaig principally serve the islands).
The East Coast Main Line crosses the Firth of Forth by the Forth Bridge. Completed in 1890, this cantilever bridge has been described as "the one internationally recognised Scottish landmark". Scotland's rail network is managed by Transport Scotland.
Regular ferry services operate between the Scottish mainland and outlying islands. Ferries serving both the inner and outer Hebrides are principally operated by the state-owned enterprise Caledonian MacBrayne.
Services to the Northern Isles are operated by Serco. Other routes, served by multiple companies, connect southwest Scotland to Northern Ireland. DFDS Seaways operated a freight-only Rosyth – Zeebrugge ferry service, until a fire damaged the vessel DFDS were using. A passenger service was also operated between 2002 and 2010.
Additional routes are operated by local authorities.
- "St Andrew—Quick Facts". Scotland. org—The Official Online Gateway. Archived from the original on 11 November 2007. Retrieved 2 December 2007.
- "St Andrew". Catholic Online. Retrieved 15 November 2011.
- "St Margaret of Scotland". Catholic Online. Retrieved 15 November 2011.
- "Patron saints". Catholic Online. Retrieved 15 November 2011.
- "St Columba". Catholic Online. Retrieved 15 November 2011.
- "Ethnic groups, Scotland, 2001 and 2011" (PDF). The Scottish Government. 2013. Retrieved 9 December 2013.
- Other religion"Analysis of Religion in the 2001 Census - gov.scot". www.gov.scot. Retrieved 8 October 2019.
- Scotland's Census (27 March 2011). "Scotland's Census 2011 – National Records of Scotland" (PDF). Scotland's Census. Retrieved 8 October 2019.
- Region and Country Profiles, Key Statistics and Profiles, October 2013, ONS. Retrieved 9 August 2015.
- Jonathan, McMullan (28 June 2018). "Population estimates for UK, England and Wales, Scotland and Northern Ireland". Ons.gov.uk. Office for National Statistics.
- "Population estimates by sex, age and administrative area, Scotland, 2011 and 2012". National Records of Scotland. 8 August 2013. Retrieved 8 August 2013.
- Office for National Statistics. "Regional gross value added (income approach), UK: 1997 to 2017, December 2015". Retrieved 24 April 2017.
- "Sub-national HDI – Area Database – Global Data Lab". hdi.globaldatalab.org. Retrieved 13 September 2018.
- "European Charter for Regional or Minority Languages". Scottish Government. Retrieved 23 October 2011. [dead link]
- Macleod, Angus "Gaelic given official status" (22 April 2005) The Times. London. Retrieved 2 August 2007.
- "Scotland becomes first part of UK to recognise signing for deaf as official language". Herald Scotland. 2015. Retrieved 17 January 2016.
- "The Countries of the UK". Office for National Statistics. 6 April 2010. Retrieved 24 June 2012.
- "Countries within a country". 10 Downing Street. Archived from the original on 16 April 2010. Retrieved 24 August 2008.
The United Kingdom is made up of four countries: England, Scotland, Wales and Northern Ireland
- "ISO 3166-2 Newsletter Date: 28 November 2007 No I-9. "Changes in the list of subdivision names and code elements" (Page 11)" (PDF). International Organization for Standardization codes for the representation of names of countries and their subdivisions – Part 2: Country subdivision codes. Retrieved 31 May 2008.
SCT Scotland country
- "Scottish Executive Resources" (PDF). Scotland in Short. Scottish Executive. 17 February 2007. Retrieved 14 September 2006.
- Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins.
- Mackie, J.D. (1969) A History of Scotland. London. Penguin.
- "Parliament and Ireland". London: The Houses of Parliament. Retrieved 26 December 2016.
- Collier, J. G. (2001) Conflict of Laws (Third edition)(pdf) Cambridge University Press. "For the purposes of the English conflict of laws, every country in the world which is not part of England and Wales is a foreign country and its foreign laws. This means that not only totally foreign independent countries such as France or Russia ... are foreign countries but also British Colonies such as the Falkland Islands. Moreover, the other parts of the United Kingdom – Scotland and Northern Ireland – are foreign countries for present purposes, as are the other British Islands, the Isle of Man, Jersey and Guernsey."
- Devine, T. M. (1999), The Scottish Nation 1700–2000, P.288–289, ISBN 0-14-023004-1 "created a new and powerful local state run by the Scottish bourgeoisie and reflecting their political and religious values. It was this local state, rather than a distant and usually indifferent Westminster authority, that in effect routinely governed Scotland"
- "Devolution Settlement, Scotland". gov.uk. Retrieved 7 May 2017.
- "Cabinet and ministers". Gov.scot. Retrieved 3 January 2019.
- "Scottish MEPs". Europarl.org.uk. Archived from the original on 1 May 2014. Retrieved 26 May 2014.
- "Scotland / Alba". British-Irish Council. 7 December 2011. Retrieved 4 May 2013.
- "Members". British-Irish Parliamentary Assembly. Retrieved 1 August 2018.
- "Scottish Local Government". Cosla.gov.uk. Retrieved 3 January 2019.
- P. Freeman, Ireland and the Classical World, Austin, 2001, pp. 93.
- Gwynn, Stephen (July 2009). The History Of Ireland. ISBN 9781113155177. Retrieved 17 September 2014.
- Ayto, John; Ian Crofton (2005). Brewer's Britain & Ireland: The History, Culture, Folklore and Etymology of 7500 Places in These Islands. WN. ISBN 978-0-304-35385-9.
- The earliest known evidence is a flint arrowhead from Islay. See Moffat, Alistair (2005) Before Scotland: The Story of Scotland Before History. London. Thames & Hudson. Page 42.
- Forsyth, Katherine (2005). "Origins: Scotland to 1100". In Wormald, Jenny (ed.). Scotland: A History. Oxford: Oxford University Press. ISBN 9780199601646.
- Pryor, Francis (2003). Britain BC. London: HarperPerennial. pp. 98–104 & 246–250. ISBN 978-0-00-712693-4.
- Houston, Rab (2008). Scotland: A Very Short Introduction. Oxford: Oxford University Press. ISBN 9780191578861.
- Hanson, William S. The Roman Presence: Brief Interludes, in Edwards, Kevin J. & Ralston, Ian B.M. (Eds) (2003). Scotland After the Ice Age: Environment, Archeology and History, 8000 BC—AD 1000. Edinburgh. Edinburgh University Press.
- Robertson, Anne S. (1960). The Antonine Wall. Glasgow Archaeological Society.
- Keys, David (27 June 2018). "Ancient Roman 'hand of god' discovered near Hadrian's Wall sheds light on biggest combat operation ever in UK". Independent. Retrieved 6 July 2018.
- Brown, Dauvit (2001). "Kenneth mac Alpin". In M. Lynch (ed.). The Oxford Companion to Scottish History. Oxford: Oxford University Press. p. 359. ISBN 978-0-19-211696-3.
- Stringer, Keith (2005). "The Emergence of a Nation-State, 1100–1300". In Wormald, Jenny (ed.). Scotland: A History. Oxford: Oxford University Press. ISBN 9780199601646.
- "Scotland Conquered, 1174–1296". National Archives.
- "Scotland Regained, 1297–1328". National Archives of the United Kingdom.
- Murison, A. F. (1899). King Robert the Bruce (reprint 2005 ed.). Kessinger Publishing. p. 30. ISBN 978-1-4179-1494-4.
- Brown, Michael; Boardman, Steve (2005). "Survival and Revival: Late Medieval Scotland". In Wormald, Jenny (ed.). Scotland: A History. Oxford: Oxford University Press. ISBN 9780199601646.
- Mason, Roger (2005). "Renaissance and Reformation: The Sixteenth Century". In Wormald, Jenny (ed.). Scotland: A History. Oxford: Oxford University Press. ISBN 9780199601646.
- "James IV, King of Scots 1488–1513". BBC.
- "Battle of Flodden, (Sept. 9, 1513)". Encyclopædia Britannica.
- "Religion, Marriage and Power in Scotland, 1503–1603". The National Archives of the United Kingdom.
- Ross, David (2002). Chronology of Scottish History. Geddes & Grosset. p. 56. ISBN 978-1-85534-380-1.
1603: James VI becomes James I of England in the Union of the Crowns, and leaves Edinburgh for London
- Devine, T M (2018). The Scottish Clearances: A History of the Dispossessed, 1600-1900. London: Allen Lane. ISBN 978-0241304105.
- Wormald, Jenny (2005). "Confidence and Perplexity: The Seventeenth Century". In Wormald, Jenny (ed.). Scotland: A History. Oxford: Oxford University Press. ISBN 9780199601646.
- "Dictionary of Battles and Sieges: A-E". Dennis E. Showalter (2007). Springer. p.41
- Cullen, Karen J. (15 February 2010). Famine in Scotland: The 'ill Years' of The 1690s. Edinburgh University Press. pp. 152–3. ISBN 978-0748638871.
- "Why did the Scottish parliament accept the Treaty of Union?" (PDF). Scottish Affairs. Archived from the original (PDF) on 3 October 2011. Retrieved 1 May 2013.
- "Popular Opposition to the Ratification of the Treaty of Anglo-Scottish Union in 1706–7". scottishhistorysociety.com. Scottish Historical Society. Retrieved 23 March 2017.
- Devine, T. M. (1999). The Scottish Nation 1700–2000. Penguin Books. p. 9. ISBN 978-0-14-023004-8.
From that point on anti-union demonstrations were common in the capital. In November rioting spread to the south west, that stronghold of strict Calvinism and covenanting tradition. The Glasgow mob rose against union sympathisers in disturbances that lasted intermittently for over a month
- "Act of Union 1707 Mob unrest and disorder". London: The House of Lords. 2007. Archived from the original on 1 January 2008. Retrieved 23 December 2007.
- Robert, Joseph C (1976). "The Tobacco Lords: A study of the Tobacco Merchants of Glasgow and their Activities". The Virginia Magazine of History and Biography. 84 (1): 100–102. JSTOR 4248011.
- "Some Dates in Scottish History from 1745 to 1914 Archived 31 October 2013 at the Wayback Machine", The University of Iowa.
- "Enlightenment Scotland". Learning and Teaching Scotland.
- Neil Davidson(2000). The Origins of Scottish Nationhood. London: Pluto Press. pp. 94–95.
- Devine, T M (1994). Clanship to Crofters' War: The social transformation of the Scottish Highlands (2013 ed.). Manchester University Press. ISBN 978-0-7190-9076-9.
- T. M. Devine and R. J. Finlay, Scotland in the Twentieth Century (Edinburgh: Edinburgh University Press, 1996), pp. 64–5.
- F. Requejo and K-J Nagel, Federalism Beyond Federations: Asymmetry and Processes of Re-symmetrization in Europe (Aldershot: Ashgate, 2011), p. 39.
- R. Quinault, "Scots on Top? Tartan Power at Westminster 1707–2007", History Today, 2007 57(7): 30–36. ISSN 0018-2753 Fulltext: Ebsco.
- K. Kumar, The Making of English National Identity (Cambridge: Cambridge University Press, 2003), p. 183.
- D. Howell, British Workers and the Independent Labour Party, 1888–1906 (Manchester: Manchester University Press, 1984), p. 144.
- J. F. MacKenzie, "The second city of the Empire: Glasgow – imperial municipality", in F. Driver and D. Gilbert, eds, Imperial Cities: Landscape, Display and Identity (2003), pp. 215–23.
- J. Shields, Clyde Built: a History of Ship-Building on the River Clyde (1949).
- C. H. Lee, Scotland and the United Kingdom: the Economy and the Union in the Twentieth Century (1995), p. 43.
- M. Magnusson (10 November 2003), "Review of James Buchan, Capital of the Mind: how Edinburgh Changed the World", New Statesman, archived from the original on 29 May 2011
- E. Wills, Scottish Firsts: a Celebration of Innovation and Achievement (Edinburgh: Mainstream, 2002).
- K. S. Whetter (2008), Understanding Genre and Medieval Romance, Ashgate, p. 28
- N. Davidson (2000), The Origins of Scottish Nationhood, Pluto Press, p. 136
- "Cultural Profile: 19th and early 20th century developments", Visiting Arts: Scotland: Cultural Profile, archived from the original on 5 November 2011
- Stephan Tschudi-Madsen, The Art Nouveau Style: a Comprehensive Guide (Courier Dover, 2002), pp. 283–4.
- J. L. Roberts, The Jacobite Wars, pp. 193–5.
- M. Sievers, The Highland Myth as an Invented Tradition of 18th and 19th century and Its Significance for the Image of Scotland (GRIN Verlag, 2007), pp. 22–5.
- P. Morère, Scotland and France in the Enlightenment (Bucknell University Press, 2004), pp. 75–6.
- William Ferguson, The identity of the Scottish Nation: an Historic Quest (Edinburgh: Edinburgh University Press, 1998), p. 227.
- Divine, Scottish Nation pp. 292–95.
- E. Richards, The Highland Clearances: People, Landlords and Rural Turmoil (2008).
- A. K. Cairncross, The Scottish Economy: A Statistical Account of Scottish Life by Members of the Staff of Glasgow University (Glasgow: Glasgow University Press, 1953), p. 10.
- R. A. Houston and W. W. Knox, eds, The New Penguin History of Scotland (Penguin, 2001), p. xxxii.
- G. Robb, "Popular Religion and the Christianization of the Scottish Highlands in the Eighteenth and Nineteenth Centuries", Journal of Religious History, 1990, 16(1): 18–34.
- J. T. Koch, Celtic Culture: a Historical Encyclopedia, Volumes 1–5 (ABC-CLIO, 2006), pp. 416–7.
- T. M. Devine, The Scottish Nation, pp. 91–100.
- Paul L. Robertson, "The Development of an Urban University: Glasgow, 1860–1914", History of Education Quarterly, Winter 1990, vol. 30 (1), pp. 47–78.
- M. F. Rayner-Canham and G. Rayner-Canham, Chemistry was Their Life: Pioneering British Women Chemists, 1880–1949, (Imperial College Press, 2008), p. 264.
- Warren, Charles R. (2009). Managing Scotland's environment (2nd ed., completely rev. and updated ed.). Edinburgh: Edinburgh University Press. pp. 45 ff., 179 ff. ISBN 9780748630639. OCLC 647881331.
- Glass, Jayne (2013). Lairds, Land and Sustainability: Scottish Perspectives on Upland Management. Edinburgh: Edinburgh University Press. pp. 45 ff., 77 f. ISBN 9780748685882. OCLC 859160940.
- Richard J. Finlay, Modern Scotland 1914–2000 (2006), pp 1–33
- R. A. Houston and W. W. J. Knox, eds. The New Penguin History of Scotland (2001) p 426. Niall Ferguson points out in "The Pity of War" that the proportion of enlisted Scots who died was third highest in the war behind Serbia and Turkey and a much higher proportion than in other parts of the UK.
- Iain McLean, The Legend of Red Clydeside (1983)
- Finlay, Modern Scotland 1914–2000 (2006), pp 34–72
- Richard J. Finlay, "National identity in Crisis: Politicians, Intellectuals and the 'End of Scotland', 1920–1939", History, June 1994, Vol. 79 Issue 256, pp 242–59
- "Primary History – World War 2 – Scotland's Blitz". BBC. Retrieved 1 August 2018.
- "Scotland's Landscape : Clydebank Blitz". BBC. Retrieved 1 August 2018.
- J. Leasor Rudolf Hess: The Uninvited Envoy (Kelly Bray: House of Stratus, 2001), ISBN 0-7551-0041-7, p. 15.
- Evans 2008, p. 168.
- Sereny 1996, p. 240.
- P. Wykeham, Fighter Command (Manchester: Ayer, rpt., 1979), ISBN 0-405-12209-8, p. 87.
- J. Buchanan, Scotland (Langenscheidt, 3rd edn., 2003), ISBN 981-234-950-2, p. 51.
- J. Creswell, Sea Warfare 1939–1945 (Berkeley, University of California Press, 2nd edn., 1967), p. 52.
- D. Howarth, The Shetland Bus: A WWII Epic of Escape, Survival, and Adventure (Guilford, Delaware: Lyons Press, 2008), ISBN 1-59921-321-4.
- Harvie, Christopher No Gods and Precious Few Heroes (Edward Arnold, 1989) pp 54–63.
- Stewart, Heather (6 May 2007). "Celtic Tiger Burns Brighter at Holyrood". The Guardian.
- "National Planning Framework for Scotland". Gov.scot. Retrieved 17 September 2014.
- Torrance, David (30 March 2009). "Modern myth of a poll tax test-bed lives on". The Scotsman. Retrieved 19 September 2017.
- "The poll tax in Scotland 20 years on". BBC News. BBC. 1 April 2009. Retrieved 17 September 2014.
- "The Scotland Act 1998" Office of Public Sector Information. Retrieved 22 April 2008.
- "Devolution > Scottish responsibilities" Scottish Government publication, (web-page last updated November 2010)
- "Special Report | 1999 | 06/99 | Scottish Parliament opening | Scotland's day of history". BBC News. 4 July 1999. Retrieved 1 August 2018.
- "Donald Dewar dies after fall". The Independent. 11 October 2000. Retrieved 1 August 2018.
- "UK | Scotland | Guide to opening of Scottish Parliament". BBC News. 6 October 2004. Retrieved 1 August 2018.
- Severin Carrell, Scotland correspondent (6 May 2011). "Salmond hails 'historic' victory as SNP secures Holyrood's first ever majority | Politics". The Guardian. Retrieved 1 August 2018.
- "Scottish independence referendum – Results". BBC News. 19 September 2014. Retrieved 1 August 2018.
- Whitaker's Almanack (1991) London. J. Whitaker and Sons.
- North Channel, Encyclopædia Britannica. Retrieved 2 May 2016.
- "Uniting the Kingdoms?". Nationalarchives.gov.uk. Retrieved 17 September 2014.
- See "Centre of Scotland" Newtonmore.com. Retrieved 7 September 2012.
- Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins. Pages 734 and 930.
- "Tay". Encarta. Archived from the original on 17 May 2008. Retrieved 21 March 2008.
- "Southern Uplands". Tiscali.co.uk. 16 November 1990. Archived from the original on 28 November 2004. Retrieved 11 June 2009.
- "Education Scotland – Standard Grade Bitesize Revision – Ask a Teacher – Geography – Physical – Question From PN". BBC. Retrieved 11 June 2009.
- "Scotland Today " ITKT". Intheknowtraveler.com. 28 December 2006. Archived from the original on 6 January 2007. Retrieved 11 June 2009.
- Murray, W.H. (1973) The Islands of Western Scotland. London. Eyre Methuen ISBN 978-0-413-30380-6
- Murray, W.H. (1968) The Companion Guide to the West Highlands of Scotland. London. Collins. ISBN 0-00-211135-7
- Johnstone, Scott et al. (1990) The Corbetts and Other Scottish Hills. Edinburgh. Scottish Mountaineering Trust. Page 9.
- "BBC Weather: UK Records". BBC.co.uk. Archived from the original on 2 December 2010. Retrieved 21 September 2007. The same temperature was also recorded in Braemar on 10 January 1982 and at Altnaharra, Highland, on 30 December 1995.
- "Weather extremes". Met Office. Retrieved 23 March 2017.
- "Western Scotland: climate". Archived from the original on 8 October 2014. Retrieved 17 September 2014.
- "Eastern Scotland: climate". Archived from the original on 8 October 2014. Retrieved 17 September 2014.
- "Scottish Weather Part One". BBC. Archived from the original on 26 January 2011. Retrieved 21 September 2007.
- Fraser Darling, F. & Boyd, J. M. (1969) Natural History in the Highlands and Islands. London. Bloomsbury.
- Benvie, Neil (2004) Scotland's Wildlife. London. Aurum Press. ISBN 1-85410-978-2 p. 12.
- "State of the Park Report. Chapter 2: Natural Resources"(pdf) (2006) Cairngorms National Park Authority. Retrieved 14 October 2007.
- Preston, C. D., Pearman, D. A., & Dines, T. D. (2002) New Atlas of the British and Irish Flora. Oxford University Press.
- Gooders, J. (1994) Field Guide to the Birds of Britain and Ireland. London. Kingfisher.
- Matthews, L. H. (1968) British Mammals. London. Bloomsbury.
- WM Adams (2003). Future nature:a vision for conservation. p. 30. ISBN 978-1-85383-998-6. Retrieved 10 January 2011.
- "East Scotland Sea Eagles" RSPB. Retrieved 3 January 2014.
- Ross, John (29 December 2006). "Mass slaughter of the red kites". The Scotsman. Edinburgh.
- Ross, David (26 November 2009) "Wild Boar: our new eco warriors" The Herald. Glasgow.
- "Beavers return after 400-year gap". BBC News. 29 May 2009. Retrieved 5 December 2009.
- Integrated Upland Management for Wildlife, Field Sports, Agriculture & Public Enjoyment (pdf) (September 1999) Scottish Natural Heritage. Retrieved 14 October 2007.
- "The Fortingall Yew". Archived from the original on 6 December 2008. Retrieved 17 September 2014.
- "Scotland remains home to Britain's tallest tree as Dughall Mor reaches new heights". Forestry Commission. Archived from the original on 3 October 2012. Retrieved 26 April 2008.
- Copping, Jasper (4 June 2011) "Britain's record-breaking trees identified" London. The Telegraph. Retrieved 10 July 2011.
- "Why Scotland has so many mosses and liverworts". Snh.org.uk. Retrieved 17 September 2014.
- "Bryology (mosses, liverworts and hornworts)". Rbge.org.uk. Retrieved 17 September 2014.
- "Scotland's Population at its Highest Ever". National Records of Scotland. 30 April 2015. Retrieved 12 February 2015.
- Census 2011: Detailed characteristics on Ethnicity, Identity, Language and Religion in Scotland – Release 3A. Scotland Census 2011. Retrieved 20 September 2014.
- "Did You Know?—Scotland's Cities". Rampantscotland.com. Retrieved 17 September 2014.
- Clapperton, C.M. (ed) (1983) Scotland: A New Study. London. David & Charles.
- Miller, J. (2004) Inverness. Edinburgh. Birlinn. ISBN 978-1-84158-296-2
- "New Towns". Bbc.co.uk. Retrieved 17 September 2014.
- "Scotland speaks Urdu". Urdustan.net. Retrieved 17 September 2014.
- The Pole Position (6 August 2005). Glasgow. Sunday Herald newspaper.
- Gaelic Language Plan, www.gov.scot. Retrieved 2 October 2014
- Scots Language Policy, Gov.scot, Retrieved 2 October 2014
- Stuart-Smith J. Scottish English: Phonology in Varieties of English: The British Isles, Kortman & Upton (Eds), Mouton de Gruyter, New York 2008. p. 47
- Stuart-Smith J. Scottish English: Phonology in Varieties of English: The British Isles, Kortman & Upton (Eds), Mouton de Gruyter, New York 2008. p.48
- Macafee C. Scots in Encyclopedia of Language and Linguistics, Vol. 11, Elsevier, Oxford, 2005. p. 33
- "Scotland's Census 2011". National Records of Scotland. Retrieved 27 May 2014.
- Kenneth MacKinnon. "A Century on the Census—Gaelic in Twentieth Century Focus". University of Glasgow. Archived from the original on 5 September 2007. Retrieved 26 September 2007.
- "Can TV's evolution ignite a Gaelic revolution?". The Scotsman. 16 September 2008.
- The US Census 2000. The American Community Survey 2004 by the US Census Bureau estimates 5,752,571 people claiming Scottish ancestry and 5,323,888 people claiming Scotch-Irish ancestry. "Archived copy". Archived from the original on 8 January 2012. Retrieved 5 February 2016.CS1 maint: archived copy as title (link) CS1 maint: BOT: original-url status unknown (link)
- "The Scotch-Irish". American Heritage Magazine. 22 (1). December 1970. Archived from the original on 20 October 2010.
- "Born Fighting: How the Scots-Irish Shaped America". Powells.com. 12 August 2009. Retrieved 30 April 2010.
- "Scots-Irish By Alister McReynolds, writer and lecturer in Ulster-Scots studies". Nitakeacloserlook.gov.uk. Archived from the original on 16 February 2009. Retrieved 30 April 2010.
- "2006 Canadian Census". 12.statcan.ca. 2 April 2008. Retrieved 17 September 2014.
- Linguistic Archaeology: The Scottish Input to New Zealand English Phonology Trudgill et al. Journal of English Linguistics.2003; 31: 103–124
- "Scotland's population reaches record of high of 5.25 million". The Courier. 3 August 2012. Retrieved 3 January 2014.
- "Scotland's Population 2011: The Registrar General's Annual Review of Demographic Trends 157th Edition". Gro-gov.scot. Retrieved 1 May 2013.
- "Table Q1: Births, stillbirths, deaths, marriages and civil partnerships, numbers and rates, Scotland, quarterly, 2002 to 2012" (PDF). General Register Office for Scotland. Retrieved 1 May 2013.
- "2011 Census population data for localities in Scotland". Scotlandscensus.gov.uk. Retrieved 10 July 2014.
- Life Expectancy for Areas within Scotland 2012–2014 (PDF) (Report). National Records of Scotland. 13 October 2015. p. 5. Retrieved 22 March 2017.
- "Scotland's Census 2011" (PDF). National Records of Scotland. Retrieved 11 August 2016.
- "Church of Scotland 'struggling to stay alive'". scotsman.com.
- "Survey indicates 1.5 million Scots identify with Church". Churchofscotland.org.uk. Archived from the original on 7 December 2016. Retrieved 29 September 2016.
- Andrew Collier, "Scotland's Confident Catholics", The Tablet 10 January 2009, 16.
- "Scottish Episcopal Church could be first in UK to conduct same-sex weddings". Scottish Legal News. 20 May 2016. Retrieved 1 October 2016.
- "Analysis of Religion in the 2001 Census". General Register Office for Scotland. Retrieved 26 September 2007.
- "In the Scottish Lowlands, Europe's first Buddhist monastery turns 40". Buddhistchannel.tv. Retrieved 17 September 2014.
- "Hansard vol 514 cc 199–201". Hansard.millbanksystems.com. 15 April 1953. Retrieved 1 August 2018.
- "Opening of Parliament: Procession of the Crown of Scotland". Scottish Parliament. Retrieved 9 July 2016.
- "Government of Scotland Facts". Archived from the original on 3 May 2010. Retrieved 17 September 2014.
- "Brown opens door to Holyrood tax powers". Sunday Herald. 16 February 2008. Retrieved 4 January 2014.
- Fraser, Douglas (2 February 2016). "Scotland's tax powers: What it has and what's coming?". BBC News. BBC. Retrieved 27 April 2017.
- BBC Scotland News Online "Scotland begins pub smoking ban", BBC Scotland News, 26 March 2006. Retrieved 17 July 2006.
- "People: Who runs the Scottish Government". Scottish Government. 21 November 2014. Retrieved 11 January 2015.
- "Deputy First Minister". Gov.scot. 24 May 2016. Retrieved 11 August 2016.
- "The Scottish Government". Beta.gov.scot. Archived from the original on 9 August 2016. Retrieved 11 August 2016.
- "Election 2016: Before-and-after and party strength maps". BBC News. 9 May 2016. Retrieved 3 January 2019.
- "Nicola Sturgeon wins Scottish first minister vote". BBC News. 17 May 2016. Retrieved 3 January 2019.
- "Scottish Elections (Dates) Act 2016". legislation.gov.uk. Retrieved 3 January 2019.
- "Scotland election results 2019: SNP wins election landslide in Scotland". BBC News. BBC. 13 December 2019. Retrieved 18 December 2019.
- "General election 2017: SNP lose a third of seats amid Tory surge". BBC News. BBC. 9 June 2017. Retrieved 20 June 2017.
- "Scotland Office Charter". Scotland Office website. 9 August 2004. Archived from the original on 30 October 2007. Retrieved 22 December 2007.
- "Alister Jack: What do we know about the new Scottish Secretary?". BBC News. BBC. 24 July 2019. Retrieved 18 December 2019.
- "Devolution of powers to Scotland, Wales and Northern Ireland". GOV.UK. Retrieved 1 August 2018.
- "Scottish/UK relations". Gov.scot. 11 January 2016. Retrieved 1 August 2018.
- "Devolved administrations hold 'difficult' Brexit talks". BBC News. 19 January 2017. Retrieved 1 August 2018.
- "Devolved governments won't get decisive role in Brexit talks Theresa May confirms". The Independent. 30 January 2017. Retrieved 1 August 2018.
- "Devolved and Reserved Matters – Visit & Learn". Scottish Parliament. 14 February 2017. Retrieved 1 August 2018.
- "International". gov.scot. Retrieved 1 August 2018.
- "Cabinet Secretary for Culture, Tourism and External Affairs". gov.scot. Retrieved 1 August 2018.
- "Minister for Europe, Migration and International Development". gov.scot. Retrieved 1 August 2018.
- "'Best of Scotland' at G8 summit". BBC News. 3 July 2005. Retrieved 1 August 2018.
- "About us". Scotland Malawi Partnership. 3 November 2005. Retrieved 1 August 2018.
- "Putin in Scottish capital". BBC News. 25 June 2003. Retrieved 1 August 2018.
- "First Minister Alex Salmond arrives in China". BBC News. 3 November 2013. Retrieved 1 August 2018.
- "Working with China: five-year engagement strategy". gov.scot. 4 December 2012. Retrieved 1 August 2018.
- "Scotland's International Framework: Canada engagement strategy". gov.scot. 30 March 2017. Retrieved 1 August 2018.
- "International relations: International offices". gov.scot. Archived from the original on 19 January 2018. Retrieved 1 August 2018.
- "Sturgeon signs climate agreement with California". BBC News. 3 April 2017. Retrieved 1 August 2018.
- "Nicola Sturgeon nets £6.3million deal for Scots jobs on first day of US visit". Daily Record. 3 April 2017. Retrieved 1 August 2018.
- "First Minister in Dublin: Day 2". First Minister of Scotland. 29 November 2016. Retrieved 1 August 2018.
- Cavanagh, Michael (2001) The Campaigns for a Scottish Parliament. University of Strathclyde. Retrieved 12 April 2008.
- Kerr, Andrew (8 September 2017). "Scottish devolution referendum: The birth of a parliament". BBC News. Retrieved 3 January 2019.
- "Party people confront new realities". BBC News. BBC. Retrieved 18 January 2008.
- "Commons clears transfer of power". The Herald. Glasgow. January 2011. Retrieved 4 October 2011.
- "Referendum Bill". Official website, About > Programme for Government > 2009–10 > Summaries of Bills > Referendum Bill. Scottish Government. 2 September 2009. Archived from the original on 10 September 2009. Retrieved 10 September 2009.
- MacLeod, Angus (3 September 2009). "Salmond to push ahead with referendum Bill". The Times. London. Archived from the original on 10 September 2009. Retrieved 10 September 2009.
- "Scottish independence plan 'an election issue'". BBC News. 6 September 2010.
- Black, Andrew (21 March 2013). "Scottish independence: Referendum to be held on 18 September, 2014". BBC News. London. Retrieved 21 March 2013.
- "Scotland votes no: the union has survived, but the questions for the left are profound". The Guardian. 19 September 2014.
- "Scotland decides". BBC. Retrieved 19 September 2014.
- Scottish Independence Referendum: statement by the Prime Minister, UK Government
- Scottish referendum: Who is Lord Smith of Kelvin?, BBC News
- "Scottish Leader Nicola Sturgeon Announces Plans for Second Independence Referendum". Time. 24 June 2016. Retrieved 24 June 2016.
- "Brexit: Nicola Sturgeon says second Scottish independence vote 'highly likely'". BBC News. 24 June 2016. Retrieved 24 June 2016.
- "Local Government etc. (Scotland) Act 1994" Archived 1 March 2010 at the Wayback Machine Office of Public Sector Information. Retrieved 26 September 2007.
- "Council leaders". Cosla.gov.uk. Retrieved 3 January 2019.
- "Chief executives". Cosla.gov.uk. Retrieved 3 January 2019.
- "City status". Dca.gov.uk. Retrieved 17 September 2014.
- "UK Cities". Dca.gov.uk. Retrieved 17 September 2014.
- "History of the Faculty of Law". The University of Edinburgh School of Law. Archived from the original on 22 November 2007. Retrieved 22 October 2007.
- The Articles: legal and miscellaneous, UK Parliament House of Lords (2007). "Article 19: The Scottish legal system and its courts was to remain unchanged":"Act of Union 1707". House of Lords. Archived from the original on 14 November 2007. Retrieved 22 October 2007.
- "Law and institutions, Gaelic" & "Law and lawyers" in M. Lynch (ed.), The Oxford Companion to Scottish History, (Oxford, 2001), pp. 381–382 & 382–386. Udal Law remains relevant to land law in Orkney and Shetland: "A General History of Scots Law (20th century)" (PDF). Law Society of Scotland. Archived from the original (PDF) on 25 September 2007. Retrieved 20 September 2007.
- "Court Information" www.scotcourts.gov.uk. Retrieved 26 September 207. Archived 20 March 2015 at the Wayback Machine
- "The case for keeping 'not proven' verdict". Timesonline.co.uk. Retrieved 17 September 2014.
- "Scotland's unique 15-strong juries will not be abolished". The Scotsman. 11 May 2009. Retrieved 13 March 2017.
- "Prisoner Population". Sps.gov.uk. Retrieved 8 July 2009.
- "Scotshield wins hospital fire system contract". HeraldScotland. Retrieved 3 January 2019.
- Highlands and Islands Medical Service (HIMS) Archived 14 January 2013 at the Wayback Machine www.60yearsofnhsscotland.co.uk. Retrieved 28 July 2008.
- "Cabinet and ministers – gov.scot". beta.gov.scot. Retrieved 11 August 2017.
- "Strategic Board of the Scottish Government". Scottish Government. Retrieved 8 June 2014.
- "About the NHS in Scotland". Archived from the original on 28 June 2014. Retrieved 17 September 2014.
- Scottish Government. "Key Economy Statistics". Retrieved 22 August 2014.
- Khan, Mehreen (12 September 2014). "The Scottish economy in ten essential charts". Daily Telegraph. Retrieved 1 August 2018.
- "Scotland's employment rate hits record high". BBC News. 11 June 2019. Retrieved 24 July 2019.
- Askeland, Erikka (20 March 2012) "Scots Cities Slide down Chart of the World's Top Financial Centres". The Scotsman.
- "The Global Financial Centres Index 19". Long Finance. March 2016. Archived from the original on 8 April 2016. Retrieved 6 July 2016.
- Scottish Government. "Export Statistics Scotland – Publication". Retrieved 14 December 2014.
- "Economy Statistics". The Scottish Government. April 2003. Retrieved 26 May 2014.
- "Scotch Whisky Exports Hit Record Level". Scotch Whisky Association. 2 April 2013. Retrieved 12 June 2013.
- "Scotch Whisky Exports Remain Flat". 11 April 2014. Retrieved 17 September 2014.
- "Scotch Whisky Briefing 2014". Scotch Whisky Association. Retrieved 30 May 2014.
- Carrell, Severin; Griffiths, Ian; Terry Macalister, Terry (29 May 2014). "New Doubt Cast over Alex Salmond's Claims of Scottish Wealth". The Guardian. Retrieved 30 May 2014.
- "The Economics of Tourism" (PDF). SPICe. 2002. Archived from the original (PDF) on 6 November 2005. Retrieved 22 October 2007.
- "Scottish Banknotes: The Treasury's Symbolic Hostage in the Independence Debate". The Guardian. Retrieved 26 May 2014.
- The large number of military bases in Scotland led some to use the euphemism "Fortress Scotland". See Spaven, Malcolm (1983) Fortress Scotland. London. Pluto Press in association with Scottish CND.
- "Pensioner, 94, in nuclear protest". News.bbc.co.uk. Retrieved 17 September 2014.
- "Reprieve for RAF Lossiemouth base". News.bbc.co.uk. Retrieved 17 September 2014.
- "Dunoon and the US Navy". Argyllonline.co.uk. Retrieved 17 September 2014.
- "A Guide to Education and Training in Scotland – "the broad education long regarded as characteristic of Scotland"". Scottish Government. Retrieved 18 October 2007.
- P. J. Bawcutt and J. H. Williams, A Companion to Medieval Scottish Poetry (Woodbridge: Brewer, 2006), ISBN 1-84384-096-0, pp. 29–30.
- R. A. Houston, Scottish Literacy and the Scottish Identity: Illiteracy and Society in Scotland and Northern England, 1600–1800 (Cambridge: Cambridge University Press, 2002), ISBN 0-521-89088-8, p. 5.
- "School education prior to 1873", Scottish Archive Network, 2010, archived from the original on 2 July 2011
- R. Anderson, "The history of Scottish Education pre-1980", in T. G. K. Bryce and W. M. Humes, eds, Scottish Education: Post-Devolution (Edinburgh: Edinburgh University Press, 2nd edn., 2003), ISBN 0-7486-1625-X, pp. 219–28.
- "Schools and schooling" in M. Lynch (ed.), The Oxford Companion to Scottish History, (Oxford, 2001), pp. 561–563.
- "Curriculum for Excellence – Aims, Purposes and Principles". Scottish Government. Archived from the original on 1 August 2010.
- "The Scottish Exam System". Archived from the original on 14 February 2008. Retrieved 17 September 2014.
- "Welcome to the Carnegie Trust for the Universities of Scotland". Carnegie Trust for the Universities of Scotland. Archived from the original on 11 October 2007. Retrieved 18 October 2007.
- "Understanding Scottish Qualifications". Scottish Agricultural College. Archived from the original on 22 May 2012. Retrieved 18 October 2007.
- "RAE 2008: results for UK universities". The Guardian. London. 18 December 2008. Retrieved 11 June 2009.
- Foster, Patrick. "The Times Good University Guide 2009 – league table". The Times. London. Retrieved 30 April 2010.
- "Scotland tops global university rankings". Newsnet Scotland. 11 September 2012. Archived from the original on 9 March 2013. Retrieved 11 January 2013.
- "A Framework for Higher Education in Scotland: Higher Education Review Phase 2". Scottish Government. Retrieved 18 October 2007.
- "What is higher education?" (PDF). Universities Scotland. Archived from the original (PDF) on 16 March 2004. Retrieved 18 October 2007.
- "Introduction" (PDF). Retrieved 1 August 2018.
- "Scottish Government – Graduate endowment scrapped". Scotland.gov.uk. Retrieved 29 October 2014.
- "MSPs vote to scrap endowment fee". BBC News. 28 February 2008. Retrieved 12 February 2011.
- ITV (5 June 2014). "Scotland 'most highly educated country in Europe'". Retrieved 8 June 2014.
- "Tertiary educational attainment, age group 25–64 by sex and NUTS 2 regions". Eurostat. 2014. Retrieved 8 June 2014.
- "Best Scottish Band of All Time". The List. Retrieved 2 August 2006.
- R. T. Lambdin and L. C. Lambdin, Encyclopedia of Medieval Literature (London: Greenwood, 2000), ISBN 0-313-30054-2, p. 508.
- I. Brown, T. Owen Clancy, M. Pittock, S. Manning, eds, The Edinburgh History of Scottish Literature: From Columba to the Union, until 1707 (Edinburgh: Edinburgh University Press, 2007), ISBN 0-7486-1615-2, p. 94.
- J. T. Koch, Celtic Culture: a Historical Encyclopedia (ABC-CLIO, 2006), ISBN 1-85109-440-7, p. 999.
- E. M. Treharne, Old and Middle English c.890-c.1400: an Anthology (Wiley-Blackwell, 2004), ISBN 1-4051-1313-8, p. 108.
- M. Fry, Edinburgh (London: Pan Macmillan, 2011), ISBN 0-330-53997-3.
- N. Jayapalan, History of English Literature (Atlantic, 2001), ISBN 81-269-0041-5, p. 23.
- J. Wormald, Court, Kirk, and Community: Scotland, 1470–1625 (Edinburgh: Edinburgh University Press, 1991), ISBN 0-7486-0276-3, pp. 60–7.
- I. Brown, T. Owen Clancy, M. Pittock, S. Manning, eds, The Edinburgh History of Scottish Literature: From Columba to the Union, until 1707 (Edinburgh: Edinburgh University Press, 2007), ISBN 0-7486-1615-2, pp. 256–7.
- R. D. S. Jack, "Poetry under King James VI", in C. Cairns, ed., The History of Scottish Literature (Aberdeen University Press, 1988), vol. 1, ISBN 0-08-037728-9, pp. 137–8.
- J. Buchan (2003). Crowded with Genius. Harper Collins. p. 163. ISBN 978-0-06-055888-8.
- L. McIlvanney (Spring 2005). "Hugh Blair, Robert Burns, and the Invention of Scottish Literature". Eighteenth-Century Life. 29 (2): 25–46. doi:10.1215/00982601-29-2-25.
- N. Davidson (2000). The Origins of Scottish Nationhood. Pluto Press. p. 136. ISBN 978-0-7453-1608-6.
- "Cultural Profile: 19th and early 20th century developments". Visiting Arts: Scotland: Cultural Profile. Archived from the original on 5 November 2011.
- "The Scottish 'Renaissance' and beyond". Visiting Arts: Scotland: Cultural Profile. Archived from the original on 5 November 2011.
- "The Scots Makar". The Scottish Government. 16 February 2004. Archived from the original on 5 November 2011. Retrieved 28 October 2007. Cite journal requires
- "Duffy reacts to new Laureate post". BBC News. 1 May 2009. Archived from the original on 5 November 2011.
- Harvey, David; Jones, Rhys; McInroy, Neil; et al., eds. (2002). Celtic geographies: old culture, new times. Stroud, Gloucestershire: Routledge. p. 142. ISBN 978-0-415-22396-6.
- Pittock, Murray (1999). Celtic identity and the British image. Manchester: Manchester University Press. pp. 1–5. ISBN 978-0-7190-5826-4.
- "Celtic connections:Scotland's premier winter music festival". Celtic connections website. Celtic Connections. 2010. Retrieved 23 January 2010.
- "'Hebridean Celtic Festival 2010 – the biggest homecoming party of the year". Hebridean Celtic Festival website. Hebridean Celtic Festival. 2009. Retrieved 23 January 2010.
- "Site Officiel du Festival Interceltique de Lorient". Festival Interceltique de Lorient website. Festival Interceltique de Lorient. 2009. Archived from the original on 5 March 2010. Retrieved 23 January 2010.
- "Welcome to the Pan Celtic 2010 Home Page". Pan Celtic Festival 2010 website. Fáilte Ireland. 2010. Retrieved 26 January 2010.
- "About the Festival". National Celtic Festival website. National Celtic Festival. 2009. Archived from the original on 19 January 2012. Retrieved 23 January 2010.
- "Feature: Saint Andrew seals Scotland's independence" Archived 16 September 2013 at the Wayback Machine, The National Archives of Scotland, 28 November 2007, retrieved 12 September 2009.
- "Feature: Saint Andrew seals Scotland's independence". The National Archives of Scotland. 28 November 2007. Archived from the original on 16 September 2013. Retrieved 9 December 2009.
- Dickinson, Donaldson, Milne (eds.), A Source Book Of Scottish History, Nelson and Sons Ltd, Edinburgh 1952, p.205
- G. Bartram, www.flaginstitute.org British Flags & Emblems Archived 9 November 2012 at the Wayback Machine (Edinburgh: Tuckwell Press, 2004), ISBN 1-86232-297-X, p. 10.
- "National identity" in M. Lynch (ed.), The Oxford Companion to Scottish History, (Oxford, 2001), pp. 437–444.
- Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins. Page 936.
- "Symbols of Scotland—Index". Rampantscotland.com. Retrieved 17 September 2014.
- Bain, Robert (1959). Margaret O. MacDougall (ed.). Clans & Tartans of Scotland (revised). P.E. Stewart-Blacker (heralidic advisor), forward by The R. Hon. C/refountess of Erroll. William Collins Sons & Co., Ltd. p. 108.
- "Action call over national anthem". BBC News. 21 March 2006. Retrieved 3 November 2011.
- "Games team picks new Scots anthem". BBC. 9 January 2010.
- "Explanatory Notes to St. Andrew's Day Bank Holiday (Scotland) Act 2007" Office of Public Sector Information. Retrieved 22 September 2007.
- "Scottish fact of the week: Scotland's official animal, the Unicorn". Scotsman.com. Retrieved 17 September 2014.
- Brooks, Libby (30 May 2007). "Scotland's other national drink". The Guardian. Retrieved 5 January 2020.
- Gail Kilgore. "The Auld Alliance and its Influence on Scottish Cuisine". Retrieved 29 July 2006.
- "Who invented the television? How people reacted to John Logie Baird's creation 90 years ago". The Telegraph. 26 January 2016.
- "Newspapers and National Identity in Scotland" (PDF). IFLA University of Stirling. Retrieved 12 December 2006.
- "About Us::Celtic Media Festival". Celtic Media Festival website. Celtic Media Festival. 2014. Retrieved 3 January 2014.
- "ITV Media – STV". www.itvmedia.co.uk.
- "Great Scottish Movies – Scotland is Now". Scotland. Retrieved 3 January 2019.
- "Disney Pixar's Brave – Locations & Setting". Visitscotland.com. Retrieved 3 January 2019.
- McKenna, Kevin (10 November 2018). "Scotland braces for 'Netflix effect' as TV film about Robert the Bruce is launched". Theguardian.com. Retrieved 3 January 2019.
- "BBC Studios – Scripted – Continuing Drama – River City". www.bbcstudios.com.
- "Still Game makes stage comeback". Bbc.com. 23 October 2013. Retrieved 3 January 2019.
- "BBC – Two Doors Down comes calling again with series four – Media Centre". www.bbc.co.uk.
- "Lesley Fitz-Simons: Scottish actress known for her role in Take the High Road". The Independent. 11 April 2013.
- "wpstudio". wpstudio.
- "BBC Dumbarton Studios". Retrieved 20 August 2019.
- Soccer in South Asia: Empire, Nation, Diaspora by James Mills, Paul Dimeo: Page 18 – Oldest Football Association is England's FA, then Scotland and third oldest is the Indian FA.
- Gerhardt, W. "The colourful history of a fascinating game. More than 2000 Years of Football". FIFA. Archived from the original on 10 August 2006. Retrieved 11 August 2006.
- Minutes of the Football Association of 3 October 1872, London
- "MEN'S RANKING". fifa.com. 19 December 2019. Retrieved 5 January 2020.
- "Craig Brown's highs and lows". BBC Sport. 7 October 2001. Retrieved 31 August 2008.
- Wilson, Richard (10 January 2017). "Scotland: Anna Signeul urges players to fight for Euro 2017 places". BBC Sport. Retrieved 2 April 2017.
- MacBeath, Amy (4 September 2018). "Albania Women 1–2 Scotland Women". BBC Sport. Retrieved 4 September 2018.
- "WOMEN'S RANKING". fifa.com. 13 December 2019. Retrieved 5 January 2020.
- Lindsay, Clive (14 May 2008). "Zenit St Petersburg 2-0 Rangers". BBC. BBC Sport. Retrieved 4 January 2020.
- "Scotland is the home of golf". PGA Tour official website. Archived from the original on 28 August 2008. Retrieved 4 December 2008.
Scotland is the home of golf...
- "The Home of Golf". Scottish Government. 6 March 2007. Retrieved 4 December 2008.
The Royal & Ancient and three public sector agencies are to continue using the Open Championship to promote Scotland as the worldwide home of golf.
- Keay (1994) op cit page 839. "In 1834 the Royal and Ancient Golf Club declared St. Andrews 'the Alma Mater of golf'".
- "1574 St Andrews – The Student Golfer". Scottish Golf History. Retrieved 1 August 2018.
- Cochrane, Alistair (ed) Science and Golf IV: proceedings of the World Scientific Congress of Golf. Page 849. Routledge.
- Forrest L. Richardson (2002). "Routing the Golf Course: The Art & Science That Forms the Golf Journey". p. 46. John Wiley & Sons
- The Open Championship – More Scottish than British Archived 2 October 2012 at the Wayback Machine PGA Tour. Retrieved 23 September 2011
- "Medal Tally". Cgcs.org.uk. Retrieved 17 September 2014.
- "Overview and History". Cgcs.org.uk. Retrieved 17 September 2014.
- Scottish Government, St Andrew's House (1 April 2003). "Energy – Electricity Generation". 2.gov.scot. Retrieved 3 January 2019.
- "The future of energy in Scotland: Scottish energy strategy". Gov.scot. Retrieved 3 January 2019.
- "Datasets – UK Civil Aviation Authority". Caa.co.uk. Retrieved 3 January 2019.
- "Disaggregating Network Rail's expenditure and revenue allowance and future price control framework: a consultation (June 2005)" Office of Rail Regulation. Retrieved 2 November 2007.
- "Rail". Transport.gov.scot. Transport Scotland. Retrieved 15 December 2016.
- Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins. ISBN 0-00-255082-2
- "Ferry freight service axed after fire". Bbc.co.uk. 23 April 2018. Retrieved 3 January 2019.
- "Passenger ferry service to stop". Bbc.co.uk. 20 August 2010. Retrieved 3 January 2019.
In the UK, there are five broad categories of single-ply roofing:
1) Poly-isobutylene (PIB)
This polymer was invented by chemists working for the BASF chemical company in 1931. Thus, PIB is the oldest single-ply roofing material in the world and is often regarded as a benchmark material for single-ply applications. One major advantage of PIB is that it has been assessed under ISO 14040, the International Organization for Standardization (ISO) framework for life cycle assessment; it has a comparatively low environmental impact and is 100% recyclable. In addition, PIB is highly impermeable and durable, and will withstand chemical corrosion and long periods of inclement weather. An added bonus is that PIB sheets are flexible and relatively easy to install.
2) Thermoplastic Polyolefin (TPO)
These polymers were developed in the 1980s and are in some respects more environmentally sound than PVC. The initials TPO cover any material that combines a synthetic polymer with a filler such as fibreglass, which strengthens the entire structure. TPO is used in a range of civil engineering applications, including lining for artificial ponds and waterproofing for tunnels. The material is tough, strong and durable with a wide range of applications, but is only partially recyclable. TPO is only available in grey and can only be installed by heat welding. In addition, the welding can only take place if the sheets are kept very clean, and they can only be kept in such a condition by regular washing with solvents, which of course undermines their environmental credentials.
3) Thermoplastic Poly-olefin Elastomer (TPE)
In effect, TPE is the improved version of TPO and as such is the preferred of the two materials. It is completely recyclable and can be installed without welding, meaning that the need for specialist (and more expensive) contractors is reduced. TPE is easier to clean and maintain, and minor damage can be repaired by gentle heating, whereas both TPO and PVC would require patching.
4) Poly Vinyl Chloride (PVC)
PVC is the third most used synthetic polymer in the world and is by far the most utilised single-ply roofing material in the UK. PVC is available in a very wide range of colours and styles and is easily installed and welded. However, PVC contains plasticisers and chlorine and, if combusted, will produce dioxins, which are highly toxic to both humans and animals. It is also important to be aware that PVC cannot be laid over substances such as bitumen without an isolating and/or waterproofing layer.
5) Ethylene Propylene Di-ene Monomer (EPDM)
EPDM is a semi-elastic synthetic rubber material which is very common in residential roofing applications. It is often made to measure and the joining, welding and attachment points are assessed off site. When brought on site the material is further sealed and treated to the requirements of the premises. Overall, EPDM is inexpensive but only available in black and must be treated to meet requisite standards of fire safety.
Clearly, it is essential to consult with professional contractors before deciding on the correct materials for any premises.
As human beings, we have been aware of static electricity for hundreds of years. This form of electricity arises from a transfer of electrons (sub-atomic particles which carry electrical charge) as a result of friction. For example, if you rub a balloon on your clothing it will “stick” to a wall. Similarly, if you rub a glass rod on your clothing and hold it next to a gentle stream of water from a tap, the stream will be pulled towards the rod. Early and Neolithic humans were doubtless also aware of electricity in the form of lightning.
The Battery and electric lighting
The technical term for a battery is an electrochemical cell, and its invention is credited to the Italian scientist Alessandro Volta. He showed that electricity could be harnessed by making it flow through a conducting wire; the wire glowed, further indicating that electrical energy could be converted into light energy. By the early 19th century the first circuits, powered by electrodes standing in a conducting liquid, had been developed.
The first practical application of circuits carrying electricity was electric lighting, made possible by Thomas Edison's development of a functioning incandescent light bulb. Many other inventors and scientists were working on the concept, but on 22 October 1879 Edison's prototype bulb stayed alight for some 14 hours.
Battery powered circuits
The early circuits were powered by batteries, which produced a steady and constant flow of current from the negative to the positive terminal. This direct current (DC) always flows in the same direction, from negative to positive; the issue was that the electricity could not be transmitted over long distances. In fact, for these early circuits, supplying any area exceeding about a square mile would present problems.
To the engineers, scientists and proto-electricians of the time the potential was obvious, but they could not overcome the distance problem. The Serbian engineer Nikola Tesla developed and applied the notion of alternating current (AC). This current, as the name suggests, constantly changes and repeatedly reverses direction, and it changed everything.
How alternating current changed everything
If the electrical current is alternating, the voltage level in a given circuit can be changed by a device known as a transformer. The flow of electrical current always produces a magnetic field; in a DC circuit the field does not change, but in an AC circuit the field oscillates in tandem with the current.
Transformers work by a physical principle known as electromagnetic induction, which requires a changing magnetic field and therefore only operates in an AC circuit. In essence, Tesla provided a method by which the voltage of a circuit can be stepped up, enabling long-distance transmission of electricity. Quite literally, the rest, as the saying goes, is history.
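To make the step-up idea concrete, here is a minimal Python sketch of the ideal transformer ratio and the I²R cable-loss calculation. All of the figures (the turns ratio, the power delivered and the cable resistance) are invented for illustration and are not taken from any real installation.

```python
# Minimal sketch (illustrative figures only, not real grid values) of why
# stepping voltage up with a transformer allows long-distance transmission:
# the same power carried at a higher voltage needs a smaller current, and the
# heat wasted in the cable falls with the square of that current (loss = I^2 R).

def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Ideal transformer relation: Vs / Vp = Ns / Np."""
    return v_primary * n_secondary / n_primary

def cable_loss(power_delivered: float, line_voltage: float, cable_resistance: float) -> float:
    """Heat lost in the cable: I = P / V, loss = I^2 * R."""
    current = power_delivered / line_voltage
    return current ** 2 * cable_resistance

POWER = 10_000.0   # 10 kW to be delivered (assumed)
RESISTANCE = 0.5   # ohms of cable resistance (assumed)

low_voltage = 230.0                                   # sent at mains voltage
high_voltage = secondary_voltage(230.0, 100, 10_000)  # stepped up 1:100 to 23 kV

print(f"Loss at {low_voltage:.0f} V:   {cable_loss(POWER, low_voltage, RESISTANCE):.0f} W")
print(f"Loss at {high_voltage:.0f} V: {cable_loss(POWER, high_voltage, RESISTANCE):.2f} W")
```

Running it shows the loss falling by a factor of ten thousand when the voltage is stepped up a hundredfold, which is the whole argument for high-voltage AC transmission.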
The above introduction shows that there are many devices which can cause an electrical discharge, but electricity as a form of energy is useless to us without circuits.
Part of the process of deciding which system is suitable for your premises involves gaining an understanding of the different types of available materials. Put simply, it is impossible to arrive at an informed decision if the relative merits of cost, sustainability, reliability and useful lifespan are not assessed. In addition, the range of available styles and brands ensures that choosing the correct materials is no small undertaking.
Overall, a single-ply roof is composed of a synthetic (crude oil derived) polymer which is applied to the roof in a single layer. The material is usually supplied in large rolls and applied in sheets which may (or may not) contain a reinforcing layer. The layers are connected together by heat sealing or by fixing and ballasting. In addition, whilst there is no such thing as a completely environment-friendly roofing system, massive strides have been made in materials technology. The roofing sheets contain no heavy metals and, aside from PVC, contain no halogens (chlorine is a halogen) or plasticisers.
All single-ply roofing materials have been developed to cope with the most extreme conditions they can expect to encounter within their design remit. In effect this means extremes of temperature and impact; in certain applications “durability” will also cover corrosion by industrial chemicals.
The roofing materials themselves have a very high tensile strength: the strongest materials can withstand 1200 N (newtons) per mm², a figure comparable to the best titanium alloys. In roofing systems which require the seams between the polymer sheets to be welded, both durability and waterproofing are enhanced.
A cost effective Investment
Once the correct choice of materials has been made there are specific techniques which enable the sheets to be installed rapidly and efficiently, thus reducing labour costs.
Additionally, disruption to the day-to-day running of the premises will be minimised, which offers clear advantages when it comes to necessary refurbishment and maintenance work. Finally, the roofs are designed to last for decades and so are generally installed to last at least as long as the useful life of the premises itself.
Fire and Safety
In roofing systems which require heat sealing, a hot stream of air is used instead of an acetylene flame, making the installation process itself intrinsically safer. Furthermore, legal statutes exist to ensure that the roof itself is resistant to combustion and melting in the event of a fire. With this in mind, the roofing materials manufactured in the UK carry the appropriate fire safety ratings.
The sheets are generally designed to be attached to the roof of the building. This, in combination with effective sealing or fastening of the sheets to each other, removes the possibility that the roof will be lifted off should storms or other extreme weather events occur.
The advantages of single-ply roofing lie in the swift and safe installation of the materials and, with the added benefits outlined above, the popularity of such systems will continue to increase.
Irrespective of the circumstances, the complexities of choosing the correct roofing material can be broken down and expressed as the optimum balance of design requirements, longevity, sustainability and cost.
Environmental impact and sustainability
Of the five main materials, PVC is widely acknowledged to be the most long-lived and durable. At the time of writing, over 80% of all single-ply roofing material in the UK is PVC based, which says much about its utility. However, it is considered the least environmentally sound because of the presence of chlorine and other additives. That said, if it is recovered and processed properly, many of its components can be recycled and converted to other applications in alternative single-ply roofing systems.
A substance known as EPDM, which is the mainstay of domestic roofing systems, cannot be recycled and can only be reused if it was originally laid in mechanically fixed loose layers. Having a roof constructed in this way would clearly defeat the whole point of the exercise. A polymer known as TPO is as durable as PVC but does not contain chlorine or plasticising substances, and can itself be recycled and made from recycled materials.
However, it can only be installed after cleaning with solvents. This last factor prompted the development of the alternative, TPE. From an environmental point of view, PIB, with its ISO 14040 assessment, is the most environmentally sound single-ply roofing material.
Cost of materials
Clearly, there is a strong element of competition, and an onus on the end user to play the market and secure the best possible deal. It must be remembered that although cost is “a” factor it is rarely “the” factor; the performance of the materials of which the roofing sheet is made will often override pure cost considerations. In general terms, EPDM is likely to be the cheapest material and PIB the most expensive, with PVC, TPO and TPE situated somewhere in the middle.
Longevity of materials
The quality of materials is always evolving, such that new products are constantly coming on stream and innovations to existing materials continually present themselves. Overall, PIB will last for between 30 and 50 years, which partly explains its cost. The useful life of PVC sheet depends on its chemical composition, but a quality product will last for between 25 and 45 years.
The relatively recent TPO and TPE will last for a minimum of 40 years but, as with PVC, the potential costs of sound reuse and recycling methods must be factored into the decision to use them. Furthermore, the projected lifespan of the building itself, and whether the roofing project is a new build or a refurbishment, must be considered. Finally, the end user is advised to check the credentials of both the product and the contractor installing it.
As can be seen, a clear and definite trade-off between several intertwined factors exists. There are no clear-cut boundaries, and so the buyer must evaluate all variables in their own individual context.
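One crude way to weigh cost against longevity is to compare cost per year of expected service. The Python sketch below does exactly that; the installed costs per square metre are entirely hypothetical, and the lifespans are illustrative figures broadly in line with the ranges quoted above (EPDM's is simply assumed, as no figure is given).

```python
# Minimal sketch: ranking membranes by hypothetical installed cost per year of
# expected service. Costs are invented; lifespans are illustrative figures
# broadly in line with the ranges quoted in the article (EPDM assumed).

materials = {
    # name: (assumed installed cost, GBP per m2, assumed service life in years)
    "EPDM": (45, 25),
    "PVC":  (60, 35),
    "TPO":  (65, 40),
    "TPE":  (70, 40),
    "PIB":  (90, 40),
}

ranked = sorted(materials.items(), key=lambda item: item[1][0] / item[1][1])
for name, (cost, life) in ranked:
    print(f"{name}: roughly GBP {cost / life:.2f} per m2 per year over {life} years")
```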
When roughly equal amounts of sodium bicarbonate, or bicarbonate of soda (BOS), and warm vinegar are mixed, a characteristic fizzing is seen. As any year 7 (first year of secondary school) science textbook explains, the fizz is one of the signs that a chemical reaction is occurring. The reaction represents a highly effective and affordable method both for preventative drain maintenance and for removing minor blockages. The reacting mixture should be poured straight into the drain.
Alternatively, pour the bicarbonate of soda into the pipework and then follow up with the vinegar. Ideally, the reaction should be left to run for several hours, repeated as necessary and flushed through with boiling water. Additionally, for DIY drain cleaning, using BOS and vinegar together reduces the need for chemical cleaners and for professional assistance.
Why does the reaction happen?
It all comes down to one of the principal threads which run through the study of chemistry. Put simply, BOS (NaHCO3) is an alkali, and vinegar, or acetic acid (CH3COOH), is a weak organic (carbon-based) acid dissolved in water. In chemistry, acids and alkalis always react in what are known as neutralisation reactions. In essence, hydrogen in its charged (ionic) state is transferred to the alkali, creating new compounds in the process.
This particular reaction is a two-step process. The products of the reaction are carbon dioxide gas (CO2, which produces the fizzing and bubbling), water and a dilute solution of sodium acetate (CH3COONa). In chemical terms, the BOS removes hydrogen ions from the vinegar, and as this occurs the BOS itself is converted into water and carbon dioxide. The bubbles of CO2 are denser than the surrounding air and so collect on the internal surface of the drain, aiding the removal of unwanted material.
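Written as a single balanced equation (standard school chemistry rather than anything specific to a particular product), the overall reaction is:

```latex
\[
\mathrm{NaHCO_3} + \mathrm{CH_3COOH} \longrightarrow \mathrm{CH_3COONa} + \mathrm{H_2O} + \mathrm{CO_2\uparrow}
\]
```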
The reaction between the BOS and vinegar is a complex process and occurs because there is an exchange of atoms. The exchange happens due to the behaviour of the electrons which are farthest from the nucleus of each atom in the reacting substances.
Each molecule of BOS contains an atom of sodium and an atom of hydrogen attached to a carbonate group (the source of the CO2), while acetic acid contains an acidic hydrogen atom and an acetate ion. An ion is any positively or negatively charged atom or molecule, and the charge on the acetate ion is negative. When BOS and vinegar react together (and while the hydrogen is removed from the vinegar by the BOS), the hydrogen atom from the acetic acid combines with the hydrogen and oxygen in the BOS to form a molecule of water.
Simultaneously, the acetate ion in the vinegar chemically bonds with the sodium atom in the BOS forming the sodium acetate in solution. The CO2 itself does not react with any of the reagents, that is the vinegar and BOS. However, the chemical bonds which attached it to the BOS molecule are broken and so it is able to bubble off into the surrounding atmosphere.
The above reaction is one of many neutralisation reactions with practical benefits, and it represents a clear alternative to chemical cleaners for low-key applications.
A standard method of understanding the flow of electrical current is to compare it to a mammalian circulatory system. In a mammal, the heart is the pump that drives the circulation of the blood throughout the whole organism, such that the blood reaches every single cell in the animal. Similarly, the wires in an electrical circuit carry the electrical current throughout the whole circuit, ensuring that the device the current powers can function. In a circuit, the power source is either a battery (an electrochemical cell) or a generator supplying mains electricity.
For electricity to be transported, the circuit must be complete and a voltage, or potential difference, must exist. Just as blood flows out of the circulatory system when we cut ourselves, so, if the current cannot complete a circuit, electricity will not be transported and the device or mains appliance will not function. At its simplest, an electrical current is a flow of electrons, which are negatively charged sub-atomic particles.
Voltage (potential difference), measured in volts, is the force required to push the electrons through the circuit. Any GCSE physics textbook will impart that the flow can only occur if there is a build-up of electrons (and therefore negative charge) at the negative terminal (anode) of the circuit. This means that there is a corresponding deficit of negative charge at the positive terminal (cathode). The flow of electrical current always occurs as a result of electrons moving from the anode to the cathode.
The energy the electrons carry (not simply the number of particles) determines the voltage needed to move them around the circuit; hence the electrons in a torch carry less energy than those in a car battery. The rate of flow of electrons from anode to cathode is measured in amperes. Put simply, voltage and current can be mathematically combined to give a quantity we understand as electrical power: the greater either value, the more power is produced, as measured in watts.
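As a minimal illustration of that relationship, the sketch below multiplies voltage by current to get power in watts; the torch and car starter figures are typical textbook values, assumed here for the example.

```python
# Minimal sketch of the relationship described above: power (watts) equals
# voltage (volts) multiplied by current (amperes). The figures are typical
# textbook values assumed for illustration.

def power_watts(voltage: float, current: float) -> float:
    return voltage * current

torch = power_watts(voltage=3.0, current=0.3)        # small hand torch (assumed)
starter = power_watts(voltage=12.0, current=100.0)   # car starter motor (assumed)

print(f"Torch:         {torch:.1f} W")
print(f"Starter motor: {starter:.0f} W")
```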
Resisting the flow of electricity
In the circulatory system, the flow of blood is impeded by the friction generated as it contacts the vessel walls and by the reduction in pressure as it is pumped back to the heart. Similarly, when electrons flow through a conducting wire their speed is reduced as they come into contact with the atoms which compose the conducting material.
The wire is said to resist the flow of electrons, and the greater the degree of resistance, the more voltage is needed to push the electrons through the circuit. The degree of resistance depends on the material used to create the wire, its diameter and its length. As with power, resistance can be calculated from the mathematical relationship between voltage and current (Ohm's law).
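A short sketch of both relationships follows: the standard formula R = ρL/A for the resistance of a wire, and Ohm's law V = IR. The wire length, diameter and current are assumed values; the resistivity is the usual textbook figure for copper.

```python
import math

# Minimal sketch of the two relationships described above, with assumed values:
# resistance of a wire R = resistivity * length / cross-sectional area, and
# Ohm's law V = I * R. The resistivity is the textbook figure for copper.

RESISTIVITY_COPPER = 1.68e-8  # ohm metres

def wire_resistance(length_m: float, diameter_m: float,
                    resistivity: float = RESISTIVITY_COPPER) -> float:
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * length_m / area

resistance = wire_resistance(length_m=20.0, diameter_m=1.5e-3)  # 20 m of 1.5 mm wire (assumed)
current = 10.0                                                  # amperes (assumed)

print(f"Resistance: {resistance:.2f} ohms")
print(f"Voltage needed to push {current:.0f} A through it: {current * resistance:.1f} V")
```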
Electricity is a natural phenomenon which has become inextricably linked with the continued evolution of the human species. In short, it is a simple flow of electrons from negative to positive terminals which we simply cannot do without.
In Scotland, the organisation responsible for maintaining the supply and sanitation of fresh water is Scottish Water (SW). The body in charge of ensuring adherence to environmental regulations is the Scottish Environment Protection Agency (SEPA).
In turn, potable (drinking) water standards, as well as standards for discharges of effluent, are set by the EU. Within the standards set by Europe-wide water management policies, SW operates the following infrastructure:
Water treatment is concerned with the entire set of processes that produce water fit for a particular purpose; this may not necessarily mean human consumption. Waste water treatment is distinct from this definition because it is concerned with the collective set of physical and biochemical techniques employed to ensure that water is specifically safe for human consumption.
In addition, waste water treatment is normally associated with processing water that has already been utilised in some way, for instance in an industrial process, or that is actually sewage.
Why treat the water?
Most of the water that comes from the taps of Glasgow and the rest of Scotland is sourced from reservoirs, lochs and river systems. Although not likely to be as polluted as effluent water, this fresh water is unlikely to be pure, and so it must be treated; it is likely to contain substances such as:
The EU regulations mentioned above require Scottish Water to carry out regular sampling of the water sources used to supply the country, a remit which also covers ground water and aquifers. For example, the microbes that cause water-borne diseases such as cholera (Vibrio cholerae) and amoebic dysentery (Entamoeba histolytica) must be removed.
Colloidal clay occurs when the particles are so finely dispersed within the water that they cannot be removed by those techniques designed to remove coarser suspensions. Colloids are removed by adding coagulants such as aluminium sulphate (alum, Al2(SO4)3). Nitrates are particularly toxic to infants and young animals, and so their level is kept below 50 mg/l.
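The 50 mg/l limit lends itself to a simple automated check of sample results; the sketch below is illustrative only, and the site names and readings are invented.

```python
# Minimal sketch: flagging source-water samples that breach the 50 mg/l nitrate
# limit mentioned above. The site names and readings are invented.

NITRATE_LIMIT_MG_PER_L = 50.0

samples = {
    "Loch intake A": 12.4,
    "Borehole 7": 53.1,
    "River intake B": 31.0,
}

for site, nitrate in samples.items():
    status = "EXCEEDS limit" if nitrate > NITRATE_LIMIT_MG_PER_L else "within limit"
    print(f"{site}: {nitrate:.1f} mg/l nitrate - {status}")
```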
Water Treatment Techniques:
A stringent sampling regime therefore ensures that the treatment techniques are effective; the water is treated in stages: coagulation and flocculation, clarification, filtration and, finally, disinfection.
The material formed by the coagulation and flocculation reactions forms a surface layer of sludge. The sludge is removed, and the remaining water is then allowed to settle in a process called clarification that occurs in sedimentation tanks.
Clarification and other processes
Clarification refers to a set of chemical techniques which are applied to the water during and after it has undergone coagulation and flocculation. For example, dissolved air flotation occurs when a high-pressure stream of very fine bubbles is blown into the flocculated water. This forces any remaining suspended material to the surface where it is continually skimmed off.
Simultaneously, the water is passed at high speed through another set of sand or carbon filters. This rapid gravity filtration ensures that any remaining suspended particles are removed. The water is often sampled at this stage and, if necessary, recycled back into the system for secondary treatment. Once it is of sufficient purity, any microbes, pathogens and dissolved ions are dealt with. In the latter case, phosphate (PO43−) ions are removed by the controlled addition of powdered iron filings.
If necessary, nitrate (NO3−) ions are removed by specialist denitrifying bacteria that convert the ions to nitrogen gas. In some cases, specialist ion-exchange techniques are used to eliminate these and other types of inorganic ions. Further filtration removes any additional sludge material.
Disinfecting the water
Disinfection is only effective if the maximum possible quantities of suspended and colloidal materials have been removed. This includes natural organic compounds dissolved in the water, which have a tendency to react with the chlorine used in disinfection to form a class of compounds called trihalomethanes (THMs). These form when three of the hydrogen atoms in a molecule of methane (CH4) are replaced by chlorine atoms.
THMs cause discolouration of the water and, if consumed, can damage the kidneys, among other undesirable effects. Micro-organisms (beneficial or pathogenic) are removed by bubbling ozone (O3) through the water or, more commonly, by adding chlorine or chlorine-containing compounds. The chlorine dissolves in the water to form hypochlorous acid, which can pass through the membranes of bacteria and other water-borne disease microbes. Once inside the cell, the hypochlorous acid reacts with its constituent proteins, overwhelming its metabolic processes. To be effective, the water must be kept slightly alkaline, between pH 7.2 and 7.8. This treatment is analogous to the chlorination of swimming pools and other recreational water facilities.
Overall, water treatment is a complex series of mechanical, biological and chemical steps that must be carried out in a precise order and with great care and attention. In addition, many of the techniques have been in operation for over a century and are being continually improved, irrespective of the already strict legislation.
The term pipework refers to the pipes built into a machine or building. The term drains refers to the system of connected pipes that carry liquids away from the premises. In very general terms, looking after the drainage and pipework in Glasgow, and indeed anywhere else in Scotland, falls to two parties.
The owner of the property is responsible for the drainage system inside the building and the water provider for that which leads up to the premises. In general terms, the water supply pipe runs from the edge of the buildings to the stop valve inside. Both commercial and residential insurance policies will stipulate the extent of the water supply pipe and who is responsible for it.
Burst pipes and the water supply
From bitter and costly experience, I can assure readers that flooding is no laughing matter. The harm is not merely the inconvenience of restricted or even curtailed access to running water; flooding can seriously damage the property and its contents. At DPG Plus, a full and comprehensive CCTV pipework survey is offered as standard.
The survey will identify potential problems, and our fully trained and registered plumbers will advise on the best course of preventative action. The survey will also indicate where damage has already occurred. At DPG Plus, we advise property owners to check with their insurer how often a survey must be undertaken and which components they are responsible for. On this note, it is highly likely that the insurer will require a clear delimitation of responsibility as a condition of the buildings insurance.
The water communication pipe runs from the water main to the stop-valve and water meter. In some cases, the water provider owns either the stop-valve or the communication pipe; in other situations, the provider owns both. Furthermore, it may transpire that even though they do not own them, the registered property owner is responsible for the maintenance of either or both components. This can be confusing, but the onus is not on the water provider to establish the limits of responsibility.
Overall, the water provider is responsible for the repair and maintenance of the water supply into an area such as Glasgow. This responsibility ends once the communication pipe reaches a given property.
In the UK, almost £20 million per annum is spent on clearing accumulated blockages of thick grease and material trapped within drainage and pipework. The figure does not include the cost of cleaning up after undetected blockages have caused flood damage to residential and business properties across Glasgow every year.
Dozens of tonnes of this waste are poured into the pipework every single day in a large city such as Glasgow. The waste fats solidify inside the drainage pipework and collect on the internal surface of the pipe, so its bore diameter is progressively reduced. As the fat accumulates and the internal bore narrows, the flow rate gradually falls, which allows even more greasy fat to build up. If nothing is done, the pipework clogs and the entire drainage system backs up.
When waste water leaves a sink, it first passes through the water trap beneath the sink and then the trap which leads to the external waste pipe. The usually warm waste water comes into contact with the cold water already sitting in the drainage piping, and the fat effectively precipitates out of the cooling water and collects between the two traps.
To make matters worse if there are any clothing fibres or dirt particles then further combination occurs, resulting in a matrix that has the consistency of semi-solid cement. If the blockage has got to this stage then even regularly punching holes through it will only temporarily solve the problem. Eventually, you will need to call out a contractor or use the appropriate chemical cleaner. Clearly, preventing such an accumulation is preferred.
Preventing and clearing the blockage
Regularly flushing the drain with a solution of vinegar or citric acid, salt, bicarbonate of soda or borax will reduce the incidence of blocked drains. For example, a 50:50 mix of vinegar and bicarbonate of soda creates a foaming brew that will sit between the water traps and begin to dislodge any accumulating material. Leave it for at least 30 minutes and repeat if necessary.
After this time, use a plunger to dislodge the clog; if that doesn’t work, further action is required. If you are successful, finish by running hot water or several kettles of boiling water through the system, plunging as you go. Chemical cleaners work on a similar principle, but if the above preventive measures are employed their necessity is reduced.
Calling a contractor
If the blockage just is not shifting, it will be time to call in the professionals. This may be unavoidable if the blockage is located more than 20 cm below the sink: beyond this distance, most available chemical cleaners become so diluted that they are useless by the time they reach the clog. The contractor will have an array of specialist chemicals, spraying equipment, plumbing rods and other tools.
In conclusion, prevention is better than cure so regularly flush your drainage system. If after all your best efforts success is still elusive, then there is no shame in admitting defeat and getting on the phone. | 1 | 2 |
By Karin M. Ouchida and Mark S. Lachs
Dr. Robert N. Butler coined the term ageism in 1968 and spent his career trying to eradicate it. Unfortunately, despite his many accomplishments, “systematic stereotyping and discrimination against people because they are old” still occurs today (Achenbaum, 2013).
The healthcare community is not immune to the deleterious effects of ageism. It permeates the attitudes of medical providers, the mindset of older patients, and the structure of the healthcare system, having a potentially profound influence on the type and amount of care offered, requested, and received.
Ageism Among Healthcare Providers
Adults ages 65 and older see doctors on average twelve times per year, and nearly 80 percent see a primary clinician at least once per year (Davis et al., 2011). These visits represent critical opportunities for providers to promote physical and psychosocial health, and patients expect counseling that is individualized for their functional status, life expectancy, and care preferences. Providers’ knowledge and attitudes about aging can affect how accurately and sensitively they distinguish normal changes associated with aging from acute illness and chronic disease. Ageism can take the form of a provider dismissing treatable pathology as a feature of old age, or treating expected changes of aging as though they were diseases (Kane, Ouslander, and Abrass, 2004).
Ageism among healthcare providers can be explicit or implicit. The geriatrician and writer Dr. Louise Aronson (2015) describes a disturbing example of explicit ageism in which a surgeon asks the medical student observing his case what specialty she is thinking of pursuing. When she answers, “Geriatrics,” the surgeon immediately begins mimicking an older adult complaining about constipation in a high-pitched whine. The attending surgeon had a reputation for being an outstanding teacher, yet repeats this parody throughout the surgical procedure. Another example of explicit ageism involves a respected internal medicine resident flippantly telling her team that she is worried because her patient on morning rounds “looked like this.” The resident closes her eyes and opens her mouth with her tongue protruding off to one side. She then says, “But then I remembered . . . I’m on the geriatrics service.” The resident had made her face into “the Q sign”—a disparaging term, originated in Samuel Shem’s novel The House of God, that describes extremely moribund patients (Shem, 1978).
Sadly, despite the growing need for more providers with geriatrics expertise, many physicians-in-training come to view the care of older adults as frustrating, uninteresting, and less rewarding overall. These negative views likely are influenced by the predominant exposure of medical trainees to hospitalized geriatric patients versus community-dwelling older adults, and by the inherent challenges in caring for medically complex older adults who need extensive care coordination within an increasingly fragmented system (Adelman, Greene, and Ory, 2000).
Trainees’ attitudes are further shaped by the persistent misconceptions that older patients are demented, frail, and somehow unsalvageable. In Higashi and colleagues’ study based on observations of inpatient teams and interviews with students and residents about their experiences caring for elderly patients, a resident said, “It’s always a bigger save when you help a 35-year-old woman with kids than it is to bring an altered 89-year-old with a urinary tract infection back to her semi-altered state” (Higashi et al., 2012).
Dr. Becca Levy (2001) points out that ageism can also operate as implicit thoughts, feelings, and behaviors toward older people that occur without conscious awareness or control. Whether provider ageism is explicit or implicit, it puts older patients at risk for under-treatment and over-treatment. Healthcare providers must also be attentive to unique features of medical encounters with older patients. Older adults may have sensory or cognitive impairments and may be accompanied to the medical encounter by a third person. Clinicians can learn to recognize implicit ageist attitudes and actions, and adopt communication techniques to effectively elicit the patient’s concerns and preferences to provide individualized care.
Potential for under-treatment
Experts in aging often underscore the profound heterogeneity of the elderly population by saying, “If you’ve seen one 85-year-old, you’ve seen one 85-year-old.” Unfortunately, the reported experiences of older adults suggest that healthcare providers remain prone to stereotyping older adults or “applying age-based, group characteristics to an individual, regardless of that individual’s actual personal characteristics” (Macnicol, 2006). In Dr. Erdman Palmore’s Ageism Survey (2001) of community-dwelling older adults ages 60 to 93, 43 percent of respondents reported that “a doctor or nurse assumed my ailments were caused by my age,” and 9 percent said they were “denied medical treatment because of age.”
In a cross-sectional survey design, Davis et al. (2011) used the Expectations Regarding Aging Scale to assess primary care clinicians’ perceptions of aging in the domains of physical health, mental health, and cognitive function. The majority of providers surveyed were physicians, but the sample also included nurse practitioners and physician assistants who serve as primary care providers (PCP). Most PCPs agreed with the statements “Having more aches and pains is an accepted part of aging” (64 percent) and “The human body is like a car: when it gets old, it gets worn out” (61 percent). More than half of PCPs (52 percent) agreed that one should expect to become more forgetful with age, and 17 percent agreed “mental slowness” is “impossible to escape.” Few PCPs believed getting older was associated with social isolation (4.8 percent) and loneliness (5.9 percent), but 14.7 percent of respondents agreed with the statement “It’s normal to be depressed when you are old.” One-third of the physicians agreed that increasing age was associated with worrying more and having lower energy levels. These results demonstrate how pain, fatigue, cognitive impairment, depression, and anxiety could easily go undiagnosed and untreated if healthcare providers erroneously attribute these symptoms and conditions solely to advancing age.
Research has shown that pain is consistently under-treated among older adults. Qualitative studies demonstrate that while patients may harbor ageist expectations about the inevitability of pain in older age, their medical providers reinforce these beliefs by dismissing or minimizing back pain. In one study of an ethnically diverse sample of adults ages 65 years and older who had experienced restricting back pain in the last three months, a New York City focus group participant described the following exchange with his doctor: “Look I can’t walk. What am I supposed to do?” He [the doctor] says, “How old are you?” I said, “I’m close to 90.” [The doctor replies] “What do you expect? You’re an old man” (Makris et al., 2015).
Another common ageist misconception among healthcare providers that can affect diagnosis and treatment of patients is that older adults are no longer sexually active. While the prevalence of sexual activity declines with age, 53 percent of 65- to 74-year-olds and 26 percent of 75- to 85-year-olds report having sex with at least one partner in the previous year. Among the 75- to 85-year-olds who are sexually active, more than 50 percent had sex two to three times per month. Among sexually active men and women, more than half suffer from a bothersome problem related to sex, but only 38 percent of men and 22 percent of women have talked to any physician about it (Lindau, 2007). Physicians who are unaware of their older patients’ sexual health and behaviors will fail to address problems like decreased libido and erectile dysfunction, and miss diagnoses of sexually transmitted diseases, including HIV.
Potential for over-treatment
In healthcare settings, age discrimination (Macnicol, 2006) also can result in harmful over-treatment if medical providers offer misguided health recommendations based on chronological age without assessing an individual’s functional status, other comorbid conditions, and preferences.
Given the unsustainable rate of growth in healthcare spending in the United States, health economists and policy experts have focused on over-treatment as a category of waste. According to some estimates, waste accounts for approximately a third of all U.S. health spending, and over-treatment represents $158 to $226 billion of that waste. Examples of over-treatment specific to older patients include universal prostate-specific antigen screening for prostate cancer, which can result in over-diagnosis of benign or slow-growing tumors, excessive treatment with surgery, and unnecessary harms like urinary incontinence following surgery; intensive care at the end of life that is inconsistent with patient preference; and, overuse of tests and procedures lacking evidence of benefit (Berwick and Hackbarth, 2012; Health Affairs, 2012).
In 2012, the American Board of Internal Medicine launched the Choosing Wisely campaign, asking medical specialties to identify commonly used tests and procedures that lack solid proof of benefit and may cause harm. The campaign aims to foster conversations between patients and providers about the necessity of medical tests and treatments. Examples of medications, tests, and procedures that geriatric patients and providers should question include the placement of percutaneous feeding tubes in patients with advanced dementia, the excessive use of diabetes medications that can result in hypoglycemia, the use of harmful sedatives (like benzodiazepines) for insomnia or agitation, and the use of antibiotics for bacterial colonization of the urine, without clinical symptoms or signs of infection (Choosing Wisely, 2015a).
Surgeons also are trying to individualize care to avoid under- and over-treatment. They are incorporating novel assessment tools to help forecast surgical risk and paying attention to both morbidity and mortality. Each year, more than 4 million major operations are performed on geriatric patients ages 65 and older in the United States. To effectively counsel patients and caregivers about the benefits and risks of surgery, physicians must estimate both the perioperative mortality risk of the procedure and the patient’s life expectancy. Both the surgeon and the primary care physician should also understand the older patient’s overall treatment goals and expectations. Known risk factors for surgical mortality include cognitive impairment, functional dependence, malnutrition, frailty, and preoperative institutionalization, but these vary according to how the risk factor is defined, the procedure type, and the clinical setting. For example, cognitively impaired patients (defined as a diagnosis of dementia) undergoing total knee replacement had a 1.8-fold higher risk of 90-day mortality, but a six-fold-increase in one-year mortality following a hip fracture (Oresanya, Lyons, and Finlayson, 2014).
Joseph et al. (2014) studied geriatric trauma patients and found that a measure of frailty was superior to chronological age in predicting in-hospital complications, discharge to a skilled nursing facility, and death. Frailty has been defined as a geriatric syndrome marked by decreased physiologic reserve resulting in increased vulnerability to poor health outcomes in the face of stressors like acute illness, surgery, and hospitalization. The 50-item Frailty Index used by Joseph and colleagues assessed social activity, mood, activities of daily living, and nutrition, in addition to age, comorbidities, and medications. Academic general surgeon Emily Finlayson has focused on characterizing the long-term functional impact of colon cancer and vascular surgery because her clinical experience conflicted with existing data showing older adult patients have only transient and reversible declines in function after surgery. In her study that looked at a national sample of frail nursing home residents ages 65 and older who underwent resection of colon cancer, more than half of the study population had died and 24 percent had sustained functional decline one year after their surgeries (Finlayson, 2012).
Finlayson hypothesizes that the previous research looked at healthier community-dwelling older adults, used self-reported measures of function, and followed participants for six months, as opposed to a year. In a different study, led by Finlayson, of elderly nursing home residents who had surgery to re-establish blood flow to their lower extremities, 51 percent of patients had died at one year and 32 percent sustained functional decline. The functional decline is especially significant given that 75 percent of the participants were not ambulatory prior to the procedure (Oresanya et al., 2015). Finlayson is quick to point out that she does not want her findings to be used to automatically bar geriatric patients or, specifically, nursing home residents from being offered surgery for cancer or for peripheral vascular disease. Her research, and that of Joseph et al., underscores the heterogeneity of the older adult population, the need to incorporate functional measures preoperatively to assess surgical risk, and the importance of studying long-term functional outcomes.
Communication during medical encounters with older patients
Effective communication between the older adult patients and their healthcare providers to elicit individual goals and preferences is one of the keys to avoiding under- or over-treatment. Unfortunately, studies show that providers communicate differently in medical encounters involving older versus younger adults. When Greene et al. (1986) analyzed the content of eighty audiotaped medical visits, physicians provided better questioning, information, and support to younger patients. Doctors were rated as less patient, less engaged, and less egalitarian with their older patients. Also, physicians responded less to the issues raised by older patients, devoting more time to provider-raised topics. Greene and colleagues hypothesized that their findings reflect different power dynamics, where the generation of older adults in their sample were more likely to respect authority. The results might change if the study were repeated today with a cohort from the Baby Boom Generation (Greene et al., 1986).
Communication between healthcare providers and older adults also is more likely to be complicated by sensory deficits, cognitive impairment, functional limitations, and the presence of an accompanying relative or caregiver in the medical visit (Adelman, Greene, and Ory, 2000). With increasing age, there are expected changes in vision, hearing, and memory. Approximately a third of adults ages 65 and older report some hearing loss (Kane, Ouslander, and Abrass, 2004) and a quarter of individuals ages 75 and older have vision impairment (Cassel and Leipzig, 2003). Alzheimer’s dementia affects 8 percent to 15 percent of people ages 65 and older, but the prevalence of dementia doubles every five years until age 85.
Healthcare providers should remember that the presence of sensory and cognitive changes does not necessarily signify functional impairment. Even in patients diagnosed with dementia, the degree and type of cognitive deficits ranges widely. Unfortunately, older patients may encounter healthcare providers who automatically shout or raise their voices when communicating, or worse, ignore them altogether and speak only to a younger person who accompanied them to the visit. Palmore (2001) reported that a third of respondents in his Ageism Survey described encountering providers who assumed they could not hear well or could not understand, and 39 percent felt they were “patronized or talked down to.”
One way healthcare providers unknowingly patronize older adults is to use “elderspeak”—speaking slowly, with exaggerated intonation, elevated pitch and volume, greater repetitions, and simpler vocabulary and grammatical structure. Older adults perceive elderspeak as demeaning and studies show it can result in lower self-esteem, withdrawal from social interactions, and depression, which only reinforce dependency and increase social isolation (Williams, Kemper, and Hummert, 2005). In patients with dementia living in long-term-care settings, elderspeak has been shown to increase resistance to care (Herman and Williams, 2009). Providers can routinely screen older patients for hearing and vision loss and memory impairment, and employ verbal and written communication strategies to ensure patients understand and retain medical information.
In Higashi and colleagues’ (2012) ethnographic study of negative attitudes toward older adults among medical trainees, a medical student and an intern each describe witnessing poor communication occurring because providers assumed an older patient was cognitively impaired. “Sometimes staff talk about their condition in front of them without addressing them,” one student said, and, “People just don’t explain as much to them, they . . . more just reassure and tell them they’re going to be [okay] and don’t explain the details of their illness,” said an intern. Failing to speak directly to the patient can have huge implications for the quality of patient care; clinicians might obtain incomplete or erroneous histories, and provide inadequate patient education.
By some estimates, more than 50 percent of the time a partner, friend, or caregiver accompanies the older adult to medical encounters (Adelman, Greene, and Ory, 2000). While the third person can be helpful for providing history, recording medical information, and advocating for the patient, the presence of another individual also changes the visit dynamic. Greene et al. (1994) compared triadic and dyadic initial visits of adults ages 60 and older to a primary care practice affiliated with a large urban teaching hospital. Patients who were accompanied to their medical visit had poorer functional status and were more likely to require assistance with ambulation.
In the triadic encounters, the patients raised fewer topics across all content areas and raters reviewing transcripts of the encounters rated the patients as less assertive and expressive. The third person present often talked with physicians about the patient instead of with the patient. Doctors and the third parties frequently referred to the older adult as “he” or “she.” Almost 75 percent of the time, the third party answered questions for the patient even when the patient was capable of responding. A third person may believe he or she is being helpful, but if they answer questions for the patient, this could prevent providers from obtaining an accurate history and from recognizing cognitive impairment, depression, and elder abuse. And, a third person can hinder the older adult’s ability to form a close relationship with the physician—a relationship that has been shown to affect adherence, patient satisfaction, and even health status.
Ageism Among Older Adults
Health providers are not the only ones who may harbor or exhibit ageist attitudes. Older adults often possess very negative views of aging, not realizing the potential impact on their health. Older adults who believe pain, fatigue, depressed mood, dependency upon others, and decreased libido are a normal part of aging are less likely to seek healthcare (Sarkisian, Hays, and Mangione, 2002) and therefore are at risk for being under-treated. In one study focusing on depression, older participants who attributed feeling depressed to aging were four times less likely to believe they should discuss the symptom with a doctor (Sarkisian, Lee-Henderson, and Mangione, 2003). Those with low expectations for aging are less likely to engage in physical activity (Sarkisian et al., 2005) and other preventive behaviors like having regular physical examinations, eating a balanced diet, using a seatbelt, exercising, and limiting alcohol and tobacco use (Levy and Myers, 2004). Providers can routinely ask about pain, mood, energy level, functional status, and sexual health, and then educate patients about options for evaluation and treatment.
Not all older adults believe normal changes with aging signal inevitable decline. On a commonly used “Attitudes Toward Own Aging” scale, those with positive attitudes feel they have as much energy now as they did the previous year, are generally as happy now as they were when they were younger, and feel things are better than they expected them to be. Levy and Meyers (2004) found that individuals with these positive self-perceptions are more likely to engage in preventive health behaviors over twenty years. In other long-term studies, positive self-perceptions were associated with better functional health and were more predictive of changes in functional health over time than socioeconomic status, race, self-rated health, and gender (Levy, Slade, and Kasl, 2002.). Levy et al. (2012) found that older persons with positive age stereotypes were 44 percent more likely to fully recover from severe disability than those with negative age stereotypes. Finally, older individuals who held more optimistic views of aging lived 7.5 years longer than those with less positive perceptions of aging (Levy et al., 2002).
While much research focuses on how negative self-perceptions of aging could result in under-reporting and under-treatment, negative views of aging might also lead to over-treatment. The Baby Boom Generation began turning 65 in 2011, and has a reputation for wanting to appear youthful, fearing the aging process, and believing medical technology will allow them to live longer than their parents (AARP, 2011). A baby boomer wanting to prevent or postpone the aging process might be extra vigilant about symptoms and seek out more medical care.
Compared to the cohort of older adults studied by Greene and colleagues (1986), the baby boomers are probably more likely to get their agendas addressed during medical visits because they are perceived as driven, technologically savvy, and not afraid to question authority. To try to limit unnecessary treatments and tests, the American Academy of Family Physicians’ Choosing Wisely List advised doctors to avoid performing the following procedures: imaging tests for low-back pain in the absence of red flags like fever, weight loss, and neurological deficits; annual electrocardiograms for low-risk patients without any cardiac symptoms; and, PAP smears for screening of cervical cancer in women older than age 65 who have had normal PAPs in the past, and do not have any new sexual partners (Choosing Wisely, 2015b).
The Society for General Internal Medicine recommends against annual “health maintenance” visits because they have not been shown to reduce morbidity, mortality, or hospitalization, but instead create potential harm from unnecessary testing (Choosing Wisely, 2015c). Recognizing the need to shift the emphasis from tests and procedures to more functional assessment and counseling, The Centers for Medicare & Medicaid Services (CMS) created an annual wellness visit that reimburses providers for time spent screening for depression and anxiety, assessing functional status and social support, reconciling medications, reviewing vaccines, and discussing other preventive health measures (CMS, 2015).
Ageism in the Healthcare System
Using Butler’s original definition of ageism as “systematic stereotyping and discrimination against people because they are old,” one could argue that the healthcare system discriminates against older adults in several ways. First, the number of doctors with advanced geriatrics training is declining. Second, more physicians are opting out of Medicare. Third, using data from clinical trials and the recommendations from clinical practice guidelines is problematic when caring for older adults with multiple chronic illnesses because they are often excluded from the study populations.
By 2030, one in five Americans will be age 65 or older. There will be 61 million “young-old” (ages 65 to 84) and 9 million “old-old” (ages 85 and older). Unfortunately, as the demand for providers with geriatrics expertise increases, the physician supply remains inadequate. Currently, there are approximately 7,300 certified geriatricians but only 50 percent of fellowship-trained geriatricians are recertifying (Bragg et al., 2012). While the number of geriatrics fellowship positions has increased slightly from 430 to 455 over the last three years, the number of slots filled has remained around 300 or less (ADGAP, 2015).
More geriatricians are needed for direct patient care but they also are needed in academic environments to teach medical trainees and inspire them to choose careers in geriatrics, to conduct aging research, and to pioneer new models of care. From 2005 to 2010, the number of full-time geriatric medicine physician faculty and research faculty increased from 1,690 to 2,008. Yet in 2010, only half of all medical schools had nine or more full-time faculty engaged in education, research, and clinical care—the estimated minimum number needed to develop and maintain an effective medical school geriatrics curricula (Bragg et al., 2012).
Compounding the shortage of geriatricians, more doctors may be closing their practices to patients with Medicare because of frustration with declining reimbursement rates and increasing requirements like the use of electronic health records. Bishop, Federman, and Keyhani (2011) analyzed data from a national survey of physicians working in non–federally funded, non–hospital-based office practices, and found that physician acceptance of new Medicare patients only declined from 95.5 percent in 2005 to 92.9 percent in 2008. Data released by the CMS confirm that the percentage of physicians opting out remains small, but the absolute number increased 250 percent, from 3,700 doctors in 2009 to 9,539 in 2012 (Beck, 2013). Doctors may be limiting the number of Medicare patients in their practices even if they do not opt out completely. Patients who live in wealthier urban and suburban areas may have difficulty finding a Medicare provider, or face long wait times for appointments because other patients in the area are willing and able to pay out of pocket.
The final example of ageism in the healthcare system concerns the inadequacy of single disease clinical practice guidelines (CPG) and the exclusion of older adults with multiple chronic illnesses from the trials that are used to generate these guidelines. Twenty percent of Medicare beneficiaries have five or more chronic illnesses (Tinetti, Bogardus, and Agostini, 2004). In one year, these individuals have an average of fifty prescriptions filled, see fourteen different physicians, and make thirty-seven office visits, putting them at risk for adverse drug events, conflicting medical advice, and duplicate tests (Benjamin, 2010). The majority of CPGs have no specific recommendations for patients with more than one chronic illness, so clinicians try to combine several disease-specific CPGs, increasing the risk of adverse drug events and disease−disease interactions. Boyd et al. (2005) analyzed fifteen CPGs representing the most common chronic diseases managed by primary care providers such as heart failure, atrial fibrillation, diabetes mellitus, osteoarthritis, and chronic obstructive pulmonary disease (COPD). Most of the CPGs reviewed did not modify suggestions for older adults with multi-morbidity, discuss the quality of the evidence underlying recommendations, or advise incorporation of patient preferences and life expectancy into treatment plans. When Boyd and colleagues applied the relevant CPGs to a hypothetical 79-year-old woman with COPD, diabetes, osteoporosis, hypertension, and osteoarthritis, they found she would be advised to take twelve medications requiring nineteen doses per day, and to do fourteen non-pharmacologic activities, such as weight-bearing exercise and diabetes self-management. Of note, in 2007, the American Heart Association published a two-part series with age-specific practice guidelines for “Acute Coronary Care in the Elderly” (Alexander et al., 2007). This follows previous research demonstrating older patients with unstable angina, acute congestive heart failure, and acute myocardial infarction were less likely to receive standard life-saving therapies (Giugliano et al., 1998).
Older adults with multiple comorbidities and cognitive or functional impairment are generally excluded from the randomized controlled trials that eventually form the basis of clinical practice guidelines. The trials often use restrictive admission criteria to maximize the accuracy of the results for the target population, but at the expense of being able to generalize the results to other patients. If the health needs of older adults are not being addressed in the CPGs and are not part of the evidence used to generate these guidelines, then physicians may not be able to extrapolate CPG recommendations. Cox and colleagues (2011) conducted a descriptive analysis of fourteen CPGs. Twelve provide specific recommendations for individuals ages 65 and older, but only five guidelines gave recommendations for frail elderly ages 80 and older. Approximately 2,200 of the 2,500 studies used to create the clinical practice guidelines had information about the mean participant age. Only thirty-one of the 2,200 studies had an average participant age of 80 and older, representing 1.4 percent of the total number of studies (Cox et al., 2011).
Lewis et al. (2003) looked specifically at cancer clinical trials and found that although 61 percent of new cancer diagnoses (and 70 percent of cancer deaths) occur in older adults, they continue to make up a minority of trial participants. Using National Cancer Institute data with characteristics for 59,300 patients across 495 trials, they found that older adults were included in more in trials for late-stage cancers (41 percent of participants) versus early-stage cancers (25 percent). While less than 1 percent of all trials had age cut-offs, older adults were commonly excluded from participation based on lab abnormalities (hematologic, hepatic, renal), cardiac conditions, and functional status. More than 80 percent of the cancer trials required participants be ambulatory, capable of working, or independent in activities of daily living. Interestingly, only 3 percent of trials specifically excluded patients with Alzheimer’s Disease, but 16 percent did have exclusion criteria for other psychiatric conditions.
While ageism unfortunately still exists among the attitudes of health providers, older adults and the healthcare system itself, there are numerous interventions underway that should begin to mitigate it.
Encouraging non-ageist attitudes among healthcare providers requires that they learn to recognize and appreciate the heterogeneity of older adults. This will happen when medical trainees gain exposure to older adults outside the hospital, so ageism does not become “an occupational hazard of the health profession” (Greene, 1986). Geriatrics education also needs to be a required part of the medical school curriculum because the majority of students will go on to care for older adults whether they end up in surgical or medical specialties. The Institute of Medicine’s report, Retooling for an Aging America, called for more universal geriatric education among health professionals in 2008 (Leipzig, 2009). However, while 85 percent of medical schools offer a geriatrics elective experience, only 27 percent of medical schools require geriatrics rotations during the clerkship years (Bragg et al., 2012). Meanwhile, medical students complete required clinical rotations in pediatrics and obstetrics even though most will never care for children or pregnant women after they graduate. Minimum geriatrics competencies already exist and recommend that every medical student upon graduation possess the ability to safely prescribe medications, assess functional status, and make clinical decisions based on elderly patients’ prognosis and personal preferences (Leipzig, 2009). More medical schools have begun to incorporate longitudinal curricula where trainees are paired with patients over years in order to witness patient experience first-hand, and begin to understand the challenges in navigating the healthcare system.
Older adults can change their self-perceptions of aging, but ageist stereotypes are both pervasive in American culture and harmful to the physical and psychological well-being of older adults. The demographic changes that will result in one in five Americans being age 65 or older in 2030 likely will not be sufficient. Strategies for reducing ageism will require targeted educational and media campaigns like the successful AARP campaign to increase physical activity among older adults, which included an intensive consumer market research plan based on data combined from three national surveys, focus groups, and in-depth one-on-one interviews to identify opportunities and barriers to change behaviors at the individual and community levels (Ory et al., 2003).
Geriatrics as a field and profession has already started to rebrand itself and will need to use some of the same strategies as AARP. To foster positive attitudes toward aging, older adults need something akin to the “What to Expect” series for pregnant mothers, or the standard anticipatory guidance counseling that is embedded in each well child visit. However, given the power and persistence of negative attitudes, it might be best to have a “What Not to Expect” guide to aging to dispel harmful assumptions that depression, social isolation, dementia, pain, and fatigue are part and parcel of getting older. Such a guide, if created by multi-disciplinary experts in aging and embedded into routine office visits, could also offer practical advice on distinguishing normal from abnormal aging, promoting and maintaining physical and cognitive function, and navigating the healthcare system.
Eradicating ageism within the healthcare system will require more substantial changes. Creating funding for geriatrics fellows to pursue a second year of training for educational or clinical research, and improving reimbursement for practicing geriatricians, will help support academic departments and divisions, perhaps fueling a better pipeline of highly qualified trainees who have a genuine interest in caring for older adults. To address the shortage of qualified geriatrics providers, more nurse practitioners and physician assistants should be encouraged to obtain geriatrics training and certification. Geriatrics educators can also look to form partnerships within their medical departments with colleagues in cardiology, oncology, and nephrology, and should collaborate with the general and specialist surgeons as well so that the trainees in these fields and the patients benefit from the dual expertise. Finally, it is imperative to begin including older adults in clinical trials that go on to form the basis of clinical practice guidelines.
Karin M. Ouchida, M.D., is Joachim Silbermann Family Clinical Scholar in Geriatrics at Weill Cornell Medical College in New York City, assistant professor of Medicine at the College, and the program director for the Cornell Geriatrics Fellowship at New York–Presbyterian Hospital. Mark S. Lachs, M.D., is the Irene F. and I. Roy Psaty Distinguished Professor of Clinical Medicine and professor of medicine at Weill Cornell.
Editor’s Note: This article is taken from the Fall 2015 issue of ASA’s quarterly journal, Generations, an issue devoted to the topic “Ageism in America: Reframing the Issues and Impact.” ASA members receive Generations as a membership benefit; non-members may purchase subscriptions or single copies of issues at our online store. Full digital access to current and back issues of Generations is also available to ASA members and Generations subscribers at Ingenta Connect.
AARP. 2011. “Poll: Perceptions of Boomers; What Are the Beliefs Held About the Baby Boom Generation?” Retrieved July 27, 2015.
Achenbaum, W. A. 2013. “Robert N. Butler, M.D. (January 21, 1927–July 4, 2010): Visionary Leader.” The Gerontologist 54: 6–12.
Adelman, R. D., Greene, M. G., and Ory, M. G. 2000. “Communication Between Older Patients and Their Physicians.” Clinical Geriatric Medicine 16: 1–24.
ADGAP. 2015. “2015 Match Data Presentation.” Retrieved July 28, 2015.
Alexander, K. P., et al. 2007. “Acute Coronary Care in the Elderly, Part 1: Non-ST Segment Elevation Acute Coronary Syndromes.” Circulation 115: 2549–69.
Aronson, L. 2015. “The Human Lifecycle’s Neglected Stepchild.” The Lancet 385(9967): 500–1.
Beck, M. 2013. “More Doctors Steer Clear of Medicare.” The Wall Street Journal, July 29. Retrieved July 28, 2015.
Benjamin, R. M. 2010. “Surgeon General’s Perspectives. Multiple Chronic Conditions: A Public Health Challenge.” Public Health Reports (125) September−October.
Berwick, D. M., and Hackbarth, A. D. 2012. “Eliminating Waste in U.S. Health Care.” Journal of the American Medical Association (JAMA) 307(14): 1513–6.
Bishop, T. F., Federman, A. D., and Keyhani, S. 2011. “Declines in Physician Acceptance of Medicare and Private Coverage.” Archives of Internal Medicine 171(12): 1117–19.
Boyd, C. M., et al. 2005. “Clinical Practice Guidelines and Quality of Care for Older Patients with Multiple Comorbid Diseases: Implications for Pay for Performance.” JAMA 294(6): 716–24.
Bragg, E. J., et al. 2012. “The Development of Academic Geriatric Medicine in the United States from 2005 to 2010: An Essential Resource for Improving the Medical Care of Older Adults.” Journal of the American Geriatrics Society 60: 1540–5.
Cassel, C. K., and Leipzig, R. M., eds. 2003. Geriatric Medicine: An Evidence-Based Approach (4th ed.). New York: Springer.
Choosing Wisely. 2015b. “American Academic of Family Physicians: Fifteen Things Physicians and Patients Should Question.” Retrieved July 27, 2015.
Choosing Wisely. 2015c. “Society for General Internal Medicine: Five Things Physicians and Patients Should Question.” Retrieved July 27, 2015.
Cox, L., et al. 2011. “Underrepresentation of Individuals 80 years of Age and Older in Chronic Disease Clinical Practice Guidelines.” Canadian Family Physician 57: 263–9.
Davis, M., et al. 2011. “Primary Care Clinician Expectations Regarding Aging.” The Gerontologist 51(6): 856–66.
Finlayson, E., et al. 2012. “Functional Status After Colon Cancer Surgery in Elderly Nursing Home Residents.” Journal of the American Geriatrics Society 60: 967−73.
Greene, M. G., et al. 1986. “Ageism in the Medical Encounter: An Exploratory Study of the Doctor−Elderly Patient Relationship.” Language & Communication 6(1/2): 113–24.
Greene, M. G., et al. 1994. “The Effects of the Presence of a Third Person on the Physician−Older Patient Medical Interview.” Journal of the American Geriatrics Society 42(4): 413–9.
Giugliano, R. P., et al. 1998. “Elderly Patients Receive Less Aggressive Medical and Invasive Management of Unstable Angina: Potential Impact of Practice Guidelines.” Archives of Internal Medicine 158(10): 1113–20.
Centers for Medicare & Medicaid Services (CMS). 2015. “The ABCs of the Annual Wellness Visit.” Retrieved July 27, 2015.
Choosing Wisely. 2015a. “American Geriatrics Society: Ten Things Physicians and Patients Should Question.” Retrieved July 14, 2015.
Health Affairs. 2012. “Health Policy Brief: Reducing Waste in Health Care.” December 13. Retrieved August 12, 2015.
Herman, R. E., and Williams, K. N. 2009. “Elderspeak’s Influence on Resistiveness to Care: Focus on Behavioral Events.” American Journal of Alzheimer’s Disease and Other Dementias 24(5): 417−23.
Higashi, R., et al. 2012. “Elder Care as ‘Frustrating’ and ‘Boring’: Understanding the Persistence of Negative Attitudes Towards Older Patients Among Physicians-in-Training.” Journal of Aging Studies 26(4): 476–83.
Joseph, B., et al. 2014. “Superiority of Frailty Over Age in Predicting Outcomes Among Geriatric Trauma Patients: A Prospective Analysis.” JAMA Surgery 149(8): 766–72.
Kane, R. L., Ouslander, J. G., and Abrass, I. B., eds. 2004. Essentials of Clinical Geriatrics (5th ed.) New York: The McGraw-Hill Companies, Inc.
Leipzig, R. 2009. “The Patients Doctors Don’t Know.” The New York Times, July 1. Retrieved August 12, 2015.
Levy, B. R. 2001. “Eradication of Ageism Requires Addressing the Enemy Within.” The Gerontologist 41(5): 578–9.
Levy, B. R., and Myers, L. M. 2004. “Preventive Health Behaviors Influenced by Self-perceptions of Aging.” Preventive Medicine 39(3): 625–9.
Levy, B. R., Slade, M. D., and Kasl, S. V. 2002. “Longitudinal Benefit of Positive Self-perceptions of Aging on Functional Health.” Journals of Gerontology, Series B: Psychological Sciences and Social Sciences 57: 409–17.
Levy, B. R., et al. 2002. “Longevity Increased by Positive Self-perceptions of Aging.” Journal of Personality and Social Psychology 83(2): 261–70.
Levy, B. R., et al. 2012. “Association Between Positive Age Stereotypes and Recovery from Disability in Older Persons.” JAMA 308(19): 1972–3.
Lewis, J. H., et al. 2003. “Participation of Patients 65 Years of Age or Older in Cancer Clinical Trials.” Journal of Clinical Oncology 21(7): 1383–9.
Lindau, S. T., et al. 2007. “A Study of Sexuality and Health Among Older Adults in the United States.” New England Journal of Medicine 357(8): 762−74.
Macnicol, J. 2006. Age Discrimination: A Historical and Contemporary Analysis. New York: Cambridge University Press.
Makris, U. E., et al. 2015. “Ageism, Negative Attitudes, and Competing Comorbidities—Why Older Adults May Not Seek Care for Restricting Back Pain: A Qualitative Study.” BMC Geriatrics 15(39): 1−9.
Oresanya, L. B., Lyons W. L., and Finlayson, E. 2014. “Preoperative Assessment of the Older Patient: A Narrative Review.” JAMA 311(20): 2110–20.
Oresanya, L., et al. 2015. “Functional Outcomes After Lower Extremity Revascularization in Nursing Home Residents: A National Cohort Study.” JAMA Internal Medicine 176(6): 951−7.
Ory, M., et al. 2003. “Challenging Aging Stereotypes: Strategies for Creating a More Active Society.” American Journal of Preventive Medicine 25(3Siii): 164−71.
Palmore, E. 2001. “The Ageism Survey: First Findings.” The Gerontologist 41(5): 572–5.
Sarkisian, C. A., Hays, R. D., and Mangione, C. M. 2002. “Do Older Adults Expect to Age Successfully? The Association Between Expectations Regarding Aging and Beliefs Regarding Healthcare-seeking Among Older Adults.” Journal of the American Geriatrics Society 50(11): 1837–43.
Sarkisian, C. A., Lee-Henderson, M. H., and Mangione, C. M. 2003. “Do Depressed Older Adults Who Attribute Depression to ‘Old Age’ Believe It Is Important to Seek Care?” Journal of General Internal Medicine 18(12): 1001–5.
Sarkisian, C. A., et al., 2005. “The Relationship Between Expectations for Aging and Physical Activity Among Older Adults.” Journal of General Internal Medicine 20(10): 911–15.
Shem, S. 1978. The House of God. New York: Dell Publishing.
Tinetti, M. E., Bogardus S. T., and Agostini, J. V. 2004. “Potential Pitfalls of Disease-specific Guidelines for Patients with Multiple Conditions.” New England Journal of Medicine 351(27): 2870–4.
Williams, K., Kemper, S., and Hummert, M. L. 2005. “Enhancing Communication with Older Adults: Overcoming Elderspeak.” Journal of Psychosocial Nursing Mental Health Services 43(5): 12–6. | 1 | 7 |
Understanding Spanning Tree
The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network.
STP was invented by Dr. Radia Perlman, distinguished engineer at Sun Microsystems. Dr. Perlman devised a method by which bridges can obtain Layer 2 routing utopia: redundant and loop-free operation. Think of spanning tree as a tree that the bridge keeps in memory for optimized and fault-tolerant data forwarding.
Spanning tree in a nutshell
- STP provides a means to prevent loops by blocking links in an Ethernet network. Blocked links can be brought in to service if active links fail.
- The root bridge in a spanning tree is the logical center and sees all traffic on a network.
- Spanning tree recalculations are performed automatically when the network changes but cause a temporary network outage.
- Newer protocols, such as TRILL, prevent loops while keeping links that would be blocked by STP in service.
Eliminating loops with spanning tree
If your switches are connected in a loop without STP, each switch would infinitely duplicate the first broadcast packet heard because there’s nothing at Layer 2 to prevent a loop.
STP prevents loops by blocking one or more of the links. If one of the links in use goes down, then it would fail over to a previously blocked link. How spanning tree chooses which link to use depends entirely on the topology that it can see.
The idea behind a spanning tree topology is that bridges can discover a subset of the topology that is loop-free: that’s the tree. STP also makes certain there is enough connectivity to reach every portion of the network by spanning the entire LAN.
Bridges will perform the spanning tree algorithm when they are first connected to the network or whenever there is a topology change.
When a bridge hears a “configuration message,” a special type of BPDU (bridge protocol data unit), it will begin its disruptive spanning tree algorithm. This starts with the election of a “root bridge” through which all data will flow.
Tip: When every switch is left at its default priority, the device with the lowest MAC address wins the root bridge election. Since this is often the oldest and probably slowest device, it’s best to choose the root bridge manually by lowering the bridge priority on the switch you want to win.
The next step is for each bridge to determine the shortest path to the root bridge so that it knows how to get to the “center.” A second election happens on each LAN, and it elects the designated bridge, or the bridge that’s closest to the root bridge. The designated bridge will forward packets from the LAN toward the root bridge.
The final step for an individual bridge is to select a root port. This simply means “the port that I use to send data towards the root bridge.”
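To make the election concrete, here is a rough Python sketch of the first and last steps just described: electing the root bridge by lowest bridge ID (priority, then MAC address as the tie-breaker), then picking a root port by lowest path cost. The switches, costs and field names are invented for illustration; real BPDUs carry considerably more state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bridge:
    priority: int  # lower wins the election
    mac: str       # tie-breaker: lower MAC address wins

    @property
    def bridge_id(self):
        return (self.priority, self.mac)

# Hypothetical switches; the values are illustrative only.
bridges = [
    Bridge(32768, "00:1a:2b:3c:4d:5e"),
    Bridge(32768, "00:1a:2b:3c:4d:01"),  # same priority, lower MAC
    Bridge(4096,  "00:aa:bb:cc:dd:ee"),  # manually lowered priority -> wins
]

root = min(bridges, key=lambda b: b.bridge_id)
print("Root bridge:", root)

# Root port selection on one non-root bridge: choose the local port with the
# lowest total path cost towards the root (the costs here are made up).
path_cost_via_port = {"Gi0/1": 4 + 4, "Gi0/2": 19}
root_port = min(path_cost_via_port, key=path_cost_via_port.get)
print("Root port:", root_port)
```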
Note: Every single port on a bridge, even ones connected to endpoints, will participate in the spanning tree unless a port is configured as “ignore.”
A newly connected bridge will send a reconfiguration BPDU, and the other connected devices will comply. All traffic is stopped for 30-50 seconds while a spanning tree calculation takes place.
In 2001, certain vendors started introducing rapid spanning tree, a modified version of the spanning tree algorithm that reduces outages. It’s fully compatible with older devices that only know the original spanning tree algorithm, and it reduces the 30-50-second outage time to less than ten seconds in most cases, so use it if you can.
Note: RSTP works by adding alternate and backup port roles. Because these fallback paths are identified in advance, they can move to the forwarding state quickly when an active link fails, rather than passively waiting for the network to converge.
VLANs and PVST
STP can cause problems with VLANs if one of the physical links happens to be a VLAN trunk. That’s because with only one spanning tree, it’s possible the link with the VLAN trunk will need to be blocked. That could result in no connectivity for a particular VLAN to the rest of its LAN. To solve this, enable per-VLAN spanning trees (PVST).
With PVST enabled, a bridge will run one spanning tree instance per VLAN on the bridge. If a trunk link contains VLANs 1, 2, and 3, it can then decide that VLANs 1 and 2 should not take that path, but still allow VLAN 3 to use it.
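Continuing the illustrative sketch from above (it is only a sketch; per-VLAN state in a real switch is far richer), PVST amounts to running one independent election per VLAN, so tuning priorities per VLAN lets different VLANs pick different roots and therefore block different links:

```python
# Hypothetical per-VLAN bridge priorities; the lower value wins that VLAN's election.
per_vlan_priority = {
    1: {"switch-A": 4096,  "switch-B": 32768},
    2: {"switch-A": 32768, "switch-B": 4096},
    3: {"switch-A": 32768, "switch-B": 32768},  # tie: first entry wins here
}

for vlan, priorities in per_vlan_priority.items():
    root = min(priorities, key=priorities.get)
    print(f"VLAN {vlan}: root bridge is {root}")
```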
Spanning tree drawbacks
One of the drawbacks of STP is that even though there may be many physical or equal-cost multiple paths through your network from one node to another, all your traffic will flow along a single path that has been defined by a spanning tree. The benefit of this is that traffic loops are avoided, but there is a cost. Restricting traffic to this unique path means blocking alternative, and sometimes more direct, paths.
That means that your full potential network capacity can never be realized. (It is possible to use multiple simultaneous spanning trees for separate VLANs, as mentioned above, but the traffic in any given VLAN will still not be able to use all your available network capacity.)
In the past this has been acceptable, but with the increasing use of virtualization technology in many data centers, there is a need for a more efficient and reliable routing infrastructure that can handle the very high I/O demands of virtualized environments.
Spanning tree alternatives: TRILL and NPB
Transparent Interconnection of Lots of Links (TRILL) is a routing protocol network standard which:
- Uses shortest path routing protocols instead of STP.
- Works at Layer 2, so protocols such as FCoE can make use of it.
- Supports multihopping environments.
- Works with any network topology, and uses links that would otherwise have been blocked.
- Can be used at the same time as STP.
The main benefit of TRILL is that it frees up capacity on your network which can’t be used (to prevent routing loops) if you use STP, allowing your Ethernet frames to take the shortest path to their destination. This in turn means more efficient utilization of network infrastructure and a decreased cost-to-benefit ratio.
These benefits are particularly important in data centers running cloud computing infrastructure. TRILL is also more stable than STP because it provides faster recovery time in the event of hardware failure.
Why do we need private IP addresses?
We need private IP (version 4) addresses because demand for public IP (version 4) addresses quickly outgrew the number available. Only about 3,706.65 million addresses are available after taking out the reserved ranges. You can probably imagine that the number of devices that need to connect to the Internet easily outstrips this. Think mobile phones, office PCs, home networked devices and so on.
To overcome this, a whole private IP address range can be used to hide behind a single public IP address. The available private IP address ranges are 10.0.0.0 – 10.255.255.255, 172.16.0.0 – 172.31.255.255 and 192.168.0.0 – 192.168.255.255.
It is up to you which private range you choose but this comes down to network design and the total addresses required on any given network.
When you go on the Internet you can create a rule on your router or firewall stating that if you are using a private IP between 10.10.10.10 and 10.10.10.100 (for example), present to the Internet as this public IP address of 184.108.40.206. This is known as Network Address Translation (NAT). You may ask: how does it know which private IP is mapped to which address if we only have one public IP address and hundreds of private addresses? This is where we use Port Address Translation (PAT). The router/firewall maintains a table and does very specific port mappings. So it will say: if you come to me from 10.10.10.10 port 12345, present to the Internet as 184.108.40.206 port 45678. That way we can overload a lot of private IP addresses onto one public address, using random port numbers as the identifier.
To finish answering the question: by using this method we need far fewer public IP addresses per person while still giving everyone access to the global Internet.
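A rough sketch of the translation table such a router or firewall keeps is shown below. The addresses and ports are made-up examples, and a real implementation also tracks the protocol, timeouts and connection state.

```python
# Illustrative PAT (overloaded NAT) table: many private (ip, port) pairs share one public IP.
PUBLIC_IP = "198.51.100.7"           # example public address (documentation range)

nat_table = {}                        # (private_ip, private_port) -> public_port
reverse_table = {}                    # public_port -> (private_ip, private_port)
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source to (PUBLIC_IP, allocated public port)."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        reverse_table[next_port] = key
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving on a public port back to the internal host, if state exists."""
    return reverse_table.get(public_port)    # None -> no state, packet is dropped

print(translate_outbound("10.10.10.10", 12345))   # ('198.51.100.7', 40000)
print(translate_inbound(40000))                    # ('10.10.10.10', 12345)
print(translate_inbound(50000))                    # None -- unsolicited, dropped
```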
On a side note, the IP (version 6) address space is designed primarily with public IP addresses in mind. It has a possible 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. We won’t be running out of those any time soon. 🙂
What are the limitations of a private IP address?
Private IP addresses are not routable on the Internet and are not registered.
A reserved private range exists in each of the three address classes.
They are reserved mainly for private use, whereas public IPs are registered and routable on the Internet.
Other drawbacks are isolation (hosts cannot be reached from the Internet without NAT or port forwarding) and the extra maintenance this involves.
Which IPv4 address ranges have been reserved for private networks?
| Address block (CIDR) | Range | Number of addresses | Scope | Purpose |
|---|---|---|---|---|
| 0.0.0.0/8 | 0.0.0.0 – 0.255.255.255 | 16,777,216 | Software | Used for broadcast messages to the current (“this”) network |
| 10.0.0.0/8 | 10.0.0.0 – 10.255.255.255 | 16,777,216 | Private network | Used for local communications within a private network |
| 100.64.0.0/10 | 100.64.0.0 – 100.127.255.255 | 4,194,304 | Private network | Used for communications between a service provider and its subscribers when using a carrier-grade NAT |
| 127.0.0.0/8 | 127.0.0.0 – 127.255.255.255 | 16,777,216 | Host | Used for loopback addresses to the local host |
| 169.254.0.0/16 | 169.254.0.0 – 169.254.255.255 | 65,536 | Subnet | Used for link-local addresses between two hosts on a single link when no IP address is otherwise specified, such as would have normally been retrieved from a DHCP server |
| 172.16.0.0/12 | 172.16.0.0 – 172.31.255.255 | 1,048,576 | Private network | Used for local communications within a private network |
| 192.0.0.0/24 | 192.0.0.0 – 192.0.0.255 | 256 | Private network | Used for the IANA IPv4 Special Purpose Address Registry |
| 192.0.2.0/24 | 192.0.2.0 – 192.0.2.255 | 256 | Documentation | Assigned as “TEST-NET” for use in documentation and examples. It should not be used publicly. |
| 192.88.99.0/24 | 192.88.99.0 – 192.88.99.255 | 256 | Internet | Used by 6to4 anycast relays |
| 192.168.0.0/16 | 192.168.0.0 – 192.168.255.255 | 65,536 | Private network | Used for local communications within a private network |
| 198.18.0.0/15 | 198.18.0.0 – 198.19.255.255 | 131,072 | Private network | Used for testing of inter-network communications between two separate subnets |
| 198.51.100.0/24 | 198.51.100.0 – 198.51.100.255 | 256 | Documentation | Assigned as “TEST-NET-2” for use in documentation and examples. It should not be used publicly. |
| 203.0.113.0/24 | 203.0.113.0 – 203.0.113.255 | 256 | Documentation | Assigned as “TEST-NET-3” for use in documentation and examples. It should not be used publicly. |
| 224.0.0.0/4 | 224.0.0.0 – 239.255.255.255 | 268,435,456 | Internet | Reserved for multicast |
| 240.0.0.0/4 | 240.0.0.0 – 255.255.255.254 | 268,435,456 | Internet | Reserved for future use |
| 255.255.255.255/32 | 255.255.255.255 | 1 | Subnet | Reserved for the “limited broadcast” destination address |
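You rarely need to memorise this table: Python’s standard ipaddress module classifies most of these ranges for you. A small illustration follows; the sample addresses are arbitrary.

```python
import ipaddress

for text in ["10.1.2.3", "172.20.0.9", "192.168.1.105", "127.0.0.2",
             "169.254.10.10", "224.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(text)
    print(f"{text:15} private={ip.is_private} loopback={ip.is_loopback} "
          f"link_local={ip.is_link_local} multicast={ip.is_multicast} global={ip.is_global}")
```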
What about the IP address range “127.0.0.0/8”?
It’s also reserved for loopback, so no, it’s not widely used for anything.
In practice, 127.0.0.1 is usually used as “the” loopback address, but the rest of the block should loopback as well, meaning it’s just generally not used for anything. (Though, for example, larger Cisco switches will use 127.0.0.xx IPs to listen for attached cards and modules, so at least some of other addresses are in use.)
As already stated, the whole block is used for loopback, so I’m only adding one example for regular desktop use: a loopback address other than 127.0.0.1.
Normally you would not connect an RDP client to the same computer that you are using (and with 127.0.0.1 you are not allowed to, even if you wanted to see nice mirror effects :). Because every address in 127.0.0.0/8 loops back, though, an address such as 127.0.0.2 still reaches your own machine.
How does PAT (Port Address Translation, also called “IP-Masquerading” or “Source-NAT”) work internally?
Due in large part to alleged NAT support on consumer devices, many people are confused about what NAT really is. Network Address Translation is used for many purposes, including but certainly not limited to, saving IP addresses. In this installment of Networking 101, we’ll try to clear all this up.
NAT is a feature of a router that will translate IP addresses. When a packet comes in, it will be rewritten in order to forward it to a host that is not the IP destination. A router will keep track of this translation, and when the host sends a reply, it will translate back the other way.
Home users who talk about NAT are actually talking about PAT, or Port Address Translation. This is quite easy to remember: PAT translates ports, as the name implies, and likewise, NAT translates addresses. Sometimes PAT is also called Overloaded NAT. It doesn’t really matter what you call it, just be careful about blanket “NAT can’t” statements: they are likely incorrect.
Now that that’s out of the way, let’s clarify some terminology required for a NAT discussion. When we refer to the inside, we’re talking about the internal network interface that receives egress traffic. This internal network may or may not be using private addresses (more on those in a minute). The outside refers to the external-facing network interface, the one that receives ingress traffic. In the real world, NAT is not simply a matter of using a single outside IP and translating traffic onto internal IPs and ports; that’s what your Linksys does.
The “inside” of a NAT configuration is not synonymous with “private” or RFC1918 addresses. The often-referred-to “non-routable” addresses are not unroutable. You may configure most any router to pass traffic for these private IP subnets. If you try and pass a packet to your ISP for any of these addresses, it will be dropped. This is what “non-routable” means: not routable on the Internet. You can and should mix RFC1918 addresses (for management interfaces) on your local internal network.
NAT is not used to simply share a single IP address. But when it is, in this strange configuration that’s really called PAT, issues can arise. Say two geeks want to throw up an IPIP tunnel between their networks so they can avoid all the issues of firewall rules and state-keeping. If they both use the same IP subnet, they can’t just join two networks together: they won’t be able to broadcast for each other, so they will never communicate, right? It would seem that one side or the other would have to renumber their entire subnet, but there is a trick. Using a semi-complicated NAT and DNS setup, the hosts could actually communicate. This is another case of blanket “NAT is evil” statements actually having little reflection on reality. This issue does come up frequently when two companies merge and various branch offices need to communicate.
So why in the world would someone want to use one external IP and map it to one internal IP, as opposed to just translating the port? Policy. It’s even likely that both sides will use real bona fide Internet IP addresses. Everyone understands that NAT (the naive definition) will keep track of state; it’s the only way to make translations happen. What they may not realize is that stateful filtering is a powerful security mechanism.
Stateful filtering means that the router will keep track of a TCP connection. Remember from our previous installment on TCP and its followup that a TCP connection consists of four parts: the remote and local IP address, and the connected ports. Stateful filters verify that every packet into the network is part of an already established, pre-verified connection.
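A toy sketch of that check is below, assuming the filter keys its state purely on the 4-tuple; real connection tracking also validates sequence numbers and TCP state transitions, and the addresses and ports here are made-up.

```python
# Toy stateful filter: inbound packets are accepted only if they match an
# already-established connection, identified by the TCP 4-tuple.
established = set()

def record_outbound(local_ip, local_port, remote_ip, remote_port):
    established.add((local_ip, local_port, remote_ip, remote_port))

def allow_inbound(remote_ip, remote_port, local_ip, local_port):
    return (local_ip, local_port, remote_ip, remote_port) in established

record_outbound("192.168.1.20", 51000, "203.0.113.80", 443)       # host opens a connection out
print(allow_inbound("203.0.113.80", 443, "192.168.1.20", 51000))  # True  -- reply is part of it
print(allow_inbound("203.0.113.80", 443, "192.168.1.20", 51001))  # False -- no matching state
```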
Imagine a b2b transaction that ships very sensitive data across the Internet, even between continents. It’s not feasible to lay fiber for this purpose, so the Internet has to be used. What to do? How would you secure this transaction, or set of transactions? It can be done with IPSEC, but also utilizing NAT at the same time. Each side will have a 1:1 (real) NAT router configured to only allow specific connections from specific hosts. This guarantees that from either network, only authorized hosts will be making a connection. This also guarantees that hosts on both sides have been minimally exposed, and very unlikely compromised, since nobody else can get into that network.
Once the session starts, packets are carefully inspected in and out of each NAT router. If something nefarious happens, and someone in-between is able to inject a forged packet into the stream, at least one side will notice. One of the NAT routers will be able to detect that a sequence number anomaly has occurred, and can immediately terminate all communication. When the TCP session completes with a FIN, the state is wiped clean.
In much the same way, home users take advantage of PAT to keep their less-than-secure machines from being completely taken over on a daily basis. When a connection attempt from the outside hits the external interface of a PAT device, it cannot be forwarded unless state already exists. State setup can only be done from the inside, when an egress attempt is made. If this version of NAT didn’t exist on such a wide scale, the Internet would be a completely different place. Nobody would ever successfully install and patch a Windows computer prior to a compromise without the minimal protection provided by PAT.
Clearly NAT is useful in these cases. So why do people say that NAT is evil? They are likely referring to PAT, the bastard child of NAT. It’s called “overloaded” for a reason.
IPv6 introduces the ability to have way more IP addresses than we really need. Does that mean that IPv6 will eliminate NAT? No. It also won’t eliminate the usage of NAT everyone’s familiar with: PAT. We all need somewhere to stow Windows boxes away from the myriad of uninitiated connection attempts that come from the Internet.
What is the purpose of DNAT (Destination-NAT, also called “Port-Forwarding”) and how does this process work?
Let’s look at a usage example. A lot of multiplayer video games (as an example, Counter Strike) allow you to run a game server on your computer that other people can connect to in order to play with you. Your computer doesn’t know all the people that want to play, so it can’t connect to them – instead, they have to send new connection requests to your computer from the internet.
If you didn’t have anything set up on the router, it would receive these connection requests but it wouldn’t know which computer inside the network had the game server, so it would just ignore them (or, more specifically, it would send back a packet indicating that it can’t connect). Luckily, you know the port number that will be on connection requests for the game server. So, on the router, you set a port forward with the port number that the game server expects (for example, 27015) and the IP address of the computer with the game server (for example, 192.168.1.105).
The router will know to forward the incoming connection requests to 192.168.1.105 inside the network, and computers outside will be able to connect in.
Another example would be a local network with two machines, where the second one with the IP 192.168.1.10 hosts a website using Apache. Therefore the router should forward incoming port 80 requests to this machine. Using port forwarding, both machines can run in the same network at the same time.
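A sketch of the forwarding decision the router makes is shown below, reusing the example ports and private addresses from above. An actual router also rewrites the packet headers and tracks the return traffic; the table here is purely illustrative.

```python
# Illustrative destination-NAT (port forwarding) table for the examples above.
port_forwards = {
    ("tcp", 27015): ("192.168.1.105", 27015),   # game server
    ("tcp", 80):    ("192.168.1.10", 80),       # Apache web server
}

def forward_incoming(protocol, dst_port):
    """Return the internal (ip, port) an incoming connection goes to, or None to drop it."""
    return port_forwards.get((protocol, dst_port))

print(forward_incoming("tcp", 27015))   # ('192.168.1.105', 27015)
print(forward_incoming("tcp", 22))      # None -- no rule, connection is dropped
```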
Video games are perhaps the most common place everyday users will encounter port forwarding, although most modern games use UPnP so that you don’t have to do this manually (instead, it’s fully automatic). You’ll need to do this whenever you want to be able to connect directly to something in your network though (rather than through some intermediary on the internet). This might include running your own web server or connecting via Remote Desktop Protocol to one of your computers.
A note on security
One of the nice things about NAT is that it provides some effort-free, built-in security. A lot of people wander the internet looking for machines that are vulnerable… and they do this by attempting to open connections with various ports. These are incoming connections, so, as discussed above, the router will drop them. This means that in a NAT configuration, only the router itself is vulnerable to attacks involving incoming connections. This is a good thing, because the router is much simpler (and thus less likely to be vulnerable) than a computer running a full operating system with a lot of software. You should keep in mind, then, that by DMZing a computer inside your network (setting it as the DMZ destination) you lose that layer of security for that computer: it is now completely open to incoming connections from the internet, so you need to secure it as if it was directly connected. Of course, any time you forward a port, the computer at the receiving end becomes vulnerable on that specific port. So make sure you run up-to-date software that is well configured.
What is DHCP and How Does DHCP Work? (DHCP Fundamentals Explained)
Computer networks can be of any form like a LAN, WAN etc. If you are connected to a local LAN or an internet connection, the IP addresses form the basis of communication over computer networks. An IP address is the identity of a host or a computer device while connected to any network.
In most of the cases when you connect your computer to a LAN or internet, you’ll notice that the IP address and other information like subnet mask etc are assigned to your computer automatically. Have you ever thought about how this happens? Well, in this article we will understand the concept of DHCP that forms the basis of this functionality.
What is DHCP?
DHCP stands for Dynamic Host Configuration Protocol.
As the name suggests, DHCP is used to control the network configuration of a host through a remote server. DHCP functionality comes installed as a default feature in most of the contemporary operating systems. DHCP is an excellent alternative to the time-consuming manual configuration of network settings on a host or a network device.
DHCP works on a client-server model. Being a protocol, it has its own set of messages that are exchanged between client and server. Here is the header information of DHCP:
| Field | Length (bytes) | Description |
|---|---|---|
| op | 1 | Type of message |
| htype | 1 | Type of hardware address |
| hlen | 1 | Length of hardware address |
| hops | 1 | Used in case of relay agents; clients set it to 0 |
| xid | 4 | Transaction ID used by the client and server for a session |
| secs | 2 | Time elapsed (in seconds) since the client requested the process |
| ciaddr | 4 | Client IP address |
| yiaddr | 4 | The IP address assigned by the server to the client |
| siaddr | 4 | Server IP address |
| giaddr | 4 | IP address of the relay agent |
| chaddr | 16 | Hardware address of the client |
| sname | 64 | Host name of the server |
| file | 128 | Boot file name |
Understanding DHCP helps in debugging many network related problems. Read our articles on wireshark and Journey of a packet on network to enhance your understanding on network and network debugging tools.
In the next section, we will cover the working of this protocol.
How DHCP Works?
Before learning the process through which DHCP achieves its goal, we first have to understand the different messages that are used in the process.
DHCPDISCOVER: It is a DHCP message that marks the beginning of a DHCP interaction between client and server. This message is sent by a client (host or device connected to a network) that is connected to a local subnet. It’s a broadcast message that uses 255.255.255.255 as destination IP address while the source IP address is 0.0.0.0
DHCPOFFER: It is a DHCP message that is sent in response to DHCPDISCOVER by a DHCP server to a DHCP client. This message contains the network configuration settings for the client that sent the DHCPDISCOVER message.
DHCPREQUEST: This DHCP message is sent in response to DHCPOFFER indicating that the client has accepted the network configuration sent in the DHCPOFFER message from the server.
DHCPACK: This message is sent by the DHCP server in response to the DHCPREQUEST received from the client. This message marks the end of the process that started with DHCPDISCOVER. The DHCPACK message is nothing but an acknowledgement by the DHCP server that authorizes the DHCP client to start using the network configuration it received from the DHCP server earlier.
DHCPNAK: This message is the exact opposite of DHCPACK described above. This message is sent by the DHCP server when it is not able to satisfy the DHCPREQUEST message from the client.
DHCPDECLINE: This message is sent from the DHCP client to the server in case the client finds that the IP address assigned by the DHCP server is already in use.
DHCPINFORM: This message is sent from the DHCP client in case the IP address is statically configured on the client and only other network settings or configurations are desired to be dynamically acquired from the DHCP server.
DHCPRELEASE: This message is sent by the DHCP client in case it wants to terminate the lease of the network address it has been provided by the DHCP server.
Now that we know about the various DHCP messages, it’s time to go through the complete DHCP process to give a better idea of how DHCP works. Note that the steps mentioned below assume that DHCP functionality is enabled by default on the client side.
Here are the steps :
- Step 1: When the client computer (or device) boots up or is connected to a network, a DHCPDISCOVER message is sent from the client to the server. As there is no network configuration information on the client, the message is sent with 0.0.0.0 as source address and 255.255.255.255 as destination address. If the DHCP server is on the local subnet then it directly receives the message, or if it is on a different subnet then a relay agent connected to the client’s subnet is used to pass on the request to the DHCP server. The transport protocol used for this message is UDP and the port number used is 67. The client enters the initializing stage during this step.
- Step 2: When the DHCP server receives the DHCPDISCOVER request message it replies with a DHCPOFFER message. As already explained, this message contains all the network configuration settings required by the client. For example, the yiaddr field of the message will contain the IP address to be assigned to the client. Similarly the subnet mask and gateway information is filled in the options field. Also, the server fills in the client MAC address in the chaddr field. This message is sent as a broadcast (255.255.255.255) message for the client to receive it directly, or if the DHCP server is in a different subnet then this message is sent to the relay agent, which takes care of whether the message is to be passed on as unicast or broadcast. In this case also, UDP is used at the transport layer with destination port 68. The client enters the selecting stage during this step.
- Step 3: The client forms a DHCPREQUEST message in reply to DHCPOFFER message and sends it to the server indicating it wants to accept the network configuration sent in the DHCPOFFER message. If there were multiple DHCP servers that received DHCPDISCOVER then client could receive multiple DHCPOFFER messages. But, the client replies to only one of the messages by populating the server identification field with the IP address of a particular DHCP server. All the messages from other DHCP servers are implicitly declined. The DHCPREQUEST message will still contain the source address as 0.0.0.0 as the client is still not allowed to use the IP address passed to it through DHCPOFFER message. The client enters requesting stage during this step.
- Step 4: Once the server receives DHCPREQUEST from the client, it sends the DHCPACK message indicating that now the client is allowed to use the IP address assigned to it. The client enters the bound state during this step.
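The four steps above can be condensed into a small, simplified simulation. The address pool, MAC address and option values below are made-up, and a real server speaks the full packet format over UDP ports 67/68; this is only a sketch of the message flow.

```python
# Simplified DORA (Discover, Offer, Request, Ack) exchange. Illustrative only.
class DhcpServer:
    def __init__(self, pool, lease_time=8 * 3600):
        self.pool = list(pool)          # addresses available to hand out
        self.lease_time = lease_time
        self.leases = {}                # client MAC -> offered/assigned IP

    def handle_discover(self, mac, xid):
        ip = self.leases.get(mac) or self.pool.pop(0)
        self.leases[mac] = ip
        return {"type": "DHCPOFFER", "xid": xid, "yiaddr": ip,
                "subnet_mask": "255.255.255.0", "router": "192.168.1.1"}

    def handle_request(self, mac, xid, requested_ip):
        if self.leases.get(mac) == requested_ip:
            return {"type": "DHCPACK", "xid": xid, "yiaddr": requested_ip,
                    "lease_time": self.lease_time}
        return {"type": "DHCPNAK", "xid": xid}

server = DhcpServer(pool=["192.168.1.100", "192.168.1.101"])
offer = server.handle_discover(mac="aa:bb:cc:dd:ee:ff", xid=0x1234)        # steps 1-2
ack = server.handle_request("aa:bb:cc:dd:ee:ff", 0x1234, offer["yiaddr"])  # steps 3-4
print(offer["yiaddr"], ack["type"], ack["lease_time"])                     # 192.168.1.100 DHCPACK 28800
```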
The Concept of Lease
With all the necessary information on how DHCP works, one should also know that the IP address assigned by DHCP server to DHCP client is on a lease. After the lease expires the DHCP server is free to assign the same IP address to any other host or device requesting for the same. For example, keeping lease time 8-10 hours is helpful in case of PC’s that are shut down at the end of the day. So, lease has to be renewed from time to time. The DHCP client tries to renew the lease after half of the lease time has expired. This is done by the exchange of DHCPREQUEST and DHCPACK messages. While doing all this, the client enters the renewing stage.
Different Measures of the Exchange Rate
The exchange rate is simply the price of one currency in terms of other currencies. For instance if I can exchange £1.00 for $2.00, the exchange rate is 2 dollars to the pound (or 50 pence to the dollar).
There are a number of different measures of the exchange rate:
The spot market rate – This is the exchange rate that can be obtained at the present time, if you want to exchange say pounds for euros today.
The forward market rate – This is the rate that currency traders (mainly banks) are prepared to buy or sell a currency at some future date. The forward market is used by firms importing or exporting goods. It is a way of hedging (reducing risk) against an unexpected change in the exchange rate.
The nominal and real exchange rates – The nominal rate means no adjustment has been made for changes in relative inflation rates between trading partners. Suppose for instance that over the last year the U.K. has experienced 10% inflation, but in the eurozone it has only been 4%. This will be reflected in UK export prices rising relative to the prices of its imports from Europe. If the market exchange rate between the pound and the euro has not changed over the year, the real exchange rate will have risen by 6%, but the nominal exchange rate will be unchanged. A rise in the real exchange rate means that the terms of trade have improved.
The trade weighted exchange rate – This measure of the exchange rate (which is also known as the effective exchange rate), measures how the exchange rate is changing against other currencies in general, rather than a single currency. It takes into account not only the changes of a currency’s exchange rates, but also the proportion of trade with each country. It uses an index number to measure changes in the effective exchange rate.
Suppose country X has just two trading partners, Y and Z. 60% of its trade is with Y and 40% with Z. At the start of the year the trade weighted index of its currency is 100. At the end of the year its currency has depreciated by 10% against the currency of country Y and by 20% against the currency of Z.
The fall against Y’s currency contributes a 6% fall to the trade weighted value of X’s currency (60% x 10%) and the fall against Z’s currency contributes 8% (40% x 20%).
The overall (or average) fall in X’s exchange rate is therefore 14% (6% + 8%). Notice that this is less than the 16% figure we would obtain by simply adding together the non-trade weighted falls in the exchange rate.
At the end of the year X’s trade weighted exchange rate index is now 86.
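The same calculation expressed as a few lines of code, using the weights and currency movements from the example:

```python
# Trade-weighted (effective) exchange rate change for country X.
weights = {"Y": 0.60, "Z": 0.40}          # share of X's trade with each partner
changes = {"Y": -0.10, "Z": -0.20}        # change in X's currency against each partner

weighted_change = sum(weights[c] * changes[c] for c in weights)   # about -0.14, a 14% fall
new_index = 100 * (1 + weighted_change)                            # about 86
print(round(weighted_change, 4), round(new_index, 2))
```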
Floating Exchange Rates
Under a floating rate system, a currency’s exchange rate is simply determined by the free market forces of demand and supply, without any intervention by the government or its central bank. To understand what determines the equilibrium exchange rate, we need to look at the factors that create a demand or supply of a currency.
Demand for a currency is created by:
Exports of goods and services – This causes foreign buyers to sell their own currency in exchange for the exporter’s currency to pay for the goods. For instance when Jaguar sells cars in Europe it wants to be paid in £s, so it can pay its workers and suppliers.
Long term capital inflows – Foreign direct investment into a country, such as Toyota building a car plant in the U.K. creates a demand for £’s as the Japanese company has to buy land and pay for the building.
Short term capital inflows – currency speculators may buy a currency if they think it is likely to appreciate in value or if they believe interest rates in the country are likely to rise.
Supply of a currency is created by:
Imports of goods and services
Long term capital outflows
Short term capital outflows
Like any other demand and supply curves, the demand curve is downward sloping and the supply curve upward sloping. A fall in the exchange rate of the pound, for instance would make Britain’s exports more competitive, so other countries would want to buy more £s in order to buy more British goods. A cheaper pound would also make the U.K. more attractive for foreign investment as assets could be purchased more cheaply.
An increase in the value of the pound, by contrast would encourage more selling of sterling, as imports into Britain would be cheaper. It would also lead to a greater outflow of capital as U.K firms will be able to afford to buy more assets abroad.
The conditions of demand/supply could change as a result of the following:
- Changes in the value of exports – An increase in a country’s exports will cause an increase in demand for its currency. A decrease in exports will cause a decrease in demand for the currency.
- Changes in the value of imports – An increase in imports will result in an increase in the supply of a country’s currency and vice versa.
- Changes in long term capital flows – A capital outflow (caused perhaps by a loss of confidence in the economy or better opportunities to invest overseas) will cause an increase in the supply of a currency and vice versa.
- Changes in interest rates – A rise in interest rates will attract an inflow of short term capital, resulting in an increase in demand for the currency. It will also lead to a decrease in its supply as holders of the currency will be less willing to sell.
- Speculative activity – If currency speculators believe a currency will appreciate in value (perhaps because of favourable economic news) this will result in an increase in its demand and a decrease in supply and vice versa.
An increase in demand is shown by a rightward shift of the demand curve (and vice versa). An increase in supply is shown by a rightward shift of the supply curve (and vice versa).
An increase in demand or decrease in supply will push the exchange rate up.
A decrease in demand or increase in supply will push the exchange rate down.
Possible changes in the demand and supply of a currency, and the effect on the equilibrium exchange rate are shown in Fig 2 below:
Suppose the exchange rate is initially in equilibrium at point A, where demand curve D1 intersects supply curve S1. An increase in demand for the currency would move the equilibrium to B, causing the exchange rate to rise. A decrease in demand to D2 would shift the equilibrium to C, with a fall in the exchange rate.
An increase or decrease in supply would shift the equilibrium to D or J respectively, and a fall or rise in the exchange rate.
If demand increased and supply decreased, the equilibrium would move to H. If demand decreased and supply increased, the equilibrium would be at F.
The Purchasing Power Parity Theory
This theory argues that in the long run exchange rates will move to a level so that a unit of any currency will have the same purchasing power wherever it is spent (ie at home or abroad).
Suppose a Big Mac costs £3.00 in the UK and $6.00 in the USA. Provided these prices are typical of all prices (ie dollar prices are twice sterling prices), the theory suggests that the exchange rate will be £1 to $2.
Now suppose that over the next year there is rapid inflation of 50% in the UK, but prices are stable in the USA. The sterling price of the BigMac rises to £4.50, but the American price stays at $6.00.
The pound is now overvalued by 50%. An American tourist buying a burger in London would have to buy $9 worth of pounds to pay for the burger. British goods in general would now be more expensive (and less competitive) than American ones. This will cause an increase in imports from the USA and a fall in British exports. The resulting increased buying of dollars and selling of pounds will drive down the pound until it reaches its purchasing power parity rate of exchange, which is now £1.00 = $1.33 (6/4.5).
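A quick check of the arithmetic, with the implied purchasing power parity rate computed as the ratio of the two price levels:

```python
# PPP exchange rate implied by the Big Mac example: dollars per pound.
uk_price_start, us_price = 3.00, 6.00
ppp_rate_start = us_price / uk_price_start   # 2.00 dollars per pound

uk_price_after = 4.50                        # after 50% UK inflation, US prices stable
ppp_rate_after = us_price / uk_price_after   # about 1.33 dollars per pound
print(round(ppp_rate_start, 2), round(ppp_rate_after, 2))
```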
Put another way, the PPP theory argues that the exchange rate will move towards a level where the current account is in balance.
There is plenty of evidence that supports the view that changes in domestic price levels are a very important determinant of exchange rates in the long run. Countries with persistently high inflation do experience falls in their exchange rates, whilst countries like Switzerland have seen a steady rise in the value of their currency over many decades; the consequence of very low inflation over that period of time.
The theory is less good at predicting short run changes in the exchange rate, because there are so many factors affecting capital flows between countries, and these are very important in determining exchange rate fluctuations on a daily or monthly basis.
The real strength of the theory is that it shows that a government will find it very difficult to maintain an exchange rate for long if it is seriously over or undervalued.
Fixed Exchange Rates
A country can try to maintain a fixed exchange rate against one or more other currencies. In the 19th and early 20th centuries, the major economies in the industrialised world belonged to the Gold Standard. Each currency was freely convertible into gold at a fixed price. So if the £ was worth four times as much gold as the dollar, the exchange rate of the £ to the $ would be 1:4. Under the rules of the Gold Standard a country would increase or reduce the amount of its domestic currency in circulation according to the amount of gold held by the central bank.
The Gold Standard was abandoned in the 1930s as the Great Depression resulted in countries trying to boost their economies by devaluing their currencies.
There is no longer a global attempt to operate a fixed rate system, but many smaller countries try to fix their exchange rate against one of the major currencies, usually the American dollar or the euro.
Some of the smaller countries in South America for instance, fix their exchange rate against the dollar. Holders of the currency can freely exchange them for dollars, so the central bank has to restrict the issue of its own currency in line with its reserves of dollars.
A country’s central bank will have to intervene in the foreign exchange market to maintain the fixed exchange rate. If there was downward pressure on the exchange rate (caused for example by a current account deficit), the central bank could sell some of its reserves of foreign currencies to buy its own currency, thereby increasing demand for it and pushing the exchange rate back to the fixed rate. Alternatively it could raise interest rates to attract an inflow of short term capital. If the exchange rate faces upward pressure, the central bank has to sell its own currency in exchange for foreign currency, or lower interest rates.
Managed Exchange Rates
Most countries adopt a hybrid exchange rate policy, called a managed exchange rate. This means the exchange rate is neither rigidly fixed or allowed to float freely. There are several approaches to managed exchange rates:
Adjustable peg system – This system was used between the end of World War 2 and the early 1970s. It was introduced alongside the establishment of the International Monetary Fund (IMF), the World Bank and GATT (which later became the WTO). It became known as the Bretton Woods system (after the place where the agreement was signed) and aimed to restore trade and growth amongst countries by providing stable exchange rates rather than the competitive devaluations of the 1930s.
Each country had a central fixed exchange rate, but with a small band of flexibility, allowing the rate to fall slightly below, or rise slightly above the central rate. If the rate threatened to go above the ‘ceiling’ or below the ‘floor’, the central bank would have to intervene by lowering/raising interest rates or selling/buying foreign currency.
In exceptional circumstances countries were allowed to devalue or revalue (for instance if a country had a large and persistent current account deficit that it could not finance), but only with the agreement of other countries in the system.
The system was abandoned after 1972 when the U.S.A. (by far the biggest economy in the world) let the dollar float because of the country’s vast current account deficit.
Crawling peg system – This is similar to the adjustable peg, but allows for more frequent changes in the fixed rates. It is therefore closer to a floating rate system than the adjustable peg.
Dirty or managed float – Under this approach a government is not committed to maintaining any particular exchange rate, but will intervene to keep the rate at a level it thinks is appropriate for the economy, and to prevent sharp rises or falls. This is currently the approach adopted by most of the major economies in the world, including the eurozone, the USA and the UK.
As well as buying/selling foreign currencies and raising/lowering interest rates, a government has other options to control its exchange rate in a managed system:
International borrowing by the government – This usually means going to the IMF. This is a drastic measure, as the IMF usually imposes conditions on its loans, such as reducing government spending. The loans of foreign currency can be used to buy its own currency to maintain the exchange rate.
Exchange controls – The central bank can restrict access to foreign currency by firms wanting to import and therefore keep the exchange rate above its free market level. (see notes on 4.1.6)
Devaluation/Revaluation – Under an adjustable peg a currency may be devalued (ie fixed at a lower rate) or revalued (at a higher rate). This usually happens if the central bank can no longer keep the currency at its existing exchange rate. Speculators can identify if a currency is fundamentally over or under valued and massive buying or selling of the currency will overwhelm the central bank’s ability to maintain the existing rate by buying or selling currency from the country’s official reserves.
A devaluation or revaluation means declaring a new fixed rate the government will defend. It is likely to be accompanied by a fall (for devaluation) or rise (for revaluation) in interest rates.
Devaluation/revaluation must be distinguished from a currency depreciation/appreciation.
The latter refers to changes in the exchange rate that occur naturally as a result of market forces; an increase in the buying or selling of a currency by traders will drive its value up or down.
Pros and Cons of Different Exchange Rate Systems
There is no easy answer as to which is the best system, which is probably why most countries adopt a managed, rather than fixed or floating rate. There are three main issues to consider with regard to exchange rate policy:
Robustness – Because of its flexibility, a floating rate system can cope better with big trade imbalances; deficit countries will see their currencies depreciate against surplus countries. A rigidly fixed exchange rate system is likely to break up if a country has a persistent deficit and not enough foreign currency reserves.
Trade and investment flows – A fixed exchange rate system produces less uncertainty and risk for international trade and investment. This is a major reason for the creation of the euro, which eliminates all exchange rate risks between member countries. Fluctuating exchange rates can make it hard to predict the prices and profits firms will earn from exporting. Risks can be reduced by buying currency on the forward market, but usually only for up to 6 months ahead. Long term contracts priced in a given currency are therefore still risky.
Economic costs of correcting current account imbalances – Under a floating rate system current account imbalances are corrected by exchange rate changes. This means that a deficit country does not have to reduce domestic demand to reduce imports. Under a fixed rate system this is the only option, and can result in rising unemployment and falling output.
On the other hand, a fixed exchange rate system is likely to result in greater financial discipline and lower inflation, because countries cannot let their currencies depreciate if they are losing competitiveness because of higher domestic inflation than in competitor countries.
- Identify and explain the factors that can cause an exchange rate to rise or fall.
- Your answer should include: exports / imports / short term capital / long term capital / interest rates / speculative activity / speculation
Ethnic Conflict and Violence in Sri Lanka - Professor Virginia A. Leary
Sri Lanka (formerly known as Ceylon) is a large island (65,609 square kilometers) situated 29 miles off the southern tip of India. The ethnic composition of its population of 14 million is 72% Sinhalese, 20.5% Tamil, (Ceylon and Indian), 7% Moors (Muslims), 0.5% Burghers (descendents of the Dutch and Portuguese) and others.
The official language of Sri Lanka is Sinhala with Sinhala and Tamil having equal status as national languages. English is widely spoken. The official religion is Buddhism, the religion of the majority of the Sinhalese population. The Tamils are predominantly Hindu, although a substantial minority of the Tamil speaking population are Muslims and Christians.
Sri Lanka obtained its independence from Great Britain in 1948. Prior to British occupation in 1796 it had been colonized by the Portuguese and Dutch. It is a unitary democratic republic with a mixed presidential--parliamentary political system. Universal adult franchise was introduced in Sri Lanka as early as 1931 and since independence the country has held elections every six or seven years.
Two major political parties, the United National Party (UNP) and the Sri Lankan Freedom Party (SLFP) have governed the country alternately since independence. Both these parties are predominantly Sinhalese. The SLFP, in coalition during part of the time with two Marxist parties, was in power from 1970 to 1977. The country was governed under a state of emergency for much of this period, following a major insurrection in 1971. Many civil liberties were severely curtailed. The coalition government adopted a policy of land redistribution and economic self-sufficiency. The defeat of the SLFP in 1977 has been widely attributed to economic difficulties, inefficiency and corruption.
The UNP came into power in 1977 with a strong market economy orientation, an open door for foreign investment and imports and strong encouragement of tourism. It also pledged to restore civil liberties neglected by the preceding regime. J.R. Jayewardene, the UNP leader, is President of the country. At the present writing, the UNP has more than a 2/3 parliamentary majority. The Tamil United Liberation Front (TULF) has become the opposition party in parliament with only 17 seats. The UNP has 139 seats. The next election is scheduled for 1983.
Although Sri Lanka has a very low per capita income it has a high level of literacy and education, low infant mortality rates and relatively high average life expectancy. Like many other developing countries, however, it has major problems of poverty and unemployment. Many skilled laborers and professional persons have emigrated. Its economy has traditionally been agricultural with an emphasis on tea, rice, rubber and coconuts. An attractive country with beautiful beaches, diversified scenery, ruins of ancient cities, and friendly people, it has become a popular tourist area for Europeans.
On August 17, 1981, the government of Sri Lanka declared a state of emergency in order to control an outbreak of violence directed against the minority Tamil community. This state of emergency was the second declared within three months; the communal violence was the third major attack against Tamils since independence in 1948 and the second since the election of the present government in 1977. The August violence followed several months of increasing tension between the two major ethnic groups. In April 1981, a number of Tamil youths were apprehended and detained by security forces under the Prevention of Terrorism Act. At least 27 youths were held incommunicado without access to lawyers or family members. The arrests followed a bank robbery in which two policemen were killed, attributed to an extremist group called the Tamil Liberation Tigers.
In early June, local elections for District Development Council members took place throughout the country. In the north of the country in the overwhelmingly Tamil area the elections were held during a state of emergency and in an atmosphere of violence. During the campaign, a candidate and two police officers were killed. Police and security forces, apparently in reaction to the killing of the policemen, went on a rampage in the Tamil City of Jaffna burning the market area, the home of a member of Parliament, the TULF party headquarters and the Public Library containing 95,000 volumes.
In July, a police station in Anaicottai in the Tamil area was attacked, two policemen were killed and many firearms stolen. The attack was again attributed by government sources to a terrorist group of Tamil youths. Also in July, in an unusual, nearly unheard of, parliamentary procedure, members of the UNP, the parliamentary majority party (composed predominantly of Sinhalese), approved a motion of no confidence in the leader of the opposition political party, Mr. A. Amirthalingam of the Tamil United Liberation Front. The vote was preceded by verbal attacks by majority party members against the Tamil leader for criticizing the government abroad for its handling of the Tamil question.
At the end of July, the Court of Appeal in Colombo in a widely publicized and emotionally charged proceeding, began hearing petitions for writs of habeas corpus for four of the 27 Tamil youth detained incommunicado by the Army since April under the Prevention of Terrorism Act. The immediate occasion for undertaking the International Commission of Jurists' mission was the continued incommunicado detention of Tamil youths, but the events described above formed the backdrop of ICJ concern about the state of human rights in Sri Lanka and are intimately related to the application of the Terrorism Act.
In July, 1981, the ICJ requested the author of the present report, then on a private visit in Sri Lanka, to undertake a study of the human rights aspects of the Terrorism Act and events related to its adoption and application. The ICJ observer was in Sri Lanka from July 12 to August 2 and from August 18 to 23, a total period of four weeks, and was thus present during the attack on the police station in Anaicottai, the vote of no-confidence against the opposition Tamil leader, the habeas corpus hearing in the Court of Appeal and the period of the state of emergency immediately following the communal violence.
During the mission in Sri Lanka, the undersigned interviewed government officials, opposition party members, lawyers, professors, sociologists, trade union officials, journalists, and members of human rights organizations. The Ministries of Justice and Foreign Affairs were informed of the ICJ study and were helpful in making known the government's point of view concerning the current crisis in Sri Lanka. The observer attended several sessions of the habeas corpus proceedings, met the President of the Court of Appeal, the attorneys for the petitioners and the Deputy Solicitor General representing the government. She also interviewed families of detainees held under the Terrorism Act, visited areas of Jaffna which had been burned in June and interviewed residents of Jaffna.
Sri Lanka is a country in which citizens--even those in opposition to the government--appeared to feel free to express their opinions. Individuals interviewed did not request anonymity or lack of attribution and government officials were uniformly courteous. In addition to information obtained through interviews, the observer was also able to obtain extensive written material on historical aspects of the racial situation,1 recent racial incidents and the situation of human rights in general. There appears to be no systematic censorship of the press or the mails; however, all of the major English language newspapers except one and radio and television are government controlled.2 The privately owned English language newspaper carried in full the Amnesty International report concerning the recent incommunicado detention of Tamil youths.
A number of allegations were heard, however, of selective reporting which exacerbated racial tension, including by the privately owned English language paper. Concern was also expressed over a proposed bill which would provide that every newspaper make a deposit with the Insurance Corporation in order to meet any claim for damages that might result from being found guilty in a libel action, the amount of deposit to he determined by the Cabinet. Concern was expressed that the deposit might he so large as to put small newspapers with limited financial resources out of circulation.3
Other recent human rights issues in Sri Lanka which have caused concern are the deprivation by Parliament in 1980 of the civic rights for a period of seven years of Mrs. Sirimavo Bandaranaike, leader of the Sri Lankan Freedom Party and twice Prime Minister of Sri Lanka, and the adoption of the Essential Public Services Act in 1979 which enables the government to declare any of a wide variety of services as essential services, thereby outlawing strikes or temporary cessation of work in such services.
While the International Commission of Jurists is concerned about these issues, the present report is limited to the most serious human rights problem in Sri Lanka at the moment, namely, the racial problem, violence resulting from racial conflict and the draconian provisions of the Terrorism Act as a means of coping with the violence. The report is based on observations and interviews of the undersigned while in Sri Lanka, on written material obtained in Sri Lanka and on press and other reports of events occurring since the visit of the undersigned.
The present racial tension between the Sinhalese and Tamil populations in Sri Lanka has deep historical roots, dating back to the first century A.D. It is claimed that the Sinhala race was founded in Sri Lanka in the fifth century B.C. by an exiled prince from northern India and that the Sinhalese are of Aryan origin. The Tamils are Dravidians and came from southern India. There are two separate Tamil communities in Sri Lanka: the "Jaffna" or "Ceylon Tamils" and the "Indian" or "Estate Tamils". They are both of the same ethnic origin and speak the same language.
The "Ceylon Tamils" came at a date disputed by historians, but there were Tamil incursions from South India at least by the first century, A.D. Major Tamil invasions took place from 700 A.D. to 1300 A.D. culminating in the establishment of a Tamil kingdom in the North. Buddhist historical chronicles report frequent wars between Sinhalese and Tamil kings.
At the time of the Portuguese conquest in 1621 an independent Tamil kingdom existed in the North.4 The "Indian Tamils" were brought to Ceylon as indentured laborers by the British to work on the tea and rubber plantations in the 19th and early 20th centuries. At present, Ceylon Tamils constitute 11% of the population of Sri Lanka and Indian Tamils 9%. The two Tamil communities have remained largely separate with the Ceylon Tamils concentrated in the northern part of the Island, particularly in the area known as the Jaffna peninsula.
A substantial number of Ceylon Tamils, however, are resident in Colombo and some southern areas. The Indian Tamils are primarily resident in the hill country in the central part of Sri Lanka. The Ceylon Tamils are, in general, a prosperous and well educated group; the Indian Tamils live and work in conditions of misery and poverty.
At independence in 1948 the Indian Tamils were deprived of citizenship and disenfranchised. Under an agreement with India in 1964, Sri Lanka agreed to repatriate 60% of the Indian Tamils and to grant citizenship to the remaining 40%. The agreement has been only partially carried out. The ethnic conflict, until recently, has been largely between the Ceylon Tamils and the Sinhalese. In August 1981, however, and to an extent in 1977 the Indian Tamils were attacked when communal violence broke out.
The Sinhalese population of Sri Lanka has historically considered the Tamils as invaders, infringing on Sinhalese territory. Sinhalese myths and legends often refer to the triumph of Sinhalese kings over rival Tamil rulers. One scholar has written,
The identification of the Buddhist religion with Sinhalese nationalism is also an important element in understanding the roots of ethnic conflict in Sri Lanka. Sri Lanka is regarded as one of the major world centers of Buddhism. It is widely believed that Buddha himself consecrated Sri Lanka; a relic of the Buddha's tooth is enshrined in Kandy in central Sri Lanka.
Buddhist temples abound. The Sinhalese population is overwhelmingly Buddhist. The Tamil speaking population is predominantly Hindu although there is a substantial minority of Muslims and Christians. The Constitution provides that the Republic of Sri Lanka "shall give to Buddhism the foremost place" and that it is the duty of the state to protect and foster the Buddhist faith. Freedom of religion is guaranteed in the Constitution but other religions are not mentioned.
It is frequently pointed out that, although a majority group within Sri Lanka, the Sinhalese have a minority complex since they are a minority ethnic group within Asia. Tamils in Asia outnumber the Sinhalese by five to one. There are more than 50,000,000 Tamils in South Asia, primarily in the South of India only a few miles across the sea from Sri Lanka. This insecurity of the Sinhalese may have contributed to the racial tension in the Island.
To combat the advantages of Tamils the Sinhalese majority population after independence adopted two policies that have been the source of much of the subsequent discontent of the Tamil population: a "Sinhala only" language policy and a quota system on the basis of race, referred to as "standardization" for entrance to university faculties. In the eyes of the Sinhalese, these were "affirmative action" provisions designed to compensate for the former disadvantage of Sinhalese. In the eyes of the Tamils, they were discriminatory provisions adopted by the majority population which placed their language in an inferior position, required them to learn the majority language and blocked their access to education which constituted their most important route to economic advancement. It also became more difficult for Tamils to enter government service, apparently because of the adoption of Sinhala as the official language.
The first Constitution of Ceylon was drafted by an Englishman, Lord Soulbury and adopted by an Order in Council rather than by a constitutive assembly. It remained in force until 1972. Section 29 of the Soulbury Constitution protected the rights of minorities. It read
Despite this constitutional provision the Official Languages Act was adopted in 1956 providing that "Sinhala only" should be the official language, the Indian Tamil plantation workers were deprived of citizenship and disenfranchised, and a quota and standardization system was adopted which drastically curtailed the access of Tamils to higher education.
At the time of the adoption of the "Sinhala only" Act a proposal to include a clause on the use of Tamil was dropped because of pressure from extremist Buddhist groups. The threat of the Tamils to engage in island-wide peaceful protest in 1956 resulted in a compromise between the government and the leader of the Tamils called the Bandaranaike-Chelvanayakam Pact. It made provisions for the use of Tamil in the Tamil areas and provided for regional councils with powers in agriculture, education, and in colonization schemes6 and included a promise by the government to reconsider the disenfranchisement of the Indian Tamils. Certain elements of the Buddhist population reacted strongly against the Pact and it became a dead letter. In 1958 the first major outbreak of communal violence occurred with deaths in the hundreds, particularly among Tamils.
In the 1950s and 1960s there was increasing dissatisfaction with the foreign drafted constitution. This dissatisfaction, culminated in a demand for a new Constitution following an obiter dictum in a 1966 decision of the Judicial Committee of the Privy Council in London that Section 29 was an entrenched provision of the Constitution. During this same period, the Tamil Federal Party became predominant in the Tamil community. It urged that Ceylon change from a unitary state to a federal structure. The proposal was strongly rejected by the Sinhalese majority who considered it a divisive proposal.
In 1970, the SLFP, strong advocates of Sinhala-Buddhist predominance, came into power in coalition with two Marxist parties. In 1972 legal links with the United Kingdom were severed with the adoption of a new Constitution by a Constituent Assembly (composed of the sitting Parliament) acting outside the framework of the Soulbury Constitution. The Constitution set up Sri Lanka as a republic, continuing the parliamentary system of government. The Tamil Party boycotted the Constituent Assembly because it had rejected a proposal that both Sinhala and Tamil be declared official languages.
The Tamils had previously accepted Sinhala as the official language, but only on the basis that Section 29 of the Soulbury Constitution protected certain of their rights. Section 29 was now dropped from the new Constitution and the "Sinhala only" policy, which had previously been of statutory origin was now enshrined as a constitutional provision. The UNP had voted against the adoption of the 1972 Constitution and on coming to power in 1977 drafted the third Constitution which remains in force today.7 It provided for a modified Presidential-parliamentary system similar to the French system of government.
During the tenure of the SLFP from 1970 to 1977 the negative effects of the standardization and quota system of education on the Tamils became increasingly evident resulting in tension in the Tamil community.8 It also became increasingly difficult for Tamils to obtain government employment. The disaffection of the Tamil youth over these policies can only be understood in the light of their traditional emphasis on education and government service. The most common complaints of the Tamils relate to discrimination in education and employment.
Beginning in the 1970s the Tamils increasingly supported the concept of a separate state of Tamil Eelam comprising much of the northern and eastern area of Sri Lanka. In 1976 the Tamil United Liberation Front (TULF) which had replaced the Federal Party as the dominant Tamil political party, declared itself in favor of a separate state of Tamil Eelam. In the 1977 elections the TULF received a strong majority in the North and a simple majority in the East, signifying the support of the Tamil population of these areas for the concept of separation.
The Tamil demand for a separate state is predicated on the conviction that as an identifiable people with a defined territory, they are entitled to self-determination under international law. They claim that the sovereignty of the Tamil nation which existed in 1621 at the time of the Portuguese conquest reverted to the Tamil community when the legal ties with Great Britain were severed in 1972 and that they are thus asking for restoration of sovereignty.
Until 1833 the successive colonial powers administered the Tamil territory separately from the rest of the country. In that year, the British, for administrative purposes, began administering the island as a common unit. The Tamils maintain that sovereignty passed from the Tamil kingdom to the Portuguese, Dutch and British and that sovereignty continued to reside in the British crown until 1972 when legal ties with Britain were broken.
The Tamils maintain that, in view of the boycott by their members of the Constituent Assembly which drafted the 1972 Constitution, they have never given up their sovereignty and the Sinhala nation has not obtained sovereignty over them either by conquest or consent.
A resolution adopted by the TULF at their first national conference in 1976 was the first clear commitment of a Tamil party to a separate state of Eelam. It listed a variety of actions taken by the Sinhalese majority to the detriment of the Tamils including
The resolution also referred to the failure of the efforts of various Tamil political parties to win rights through negotiations with successive governments or through entering into pacts with successive Prime Ministers. The resolution ended with the statement that
The TULF represents primarily the Ceylon Tamils resident in the northern and eastern provinces.
The Indian Tamils are not members of the TULF. They are represented in Parliament by the Ceylon Workers Congress, their labor union and political party. Mr. S. Thondaman, CWC member of Parliament, is the Minister of Rural Industrial Development in the present government. The Indian Tamils, through the Ceylon Workers Congress, have not supported the demand of the Ceylon Tamils for a separate state.
The TULF leaders have said, however, that their proposed state of Eelam would welcome any Indian Tamils who wish to live there. The TULF manifesto of 1976 states that
The differences in education and economic development between the Indian Tamils and the Ceylon Tamils are great, and, except for a shared sense of insecurity and discrimination on the basis of their common ethnicity, the two communities have little in common.
In addition to the Ceylon Tamils resident in the North and East of Sri Lanka, there are a substantial number of Ceylon Tamils resident in Colombo and in other central and southern areas which are predominantly Sinhalese. These Tamils appear integrated into the social and business life of their communities. Since they do not constitute the main supporters for the TULF, it is unclear whether they support a separate state of Eelam.
In view of their integration into communities outside the area claimed for the state of Eelam it is unlikely that they feel directly involved in the demand for independence.
If communal violence against Tamils throughout the Island continues, however, this may change. A distinction should be drawn between the attitude of older Tamils who were educated in English, together with their Sinhalese contemporaries, and the younger group of Tamils who have been educated in the Tamil language schools totally separate from the Sinhalese. The older Tamils have Sinhalese friends from childhood and are less conscious of a separate identity than the younger Tamils.
An article in the Ceylon Daily News on August 8, 1981 pointed out that there "is a strong demand within the government parliamentary group that the separatist cry be banned by law." Mr. Harinda Corea, the Deputy Minister of Public Administration, has argued that a constitutional amendment banning separatist demands is possible with a 2/3 majority in Parliament and that a referendum is unnecessary. It will be recalled that the UNP has a 2/3 majority in Parliament at present.
While Tamils in the North are strongly in favor of self-determination, it is by no means certain that, in exercising that self-determination, they would choose independence rather than remaining part of Sri Lanka under a federal constitution. The Sinhalese majority, however, has rejected federalism in the past and seems no more likely to favor it at present. A step toward decentralization has been made recently, through the setting up of District Development Councils.
The violence resulting from racial conflict in Sri Lanka has been of three types: communal, political or terrorist, and violence by security forces. In 1981, all three types have been present to a serious degree. The present section will discuss the three types of violence with emphasis on the events occurring recently.
Communal violence first appeared in Sri Lanka in 1958, ten years after independence. [Note by tamilnation.org: but see Tamil Parliamentarians attacked & 150 Tamils killed - 1956, two years before 1958.] The early history of Ceylon was replete with wars between Sinhalese and Tamil kingdoms, but the 1958 conflict was the first in which individuals of one ethnic group attacked members of the other group.
As mentioned earlier, the "Official Languages" Act was adopted in 1956, and agitation by an extremist Buddhist group resulted in the failure to adopt a provision for the use of Tamil. The Tamils launched a "satyagraha", or peaceful protest, which resulted in the Bandaranaike-Chelvanayakam Pact making certain concessions to the Tamils. Following another peaceful protest, this time by Buddhists, the Pact was not carried out. According to one commentator, the scheduled Tamil national convention and
Hundreds of persons, primarily Tamils, were killed in this first episode of communal violence. Over 25,000 Tamil refugees were relocated from Sinhalese areas to Tamil areas in the North. The government was criticized for failing to declare a state of emergency early enough.
The next major outbreak of communal violence occurred in August 1977, only a few months after the election of the present government. The violence began as an aftermath of the 1977 elections and was first directed against the losing political party but quickly became communal. It appeared to be related to events occurring during the preceding administration but was also linked to the first evidence of political violence by Tamil youths.
During the 1970-1977 government of Mrs. Bandaranaike there had been increasing tension between Tamils and Sinhalese, particularly between the primarily Sinhalese police force in the northern Tamil area and Tamil youths.
According to the Sansoni Commission (a Commission of Inquiry appointed by the President of Sri Lanka to investigate the 1977 violence), the communal violence was immediately sparked by the shooting of two policemen in the North by Tamil youths, by the inflammatory speeches of Tamil leaders and by the desire of the Tamil population for separation.11
From the Tamil point of view, the violence of the youths and the demand for separation were a consequence of increasing discrimination against them during the previous administration. The allegation that the violence was a reaction to the Tamil demand for a separate state has been perceived as a threat that, if the Tamils persist in demanding separation, they can expect violence against them by the Sinhalese majority. The Sansoni report detailed widespread killings, assaults, rapes, and damage to Hindu temples in almost every area of the Island during the August-September 1977 events.
In August 1981, the third major outbreak of communal violence occurred. Since March, increasing tension had developed between the two ethnic groups because of terrorist attacks against police in the north, incommunicado detention of Tamil youths, arson and looting by police in Jaffna. The first act of violence occurred in early August following a clash at a sports meet between Sinhalese and Tamil students in Amparai. It was reported that the Tamil school was surrounded, teachers and students attacked, Tamils in government offices assaulted and the Hindu temple set on fire in the first few days of August. Later, several Tamil colonies in nearby areas were attacked by Sinhalese colonists.
Subsequent August incidents of violence centered on three specific areas: the gem mining area of Ratnapura, Negombo near the capital city of Colombo, and the plantation towns in central Sri Lanka. Before the violence was brought under control by the declaration of a state of emergency by President Jayewardene on August 17, at least 10 Indian Tamils had been killed, numerous Tamil shops and businesses burned, and more than 5,000 Indian Tamils had fled to refugee camps.
Unlike the earlier events of violence in 1958 and 1977, the 1981 attacks of arson, looting and killing appear to have been, in part, the work of organized gangs. The International Herald Tribune reported that President Jayewardene, in an interview with a Reuters correspondent on August 14, stated that the attacks on Tamils in Ratnapura appeared to have been organized.
The Guardian (London) reported on August 15 that "it seems to have been established that an unnamed group is organising the present violence for motives of its own." An editorial in The Hindu (India) of August 18, 1981 stated that "a close look into the riots would show that behind them is a planned and systematic effort to aggravate racial animosity."
It was widely reported that attacks in Negombo as well as an attack against passengers on a Jaffna to Colombo train were made by organized gangs. Tamil sources stated that it could not be ruled out that people close to the government were behind the organized violence. They also claimed that police and army forces did not intervene to prevent attacks until the declaration of the state of emergency many days after the attacks began.
Another new element in the recent incidents was the concentration of the violence against the Indian estate Tamils. Earlier communal violence had been directed primarily against the Ceylon Tamils. The attacks against the impoverished Indian Tamils had the effect of internationalizing the conflict, since Indian passport holders were among those attacked. According to Indian sources, some 70,000 Indian passport holders in Sri Lanka are awaiting repatriation to India as a result of the 1964 agreement between the two countries. As mentioned earlier, thousands of these Indian Tamils fled to refugee camps during the August violence. Some sought refuge with the office of the Indian High Commissioner in Sri Lanka.
The August violence was widely reported in the Indian press and was the subject of editorials in major Indian newspapers. In Madras, India, hundreds of students demonstrated to protest the attacks against the Tamils. Prior to the declaration of the state of emergency, the Indian High Commissioner in Sri Lanka conveyed to the Sri Lankan government his government's concern over attacks against Indian Tamils. In the Lok Sabha, the Indian parliament, a number of M.P.s expressed concern. In response, the Indian Minister for External Affairs, Narasimha Rao, stated that the incidents were an internal matter for Sri Lanka, that he had been assured that the violence was being brought under control and that he hoped that there would be no disruption of the traditional good relations between the two countries.
The outbreak of violence in August 1981 has been attributed variously to organized gangs, to a "foreign hand", to a backlash of the Sinhalese population because of Tamil youth terrorism and demands for separation, and to animosity against Tamils stimulated by Sinhalese elements within the government. The Sri Lankan Minister of State for Information and Broadcasting, Anandatissa De Alwis, announced on August 16 that "a foreign hand" was behind the communal violence. He did not identify the foreign country allegedly involved.
The accusations of involvement of the Sri Lankan government relate particularly to a no-confidence motion in Parliament in July against A. Amirthalingam, Tamil United Liberation Front opposition leader. The motion of no-confidence was passed with 121 Government members voting for it and two abstaining. (Mr. S. Thondaman, the Minister of Rural Industrial Development and the President of the Ceylon Workers Congress representing Indian estate Tamils, abstained.) The M.P.s of the TULF and the SLFP did not participate in the vote. Such a procedure, a vote of no-confidence by the parliamentary majority party in the leader of the opposition, is highly unusual and a clear deviation from normally accepted rules of parliamentary procedure.
The vote was preceded by comments by majority party members strongly critical of Mr. Amirthalingam for speeches abroad on the situation of the Tamils.
An article in The Hindu (India) of August 21, 1981 referred to these comments as "declamatory, Tamil-baiting rhetoric." The Sun (Colombo) of August 8 reported that as a follow-up to the no-confidence motion a group of government M.P.s led by Dr. Neville Fernando wanted Parliament to sit as a Judicial Committee to take action against Mr. Amirthalingam on the grounds that he violated his Oath of Allegiance and the Constitution by the requests to foreign governments to interfere in the internal affairs of Sri Lanka.12
On the 13th of August, during the violent outbreak against the Tamils, Mr. Amirthalingam wrote to President Jayewardene referring to the influence of the parliamentary moves on the then ongoing violent incidents:
The International Herald Tribune of August 31 reported "In July, posters began appearing on walls in Colombo saying: 'Alien Tamils, you have danced too much, your destruction is at hand. This is the country of us Sinhalese.' Tamil leaders claim the posters were inspired by radical elements within Mr. Jayewardene's government and party."
The same article reported that the President had said the posters had been removed and action taken to prevent their publication under the state of emergency. On September 11, the New York Times quoted President Jayewardene as saying "I regret that some members of my party have spoken in Parliament and outside words that encourage violence and the murders, rapes and arson that have been committed." The article continued by stating that the President said he would resign as head of his party if some of its leaders continued to encourage ethnic hostilities.
In July, it was announced that a planned visit of the Indian President to Sri Lanka had been postponed. Indian newspapers alleged that the reason was recent racial tension in Jaffna, although government sources denied this.
With the declaration of the state of emergency on August 17th the situation in Sri Lanka stabilized and violence ceased.13 A large number of Tamils remained in refugee camps.
Terrorist acts by Tamil youth have exacerbated the already tense relations between Sinhalese and Tamils. The political violence or terrorism by Tamil youths, primarily against police in the Jaffna area, began substantially in 1977. The terrorist acts have been attributed to a group called the "Liberation Tigers," estimated to include fewer than 200 persons by government sources.14
A government pamphlet published in June 1981 stated that the group of terrorists had been involved in over 200 acts of violence in the previous three years, including the killing of politicians and of 18 police officials, other acts of homicide, and bank robberies.15
The leadership of the Tamil United Liberation Front has condemned the violence and does not advocate violence to achieve the separate state of Eelam, although allegations have been made that individual members of the TULF have advocated violence as a means of achieving a separate state. The terrorist youth gangs are acting independently from the policy of the Tamil party and there is no evidence that they have substantial support from the Tamil population in the North.
On March 25, a bank was robbed in the town of Neerveli in the Jaffna Peninsula area and two policemen were killed. The robbery was attributed to a terrorist gang and one month later, the army and police, without warrants, arrested 27 young Tamil men under the Prevention of Terrorism Act for implication in the robbery.
This Act and its application will be discussed more fully in a later section of this report.
At the end of May, further violence developed during the campaign for District Development Council elections. These elections were to be a significant step towards decentralization and were regarded as a positive act by the government in responding to the demands of the Tamil population for more control over their own affairs. Unfortunately, the election in Jaffna turned into a tragic event further exacerbating the racial conflict.
In May, Mr. A. Thiagarajah, a Tamil who headed the UNP list of candidates, was assassinated. Since the UNP is the governing majority party in the country and a predominantly Sinhalese party, the killing was perceived as a threat to Tamil politicians not to enter the UNP lists.
On June 9, 1981 Mr. Gamini Dissanayake, Minister of Lands and Land Development, stated in Parliament that "those who take to politics opposed to the Tamil United Liberation Front run the risk of death."16
On May 31, two policemen were killed during a TULF rally in disputed circumstances. According to some sources, the policemen shot each other during a dispute. According to others, the two were shot in the back of the head by unknown assailants. The ICJ observer was unable to verify personally the veracity of either account of the deaths. This event precipitated a rampage by police in Jaffna (which is described in the next section under violence by security forces) and led to the imposition of a temporary state of emergency in Jaffna.
On July 28, a terrorist gang of about 15 persons attacked a police station in Anaicottai, six miles out of Jaffna. One policeman was killed; another, who was seriously wounded, died later. The gang escaped with firearms including 17 rifles, two shotguns, a sub-machine gun and a thousand rounds of ammunition. It was the first attack against a police station in Sri Lanka since a Sinhalese youth insurrection in 1971 and was immediately condemned by the leadership of the TULF, who described it as a senseless act of violence.
The government reacted with a number of strong measures. Police personnel were pulled out of six stations in outlying areas and replaced by army officers. Army units were moved into Jaffna. Trucks and armored vehicles carrying army personnel on patrol in Jaffna were evident during the visit of the ICJ observer in early August. The Police Department requested the Defense Ministry to permit police to require national identity cards at all times in the Jaffna peninsula. The increased security measures gave Jaffna the air of being under an army of occupation.
The government is clearly deeply concerned about the problem of terrorism in the north. It has applied the provisions of the Prevention of Terrorism Act to detain a number of youths. The government issued a regulation, under emergency legislation, on August 25 providing for the death penalty or life imprisonment for unlawful possession and transport of weapons and explosives in four Tamil areas. A consideration of whether such measures will prove effective depends on an understanding of the causes of violence among a segment of the Tamil youth.
Tamil publications have explained the development of youth violence in the Jaffna peninsula.17 Although full scale violence did not erupt until 1977 the roots of it can be traced to events occurring during Mrs. Bandaranaike's reign from 1970 to 1977. In 1971 a major insurrection occurred in Sri Lanka. It was led by Sinhalese youth and there appeared to be no participation of Tamils. During the insurrection, 92 police stations were attacked by Sinhalese youth, 37 members of the police and 26 members of the armed forces were killed. The insurrection was eventually suppressed. Funds for the insurrection had been obtained through bank robberies and hold-ups.
Tamil youth, who increasingly suffered the effects of discriminatory measures in language, education and employment, apparently learned some of the tactics of violence from the earlier insurrection.
These discriminatory measures, and the unsuccessful efforts of the Tamil representatives to combat them, led a group of Tamil youth to abandon hope for a peaceful solution to the ethnic problem and to turn to violence. Police harassment and cruelty against young Tamils also appears to have played a part.18
Early instances of violence against police officers appeared to be directed against particular officers considered responsible for brutality against Tamils. Although much of the cruelty and harassment against Tamil youth occurred in the 1970-77 period of the previous government, particularly brutal attacks by police and armed forces occurred during a state of emergency declared in the Jaffna peninsula in 1979 by the present government.
Thus far, no Tamil youths have been convicted of terrorist offences. The complaint is frequently heard that the Tamil population has not assisted the government in apprehending terrorists.
In July, some 150 Tamil youths flew to East Germany and from there sought political asylum in West Berlin, claiming to be persecuted at home. The Sun (Colombo) reported on August 1 that "According to officials both in West Berlin and Colombo, the Tamil youth, who claim to be persecuted at home, are being lured to Berlin by unscrupulous agents promising them work or asylum." Officials in West Germany repatriated a number of the youth to Sri Lanka. The West German section of Amnesty International then began legal proceedings charging "persons unknown" with kidnapping in connection with the repatriation.
Violence or state terrorism by police and armed forces is the third type of violence that has been prevalent in Sri Lanka. The most recent serious incident occurred in early June in Jaffna, but it has been a recurring fact since 1974. In that year, during a session of an international Tamil cultural conference, the police waded into a large group of persons, ostensibly in order to prevent a particular person from speaking; a stampede resulted, causing nine deaths, most of them through electrocution by a fallen wire. The government refused to appoint a Commission of Inquiry and the Tamils set up their own Commission, which reported on the growing antagonism of police forces against Tamils in the north.
Numerous incidents of detention of Tamil youths and maltreatment were reported during the 1970s. The Sri Lankan Movement for Inter-Racial Justice and Equality (MIRJE) reported that, following the adoption of the 1972 Constitution,
In 1979, under the present government, a state of emergency was declared in Jaffna as a result of terrorist attacks. On August 1, 1979, the Civil Rights Movement of Sri Lanka stated that
Allegations of the killing and torture of Tamil youth by police and armed forces during the 1979 emergency are widespread.
Of more immediate concern is the action of police in the burning of Jaffna in June 1981. The situation in Jaffna between March and June has been explained previously. A bank robbery in March had been followed by the detention incommunicado of a number of Tamil youths, and on May 31, two policemen were killed, and two wounded during an election rally. According to both government and Tamil sources, a large group of police (estimated variously from 100-200) went on a rampage on the nights of May 31-June 1 and June 1-2 burning the market area of Jaffna, the office of the Tamil newspaper, the home of V. Yogeswaran, member of Parliament for Jaffna, and the Jaffna Public Library.
The widespread damage in Jaffna as a result of the actions of the police was evident during the visit of the ICJ observer in Jaffna in August. According to government sources, the police, who had been brought to Jaffna from other parts of Sri Lanka, mutinied and were uncontrollable. They had allegedly been enraged at the attacks on police at the election rally and at earlier failures to bring the killers of policemen to justice.
In the early days of June several killings of Tamils were reported, allegedly as a result of police action. Tamil leaders pointed out that it was the responsibility of the government to maintain law and order and that several Cabinet ministers and high security officials were present in Jaffna when some of the violent events occurred.19
The destruction of the Jaffna Public Library was the incident which appeared to cause the most distress to the people of Jaffna. The ICJ observer heard many comments from both Sinhalese and Tamils concerning the senseless destruction by arson of this most important cultural center in the Tamil area. The Movement for Inter-racial Justice and Equality sent a delegation to Jaffna to investigate the June occurrences. The Delegation's report, in referring to the arson of the Public Library, stated,
The 95,000 volumes of the Public Library destroyed by the fire included numerous culturally important and irreplaceable manuscripts.
A state of emergency in the Jaffna area was declared on June 2. On June 4 the District Development Council elections were held. Results were announced after reports of many irregularities including lost ballot boxes. The TULF won every seat in the Jaffna District. On June 11, the government announced that it would appoint a Commission of Inquiry to investigate the events between April 20 and June 2, thus not including the events occurring after the declaration of a state of emergency and during the election. On June 24, Bishop Lakshman Wickremesinghe, Chairman of the Civil Rights Movement, wrote to President Jayewardene urging that the Commission's mandate be extended to include the election period,
It is apparent that relations between the population of Jaffna and the police and security forces seriously deteriorated following the burning of Jaffna by the police in early June. The problem has undoubtedly been accentuated by the heavy deployment of the army in Jaffna following the attack on the Anaicottai police station in July, the emergency regulation imposing the death penalty or life imprisonment for the illegal possession of arms in Tamil areas and the proposed requirement that identity cards be carried at all times, particularly in the north.
The great majority of police and army personnel assigned to Jaffna are Sinhalese who understand neither the language nor the culture of the Tamils. In addition, in view of the attacks on them, they appear to have a feeling of fear and insecurity. It has also been alleged that when heavy reinforcements of police have been brought into the area inadequate provision has been made for their food and housing. In July, newspapers reported that 43 policemen assigned to Jaffna requested transfers from the area.
Violence by the police has not, of course, been universal. The report of the MIRJE Delegation to Jaffna in June pointed out that
In 1975, Walter Schwarz wrote the following in a study prepared for the Minority Rights Group entitled The Tamils of Sri Lanka:
Unfortunately, the situation thus envisaged in 1975 has come to pass: there have been two serious outbreaks of communal violence since 1975, and political terrorism and security-force counter-terror have become all too prevalent. The prospect of the situation in Sri Lanka evolving into that of a Northern Ireland or a Cyprus no longer seems remote.
When the United National Party won the election in 1977 there were high hopes among Tamils that the racial problem would improve in comparison with the situation under the previous government. The UNP manifesto prior to the election stated,
A new Constitution was drafted by a Select Committee of Parliament and adopted in 1978, but the TULF refused to participate in the drafting and adoption on the grounds that the government had failed to summon the promised All-Party Congress to consider the Tamil problem. The All-Party Congress referred to in the UNP manifesto was never held.
The government maintained that with the adoption of the new Constitution the Tamil problem had found a fair and just solution. The 1978 Constitution contains extensive provisions on the use of Sinhala and Tamil. It provides that Sinhala shall be the official language but that both Sinhala and Tamil shall be national languages. Both languages may be used in Parliament and local governments, official documents must be published in both Sinhala and Tamil, a person is entitled to be examined in either national language at any official examination, and persons are entitled to education in the medium of either language. In the Northern and Eastern Provinces the Tamil language is to be used as the language of administration in addition to Sinhala. Although persons sitting for official examinations may take them in either Sinhala or Tamil, they may be required also to have a sufficient knowledge of the official language for admission to government service or to acquire such knowledge within a reasonable time. Government officials are not required to have a knowledge of Tamil. The failure to accord equal status to the Tamil language remains a bone of contention.
The 1978 Constitution also contains provisions guaranteeing fundamental rights. The preceding administration had been widely criticized for continuing a state of emergency during most of its tenure and for severe curtailment of civil liberties. According to one scholar,
Others have pointed out that the Constitution permits extensive restrictions in certain circumstances on many of the rights guaranteed. (Article 15)
In the area of education, the present government made some changes by dropping a controversial provision for standardization of examination marks, but left basically intact a racial quota system. At the present time only 30% of the places available in universities are to be filled according to merit on an all-island basis. Fifty-five percent are allocated to revenue districts in proportion to their population and filled according to order of merit within each district.
Since the Tamil population is localized in certain districts, the effect of this percentage provision is to limit effectively the two populations to a proportionate share of university entrance, and to make it possible for students from one revenue district with lower marks to achieve university entrance while students with higher marks from another district are denied admission.
The remaining 15% of places are allocated to revenue districts deemed to be educationally underprivileged. The conformity of these "affirmative action" provisions with international norms will be discussed in a later section, but they have been criticized by Tamils as constituting a form of racial discrimination, since entrance is based on merit only to a limited extent.
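To illustrate the arithmetic effect of such an allocation, the following sketch (in Python, with hypothetical districts, marks and place counts invented purely for illustration, not figures drawn from this report) shows how a district-proportional quota filled by merit within each district can admit a candidate with lower marks from one district while excluding a higher-marked candidate from another; the 15% tier for underprivileged districts is omitted for brevity.

    # Illustrative sketch only: hypothetical numbers, not actual admission data.
    # Models the allocation described above: 30% of places by all-island merit,
    # 55% shared among districts in proportion to population and filled by merit
    # within each district.

    def allocate(places, applicants, district_pop):
        """applicants: list of (name, district, marks); district_pop: {district: population}."""
        merit_places = round(places * 0.30)
        district_places = round(places * 0.55)
        ranked = sorted(applicants, key=lambda a: a[2], reverse=True)

        admitted = [a[0] for a in ranked[:merit_places]]        # all-island merit tier
        remaining = ranked[merit_places:]

        total_pop = sum(district_pop.values())
        for district, pop in district_pop.items():
            quota = round(district_places * pop / total_pop)    # proportional share
            pool = [a for a in remaining if a[1] == district]   # already in merit order
            admitted += [a[0] for a in pool[:quota]]
        return admitted

    applicants = [("A1", "A", 85), ("A2", "A", 80), ("A3", "A", 78),
                  ("B1", "B", 72), ("B2", "B", 70)]
    print(allocate(places=4, applicants=applicants, district_pop={"A": 600, "B": 400}))
    # -> ['A1', 'A2', 'B1']: B1 (72 marks) is admitted while A3 (78 marks) is not.

On these invented numbers, a candidate with 72 marks in District B gains entrance while a candidate with 78 marks in District A is excluded, which is precisely the effect the Tamil critics describe.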
An area in which the present government has not made concessions is that of colonization. Tamils have objected to State colonization schemes which import large numbers of Sinhalese into traditional Tamil areas. The Tamil concern about colonization is related to insecurity about their physical safety and to fears that Tamils will become a minority in their traditional homelands.
The government maintains that since Sri Lanka is a single country citizens may freely move into any part of the country and that it is necessary to transplant some populations to more productive areas.
The Tamils answer that they are not opposed to individual migration but only to large-scale government colonization schemes which change the ethnic composition of an area. The present writer was not able to obtain statistics on the extent of colonization in Tamil areas and thus to determine the degree to which such schemes are a major problem.
One of the most positive steps the Jayewardene government has taken in the area of human rights is the ratification in 1980 of the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights.
The government also made the declaration under Article 41 of the Civil and Political Covenant which permits the Human Rights Committee to entertain complaints of non-observance by another state which has made a similar declaration. Sri Lanka has not yet ratified the Optional Protocol to the Civil and Political Covenant which would permit individuals to bring complaints of violations before the Human Rights Committee, but the government's willingness to accept international norms and thus to have its own actions evaluated in accordance with such norms is a welcome step.
Another positive step is the government support and development of the educational activities of the Human Rights Centre of the Sri Lanka Foundation. The Centre is a government controlled organization which does not entertain complaints concerning human rights, but carries on educational functions such as programs within schools to make the Human Rights Covenants better known.
Racial antagonism has been such a pervasive element in Sri Lanka that it would appear appropriate for a government controlled Human Rights Centre to undertake an intensive educational campaign for the elimination of racial intolerance.
It has been frequently pointed out that the separate educational systems for Tamils and Sinhalese in Sri Lanka since independence have had certain negative effects on racial understanding. In addition, the traditional teaching of history in Sri Lanka has contributed to racial animosity. Although immediate short-term actions are necessary to defuse racial tension, a long-range program of education in racial tolerance and understanding seems essential.
The Tamils have consistently pressed for decentralization of government administration. This took the form of a demand for a federal structure of government prior to the TULF commitment to a separate state in 1976. But, while continuing to advocate separation, the TULF has simultaneously worked toward decentralization within the present structure.
The present government has made some important concessions in this regard. It appointed a Presidential Commission to inquire into the idea of District Development Councils and, rather than opting for weak councils, adopted the system advocated in a Commission dissent by the TULF appointee, Dr. Neelan Tiruchelvam.24 One commentator has written
The unfortunate circumstances connected with the June 1981 District Development Council elections in the Jaffna Peninsula, however, and the communal violence in August again seemed to dash hopes that the Tamil problem might be settled. The repeated reports that some members of the government were responsible for the irregularities in the local elections in Jaffna, as well as for stirring up the racial animosity which led to violence, have caused distrust of the UNP's sincerity in meeting reasonable Tamil demands.
The present government has been unsuccessful in controlling the communal violence, security force violence and political violence that have escalated during its tenure. Two major outbreaks of communal violence have occurred since 1977. The first, which broke out immediately after the UNP election, did not, however, relate to events occurring under the present regime. The communal violence which occurred in August 1981 appeared to many observers to be the product of organized gangs and to have been stimulated by anti-Tamil propaganda, some of it allegedly emanating from the United National Party.
During the August violence, the government- and Sinhala-controlled English-language newspapers reported, but did not play up, accounts of the killings, widespread arson and looting directed primarily against the Tamil population.
On the other hand, the English language papers headlined the terrorist attack by Tamil youth against the police station in Anaicottai in July in which two policemen were killed. Censorship of news of violence would not be a wise solution, but government efforts might well be directed towards discouraging selective reporting which arouses racial animosity.
Controlling elements within the government's own party which contribute to anti-Tamil sentiment is clearly a necessity. As a minimum, the Tamils are entitled to protection of their physical security within Sri Lanka. This protection can no longer be taken for granted. Some Sinhalese have urged the Tamil leaders to refrain from advocating separation since it appears to be one of the causes of Sinhalese animosity and thus violence. Such urging hardly seems likely to be heard as long as Tamils feel discriminated against in education and employment and, as happened in Jaffna in June 1981, feel unprotected, even from police violence, in their traditional homeland.
A major step towards controlling the violence of the police in the Jaffna area would be vigorous investigation and prosecution of police and security officials responsible for the burning of Jaffna in June and allegedly responsible for several arbitrary killings. The government has stated that a Commission of Inquiry will be established to investigate the events occurring up to June 2, but not the irregularities which occurred during the election for the District Development Council. It is to be hoped that the government will respond to demands of civil rights groups and others to expand the scope of the inquiry and to name to the Commission respected persons acceptable to both the Sinhalese and Tamil communities.
The problem of political violence or terrorism has proved an intractable one for many governments. Easy solutions are obviously not available. The Jayewardene government has chosen to attempt to control the terrorist activities of the relatively small group of Tamil youths by the application of the Prevention of Terrorism (Temporary Provisions) Act adopted in 1979. The human rights issues raised by this Act as well as questions concerning its effectiveness are such that they warrant discussion in a separate section of this report.
In 1979, Parliament adopted the Prevention of Terrorism (Temporary Provisions) Act in response to growing political violence in the northern Tamil area. The Act contains a number of disturbing provisions from the human rights point of view. Section 6 of the Act provides that
The Act provides that a person may be detained for periods up to 18 months (renewable by order every 3 months) if "the Minister has reason to believe or suspect that any person is connected with or concerned in any unlawful activity" (Section 9). The same Section also provides that such a person may be detained "in such place and subject to such conditions as may be determined by the Minister." Under recent application of the Act, 27 persons have been detained in an army camp without access to attorneys or to relatives for prolonged periods.
The Act also provides that any confession made by a person orally or in writing at any time shall be admissible in evidence unless made to a police officer below the rank of an Assistant Superintendent (Section 16). Thus, confessions made to police, possibly under duress, are admissible. It provides that a statement recorded by a Magistrate or made at an identification parade shall be admissible in evidence even if the person is dead or cannot be found and thus cannot be cross-examined (Section 18(1)(a)). Any document found in the custody of a person accused of an offence under the Act may be produced in court as evidence without the maker being called as a witness, and the contents of the document will be evidence of the facts stated therein (Section 18(1)(b)).
The Act is also retroactive since it defines "unlawful activity" as including action taken or committed before the date of coming into operation of the Act which would, if committed after the date of passing of the Act, be an offence under the Act. (Section 31(1)).
The Act provides for prison terms for conviction of an offence, ranging from five to twenty years or life imprisonment depending upon the severity of the offense.
The government has stated that many democratic countries such as Canada, Australia, the United Kingdom and India faced with similar situations have adopted similar legislation.26
The title given to the Sri Lankan Act, "Prevention of Terrorism (Temporary Provisions) Act", is the same as the title of a United Kingdom Act originally adopted in 1974 and repealed and re-enacted with some amendments in 1976.
The Sri Lankan Act, however, differs substantially from the U.K. Act in the extent to which it infringes human rights. In the latter, terrorism is given a narrow definition, namely, "the use of violence for political ends, and includes any use of violence for the purpose of putting the public or any section of the public in fear." The U.K. Act makes membership in a proscribed organization (the IRA) an offence, with some exceptions. The much broader definition of offences or unlawful activity under the Sri Lankan Act has been referred to above.
While the U.K. Act permits arrest without warrant on suspicion that an offence under the Act has been committed and permits exclusion of persons from mainland Britain in certain circumstances, it does not permit prolonged incommunicado detention without trial as does the Sri Lankan Act. Persons arrested under the U.K. Act may not be detained more than seven days without being charged with an offence.
Under the Sri Lankan Act they may be detained incommunicado up to 18 months. The application of the U.K. Act, which is less repressive than the Sri Lankan Act, has been criticized within the U.K. The Guardian (London) reported on Jan. 13, 1980 that
A number of the objectionable features of the Sri Lankan Act are similar to provisions of the widely criticized 1967 Terrorism Act of South Africa.28
The South African Act defines a "terrorist," inter alia, as a person who has committed or attempted to commit any act which could "cause, encourage or further feelings of hostility between the White and other inhabitants of the Republic." This provision has been criticized as unduly vague since speeches or writings which criticize the apartheid system, for example, could be considered terrorist activities under this definition.29 The same criticism may be directed against a similar section of the Sri Lankan Act (Section 2(1)(h)) which states that
Such a broad definition could be construed as encompassing the
The South African Act, like the Sri Lankan Act, is retroactive. Similarly to the Sri Lankan Act, it permits prolonged detention without access to legal counsel on suspicion of commission of an offence. In language similar to the Sri Lanka Act it provides that the Commissioner of Police may detain terrorists or persons with information concerning offences under the Act "at such place . . . and subject to such conditions" as the Commissioner may determine, subject to the directions of the Minister of Justice. The South African Act permits indefinite detention; the Sri Lankan Act limits detention to 18 months.
Section 11 of the Sri Lankan Act permits the Minister, if he has reason to believe or suspect that any person is connected with any "unlawful activity," to restrict the residence, employment, movement and activities of such person for periods up to 18 months. Any person who violates such restrictions shall be guilty of an offence and liable to imprisonment for a period of five years (Section 12). This provision, as yet not applied in Sri Lanka, is reminiscent of the notorious "banning orders" permitted under South African legislation.
The South African Terrorism Act has been called "a piece of legislation which must shock the conscience of a lawyer."30 Many of the provisions of the Sri Lankan Act are equally contrary to accepted principles of the Rule of Law.
While a substantial number of the provisions of the Terrorism Act are clearly contrary to internationally accepted minimum standards for criminal procedure,31 they also appear to be contrary to the provisions of the Sri Lankan Constitution which provide that every person held in custody or detained shall be brought before the judge of the nearest competent court and shall be held in custody or detained only on the order of the judge (Article 13(2)).
The Constitution forbids retroactive criminal offenses and penalties (Article 13(6)). Article 15(7) of the Constitution, however, provides that the exercise and operation of the fundamental rights recognized in Article 13, inter alia,
There is no provision for judicial review of the constitutionality of laws in Sri Lanka after they have been enacted by Parliament. The ordinary procedure for testing the constitutionality of laws occurs before an Act is adopted. Article 121 of the Constitution provides that the President or any citizen may ask the Supreme Court for its judgment as to the constitutionality of a Bill within one week of the Bill being placed before Parliament.
The Supreme Court is to make its decision known to the President and the Speaker within three weeks. Bills which are determined to be unconstitutional by the Supreme Court may not be passed. In the case of a Bill which is considered by the Cabinet to be an "urgent" Bill, however, the Supreme Court is to make its determination within 24 hours and there is no provision for reference to the Supreme Court by citizens. The Prevention of Terrorism Act was declared an urgent Bill and rushed through Parliament without the opportunity for public discussion or debate or for any challenge to its constitutional validity.
Twenty-seven Tamils were detained as of the end of August 1981 under the Prevention of Terrorism Act. They had been held since April in Panagoda Army Camp as suspects in a bank robbery in Neerveli in March 1981. Nine persons were detained in previous years under the Act but were eventually released. No convictions have ever been made under the Act.
Soon after the arrest of the Tamils currently in detention, petitions for writs of habeas corpus for four of the detainees were filed in the Court of Appeal by their relatives under Article 141 of the Constitution.33 This Article provides that the Court may issue writs of habeas corpus to bring before the Court ...
The hearing on the petitions for habeas corpus opened on July 27, three months after the arrests. The hearings concerned the legality of the arrests of the detainees, allegations that they were severely tortured and the validity of detention orders made by the Minister of Internal Security. The undersigned ICJ observer was present in court for part of the hearings.
On the first day of the hearings, the three member Court consisting of the President, Justice Percy Colin-Thome, Justice Parinda Ranasinghe and Justice D. Athukorale, ordered the Army to bring the detainees into court and to permit them to consult lawyers.
This was the first opportunity provided to the detainees to consult lawyers in the three months since their arrest. It was also the first opportunity for family members to see the detainees since the arrest. The detainees were brought to the Court by army officers who were ordered to withdraw from the courtroom after objections by one of the attorneys for petitioners. Numerous members of the armed forces remained in the courtyard during the trial.
The petitioners were represented at the hearing by a team of respected lawyers led by a distinguished advocate, Dr. Colvin R. de Silva. The lawyers did not argue for the release of the detainees but asked that they be removed from the custody of the Army and placed in the custody of the Court, relying on the section of Article 141 of the Constitution which permits the Court to "otherwise deal with such person (detainee) according to law." Dr. de Silva contended that a detainee could be considered to have been "improperly detained" when he had been subjected to assaults and torture while in custody, had been arrested without a warrant and without being informed of the reasons for his arrest, or had been held without a valid order from the Minister.
He also argued that the Minister must have an objective basis for his "reason to believe or suspect that any person is connected with or concerned in any unlawful activity" as required for detention under the Act. He argued that when two constructions may be placed on a statute, such as the Terrorism Act, the construction most in harmony with fundamental freedoms should be accepted. Hence, the Act should not be interpreted in such a way as to infringe on rights guaranteed in the Constitution.
The Deputy Solicitor General, Tilak Marapane, arguing on behalf of the respondents, police and army officials, contended that the detainees were held under valid ministerial orders and that the Minister had an objective basis for his reason "to believe or suspect" that the detainees were connected with unlawful activities. The Deputy Solicitor General presented the information on which the Minister had relied to the Court. In the main, the evidence relied upon appeared to consist of allegations that the four detainees were close associates of persons known to have been connected with the bank robbery or allegations that they were members of an organization attempting to bring about the separate State of Eelam through violence.
The petitioners were ably represented at the hearing and the trial was conducted with judicial propriety by Mr. Justice Colin-Thome, President of the Court of Appeal. The hearings were reported extensively in the press.
The judgment of the Court was rendered on September 10 when the undersigned was no longer in Sri Lanka.34
The Court of Appeal unanimously refused the application for writs of habeas corpus, but directed that lawyers should have access to the detainees at the Panagoda Army Camp and that the Judicial Medical officer or his Deputy should examine each of the detainees once a week. The Court stated that its refusal to remand the detainees to custody with other prisoners was in the detainees' own interest in view of "recent disturbances." The Court also found that the Minister had sufficient reasons for the making of the detention orders and that valid detention orders were ultimately made which remedied defective early orders. Hence, the detainees were validly held under the Terrorism Act.
Concerning the allegations of torture and mistreatment, the Court found that violence had been used against C. Kulasegarajasingam at Elephant Pass Camp prior to his transfer to Panagoda. The judgment stated that the detainee had been examined by a doctor after the filing of the application for writs of habeas corpus and "The doctor ended his report with the euphemism--'There is no evidence of any unreasonable harsh force being used to amount to torture.' There is no doubt, however, that violence had been used on him at the Elephant Pass Camp and we reject the denials of his custodians that he was not assaulted." With regard to allegations that S. Arunagirinathan had been assaulted during detention the Court found that, on medical examination, he had "two non-grievous contusions on his buttocks and there is no doubt that these indicated that he had been beaten by a blunt weapon."
The judgment also said that the allegation that V. Sivaselvam was severely assaulted "appears to us to be exaggerated. However, the use of violence of whatever degree on a prisoner is illegal and is not only an offence under the Penal Code, it contravenes Article 11 of the Constitution. 'No person shall be subjected to torture or to cruel, inhumane or degrading treatment or punishment.'"
Physical assaults against detainees in order to elicit confessions are common occurrences in many countries during prolonged detention incommunicado under executive order. The Court has now confirmed that violence has been used against detainees held under the Terrorism Act in Sri Lanka. The Court's finding that assault occurred against three of the four detainees was presumably based on affidavits of Judicial Medical Officers who examined the detainees in May on orders of the Court and on the detainees' own statements. The medical examinations had been requested by attorneys for the petitioners.
The Court held that the arrests without warrant were in accordance with the provisions of the Terrorism Act. As regards the allegations that the detainees were not informed of the reasons for their arrest, the Court held that it was unable to verify whether this had been done or not. Referring to Article 13(1) of the Constitution, which states that a person arrested shall be informed of the reason for the arrest, the Judges said "these provisions are mandatory and any infraction of them is illegal and must be strongly condemned as a serious encroachment on the liberty of the subject guaranteed under the Constitution." They pointed out that failure to inform the arrested person will make a police officer liable to be convicted under the Penal Code for assault and wrongful confinement.
The Judges stated "what is the mischief aimed at by this Act? Everybody knows that this Act is intended to rid this country of terrorism in all its recent sophisticated manifestations. To achieve this end, the legislature has invested extreme powers in the courts, the executive and the police which they do not have in normal times, in the interest of national security and public safety. Conscious that these powers are of an extreme nature the legislature has laid down that this Act certified on July 20, 1979, shall be in operation for a period of three years from the date of its commencement."
By its frequent invocation of constitutional safeguards, its findings of violence during detention, and its references to the expiration date of the Act, the Court's judgment makes abundantly clear the exceptional danger to human rights implicit in the Terrorism Act. The petitioners for writs of habeas corpus in the case have appealed to the Supreme Court against the decision of the Court of Appeal denying the petitions.
The provisions of the Sri Lankan Terrorism Act are not only objectionable from a human rights point of view but it is doubtful that the Act is effective in controlling terrorism. The limitations on human rights, therefore, do not seem acceptable as a necessary means of maintaining public security. Since 1979, when the Act was adopted, terrorism has not declined but rather increased in the northern Tamil area. Increased police and army surveillance of the population have not curtailed the violence but seemingly stimulated it. This experience is similar to that of some other countries which have attempted to control terrorism by armed force rather than dealing with the fundamental factors contributing to the recourse to violence.
The experience of the United Kingdom in dealing with terrorism in Northern Ireland is instructive. It demonstrates that provisions for prolonged incommunicado detention of suspected terrorists may be counterproductive. According to the judgment of the European Court of Human Rights in the case of Ireland against the United Kingdom extra-judicial powers were adopted to control violence in Northern Ireland in the 1970s because
These reasons are strikingly similar to those mentioned by the Sri Lankan government for adopting the Prevention of Terrorism Act. In a brochure entitled "Investigations into Acts of Terrorism" prepared by the Ministry of Foreign Affairs, June 25, 1981,36 it is stated that,
The extra-judicial methods adopted in Northern Ireland to combat terrorism included prolonged detention of suspected IRA members. Widespread detention of suspects was terminated in 1975 in Northern Ireland following recommendations of the Gardiner Committee, appointed by the United Kingdom government, whose terms of reference were "to consider what provisions and powers, consistent to the maximum extent practicable in the circumstances with the preservation of civil liberties and human rights, were required to deal with terrorism in Northern Ireland, including provisions for administration of justice."37 The report of the Gardiner Committee concluded,
Persuaded by these arguments, the United Kingdom government abandoned further use of administrative detention orders in December 1974 and released all existing detainees by 1976.
The undersigned interviewed families of two detainees in Jaffna in August. The families detailed the frightening manner in which large groups of security officials, some in civilian clothing, came in early morning hours to arrest detainees without warrants and without identifying themselves. The families stated that they were not told where the detainees were being taken and were informed of their whereabouts only after more than a month. They have not been allowed to visit detainees. Now, they have learned that their family members who were detained have been assaulted. It is not difficult to imagine that such tactics may, in the long range, be counterproductive.
The Northern Ireland case before the European Court of Human Rights may have further relevance to the application of the Sri Lankan Act.
During the habeas corpus hearings before the Court of Appeal in Colombo in July 1981 it was alleged by attorneys for the petitioners that during periods of interrogation detainees had been required to stand for long periods against a wall in a stress position with their hands high above their head against the wall.
In the Northern Ireland case the use of this technique by security forces against detainees, together with other techniques such as hooding, subjection to noise, deprivation of sleep and deprivation of food and drink, was determined by the European Court of Human Rights to constitute inhuman and degrading treatment and thus a violation of Article 3 of the European Convention on Human Rights. During the hearing before the court the United Kingdom agreed to discontinue the use of such techniques.
The European Commission on Human Rights had earlier considered that such techniques constituted torture. The Court of Appeal decision in the habeas corpus proceeding in Colombo did not specifically find whether such techniques had been employed against detainees in Sri Lanka.
The great concern of the Sri Lankan government over the growing violence in the Tamil areas is understandable. Nevertheless, it is to be hoped that the limitations on human rights present in the Terrorism Act and the possible counter-productiveness of the Act will lead the government to urge Parliament to permit the Act to expire in 1982 or to amend it to better protect the rights of detainees.
The Sri Lankan government has evidenced its commitment to human rights by its ratification in 1980 of the two International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights. This commitment, subjecting the status of human rights in Sri Lanka to evaluation in accordance with international norms, is a positive step for which the present government should be commended. This section will briefly consider certain international standards which are relevant to some of the current human rights problems in Sri Lanka.
The rights of arrested and detained persons are referred to in Articles 7, 9, 10, 14 and 15 of the International Covenant on Civil and Political Rights. Provisions of the Prevention of Terrorism Act are contrary to the following articles of the Covenant:
"Anyone arrested or detained on a criminal charge shall be brought promptly before a judge or other officer authorized by law to exercise judicial power and shall be entitled to trial within a reasonable time or release." (Article 9(3)).
The Terrorism Act permits detention on administrative order for a period up to eighteen months.
"No one shall be guilty of any criminal offence on account of any act or omission which did not constitute a criminal offence under national or international law, at the time when it was committed." (Article 15(1)).
The Terrorism Act contains provisions for retroactive application.
In addition, it appears that in the application of the Terrorism Act, the following provisions have not been conformed with:
"No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment." (Article 7)
"All persons deprived of their liberty shall be treated with humanity and with respect for the inherent dignity of the human person." (Article 10(1))
"Anyone who is arrested shall be informed, at the time of his arrest, of the reasons for his arrest and shall be promptly informed of any charges against him." (Article 9(2))
In September 1981 the Court of Appeal found that three of the four detainees then before the Court in habeas corpus proceedings had been assaulted during detention. The Court said that it could not determine whether the detainees had been informed at the time of their arrest of the reasons for the arrest.
At the present writing, a Draft Body of Principles for the Protection of All Persons under any Form of Detention or Imprisonment is being considered by the U.N. General Assembly. Although the Principles have not yet been formally adopted by the General Assembly, they have been approved by the Human Rights Commission and Sub-Commission and thus represent an appropriate standard against which to measure the Terrorism Act.
The Draft Body develops in more detail the general provisions contained in the Civil and Political Rights Covenant.
It provides that "A detained person shall be entitled to communicate with a lawyer of his own choice within the shortest possible period after arrest" (Principle 15(2))
Not only does the Terrorism Act make no provision for access to a lawyer soon after arrest but the government has stated that withholding access from lawyers and family members is one of the important and necessary aspects of the Act. A government pamphlet concerning the Terrorism Act states, "If the Police are to conduct and complete their investigations successfully, it is important that these detainees should not have access to their lawyers and relations for a certain minimum period."39
Principle 14 of the Draft Body of Principles states, "Immediately after arrest and after each transfer from one place of detention to another, a detained or imprisoned person shall be entitled to notify members of his family of his arrest or detention or of the transfer and of the place where he is kept in custody." The families of prisoners detained under the Terrorism Act have stated that they were uninformed of the whereabouts of their detained family members for more than a month after their arrest.
Principle 23 of the Draft Body of Principles provides that "Any evidence obtained in contravention of these Principles shall not be admissible in any proceedings against a detained or imprisoned person." Thus, confessions obtained during prolonged detention without access to lawyers or obtained when there has been evidence of torture or inhuman or degrading treatment (Principle 5) should not be admissible in evidence. The Terrorism Act does not prohibit the admission of evidence obtained under such circumstances. (Section 16 et seq.)
It has frequently been pointed out that incommunicado detention, such as permitted by the Terrorism Act, opens the door to abuse. The Inter-American Commission on Human Rights stated in a report on Chile:
"Unlawful detention incommunicado is, moreover, an encouragement to other crimes, particularly that of torture. For if the officials in charge of detention facilities need not produce the detainee in a short time, they may with impunity employ brutal means, whether for purposes of interrogation or intimidation."
The International Commission of Jurists has pointed out that "[s]everal decisions by the Human Rights Committee under the Optional Protocol to the International Covenant on Civil and Political Rights finding violations of the Covenant by Uruguay also demonstrate the relationship between torture and detention incommunicado, and in particular denial of access to a lawyer."41
The Bennett Report in the United Kingdom on Police Interrogation Practices in Northern Ireland reported that "the security forces regularly denied detainees access to a lawyer in order to create an atmosphere more favourable to extorting a confession."42
The Court of Appeal in Sri Lanka has wisely attempted to temper the application of the Terrorism Act by requiring access to lawyers and regular medical examinations for the four detainees for whom habeas corpus writs were requested. The existence of the writ of habeas corpus is therefore an important procedural protection in Sri Lanka. Nevertheless, 23 persons remained in the custody of the Army for whom relatives had not filed petitions for writs of habeas corpus. For these persons there has been no judicial control.
The government has pointed out that the Terrorism Act is needed to control the outbreak of terrorism, a situation which might be considered as an emergency situation, thus justifying certain derogations from human rights.
Article 4 of the Covenant on Civil and Political Rights permits derogation from Articles 9, 10 and 14 which concern criminal procedure "in times of public emergency which threatens the life of the nation and the existence of which is officially proclaimed ... to the extent strictly required by the exigencies of the situation."
The Terrorism Act has been in effect since 1979 and the government has officially proclaimed a state of emergency only for short periods during that time. In addition, although this is a matter of appreciation, its draconian provisions do not seem "strictly required by the exigencies of the situation," particularly in view of the fact they may be counterproductive in dealing with terrorism. Furthermore, Article 4 does not permit any derogations in emergency from the prohibition of torture or inhumane treatment and the prohibition of retroactive criminal legislation.
There is no doubt that terrorist acts have been and are occurring in Sri Lanka and that they create a serious law enforcement problem. It is doubtful, however, that the Sri Lankan government itself would consider these terrorist acts as a "public emergency which threatens the life of the nation," a requirement for derogation under the Covenant.
The Human Rights Committee set up under the Covenant has not yet interpreted this language but the European Court of Human Rights, in interpreting similar language in the European Convention on Human Rights, has held it to mean "an exceptional situation of crisis or emergency which affects the whole population and constitutes a threat to the organized life of the community of which the state is composed."43
Attention has been focused recently on the problem of derogations from human rights during periods of emergency. It has been pointed out that the major violations of human rights occurring in the world today take place during periods of "emergency" which has been defined as "the suspension of or departure from legal normality in response to a political, economic or social crisis."44
The International Commission of Jurists is presently studying the effects of states of emergency on human rights, and Madame Nicole Questiaux has been appointed Special Rapporteur of the U.N. Sub-Commission on Prevention of Discrimination and Protection of Minorities for a similar study. In a preliminary progress report to the Sub-Commission the Special Rapporteur has referred to a series of principles which must be maintained even in periods of emergency. One of these is the Principle of Proportionality. This means that the emergency measures taken must be in proportion to the actual requirements; in other words, derogations from human rights should be only "to the extent strictly required by the exigencies of the situation." The potential for human rights abuses implicit in the provisions of the Sri Lankan Terrorism Act is such that it is doubtful that those provisions are required by the situation in that country.
In referring to derogations from human rights under Article 4 of the Covenant the Human Rights Committee has stated,
It appears that the situation created by terrorist acts in Sri Lanka is not one threatening the life of the nation and that the provisions of the Act exceed the measures strictly necessary in the circumstances. The violations of human rights resulting from the Act are thus not permissible under the Civil and Political Rights Covenant.
In August 1981 the President of Sri Lanka declared a state of emergency after authorization by Parliament. This emergency was declared as a result of communal violence--looting, arson and murders against the Tamil population--and not as a result of acts of the small terrorist group. The August declaration of the state of emergency was widely regarded as a necessary step and an effective method of halting the communal violence. This appears to be an appropriate and even necessary use of a state of emergency, in contrast to the continuing emergency-type legislation embodied in the Terrorism Act. The state of emergency will presumably be terminated as soon as the immediate danger of communal violence has passed.
Both the Covenant on Civil and Political Rights and the Covenant on Economic, Social and Cultural Rights provide that the States Parties will guarantee that the rights enunciated in the Covenant will be exercised without discrimination on the basis of race or language. (Articles 2 in both covenants.) The Economic Covenant provides that "Higher education shall be made equally accessible to all, on the basis of capacity." (Article 13(2)(c)). It appears that legislation in Sri Lanka concerning admission to universities is contrary to Article 13.
The Convention for the Elimination of All Forms of Racial Discrimination (although not ratified by Sri Lanka) can be considered as developing the more general discrimination provisions of the Covenants. It permits affirmative action under certain circumstances in Article 2(2):
Affirmative action programs are usually adopted by a majority group to help a backward minority group. It is unusual for a majority group to adopt affirmative action programs to help their own group as is the case of the Sinhalese in Sri Lanka.
Nevertheless, it could possibly be justified in unusual circumstances if the extreme backwardness of the majority population was the result of prior political domination by the minority group.
The majority Sinhalese community has been in political power in Sri Lanka since independence in 1948 and the difference between the two groups economically and educationally does not seem sufficient to justify the affirmative action measures relating to higher education. The fifteen percent of admission to universities awarded to backward areas may be justified. It seems, however, that in order to conform with international standards, the Government should reconsider its policy as to the 55% of places which are awarded to revenue districts on the implicit basis of race.
The government's commitment to racial justice would be further demonstrated by ratification of the Convention on the Elimination of All Forms of Racial Discrimination. This Convention has been ratified by over 100 countries, including the majority of developing countries. An explicit international commitment to eradicate racial discrimination should have a positive effect on the current situation in Sri Lanka. The policies adopted by the government with relation to education could then be appraised on the basis of international human rights norms and the problem resolved more satisfactorily.
Articles 1 of both the Civil and Political Covenant and the Economic, Social and Cultural Covenant provide that
The Tamil United Liberation Front contends that the Tamil population of Sri Lanka has a right to self-determination under international law contained in the U.N. Charter, the Covenants and general international law.
The Tamils could be considered to be a "people." They have a distinct language, culture, a separate religious identity from the majority population, and to an extent, a defined territory. Claims to self-determination under international law, however, must also be balanced against the international law principle of the territorial integrity of states. Moreover, minorities have not generally been considered as a "people" in U.N. application of the principle of self-determination.46 The Principles of International Law concerning Friendly Relations and Cooperation Among States, approved by the U.N. General Assembly in 1970,47 state in relation to self-determination,
The Principle also states that
Although the practice of the United Nations has been to limit the application of the Principle of Self-Determination to colonial situations there is a substantial body of academic opinion which contends that the principle should have wider application, and thus could apply to a situation such as that of Tamils in Sri Lanka.
It is understood that the right of self-determination may be claimed only once by a "people." It could be argued that by participating in the Sri Lankan government since independence, the Tamils no longer have a right to self-determination. The TULF contend, however, that the Tamils did not participate in the adoption of the 1952, 1972 or 1978 Constitutions and thus have never given up sovereignty which reverted to them when the legal ties with Britain were broken in 1972.
The application of the principle of self-determination in concrete cases is difficult. It seems, nevertheless, that a credible argument can be made that the Tamil community in Sri Lanka is entitled to self-determination.
But, ultimately, it will not be the legal principle of self-determination which will solve the problem of Sinhalese-Tamil relations in Sri Lanka but rather a willingness on the part of both groups to work out a political settlement.
Self-determination does not necessarily mean "separation," as pointed out in the Principles of Friendly Relations. It may be exercised while remaining in association or integration with an existing state. A substantial measure of autonomy accorded to the Tamil community through the District Development Councils would seem to satisfy the principle of self-determination. What is essential is that the political status of the "people" should be freely determined by the "people" themselves.
In the absence of substantial measures of autonomy being accorded to the Tamils by the majority community, the argument that self-determination permits separation becomes more persuasive. Whether separation is feasible or advisable is not within the purview of this study and the undersigned expresses no opinion on this subject.
"1. Recent events, particularly relating to ethnic conflict between the majority Sinhalese population and the minority Tamil population have created concern about the status of human rights in Sri Lanka. This is unfortunate since Sri Lanka has had one of the better records in Asia in the field of human rights. Democratic elections have been held and democratic parliamentary institutions maintained since independence in 1948. The country recently celebrated 50 years of universal adult suffrage, has had a proud tradition of adherence to the Rule of Law and a distinguished judiciary. The present government has made explicit commitments to human rights. It has adopted a Constitution which includes articles on fundamental rights and has ratified International Human Rights Covenants. Although the government has made efforts to meet certain demands of the minority Tamil community, the basic inter-ethnic conflict remains unresolved, violence is escalating and the government has taken measures with regard to terrorism which are in violation of international human rights norms.
2. Violence resulting from racial conflict between the majority Sinhalese and minority Tamil communities has reached alarming proportions recently. The violence includes communal violence directed against Tamils and violence by security forces primarily against the Tamil community as well as political terrorism by a small group of Tamil youths directed primarily against the police. In June 1981 the police engaged in widespread arson in the Tamil area of Jaffna in the North of Sri Lanka and in August 1981 there was a major outbreak of communal violence again directed against Tamils. The communal violence in August had international repercussions since Indian Tamil passport holders were killed, their residences burned and many were forced to seek refuge. President Jayewardene has admitted that some members of his political party have stimulated racial intolerance and violence and has promised to purge these elements from the party and government.
3. The sources of racial conflict in Sri Lanka are historical, economic, cultural and religious. Separate Sinhalese and Tamil communities existed on the Island from the pre-colonial era until the administrative unification of the Island by the British in 1833. The early history of Sri Lanka is replete with stories of conflicts between Sinhalese and Tamil kings. During the colonial period the Tamils had a disproportionately high percentage of high governmental posts and admission to prestigious faculties in higher education.
4. Upon independence, the majority Sinhalese population imposed certain policies relating to language, religion, education and government service which were perceived by Tamils as discriminatory but by the Sinhalese as compensating for the prior inferior status of Buddhism and the Sinhalese language as well as the proportionately low percentage of Sinhalese in higher education and government service. The Tamils consider these policies as intended to maintain them in an inferior status in the country.
They point to the fact that Indian Tamils were disenfranchised and rendered stateless at the time of independence, cutting down the Tamil vote to less than one-third in Parliament. The Tamils are thus unable to exercise any effective Parliamentary control over policies that discriminate against them. The 1964 agreement between India and Sri Lanka to repatriate a certain number of Indian Tamils and grant citizenship to the rest has not been fully carried out and Indian Tamils continue to live and work on plantations in conditions of poverty and misery. Sinhala is the official language required for government service; civil service employees are not required to learn Tamil. Buddhism is the official religion; equal status is not given to Hinduism, the religion of the majority of the Tamils. Repression by the police and army in the Tamil areas has been a constant cause of concern and appears to be growing. Tamils are unable to compete for admission to university faculties on the basis of merit alone; an implicit racial quota limits the Tamils to a certain percentage of places.
5. The 1958, 1977 and 1981 communal violence against Tamils by the Sinhalese population coupled with the measures relating to language, religion, education, and government service resulted in a pervasive sense of insecurity among Tamils, a demand for greater autonomy in Tamil areas and eventually the adoption by the Tamil United Liberation Front, the main Tamil political party, of a policy of separation of the Tamil area from Sri Lanka and the creation of a separate state of Tamil Eelam.
6. The Sinhalese regard the Tamil demand for a separate state as unrealistic since they believe that such a state would not be viable economically and politically. They cite the unhappy record of divided countries in support of their point of view. They also consider the demand for a separate state as dangerous since it creates antagonism against Tamils among Sinhalese and polarizes the ethnic dispute. It has been claimed that the Sinhalese have a minority complex since, although a majority within Sri Lanka, they are a minority within Asia. There are more than 50 million Tamils in India and other parts of Asia.
7. A small terrorist group known as the Liberation Tigers has developed among Tamil youth in the north of Jaffna. This group has allegedly been responsible for a bank robbery, an attack on a police station, and a number of killings within the last six months. The development of terrorism among Tamil youth has been linked to frustration concerning opportunities for higher education and government service and assaults against Tamils by police. To cope with the terrorist threat the government has adopted the Terrorism Act. This Act violates norms of the International Covenant on Civil and Political Rights ratified by Sri Lanka, as well as other generally accepted international standards of criminal procedure by permitting prolonged detention on administrative order without access to lawyers and the use of evidence possibly obtained under duress. The Court of Appeal has found in three of four cases brought before it concerning detainees under the Act that violence was used against the detainees during detention. The definition of an offence under the Act is unduly vague.
8. The tension between the ethnic communities creates an extremely dangerous situation in Sri Lanka which may escalate into major violence in the Island and negate all efforts to develop the Island economically. Despite long-standing tension, grievances and insecurities, the leaders of both communities should be prepared to undertake major efforts to resolve the ethnic conflict.
9. The long-term solution to the ethnic conflict in Sri Lanka in the interests of the entire population can only be achieved on the basis of respect for the rule of law and relevant human rights standards. It is regrettable that certain government and United National Party actions such as the actions and remarks of certain government and party members, the actions of security forces, the stripping of the civic rights of Mrs. Bandaranaike, the Parliamentary vote of no confidence in the Leader of the Tamil United Liberation Front as well as the adoption of the Terrorism Act have undermined respect for the rule of law in Sri Lanka."
The following recommendations are respectfully submitted to the government in view of its international commitment to human rights and its expressed desire to resolve the ethnic conflict and promote economic development in Sri Lanka:
1. A primary concern of the government should be the physical security of the minority Tamil population and the avoidance of future communal violence so frequently directed against Tamils in the past. The army and police should be strictly controlled and used to ensure the safety of all Sri Lankans. In this regard the government should pursue a vigorous policy of investigation and prosecution of police officers responsible for the burning of many areas in Jaffna in May-June 1981. This serious violation of the duties of security forces deserves severe government condemnation and the enforcement of disciplinary and criminal sanctions. A thorough investigation should also be carried out of the role of organized groups in the communal violence against Tamils in August 1981 and individuals and groups found responsible should be prosecuted.
2. The government and the United National Party should make major efforts to ensure that, in the future, no member of the government or the Party is responsible for stimulating racial intolerance or violence by words or actions. Special attention should be given to limiting the role of government and party members perceived as encouraging anti-Tamil sentiments. The government represents all Sri Lankans and must maintain great care to ensure it is not representative of only one ethnic group. Members of the opposition party, the Tamil United Liberation Front, should also discourage members of the Party from actions or language which exacerbate racial tension and contribute to violence. It should be noted, however, that citing discriminatory government policies and adopting the policy of a separate state of Eelam are legitimate exercises of the right to free speech.
3. The government should lead a major national and international effort to rebuild and develop the Jaffna Public Library destroyed by arson by police in June 1981. Such an effort would evidence the respect of the government for the cultural rights of the Tamils, help to remedy a serious injustice done to the Tamil community and contribute to restoring Tamil confidence in the government.
4. The government should seriously consider the ratification of the Convention for the Elimination of All Forms of Racial Discrimination. This Convention has been ratified by more than 100 countries, including the majority of developing nations. An international commitment to eradicate racial injustice in Sri Lanka should contribute to the improvement of the racial climate in that country.
5. In view of the draconian provisions of the 1979 Terrorism Act which violate accepted standards of criminal procedure, the government should urge its parliamentary majority not to re-enact the Act on its expiration in 1982 or to amend it so that its provisions on arrest, detention and evidence conform with the international commitments made by Sri Lanka in ratifying the Covenant on Civil and Political Rights.
6. In conformity with its commitment to the rule of law, the government should rely on the usual methods of criminal procedure in combatting terrorism as well as on eliminating the underlying causes which have led to terrorism among Tamil youth. The most effective method of combatting terrorism among Tamil youth would appear to be (1) to provide Tamil youth equal access to education and employment on the basis of merit (2) to prevent violence by security forces against Tamils and (3) to provide substantial autonomy to the Tamil population in the north of Sri Lanka. Eliminating the objectionable features of the Terrorism Act should not result in an increase in terrorism since the application of the Act in Sri Lanka and similar acts elsewhere has not appeared to decrease terrorist activity. On the contrary, there is evidence that the use of tactics permitted by the Act may lead to greater antagonism among the minority groups in which terrorism develops and thus be counter-productive.
7. It is to be hoped that the judiciary will continue to play an important role in tempering the objectionable features of the Terrorism Act, emphasizing the importance of procedural safeguards even for persons accused of serious crimes and upholding the rule of law in accordance with the Sri Lankan Constitution.
(a) Educational Policies
8. The government should re-examine its policies on university admissions with a view to basing admission on merit rather than on racial grounds. Tamil and Sinhalese young people alike will then have equal rights to university education on the basis of capacity rather than on race. One of the major points of tension among Tamil youth has been the implicit racial quota imposed under present university admission policies which has barred many competent youths from pursuing higher education.
(b) Employment in Government Service
9. Policies concerning the use of Sinhala, inter alia, have seriously lessened the opportunities of Tamils for government employment. The government should adopt a system for recruitment for government service which provides equal opportunities for all persons regardless of ethnic origin.
10. The government should give renewed attention to Tamil concern over government sponsored colonization schemes which bring large numbers of Sinhalese into Tamil areas and thus change the ethnic composition in such areas. This is particularly important in view of the insecurity of Tamils due to communal violence against them in areas where they are a minority.
Autonomy in Tamil Areas; District Development Councils
11. The government should continue and expand its policy of decentralization. It appears essential that Tamils be given greater roles in government administration in the areas in which they constitute an overwhelming majority. This can best be accomplished through substantial roles being given to the District Development Councils. Decentralization appears to be the only hope of avoiding more widespread agitation for a separate State of Eelam.
Role of Police and Army
12. Consideration should be given to providing Tamils with a larger role in security forces in the areas in which Tamils predominate. The presence of primarily Sinhalese police and army officers in these areas, the actions of some of the security forces and the perception that these forces represent an army of occupation has been unnecessarily provocative and a source of insecurity among Tamils.
13. Clear directives should be given to police and army officers that assault and torture of detainees are unacceptable practices. The Court of Appeal recently found that assault occurred on three of four detainees before the Court on petitions for writs of habeas corpus. The government should ensure that such assaults are not part of a consistent administrative pattern by disciplining and prosecuting officers responsible for such practices.
Education for Racial Understanding and Tolerance
14. A major effort towards education in racial understanding and tolerance should be made at all levels of education and among the adult population. Such an effort might be coordinated through the government sponsored Human Rights Centre. The increasing antagonism and lack of understanding between the two ethnic groups must be combatted by vigorous efforts.
On August 31, 1981, President Jayewardene, R. Premadasa, the Prime Minister, and eight Cabinet ministers agreed with leaders of the Tamil United Liberation Front to set up a high level joint committee to discuss questions in dispute between them. The London Times reported on September 1 that "The agreement which came after two earlier rounds of discussions is the biggest breakthrough towards creating peace between Tamils and Sinhalese since President Jayewardene's Government took office in July, 1977."
The settlement agreed to between the two groups, however, was not made public. According to the Times of India (Sept. 2, 1981), "The reluctance of the government to divulge the details of the settlement at this stage was viewed by observers as being motivated by the fear that it would arouse the ire of Sinhalese nationalists, who all along have opposed any major concession to Tamils, such as giving them a degree of internal autonomy." The settlement apparently provided for much wider powers than previously accorded to be given to the District Development Councils elected in June, thus permitting Tamils substantial management of their internal affairs in the northern area. If such is the case, it would indeed be a major step in the solution of the Tamil problem.
The United National Party took certain steps in September to purge itself of elements which appeared to be contributing to anti-Tamil sentiments and encouragement of violence. One member of the Party who was expelled had earlier urged Parliament to act as a Judicial Committee and to take action against the Leader of the Opposition Tamil party. Under pressure from the UNP, three Members of Parliament belonging to the Party agreed to withdraw remarks made in Parliament in July during the debate on the vote of no-confidence in the Opposition Leader. The Deputy Minister for Regional Development, who represented Ratnapura in Parliament, was removed from his office in September. Ratnapura was one of the areas hardest hit by racial violence in August.
Despite reports of these positive steps, the ICJ has also received reports of numerous acts of violence by the army against individual Tamils during the month of October. The government action in negotiating with the TULF and in ridding the Party and government of anti-Tamil elements is commendable. It is to be hoped that the government, however, will also be particularly vigilant in protecting the physical security of Tamils. A basic responsibility of the government is clearly the safety and security of the entire population. The fate of the Tamils in Sri Lanka remains a matter of international concern.
Professor Virginia A. Leary
1. The terms "racial" and "ethnic" are both used in Sri Lanka to describe the problems arising between the Sinhalese and Tamil communities. Although the term "ethnic" is preferable, the two terms are used interchangeably in this report in view of their common usage in Sri Lanka.
2. The Ceylon Daily News, on July 29, 1981, however, carried an item that customs at the Sri Lankan International Airport that day had detained 200 copies of a New York newspaper, Worker's Vanguard, which carried an interview with the Leader of the Opposition, Mr. A. Amirthalingam. The copies were addressed to a person in Sri Lanka.
3. For information about the proposed bill see "An Assault on the Right to Read," Civil Rights Movement (CRM) of Sri Lanka, Colombo. For general information about the press in Sri Lanka see Gunewardena, V., "Man, Media and Development: The Press in Sri Lanka," 3 Human Rights Quarterly, No. 3, p. 89, Summer 1981.
4. For an excellent account of the history of racial conflict in Sri Lanka see Coomaraswamy, R., "Ethnic Conflict in Sri Lanka", 1981, Marga Institute, Colombo (forthcoming publication). See also Schwarz, W., The Tamils of Sri Lanka, 1975, Minority Rights Group, London, and "Race Relations in Sri Lanka," Logos, vol. 16, 1977.
5. Coomaraswamy, op. cit.
6. Government colonization schemes providing for relocation of depressed populations to more fertile areas have been a continuing problem between the two ethnic communities. Such schemes have frequently involved moving Sinhalese into Tamil areas.
7. For a succinct survey of the constitutional history of Sri Lanka see de Silva, K.M., "A Tale of Three Constitutions," The Ceylon Journal of Historical and Social Studies, New Series, vol. VII, No. 2, p. 1 (1977).
8. In 1970-71 Tamils constituted 40.7% of admissions into engineering faculties in Sri Lanka and 40.8% of admissions into medical faculties. In 1975, the Tamil percentage of admissions in these same two faculties was 14.1% and 17.4%. Jayawickrama, N., Human Rights in Sri Lanka, Office of the Secretary, Ministry of Justice, Colombo, Sri Lanka, August 4, 1976.
9. See footnote 6.
10. Coomaraswamy op. cit.
Report of Presidential Commission of Inquiry (Sansoni Commission), Sessional Paper No. VII of 1980.
11. On Sept. 16, 1981, Dr. Fernando was expelled from the United National Party for violating the code of conduct and the party constitution. According to press reports he was expelled for criticizing the government, the party and the party leader in public.
12. Concern has frequently been expressed about violations of human rights which occur during states of emergency. Delay in declaring a state of emergency may, however, also result in human rights violations. In August, 1981, in Sri Lanka the murder, looting and arson were finally brought under control only after the state of emergency was declared. The declaration of the emergency was perhaps even too long delayed. It remains true, however, that special precautions must be taken during periods of emergency to avoid unnecessary abuse of rights. Widespread brutal attacks by police and armed forces were alleged to have occurred in Jaffna during a state of emergency declared by the Sri Lankan government in 1979.
13. Interview of the author with the Permanent Secretary of the Ministry of Justice, August 21, 1981.
14. Investigation Into Acts of Terrorism, Ministry of Foreign Affairs, June 25, 1981.
15. Hansard, June 9, 1981.
16. Insecurity of Tamils in Sri Lanka, CBFTRR/TRG Publications, 10th June 1981, London.
17. "The sporadic acts of violence that have marred the traditionally tranquil atmosphere of Jaffna did not crop up spontaneously. They can be related directly to gross political discrimination meted out to the Tamils and the reign of police and army terror unleashed on them in the post-1970 period," Emergency 1979, p. 5, Movement for Inter-Racial Justice and Equality TMIRJE), 1980. MIRJE is a Sri Lankan movement which includes both Sinhalese and Tamil members.
18. The June incidents in Jaffna were discussed in Parliament on June 9, 1981. Mr. Gamini Disanayake, Minister of Lands and Land Development, presented the government viewpoint and Mr. A. Amirthalingam, leader of the TULF in Parliament, spoke for the Tamil Party. Hansard, June 9, 1981.
19. The letter has been published by the Civil Rights Movement, Colombo.
21. Report of MIRJE Delegation to Jaffna, 6 June-9 June, 1981, reprinted in Violence in Jaffna 1981, p. 19, Centre for Society and Religion, Colombo.
22. Schwarz, The Tamils of Sri Lanka, 1975, p. 15, Minority Rights Group, London.
23. Wilson, A. J. "Focus on the New Constitution," Sunday Observer, Sept. 10, 1978, reproduced in Towards Concord, Facts About the Tamil Problem in Sri Lanka, Department of Information, Sri Lanka, 1979.
24. Coomaraswamy, op. cit.
26. Investigation into Acts of Terrorism, Sri Lankan Ministry of Foreign Affairs, June 25, 1981.
27. "Cost to Civil Rights of Fighting Terrorism," by Stephen Cook, The Guardian, January 13, 1980.
28. For criticism of this Act No. 28 of 1967 see "The Terrorism Act of South Africa," Bulletin of the International Commission of Jurists, No. 34, June 1968, p. 2T, and Suzman, "South Africa and the Rule of Law," The South African Law Journal, Vol. LXXXV (Part III), August 1968, p. 261, 26-9.
29. Bulletin of the International Commission of Jurists, op. cit., p. 31. Article 20(2) of the International Covenant on Civil and Political Rights provides that "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law." This provision of the Covenant might be considered as constituting international approval of the relevant portions of the South African and Sri Lankan Terrorism Acts. Democratic countries have recognized the potential danger to human rights from such provisions. The United Kingdom and the United States inter alia, on signing the Covenant, have made reservations to this Article in the interests of the protection of freedom of speech.
30. Ibid., p. 34.
31. See later section of this report on International Norms.
32. Other paragraphs of Article 15 permit restrictions on a number of other fundamental rights.
33. The detainees on whose behalf petitions for writs of habeas corpus were filed are S. Arunagirinathan, C. Kulasegarajasingam, S. Murugaiah and V. Sivaselvam.
34. Court of Appeal--Habeas Corpus Application Nos: 10/81, 11/81 and 13/81. In the Matter of an Application for an Order in the Nature of a Writ of Habeas Corpus under Article 41 of the Constitution of the Democratic Socialist Republic of Sri Lanka, decided on 10th September, 1981.
35. European Court of Human Rights, Judgment, Case of Ireland against the United Kingdom, para. 36, p. 12, Strasbourg, 18 January, 1978.
36. The introduction to the brochure prepared by the Ministry of Foreign Affairs refers to the reasons for its publication: "Some of the visitors to Sri Lanka may have received pamphlets distributed by interested parties trying to make out that the detention of a number of persons in connection with the recent incidents of violence in Jaffna, is unjustified and harsh. This note sets out the circumstances in which 27 persons are now under detention under the provisions of the Terrorism Act."
37. European Court of Human Rights, Judgment, Case of Ireland against the United Kingdom, para. 74, p. 23, Strasbourg, 18 January, 1978.
38. Ibid., para. 74, p. 24.
39. Investigation Into Acts of Terrorism, Ministry of Foreign Affairs, June 25, 1981.
40. Inter-American Commission on Human Rights, Third Report on the Situation of Human Rights in Chile, 1977, p. 44.
41. O'Donnell, D., States of Siege or Emergency and Their Effects on Human Rights, Observations and Recommendations of the International Commission of Jurists, 1980, n. 32.
42. Ibid., n. 33, Report of the Committee of Inquiry into Police Interrogation Practices in Northern Ireland, The Bennett Report, HM Stationery Office, London, 1979, pp. 30 and 92.
43. The Lawless Case, 4 Yearbook of the European Convention on Human Rights (1961), pp. 472-3; cited in O'Donnell, ICJ Observations, op. cit., p. 18.
44. O'Donnell, ICJ Observations, op. cit., p. 1.
45. CCPR/C/XII/CRP.1/Add.16, p. 6.
46. See The Right to Self-Determination, Study prepared by Aureliu Cristescu, E/CN.4/Sub.2/404/Rev.1 (1981); The Right to Self-Determination, Study prepared by Hector Gros Espiell, E/CN.4/Sub.2/405/Rev.1 (1980).
47. UNGA, Res. 2625 (XXV), 24 Oct. 1970.
A report by the staff of the International Commission of Jurists
Ethnic Violence in Sri Lanka, 1981-1983
The outburst of ethnic violence in Sri Lanka at the end of July 1983, by far the worst to have occurred since independence, did not come as a surprise to those who have been following events in the country.
On June 11 Gamini Navaratne, a Sinhalese journalist wrote in the Saturday Review, an English language weekly published in Jaffna,
"After six years of United National Party rule, Sri Lanka is once again near the incendiary situation of 1958. Let's hope to God that no one, from any side, will provide that little spark that is necessary to set the country aflame. The politicians of all parties, should be especially careful about their utterances in this grave situation".
Unfortunately the spark was provided on 23 July when 13 soldiers were killed in an ambush by members of the small Tamil terrorist organisation, styling themselves as the Tigers. According to a report published in the London Times on 27 July, the soldiers were killed in reaction to the abduction and rape of three Tamil girls by a group of soldiers. In addition, about 3 days before the attack by the Tigers two suspected terrorists were shot by army soldiers at Meesalai Chavakacheri, 15 miles from Jaffna. As the government had suspended Tamil language newspapers at the beginning of July, and as this explanation was not published in the Sinhalese language press, the public was not aware of these earlier incidents, and the killing of the soldiers became the signal for unleashing widespread racial violence.
In July and August 1981, Virginia Leary, Professor of International Law at the State University of New York at Buffalo, undertook a mission on behalf of the International Commission of Jurists to study the human rights aspects of the Terrorism Act and events related to its adoption and application. Her report, entitled "Ethnic Conflict and Violence in Sri Lanka", to which this is a supplement, was published by the ICJ. The conclusions of her report will be found on pages 72 to 76.
Among her recommendations were that
- a primary concern of the government, the ruling United National Party and the opposition Tamil United Liberation Front (TULF) should be to ensure that no members of the government or of the two parties are responsible for stimulating racial intolerance or violence by words or actions;
- the army and police should be strictly controlled and used to ensure the safety of all Sri Lankans; thorough investigations of the communal violence against Tamils in August 1981 should be carried out and individuals and groups found responsible should be prosecuted;
- the Jaffna public library, destroyed by arson by the police in June 1981 should be rebuilt;
- the Terrorism Act, which violates accepted standards of criminal procedure and has proved to be counter-productive, should not be re-enacted on its expiration in 1982 and the government should rely on the usual methods of criminal procedure in combatting terrorism;
- in order effectively to combat terrorism among Tamil youth, they should be given equal access to education and employment on the basis of merit, violence by security forces against Tamils should be prevented, and substantial autonomy should be provided to the Tamil population in the north of Sri Lanka;
- the government should heed Tamil concern over the colonisation schemes bringing large numbers of Sinhalese into Tamil areas, in view of the insecurity of Tamils due to communal violence in areas where they are a minority;
- the government should continue and expand its policy of decentralisation, giving greater autonomy in areas where the Tamils constitute an overwhelming majority;
- clear directives should be given to police and army officers that assault and torture of detainees is an unacceptable practice.
Unfortunately these recommendations were not followed, and the next two years saw an increasing escalation of ethnic violence and counter violence and the introduction of further repressive legislation by the government, used almost exclusively against Tamils.
The Presidential election and referendum to extend the life of the Parliament
An early Presidential election and a referendum to extend the life of the Parliament took place at the end of 1982.
Mr. Junius R. Jayewardene had come to power in 1977 with a sweeping victory that gave his party a sufficient majority (143 out of 166 seats) to vote amendments to the Constitution at will. Some amendments affecting Article 3 of the Constitution enshrining the sovereignty of the people required approval in a referendum.
Following the 1977 election major amendments were approved to the Constitution, replacing the former system based on the British model by an executive presidency, the president's term being fixed at 6 years.
On August 26, 1982 the Constitution was again amended to enable the President to seek re-election before the end of his term. Since the Supreme Court ruled that the amendment did not infringe Article 3 of the Constitution, no referendum was held. Critics suggested that the President was motivated to make this amendment and to seek re-election before the expiry of his original term by the waning popularity of his party.
In the Presidential election held on October 20, 1982, President Jayewardene received 52.9 percent of the total votes polled. His nearest rival Mr. Hector Kobbekaduwa of the Sri Lanka Freedom Party polled 39.1 percent.
One main criticism of the Presidential election was that President Jayewardene held the election after effectively silencing his rival leader and former Prime Minister Mrs Bandaranaike. This was done by Parliament imposing civic disabilities on her for seven years, with the result that she was banned from participating in elections or even campaigning on behalf of other candidates for that period, under pain of fine or imprisonment, and any successful candidate for whom she campaigned could be unseated. This was done following investigations by a Special Presidential Commission whose procedure seriously violated basic principles of the rule of law (see ICJ Review No. 21, December 1978 at p. 11).
Another relevant feature of the Presidential election was the opposition by the Tamil minority. In the predominantly Tamil Jaffna district the voter turnout was only 46 percent as compared to the national average of 81 percent. President Jayewardene, who headed the polls in 21 of the country's 22 districts, was third in Jaffna.
President Jayewardene interpreted his re-election as an approval by the nation of his policies. Soon after this Presidential election President Jayewardene again used his Parliamentary majority to amend the Constitution, this time to extend the life of the Parliament till 1989, and he called for a referendum to approve the amendment. In a press statement on 25 November 1982, the Secretary-General of the ICJ, commented that the recent amendments and the proposed amendments savoured more of political manoeuvring than of a desire to maintain the stability of the Constitution, and expressed the hope that in the coming referendum the electors would reflect carefully before allowing the undoubted popularity of the President to undermine the tradition of constitutional rule.
The referendum, which took place on December 22, was conducted under emergency regulations imposed on the eve of the presidential election on 20 October. Of the 22 districts, 15 voted in favour of extending the life of the Parliament and seven districts, including the districts with large Tamil populations, voted against. The voter turnout in the Tamil-dominated Jaffna district was particularly heavy, with 95 percent of the 265,000 voters opposing the continued dominance by the United National Party led by President Jayewardene.
In a statement made after the referendum one of the leaders of TULF, Mr. Sivasithamparam said (1)
The Civil Rights Movement of Sri Lanka published a critique of the referendum questioning whether it was in fact 'free and fair'.
Far from allowing the Prevention of Terrorism Act to expire in 1982, as recommended by Professor Leary, the government made it permanent in March 1982, and a new Section 15A was added. This gives the Minister of Defence (not the Minister of Justice) the power to order that any person remanded in custody under the Act may be kept "in the custody of any authority, in such place and subject to such conditions as may be determined by him". This means that the suspect can be transferred, for example, from a civilian prison or police station to a military barracks or camp.
Experience in many countries shows that when persons are removed from the custody of trained prison officers and handed over to military custody, abuses are liable to result. In Sri Lanka itself such abuses had been reported by Prof. Leary, including some which had been the subject of findings by the Court of Appeal.
The fact that the Prevention of Terrorism Act was made permanent, and that its use has been exclusively or almost exclusively directed against Tamils, indicates that the government intended to use the weapon of preventive detention permanently and not merely as a temporary measure in dealing with the minority problem.
The Colombo-based multiracial Movement for Inter-racial Justice and Equality (MIRJE) sent a delegation on 21-22 March 1982, consisting of its President Rev. D.J. Kanagaratnam, the National Organiser Mr. Shelton Perera and Mr. Wilfred Silva, to Vavunia district to study the human rights abuses. In its report submitted to the President it said (2)
Even after publication of such a report the government did not take any effective steps to prevent atrocities by the police and army personnel.
The government lack of respect for the rule of law can be illustrated by three cases in which a mantle of protection was thrown over officials who had exceeded or abused their powers.
In the first case the Supreme Court passed strictures against Mr. P. Udagampola, a class I Superintendent of Police, for preventing a Buddhist monk from distributing leaflets arguing against the referendum, and said that he should pay compensation to the monk. Instead of taking any action against the officer, the Cabinet of Mr. Jayewardene decided to promote him and the compensation was paid by the government.
In September 1982 two army personnel who had been arrested and remanded in connection with the shooting of a lame Tamil youth, Kandiah Navaratnam, were released by the magistrate on the instructions of the Attorney-General.
In another case, in June 1983, a bench of three judges of the Supreme Court ruled that the arrest of Mrs Vivienne Goonewardene by the police officer in charge of Kollupitiya police station was unlawful, and that the State should pay compensation of Rs. 2,500. Again the officer concerned was promoted. The reason given was that it would enable police officers to carry out their duties without fear of being punished.
In the face of this attitude by the government, it is not surprising that the police and army increasingly took the law into their own hands, as will be seen when examining the activities of the police and army in the Tamil district of Jaffna.
Indeed, after judgment was delivered in Mrs Vivienne Goonewardene's case, the residences of the three judges who constituted the bench were attacked on June 11 by a mob. Their attempts to obtain assistance from the police were of no avail. An editorial in The Island newspaper published in Colombo said (3)
Escalating terrorist violence and counter violence
Many observers have commented that the harassment and violence by the army and police have contributed to growing support for the Tigers. Though the actual number of Tigers is not known, the group appears to be small, with estimates ranging from 25 to 1,200 (4)
The October 1982 presidential election and the referendum led to an increase in the activities of the 'Tigers', which included an attack on police and army personnel. On the eve of the presidential election a country wide emergency was imposed and 'Suthanthiram', a Tamil newspaper, was suspended.
With the growing failure to prevent the terrorists' activities, the government started using the Terrorism Act more widely. For example, on November 11, 30 people were arrested. These included 8 priests, 6 belonging to the Catholic church and two to the Anglican and Methodist churches. A university lecturer and his wife were also arrested on the same day. On November 17, two of the Catholic priests, Fr. Singarayer and Fr. Sinharasa, and the university lecturer Mr. Nityanandam and his wife Mrs Nirmala Nityanandam were charged under the Prevention of Terrorism Act. They were accused of withholding information about terrorists and harbouring them. (The other six Catholic priests were released).
Fr. Singarayer in a letter addressed to the President of the Bishops Conference of Sri Lanka stated that he was tortured and made to sign statements (5).
In the predominantly Tamil districts protests were organised for the release of the priests and others. On 10 December a protest fast and a prayer meeting were organised at St. Anthony's church at Vavunia. After the prayer meeting, the gathering was attacked by the army, who even entered the church to assault some of the people. This led to further protest in the form of hartal (closing of shops and business establishments) in the town of Vavunia.
The beginning of 1983 saw a continuing escalation of the violence by both sides, the Tamil terrorists and the Sri Lankan army. In January a U.N.P. organiser was shot dead at Vavunia by the terrorists. In February a police Inspector and a driver were shot dead. In March an army vehicle was ambushed and five soldiers were injured.
The counter violence by the army and police included an attack in March on a refugee settlement helped by a voluntary organisation called the Ghandhiyam Society.
The Ghandhiyam Society, which was formed in 1976 as a social service organisation, had been involved in rehabilitating the refugees of the 1977 and 1981 racial riots. These refugees had fled from the southern parts of Sri Lanka and had settled down in the existing Tamil villages in and around Trincomalee district. The Ghandhiyam organisation, with the help of church agencies from West European countries, had been helping these refugees to build houses, dig wells, and use better methods of cultivation, and was conducting health and education programmes.
One such village settlement in Pannakulam in Trincomalee district was attacked on 14 March. Sixteen huts were burnt and the Gandhiyam volunteers were intimidated. Though the affected families filed a complaint, no action was taken.
In the beginning of April Dr. Rajasundaram, the Secretary, and Mr. S.A. David, the President of the Gandhiyam Society, were arrested. It was alleged that both were tortured and that, even after a court order, access to their lawyers was delayed. Both were accused of helping the Tamil terrorists through the Gandhiyam organisation. Dr. Rajasundaram was one of the persons killed in the Colombo prisons between 25 and 29 July 1983.
On 1 June 1983, a farm and a children's home in Kovilkulam village near Vavunia, run by the Gandhiyam Society, were burnt. Mr. Tim Moore, Honorary Treasurer of the Australian Section of the ICJ, who was able to visit the place in June 1983, says in his report that he "discussed the operations of the movement with a wide range of people in Sri Lanka and came to the conclusion that it is not involved in politics or with the Tigers, but is a genuine social service organisation. The Sinhalese suspicions with respect to its resettlement activities appear to arise more from increases in Tamil populations in areas close to Sinhalese settlements than from any legitimate grievances about its activities".
Events that took place in the months of May and June clearly indicate that the situation was deteriorating seriously.
On May 18, polling took place for 37 municipal and urban Councils and 18 Parliamentary seats. It was reported that between nominations and polling day, militant Tamil youths had launched a violent campaign for the boycott of the polls. Two U.N.P. candidates and the party secretary of the Jaffna district were shot and killed. Acts of violence to disrupt the elections in the Jaffna district were maintained right up to election day on 18 May, when several polling stations were attacked with home made bombs. The major confrontation came after the voting ended, when a gang of armed Tamil youth stormed a polling station two miles from Jaffna in a bid to seize the ballot boxes. An army corporal on guard duty was killed and four policemen and a soldier were wounded.
At 5 pm on the same evening, a state of emergency was declared. Later in the night, in what was clearly a retaliatory strike, soldiers burnt houses and vehicles and looted in the general vicinity of the polling booth in which the incident had taken place. Several million rupees worth of damage was done before the soldiers were pulled back to their barracks.
Mr. Tim Moore states in his report that on the same night, while attempting to burn down the Jaffna cooperative stores, located opposite the Jaffna medical hospital, the soldiers had shot at some of the hospital personnel who were watching from the hospital. A non medical junior member of the staff was injured.
A report published on the incident in the Far Eastern Economic Review of 2 June 1983 quoted a senior police officer as saying, "what happened in Jaffna after the shooting is exactly what the terrorists want; they want the people to be resentful and embittered with the army'.
The army continued to penalise the Tamil community at random for the actions of the militant Tamil youth. For example, on 1 June, two members of the airforce were killed by the Tamil youth while they were making routine purchases in the local market in the town of Vavunia. One of the shops was alleged to have been used by the Tamil youth to attack the two airmen. In retaliation, soldiers set fire to the shop and the adjacent shops. Mr. Tim Moore, who inspected the site, says that the damage "extended to some 16 or 17 small shops and destroyed the means of livelihood and a considerable portion of the assets of the traders involved".
The innocent Tamils affected on both occasions had no possibility of claiming compensation for the losses incurred by the illegal acts of the soldiers. Such disciplinary action as was taken against the soldiers involved in the May 18 incident was withdrawn by the government when 40 soldiers of the same regiment deserted in protest.
The situation was aptly summarised by a correspondent of the Far Eastern Economic Review of June 23, 1983, reporting from Jaffna on the May 18 incident. He said, "At present the northern and eastern provinces are experiencing a vicious circle of violence: terrorism followed by reprisals by the army and other security agencies, which have led to a drastic deterioration of law and order".
In the midst of this increasing violence by the army and police a Public Security Ordinance was promulgated authorising the police, with the approval of the Secretary to the Minister of Defence, to bury dead bodies in secret without any inquest or post mortem examination.
Government spokesmen sought to justify this measure by stating that "the morale of service and police personnel is low because under normal circumstances if they shoot down a terrorist they have to face an inquest, remands and other constraints", and, on another occasion, that "the government wishes to ensure that servicemen and policemen doing their duty under difficult circumstances are in no way harassed by the law".
This extraordinary ordinance applies to the burying of any dead body, including persons who have died in custody. Mr. Tim Moore, after examining the records in Jaffna in June 1983, stated that at least 23 members of the Tamil community have died since July 1979 in army or police custody. In addition four persons are reported to have disappeared after arrest.
On 10 April 1983 a young farmer from Trincomalee, K. Navaratnarajah, died in custody after having been held without charge for two weeks. Twenty five external wounds and 10 internal injuries were found on his body during the post mortem examination. At the end of an inquiry a verdict of homicide was given by a Jaffna Magistrate. No action has been taken by the government against those responsible.
Tension between the Tamils and Sinhalese spread to other parts of the country. Tamil students were attacked in the universities, and passengers in trains to and from Jaffna were attacked.
The effect of this indiscriminate counter violence was well summarised in the article already referred to in the Far Eastern Economic Review of 23 June 1983:
"The Tamil underground secessionist movement seem to have acquired more popular sympathy than it had a few months ago - but it is still too early to predict its chances of success in its quest for Eelam, the name for the sought-for sovereign state that the, secessionists want carved out of Sri Lanka. However, what cannot be ignored is the current total alienation of the Tamil region from Colombo, plus an unusually high degree of antipathy between the Tamils and the majority Sinhalese communities of Sri Lanka".
The government's reaction to this situation was to intensify the repression. In an interview with Ian Ward, published in the London Daily Telegraph on 11 July 1983, President Jayewardene said:
The sense of frustration of President Jayewardene is understandable, but his remedy is of doubtful validity. A doctor cannot remove a seat of infection without knowing where it is located. Experience in many countries shows that terrorist organisations cannot be run to earth where they have popular sympathy, and a general indiscriminate repression of the public will only serve to increase such sympathy.
The violence that rocked Sri Lanka between 24 July and 2 August surpassed all earlier incidents.
As has already been stated, this outburst of communal violence is attributed to a reaction to the killing by the Tigers of 13 soldiers on 23 July. The opposition leader, Mr. Amirthalingam, has stated that 51 people had been killed in the Jaffna peninsula by troops in previously unreported incidents (Guardian, 8/7/1983). The period over which these killings are alleged to have occurred is not stated.
On 7 August President Jayewardene disclosed that the army in Jaffna had gone on the rampage in response to the killing of the 13 soldiers and had killed 20 civilians. He said this information had been withheld from him by the army until 7 August. This presents a terrifying picture of army discipline.
On 25 July a group of 130 naval personnel in Trincomalee went on a rampage burning 175 Tamil houses, killing one Tamil and wounding 10 others before they were able to be rounded up and returned to their barracks.
Press reports published on 27 July described the killing of 35 prisoners during a fight in Colombo's Welikada jail. The London Times called it the 'worst incident so far in the violence sweeping the country'.
On 29 July it was reported that 17 more Tamil prisoners had been killed in the same prison. These included Dr. Rajasundaram, Secretary of the Gandhiyam Society.
All the 52 prisoners killed in Welikada prison were arrested under the Prevention of Terrorism Act. It is not clear how it was possible for the killings to take place without the connivance of prison officials, and how the assassinations could have been repeated after an interval of two days, since Welikada prison is a high security prison and the Tamil prisoners were kept in separate cells.
On and after 29 July there were widespread reports of the destruction by burning in Colombo of Tamil-owned businesses, shops and factories, the seizure of Tamils from their homes, and the looting and burning of their homes. This violence then spread to other centres of population.
Nearly one-half of the 141,000 Tamils living in the Colombo area may have been left homeless. According to government sources 350 people died in nine days of rioting, and many fear the toll to have been much heavier. More than 100,000 people sought refuge in 27 temporary camps set up across the country. Some of the refugees were shipped to the Jaffna peninsula.
Reports from Colombo indicate that the attacks on Tamil business men and others were highly organised. Those making them went direct to the homes of the persons concerned and set them on fire. The looting of their homes and shops took place subsequently and was compared by one journalist to the arrival of vultures to take pickings from the prey.
The suspicion is strong that this organised attack on the Tamil population was planned and controlled by extremist elements in the government UNP party, and that the killing of the 13 soldiers by the Tigers served as the occasion for putting the plan into operation. Some reports go so far as to allege that a member of the Cabinet was actively involved in planning these attacks.
Some of the factories burnt down bore English names from pre-independence days, and the ordinary public would not know that the majority shareholding had been acquired by Tamils.
For three days of the violence there was no word from President Jayewardene. On 29 July he made a 4 minute speech in which he announced that any organisation supporting the division of Sri Lanka would be proscribed, and that any person subscribing to such a policy would not be allowed to take a seat in Parliament, would lose the right to vote and other civil rights, could not hold office, could not practice a profession and could not join any movement or organisation.
Further the President said that the Sinhalese will never agree to the division of a country which has been a united nation for 2,500 years. On the separatist movement he said "this movement for separation was non-violent, but since later in 1976 it became violent. Violence increased and innocent people were murdered. It has grown to such large proportions that not a few but hundreds had been killed during this movement".
Surprisingly there was no condemnation of the violence against the Tamils. Rather the President seems to have sought to placate the majority Sinhalese and by implication to justify the racial atrocities.
Further, the statement that the country had been united for 2,500 years flies in the face of history. There was for some centuries an independent Tamil kingdom and the chronicles report frequent wars between Sinhalese and Tamil kings. Separate Sinhalese and Tamil communities existed on the island from the pre-colonial era until the administrative unification of the island by the British in 1833.
On 3 August Parliament approved an amendment to the Constitution providing for the banning of any political party advocating secession of any part of the country. This is clearly aimed at the main opposition party, the TULF, which at the time of its formation did pass a resolution advocating secession. However, as will be shown in the following section, the subsequent history of the party shows that its leaders have repeatedly shown readiness to compromise, have never advocated or had recourse to any unconstitutional action, and have clearly denounced and dissociated the party from the terrorist Tigers organisation.
The government also announced the banning of three left-wing parties for their alleged involvement in the communal violence. The three banned parties were the Janata Vimukti Peramuna or People's Liberation Front, the Nava Sama Samaja Party or New Equal Society Party and the Communist Party of Sri Lanka.
President Jayewardene has also sought to suggest that the communal violence was fomented by a foreign power, apparently meaning the USSR. Government representatives have referred to Tamil guerrillas being harboured in the South Indian state of Tamil Nadu. At the time of writing no evidence has been published connecting the banned parties or any foreign country with the communal violence.
On May 17, 1976, the Federal Party and other Tamil organisations united to form the Tamil United Liberation Front (TULF). A resolution adopted by the TULF at their first national conference in 1976 stated,
In May 1976, Mr. Amirthalingam, Joint Secretary-General of the Front, was arrested on a charge of inciting to defy the Constitution. In September in the same year, a special court of three High Court judges discharged him on the grounds that the emergency regulations under which he was charged were constitutionally invalid. The Supreme Court reversed the ruling and ordered the Special Court to proceed with the trial, but the case was withdrawn by the then government led by Mrs Bandaranaike. So no judicial decision was made whether the party was defying the Constitution, and numerous subsequent episodes indicated that, notwithstanding the 1976 resolution, leaders of the party were willing to compromise on the issue of separation.
In February 1977, Mrs Bandaranaike had discussions with 21 Tamil members of Parliament belonging to the TULF and other Tamil parties in which the Tamil representatives agreed that they would not raise the demand for a separate Tamil state if an interim settlement could be worked out.
After the general election in which Mr. Jayewardene came to power, the TULF became the main opposition party in the Parliament. In May 1979 the Parliament approved a Bill banning the 'Liberation Tigers' and empowering the President to proscribe any organisation which advocated the use of violence or was engaged in any unlawful activity. The TULF and the Sri Lanka Freedom Party (SLFP) opposed the Bill on the ground that it could be used for suppressing all political opposition. In reply the Prime Minister, Mr. Premadasa, gave assurances that the new law would not be used against democratic and law abiding organisations and that the government did not suspect the TULF of being behind the 'Liberation Tigers'.
In the same year the government appointed a Commission to report on devolution and decentralisation of the administration and the creation of elected district councils in an effort to seek a solution to the problems of the Tamil minority. The TULF participated in the Commission and when the Bill for setting up District Development Councils was passed on August 21, 1980, the TULF members voted with the government.
In 1981, the TULF participated in the elections to the new district councils and gained control of six predominantly Tamil districts. The Liberation Tigers had opposed the TULF's participation in the elections.
In August 1981, President Jayewardene and Mr. Amirthalingam, leader of the TULF, agreed to set up a high level joint committee to discuss means of reducing communal tensions. Reporting on the progress of the discussions to his party's Parliamentary members, the President said that the TULF leaders had agreed to cooperate with the government in wiping out terrorism. He also said that he believed that there was no link between the TULF and the so-called separatist type of movement, and he warned party members not to be misled by what he described as certain elements intent on disturbing the peace. Mr. Amirthalingam said at a meeting of the Joint Committee on December 9 that the TULF had nothing to do with certain elements reportedly preparing a unilateral declaration of Tamil Eelam.
All this indicates that
- both the members of the TULF and the government considered the Tigers to be entirely different from the TULF party;
- though the TULF had passed a resolution supporting separation, it had continued to participate in the political life of the country;
- active participation in the Parliament and in elections show that the TULF party was not advocating or supporting extra-constitutional methods to solve its grievances.
The threatened banning of the TULF and the disqualification of its members of Parliament unless they openly renounce any claim to separatism places the TULF leaders in an impossible situation politically. In the present climate of opinion it would be impossible for them to retain the confidence of the Tamil population if they made such a declaration. The likely result will be to leave the Tamils without any representation in the Parliament and, in fact, to disenfranchise the Tamil people.
Paradoxically, the government's action in threatening to ban the TULF and in sending Tamil refugees to the north is a significant step towards a de facto partition of the country. The implications are very grave for the future.
The increase in population in the north due to the influx of the refugees from the south will increase the pressure on resources, including land, water, food and employment opportunities. In such a situation it appears that the Tamils will be left without any representatives or any party to negotiate their demands.
The communal violence of July 1983, compounded by government ineffectiveness and illegal counter violence by the armed forces, has resulted in the death of hundreds of Tamils, rendered thousands homeless, caused a major refugee movement to the north of the island and devastated the economy of the country.
It is imperative that the government, which is committed to a united country, should now take urgent steps to heal the national fabric.
It is clear that animosity between the Sinhalese and Tamil communities has now reached a level which makes the role of the government exceptionally difficult. The actions of the government during the recent violence appear to have been responsive to pressures from the armed forces and the majority Sinhalese community. Yet, the expressed desire of the government to maintain a united country can only be accomplished if the government represents the entire population and affords equal protection to all, not only to the Sinhalese majority.
Geneva August 1983
The Effects of Different Curing Methods on Tack-Free Curing
In almost all applications of UV curing, the issue of oxygen inhibition must be addressed in one form or another. As such, different UV-curable formulations must be optimized to contain an appropriate initiator package and UV light source to overcome oxygen inhibition, which adds additional cost and development time. The advent of LED lights has enabled less-expensive lights with increased operating lifetimes and improved energy efficiency. However, rather than covering a broad spectrum of wavelengths, LED lights emit in narrow bands of light. The narrow wavelength emission spectrum of LED lights will inevitably have an effect on both curing rates and oxygen inhibition. In this study, we evaluate the use of broadband mercury, 385 nm LED and electron beam (EB) curing across a range of different acrylic formulations. The different methods of curing are compared by examining their effect on oxygen inhibition, cure speed and material properties.
From coatings to biomedical implants to photolithographically controlled materials, photopolymerization has dramatic advantages. It can be utilized for in situ cure materials at whatever time, location and three-dimensional pattern desired. It is one of the most energy-efficient processes known and can be used as a 100% solvent-free process. One drawback that must be overcome in most photopolymerization applications is the severe inhibition of these polymerizations by the ubiquitous presence of oxygen.
Oxygen inhibits the cure of acrylates by diffusing into the coating and creating radicals that react much more slowly than other radicals and prevent polymerization. This process is shown in Figure 1. When the rate of oxygen diffusing into the coating is greater than the rate of initiation, oxygen inhibition can’t be overcome. When the rate of initiation is greater than the flux of oxygen into the system, tack-free curing can be achieved with long enough exposure times (Equation 1).
F_O2 = R_I · d (Equation 1)
Parameters that influence the flux of oxygen are the polymerization rate, viscosity of the resin and crosslink density of the formulation. Parameters that influence the rate of initiation are the irradiation intensity, the photoinitiator concentration, and the overlap of the emission and absorption spectra (of the lamp and the photoinitiator, respectively). Figure 2 shows the emission of a typical "H" bulb and the absorption of I-184 (Additol CPK).
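As a rough illustration of the balance expressed in Equation 1, the sketch below (ours, not from the study) treats tack-free cure as achievable once the initiation term R_I·d exceeds the oxygen flux into the film. Here d is read as the coating thickness, and the function name and all numbers are illustrative assumptions rather than measured values.

```python
# Hypothetical sketch of the Equation 1 criterion (not from the study).
# Assumes F_O2 is the oxygen flux into the coating [mol/(m^2*s)],
# R_i the rate of initiation [mol/(m^3*s)] and d the film thickness [m].

def tack_free_possible(oxygen_flux, initiation_rate, thickness):
    """Return True if initiation outpaces oxygen ingress (R_i * d > F_O2)."""
    return initiation_rate * thickness > oxygen_flux

# Illustrative numbers only, for a ~125 um film as used in the drawdowns above.
f_o2 = 1.0e-5   # assumed oxygen flux, mol/(m^2*s)
r_i = 5.0e-2    # assumed initiation rate, mol/(m^3*s)
d = 125e-6      # film thickness, m

print(tack_free_possible(f_o2, r_i, d))  # False -> raise intensity or photoinitiator level
```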
Numerous routes to overcome oxygen inhibition have been explored, including high irradiation intensity, high photoinitiator concentration, nitrogen purging and chemical additives such as thiol monomers. Generally, the routes to overcome oxygen inhibition with typical high-intensity mercury broadband UV lamps are well established. With LED curing gaining prevalence throughout the UV-curing industry, examining the effectiveness of these known routes to overcome oxygen inhibition and comparing to broadband UV is important to understand. In this work we compare methods to overcome oxygen inhibition using a typical broadband mercury irradiation source as well as LED sources including 405 and 385 nm. We have also evaluated curing with electron beam irradiation.
Materials and Testing
Epoxy diacrylate (PE230), polyester triacrylate (PS3220), urethane diacrylate (PU2100) and isobornyl acrylate (IBOA) were donated by Miwon North America Inc. Tripropylene glycol diacrylate (TPGDA) was purchased from Miwon North America Inc. CPS 1020 and CPS 1040 are proprietary thiol-ene based formulations. 1-hydroxy-cyclohexyl-phenyl ketone (Omnirad 481/I-184) and 2,4,6-Trimethylbenzoyl-diphenyl phosphine oxide (Omnirad TPO) were purchased from IGM Resins.
Substrates were coated with a ~125 µm layer of formulation using a wire wound drawdown bar.
Formulations were cured on a conveyor system using a Heraeus F300 light with 300 W/inch H bulb or a 25 W 385 nm LED (Heraeus) or a 405 nm LED that was donated by Dymax.
Tack Free Determination
A fresh latex glove was pressed against the surface of the polymer with moderate pressure. If the polymer is marred in any way the surface is tacky. If no residue is observed on the glove then the surface is considered to be tack free.
Results and Discussion
A study of the effect of photoinitiator concentration is shown in Table 1 for the epoxy diacrylate system mixed 50/50 with TPGDA. Here, the results show that there is a significant increase in the maximum belt speed when the photoinitiator concentration is increased from 4 to 6 wt%. Beyond 6 wt% photoinitiator, tack-free curing is achieved with the maximum belt speed of 155 fpm. When cured with the LED light, tack-free curing was not achievable in the system with 4 wt% photoinitiator. The results show that there is a significant decrease in cure time when the photoinitiator concentration is increased from 6 to 8 wt%.
A comparison of tack-free curing with a typical epoxy diacrylate, polyester triacrylate and urethane diacrylate was performed. Each of the acrylates was cured with 1, 2 and 4 wt% photoinitiator. The results shown in Table 2 indicate no significant difference in curing performance across these materials despite the various viscosities and crosslink densities. Experiments were performed with broadband UV irradiation using a typical UV photoinitiator (I-184). Experiments were also performed with a 25 W 385 nm LED system with TPO as the photoinitiator. Belt speeds for tack-free curing ranged from 70-150 fpm with 2 and 4 wt% photoinitiator for UV broadband irradiation. Using TPO and a 385 nm LED system, belt speeds for tack-free curing ranged from 9-20 fpm using 2 and 4 wt% photoinitiator. The high-viscosity trifunctional acrylate was able to achieve tack-free cure with 1 wt% photoinitiator when cured continuously for 60 seconds. Curing was also performed with a 405 nm LED system similar in power to the 385 nm LED. Minimal differences were observed between curing performance with the 385 and 405 nm LEDs.
The pure acrylate systems were compared to different thiol-ene-based formulations. As seen in Table 3, the thiol-ene-based formulations all cured tack free with 1 wt% photoinitiator and with belt speeds at 130 fpm, whereas the acrylate systems (Table 2), which are much thicker, cured at a maximum belt speed of only 20 fpm with 1 wt% photoinitiator in all cases. At 4 wt% photoinitiator, the thiol-ene systems achieved tack-free curing at belt speeds of greater than 155 fpm (155 fpm is the maximum belt speed for the system utilized in this study) compared to belt speeds ranging from 135-155 for the diacrylate systems. Using 4 wt% TPO as the photoinitiator and the 385 nm LED, the thiol-ene systems exhibited tack-free curing with belt speeds between 100-155 fpm.
A thiol was added to the standard acrylate formulations with I-184 as the photoinitiator and cured using the UV broadband bulb. The belt speeds needed for tack-free cure increased dramatically with the addition. The comparison can be seen in Table 4. Cure speeds originally near 10 fpm increased to 80-90 fpm. Using 1 wt% TPO and a 385 nm LED system, a system that originally couldn’t reach tack-free cure was then able to cure tack free at slow belt speeds.
The epoxy diacrylate system was evaluated as a 50/50 mixture with two different diluents – TPGDA and IBOA (Table 5). TPGDA is a low-viscosity diacrylate that results in a significant drop in viscosity, but maintains high modulus and crosslink density. IBOA is a low-viscosity monoacrylate that results in a significant drop in viscosity, maintains high modulus, but results in significantly reduced crosslink density. The results show that tack-free curing is most difficult to achieve in the system diluted with IBOA, and less difficult with TPGDA. Because of its reduced viscosity, it is more difficult to achieve tack-free curing in the TPGDA-diluted system than in the higher-viscosity base system. When cured with the LED light, only the undiluted epoxy diacrylate system was able to achieve tack-free curing.
EB curing was also evaluated for the urethane diacrylate system in bulk and diluted 50/50 with TPGDA and IBOA (Table 6). Electrons are accelerated through a thin foil window impinging on a moving web at atmospheric pressure. The accelerated electrons will ionize most organic materials, with this ionization leading to the formation of free radicals, which initiates polymerization of the coating without the need for added photoinitiators in acrylate-based systems. The EB parameters are typically set by selecting the total dose of energy delivered to the sample and the belt speed. The current is then adjusted as needed to deliver the total dose with the given belt speed. When curing with EB, the resins are typically purged with nitrogen to remove the presence of oxygen. EB curing has not been studied nearly as much as UV curing.
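Since the text notes that the beam current is adjusted to deliver a chosen total dose at a given belt speed, the snippet below sketches that bookkeeping using the commonly used linear dose relation D = K·I/S. The yield factor K is machine-specific, so the value used here is purely a placeholder assumption, as are the example numbers.

```python
# Sketch of typical EB dose bookkeeping: dose D = K * I / S,
# where I is beam current, S is line (belt) speed and K is a
# machine-specific yield factor. K below is a made-up placeholder.

K = 30.0  # assumed yield factor, kGy * (m/min) / mA (machine dependent)

def current_for_dose(dose_kgy, speed_m_per_min, k=K):
    """Beam current (mA) needed to deliver dose_kgy at the given belt speed."""
    return dose_kgy * speed_m_per_min / k

# Example: holding a 30 kGy dose while doubling the belt speed
for speed in (10.0, 20.0):
    print(speed, current_for_dose(30.0, speed))  # required current scales with speed
```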
Though the initiation mechanism to generate radicals is different, the fundamental polymerization kinetics should follow the same principles. For EB curing, decreasing viscosity had no effect on curing, as seen in Table 6. This is contrary to UV-cured systems under ambient conditions where the effects of oxygen inhibition are more pronounced in systems with lower viscosity. Decreasing crosslinking reduces cure speed. This result is similar to UV-cured systems. Polymerizations were also performed without a nitrogen blanket; here it was found that the typical diacrylate systems were not able to achieve tack-free curing. However, the CPS 1040 thiol-ene system was readily able to achieve tack-free curing without the aid of a nitrogen blanket.
Several typical acrylate systems were cured with both UV broadband mercury irradiation sources as well as LED systems. The results indicated that curing with broadband sources was more rapid than curing with LEDs. The LED systems emit significantly less energy than the broadband sources, so the reduced cure speed is not necessarily a result of reduced initiation efficiency. It was demonstrated that reducing viscosity and crosslink density both increase the effects of oxygen inhibition and increase the curing time required to achieve tack-free surfaces. The use of thiol-ene-based formulations was shown to significantly increase cure speed with both UV broadband and LED systems. In fact, the use of thiol-ene systems resulted in cure speeds with LED systems that were equivalent to those achieved in acrylate systems with UV broadband. An initiator optimization study was performed and indicated that upon achieving a certain threshold initiation rate, cure times decreased dramatically. Systems cured with EB showed the same fundamental cure characteristics as UV-cured systems.
The authors gratefully acknowledge Miwon North America Inc. for their ongoing support and discussions and for donating materials for this research.
The 8th Armored Brigade (Hebrew: חטיבה שמונה, Hativa Shmoneh) was an Israeli mechanized brigade headquartered near Jerusalem. It was the Israel Defense Forces' first armoured brigade which possessed tanks, jeeps and armored personnel carriers (APCs), whereas all other IDF units at the time were entirely infantry-based.
The brigade was called 'armored' for morale reasons, although in reality it only had a single tank company (later in the war, two companies), and a single APC company (these companies became the brigade's armored battalion), and an assault battalion composed of jeeps.
The Brigade's first commander was Yitzhak Sadeh.
Founding and organization
The brigade was founded and subordinated to Yitzhak Sadeh on May 24, 1948. Two battalions were created—the 82nd Tank Battalion under Felix Biatus, and the 89th Commando Battalion under Moshe Dayan. Another battalion, the 88th, was founded later.
According to Dayan, the 89th consisted of four companies each made up from different groups: from kibbutz and Moshavs, from Tel Aviv, from Lehi and veterans from South Africa. The battalion consisted entirely of volunteers: "A" company consisted of men from the disbanded Lehi underground organisation; "B" company was taken from the 43rd Battalion, Kiryati Brigade; "C" company from the Golani Brigade. Many of the men joined after personal approaches from Dayan, to the annoyance of their commanding officers. They were stationed at Tel Litvinsky. Their first action was against Irgun members landing arms from the Altalena at Kfar Vitkin, 23 miles north of Tel Aviv. The Alexandroni Brigade was "reluctant" to intervene, and Dayan was unable to deploy ex-members of Irgun from his battalion.
1948 Arab–Israeli War
On 11 July 1948 the 89th Battalion was involved in the attack on Lydda. Later that month, Dayan became the commander of the Jerusalem front and was replaced by Dov Chesis. On 28 October the 89th Battalion, now under Chesis, captured the town of al-Dawayima.
After the 1948 Arab–Israeli War
Following the 1948 war, the brigade served as the backbone of the IDF's armored forces. In the Six-Day War, the brigade fought on two fronts, in the Sinai and the Golan Heights. In the Yom Kippur War, the brigade fought in the Sinai. In the 1982 Lebanon War, the brigade was mobilized but did not take part in the fighting.
A memorial to the 8th Brigade is located adjacent to the Ben Gurion International Airport.
- Eshel, Tzadok (1968). Yitzhak Sadeh Brigade. Ministry of Defense Publishing and IDF Education Corps. p. 4.
- Moshe Dayan, Story of My Life. ISBN 0-688-03076-9. p. 94.
- Teveth, Shabtai (1972). Moshe Dayan. The soldier, the man, the legend. London: Quartet Books. ISBN 0-7043-1080-5. p. 170.
- Dayan, p. 96.
- "חטיבה 8 - עוצבת "הזקן"". http://www.yadlashiryon.com/show_item.asp?itemId=657&levelId=64596&itemType=0. Retrieved 2009-12-08. (Hebrew)
- profile at Yad LeShirion website (Hebrew)
This page uses Creative Commons Licensed content from Wikipedia.
Hyperhidrosis is a largely unknown condition. In fact, severe sweating can be a clinical condition if certain limits of sweat secretion are exceeded; the WHO (World Health Organization) has specified these limits as follows:
- Hyperhidrosis = production of 100 mg sweat in 5 minutes
Weighing the amount of sweat using a microbalance is one part of the diagnostic process; a general physical examination is the other. For the classification of focal Hyperhidrosis, dermatologists use the Minor test (topical application of Lugol's iodine and starch) to localize the body parts that are most affected.
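Purely as an illustration of how a gravimetric measurement could be compared against the limit quoted above, here is a small sketch; the 100 mg per 5 minutes figure is taken from this page, while the function and rate conversion are our own assumptions, not a clinical tool.

```python
# Illustrative only - not a diagnostic tool. Uses the 100 mg / 5 min
# figure quoted above as the hyperhidrosis threshold for a gravimetric test.

THRESHOLD_MG_PER_5MIN = 100.0  # limit cited on this page

def exceeds_hyperhidrosis_limit(sweat_mg, minutes=5.0):
    """Return True if the measured sweat production exceeds the quoted limit."""
    rate_per_5min = sweat_mg * (5.0 / minutes)  # normalise to a 5-minute window
    return rate_per_5min > THRESHOLD_MG_PER_5MIN

print(exceeds_hyperhidrosis_limit(160.0))      # True: above the limit
print(exceeds_hyperhidrosis_limit(40.0, 2.5))  # False: 80 mg per 5 minutes
```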
The WHO’s official ICD-10 classification codes for Hyperhidrosis are:
- R61 → Hyperhidrosis
- R61.0 → localized Hyperhidrosis
- R61.1 → generalised Hyperhidrosis
- R61.9 → Hyperhidrosis, unspecified
Health professionals divide Hyperhidrosis into 2 basic types:
- Primary Hyperhidrosis = excessive sweating that is not caused by any bodily illness, though it can be provoked by nervous or psychological factors
- Secondary Hyperhidrosis = excessive sweating that is caused by an explicit bodily illness, infection, inflammation etc.
Furthermore, they distinguish between localized and generalised Hyperhidrosis:
- Focal Hyperhidrosis = severe sweating, localized to specific parts of the body, for example sweaty palms, sweaty feet, underarm sweating etc.
- Generalised Hyperhidrosis = excessive sweating, all over the body, not localized (med. Hyperhidrosis generalis)
Characteristic feature of Hyperhidrosis:
Most cases occur in people who are otherwise healthy (primary Hyperhidrosis, see above). Many who suffer from Hyperhidrosis sweat nearly all the time, regardless of their mood, their agitation or the weather. The sweating is completely uncontrolled and can happen at any time without any identifiable reason.
Experts estimate that millions of people actually suffer from Hyperhidrosis (approx. 3% of the population). However, most people still regard sweating as healthy and essential.
Underestimated psychological strains:
From a physical/medical point of view, Hyperhidrosis (itself) is not a dangerous illness, as long as the permanent loss of water and electrolytes is efficiently counterbalanced. Nevertheless, most people with Hyperhidrosis would say they are seriously affected. Sweating is still a social taboo. Because of this, endless sweating – visible and noticeable to everyone – can cause serious mental stress, anxiety, shame, envy or disruptive behaviour. It can become a very painful burden which can result in social isolation, depression or, sadly enough, in suicide.
By Dr J Floor Anthoni (2001)
This page summarises the important events and discoveries that changed the world, its societies, populations and its environment. By restricting the events to important discoveries, this page allows you to take note of the full development of mankind while not wasting your time. When learning about the history of Man, much emphasis is usually placed on nations, rulers and their military exploits. However, the real development of human society follows a pattern which is independent of its rulers, being entirely dependent on natural resources, knowledge and technology.

Having evolved to an erect, thinking and talking ape over millions of years of evolution, Man has found ways of evolving at an even faster rate by extending his faculties by means of technology. Ironically, the technology evolved following the same rules as the evolution of species, but much faster and more profound. Having given humanity the means to live longer, eat more and to live more comfortably in any place on the globe, human populations have risen above sustainable levels. At the same time, technology is causing lasting damage to the environment, which may ultimately destroy human civilisation. Is this the predictable path that intelligence takes? The predictable fate which has overcome all intelligent civilisations everywhere else in the universe, before they could seed themselves to other planets? Will humans be able to control themselves by the very intelligence that is threatening to destroy them?

Previous civilisations have risen and fallen, yet humanity has survived. But these were localised events. Present civilisation, on the other hand, ranges world-wide, causing world-wide problems like the ozone hole, global warming, global pollution, weather changes and so on. It has exploited the minerals and energies found world-wide and changed the world environment profoundly. So this one is not a local event, and it could well be the last one.

Here is a schematic overview of the development of intelligence, be it on this planet or anywhere else:
-2 million years: among all creatures, it must be an animal (which moves), which can develop intelligence. It cannot be a grazer because grazers spend too much time eating on all four feet. It cannot be a predator because they are higher up the food pyramid, needing large territories, living singly or in very small groups, and having too much armament in claws and jaws. It must be an omnivore, capable of switching diets (plants&fruits, animals, insects) and living in a multitude of environments. It must rely on experience rather than instinct, so its children are born helplessly and have a long childhood. Eventually this creature must develop dexterity, so it is not a walking type, but a tree-living type. Being able to live in groups, it develops group skills such as politics, altruism and communication. By leaving the trees and by walking upright, its dextrous hands are freed to become handy. It develops primitive tools from sticks and stones. Learning is mostly by imitation.

-1 million years: the omnivorous and opportunistic life style allows humanoids to live in groups, finding food easily. They develop further social skills, resulting in language. It harnesses fire by finding a way to carry it along. Later it finds ways to make fire. Fire for cooking allows it to eat a much larger range of food, enabling larger group densities.

-100,000 years: language develops and extensive survival skills can be transferred from elders to children. Learning is done by imitation, assisted by language. History is taught. Hunting is done in groups, furthering social skills. Language develops further. Humans will defend large territories. Within their patches, they will promote plants and animals useful to them, while eradicating those hindering them. Finding food becomes easier and their groups become larger. Consensus behaviour (religion) becomes an important survival instinct. Tribes develop. Habitat destruction.

-50,000: hunting with weapons has become an important source of protein. Language is complete. Painting, singing, weaving, clothing develop fully, and also ceremonial rituals. Humans are now fully capable of spreading all over the world, often necessitated by their growth in numbers. Important plants and animals are reared at home and spread as tribes migrate.

settle down in fishing villages along the coast. They live in semi-permanent housing. Shifting agriculture allows human population to grow tenfold. Slavery develops, since human labour is the only form of energy. Food storage is carried in living stock. New diseases come from human wastes. Wars are waged for extended territories.

-10,000: development of settled agriculture by ploughing, weed control and irrigation. Tribes can grow sufficiently large to establish villages and to maintain a bureaucracy and professional warriors. Houses become permanent. Capital develops and a social hierarchy. Tenfold increase in population. Division of labour: guilds and professionals develop. Organised religion.
-5,000: extensive domestication of cattle, beasts of burden and crops. Horses and camels twice as energy efficient as oxen. Invention of the wheel, projectiles, metals. develops. Growing contact with domesticated animals introduces new human diseases.

-3,000: extensive ceremonial buildings, temples, bridges, roads, iron and other metals. Nations develop, replacing or unifying tribes. Land trade. Schools. Nations wage war on one another.

-2,000: reliable agriculture, crop rotation, philosophy, beginning of written science. Ships and coastal trade. Money. Accumulation of capital. Building canals, aqueducts, roads, cities. Hydro and wind power. Soil nutrients no longer return to their origins, the land, but are wasted by cities. Water pollution, soil erosion and degradation set in. Major communicable diseases erupt in densely populated areas.

-1,000: scientific method, experimentation, medical knowledge begins. Extensive trade and accumulation

-500: invention of book printing. Rapid spread and storage of knowledge. World explorations, world trade, world exploitation. Spread of biologically invasive species and aquatic exchanges by ships and canals. Rapid growth of technology: ships, transport, steel, other metals, etc. Medical technology to prolong life: hygiene, surgery. Universities. Colonisation of other nations. Slavery. The can now blossom in times of prosperity and peace.
-200: use of fossil fuel for engines, industry, building, transport, war. With fossil fuel, bigger projects could be done, and more transported, including fuel, minerals and fertilisers. Public schools for everyone, leading to mass literacy. Hygiene and sanitation. Separating drinking water from waste water. Sewers. Populations surge. Democracy as preferred form of government, opening the way to scrutiny and public access to information. Mining releases poisons that were once stable underground. Soil erosion. End of the slave trade and slavery.

-100: Extensive progress in science and technology. Mass production. Immunisation and vaccination. Air pollution over cities. Sewage treatment with bacteria. Longer lives. Understanding of genetics leads to vastly improved crops. Beasts of burden disappear from cities.

-50: Air planes. Invention of the computer to extend the mind. Space technology. Birth control. Fertilisers. Crops selected for their response to fertiliser. Sewage treatment plants. Massive accumulation of capital creates a capital speculation economy. Development of global markets & global corporations. Free movement of capital moves dirty labour-intensive industries to poor countries.

0: civilisation peaks. Overpopulation and overexploitation of our environment. Maximal production by mechanised farming with fertilisers. Genetic engineering. Serious environmental degradation. Some very large cities, stretching the limits of urban metabolism. Populations increasing at maximal rates.

+50: A new era which won't resemble anything in the past. Civilisation reaches an unsustainable level of complexity and vulnerability? Limits of biosphere reached? Limits of cities reached? Ecological collapses? Massive species extinctions? End of fossil fuel? Energy-based systems collapse, and with it stored knowledge or access to it? Agricultural land fatigues and produces less? Anarchy rules? Civilisation collapses? End of the nation-state?
The above is but a schematic time line, but notice how one invention could not have occurred without the previous, thus making the whole development of civilisation predictable (with hindsight). Most previously collapsed civilisations overextended themselves in their war machines and bureaucracies, at a time that their agriculture collapsed due to poor farming practices and sheer pressure to deliver. Our present civilisation managed to bypass this trap by developing world trade, exploiting other nations' agricultural and mineral resources (colonisation) and by discovering fossil fuel to subsidise everything we do. However, a new trap now presents itself. Will we be sucked in, is the question.

Warm and cool periods

In considering the history of mankind, often the most important influence, that of temperature, is overlooked. Having come out of an ice age, only 12 millennia ago, the growth of civilisations was possible only by increasing temperatures. The world of 18 millennia ago was very poor and very barren. Only some 6 millennia ago the planet warmed sufficiently to sustain growing populations, but even then some remarkable fluctuations in temperature occurred, all with similar results: during the warm periods, societies flourished while during the cold periods they suffered disease and famine.

The chart referred to above was made by climatologists Cliff Harris and Randy Mann; relate it to what follows below.
Philosophers often refer to the three major transitions of mankind, the developments that changed society profoundly. After three beneficial transitions, they are now seeing the fourth as troublesome:

medical knowledge, immunisation, hygiene, sanitation give longer and better lives: reduced death rates, increased birth rates (fertility), longer life spans, lower disease risks. It has doubled the human life span but also introduced new diseases of old age.

green revolution, giving more food: fertilisation, irrigation, pest control, mechanisation and improved crops gave more and better food, and has allowed populations to grow fourfold in the twentieth century, also introducing unknown degrees of famine and suffering and inequality. It has changed the way of life of tribes, previously living in harmony with nature. It started the depopulation of the land and the overpopulation of cities.

giving more material wealth and comfort: industrialisation, mass production, mining and transport caused new health risks due to changed lifestyles, like smoking-induced, obesity-induced and pollution-induced diseases. It also led to distribution problems, the rich getting richer and the poor poorer, but average world incomes quadrupled. The global economy increased twentyfold. Tractor power displaced more people from the country to the cities.

preventing further health risks due to an overload of the biosphere: humanity is facing increased risk of epidemics due to population densities (tuberculosis, cholera, typhus), spread of new diseases due to mobility (AIDS, Ebola, Dengue, Marburg, rinderpest, foot-and-mouth, mad-cow), spread of old diseases due to temperature change and irrigation (malaria, yellow fever), multiple-drug resistant (MDR) diseases, collapse of civility (poverty wars, dictatorships), deaths from wars and disasters, death and disease due to massive resettlement (fleeing war, crop failures, drought), irreversible loss of species, and

Unlike the preceding three revolutions which happened by themselves, the fourth revolution is one of prevention of serious risk, something which requires a co-ordinated effort, motivated by the most humane of human qualities. What chance does it have of succeeding in the face of so much injustice, disparity, dishonesty and avarice, while time is also running out?

Read the history of mankind below in this light, and be amazed at how much it accelerated in the most recent fifty years.
Colours show the advances in building, agriculture/food,
Important discoveries are bolded. Note that discoveries were often made in different places of the world at different times. Use the Edit/search in page option of your browser to search for specific events.

Diamond, Jared: Guns, germs and steel, the fate of human societies. 1997. W W Norton & Co.
Hellemans, A and B Bunch: timetables of science, a chronology of the most important people and events in the history of science. 1988. Simon & Schuster.
McNeill, J R: Something new under the sun, an environmental history of the twentieth-century world. 2000. WW Norton & Co.
Asimov, Isaac: Asimov's chronology of science & discovery. 1990. Grafton Books.
Harpur, Patrick (ed): The timetable of technology, a record of our century's achievements. 1982. Marshall Editions London.
Goat Island Sustainability Transition: a call for action by twelve concerned scientists. (4 pages)
The timeline below is organised by civilisation or period; within each, entries list the year and discovery.
stone age: 2,400,000BC - 35,000BC
2,400,000BC: hominids in Africa
manufacture stone tools.
750,000BC ancient hearths in
France show that Homo erectus used fire.
of wholesale habitat destruction by fire.
90,000BC: Homo sapiens
79,000BC: stone lamps
with wicks are in use.
45,000BC: Early humans reach
modern Man: 35,000BC - present
35,000BC: Homo sapiens sapiens?
±25,000BC: Humans play
first music, using wind and percussion instruments.
20,000BC: first boomerang in
Poland; sewing needle; bow and arrow in Spain,
bones to count.
15,000BC: Retreat of terrestrial
ice sheets from last ice age, completed. Lascaux cave paintings show that
early humans could draw and paint.
13,000BC: end of the last ice
age, which began 2 million years ago. (The Antarctic ice sheet was formed
38 million years ago and is still there.) Humans reach North America over
Bering Strait. Within 2000 years they reach south America and have exterminated
most large mammals, without domesticating any.
thrower and harpoon invented.
Agriculture: 12,000BC - present
of brick and mortar in Jericho
agriculture in the Nile valley. Human population estimated between
2 and 20 million.
6000BC-2000BC?: ocean waters
rising by 100m after last ice age.
continued at Bronze Age below.
Civilisations of Asia, China,
Indochina: 8000BC - present . The Chinese attitude towards nature was
different from other civilisations, in that they never separated matter
from the sacred world, and did not have the conviction that people dominated
nature. They were not interested in developing scientific method or theories,
relying instead on practical and empirical data. Science could therefore
not develop to the extent it did in Western Europe. Because of its religious
character, science was less pronounced in India, hindered by Buddhist philosophy,
which insists on rebirth of matter and creatures.
8000BC: rice in Indochina; floodwater
agriculture in SW Asia.
7000BC: the pig
and water buffalo domesticated in eastern Asia and China
6500BC: Modern-type domesticated
wheat and lentils in SW Asia.
cattle domesticated in Thailand;
domesticated in India; cotton cultivated in
culture in China.
culture of rice. Wooden plough.
1500BC: distillation of liquor.
1400BC: China invents multiple
cropping in a year.
numerals used in China.
1200BC: China casts bronze
1000BC: The compass
used in China
876BC: India invents the number
zero (0); Natural gas from wells is used in China.
600BC: whale oil lamps with
asbestos wicks in China.
540BC: Indians develop a geometry
based on stretching ropes.
is made in India. Chinese farming is
very advanced with hoeing weeds, planting crops in rows and fertilising.
(Europe follows 2300 years later, 1800AD)
310BC: The Chinese invent a
double-acting bellows, blowing air uninterruptedly. The chest
harness for horses invented. Lodestone used for compasses.
260BC-100BC: the Great Wall
is built in China
230BC: China develops the federal
bureaucratic system that runs China for the next 2000 years.
200BC: The Chinese
develop a malleable form of cast iron.
110BC: the chinese invent the
harness for horses, the most efficient harness to this day. The
West follows a thousand years later. Chinese invent the crank
handle for turning wheels. The Chinese invent negative
50BC: the Ayurveda details medical
treatise in India.
20BC: the Chinese invent the
belt drive, and methods for drilling deep wells.
1AD: the Chinese build suspension
bridges of cast iron, strong enough for vehicles. Wheelbarrow invented.
bellows for making steel.
70-1327: the 950km Grand Canal
of China, built in many stages over a millennium. The
chain pump for raising water from rivers or lakes. A grain winnow
operated by wind. The powder of dried Chrysanthemum
flowers (Pyrethrum) used to kill insects. Around
this time, also paper was invented,
first for packing but later for writing.
115: China's Zhang Heng scientist
and inventor develops a grid to locate positions on a map. Around this
time he also invents the seismograph for recording earthquakes.
190: the Chinese invent the
whippletree mechanism which allows two oxen to pull one cart. Stirrups
for horses. They also invent the decimal system
for representing all numbers. Invention of porcelain.
270: first use of compass in
China, pointing south. Around this time, using coal instead of wood for
making cast iron. The compass was not used for navigation until after 1000.
400: chinese invent steel
540: a wind-driven land vehicle.
Matches invented. Paper used as toilet paper.
printing in China, discovered 100 years earlier. Printed
newspapers. Gunpowder invented.
used for warfare.
810: use of paper
bank drafts, forerunner of paper money, 80 years later.
1000: extensively burning
coal for fuel. Spinning wheel invented. Movable
type book printing invented.
1107: the Chinese invent multicolor
printing, mainly to make paper money harder to counterfeit. Maps printed.
1180: a 213m-long bridge across
the Yung-ting River (the Marco Polo Bridge). The Chinese invent bombs
that produce shrapnel.
1403: the Chinese issue a 22,937-volume
encyclopedia in three copies.
1644: Li Zi-Cheng overthrows
the Ming dynasty, but the Manchus take over China.
1800: jute is cultivated in
1855: the third pandemic of
bubonic plague begins in China.
Civilisations of Middle America:
Peruvian and Chilean Incas, Central American Mayas and Mexican Aztecs.
and beans cultivated in Peru; pumpkins in Middle America
based on corn, squash, beans and peppers in the Tehuacan Valley,
Mexico. Chinchorro Indians produce mummies.
5000BC: the Llama
and Alpaca domesticated in Peru
2400BC: peanuts domesticated
in tropical Americas. Pottery.
1000BC: the city of Machu Picchu
in the Peruvian Andes, from the first Inca
800BC: The Olmec build pyramids
300BC: the turkey is domesticated
1290AD: cable bridges across
deep canyons in the Andes
1325: Aztecs found Mexico City
Bronze age 7000BC - 1400BC;
bronze was the main metal until iron was used by the Hittites, 1400BC.
Extensive use of iron started around 1000BC; forging it around 1400AD.
Bronze consists of 90% copper and 10% tin, and can easily be cast. It is
not easily oxidised, and makes lasting, strong objects of any form.
The stone age is sometimes
split into the old stone age to 8000BC, the new stone age (with agriculture) 8000-3000BC,
the Bronze age 3000-2000BC, and the Iron age 2000BC-500AD, when the middle ages begin.
4500BC?: stones are used to
construct buildings in Guernsey, in the Channel Islands.
2900BC: Stonehenge in England
2200BC: further extensions to
Stonehenge with 80 bluestones.
350BC: Celtic chiefs build fortified
Maiden Castle in Dorset, Britain.
mills for grinding corn (Yugoslavia, Albania)
510AD: the abacus
is used for counting, although it had already been in use in Greece for 1000 years.
continued at the Middle Ages
in Europe, below.
Middle East, Asia Minor,
7000BC: Domesticated cattle
in Anatolia (Turkey)
wheat cultivation starts, leading to the world's most important food source.
in use in Mesopotamia.
and mules domesticated in (Israel); camels
2500BC: The Yak
is domesticated in Tibet.
2000BC: Alfalfa cultivated in
Iran. Wheels with spokes invented.
terraced the land to prevent erosion.
850BC: First arched
bridge built in Smyrna, Turkey.
(coins) invented in Turkey. Babylon becomes the largest city on
Earth, with an area of 100km2.
200BC: Philon uses bronze
springs in catapults.
600AD: earliest known wind
mills using a vertical shaft, to grind grain.
Egypt: 3500BC? - 525BC; ends with the Persian conquest (General Cambyses, emperor of Persia).
Calendar of 365 days, 12 months of 30 days and 5 festival days.
±3500BC: Menes becomes
first Pharaoh, uniting Upper and Lower Egypt.
3200BC: Egyptians use hieroglyphs
for writing and develop their number system, which requires
new symbols as numbers grow larger. Papyrus is used
to write on. Alphabet?
ships used in Egypt.
The Egyptians and Babylonians
make extensive use of bronze, an alloy
of copper and tin.
Pyramid of Giza is built.
2000BC: Contraceptives used
in Egypt; Geometry well known.
1700BC: Phoenicians (on the
eastern Mediterranean coast) write with a 22-letter alphabet.
1600BC: Start of the New Kingdom.
Medical knowledge includes surgery, prescriptions, fasting, massage and
1500BC: Light two-wheeled
carts for warfare.
1400BC: Invention of glass.
1300BC: Moses leads the Hebrews out of Egypt.
1250BC: Egyptians build a canal
to link the Nile with the Red Sea.
1000BC: Phoenicians terrace
agricultural land to prevent erosion.
±500BC: Darius I builds another
such canal, effectively connecting the Mediterranean and Red Seas.
Mesopotamian civilisations: 5000BC - 2300BC by Sumerians; 2300BC - 1600BC by Akkadians; 1600BC -
±5000BC: The plough?
& the wheel? Sailing ships.
and domesticated grapes in Turkestan; Beer in Mesopotamia; Olives
in Crete; Fired bricks invented in Mesopotamia.
3500BC: Pictograms with 2000
signs used for writing in Sumeria. Egyptians and Sumerians smelt
silver and gold. Potter's wheel introduced in Mesopotamia. Wheeled
vehicles in Mesopotamia.
3000BC: In Mesopotamia the Sumerians
are familiar with domes, columns, arches and vaults.
for length, weight (shekel and mina) and volume standardised.
for land taxation. Sexagesimal system (base-60) for position indication.
Aegean (Minoan) civilisation:
3500BC - 1640BC: In Crete, named after King Minos; destroyed by volcanic
eruption. Capital city of Knossos had plumbing and palaces. Minoans wrote
in Minoan Script.
1640BC: the volcano Santorini (Thera)
explodes, destroying the Minoan civilisation.
Greek civilisation: 1150BC
- 529AD. Started by Mycenaean warriors from the North; ended by the Byzantine
776BC: First Olympiad
of science and natural philosophy begins, with advances in astronomy,
physics and the life sciences. Major improvements in mathematics, architecture,
technology, mechanics, navigation, health. Aqueducts,
canals and water tunnels for irrigation and drinking water.
Agricultural crop rotation as a soil conservation measure.
570BC: Xenophanes finds fossil
sea shells on tops of mountains, and concludes that the surface of the
Earth is in motion.
travel by ship around Africa.
530BC: Tools such as the bubble level, locks and keys, the carpenter's square
and the lathe are invented (in Samos). Extensive ore smelting and forming.
500BC: Pythagoras discovers
that Earth is a sphere. World
461BC: The age of Pericles begins,
a period of peace and prosperity. Many Greek philosophers flourish.
430BC: The great plague strikes Athens.
410BC: Invention of the catapult
and quadrireme (four rows of rowers) warships. These give an
advantage in sea wars and in conquering sea-bordering nations.
Aristotle classifies 500 animals into 8 classes. Discovers that space is always filled
336BC: Start of the reign of
Alexander the Great, who spreads Greek culture from Egypt to India.
300BC: The Greeks discover the
functions of animal and human organs. Euclid's geometry proves helpful for
building. Dicaearchus constructs a world map on a sphere, correctly marking
meridians and the equator.
240BC: Eratosthenes calculates
the circumference of the Earth at 46,000 km; he also devises a sieve for finding prime numbers.
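The timeline does not spell out his method; a minimal sketch of the shadow-angle argument usually attributed to Eratosthenes, with illustrative figures (a 7.2° difference in the sun's angle and a 5000-stadia distance between the two cities) that are assumptions rather than numbers from this text:
\[ C \approx \frac{360^\circ}{\theta}\, d = \frac{360^\circ}{7.2^\circ} \times 5000\ \text{stadia} \approx 250{,}000\ \text{stadia} \]
Depending on the length assumed for the stadion, this comes to roughly 39,000-46,000 km, which is how a figure like the 46,000 km above can arise.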
invented for the ox-powered waterwheel. Accurate water clock
80BC: Greek engineers invent
has the largest library on Earth.
10BC: Herod the Great builds
the first open sea port.
40AD: De materia medica describes
the medical properties of ±600 plants and ±1000 drugs.
180: Galen compiles all medical
knowledge in one systematic treatment.
library at Alexandria is destroyed
Assyrians: 1350BC -
Roman civilisation: 350BC
- 470AD. A large empire, extending from Europe to Britain and down into
Africa, but short-lived. It was ended by German invaders after Rome had
first been sacked by the Vandals in 455. The Romans perfected infrastructure,
bureaucracy and transport (goods, people and information).
320BC: the Via
Appia is built along the Italian peninsula, together with the
Aqua Appia, an aqueduct bringing water to Rome from 16km away.
100BC: Romans discover pozzolana
concrete, made from volcanic ash.
100AD: Peak of Roman civilisation. The Romans spread
agricultural irrigation practices to their conquered lands.
122AD: Romans in Britain build
Hadrian's Wall to protect Britain from northern tribes.
455AD: Roman civilisation ends.
0AD: Human population estimated
at 200-300 million. By this time, almost all the
productive land around the Mediterranean Sea had eroded into badlands.
Iron age, 1000BC onwards. Already
by 4000BC, iron was known from meteorites and from campfires lit over iron-ore
soils. By 1000BC most civilisations had mastered the art of iron making.
Islamic/Arab culture:
700-1300. Intensive commercial activity brought this culture into contact
with many others, which contributed to Arab thinking. Centres of learning
and 'houses of wisdom' were established, and all known knowledge was compiled
and preserved. Under Arab influence, the Spanish city of Cordoba became
the richest and largest city in Europe, with a library of 400,000 volumes.
840: Suleiman travels to China.
middle ages in Europe
early middle ages 530-1452:
the decline of science, suppressed by the might of the Catholic Church,
and adherence to the old ideas of Greek philosophers.
541: A serious bubonic plague
rages over Europe and the Middle East.
becoming used on a large scale. Waterwheels
used to drive industrial processes.
furnaces for making cast iron in Scandinavia.
1000: Vikings reach North America.
Europe has many waterwheel-driven mills; in
England, one per 400 people. The iron ploughshare
is invented in Europe.
1100: Italians learn to distill
wine to make brandy.
1277: The Pope issues a condemnation
of many scientific ideas.
1140: Norman king Roger II decrees
that only licensed physicians can practise medicine.
1189: First paper
mill in Herault, France. Magnetic compass
introduced in the West.
1250: The goose
feather quill is used for writing.
1260: Roger Bacon starts the
new tradition of science by experiment,
as does William Ockham (Ockham's razor: "When several explanations are
offered, the simplest must be taken"). Spinning wheel in Europe.
1310: Mechanical clocks with
escapements. China still uses water to power its clocks.
1340: A small cannon firing
1340-1349 (1346-1352?): The
Black Death plague epidemic kills 25 million people, one third of Europe's population.
During the next 80 years it recurs frequently, killing 3/4 of Europe's population.
1370: The steel
crossbow invented. Rockets used for war.
Cast iron becomes generally available.
1400: Coffee introduced. Holland
uses windmills for pumping water and reclaiming land.
Alchemy, the belief that one metal could be turned into another, ends.
Fossils are considered the remains of prehistoric organisms. The Dutch
use driftnets for fishing.
belts are being used in industry.
1440: Johann Gutenberg (born 1398)
and Laurens Janszoon Coster invent book printing
with movable type. Spectacles for the nearsighted. Gutenberg
prints the Bible in 1454, 42 lines per page.
Human population estimated at
The renaissance and the
scientific revolution 1453 - 1659, starts with the invention of book printing,
the fall of Constantinople, and ends gradually as science takes flight.
It was a time of renewing knowledge about the Greek classics and Arabian
mathematics. Explorations to distant countries discovered new plants, animals
and minerals. Painting and architecture bloomed. The world was opened by
navigation and commerce and the further discovery of knowledge. "Science
is the enlarging of the bounds of human empire to the effect of all things
possible." Francis Bacon, 1603. But the Church fought back. Surgery and
medicine made huge strides forward.
1453: The Turks capture Constantinople,
forcing many Greek-speaking scholars to escape to
the West, bringing with them classical Greek manuscripts and the ability
to translate them into Latin.
1473: The first edition of Avicenna's Canon of
Medicine is printed in Milan. Michelangelo paints the ceiling of the Sistine
Chapel (1508-1512). Leonardo da Vinci makes designs and discoveries
such as the parachute, flying machine, pendulum clock and roller bearings.
1493: Pope Alexander VI draws
a line on a map that gives Spain all the undiscovered lands to the west
of the line and Portugal those to the east. Later the line was altered
to give Brazil to Portugal. Tobacco and smoking introduced to Europe.
1498: Vasco da Gama reaches
India via the Cape of Good Hope, rounding Africa. Columbus discovers America.
Capsicum peppers circle the world in 50 years, becoming accepted
on all continents.
1502: First spring-driven pocket
watch by Peter Henlein. Raw sugar is refined a year later.
1507: Maps show America, discovered
by Columbus and first explored by Amerigo Vespucci (1497-1504).
Plus + and minus - signs appear in mathematical notation. Machiavelli's Il
Principe (The Prince) presents a study of how to rule and stay in power,
laying the basis for political science.
1517: Girolamo Fracastoro explains
fossils as the remains of actual organisms. Biology starts to recognise
the kinship between fish, reptiles and mammals.
1519: Portuguese explorer Ferdinand
Magellan starts the voyage around the world,
which is completed in 1521 by Juan Sebastian del Cano after Magellan was
killed by natives.
1520: A smallpox epidemic demoralises
the Aztecs, allowing Hernando Cortez to destroy them and take over the
Aztec empire. A few years later, smallpox would kill Huayna Capac, the
Inca ruler. Muskets and guns are used extensively;
they also devastate wildlife through hunting.
1535-45: diving bells for underwater
work are invented. Science of ballistics. Metal
smelting, glass making. Nicolaus Copernicus
publishes his De Revolutionibus Orbium Coelestium, in which he describes
the revolution of the celestial bodies. Andreas Vesalius publishes De
humani corporis fabrica (On the structure of the human body), showing
the true structure of the human body. Sebastian Muenster's Cosmographia
is the first major compendium on world geography.
Invention of the theodolite for surveying by Leonard Digges. The way blood
circulates in the body is discovered.
numbers for solving differential equations. Mercator introduces his map
projection. Knowledge about minerals, where to find them and how to process
them, accumulates rapidly. Also chemistry makes strides forward.
1589: Knitting machine invented
by Rev William Lee. 1592: Galileo's Della Scienza
meccanica (On the mechanical sciences).
Coke, a charcoal-like fuel produced by heating coal, is discovered. Coke would
allow engineers to reach higher temperatures and to use carbon chemically
in industrial processes.
builds the first telescope, which would introduce many new astronomical
discoveries. Rainbow explained. 1610: Galileo extends hydrostatics, Archimedes'
discovery of floating objects. Nature of logarithms discovered. 1617: Trigonometric
triangulation. The knowledge of optics increases.
Extinction of the Aurochs, wild ancestor of domestic cattle. Perhaps heralding
the beginning of the age of extinctions.
René Descartes argues for the correct use of deductive reasoning in science,
from metaphysical (philosophical) principles. 1643: Torricelli makes the
first barometer. Pascal invents a machine that can add and subtract.
Experiments with vacuum. 1659: Christiaan Huygens invents the chronometer
for use at sea. This instrument would be critical for determining the longitude
of ships. Huygens also builds microscopes, which would help trigger
the science of microbiology.
The Newtonian Epoch 1660-1734.
This period starts with the foundation of the Royal Society (for science).
The period is largely dominated by the ideas of scientist Isaac Newton
and many others. Scientific academies followed in other countries. The
scientific method was formulated for analysis and synthesis. It requires
theories to be formulated from observations; these are then tested and
used to predict other phenomena. Observation and experimentation became
the pillars of science. This separated physics from metaphysics (philosophy)
and made scientists look for new things rather than the past achievements
of old masters. Many mathematical innovations by scientists like Newton,
Leibniz, Euler, Bernoulli and Fourier.
1660s - 1662: Boyle defines
the gas laws of volume and pressure. England is
producing 2 million tons of coal a year, over 80% of all world coal.
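As a reminder of what the volume-pressure law states (a modern gloss, not a quotation from Boyle): at constant temperature, the product of pressure and volume of a fixed amount of gas stays constant,
\[ pV = \text{constant}, \qquad p_1 V_1 = p_2 V_2 . \]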
1665: cells and the nervous system described. Robert
Hooke publishes his microscopic discoveries in Micrographia.
The great plague in London kills over 75,000 people. Newton begins work on mathematical calculus.
1670s - 1673: Anton van Leeuwenhoek
makes microscopic discoveries like protozoa. Experiments with electricity.
1676: Robert Hooke invents the universal joint,
used in all moving vehicles. 1678: wave theory of light described
by Huygens. Clocks are equipped with hands to show minutes. Newton's binomial
algorithm and complex numbers. Accurate clocks.
1687: Newton establishes the
three fundamental laws of motion in his Principia. Motion of celestial
bodies explained by gravity.
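Stated in modern notation (a gloss on the 1687 entry, not wording from the Principia), the second law of motion and the law of universal gravitation that together account for celestial motion are
\[ F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2} . \]
Applying the gravitational law to a planet orbiting the sun reproduces Kepler's elliptical orbits, which is the sense in which the motion of celestial bodies is 'explained by gravity'.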
1690: Denis Papin is the first
to use steam pressure to move a piston, building the
first steam engine in 1698; the high pressure steam pump in 1707.
1694: male and female sex organs on plants identified and described. 1696:
Guillaume François Antoine, Marquis de l'Hôpital, publishes the first textbook on differential calculus.
Jethro Tull invents the machine drill for planting seeds. 1703: First western
seismograph (the Chinese did it first). 1708: Hard-paste porcelain discovered.
1709: Coke used for iron smelting. Jacob Christoph
Le Blon invents three-colour printing.
1713: Emanuel Timoni describes
the Turkish practice of inoculating young children with smallpox to prevent
more serious cases of the disease when they get older, laying the basis for vaccination.
1722: René de Réaumur describes
the making of steel from iron. Jacob Leupold describes the theory
of mechanical engineering and making machines. 1728: advances
in dental treatment. 1729: Bernard Forest de Belidor in La science des
ingenieurs lays down
construction rules for
1730: René de Réaumur constructs
an alcohol thermometer with a scale from 0 to 80 between freezing and boiling.
Discovery of bacteria.
1731: Jethro Tull in Horse-hoeing
husbandry advocates the use of manure, pulverising
the soil, growing crops in rows and hoeing to remove weeds.
Human population estimated at
The Enlightenment and the
Industrial Revolution 1735-1819. In the enlightenment, rationalism
won over God or faith. Due to Newton's success in science, other faculties
followed suit, embracing the scientific method. Knowledge was to come from
experiment (empiricism) and reason (rationalism). Chemistry could make
no real progress because it was dominated by the incorrect 'phlogiston'
theory, but towards the end of this period, John Dalton made real progress.
The industrial revolution was driven by the invention of the steam engine
and the use of coal as fuel. Cottage industries made room for factories.
A new class, the capitalists, emerged, who used capital, labour and machines
to create more capital. The use of fossil fuel to replace labour enabled
them to acquire capital at an exponential rate, creating many millionaires
in the process. It also spurred prospecting for the minerals needed by
industrial processes. Guns and improved traps started to eradicate
wildlife. As people travelled to faraway lands, they brought with them
invasive species such as rats, goats and sheep, which caused major damage
to native wildlife.
1735: John Harrison made an
accurate chronometer for navigation and proved its worth for determining
longitude. This greatly improved the safety and accuracy of shipping. Natural
rubber discovered. Euler describes mechanics in terms of differential equations.
1738: Bernoulli explains the
behaviour of liquids and objects in liquids. 1741: Steller's sea cow is
discovered living off the coast of Kamchatka Peninsula. Within 27 years
it has been hunted to extinction. 1742: Anders Celsius invents the Celsius
scale for temperature.
1750: Advances in electricity
and magnetism. Benjamin Franklin describes electricity as a kind of fluid
and distinguishes between positive and negative charges. Carolus Linnaeus
works on classifying plants, inventing the binomial nomenclature classification
system. The flying shuttle finds acceptance in cotton
weaving, and so does the spinning jenny, both invented decades earlier.
1754: The first steel rolling mill in England. Etienne Bonnet realises
that knowledge reaches humans only through the senses. First woman with
a degree of medical doctor (Germany). The beginning of quantitative chemistry
would herald a new age for chemistry.
1760: Euler invents the phi
function, which in the computer age would be used for public-key encryption.
Optical instruments developed, free from aberration. Conversion
of cast iron into malleable iron by the action of pit fire and
artificial blast, in Scotland.
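A small sketch of what Euler's phi (totient) function computes and why public-key schemes such as RSA care about it; the code below is illustrative only and is not part of the original timeline.
```python
def totient(n: int) -> int:
    """Euler's phi: how many integers in 1..n share no common factor with n."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:      # strip out the prime factor p completely
                m //= p
            result -= result // p  # remove the multiples of p from the count
        p += 1
    if m > 1:                      # a prime factor larger than sqrt(n) remains
        result -= result // m
    return result

# For n = p*q with distinct primes, phi(n) = (p-1)*(q-1); RSA key generation
# uses this value to choose matching encryption and decryption exponents.
print(totient(15))  # 8, i.e. (3-1)*(5-1)
```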
1765: James Watt (Scotland)
builds an efficient steam engine in
which the condenser is separated from the cylinder so that steam acts on
the piston directly, resulting in a power source six times more effective
than the earlier invented Newcomen engine. This machine is rapidly accepted
as the driving force for industry.
1768-1771: Captain James Cook
leaves England to observe the planet Venus transit the sun on the 3rd June
1769, from Tahiti. It led to the charting of New Zealand in 1769, where
he raised the British flag. On board were botanist Joseph Banks, the Swedish
naturalist Daniel Carl Solander and Charles Green, astronomer. He also
discovers that there is no 'balancing continent', except Australia.
1771: John Hunter lays the foundations
of dental anatomy and pathology. 1772: Johann Elert Bode discovers that the
distances of the planets to the sun are proportional to the series: 0,
3, 6, 12, 24, 48, 96 (Bode's Law). 1775: Invention of the dipping-needle
compass. Pierre-Simon Girard invents the water
turbine. Equipment for boring cylinders and cannons improved.
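Written as a formula (a modern restatement of the 1772 entry, not a quotation), each term of the series is added to 4 and divided by 10 to give an approximate orbital radius in astronomical units:
\[ d_n \approx \frac{4 + 3\cdot 2^{\,n}}{10}\ \text{AU}, \qquad n = -\infty, 0, 1, 2, \dots \]
This yields 0.4, 0.7, 1.0, 1.6, 2.8, 5.2 and 10.0 AU, roughly matching Mercury through Saturn (the 2.8 AU slot was later filled by the asteroid Ceres); Bode's Law is an empirical pattern rather than a physical law.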
1777: David Bushnell invents the torpedo. 1778: John Wilkinson invents
turning lathe. 1779: Oxygen discovered
by Lavoisier, which would lead to more rapid chemical discoveries.
1782: James Watt patents a double-acting
steam engine and converts the piston motion to a rotary motion. First
steam-engine powered paddleboat. 1783: Thomas Bell develops cylinder
printing for fabrics. Hot air and hydrogen balloons
invented. 1784: Benjamin Franklin introduces bifocal eyeglasses.
Henry Shrapnel invents the shrapnel shell,
which spreads pieces of steel around on explosion, designed to kill people.
1787: Friedrich Krupp establishes a steel plant at Essen, Germany. John
Wilkinson builds the first iron barge, which
would change the face of shipping.
1789: The USA introduces its
first patent law. The first steam-driven
cotton mill in Manchester. James Watt invents the governor, a centrifugal
device that controls the rotational speed of steam engines. Many
factories become powered by steam after these inventions. Abraham
Bennett invents a simple electric induction machine, which would lead to
1790: Bloodletting as a treatment
for disease denounced by Marshall Basford. 1791: the
metric system of units is proposed in France. It would become the world
standard of measures in 1875. 1792: coal-gas
lighting invented. 1793: parachute jump from a balloon.
1795: Physician Sir Gilbert
Blane introduces lime juice to prevent scurvy during sea voyages. Preservation
of food by bottling or canning, heating and sealing, invented
by Francois Appert, in response to a prize offered by Napoleon. It would
enable armies to foray far away from home, and assist shipping and trade.
1796: Edward Jenner uses cowpox
to vaccinate against smallpox, but the Royal Society rejects his
technique. 1797: Engineer Henry Maudslay (England) perfects the slide bed
for lathes, permitting the lathe operator to operate the lathe without
holding the metal cutting tools in his hands. It led to precision
tooling. 1798: Henry Cavendish measures the mass of Earth indirectly
from the attraction between two known masses. 1798: the first four-person
submarine (earlier attempts date a century back). Mathematicians
Gauss, Wessel, Lagrange, develop complex numbers.
1799: Napoleon's soldiers uncover
the Rosetta Stone in Egypt, which became the key to deciphering Egyptian
hieroglyphs. Joseph-Louis Proust finds new rules for chemical reactions.
1800: William Herschel discovers
infrared radiation. Johann Ritter discovers ultraviolet radiation. Thomas
Young describes the wave theory of light. Alessandro Volta invents the
battery. John Dalton formulates the law of partial pressures within
gases, and proposes that atoms must exist. Evolution of species becomes
discussed (Jean-Baptiste Lamarck) but several mechanisms for evolution
are in vogue. It was also assumed that 'The Creator would not admit extinctions'.
James Finney builds the first suspension bridge.
John C Stevens builds a propeller-driven steamboat,
which would herald the era of steamships.
1804: A D Thaer introduces the
concept of crop rotation. Nicolas Appert
opens the world's first canning factory, also inventing the meat stock
cube. Richard Trevithick invents the steam locomotive,
running on iron rails. It would herald the age of the train.
1805: Joseph-Marie Jacquard invents the punched-card
operated loom. The punched card would later be used as an input-output
medium for computers.
1805: Georges Cuvier establishes
the science of comparative anatomy, and in 1812 comparative vertebrate
paleontology. Austrian pathologist Karl Rokitansky describes the symptoms
of many diseases, based on dissecting over 30,000 cadavers. Proteins and
amino acids discovered as building blocks for animal tissues. Also discovery
of the three sugars in plant juices: glucose, fructose and sucrose. Science
of geology established. Thomas Young introduces the concept of energy.
Coal gas is used extensively to pipe energy to homes in cities. 1808:
Humphry Davy discovers the electric arc light, which will be used in cinemas.
1810: Jean-Baptiste Joseph Fourier
presents his mathematical series, which would later revolutionise radio
transmission and communication. Fourier analysis is an important scientific
tool today. Amedeo Avogadro makes many discoveries on gases and molecules.
Pierre-Simon Laplace establishes probability theory,
only to be used 75 years later. William Hyde Wollaston invents the camera lucida
for projecting an image on a flat surface, in order to draw it. 1813: Augustin
de Candolle publishes a 21-volume plant encyclopedia.
Jean-Baptiste Lamarck produces a similar encyclopedia
on invertebrate animals. Frederick Scott Archer uses negatives
for making photographic prints. William Smith identifies rock strata
on the basis of fossils, allowing comparisons between places in the world
which are far apart. Carl Reinhold uses a thermometer for diagnosing fevers.
1815: Scotsman John Loudon McAdam
uses crushed rock to pave roads. A
century later, the concrete-slab roads would be named after him. This greatly
enhanced road traffic. The first steam-powered
warship is built in the USA. William Prout discovers that the
chemical elements weigh a multiple of that of hydrogen. 1818: Augustin
Fresnel demonstrates that light is a transverse wave, rather than a longitudinal
wave. The stethoscope is used for listening to murmurs originating from
inside the body.
Human population estimated at
1 billion. Economic growth starts to outrun population growth.
The nineteenth century, the
age of science 1820-1894: the understanding of electricity and magnetism
takes off, resulting in its practical use in motors, light and communication.
The nineteenth century ends with the discovery of electrons, and the use
of plastics. During this period all sciences made great strides forward,
also because the occupation of scientist became a paid profession. Scientists
started to travel to national and international conferences. Emperor Napoleon
Bonaparte saw the benefits of science and supported it wholeheartedly.
Science and technology also took root in the USA. Darwin's theory of evolution
attracted much criticism but became widely accepted towards the end of
the century. Major advances are also made in anthropology and archaeology,
with the discoveries of ancient Egyptian, Maya and Peruvian civilisations.
The discovery of photography and spectrography greatly spurred Astronomy.
Biology made major strides with Darwin's theory of evolution, heredity,
cell biology and the work of Louis Pasteur. Chemistry received a proper
foundation and organic chemistry emerged, and the age ended with the first
plastics. Earth sciences, physics, mathematics, medicine and technology
all made great strides. War technology benefited from the discovery of
nitrocellulose (guncotton) and nitroglycerine (dynamite).
1820: Hans Christian Oersted
discovers electromagnetism, opening the way
for electricity generation, transmission and electric motors. Many other
scientists are involved: Dominique-Francois Arago, Andre-Marie Ampere,
Michael Faraday, etc. Guano is being used as natural
fertiliser. 1821: Electric motor
invented by Faraday, but Joseph Henry describes a practical electric motor
in 1831. Jean-Baptiste Joseph Fourier shows that any continuous function
can be composed as the sum of sine and cosine curves.
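As a formula (a modern gloss on the 1821 entry), a periodic function f with period 2π can be written as a sum of sines and cosines,
\[ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right), \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \quad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx . \]
This decomposition is what later makes the frequency analysis of radio and communication signals possible, as the 1810 entry above anticipates.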
1825: Candles made from fatty
acids rather than from tallow. Aluminium discovered by Oersted, but it
remains the most expensive element on earth. First
steam engine for both passengers and freight. Georg Simon Ohm
defines electric circuit analysis.
1830: Charles Lyell discovers
that earth must be hundreds of millions of years old. 1831: Charles Darwin
starts his famous world voyage aboard the Beagle. James Clark Ross
reaches the magnetic North Pole. 1832: Charles Babbage conceives of the
first computer, the Analytical Engine, driven by external instructions,
but the device was too complicated to work. 1833: Karl Friedrich Gauss
and Wilhelm Weber build an electric telegraph
operating over 2km, improved by Joseph Henry in 1835, and Samuel Morse
in 1837-1846. Electrolysis used to make aluminium. Amalgam is used as a
filling for teeth. Revolver firearm.
1835: Gustave-Gaspard Coriolis
describes the Coriolis effect, caused by Earth's rotation. 1836: the
combine harvester introduced in the US. 1838: Faraday discovers
phosphorescence in vacuum. 1839: Louis Jacques Daguerre makes silver photographs
on a copper plate (daguerreotypes). Fuel cell invented by William Robert
Grove, but these would not be used until late in the 20th century. William
Fox Talbot (Dorsetshire) invents photographic paper
for making negatives.
screw thread invented. 1842: First use of ether in surgery for
anaesthesia. John Lawes applies sulfuric acid to phosphate rock and makes
artificial fertiliser. 1843: first tunnel
under the Thames river. 1844: discovery that all cells in an organism
originate from divisions by the egg cell.
1845: Robert William Thomson
invents the rubber tyre. Guncotton discovered.
1846: the lock-stitch sewing machine. Use
of chloroform in surgery. 1847: George Boole invents symbolic
logic, later used in logic circuits and computers. 1847: doctors
start washing their hands to prevent transmission of diseases, and sterilise
their equipment. 1847: James Prescott Joule and Mayer discover the
law of conservation of energy and that the various forms of energy can be converted into one another.
1848: The revolution in France
restores the republic. 1849: Hippolyte Fizeau measures the speed of light.
Lord Kelvin (William Thomson), building on Sadi Carnot's earlier work, forms the thermodynamic
theory of heat, one of the most important bodies of physical law. Carnot
had been intrigued by why British steam engines were more effective than French ones,
and used thermodynamics to explain how heat engines work.
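A one-line statement of the key result (modern notation; an interpretation of 'how heat engines work', not wording from the source): no engine operating between a hot reservoir at temperature T_h and a cold one at T_c can exceed the Carnot efficiency
\[ \eta_{\max} = 1 - \frac{T_c}{T_h}, \]
with the temperatures measured on the absolute scale that Kelvin proposes in the 1850 entry below.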
1850: End of the Western slave
trade. Lord Kelvin proposes the absolute temperature scale to -273 degrees
Celsius, at which point molecules stop moving. London and Paris build sewer
systems, but sewage plants have to wait till 1915.
1851: The Great International
Exhibition in London, showcase for technology and industry. 1852: The origin
of spermatozoa explained. First steam-powered dirigible (balloon). Steelmaking
improved. 1855: invention of the mercury vacuum pump,
opening the way to produce cathode ray tubes, and leading to the discovery
of the electron. Aluminium becomes cheaper to make. 1856 Henry Bessemer
invents the Bessemer process for producing inexpensive
steel from pig iron, by blasting air through the melt to remove
impurities like carbon.
1857: Gregor Johann Mendel experiments
with peas to discover laws of heredity (1860, 1865). The
positions of nearly 500,000 stars are known and catalogued. 1859:
Darwin publishes On the Origin of Species. Edwin Laurentine Drake drills
the world's first oil well in Titusville, USA, through deep rock.
He invented drill bits to do so. Kirchhoff and Bunsen explain the chemical
nature of spectral lines. Jean Joseph Etienne Lenoir develops the first
internal combustion engine which works on coal gas.
1859-1869: the Suez
Canal is dug by Ferdinand de Lesseps. It allowed marine species
to migrate between seas that were once separated. 1860: The Frenchman Lenoir
builds the first car with internal combustion
engine. 1862: new telescopes with refracting lenses. Eye glasses
correcting astigmatism. The beginning of iron-clad
warships. Louis Pasteur discovers microorganisms
as the source of decay of food and wine. 1864: Agricultural
chemist George Washington Carver develops techniques for
regenerating the fertility of land by growing sweet potatoes and peanuts.
James Clerk Maxwell publishes a mathematical theory
for electric and magnetic fields, based on Michael Faraday's concept
of a field. It proves to be a unifying theory with considerable importance.
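In the vector notation later introduced by Heaviside (a modern restatement, not Maxwell's original form of some twenty equations), the unified field theory reads
\[ \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \quad \nabla\cdot\mathbf{B} = 0, \quad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t} . \]
Combined, these equations predict electromagnetic waves travelling at the speed of light, which is why the theory proved so unifying.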
1865: Joseph Lister introduces carbolic acid
as a disinfectant in surgery, reducing the surgical death rate from
45% to 15%. 1866: Beginning of the science of ecology. Telegraph
cable across the Atlantic Ocean. Torpedo invented.
1867: Karl Marx writes Das
Kapital, the theory that society evolves as a result of conflict between
classes. Dating from tree rings. First commercially practical generator
for alternating current. George Westinghouse (USA) invents the
air brake for trains, solving a major problem, allowing all wheels to brake
with equal strength.
1868: Robert Scott reaches the
North Pole. First skeletons of modern humans, 35,000 year old in the Cro
Magnon caves, France. Discovery of helium from a spectral line in sunlight.
Beginning of acoustic engineering. Harpoon cannon with explosives for whaling,
which would start global whaling.
1869: Dmitri Mendeleev and Julius
Lothar Meyer publish the Periodic Table of Elements,
with 63 of the 90 naturally occurring elements, putting chemistry on a
firm basis. First commercially practical generator for direct current.
First trans-USA railway completed.
tunnel through the Alps. 1872: Roald Engelbregt Amundsen is born; he will reach
the South Pole in 1911. Ferdinand Cohn classifies bacteria into genera and species.
Charles Darwin argues that human emotions stem from the behaviour of animals.
Start of the first major oceanographic expedition on the Challenger.
First US national park, Yellowstone. 1873: beginning of psychology.
1874: DDT insecticide discovered. George
Stoney estimates the charge of the electron.
1875: The official kilogram,
a bar of platinum, becomes the official standard for weight. Medical knowledge
discovers enzymes and their functions. Liquid helium made by cooling air.
First practical refrigerator, running
on liquid ammonia gas. Birth of the science of bacteriology. Louis Pasteur
discovers that some bacteria kill others, which would lead to the discovery
of antibiotics. First glider with arched
wings. Nikolaus August Otto develops the four-stroke
internal combustion engine. 1878: discovery that microbes
convert ammonium compounds into nitrites and nitrates, which can be absorbed
by plants. Invention of rayon fibre made from cellulose. Physiologist
Paul Bert discovers caisson disease (the bends) in divers, caused by nitrogen
gas, and its cure of gradual decompression. First commercial
telephone exchange. Discovery of saccharin, a sugar substitute
made from tar. Louis Pasteur discovers immunity to cholera by a weakened
cholera pathogen, paving the way to immunisation
by vaccination, an effective defence against the major communicable
diseases of the time: cholera, tuberculosis, tetanus, diphtheria, rabies.
Before that, he persuaded French army surgeons to sterilise
their equipment and patients' wounds. First electric railway
demonstration in Berlin. Thomas Edison (USA) and Joseph Swan (England)
invent the carbon-thread incandescent electric bulb.
cause of malaria discovered. Chlorination of drinking water.
Piezo-electricity discovered, which would lead to precise crystal oscillators,
gramophone pickups, etc. London's first electric
generation station. 1881: Vector analysis
discovered, a major contribution to science. The science of aerodynamics
begins, leading to the design of aircraft. First electric streetcar in
Berlin. First colour photograph. The motion picture camera is used to study
motion in people and animals. Invention of the self-regulating electrical
generator. 1882: use of photography to map stars. Discovery of chromosomes.
Edison patents a three-wire system for transporting
electrical power, which is still in use today. The Maxim
machine gun invented. 1883: The quagga, a close relative of
the zebra, dies out. The volcanic island of Krakatoa explodes, killing
40,000. Synthesis of antipyrene, a pain killer. Cocaine was already in
use. Daimler develops a high speed internal combustion
engine for a boat; two years later for a motor cycle and in 1887
for a car. Manganese steel, a super hard steel
discovered. 1884: Greenwich becomes the prime meridian. Otto Wallach systematically
isolates terpenes from essential oils, laying the basis for the perfume
industry. The Linotype typesetting machine
invented, and the roll film and fountain pen.
1885: Sigmund Freud develops
his theories of psychoanalysis. James Dewar invents the thermos flask or
Dewar flask for retaining heat. William Stanley invents the electric
transformer. Britain starts the largest
irrigation project on the Indus River in India's Punjab, the size
of Greece (14 million ha). It leads to salinisation
problems in 1960. 1886: Hermann Hellriegel discovers that legumes
fix nitrogen from the air for growth. Later it is proved that
this is done by bacteria in their roots. Alfred Bernhard Nobel discovers
a smokeless nitroglycerine explosive,
called ballistite. Sound recording on wax discs (Bell) differing from the
phonograph rolls of Edison. Charles Hall makes aluminium
from alumina ore using electric power, heralding the way for
cheap aluminium. Electric welding invented.
1887: Vito Volterra founds functional analysis,
which would become a mainstay of engineering, later expanded by Henri
Lebesgue. The Michelson-Morley experiment shows that the speed of light is constant;
Einstein would later build on this and on the idea that light has both particle and wave properties. Discovery of haploid sex cells
with half the number of chromosomes. Contact lenses invented and celluloid
photographic film. 1888: Fridtjof Nansen crosses Greenland by land.
Radio waves detected by Heinrich Hertz. Invention of the adding
machine, air-filled rubber tyres
(Dunlop), a commercial roll-film camera (Eastman). 1889: Ivan Petrovich
Pavlov shows that stomach juices can be conditioned to the sounding of
a bell. James Dewar invents cordite, a smokeless
gunpowder. Francis Galton formulates the statistics
for correlation and the standard error. First hydrolake
and electric hydro power generator. The Eiffel
Tower, 303m. Only 550 live bison remain in
the USA, out of millions living there before.
1890: Oliver Heaviside invents
the operational calculus, highly important
to technology. Paul Ehrlich establishes the field of immunology,
also discovering passive and active immunity. Sterilised rubber gloves
used during surgery. Hollerith develops a punched-card machine for counting
the census. First aircraft to fly under its
own power. 1891: Stock exchange collapse in the UK, followed by a deep
economic depression (1891-1893). Carborundum (silicon carbide) discovered.
Nikola Tesla invents high frequency high voltage electricity, which produces
spectacular auroras, believed by many to have supernatural properties.
1892: Tobacco mosaic disease identified as caused by a virus, too small
to be seen under a microscope. Fingerprints used for identification. Viscose
India inoculated against
cholera, reducing death rate by 70%. First open heart surgery. The
four function mechanical calculator. Diesel engine
invented. Homo erectus fossils found in Java (Java Man).
The double life cycle of mosses and ferns discovered. Baden-Powell uses
kites to lift human beings into the air. Marconi builds the first radio
transmitter and receiver, which will ring a bell at 10m distance.
Human population estimated at
First half of the twentieth century.
The discovery of X-rays, radioactivity, subatomic particles, relativity
and quantum theory had a profound effect on many disciplines of science,
marking the beginning and ending of a remarkable period in which the major
foundations for all sciences were laid. The end of this period is marked
by the world war, the atom bomb in 1945, and the invention of computers.
The number of scientists grew profusely, influencing society. Industries
founded their own research laboratories. After European scientists fled
Hitler Germany, the USA became the leading country of science. Germany
on the other hand, focused on research for the military, resulting in its
supremacy at war. The politics of the first decades of this century were
inspired by laissez-faire capitalism, leading to the Great Depression of
the thirties. Experimental biology made major strides, leading to the new
discipline of biochemistry. The understanding of electricity and electrons
opened a new branch of technology: electronics, which changed society profoundly
in almost every way.
1895: Konstantin Tsiolkovsky
publishes the first scientific papers on space flight. Chromosomes
identified as the carriers of heredity. Wilhelm Konrad Roentgen
discovers X-rays. First manned glider
able to gain height. 1896: Svante Arrhenius discovers that the amount of
carbon dioxide in the atmosphere determines the global temperature and
theorises that ice ages occurred because of reduced atmospheric CO2. Charles
Guillaume discovers Invar, a nickel-steel alloy which does not expand
nor shrink with temperature. It proves to be of major benefit. First diagnostic
X-ray photograph. Herman Hollerith founds the Tabulating Machine Company,
which later becomes IBM. First steam-driven flying machine flies 1.2km.
1897: The jet stream discovered. Turbine-powered
steamships prove to be superior to conventional steamships. Casein
plastics discovered. 1898: thermite discovered, a mix of aluminium powder
and iron oxide that burns at high temperature, leaving iron behind. It is used
for welding. Discovery of foot-and-mouth disease virus. A vaccine against
1900: Gregor Mendel's work on
heredity rediscovered. Emil Wiechert invents the inverted pendulum seismograph,
still in use today. Electricity in metals explained. Invention of the nickel-alkaline
accumulator. Gamma rays discovered. Count
Ferdinand von Zeppelin flies his first dirigible (navigable balloon). 1901:
artificial insemination used in agriculture (Russia), after it was invented
in 1785. Inventions: electric typewriter, motor bicycle, vacuum cleaner,
crystal radio detector, safety razor, mercury vapour arc lamp, motor-driven
airplane (Whitehead). 1902: Mount Pelee on Martinique erupts, killing 38,000.
Heaviside predicts the ionosphere to reflect radio waves, and Kennelly
confirms this. Four human blood groups discovered. Rutherford and Soddy
explain radioactivity and its associated rays. Inventions: lightbulbs with
osmium filaments, the spark plug, the airconditioner, high
voltage ignition for internal combustion engines, electric hearing
aid, the drum brake. 1903: A method for producing nitric acid from atmospheric
nitrogen. A method for making artificial silk (Viscose). Proposal
to use X-rays to treat cancer. First successful airplane
launched by Wilbur and Orville Wright, flew for 59 seconds. 1904 Panama
Canal started. Stanley Hall argues that the child, in its development,
recapitulates the life history of the race. Inventions: ultraviolet lamps,
flat disc phonograph, photoelectric cell, vacuum
tube (diode), offset printing.
1904-1905: Japan fights Russia
with an army vaccinated against infectious diseases, for the first time
claiming more deaths from battle than from disease. Antarctic whaling started.
Invention of hydrogenation of fatty acids, allowing whale oil to be turned
into margarine and soap. Whale glycerin is used to make nitroglycerine
1905: Zoological classification.
The structure of DNA is being unravelled. Female mammals have two X chromosomes;
males one X and one Y. Hormones being studied. Adolf von Bayer has discovered
many organic dyes. Discovery of cellophane and chlorophyll. Development
of the intelligence test. Direct blood transfusion. Einstein submits his
paper on the theory of relativity and
the famous relationship between mass and energy. Inventions: safety glass,
submarine, Cottrell dust remover, directional radio antenna,
dial telephone, first airplane factory in France. 1906: A mysterious explosion
near Tunguska in Siberia, the cause of which has never been identified.
Frederick Hopkins discovers essential food substances,
the vitamins. The great San Francisco earthquake kills 700.
The Milky Way is a spiral galaxy. Inventions: tungsten
filament lightbulb, freeze-drying, music
and voice transmission by radio waves. 1907: Discovery that
can affect the physical functioning of the body. Uranium used for
dating rocks. John Scott Haldane develops the diver decompression tables.
Inventions: radio amplifier, paint
spray gun, first helicopter flies for 20 seconds, amplifier vacuum triode.
Louis Lumiere invents colour photography.
1908: 1.5m Hale reflecting telescope at Mt Wilson. Fritz Haber develops
the process for making cheap ammonia from nitrogen
gas, which would lead to the agricultural revolution with artificial
fertilisers. It would also give the Germans an inexhaustible
supply of explosives. The 'Student' t-test for probability in experimental
data. Sulfanilamide discovered, but its bacteria-killing properties would
not be known until 1936. Inventions: first tractor
with moving treads, rock drilling bit for oil drilling, gyroscopic compass,
cellophane, Geiger counter, Orville Wright flies for one hour, Model T
Ford. 1909: Robert Peary and Matthew Henson reach the North Pole.
Andrija Mohorovicic discovers the Earth's boundary between crust and mantle.
Inventions: the electric toaster, bakelite insulating
plastic which solidifies on heating (replaces wood, ivory, hard
rubber), Louis Bleriot flies across the English Channel in 37 minutes.
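The chemistry behind the 1908 Haber entry, written as the overall reaction (a standard textbook statement rather than something taken from this timeline); the process runs it at high pressure and temperature over an iron catalyst:
\[ \mathrm{N_2 + 3\,H_2 \rightleftharpoons 2\,NH_3} \]
Ammonia made this way feeds both artificial fertilisers and nitrogen-based explosives, which is why the same invention appears on both sides of the entry.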
1910: Paul Ehrlich introduces Salvarsan
(an arsenic compound) as a cure for syphilis, the forerunner of chemotherapy.
Discovery that some animal cancers are caused by viruses. Tincture
of iodine used for disinfecting wounds. Inventions: electric
washing machines, rayon stockings for women, neon street lights. Eugene
Ely takes off in an airplane from the deck of a ship, showing that aircraft
carriers are possible. Charles Proteus Steinmetz warns about
air pollution from burning coal, and water pollution from sewage discharges.
Less than 1 million motor vehicles worldwide.
1910-1920: Mexican revolution,
developing the Mexican oil fields. By 1919 Mexico is second-largest oil
producer until Venezuela takes over in 1928. The
era of cheap oil begins.
1911: Roald Amundsen reaches
the South Pole. Genes are being mapped on chromosomes. Thermal
cracking of oil for refining petroleum. William Hill develops
the first gastroscope. Superconductivity discovered in mercury. Owen Richardson
explains the Edison Effect, whereby electrons boil out of a heated cathode.
This would later give rise to electron tubes and the science of electronics.
Rutherford presents his theory of the atom. Frederick Soddy
discovers more about isotopes and radiation.
Inventions: escalators, self-starter for automobiles,
1912: The Titanic sinks on its
maiden voyage, killing 1500. Robert Falcon Scott dies in Antarctica after
reaching the South Pole, and finding that Amundsen had beaten him by a
month. Alfred Lothar Wegener proposes the theory of continental drift and
the super continent of Pangea. It will take another 60 years before being
accepted. Inventions: gas regulator,
heating pad which becomes the electric blanket.
1913: Theory of stellar evolution.
Muscle cells use oxygen after a contraction has finished. Friedrich Karl
Bergius discovers how to make gasoline from coal,
by hydrogenation at high pressure. This would later power Hitler's war
machine. Niels Bohr describes atoms and electrons in detail. Inventions:
transmitter with vacuum tubes, mammography for breast cancer,
Igor Sikorski flies a multi-engined plane.
1914: The assassination of Austrian
archduke Francis Ferdinand and his wife in Sarajevo (Yugoslavia) precipitates
World War I. The role of ATP (adenosine triphosphate) discovered as the energy
molecule in cells. Beno Gutenberg discovers the boundary between Earth's
mantle and core at 3000km depth. Ernest Rutherford discovers the proton.
The Passenger Pigeon, only 50 years earlier living in billions, dies out.
Inventions: a sewage plant operating on bacteria,
red and green traffic lights, the brassiere, experimental rockets, teletypewriter.
1915: formal opening of the Panama
Canal. Inventions: the radio tube oscillator, Pyrex heat-resistant
glass, transatlantic and transcontinental radiotelephone,
aircraft with machine guns able to fire through the propellers, semiconductor
germanium diode, tractor trailer, sonar for detecting icebergs.
1916: discovery of cobalt-tungsten
steel magnetic alloys. Inventions: windshield wipers, washing machine.
1917: beginning of the Russian
revolution as communists assume control of government. Black holes predicted
from Einstein's equations. Freezing as a way of
preserving foods; it will change agriculture and trade, and
eventually permeate every household. The power chain
saw would become popular after World War 2, allowing men to
cut trees up to 100x faster.
1918: World War 1 ends when
the Germans surrender. The mass spectrograph becomes an important scientific
tool. Inventions: radio crystal oscillator, electric beater for foods,
superheterodyne radio receiver allowing for precision tuning and low noise
reception, high speed hydrofoil boat (Bell). 1918-1919 a world influenza
pandemic kills over 30 million people, about as many as WW1 and 2 combined.
Double-cross hybrid maize.
1919: Eccles & Jordan discover
the flipflop circuit, the basic element for computer information storage.
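A toy model of why a flip-flop can store a bit: two cross-coupled NOR gates hold their state after the set/reset inputs return to 0. This sketch is written for this note and models the logic only, not the original Eccles-Jordan valve circuit.
```python
def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return int(not (a or b))

def sr_latch(s: int, r: int, q: int, q_bar: int):
    """Let a cross-coupled NOR latch settle for given set (s) and reset (r) inputs."""
    for _ in range(4):                 # a few passes are enough for the gates to agree
        q = nor(r, q_bar)
        q_bar = nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                        # start in the 'reset' state (stores 0)
q, q_bar = sr_latch(1, 0, q, q_bar)    # pulse set   -> latch now stores 1
q, q_bar = sr_latch(0, 0, q, q_bar)    # inputs idle -> the 1 is remembered
print(q)                               # prints 1: the bit is held with no input applied
```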
1920: Walter Nernst completes
the theory of thermodynamics. Existence of the neutron proposed. Inventions:
the Thompson gun, a submachine gun. Working farm animals
require 25% of crop production in the USA. Creosote oil preserves timber,
requiring less replacement.
1921: Alexander Fleming discovers
bactericidal properties of mucus. Thomas Midgley invents tetraethyl lead
to prevent knocking in gasoline engines, making high-compression and more
efficient engines possible. In 1930 he would invent Freon (a CFC), earning
him the reputation of making the largest impact on the atmosphere. A tuberculosis
vaccine (BCG).
1922: Benito Mussolini takes
power in Italy. Discovery of vitamin-D in cod liver oil. Sonar
for measuring the depth of the oceans.
1923: Theodor Svedberg develops
the ultracentrifuge, which becomes an essential tool in biochemistry. Inventions:
photoelectric cell, autogiro, continuous hot-strip rolling of steel which
requires precise machine control. A diphtheria vaccine.
1924: Australopithecus identified from a fossilised skull. Galaxies are distant Milky Ways. Willem
Einthoven invents the electrocardiograph. Inventions: spiral-bound books,
celluwipes tissues, self-winding watch, photographic radio transmission
(forerunner of facsimile), iconoscope (forerunner of TV).
1925: Ronald Aylmer Fisher publishes
Statistical Methods for Research Workers, which becomes a standard work on statistics
for science. Carl Bosch invents a method
for producing hydrogen. Experiments begin in colliding atoms
to study their composition. Half-integer quantum numbers for electron spin.
Inventions: the analogue computer for solving differential
equations. First whaling factory ship with rear ramp, 'seagoing
1926: Erwin Schroedinger formulates wave mechanics. Discovery
that X-rays cause genetic mutations. Inventions: talking
movies, pop-up toaster, liquid fuel
rocket reaches 56m height.
1927: Big flooding of the Mississippi
River, evacuating half a million and killing hundreds. Georges Lemaitre
proposes the Big Bang theory of the creation of the universe. Rudolf
Geiger founds the study of microclimatology. Inventions: the iron lung,
pentode vacuum tube, negative feedback in amplifiers,
thus reducing distortion.
1928: Richard Evelyn Byrd establishes
a camp on Antarctica and begins an extensive program of exploration. Invention
of the Diels-Alder reaction for combining atoms into molecules, useful
for synthetic rubber and plastics.
Adolf Windaus studies cholesterol. Alexander Fleming discovers penicillin,
which will be used clinically in World War II.
1929: New York sharemarket crash
(Black Thursday) starts the Great Depression (1929-1939), but the entire
1920s after World War 1 were times of economic turmoil. Swiss astronomer
Fritz Zwicky proposes that red shift of distant galaxies is caused by photon
'fatigue', by which photons lose potential energy, a theory which is not
generally accepted. M Matuyama shows that rocks of different strata have
their magnetic fields changed, even reversed. This technique would later
prove plate tectonics and the magnetic pole reversals. Hans Berger develops
the electroencephalogram (EEG). Detection of cosmic
rays. Inventions: FM radio, Empire State Building
construction (1929-1931), foam rubber (Dunlop), quartz crystal clock, Wankel rotary engine.
1930: Andrew Ellicott Douglas
establishes the use of tree rings for dating (dendrochronology),
discovered in 1863. Arne Wilhelm Tiselius invents electrophoresis for separating
proteins with electric currents. Inventions: polystyrene,
polyvinyl chloride (PVC), Freon as a gas for refrigerators, frozen foods
being marketed, sliced bread, tape recorder, jet engine. 50
million motor vehicles worldwide.
1931: Experiments by Karl Jansky
led to the founding of radioastronomy. Inventions: neoprene,
the George Washington Bridge over the Hudson River
spanning 1066m. Last sturgeon fish caught in the river Rhine.
1932: Gerhard Domagk discovers
the sulfa drug Prontosil, hailed as
a wonder drug, which he tried on his own daughter in 1935. In 1936 it is
discovered that Prontosil breaks down to Sulfanilamide,
the active ingredient (see the discovery of sulfanilamides in 1908). The
second sulfa drug, sulfapyridine will
be discovered in 1937. Neutron and positron discovered. Neutrons change
Bohr's view of the atomic nucleus. Auguste Piccard enters the stratosphere
by balloon. Publication of Thomas Hunt Morgan's The scientific basis
of evolution. Inventions: television receiver
with a cathode ray tube (CRT).
1933: Adolf Hitler rises to
power by being appointed chancellor of Germany. The Tasmanian Wolf dies
out in a zoo. Discovery of deuterium and heavy water, and tritium. Ernst
Ruska builds the first electron microscope,
able to magnify 12,000 times. Van de Graaff develops a static
electricity generator capable of producing 7 million volts. Clarence
Zener explains the quantum tunnelling effect and the breakdown of insulators.
The Zener diode is named after him. Inventions: artificial
vitamin C (ascorbic acid), high-intensity mercury vapour lamps.
X-ray diffraction photography to determine the nature of proteins. Isolation
of progesterone, a female hormone. Arnold Beckman invents the pH
meter for measuring acidity and alkalinity. Jesse Beams develops
an ultracentrifuge that works in vacuum. William Beebe descends to 1001m
below the ocean's surface in the tethered Bathysphere. Inventions:
the first streamlined car (Chrysler Airflow), Wernher von Braun
develops a liquid-fuel rocket that achieves a height of 2.4 km.
1935: Sydney Chapman determines
the lunar air tide, the effect of the moon's gravity on the atmosphere.
Charles Francis Richter develops a scale for measuring the strength of
earthquakes. Discovery of the last essential amino acid, threonine. Testosterone,
a male hormone, isolated. The four-step Krebs cycle
discovered, by which animals and plants produce energy (respire). Isotope
separation by centrifuge, which would become important for making nuclear
weapons. Inventions: riboflavin (vitamin B2), vitamin K, the beer can, nylon.
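The scale in the 1935 entry is logarithmic; in its textbook form (standard notation, with A the measured seismograph amplitude and A_0 a fixed reference amplitude, neither taken from this text),
\[ M = \log_{10}\frac{A}{A_0}, \]
so each whole step in magnitude corresponds to a tenfold increase in recorded amplitude.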
1936: DNA is isolated in its
pure state. Artificial heart in use during cardiac surgery. Development
of the field-emission microscope which makes individual atoms visible on
a fluorescent screen. Inventions: synthesised vitamin B1, fluorescent
lighting, paperback books (Penguin), regular public
TV transmissions, Colorado River Boulder
Dam, German engineer Heinrich Focke develops the first practical helicopter.
A digital computer using relays.
1937: Japan invades China. Discovery
of RNA in tobacco mosaic virus. William Cumming Rose discovers the 10 essential
amino acids (out of 20), essential to rats, and 8 essential to humans (have
to be part of our diet). A V Kazakov explains the origins of phosphate
rocks related to ocean upwellings causing abundant sea life. The magnetic
resonance method discovered for studying the atomic nucleus. Inventions:
vitamin A, sulfapyridine, antihistamine,
vaccine against yellow fever,
1938: Nuclear fusion, the energy
source of the Sun, explained. Claude Elwood Shannon lays the basis of the
theory of information. 1024 genes of the Drosophila fruit fly
mapped. First living Coelacanth caught. G S Callendar detects increases
in CO2 due to human activities. Total artificial hip replacement using
stainless steel. Otto Hahn splits the uranium atom,
opening the possibility for nuclear energy and bombs. Inventions:
lysergic acid diethylamide (LSD), radio altimeter, ballpoint pen, Nylon
goes on sale, Porsche introduces the Volkswagen Beetle, Perlon synthetic
fibre. To stop Japanese advance in the Sino-Japanese war, Chinese Nationalists
destroy the dykes in the Yellow river, drowning several hundred thousand
people, flooding 11 cities, 4000 villages and destroying millions of hectares
1939: German and soviet forces
begin occupying Poland, starting World War II. Albert Einstein writes to
President Roosevelt, which will lead to the US developing an atomic bomb.
Several scientists make progress on the fission of uranium and the possibility
of a chain reaction. DNA and RNA are always present in bacteria. Inventions:
(ICI), DDT insecticide, sulfathiazole
(3rd sulfa drug), electric knife, the complex number calculator, regular
commercial flights across the Atlantic Ocean (PanAm), precooked frozen
foods, first Sikorski mass-produced helicopter, fully automatic transmission,
electron microscopes 50 times more powerful than light microscopes.
1940: Use of C14 for carbon
dating, the most useful of radioactive tracers. Rhesus factor in
blood discovered. Development of the gas-diffusion method to enrich uranium.
collapse of the Tacoma Narrows Bridge leads engineers to consider aerodynamic
stability in bridges and buildings. Inventions: biotin/vitamin H,
drying for food preservation, colour TV broadcasts. St.Louis USA
first to adopt smoke abatement policies for cleaner air. Most industries
would follow after 1966.
1941: Japanese attack on Pearl
harbour. One gene one enzyme hypothesis. Inventions: aerosol spray for
insecticides, terylene or dacron,
1942: Solar radio emission detected.
First radio maps of the universe. First images of a virus. Louis Fieser
develops napalm, jellied petroleum,
which burns viciously while sticking to victims. Enrico Fermi's team creates
the first controlled nuclear chain reaction.
John Atanasoff and Clifford Berry invent the Anatoff-Berry
Computer (ABC), which works with capacitor storage on a rotating
drum. It could berform operations with 8-digit decimal numbers (16 bits).
This is now recognised as the first binary digital computer. Inventions:
LORAN Long Range Air Navigation system along the US Atlantic seaboard.
1943: The proton synchrotron
proposed for accelerating atomic particles. Inventions: silicones
(Dow Corning), kidney dialysis machine, antibiotic
streptomycin effective against gram-negative bacteria, first
operational nuclear reactor at Oak Ridge, Jacques Yves Cousteau underwater
lung, continuous casting of steel,
first vacuum tube computer (Colossus) used for cracking German encryption.
Washington DC builds its first sewage plant.
1944: German forces begin operating
V1 flying bombs, and V2 rocket-propelled bombs. Radio waves at 21.1cm predicted
from interstellar hydrogen. Discovered that DNA is the hereditary material
for almost all living organisms. Paper chromatography
discovered as a tool to identify biochemical compounds. The theory
of games and economic behaviour discovered. Aureomycin,
the first tetracycline extracted from soil organisms. Inventions: the Automatic
Sequence Controlled Calculator with vacuum tubes and punched papertape
for programming, made by IBM, but is not very reliable.
1945: Germany surrenders to
the Allies. Hiroshima bombed with a uranium-based
nuclear fission bomb. Nagasaki bombed with a plutonium-based fission bomb.
Japan surrenders. Salvador Luria shows that bacterial mutations are closely
related to those of their predators, viral bacteriophages. It would open
the path to genetic engineering. The synchrocyclotron particle accelerator
proposed. Inventions: herbicide 2,4-D,
fluoridation of water supplies, ENIAC computer (Electronic Numerical Integrator
and computer), the first all-purpose stored-program electronic computer.
It had 18,000 vacuum tubes and worked with decimal numbers. A vaccine against
Human population estimated at
2.5 billion. 75 million motor vehicles.
The age of reconstruction.
1946-1959. The world war vastly accelerated scientific discoveries.
Many scientists and their laboratories were conscripted for war research,
results of which remained classified for many years, some even today. As
many countries rebuilt themselves, the progress in technology was used,
causing a period of prosperity and rapid economic growth. In pure science
as well, the unexpected continued to occur. Few scientists in the brief
optimistic period between the end of the World War and the start of the
Cold War would have predicted that before the end of the century our evolutionary
history would be revealed in the test tube instead of in fossils; people
would walk on the moon; the beginnings of the universe would be explained;
the secrets of heredity would be unravelled; the discarded theory of continental
drift would resurface as the most vital part of earth sciences; many famous
conjectures in mathematics would be resolved; the elementary particles
would almost make sense; and solid-state devices would replace vacuum tubes
in most applications. As the cost of science soared, nations would pool
together and scientists would work together in interdisciplinary teams
of specialists. Initially optimistic about the benefits of science, some
people become critical about scientists' apparent immorality, leading to
armaments, nuclear weapons, nuclear energy, industrial and agricultural
chemicals, bacterial and insect resistance to biocides, genetic experiments
and genetic engineering.
1946: The first meeting of the
United Nations (UN). International Whaling Commission established. Genetic
material from two viruses can combine to make a new virus. Discovery that
carbondioxide can cause water vapour to condense in the atmosphere. First
synchro-cyclotron at Berkeley. High pressure physics.
research becomes a new discipline. Inventions: First Soviet
nuclear reactor, ENIAC computer,
1947: Synthesis of ADP and ATP,
used by cells to convert energy. Quantum Electrodynamics (QED) to explain
new subatomic particles. Discovery that radiation is produced when charged
particles change path. Discovery of the muon, pion, meson nuclear
particles. Inventions: tubeless tyre (Goodyear).
1948: 5m Hale reflecting telesope
at Mt Palomar USA. The Big Bang theory gains credibility. Protons and neutrons
in the nucleus occupy shells, like the electrons outside. Effect of DDT
on insects understood. Norbert Wiener develops the mathematical
theory of feedback systems and automation. Inventions: cortisone
against inflammations, atomic clock, the computer term 'bug', Velcro,
long-playing record, polaroid camera, the cinematograph of Louis Lumiere,
1949: The German state is split
into East and West. The North Atlantic Treaty Organisation (NATO) is created.
A rocket testing ground is created at Cape Canaveral, the place in the
USA closest to the equator. A 2-stage V2 rocket reaches 400km height. F
M Burnett proposes that immune response is not inborn, but develops as
a person grows. Derek Barton will discover that the
shape of molecules is important to their properties. Structure
of penicillin calculated. Claude Shannon publishes his work on information
theory, and symbolic logic, which would herald the way to communication,
error detection and correcting codes and provide a basis for many scientific
disciplines. Inventions: X-rays in medical diagnosis, EDSAC computer, BINAC
1950: Troops from North Korea
invade South Korea, starting the Korean War. Jan hendrik Oort discovers
the Oort cloud of comet material. Inventions: embryo
transplants for cattle, cyclamate artificial sweetener, Diners
Card, commercial colour TV in the USA,
Baltic Sea and Black Sea eutrophied and smelly. Nile Perch released in
Lake Victoria, would extinguish over 200 endemic cichlid species within
1951: Computers used in astronomical
calculations and mapping the Milky Way. Alpha-helix structure of proteins
discovered. Nikolaas Tinbergen publishes The study of instinct,
an important study of animal behaviour. Nikolay Vavilov
in The origin, variation, immunity, and breeding of cultivated plants
finds the evolutionary basis of immunity in various strains of wheat.
Synthesis of the steroids cortisone and cholesterol. Erwin W Mueller develops
the field ion microscope. Walter Henry Zinn USA develops an experimental
breeder reactor. Inventions: 3-D motion pictures, power steering
(Chrysler), the zebra street crossing, first commercial
1952: Joseph Stalin dies in
the USSR. A Polio epidemic in the USA affects 47,000. Hodgkin and Huxley
explain how nerves work. Rapid Eye Movement in sleep detected. Douglas
Bevis develops amniocentesis, a method of examining the genetic heritage
of a fetus while still in the womb. Jean Dausset discovers that repeated
blood transfusions cause antibodies to be formed. James Alfred van Allen
invents the rockoon, a rocket launched from a balloon, to study the physics
of the upper atmosphere. First nuclear accident causes the core of a nuclear
reactor to explode. Inventions: First sex-change operation, a killed-virus
vaccine against polio (vaccination started in 1954). Later a live-virus
vaccine would be used, developed by Albert Sabin. Development of the H-bomb,
based on nuclear fusion. Antabuse, a drug preventing alcoholics
from drinking; the heart-lung machine, the universal reaction blood test,
UNIVAC computer used in election prediction, transistor hearing aid, pocket
transistor radio. 4000 people die of a London fog.
1953: Elizabeth II crowned Queen
of the UK. Sir Edmund Percival Hillary and Tenzing Norgay reach the summit
of Mount Everest. Flooding of Holland when sea dykes are breached in a
rare storm with high tides, killing 1500. Super clusters of galaxies discovered.
Radio emissions from the Crab nebula explained as synchrotron radiation
(caused by charged particles changing course). Alfred C Kinsey produces
a landmark study of the sex practices of US women. It reveals that half
have sex before marriage, a quarter are unfaithful and a quarter have had
homosexual relationships. Second Coelacanth discovered. Watson
& Cricks unravel the double helix structure of DNA. Tars
from tobacco smoke cause cancers in mice. Structure of insulin protein.
Inventions: phase-contrast microscope, radial tyres (Pirelli), the MASER
(Microwave Amplification by Stimulated Emission of Radiation) with ammonia
gas, for amplifying weak microwaves from the universe and in communication.
1954: France's occupation of
Indochina ends. Humans have 46 instead of 48 chromosomes. Fossils of bacteria
and blue-green algae discovered in Canada. Chlorpromazine (Thorazine) introduced
for the treatment for mental disorders. CERN (Centre Europeen de Recherche
Nucleaire) founded in Geneva. Liquid hydrogen bubble chamber for detecting
nuclear particles. Inventions: Soviet Union first
nuclear reactor for peacetime use, TV dinners in USA, Nautilus
atomic powered submarine, photovoltaic
cell produces electric power from sunlight. A vaccine against
polio. First international accord against dumping oil at sea.
1955: Formation of the Warsaw
Pact against NATO. 76.2m radio telescope dish at Jodrell Bank, England.
Radio interferometry to join radiotelescopes into more accurate observers.
Various forms of RNA discovered. Vitamin D. The field ion microscope pictures
individual atoms. Inventions: domestic deep freezers,
artificial diamonds for industrial use, hovercraft, optical fibres. Over
100 million motor vehicles worldwide.
1956: Hungarian uprising against
Soviet domination. Israeli, British and French forces invade Egypt to prevent
nationalisation of the Suez canal (the Suez Crisis) but had to retreat.
Hydrogen bomb test in Bikini Island. William Clouser Boyd identifies all
13 human races by their blood groups. Human Growth Hormone isolated. Mid-Oceanic
Ridges discovered. Inventions:
birth coltrol pills,
electric watch (Lip), transatlantic telephone cable, FORTRAN (FORmula TRANslator)
computer language, LISP (LISt Program) computer language, MANIAC1 chess
computer. London proclaims the Clean Air Act. Minamata disease (Japan)
due to pollution from methyl-mercury. Soviet planners begin to divert the
water flowing to the Aral Sea for cotton cultivation. By 1960 this land-locked
sea shrinks by 60 million cubic km per year, leading to its death by 1990
and extinction of 20 endemic fish species.
1957: The Common Market (EEC/
EEG) established in Europe. Sputnik 1 and 2 artificial
satellites launched by the USSR. G E Hutchinson reasons that
an ecological niche may exist both in space and behaviour of an organism.
Discovery of interferon, a natural substance produced by the human body
to fight viruses. Albert Sabin develops a live polio vaccine. Tunnelling
in semiconductors. Discovery that the highly poisonous Dioxin may contaminate
herbicides. Inventions: 2,4,5-T herbicide,
speed painless dental drill,
1958: Solar 'wind' detected.
Earth is slightly pear shaped with a 15m bulge in the Southern Hemisphere.
Enzymes correspond to genes. Discovery of an all-female lizard species.
The brain's hypothalamus identified as a centre for the production of hormones.
James Van Allen discovers the Earth's radiation belts
named after him. Inventions: bifocal contact lenses, ultrasound to examine
unborn children, first nuclear electric power station in the USA, a chess
program on an IBM704 computer, cyclotron for particle acceleration. USA
starts building its Interstate Highways.
1959: The Antarctic Treaty keeps
Antarctica free from military activity and exploration (later, in 1988
it would be amended to allow for exploration). Radar contact made with
the sun. The soviets launch Lunik1 but instead of orbiting the moon, it
orbits around the sun. Lunik2 hits the moon. Lunik3 passes the moon, taking
the first photographs of its far side. G E Hutchinson defines more principles
of ecology. Inventions: the THI (Temperature
Humidity Index) as a measure of discomfort, XEROX photo copier, the
Lawrence Seaway is opened, connecting the St Lawrence River and
the Great Lakes, first artificial diamond (De Beers), COBOL (COmmon Business-Oriented
Language) computer language.
1960: Project OZMA starts the
search for extraterrestrial life (SETI= Search for Extraterrestrial Intelligence).
Dolphins use echo sound for locating objects. Jacques Piccard descends
to 10,900m, the bottom of the Challenger Deep, in his Trieste bathyscaphe.
(Light Emission by Stimulated Emission of Radiation).
Inventions: Astroturf plastic grass, ruby laser,
Echo passive communications satellite.
During the 1960s more than one large dam will be completed every year.
1961: Discovery of Homo
habilis (Handy Man). USSR tries a Venus probe but loses contact. Yuri
Gagarin (USSR) becomes the first human astronaut, orbiting Earth 1.8 hours
in Vostok1. Alan B Shepard (USA) becomes second astronaut in space
in a 15 minute sub-orbital flight. Soviet cosmonaut G Titov orbits Earth
17 times in 25.6 hours. Further progress with various forms of RNA. Transfer-RNA
builds one amino acid at a time, starting at one end, to produce a given
protein. Murray Gell-Mann develops a theory to unite new subatomic
particles. A core is drilled in the ocean floor at 3.5km depth, leading
the way to conclusive proof of continental drift
theory. Birth of chaos theory. Frank L Horsfall announces that
all forms of cancer result from changes in the DNA of cells. Inventions:
IUD Intrauterine Device for birth control.
1962: Cuban missile crisis caused
by USSR installing guided missiles in Cuba. Radar contact with the planet
Mercury. Rachel Carson's book Silent Spring
alarms the public of the dangers of the release of chemicals in nature.
Linus Pauling suggests that changes in genetic material over time may hold
the key to how one species relates to another. X-ray emission detected
other than from the sun. Astronaut Scott Carpenter completes three orbits
of Earth. Inventions: lasers used in eye surgery, first industrial robot,
nuclear powered ship Savannah, Telstar communications satellite, a vaccine
1963: President John F Kennedy
assassinated. Satellite to study X-rays from space. Inventions: Syncom2
first geosynchronous communications satellite, friction welding, audiocassette
(Philips), tunnelling semiconductor diodes for amplification of high frequency
signals. High-yielding dwarf wheat.
1964: Completion of the Aswan
Dam on the river Nile, causing profound ecological
changes. Start of Vietnam War. US spaceprobe Ranger7 takes close-range
photographs of the moon. Start of the green revolution
with new strains of rice with double yield. Birth of sociobiology
as a controversial offshoot of ecology. Inventions: the Verrazano
Bridge opens in New York, containerships.
1965: the great 14 hour power
blackout in New York. Cosmic MASERs discovered, interstellar gas producing
coherent microwave radiation by stimulation with light. Venus turns in
the opposite direction to all other planets, but only just so: a venus
day is 247 earth days. A day on Mercury is 56 earth days. Radio wave remnants
of the Big Bang found. Many space excursions with manned and unmanned satellites.
Variable genes discovered to explain the many forms of antigen. Artificial
pheromones (sex hormones in insects) synthesised. Chloroplasts
in algae have their own DNA. A self-reproducing virus synthesised. Monkeys
reared in total isolation show great emotional impairment for the rest
of their lives. Inventions: a vaccine against measles, soft contact
lenses, estrogen 'replacement' therapy, continuously
tunable laser, BASIC (Beginners All-purpose Symbolic Instruction Code)
computer language is particularly suitable for interactive programming
and easy to learn.
1966: Surveyor1 soft-lands on
the moon. Aflatoxins from the mould Aspergillus flavus, growing
on peanuts (and elsewhere) cause liver damage and cancer. Inventions: ESSA1
satellite, live-virus vaccine for rubella (German measles),
injection for automobile engines.
1967: Discovery of Aegyptopithecus,
a 30 million year old primate in the hominid line of Man. A ground test
of an Apollo spacecraft kills 3 astronauts. A Soviet cosmonaut is killed
during an emergency reentry. Data from classified US Navy navigation satellites
made available to the public. Surveyor3 soft lands on the moon. Soviet
space probe Venera4 parachutes down into Venus' atmosphere, discovering
its density and that it consists mostly of carbondioxide. Soviet union
lead from fuel. 10,000 year old frozen arctic lupine seeds prove
still viable and germinate. Clomiphene used to increase fertility, also
increases multiple births. Mammography for breast
cancer. First human heart transplant. Coronary bypass operation.
Inventions: keyboards for computer data input,
overseas direct dialling, food irradiation for
conserving it, antibiotics in animal food
may leave traces in meat, computer with parallel processors, Dolby
noise reduction in sound. Extensive use of antibiotics leads to drug resistant
strains of bacilla and bacteria.
1968: Uprising in Czechoslovakia.
First Apollo mission lasting 260 hours. Astronauts orbit the moon and return.
Erie is so polluted that it is essentially dead. Most enzymes
and codons found, that take part in replication of DNA. Glomar Challenger
ocean core drilling ship goes into operation for the next 15 years. Amino
acids from life discovered in 3 billion year old rock. People move back
to Bikini Atoll, after radioactive contamination from H-bomb tests in 1956,
has subsided (They'll move out again in 1978). Tooth decay caused by streptococcal
bacteria. Inventions: regular hovercraft
service across the English Channel, first supertankers,
luxury liner Queen Elizabeth2 launched, first supersonic airliner Tupolev
TY144 looks similar to Concorde.
1969: First humans on the moon,
with Apollo 11 and 12. Inventions: scanning electron microscope, bubble
memory devices for computers.
1970: China and Japan launch
artificial satellites. Apollo 13 is aborted because of equipment failure.
Human Growth Hormone synthesised. (Re-)discovery that viruses can cause
cancer. Inventions: carbon dioxide lasers, the Boeing
747 jumbo jet, floppy disk for computers,
1971: UN launched its Man and
the Biosphere programme. Completion of the Aswan
High Dam in Egypt, creating Lake Nasser, 150km3 water, over two years of
Nile flow storage. It would later cause major ecological disaster.
Apollo14 brings back 44kg of moon rocks. Apollo15 lands a vehicle on the
moon, the Lunar Rover. Soviet space probe Mars3 lands on Mars. First water-cooled
nuclear power station (Canada). Inventions: holography
with lasers, produces 3-D images, first microprocessor
(Intel), pocket calculator (Texas Instruments) weighs 1.1kg, Computer language
1972: Foundation of UN Environment
Programme (UNEP), headquartered in Nairobi. First earth
resources satellite Landsat 1. Soviet Venera8 soft lands on
venus. Apollo16 lands on the moon as fifth crew. US space probe Pioneer
10 launched. It will eventually leave the solar system. Use
of DDT restricted. Fermilab accelerates particles to 400GeV.
Inventions: Computerised Axial Tomography (CAT scanner), diamond-bladed
1973: The America-Vietnam war
ends. US Bombers left 20,000,000 bomb craters. Completion of the New York
Trade Centre buildings, 110 storeys tall; destroyed by suicide bombers
in 2001. The UK joins the Common Market. OPEC raises oil prices and enforces
oil embargoes to selected countries, which leads to the first Oil Shock.
Skylab missions start to obtain medical data from people in space.
calf produced from a frozen embryo.
Magnetic Resonance (NMR) scanner used for medical diagnosis. Inventions:
barcoded product labels, the push-through tab on drink cans,
1974: 'Lucy' sekeleton discovered
Australopithecus afarensis. A halt called to Genetic Engineering
until implications are better understood, but research continues. F Sherwood
Rowland and Mario Molina warn that CFCs can damage the ozone layer. Inventions:
programmable pocket calculator,
1975: End of the South Vietnam
War. The Milky Way galaxy moves at 500km/s. First pictures from the surface
of Venus by Soviet Venera 9 and 10. Discovery of endorphins, morphine-like
substances produced by the body. Inventions: LCD
displays, personal computer (Altair8800) with 256 bytes memory.
1976: More understanding of
the high variety of antibodies produced by very few genes. Inventions:
inkjet printer (IBM), the supersonic Concorde starts passenger services,
which would end in 2000 after a tragic mishap at Paris.
1977: A 40,000 year old frozen
baby mammoth recovered. Most neurons contain several different neurotransmitters,
not just one. Deep-sea ocean vents surrounded by specialised communities
discovered. Discovery of AIDS. Incinerator wastes may be contaminated by
dioxins which can cause cancer. The smallpox virus is declared extinct
(Somalia), but officially 3 years later. Inventions: public-key encription
codes, Apple2 personal computer, fibre-optics
trialled for communication.
1978: The complete genetic structure
of a virus. Chlorofluorocarbons (CFCs) banned as spray propellants. Bikini
Atoll islanders moved off the island after their return in 1968. It was
discovered that radioactive Cesium-137, caused by H-bomb tests, had entered
the food chain. Inventions: the first human test tube baby conceived outside
1979: End of Egyptian-Israeli
war. A human-powered aircraft, the Gossamer Albatross, crosses the
English Channel. The nuclear reactor Unit2 of
Three Mile Island undergoes a partial meltdown of its reactor core.
Inventions: VISICALC computer 'language' first spreadsheet,
ADA computer language for the US armed services.
1980: Mount St Helens erupts,
killing 61. Walter and Luis Alvarez discover the metal iridium in the Cretacious-Tertiary
(KT) layer, speculating that the extinction of the dinosaurs was caused
by a large meteorite. The VLA Very Large Array radiotelescope in Socorro
USA, begins operation. Soundwaves used to break kidney stones. Inventions:
hepatitis B vaccine, scanning tunnelling microscope,
1981: The space shuttle Columbia.
The Chinese clone a carp fish. Gene
transfer from one animal to another. Primates with large testes for their
body size, are promiscuous. The element Boron is important for bone development
and sex hormones. Inventions: Solar One, the largest solar-powered
electricity station generates 10MW of electricity, the IBM
Personal Computer with DOS operating system.
1982: The Mexican volcano El
Chichon erupts, sending dust and gases into the stratosphere, where they
remain for 3 years. First deployment of a satellite from the space shuttle
Columbia. Human insulin produced by bacteria. First artificial human heart.
Inventions: compact-disc players,
1983: The second space shuttle
Challenger starts service. Aspartame approved for use as a sweetener in
soft drinks. Immuno-suppressant cyclosporine for organ transplants. Inventions:
a solar cell with 9.5% efficiency, Apple's LISA operating system introduces
the mouse and pull-down menus.IBM
PC-XT with freely imitable architecture.
Moratorium on whaling; in 80 years the whale biomass reduced from 43 to
6 million tons.
1984: First un-looted Maya tomb
from 500BC discovered. An unbroken tree-ring chronology
based on Irish oak trees, dating back 7272 years. Alec Jeffreys
discovers the technique of genetic fingerprinting.
First cloned sheep. Inventions: optical disks for computer storage, the
one megabit RAM, IBM's PC AT
1985: Mud torrents from erupting
volcano Nevada del Ruiz kills 21,000. The hole in the ozone layer detected.
Construction begins on the Keck, world's largest telescope at Mauna Kea
Hawaii, with a mirror of 10m. Nuclear X-ray laser test underground proves
successful. Over 500 million motor vehicles world-wide. UNEP Vienna Convention
on ozone depletion.
1986: The soviets launch Mir,
a permanently manned space station. The space shuttle Challenger blows
up 73 seconds after launch, killing 6 astronauts and a teacher. A 35 million
year old frog found, preserved as a fossil in amber resin. Superconductivity
now detected at up to 30 degrees above absolute zero. Chernobyl
nuclear reactor No4 near Kiev, explodes and releases radioactivity, killing
12 and forcing mass evacuations of all families within 30km.
Inventions: a hepatitis B vaccine, the 32 bit chip 80386, the two-pilot
airplane Voyager flies around the world in 9 days without refuelling. Muammar
al-Qaddaffi of Libya completes the 'manmade river', drawing water from
desert aquifers (40 days by camel) to the coast, supplying 80% of fresh
water. Accidental introduction (by ballast water) of the zebra mussel from
the Black Sea to the Great Lakes (US/Canada), becoming a real nuisance,
together with over 100 other alien species. A comb jellyfish travelled
the other way, destroying the Black Sea fisheries.
bans lead from gasoline. The Clovis people believed to be the first
Americans at about 9,500BC. Human growth hormone works also in fish. The
last wild California condor is trapped for a captive breeding programme.
The last dusky seaside sparrow, previously found all over Florida, died
in captivity, as its saltmarsh habitat also disappeared.The Brundtland
Report urges for economic restraint and sustainable development. A single
gene on the Y chromosome is responsible for maleness. Implanting cells
from a person's adrenal gland into the brain can cure or alleviate Parkinson's
disease. Inventions: Apple Macintosh,
IBM PS/2 computer. 2000 people die from smog in Athens Greece. Montreal
Protocol to lower CFCs by 70-100%.
1988: The 1959 Antarctic Treaty
is modified to allow for mining minerals and drilling for oil. 92,000 year
old Homo sapiens fossils found in Israel. First
US patent issued for a genetically modified mouse. Chemists
estimate a total of 10 milllion recorded compounds, increasing with 400,000
each year. The number of new book titles increases by 800,000 each year.
The world had about 10,000 languages but many of these are disappearing.
1989: Start of the building
of the Three Gorges Dam in China, damming the YangTze River.
1990: First Earth Day. Human
populaton 5.3 billion. Energy use increased 80-fold since 1800. Energy
use averages 20 'energy slaves' per human being, but in the USA alone,
about 100. Over 700 million motor vehicles worldwide. Irrigated land
ruined as fast as engineers can irrigate new land. Over 25,000 antibiotics
exist. 500 million people fly from one country to another and back every
year. Humans account for 0.1% of total biomass and 5% of animal biomass,
almost equal to cattle. CO2 emission grew 17x this century. Large cities
like Mexico City generate over 10,000 tons of garbage each day. In USA,
Europe and Japan, roads occupy 5-10% of all land. Cars kill 400,000 people
1991: End of the Cold War. Non-communists
gain control over most East-Block nations. The American weapons complex
involves some 3,000 sites with 10,000 war heads. Nuclear waste cleanup
may take 75 years. USSR has 45,000 warheads and unknown waste dumps. Most
nuclear waste dumped at sea. In the Gulf War, the Irakis ignite huge oil
fires that darken the sky and spilled oil into the fragile Persian Gulf.
European Union and monetary system established.
1992: First UN Conference on
Environment and Development in Rio de Janeiro. Fist salmon caught in the
River Rhine, after its massive cleanup effort lasting forty years. Tests
for detecting HIV (AIDS) become widely available.
1993: USA ends the plans for
the Strategic Defence Initiative (SDI).
1996: Adoption of the nuclear
test ban treaty.
1997: Mars Pathfinder puts a
roving vehicle on Mars, which sends images and does experiments.
1998: 30 million people infected
2000: Human population 6.0 billion.
2001: Anti-pollution laws for
the Baltic Sea. World Trade Centre towers in New York destroyed by terrorist | 1 | 9 |
<urn:uuid:8512777f-d76b-4494-aff0-9f25cbd0c030> | The opportunistic fungal pathogen Candida albicans has a remarkable ability to adapt to unfavorable environments by different mechanisms, including microevolution. For example, a previous study has shown that passaging through the murine spleen can cause new phenotypic characteristics. Since the murine kidney is the main target organ in murine Candida sepsis and infection of the spleen differs from the kidney in several aspects, we tested whether C. albicans SC5314 could evolve to further adapt to infection and persistence within the kidney. Therefore, we performed a long-term serial passage experiment through the murine kidney of using a low infectious dose. We found that the overall virulence of the commonly used wild type strain SC5314 did not change after eight passages and that the isolated pools showed only very moderate changes of phenotypic traits on the population level. Nevertheless, the last passage showed a higher phenotypic variability and a few individual strains exhibited phenotypic alterations suggesting that microevolution has occurred. However, the majority of the tested single strains were phenotypically indistinguishable from SC5314. Thus, our findings indicate that characteristics of SC5314 which are important to establish and maintain kidney infection over a prolonged time are already well developed.
Citation: Lüttich A, Brunke S, Hube B, Jacobsen ID (2013) Serial Passaging of Candida albicans in Systemic Murine Infection Suggests That the Wild Type Strain SC5314 Is Well Adapted to the Murine Kidney. PLoS ONE 8(5): e64482. https://doi.org/10.1371/journal.pone.0064482
Editor: David R. Andes, University of Wisconsin Medical School, United States of America
Received: November 9, 2012; Accepted: April 15, 2013; Published: May 30, 2013
Copyright: © 2013 Lüttich et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: AL was supported by the excellence graduate school Jena School for Microbial Communication (JSMC; www.jsmc.uni-jena.de). BH and AL were also supported by the Deutsche Forschungsgemeinschaft (Hu 528 17-1; www.dfg.de). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: Co-author IJ is a PLOS ONE Editorial Board member. This does not alter the authors' adherence to all the PLOS ONE policies on sharing data and materials.
The diploid fungus Candida albicans lives as a harmless commensal on mucosal surfaces of the gastrointestinal and reproductive tracts of most healthy humans. However, C. albicans also has the potential to cause disease, ranging from superficial infections, such as oropharyngeal candidiasis, to life-threatening disseminated infections affecting various internal organs (invasive candidiasis). Risk factors for invasive candidiasis include antibiotic treatment, gastrointestinal surgery, indwelling catheters and other medical devices, and prolonged stay in an ICU. The incidence of invasive candidiasis has remained unchanged or even increased during the last decades. Despite the use of modern antifungals, the overall mortality is still high. Thus, invasive candidiasis presents a persistent problem in patients at risk.
Development of clinical disease depends on both host susceptibility, such as the immune status, and C. albicans virulence traits. Another prerequisite is the ability of C. albicans to adapt to different host niches during the infection process. This adaptation is mediated short-term by changes in gene expression, translation and post-translational modifications. Additionally, genetic alterations like nucleotide exchanges, insertions, deletions, chromosomal rearrangements and copy-number variations of chromosome segments or whole chromosomes can lead to phenotypic variation within Candida populations. The gradual development of phenotypic variants by genetic modifications under selection pressure is called microevolution. Microevolution of C. albicans has been experimentally confirmed as a mechanism of adaptation to antifungal drugs in vitro and has also been shown to be involved in the development of drug resistance in vivo. Furthermore, microevolution of C. albicans occurs in the commensal state during long-term colonization of the human gastrointestinal tract as well as in recurrent vaginal infections. The latter suggests that microevolution may be important during C. albicans infections. Supporting this hypothesis, Forche et al. showed that a single passage through a mouse host leads to variations in colony growth and morphology associated with long-range loss of heterozygosity and chromosome rearrangement events. However, to our knowledge, the consequences of microevolution for virulence have so far been addressed in only one study: Cheng et al. repeatedly transferred C. albicans isolated from the spleen of infected mice to the bloodstream of healthy animals. The fifth passage resulted in a stable respiration-deficient isolate displaying delayed filamentation initiation and abnormalities in carbon assimilation, thus further supporting the concept of microevolution during C. albicans infection.
The main target organ in murine Candida sepsis is the kidney. In contrast to the spleen, the fungal burden in the kidneys increases over time, leading to a strong proinflammatory response without clearing the infection. To determine whether microevolution plays a role in the establishment of infection and fungal persistence within the kidney, we performed a long-term serial passage experiment in which mice were systemically infected with a low dose of C. albicans cells isolated from the kidneys of mice 14 days after infection. The results of this serial passage experiment demonstrate that passaging through the kidney leads to increased phenotypic variability within the fungal population, possibly by microevolution. However, the overall virulence and fungal fitness, as well as the host response, varied between infected animals without a clear trend over the passages, suggesting that the C. albicans strain used, SC5314, is well adapted to infect and persist in the murine kidney.
Materials and Methods
All animal experiments were in compliance with the German animal protection law and were approved by the responsible Federal State authority (Thüringer Landesamt für Lebensmittelsicherheit und Verbraucherschutz) and ethics committee (beratende Komission nach § 15 Abs. 1 Tierschutzgesetz; permit no. 03–006/09).
Strains and Culture Conditions
C. albicans SC5314 was used for initiation of the serial infection experiment. Colonies isolated from infected mouse kidneys (passages 1 to 8) were obtained as described below. All strains were maintained as glycerol stocks at −80°C. Individual strains were obtained by plating either SC5314 or kidney isolates from the glycerol stock on YPD agar plates (1% w/v peptone, 1% w/v yeast extract, 2% w/v glucose, 2% w/v agar). After two days of incubation at 30°C, 40 colonies from each plate were selected and transferred to a 96 well plate (TPP) containing YPD medium. The plate was incubated at 30°C and 180 rpm for 24 h and subsequently used to prepare two glycerol stock plates. For all experiments, overnight cultures were prepared by inoculating liquid YPD with glycerol stock culture, followed by incubation at 30°C and 180 rpm.
Preparation of Infection Inoculum
SC5314 maintained as glycerol stock at −80°C was streaked on a YPD agar plate and incubated for 48 h at 30°C. A single colony was used for subculture on YPD agar for 12 h at 30°C. From this subculture, a single colony was inoculated in liquid YPD and grown for 24 h at 30°C and 180 rpm. SC5314 cells were washed twice with ice-cold phosphate-buffered saline (PBS) and adjusted to 1×106/ml in PBS. The infection dose was confirmed by plating serial dilutions of the infection suspension on YPD plates. For subsequent passages, C. albicans cells isolated from infected kidneys were used. Therefore, serial dilutions of kidney homogenate were plated onto YPD plates containing 50 µg/ml chloramphenicol and incubated for two days at 30°C. Colonies were swabbed from plates using a sterile cotton bud and resuspended in liquid YPD. In case both mice survived the infection, colonies recovered from the kidneys of both mice were used. After 24 h at 30°C and 180 rpm, C. albicans cells were harvested and prepared for infection as described for SC5314. An aliquot of the infection suspension was mixed with 85% glycerol and stored at −80°C (pool isolates). In total, eight rounds of infection were performed.
Mouse Infection Model and Microevolution
Female BALB/c mice five to six weeks old (18–20 g; Charles River) were used for the microevolution experiment. The animals were housed in groups of two in individually ventilated cages and cared for in strict accordance with the principles outlined in the European Convention for the Protection of Vertebrate Animals Used for Experimental and Other Scientific Purposes (http://conventions.coe.int/Treaty/en/Treaties/Html/123.htm).
Two mice per passage were challenged intravenously on day 0 with 5×10³ C. albicans cells per g body weight via the lateral tail vein. After infection, the health status of the mice was examined twice a day by a veterinarian, and surface temperature and body weight were recorded daily. Mice showing signs of severe illness (isolation from the group, apathy, >25% weight loss, hypothermia) were humanely sacrificed. Surviving mice were humanely sacrificed on day 14. Immediately after euthanasia, kidneys, spleen, liver and brain were removed aseptically, rinsed with sterile PBS, weighed, and kept in ice-cold lysis buffer (200 mM NaCl, 5 mM EDTA, 10 mM Tris pH 7.4, 10% glycerol, 1 mM phenylmethylsulfonyl fluoride, 1 µg/ml leupeptin, 28 µg/ml aprotinin) on ice. The organs were aseptically homogenized using an Ika T10 basic Ultra-Turrax homogenizer (Ika). The fungal burden was determined by plating serial dilutions of the homogenates on YPD plates containing 50 µg/ml chloramphenicol.
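As an illustration of how such plate counts translate into organ burdens, the following minimal Python sketch back-calculates CFU per gram of tissue from a countable dilution plate. It is only a sketch: the plated volume, dilution factor and homogenate volume used in the example are hypothetical placeholders, since the exact dilution scheme is not specified above.

def cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                 homogenate_volume_ml, organ_weight_g):
    """Back-calculate fungal burden from a plate count of a diluted homogenate."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml  # CFU per ml of undiluted homogenate
    total_cfu = cfu_per_ml * homogenate_volume_ml               # CFU in the whole organ homogenate
    return total_cfu / organ_weight_g                           # CFU per gram of tissue

# Hypothetical example: 85 colonies on a 1:100 dilution plate, 100 µl plated,
# 2 ml homogenate, 0.25 g kidney
burden = cfu_per_gram(colonies=85, dilution_factor=100, plated_volume_ml=0.1,
                      homogenate_volume_ml=2.0, organ_weight_g=0.25)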
Quantification of Myeloperoxidase (MPO) and Cytokines from Tissue Homogenates
For quantification of MPO and cytokines, tissue homogenates were centrifuged at 1,500 g and 4°C for 15 min. The first supernatants were centrifuged again and the resulting final supernatants were stored at −80°C. The MPO levels were determined using the MPO ELISA Kit (Hycult Biotechnology). For the quantification of IL-6 and GM-CSF, Mouse ELISA Ready-SET-Go kits (eBioscience) were used according to the manufacturer's instructions.
Growth Rate Determination and Stress Resistance
Overnight cultures of the individual strains were diluted 1∶15 in YPD or SD minimal medium (2% dextrose, 0.17% yeast nitrogen base, 0.5% ammonium sulfate) in a 96 well plate (TPP) and incubated at 30°C and 180 rpm. After 4 h the OD600 was determined and the strains were diluted in fresh media (YPD, SD and SD supplemented with 2 mM H2O2, respectively) to an optical density of 0.05. Cell growth was monitored over 24 h or 48 h in 96 well plates at OD600 at the indicated temperatures using an Infinite M200 pro ELISA reader (Tecan). Measurements were performed every 30 min directly after a 5 sec shaking period (3 mm amplitude). The generation time of exponentially growing yeast was calculated using the formula g = (tend − tstart)/[lb(OD600, end) − lb(OD600, start)] (lb = binary logarithm, t = time). For stress resistance tests on solid media, overnight cultures of the pools were washed twice with PBS and diluted to 2×10³ in PBS. 50 µl each were spotted onto SD agar (control) and onto SD agar containing either 2 mM H2O2 (AppliChem), 1 µg/ml caspofungin (Merck & Co), 350 µg/ml Congo Red (Sigma) or 1.5 M sodium chloride (NaCl, Roth). Plates were incubated for 48 h at 30°C (H2O2, Congo Red, NaCl, caspofungin) or 42°C (temperature stress) and colony forming units (CFU) were determined. Survival was calculated by dividing the number of CFU on the stress plate by the number of CFU on the control plate. Experiments were performed in biological triplicates.
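The generation-time formula and the survival calculation above amount to the short computation sketched below in Python; it is meant only to make the arithmetic explicit, and the OD600 readings and CFU counts in the example are invented for illustration. Only readings from the exponential growth phase should be used.

import math

def generation_time(od_start, od_end, t_start, t_end):
    """g = (t_end - t_start) / (log2(OD_end) - log2(OD_start)) for exponential growth."""
    doublings = math.log2(od_end) - math.log2(od_start)
    return (t_end - t_start) / doublings

def stress_survival(cfu_stress, cfu_control):
    """Fraction of CFU surviving on the stress plate relative to the control plate."""
    return cfu_stress / cfu_control

# Hypothetical example: OD600 rises from 0.1 to 0.8 between hour 2 and hour 6.5
g = generation_time(od_start=0.1, od_end=0.8, t_start=2.0, t_end=6.5)   # generation time in hours
s = stress_survival(cfu_stress=62, cfu_control=180)                     # e.g. survival at 42°C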
Determination of Filamentation
To determine serum-induced filamentation in liquid media, overnight cultures were inoculated into Dulbecco's Modified Eagle's Medium (DMEM, PAA) containing 2 mM L-glutamine (PAA) and 10% heat-inactivated fetal bovine serum (FBS, PAA) in 24 well plates (TPP) at a density of 1×10⁶ cells per well (germ tube formation) or 1×10⁵ cells per well (filament length), respectively. Plates were centrifuged (3 min, 500 g) to settle cells and then incubated at 37°C in the presence of 5% CO2 for 1 h and 4 h, respectively. Each experiment was performed in biological duplicates with two technical replicates. From each sample, germ tube formation was determined for 300 cells using an inverted microscope (Leica DMIL); the filament length of 40 cells per sample was measured using the inverted microscope and the software LAS (Leica Application Suite).
Filamentation under embedded conditions was determined for colonies grown in YPS agar (1% w/v yeast extract, 2% w/v bactopeptone, 2% w/v D(+)-saccharose, 2% w/v agar) incubated at 25°C for five days in biological triplicates. Colonies were categorized as follows: Category 1: low filamentation, zero to five filaments; category 2: moderately filamented, >5 filaments that were shorter and less dense compared to category 3; category 3: highly filamented, >5 long filaments, which appeared as a very dense network.
Furthermore, hyphal formation was investigated on solid YPD agar supplemented with 10% FBS and on solid Spider medium (1% w/v mannitol, 1% w/v nutrient broth, 0.2% w/v K2HPO4, 1% w/v agar, pH 7.2). Serum agar plates were incubated for four days and Spider agar plates for seven days at 37°C.
Macrophage and Oral Epithelial Cells
The human buccal carcinoma epithelial cell line TR-146 (Cancer Research Technology) and the peritoneal macrophage-like cell line J774.A1 (Deutsche Sammlung von Mikroorganismen und Zellkulturen (DSMZ)) were cultured and passaged in DMEM supplemented with 10% heat inactivated FBS at 37°C and 5% CO2.
Quantification of Damage to Host Cells
Damage of macrophages and oral epithelial cells was determined by measuring the release of lactate dehydrogenase (LDH). TR146 cells (3×10⁴ per well) and J774 cells (4×10⁴ per well) were seeded in 96 well plates (TPP) and kept at 37°C with 5% CO2. After one day of incubation, cells were washed twice with PBS and infected with C. albicans at an MOI of 1∶1 in DMEM +1% FBS for 24 h at 37°C and 5% CO2. The following controls were included in the assay: (i) medium-only control (MOC), (ii) low control (LC) of uninfected host cells and (iii) high control (HC) of uninfected host cells lysed with 0.5% Triton X-100 (Ferak) in DMEM +1% FBS ten minutes before measurement. For LDH quantification, the Cytotoxicity Detection Kit (Roche Applied Science) was used according to the manufacturer's protocol. To calculate the cell damage, the MOC and the LC values were subtracted from all sample values and damage was expressed as a percentage of the HC. Each experiment was performed in biological duplicates.
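The percentage damage is thus a background-corrected ratio of sample LDH release to the Triton X-100 high control. The Python sketch below spells this out; whether the high control is corrected with the same background values is an assumption made here, and the absorbance readings in the example are hypothetical.

def percent_damage(sample_abs, moc_abs, lc_abs, hc_abs):
    """LDH-based host cell damage as a percentage of the lysed high control (HC)."""
    corrected_sample = sample_abs - moc_abs - lc_abs   # subtract medium-only and low control
    corrected_hc = hc_abs - moc_abs - lc_abs           # assumed: HC corrected the same way
    return 100.0 * corrected_sample / corrected_hc

# Hypothetical absorbance readings
damage = percent_damage(sample_abs=1.20, moc_abs=0.05, lc_abs=0.15, hc_abs=2.10)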
Invasion rates were determined as described previously. Briefly, 3×10⁵ TR146 cells per well were seeded on 15 mm diameter glass coverslips in a 24 well plate (TPP) and grown to confluency. For infection, the monolayers were washed with PBS and 1×10⁵ C. albicans cells were added to the TR146 cells. After 3 h of incubation at 37°C and 5% CO2, the epithelial cells were washed once with PBS and fixed with 4% paraformaldehyde (Roth). Non-invading fungal cells and fungal cell parts outside of host cells were stained for 45 min with fluorescein-conjugated concanavalin A (Con A, Invitrogen) in the dark with gentle shaking (70 rpm). Then, the cells were washed twice with PBS and the TR146 cells were permeabilized by adding 0.5% Triton X-100 in PBS for 5 min. Finally, the cells were washed twice with PBS and all C. albicans cells were stained with calcofluor white (Fluorescent brightener 28, Sigma) for 15 min. After three intensive washing steps with water, the coverslips were mounted on microscopy slides with mounting medium. Stained cells were visualized with epifluorescence (Leica DM5500B, Leica DFC360 FX) using the appropriate filter sets for detection of Con A and calcofluor white. For each sample, 100 cells were examined and the percentage of invading fungal cells was determined by dividing the number of (partially) internalized cells by the total number of cells. The experiment was performed on three separate occasions.
Statistical Analysis and Definition of Outliers
GraphPad Prism, version 5.00 for Windows (GraphPad Software, San Diego, CA) was used to plot and analyze the data. Data of the pools were compared to SC5314 by one-way analysis of variance (ANOVA) followed by Bonferroni's multiple-comparison test. The data of the individual isolates are shown as scatter plots with a line indicating the mean and were analyzed with the nonparametric Mann-Whitney test.
For comparison of the variation within SC5314 and passaged cells, the mean and standard deviation were determined from the values obtained for all individual strains tested. Outliers were defined as strains with values above or below the whole-population mean ± 2 standard deviations, and were color-coded for illustration. The F-test was used to compare the variances of the populations of the 40 strains from SC5314 and Pool 8.
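A minimal Python sketch of the outlier definition and of a two-sided F-test for equal variances is given below, assuming NumPy/SciPy are available; it is meant only to make the criteria explicit and is not the GraphPad implementation used for the published analysis.

import numpy as np
from scipy import stats

def flag_outliers(values):
    """Indices of strains outside the whole-population mean ± 2 standard deviations."""
    values = np.asarray(values, dtype=float)
    mean, sd = values.mean(), values.std(ddof=1)
    lower, upper = mean - 2 * sd, mean + 2 * sd
    return [i for i, v in enumerate(values) if v < lower or v > upper]

def f_test_equal_variances(a, b):
    """Two-sided F-test comparing sample variances (larger variance in the numerator)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)
    if var_a >= var_b:
        f_stat, dfn, dfd = var_a / var_b, len(a) - 1, len(b) - 1
    else:
        f_stat, dfn, dfd = var_b / var_a, len(b) - 1, len(a) - 1
    p_value = min(1.0, 2 * stats.f.sf(f_stat, dfn, dfd))
    return f_stat, p_value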
Results and Discussion
Passaging through the Murine Kidney Does Not Affect Overall Virulence of C. albicans SC5314
Serial passage experiments can be used to study factors that are important for local adaptation and therefore for survival in the host. For instance, the continuous passage of C. albicans through the murine spleen gave rise to a strain that was more resistant to killing by neutrophils and attenuated in a systemic mouse infection model. However, the kidney is the main target organ in systemic C. albicans infection in mice, and infection of the spleen differs from the kidney in several aspects. First, the kidney is the only organ that exhibits a continuously increasing fungal burden after intravenous infection, whereas C. albicans is at least partially cleared from the spleen. Secondly, neutrophil accumulation in the spleen occurs rapidly but transiently without affecting the organ architecture. In contrast, in the kidney the neutrophil accumulation is delayed but persistent and results in the formation of abscesses. Furthermore, C. albicans is able to form filaments in the kidney but not the spleen, likely due to the differences in the immune response.
Given these organ-specific differences, we decided to perform serial passages of C. albicans through the murine kidney, focusing on cells which survived in the kidneys for a prolonged time, to determine whether microevolution is involved in kidney infection and persistence. We hypothesized that persistence in the kidney might be linked to reduced virulence, possibly accompanied by a dampened immune response.
Therefore, mice were infected intravenously with a low dose of SC5314 wild type cells and monitored over a period of 14 days. C. albicans cells recovered from the kidneys of mice surviving this period were combined, representing passage pool 1 (P1). This pool was used for the next round of infection. We decided to use pools rather than single strains to avoid introducing an artificial population bottleneck and to be able to assess whether strains with altered phenotypes would become dominant within the population. In total, eight passages were performed. Clinical examination, body weight, fungal burden, myeloperoxidase (MPO) concentration and levels of IL-6 and GM-CSF were measured to determine the severity of clinical disease, fungal proliferation and the immune response (Figure S1).
As expected given the low infectious dose, clinical symptoms in infected mice varied considerably (Fig. 1), ranging from mice which remained clinically healthy throughout the experiment or showed only mild, transient symptoms (category 1), through mice that showed clinical symptoms throughout the experiment but survived up to day 14 (category 2), to mice which became severely ill and had to be euthanized before the end of the experiment (category 3). The distribution of mice among the three categories was independent of the passage stage of the infection pool (Fig. 1). Consistent with published data, the kidney was the only organ from which C. albicans could be readily isolated in high numbers 14 days after infection (Fig. 2A). This demonstrates the ability of C. albicans to persist in this organ, even in animals showing no clinical symptoms at the time of analysis. However, fungal burden in the kidney was independent of the passage. The fungal burdens of spleen, brain and liver were generally low and variable, without a clear trend towards higher or lower fungal load over the passages.
Based on their health state, mice were grouped into one of three categories: Category 1: Mice which survived the infection and appeared clinically healthy until day 14. Category 2: Mice that survived the infection but showed clinical symptoms of illness throughout the experiment. Category 3: Mice which became severely ill and had to be sacrificed before day 14. Lines show the body weight of individual mice; mice were designated as follows: 1. Number of passage; 2. Letter A or B (differentiating the two mice within each passage group). For example, the two mice used for the initial round of infection are designated as 1A and 1B, respectively. Mouse 5B died atypically before day 14 and is therefore not included in the figure.
(A) Fungal burden in kidneys (light blue), liver (red), spleen (green) and brain (dark blue) of systemically infected mice. The left graph shows data of mice surviving until the end of the experiment on day 14 (category (cat.) 1 and 2), the right graph shows fungal burden in moribund mice at the time of euthanasia. The X-axis is set as the passage number. (B) Evaluation of the immune response in different organs by quantification of MPO, IL-6 and GM-CSF by ELISA in tissue homogenates. The origin of the Y-axis is set to the detection limit. If both mice within a passage survived the infection, mean and standard deviation are presented.
In addition to fungal burden, the host response significantly influences the outcome of C. albicans infections. Therefore, the immune response was analyzed by determination of MPO, a marker for neutrophil infiltration, and the proinflammatory cytokines IL-6 and GM-CSF. IL-6 is associated with recruitment of neutrophils to the site of C. albicans infection in mice, and animals genetically deficient in IL-6 showed enhanced susceptibility to C. albicans. The production of GM-CSF is likewise induced upon C. albicans infection, and neutrophils activated with GM-CSF showed an enhanced phagocytosis rate and intracellular killing of C. albicans. The MPO concentrations in the kidney homogenates increased to a peak in passage 5 and decreased afterwards (Fig. 2B). The MPO levels in the other organs showed no specific pattern. IL-6 and GM-CSF were detected in all organs of surviving mice, except in the spleen of passage 2. The cytokine levels showed variations between animals but no clear passage-dependent trend. Thus, passaging C. albicans through the murine kidney did not significantly alter the renal immune response. In summary, the in vivo results indicated that virulence and pathogenesis remained unaltered after eight passages through the murine kidney.
Passaging through the Murine Kidney Induced Moderate Phenotypic Alterations on the Population Level
It has been previously shown that C. albicans microevolution does occur in the murine kidney, even during a single passage, within 5–7 days. In our experiment, colonies with altered morphology were isolated with a frequency of approx. 20% each from the liver of one mouse of passage 6 and from the livers of both mice of passage 7. Although the phenotype was not stable upon subculture (Fig. 3A), the observation suggested adaptive changes in response to the host environment. We hypothesized that, although putative mutations were not sufficient to substantially change virulence on the population level, their impact on specific phenotypic traits might be detectable in vitro. Therefore, pools from selected passages were analyzed regarding filamentation, stress resistance and interaction with host cells.
(A) Clone displaying an unstable morphological alteration. (B) Filamentation under embedded conditions. Depending on the filamentation, colonies were placed in one of the following morphology types (examples depicted on the right side): Morphology type 1: reduced filamentation, zero to five filaments; morphology type 2: moderate filamentation, >5 filaments that were shorter and less dense compared to morphology type 3; morphology type 3: strong filamentation, >5 long filaments, which appeared as a very dense network. The graph shows the distribution of colonies from each pool among the three morphology types. (C) Effect of different stresses in solid SD media on the survival of the pools. Survival was calculated by dividing the number of colonies on the stress plate by the number of colonies on a control plate. Error bars = standard deviation. * = p<0.05 compared to SC5314 (WT) by ANOVA followed by Bonferroni's multiple-comparison test, n = 3.
Filamentation was determined in response to serum, in Spider medium and under embedded growth. The formation of germ tubes and the hyphal length in liquid serum-containing media (Figure S2A), as well as hyphal formation on serum-containing agar and Spider agar (data not shown), did not differ significantly between the passaged pools. However, we observed differences under embedded growth, a condition that might simulate growth in tissue at least on the physical level. Under these conditions, three out of four passaged pools showed significantly fewer colonies that exhibited a very dense filament network (category 3) as compared to SC5314 (Fig. 3B). This suggests a moderately impaired ability of the passaged pools to form filaments under this specific condition.
Within the host, C. albicans is likely exposed to different stresses, especially high temperature and oxidative stress. Thus, we evaluated the resistance of passaged pools to thermal and oxidative stress as well as to osmotic stress, cell wall stress and the antifungal compound caspofungin. While resistance against cell wall stress induced by Congo Red and osmotic stress induced by NaCl (Figure S2B) was not altered in any of the pools, the passaged pools showed increased survival at 42°C, which was statistically significant for the two final passage pools P7 and P8. Resistance to oxidative stress (H2O2) did likewise increase but did not reach statistical significance (Fig. 3C). Additionally, P4 and P8 displayed lower resistance against caspofungin (Fig. 3C). Finally, we investigated whether the interaction with host cells differed between the pools. Consistent with the unaltered virulence in vivo, invasion into and cell damage of epithelial cells (Figure S2C) and the ability to damage macrophages (data not shown) were indistinguishable between passaged pools and SC5314.
In summary, these experiments showed that most phenotypic characteristics were retained within the pools over the passages. However, the increased tolerance to high temperature and oxidative stress, which represent stresses relevant in vivo, suggested moderate adaptation on the population level.
Isolates from the Last Passage were Better Adapted to High Temperature and Showed a Higher Phenotypic Variability
Microevolution is a process which occurs in single cells within a population. Whether phenotypic alterations mediated by microevolution become evident on the population level depends on various factors, including the selective pressure and the possible benefit of a mutation, which together determine the selective advantage of individual strains within populations. Thus, microevolution will only become phenotypically evident on the population level if the selective pressure favors mutants with certain properties. In the absence of a directed selective pressure, microevolution might occur within a population without changing the collective phenotype. A recent study showed that a single passage through the murine kidney is sufficient to increase population heterogeneity, both on the genome and the phenotype level. We assumed that, similarly, passaged pools from our experiment might have been enriched for strains with altered phenotypic properties. To test this hypothesis, we analyzed 40 randomly selected individual strains each from the last passage pool (P8) and SC5314 for growth speed, stress resistance and host cell interaction. Growth curves were generated for all 80 strains in complete (YPD) and minimal (SD) medium at different temperatures, as well as under oxidative stress in minimal medium. In complete medium at either 30°C or 42°C, the mean growth rates of SC5314 and P8 strains were indistinguishable. However, when incubated in minimal medium at 30°C, mean generation times were significantly higher for the 40 strains from P8 (Fig. 4A), indicating slower growth at the lower temperature. In contrast, SC5314 and P8 displayed indistinguishable mean generation times at 42°C in minimal medium (Fig. 4A). This result, together with the observation that P8 showed increased survival at 42°C on the population level (Fig. 3C), suggested that P8 strains are well adapted to elevated temperatures.
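Generation (doubling) times of the kind compared here are typically derived from the exponential part of an optical-density growth curve. The sketch below is not taken from the paper; the OD600 readings and the choice of exponential window are assumptions, and the doubling time is estimated as ln(2) divided by the slope of log(OD) over time.

```python
# Minimal sketch: estimate a generation (doubling) time from OD600 readings.
# The readings and the exponential-phase window are hypothetical.
import numpy as np

time_h = np.array([0, 1, 2, 3, 4, 5, 6])                      # hours
od600  = np.array([0.05, 0.07, 0.11, 0.18, 0.30, 0.48, 0.70])

# Restrict the fit to the exponential phase (here simply OD between 0.05 and 0.5)
mask = (od600 >= 0.05) & (od600 <= 0.5)
slope, intercept = np.polyfit(time_h[mask], np.log(od600[mask]), 1)

generation_time = np.log(2) / slope                            # hours per doubling
print(f"growth rate = {slope:.3f} /h, generation time = {generation_time:.2f} h")
```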
(A) The graphs show the generation times of 40 strains each of WT and P8 in different media at 30°C and 42°C. (B) Generation times of 40 strains each of WT and P8 under H2O2 stress at 30°C. (C) Damage capacity of the 40 strains each of WT and P8 towards epithelial cells and macrophages. WT = SC5314. P8 = passage 8. Blue line = mean of the 40 strains. * = p<0.05 compared to WT by nonparametric Mann-Whitney test. n = 2 for growth experiments and n = 3 for experiments with host cells; the mean value is shown for each strain. Colored dots and triangles = outliers (for definition see material and methods); each color defines a specific strain.
Because P8 showed moderately enhanced resistance to oxidative stress at the population level (Fig. 3C), we also determined the generation time of individual strains under oxidative stress. The mean doubling times of the individual strains of P8 under this stress were significantly higher in comparison to the SC5314 strains (Fig. 4B). However, the slower growth of P8 treated with H2O2 is likely due to the generally slower growth at 30°C. It should furthermore be noted that survival in the presence of H2O2 was increased at the population level for P8 (Fig. 3C), suggesting that slower growth and increased survival might be mechanistically linked. Finally, we tested the potential to damage epithelial cells and macrophages. The mean damage caused by P8 strains was similar to SC5314 (Fig. 4C), confirming the results obtained with pools.
In addition to mean growth and mean damage potential, these experiments also allowed us to estimate the phenotypic variability within the SC5314 stock and P8 pool. The higher variability within P8 was reflected by the higher standard deviations observed for growth, cell damage capacity and invasion into epithelial cells (Table 1). By applying the F-test, significant differences of the variances were identified for growth in complete medium at 30°C and 42°C, respectively, for growth under oxidative stress and for damage capacity. Interestingly, albeit mean growth rates and damage potential of P8 and SC5314 were similar, several individual strains of P8 displayed increased generation times in complex medium and minimal medium as well as under H2O2 stress (colored dots in Fig. 4A and B). Four P8 strains displayed slower growth in more than one condition. Similarly, individual strains with altered ability to damage host cells were identified in P8 (Fig. 4C).
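The F-test used above to compare the variability of the two sets of 40 strains is simply the ratio of the two sample variances referred to an F distribution. The sketch below is an illustration with hypothetical generation-time values, not the study's measurements; it shows one way to obtain a two-sided p-value.

```python
# Minimal sketch of a two-sample F-test for equality of variances,
# as used above to compare the variability of WT and P8 strains (data are hypothetical).
import numpy as np
from scipy import stats

wt = np.array([1.52, 1.49, 1.55, 1.50, 1.53, 1.48, 1.51, 1.54])   # generation times (h)
p8 = np.array([1.50, 1.62, 1.41, 1.58, 1.73, 1.44, 1.66, 1.39])

def f_test(a, b):
    """Two-sided F-test comparing the variances of two samples."""
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    # two-sided p-value: twice the smaller of the two tail probabilities
    p = 2 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))
    return f, min(p, 1.0)

f, p = f_test(p8, wt)
print(f"F = {f:.2f}, p = {p:.3f}")
```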
Taken together, analysis of individual strains revealed that although the majority of P8 strains were phenotypically indistinguishable from SC5314, a few strains displayed phenotypic alterations, suggesting that microevolution occurred during the passages through the murine kidney. However, genotyping would be necessary to confirm that the observed differences in phenotype are indeed based on microevolution rather than stable epigenetic modifications. Phenotype alterations detected included both gain (survival at 42°C) and loss of fitness (growth rate, resistance to caspofungin) under the different conditions tested; this is consistent with results from evolution experiments with other organisms showing that increased fitness for one condition is commonly associated with reduced fitness in other environments.
Even though we observed increased phenotypic variability in passaged pools, the overall virulence of the passaged pools did not change. This outcome might be in part due to the experimental setup. To generate yeast cells suitable for infection, in vitro culture steps were necessary: first, plating on solid medium to remove host tissue, determine CFU and detect and remove gross bacterial contamination; secondly, propagation of colonies in liquid medium to obtain semi-synchronized cells in early stationary phase and in the yeast form only. Thus, in vitro culturing might have selected against mutant strains and favored strains which retained wild type-like abilities. Furthermore, it should be noted that kidney infection in vivo is a two-stage process: within one day after infection, C. albicans forms filaments which grow invasively in the renal cortex, while neutrophil recruitment within the first day is comparatively low. Therefore, the initial stage of kidney infection is characterized by fungal filamentation, hyphal proliferation and tissue invasion. At the later stage of infection, C. albicans proliferates with hyphae extending to the renal tubules and pelvis while large numbers of neutrophils and macrophages are recruited. Hence, the ability of C. albicans to withstand the immune response is likely a crucial feature allowing fungal cells to persist. As we isolated C. albicans 14 days after infection, the ability to persist determined the strains in the recovered pool; however, these strains would also need a sufficient ability for filamentation and invasion to establish infection in the next round of in vivo selection. It therefore appears likely that the sequential infections selected for an "all-rounder" phenotype and that SC5314 is already well adapted to both the establishment of systemic infection and persistence within the kidney.
Interestingly, Forche et al. identified several strains with wrinkled morphology after a single renal passage. In contrast, we did not observe stable hyperfilamentous strains within the re-isolated pools; rather, the pools showed a moderately reduced capacity to filament under embedded conditions. We suggest that these differences might be a consequence of the experimental designs: first, we used a lower inoculum to facilitate prolonged survival of infected mice; secondly, we re-isolated C. albicans from clinically healthy mice or animals with a chronic infection, whereas the other study used kidneys from mice succumbing to acute lethal infection. We thus speculate that increased filamentation might provide a selective advantage in acute infection but not in chronic persistence.
To our knowledge, there is only one additional study investigating the effects of serial passages on the phenotype of SC5314. Cheng et al. performed serial passaging through the spleen, resulting in a strain with reduced growth in vitro, strongly reduced virulence in murine systemic infection, but increased resistance to phagocytosis and a retained ability to persist in the kidney. This contrasts with our kidney-passage experiment, which did not produce a pool with altered virulence. We suggest that this difference is likely due to the distinct organ environment. In contrast to the kidney, C. albicans SC5314 does not filament within the spleen, and the fungal burden in this organ decreases over time. Thus, phagocyte exposure is likely a strong selective force throughout the whole course of infection in the spleen. Furthermore, the steady decline in the number of fungal cells in the spleen creates a population bottleneck which reduces the variation in the fungal population and thus makes it more likely that mutations which further survival become enriched.
In summary, repeated passages of SC5314 through the kidney of systemically infected mice did not induce alteration of virulence and resulted in only moderate changes of phenotypic traits on the population level. The phenotypic variation within the population suggests that microevolution events might have occurred; however, this assumption was not experimentally tested. We propose that the ability to establish and maintain kidney infection over a prolonged time requires complex abilities, which are already well developed in C. albicans SC5314.
Experimental setup of the in vivo microevolution experiment. Two BALB/c mice were challenged intravenously with SC5314 at an infectious dose of 5×103 CFU/g body weight. After 14 days, kidney, brain, spleen and liver were removed aseptically for analyses of fungal burden, myeloperoxidase (MPO) and cytokine levels. Yeast colonies recovered from both kidneys were used for the next round of infection. Overall, eight serial passages of SC5314 through murine kidneys were performed. AB = antibiotic (chloramphenicol)
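Because the dose in this legend is given per gram of body weight, the absolute inoculum depends on each animal's weight. The short worked example below uses an assumed body weight for illustration; the weight is not a value reported in the legend.

```python
# Worked example: converting a weight-adjusted dose into a total inoculum.
# The body weight is assumed for illustration, not taken from the legend.
dose_per_g = 5e3           # CFU per gram of body weight (from the legend)
body_weight_g = 20         # assumed weight of an adult BALB/c mouse
total_inoculum = dose_per_g * body_weight_g
print(f"total inoculum = {total_inoculum:.0e} CFU")   # 1e+05 CFU
```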
Characterization of passaged pools. (A) Germ tube formation after 1 h and hyphal length after 3 h in DMEM +10% serum at 37°C and 5% CO2, WT = SC5314. (B) Effect of cell wall stress (Congo Red) and osmotic stress (NaCl) in solid media on the survival of WT (SC5314) and the pools. Survival was calculated by dividing the number of colonies on the stress plate by the number of colonies on the control plate. (C) Invasion and damage capacity of WT (SC5314) and pools. Invasion was quantified after 3 h and damage after 24 h of co-incubation. Data are shown as mean+standard deviation.
We thank Maria Schreiner, Ursula Stöckel and Birgit Weber for excellent technical help. Furthermore, we thank Lydia Kasper and Duncan Wilson for helpful discussions.
Conceived and designed the experiments: AL SB BH IJ. Performed the experiments: AL SB IJ. Analyzed the data: AL SB BH IJ. Wrote the paper: AL IJ. Designed the software for generation time calculation: SB. Obtained permission for use of cell line: BH.
- 1. Perlroth J, Choi B, Spellberg B (2007) Nosocomial fungal infections: epidemiology, diagnosis, and treatment. Med Mycol 45: 321–346.
- 2. Eggimann P, Bille J, Marchetti O (2011) Diagnosis of invasive candidiasis in the ICU. Ann Intensive Care 1: 37.
- 3. Laupland KB, Gregson DB, Church DL, Ross T, Elsayed S (2005) Invasive Candida species infections: a 5 year population-based assessment. J Antimicrob Chemother 56: 532–537.
- 4. Pfaller MA, Diekema DJ (2007) Epidemiology of invasive candidiasis: a persistent public health problem. Clin Microbiol Rev 20: 133–163.
- 5. Asticcioli S, Nucleo E, Perotti G, Matti C, Sacco L, et al. (2007) Candida albicans in a neonatal intensive care unit: antifungal susceptibility and genotypic analysis. New Microbiol 30: 303–307.
- 6. Vonk AG, Netea MG, van der Meer JW, Kullberg BJ (2006) Host defence against disseminated Candida albicans infection and implications for antifungal immunotherapy. Expert Opin Biol Ther 6: 891–903.
- 7. Calderone RA, Fonzi WA (2001) Virulence factors of Candida albicans. Trends Microbiol 9: 327–335.
- 8. van de Veerdonk FL, Plantinga TS, Hoischen A, Smeekens SP, Joosten LA, et al. (2011) STAT1 mutations in autosomal dominant chronic mucocutaneous candidiasis. N Engl J Med 365: 54–61.
- 9. van Enckevort FH, Netea MG, Hermus AR, Sweep CG, Meis JF, et al. (1999) Increased susceptibility to systemic candidiasis in interleukin-6 deficient mice. Med Mycol 37: 419–426.
- 10. Barelle CJ, Priest CL, Maccallum DM, Gow NA, Odds FC, et al. (2006) Niche-specific regulation of central metabolic pathways in a fungal pathogen. Cell Microbiol 8: 961–971.
- 11. Hube B (2004) From commensal to pathogen: stage- and tissue-specific gene expression of Candida albicans. Curr Opin Microbiol 7: 336–341.
- 12. Selmecki A, Forche A, Berman J (2010) Genomic plasticity of the human fungal pathogen Candida albicans. Eukaryot Cell 9: 991–1008.
- 13. Galhardo RS, Hastings PJ, Rosenberg SM (2007) Mutation as a stress response and the regulation of evolvability. Crit Rev Biochem Mol Biol 42: 399–435.
- 14. Bougnoux ME, Diogo D, Francois N, Sendid B, Veirmeire S, et al. (2006) Multilocus sequence typing reveals intrafamilial transmission and microevolutions of Candida albicans isolates from the human digestive tract. J Clin Microbiol 44: 1810–1820.
- 15. Franzot SP, Mukherjee J, Cherniak R, Chen LC, Hamdan JS, et al. (1998) Microevolution of a standard strain of Cryptococcus neoformans resulting in differences in virulence and other phenotypes. Infect Immun 66: 89–97.
- 16. Shin JH, Park MR, Song JW, Shin DH, Jung SI, et al. (2004) Microevolution of Candida albicans strains during catheter-related candidemia. J Clin Microbiol 42: 4025–4031.
- 17. Cowen LE, Kohn LM, Anderson JB (2001) Divergence in fitness and evolution of drug resistance in experimental populations of Candida albicans. J Bacteriol 183: 2971–2978.
- 18. Yan L, Zhang J, Li M, Cao Y, Xu Z, et al. (2008) DNA microarray analysis of fluconazole resistance in a laboratory Candida albicans strain. Acta Biochim Biophys Sin (Shanghai) 40: 1048–1060.
- 19. White TC, Pfaller MA, Rinaldi MG, Smith J, Redding SW (1997) Stable azole drug resistance associated with a substrain of Candida albicans from an HIV-infected patient. Oral Dis 3 Suppl 1: S102–109.
- 20. Cowen LE, Sanglard D, Calabrese D, Sirjusingh C, Anderson JB, et al. (2000) Evolution of drug resistance in experimental populations of Candida albicans. J Bacteriol 182: 1515–1522.
- 21. Andes D, Forrest A, Lepak A, Nett J, Marchillo K, et al. (2006) Impact of antimicrobial dosing regimen on evolution of drug resistance in vivo: fluconazole and Candida albicans. Antimicrob Agents Chemother 50: 2374–2383.
- 22. Schroppel K, Rotman M, Galask R, Mac K, Soll DR (1994) Evolution and replacement of Candida albicans strains during recurrent vaginitis demonstrated by DNA fingerprinting. J Clin Microbiol 32: 2646–2654.
- 23. Forche A, Magee PT, Selmecki A, Berman J, May G (2009) Evolution in Candida albicans populations during a single passage through a mouse host. Genetics 182: 799–811.
- 24. Cheng S, Clancy CJ, Zhang Z, Hao B, Wang W, et al. (2007) Uncoupling of oxidative phosphorylation enables Candida albicans to resist killing by phagocytes and persist in tissue. Cell Microbiol 9: 492–501.
- 25. Lionakis MS, Lim JK, Lee CC, Murphy PM (2011) Organ-specific innate immune responses in a mouse model of invasive candidiasis. J Innate Immun 3: 180–199.
- 26. Spellberg B, Ibrahim AS, Edwards JE Jr, Filler SG (2005) Mice with disseminated candidiasis die of progressive sepsis. J Infect Dis 192: 336–343.
- 27. MacCallum DM, Odds FC (2005) Temporal events in the intravenous challenge model for experimental Candida albicans infections in female mice. Mycoses 48: 151–161.
- 28. Gillum AM, Tsay EY, Kirsch DR (1984) Isolation of the Candida albicans gene for orotidine-5′-phosphate decarboxylase by complementation of S. cerevisiae ura3 and E. coli pyrF mutations. Mol Gen Genet 198: 179–182.
- 29. Liu H, Kohler J, Fink GR (1994) Suppression of hyphal formation in Candida albicans by mutation of a STE12 homolog. Science 266: 1723–1726.
- 30. Rupniak HT, Rowlatt C, Lane EB, Steele JG, Trejdosiewicz LK, et al. (1985) Characteristics of four new human cell lines derived from squamous cell carcinomas of the head and neck. J Natl Cancer Inst 75: 621–635.
- 31. Park H, Myers CL, Sheppard DC, Phan QT, Sanchez AA, et al. (2005) Role of the fungal Ras-protein kinase A pathway in governing epithelial cell interactions during oropharyngeal candidiasis. Cell Microbiol 7: 499–510.
- 32. Ebert D (1998) Experimental evolution of parasites. Science 282: 1432–1435.
- 33. Papadimitriou JM, Ashman RB (1986) The pathogenesis of acute systemic candidiasis in a susceptible inbred mouse strain. J Pathol 150: 257–265.
- 34. Netea MG, Brown GD, Kullberg BJ, Gow NA (2008) An integrated model of the recognition of Candida albicans by the innate immune system. Nat Rev Microbiol 6: 67–78.
- 35. Ashman RB (2008) Protective and pathologic immune responses against Candida albicans infection. Front Biosci 13: 3334–3351.
- 36. Lin Y, Cai J, Turner JD, Zhao X (1996) Quantification of bovine neutrophil migration across mammary epithelium in vitro. Can J Vet Res 60: 145–149.
- 37. Romani L, Mencacci A, Cenci E, Spaccapelo R, Toniatti C, et al. (1996) Impaired neutrophil response and CD4+ T helper cell 1 development in interleukin 6-deficient mice infected with Candida albicans. J Exp Med 183: 1345–1355.
- 38. Natarajan U, Randhawa N, Brummer E, Stevens DA (1998) Effect of granulocyte-macrophage colony-stimulating factor on candidacidal activity of neutrophils, monocytes or monocyte-derived macrophages and synergy with fluconazole. J Med Microbiol 47: 359–363.
- 39. MacCallum DM, Castillo L, Brown AJ, Gow NA, Odds FC (2009) Early-expressed chemokines predict kidney immunopathology in experimental disseminated Candida albicans infections. PLoS One 4: e6420.
- 40. Djeu JY (1990) Role of tumor necrosis factor and colony-stimulating factors in phagocyte function against Candida albicans. Diagn Microbiol Infect Dis 13: 383–386.
- 41. Brown DH Jr, Giusani AD, Chen X, Kumamoto CA (1999) Filamentous growth of Candida albicans in response to physical environmental cues and its regulation by the unique CZF1 gene. Mol Microbiol 34: 651–662.
- 42. Elena SF, Lenski RE (2003) Evolution experiments with microorganisms: the dynamics and genetic bases of adaptation. Nat Rev Genet 4: 457–469.
- 43. Ensminger AW, Yassin Y, Miron A, Isberg RR (2012) Experimental evolution of Legionella pneumophila in mouse macrophages leads to strains with altered determinants of environmental survival. PLoS Pathog 8: e1002731.
- 44. Levin BR, Perrot V, Walker N (2000) Compensatory mutations, antibiotic resistance and the population genetics of adaptive evolution in bacteria. Genetics 154: 985–997. | 1 | 10 |
<urn:uuid:af2b5f02-877c-4b84-a75e-f9e3c3b80b49> | Schizophrenia is a psychiatric diagnosis denoting a persistent, often chronic, mental illness variously affecting behavior, thinking, and emotion. The term schizophrenia comes from the Greek words (schizo, split or divide) and (phrenos, mind) and is best translated as "shattered mind". Schizophrenia is commonly confused with multiple personality disorder, a different diagnosis.
Schizophrenia is most commonly characterized by both 'positive symptoms' (those additional to normal experience and behaviour) and 'negative symptoms' (the lack or decline in normal experience or behaviour). Positive symptoms are grouped under the umbrella term psychosis and typically include delusions, hallucinations, and thought disorder. Negative symptoms may include a lack of or inappropriate emotional responses, poverty of speech, and lack of motivation. Some models of schizophrenia include thought disorder and planning problems in a third grouping, the 'disorganization syndrome'. Additionally, neurocognitive deficits may be present. These take the form of reduction or impairment in basic psychological functions such as memory, attention, problem solving, executive function and social cognition. The onset is typically in late adolescence and early adulthood, with males tending to show symptoms earlier than females.
Psychiatrist Emil Kraepelin was the first to make the distinction between what he called dementia praecox and other forms of madness. This classification was later renamed 'schizophrenia' by psychiatrist Eugen Bleuler in 1911 as it became clear that Kraepelin's name was not an adequate description of the condition.
The diagnostic approach to schizophrenia has been opposed, most notably by the anti-psychiatry movement, who argue that classifying specific thoughts and behaviours as illness allows social control of people that society finds undesirable but who have committed no crime.
More recently, it has been argued that schizophrenia is just one end of a spectrum of experience and behaviour, and everybody in society may have some such experiences in their life. This is known as the 'continuum model of psychosis' or the 'dimensional approach' and is most notably argued for by psychologist Richard Bentall and psychiatrist Jim van Os.
Although no definite causes of schizophrenia have been identified, most researchers and clinicians currently believe that schizophrenia is primarily a disorder of the brain. It is thought that schizophrenia may result from a mixture of genetic disposition (genetic studies using various techniques have shown relatives of people with schizophrenia are more likely to show signs of schizophrenia themselves) and environmental stress (research suggests that stressful life events may precede a schizophrenic episode).
It is also thought that processes in early neurodevelopment are important, particularly those that occur during pregnancy. In adult life, particular importance has been placed upon the function (or malfunction) of dopamine in the mesolimbic pathway in the brain. This theory, known as the dopamine hypothesis of schizophrenia, largely resulted from the accidental finding that a drug group which blocks dopamine function, known as the phenothiazines, reduced psychotic symptoms. These drugs have now been developed further, and antipsychotic medication is commonly used as a first line treatment. However, this theory is now thought to be overly simplistic as a complete explanation.
Differences in brain structure have been found between people with schizophrenia and those without. However, these tend only to be reliable on the group level and, due to the significant variability between individuals, may not be reliably present in any particular individual.
Accounts that may relate to symptoms of schizophrenia date back as far as 2000 BC in the Book of Hearts, part of the ancient Ebers papyrus. However, a recent study1 into the ancient Greek and Roman literature showed that whilst the general population probably had an awareness of psychotic disorders, there was no recorded condition that would meet the modern diagnostic criteria for schizophrenia in these societies.
This nonspecific concept of madness has been around for many thousands of years and schizophrenia was only classified as a distinct mental disorder by Kraepelin in 1887. He was the first to make a distinction in the psychotic disorders between what he called dementia praecox (a term first used by psychiatrist Benedict A. Morel) and manic depression. Kraepelin believed that dementia praecox was primarily a disease of the brain2, and particularly a form of dementia. Kraepelin named the disorder 'dementia praecox' (early dementia) to distinguish it from other forms of dementia (such as Alzheimer's disease) which typically occur late in life. He used this term because his studies focused on young adults with dementia.22
The term schizophrenia is derived from the Greek words 'schizo' (split) and 'phrene' (mind) and was coined by Eugen Bleuler to refer to the lack of interaction between thought processes and perception. He was also the first to describe the symptoms as "positive" or "negative."22 Bleuler changed the name to schizophrenia as it was obvious that Kraepelin's name was misleading. The word "praecox" implied precocious or early onset, hence premature dementia, as opposed to senile dementia from old age. Bleuler realized the illness was not a dementia (it did not always lead to mental deterioration) and could sometimes occur late as well as early in life, and it was therefore misnamed.
With the term 'schizophrenia', Bleuler intended to capture the separation of function between personality, thinking, memory, and perception; however, it is commonly misunderstood to mean that affected persons have a 'split personality' (something akin to the character in Robert Louis Stevenson's The Strange Case of Dr. Jekyll and Mr. Hyde). Schizophrenia is commonly, although incorrectly, confused with multiple personality disorder (now called 'dissociative identity disorder'). Although people diagnosed with schizophrenia may 'hear voices' and may experience the voices as distinct personalities, schizophrenia does not involve a person changing between distinct multiple personalities. The confusion perhaps arises in part due to the meaning of Bleuler's term 'schizophrenia' (literally 'split mind'). Interestingly, the first known misuse of the word schizophrenia to mean 'split personality' (in the Jekyll and Hyde sense) was in an article by the poet T. S. Eliot in 1933.3
In the first half of the twentieth century, schizophrenia was considered by many as a "hereditary defect", and people with schizophrenia became the target of the eugenics programs of many countries. Hundreds of thousands were forcibly sterilized, the majority in Germany, the United States, and various Scandinavian countries.
Diagnosis and presentation (signs and symptoms)
Like many mental illnesses, the diagnosis of schizophrenia is based upon the behaviour of the person being assessed. There is a list of diagnostic criteria which must be met for a person to be so diagnosed. These depend on both the presence and the duration of certain signs and symptoms.
The most commonly-used criteria for diagnosing schizophrenia are from the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organisation's International Statistical Classification of Diseases and Related Health Problems (ICD). The most recent versions are ICD-10 (http://www.who.int/whosis/icd10/) and DSM-IV-TR (http://www.psych.org/research/dor/dsm/index.cfm).
To be diagnosed as having schizophrenia, a person must display:
- A) Characteristic symptoms: Two or more of the following, each present for a significant portion of time during a one-month period (or less, if successfully treated):
- delusions
- hallucinations
- disorganized speech (e.g., frequent derailment or incoherence). See thought disorder.
- grossly disorganized or catatonic behavior
- negative symptoms, i.e., affective flattening (lack or decline in emotional response), alogia (lack or decline in speech), or avolition (lack or decline in motivation).
- Note: Only one Criterion A symptom is required if delusions are bizarre or hallucinations consist of hearing voices; a brief illustration of this counting rule is sketched after the criteria below.
- B) Social/occupational dysfunction: For a significant portion of the time since the onset of the disturbance, one or more major areas of functioning such as work, interpersonal relations, or self-care, are markedly below the level achieved prior to the onset.
- C) Duration: Continuous signs of the disturbance persist for at least six months. This six-month period must include at least one month of symptoms (or less, if successfully treated) that meet Criterion A.
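Criterion A above is essentially a counting rule (two or more symptom classes, or a single one in the special cases noted). The sketch below only illustrates that rule; it is not a diagnostic tool, the symptom flags and field names are hypothetical, and a real diagnosis also requires criteria B and C plus clinical judgement.

```python
# Illustration only: the counting logic of DSM-IV Criterion A described above.
# Not a diagnostic tool; criteria B (functional decline) and C (duration) and
# clinical judgement are also required for an actual diagnosis.

def criterion_a_met(symptoms: dict) -> bool:
    """symptoms maps each Criterion A symptom class to True/False."""
    classes = ["delusions", "hallucinations", "disorganized_speech",
               "disorganized_or_catatonic_behavior", "negative_symptoms"]
    count = sum(bool(symptoms.get(c, False)) for c in classes)
    # Only one symptom class is required if delusions are bizarre or the
    # hallucinations consist of voices commenting or conversing.
    special_case = symptoms.get("bizarre_delusions", False) or \
                   symptoms.get("voices_commenting_or_conversing", False)
    return count >= 2 or (count >= 1 and special_case)

example = {"hallucinations": True, "negative_symptoms": True}
print(criterion_a_met(example))   # True: two symptom classes present
```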
The DSM-IV-TR further classifies schizophrenia into the following subtypes:
- catatonic type (where marked absences or peculiarities of movement are present),
- disorganized type (where thought disorder and flat or inappropriate affect are present together),
- paranoid type (where delusions and hallucinations are present but thought disorder, disorganized behaviour, and affective flattening is absent),
- residual type (where positive symptoms are present at a low intensity only) and
- undifferentiated type (psychotic symptoms are present but the criteria for paranoid, disorganized, or catatonic types has not been met).
Symptoms may also be described as 'positive symptoms' (those additional to normal experience and behaviour) and 'negative symptoms' (the lack or decline in normal experience or behaviour). 'Positive symptoms' describe psychosis and typically include delusions, hallucinations and thought disorder. 'Negative symptoms' describe absent or inappropriate emotion, poverty of speech, and lack of motivation. In three-factor models of schizophrenia, a third symptom grouping, the so-called 'disorganization syndrome', is also included. This considers thought disorder and related disorganized behaviour to be in a separate symptom cluster from delusions and hallucinations.
It is worth noting that many of the positive or psychotic symptoms may occur in a variety of disorders and not only in schizophrenia. The psychiatrist Kurt Schneider tried to list the particular forms of psychotic symptoms which he thought were particularly useful in distinguishing between schizophrenia and other disorders which could produce psychosis. These are called first rank symptoms or Schneiderian first rank symptoms and include delusions of being controlled by an external force, the belief that thoughts are being inserted or withdrawn from your conscious mind, the belief that your thoughts are being broadcast to other people and hearing hallucinated voices which comment on your thoughts or actions, or may have a conversation with other hallucinated voices. As with other diagnostic methods, the reliability of 'first rank symptoms' has been questioned4, although they remain in use as diagnostic criteria in many countries.
Diagnostic issues and controversies
It has been argued that the diagnostic approach to schizophrenia is flawed, as it relies on an assumption of a clear dividing line between what is considered to be mental illness (fulfilling the diagnostic criteria) and mental health (not fulfilling the criteria). Recently it has been argued, notably by psychiatrist Jim van Os and psychologist Richard Bentall, that this makes little sense, as studies have shown that psychotic symptoms are present in many people who never become 'ill' in the sense of feeling distressed, becoming disabled in some way or needing medical assistance6.
Of particular concern is that the decision as to whether a symptom is present is a subjective judgement by the person making the diagnosis, or relies on an incoherent definition (for example, see the entries on delusions and thought disorder for a discussion of this issue). More recently, it has been argued that psychotic symptoms are not a good basis for making a diagnosis of schizophrenia, as "psychosis is the 'fever' of mental illness, a serious but nonspecific indicator".5
Perhaps because of these factors, studies examining the diagnosis of schizophrenia have typically shown relatively low, or inconsistent levels of diagnostic reliability. Most famously, David Rosenhan's 1972 study, published as On being sane in insane places, demonstrated that the diagnosis of schizophrenia was (at least at the time) often subjective and unreliable. More recent studies have found agreement between any two psychiatrists when diagnosing schizophrenia tends to reach about 65% at best33. This, and the results of earlier studies of diagnostic reliability (which typically reported even lower levels of agreement) have led some critics to argue that the diagnosis of schizophrenia should be abandoned34.
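Diagnostic agreement figures like the 65% quoted above are usually reported either as raw percentage agreement or as a chance-corrected statistic such as Cohen's kappa. Kappa is not mentioned in the text; it is used below only as an illustration, with hypothetical ratings from two psychiatrists.

```python
# Illustration: raw agreement and Cohen's kappa for two raters (hypothetical labels).
from collections import Counter

rater1 = ["scz", "scz", "other", "scz", "other", "other", "scz", "other", "scz", "other"]
rater2 = ["scz", "other", "other", "scz", "other", "scz", "scz", "other", "other", "other"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected agreement by chance, from each rater's marginal label frequencies
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

kappa = (observed - expected) / (1 - expected)
print(f"raw agreement = {observed:.0%}, Cohen's kappa = {kappa:.2f}")
```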
Proponents have argued for a new approach that would use the presence of specific neurocognitive deficits to make a diagnosis. These often accompany schizophrenia and take the form of a reduction or impairment in basic psychological functions such as memory, attention, executive function and problem solving. It is these sorts of difficulties, rather than the psychotic symptoms (which can in many cases be controlled by antipsychotic medication), which seem to be the cause of most disability in schizophrenia. However, this argument is relatively new and it is unlikely that the method of diagnosing schizophrenia will change radically in the near future.
The diagnostic approach to schizophrenia has also been opposed by the anti-psychiatry movement, who argue that classifying specific thoughts and behaviours as an illness allows social control of people that society finds undesirable but who have committed no crime. They argue that this is a way of unjustly classifying a social problem as a medical one to allow the forcible detention and treatment of people displaying these behaviours, which is something which can be done under mental health legislation in most western countries.
An example of this can be seen in the former Soviet Union, where an additional sub-classification of sluggishly progressing schizophrenia was created. Particularly in the RSFSR (Russian Soviet Federated Socialist Republic) this diagnosis was used for the purpose of silencing political dissidents or forcing them to recant their ideas by the use of forcible confinement and treatment. In 2000 similar concerns about the abuse of psychiatry to unjustly silence and detain members of the Falun Gong movement by the Chinese government led the American Psychiatric Association's Committee on the Abuse of Psychiatry and Psychiatrists to pass a resolution to urge the World Psychiatric Association to investigate the situation in China.
Western psychiatric medicine tends to favour a definition of symptoms that depends on form rather than content (an innovation first argued for by psychiatrists Karl Jaspers and Kurt Schneider). Therefore, you should be able to believe anything, however unusual or socially unacceptable, without being diagnosed delusional, unless your belief is judged to be held in a particular way. In principle, this would stop people being forcibly detained or treated simply for what they believe. However, the distinction between form and content is not easy, or always possible, to make in practice (see delusion). This has led to accusations by anti-psychiatry, surrealist and mental health system survivor groups that psychiatric abuses exist to some extent in the West as well.
Causes of schizophrenia

While the reliability of the schizophrenia diagnosis introduces difficulties in measuring the relative effect of genes and environment (for example, symptoms overlap to some extent with severe bipolar disorder or major depression), there is evidence to suggest that genetic vulnerability and environmental stressors can act in combination to cause schizophrenia.
A recent review listed seven genes as likely to be involved in the inheritance of schizophrenia or the risk of developing it26. Evidence comes from research (such as linkage studies) suggesting that multiple chromosomal regions are transmitted to people who are later diagnosed as having schizophrenia. Some family association studies have demonstrated a relationship to a gene known as COMT, which encodes the dopamine catabolic enzyme catechol-O-methyl transferase27. This is particularly interesting because of the known link between dopamine function, psychosis, and schizophrenia.
While highly heritable (some estimates are as high as 70%), schizophrenia is a disorder of complex inheritance (analogous to diabetes or high blood pressure). Thus, several genes interact to generate risk for schizophrenia. Evidence for the role of the environment comes from the observation that identical twins do not universally develop schizophrenia. A recent review of the genetic evidence has suggested a 28% chance of one identical twin developing schizophrenia if the other already has it7.
A twin study of schizophrenia carried out in the Finnish Twin Cohort, involving 16,649 same-sexed twin pairs, found a concordance rate for schizophrenia of only 11.0% among monozygotic twins and only 1.8% among dizygotic twins.37
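Concordance rates of this kind are computed directly from counts of twin pairs. The sketch below uses made-up pair counts, chosen only so that they reproduce rates of the same order as those quoted; it is not the Finnish cohort data.

```python
# Minimal sketch of a pairwise twin concordance calculation (pair counts are hypothetical).
def pairwise_concordance(both_affected: int, one_affected: int) -> float:
    """Fraction of affected twin pairs in which both twins are affected."""
    return both_affected / (both_affected + one_affected)

# Hypothetical counts of twin pairs with at least one affected twin
mz = pairwise_concordance(both_affected=11, one_affected=89)   # -> 0.11, i.e. 11%
dz = pairwise_concordance(both_affected=4,  one_affected=218)  # -> ~1.8%

print(f"MZ concordance = {mz:.1%}, DZ concordance = {dz:.1%}")
```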
There is also considerable evidence indicating that stress may trigger episodes of schizophrenia. For example, emotionally turbulent families8 and stressful life events9 have been shown to be risk factors for relapses or triggers for episodes of schizophrenia. Other factors such as poverty and discrimination may also be involved. This may explain why minority communities have much higher rates of schizophrenia than when members of the same ethnic groups are resident in their home country.
One particularly stable and replicable finding has been the association between living in an urban environment and the risk of developing schizophrenia, even after factors such as drug use, ethnic group and size of social group have been controlled for29. A recent study of 4.4 million men and women in Sweden found a 68–77% increased risk of psychosis for people living in the most urbanized environments, a significant proportion of which is likely to be accounted for by schizophrenia30.
One curious finding is that people diagnosed with schizophrenia are more likely to have been born in winter or spring32 (at least in the northern hemisphere). However, the effect is not large and it is still not clear why this may occur.
It is also thought that processes in early neurodevelopment are important, particularly during pregnancy. For example, women who were pregnant during the Dutch famine of 1944, when many people were close to starvation, had a higher chance of having a child who would later develop schizophrenia10. Similarly, studies of Finnish mothers who were pregnant when they found out that their husbands had been killed during the Winter War of 1939–1940 have shown that their children were much more likely to develop schizophrenia than the children of mothers who found out about their husbands' death before or after the pregnancy11, suggesting that even psychological trauma in the mother may have an effect.
In adult life, particular importance has been placed upon the function (or malfunction) of dopamine in the mesolimbic pathway in the brain. This theory, known as the dopamine hypothesis of schizophrenia, largely resulted from the accidental finding that a drug group which blocks dopamine function, known as the phenothiazines, reduced psychotic symptoms. These drugs have now been developed further and antipsychotic medication is commonly used as a first line treatment.
However, this theory is now thought to be overly simplistic as a complete explanation. This is partly because newer antipsychotic medication (so-called atypical antipsychotic medication) is equally effective as older medication but also affects serotonin function and may have a slightly weaker dopamine-blocking effect. Psychiatrist David Healy has also argued that pharmaceutical companies have promoted certain oversimplified biological theories of mental illness to promote their own sales of biological treatments12.
Much recent research has focused on differences in structure or function in certain brain areas in people diagnosed with schizophrenia.
Early evidence for differences in the neural structure came from the discovery of ventricular enlargement in people diagnosed as schizophrenic, for whom negative symptoms were most prominent35. However, this finding has not proved particularly reliable on the level of the individual person, with considerable variation between patients.
More recent studies have shown a large number of differences in brain structure between people with and without diagnoses of schizophrenia36. However, as with earlier studies, many of these differences are only reliably detected when comparing groups of people, and are unlikely to predict any differences in brain structure of an individual person with schizophrenia.
Studies using neuropsychological tests and brain scanning technologies such as fMRI and PET to examine functional differences in brain activity have shown that differences seem to most commonly occur in the frontal lobes, hippocampus, and temporal lobes13. These differences are heavily linked to the neurocognitive deficits which often occur with schizophrenia, particularly in areas of memory, attention, problem solving, executive function and social cognition.
Incidence and prevalence
Schizophrenia is typically diagnosed in late adolescence or early adulthood. It is found approximately equally in men and women, though the onset tends to be later in women, who also tend to have a better course and outcome.
The lifetime prevalence of schizophrenia is commonly given as 1%; however, a recent review of studies from around the world estimated it to be 0.55%14. The same study also found that prevalence may vary greatly from country to country, despite the received wisdom that schizophrenia occurs at the same rate throughout the world. It is worth noting, however, that this may be in part due to differences in the way schizophrenia is diagnosed. The incidence of schizophrenia was given as a range of between 7.5 and 16.3 cases per 100,000 of the population.
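Prevalence and incidence figures of this kind translate directly into expected numbers of people affected. As a rough worked example (the population size below is an assumption for illustration, not a figure from the cited review), the sketch applies the 0.55% lifetime prevalence and the 7.5–16.3 per 100,000 annual incidence range to a city of one million.

```python
# Rough worked example applying the prevalence and incidence figures quoted above
# to a hypothetical city of one million people.
population = 1_000_000

lifetime_prevalence = 0.0055                 # 0.55%
incidence_low, incidence_high = 7.5, 16.3    # new cases per 100,000 per year

ever_affected = population * lifetime_prevalence
new_cases_low = population / 100_000 * incidence_low
new_cases_high = population / 100_000 * incidence_high

print(f"expected lifetime cases: ~{ever_affected:,.0f}")
print(f"expected new cases per year: ~{new_cases_low:.0f} to {new_cases_high:.0f}")
```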
Schizophrenia is also a major cause of disability. In a recent 14-country study15, active psychosis was ranked the third most disabling condition after quadriplegia and dementia and before paraplegia and blindness.
Treatment

The first line treatment for schizophrenia is usually the use of antipsychotic medication. The newer atypical antipsychotic medication (such as olanzapine, risperidone and clozapine) is preferred over older typical antipsychotic medication (such as chlorpromazine and haloperidol), as the atypicals have different side effect profiles, including less frequent development of extrapyramidal side-effects. However, it is still unclear whether newer drugs reduce the chances of developing the rare but potentially life-threatening neuroleptic malignant syndrome.
Atypical antipsychotics have been claimed to have additional beneficial effects on negative as well as positive symptoms. However, the newer drugs are much more costly as they are still within patent, whereas the older drugs are available in inexpensive generic forms. Aripiprazole, a drug from a new class of antipsychotic drugs (variously named 'dopamine system stabilizers' or 'partial dopamine agonists'), has recently been developed and early research suggests that it may be a safe and effective treatment for schizophrenia16.
Hospitalisation may occur with severe episodes. This can be voluntary or (if mental health legislation allows it) involuntary (called civil or involuntary commitment). Mental health legislation may also allow a person to be treated against their will. However, in many countries such legislation does not exist, or does not have the power to enforce involuntary hospitalisation or treatment.
Psychotherapy or other forms of talk therapy may be offered, with cognitive behavioural therapy being the most frequently used. This may focus on the direct reduction of the symptoms, or on related aspects, such as issues of self-esteem, social functioning, and insight. There have been some promising results with cognitive behavioural therapy, but the balance of current evidence is inconclusive17.
Other support services may also be available such as drop-in centres, visits from members of a 'community mental health team' and patient-led support groups. In recent years the importance of service-user led recovery based movements has grown substantially throughout Europe and America. Groups such as the Hearing Voices Network and more recently, the Paranoia Network, have developed a self-help approach that aims to provide support and assistance outside of the traditional medical model adopted by mainstream psychiatry. By avoiding framing personal experience in terms of criteria for mental illness or mental health, they aim to destigmatise the experience and encourage individual responsibility and a positive self-image.
In many non-Western societies, schizophrenia may be treated with more informal, community-led methods. A particularly sobering thought for Western psychiatry is that the outcome for people diagnosed as schizophrenic in non-Western countries may actually be much better18 than for people in the West. The reasons for this recently discovered fact are still far from clear, although cross-cultural studies are being conducted to find out why. One important factor may be that many non-Western societies (including intact Native American cultures) are collectivist societies, in that they emphasize working together for the good of other society members. This is in contrast to many Western societies, which can be highly individualistic. Collectivist societies tend to stress the importance of the connectedness of extended family, providing a useful support mechanism for the stress that mental illness plays on both the ill and others around them.
Prognosis for any particular individual affected by schizophrenia is particularly hard to judge as treatment and access to treatment is continually changing as new methods become available and medical recommendations change.
However, retrospective studies have shown that about a third of people make a full recovery, about a third show improvement but not a full recovery, and a third remain ill19.
There is an extremely high suicide rate associated with schizophrenia. A recent study showed that 30% of patients diagnosed with this condition had attempted suicide at least once during their lifetime20. Another study suggested that 10% of persons with schizophrenia die by suicide21.
Schizophrenia and drug use
Schizophrenia can sometimes be triggered by heavy use of stimulant or hallucinogenic drugs, although some claim that a predisposition towards developing schizophrenia is needed for this to occur. There is also some evidence suggesting that people with schizophrenia who respond to treatment can relapse as a result of subsequent drug use.
Drugs such as methamphetamine, ketamine, PCP and LSD have been used to mimic schizophrenia for research purposes, although this has now fallen out of favour with the scientific research community, as the differences between the drug induced states and the typical presentation of schizophrenia have become clear.
Hallucinogenic drugs were also briefly tested as possible treatments for schizophrenia by psychiatrists such as Humphry Osmond and Abram Hoffer in the 1950s. Ironically, it was mainly for this experimental treatment of schizophrenia that LSD administration was legal, briefly before its use as a recreational drug led to its criminalization.
There is now increasing evidence that cannabis use can be a contributing trigger to developing schizophrenia. The most recent studies suggest that cannabis is neither a sufficient nor necessary factor in developing schizophrenia, but that cannabis may significantly increase the risk of developing schizophrenia and may be, among others, a significant causal factor31.
It has been noted that the majority of people with schizophrenia (estimated at between 75% and 90%) smoke tobacco. However, people diagnosed with schizophrenia have a much lower than average chance of developing and dying from lung cancer. While the reason for this is unknown, it may be because of a genetic resistance to the cancer, a side-effect of drugs being taken, or a statistical effect of increased likelihood of dying from causes other than lung cancer22. It is argued that the increased level of smoking in schizophrenia may be due to a desire to self-medicate with nicotine. A recent study of over 50,000 Swedish conscripts found that there was a small but significant protective effect of smoking cigarettes on the risk of developing schizophrenia later in life.28 Whilst the authors of the study stressed that the risks of smoking far outweigh these minor benefits, this study provides further evidence for the 'self-medication' theory of smoking in schizophrenia and may give clues as to how schizophrenia might develop at the molecular level.
Alternative approaches to schizophrenia
Psychiatrist Thomas Szasz has argued that psychiatric patients are not ill but are just individuals with unconventional thoughts and behaviour that make society uncomfortable. He argues that society seeks to unjustly control such individuals by classifying their behaviour as an illness and forcibly treating them as a method of social control. An important but subtle point is that Szasz has never denied the existence of the phenomena that mainstream psychiatry classifies as an illness (such as delusions, hallucinations or mood changes) but simply does not believe that they are a form of illness.
Similarly, psychiatrist R. D. Laing has argued that the symptoms of what we call mental illness are just reasonable (although perhaps not always obviously comprehensible) reactions to impossible demands that society and particularly family life puts on some individuals. Laing was revolutionary in valuing the content of psychotic experience as worthy of interpretation, rather than considering it simply as a secondary but essentially meaningless marker of underlying psychological or neurological distress.
It is worth noting that neither Szasz nor Laing ever considered themselves to be 'anti-psychiatry' in the sense of being against psychiatric treatment, but simply believed that it should be conducted between consenting adults, rather than imposed upon anyone against their will.
In the 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind, psychologist Julian Jaynes proposed that until the beginning of historic times, schizophrenia or a similar condition was the normal state of human consciousness. This would take the form of a "bicameral mind" where a normal state of low affect, suitable for routine activities, would be interrupted in moments of crisis by "mysterious voices" giving instructions, which early people characterized as interventions from the gods. This theory was briefly controversial. Continuing research has failed to either further confirm or refute the thesis.
Psychiatrist Tim Crow has argued that schizophrenia may be the evolutionary price we pay for a left brain hemisphere specialisation for language25. Since psychosis is associated with greater levels of right brain hemisphere activation and a reduction in the usual left brain hemisphere dominance, our language abilities may have evolved at the cost of causing schizophrenia when this system breaks down.
Researchers into shamanism have speculated that in some cultures schizophrenia or related conditions may predispose an individual to becoming a shaman24. Certainly the experience of having access to multiple realities is not uncommon in schizophrenia, and is a core experience in many shamanic traditions. Equally, the shaman may have the skill to bring on and direct some of the altered states of consciousness psychiatrists label as illness. (See anti-psychiatry.)
Alternative medicine tends to hold the view that schizophrenia is primarily caused by imbalances in the body's reserves and absorption of dietary minerals, vitamins, fats, and/or the presence of excessive levels of toxic heavy metals. The body's adverse reactions to gluten are also strongly implicated in some alternative theories (see gluten-free, casein-free diet).
See also
- dopamine hypothesis of schizophrenia
- formal thought disorder
- schizoaffective disorder
Famous people affected by schizophrenia
- Eduard Einstein (Albert's son. Eduard inherited his father's outstanding mind before developing schizophrenia.)
- Antonin Artaud (artist, poet, actor, theater philosopher)
- Buddy Bolden (jazz pioneer)
- Clara Bow (actress)
- The Genain quadruplets (a set of four girls who each developed schizophrenia)
- Peter Green (founder of rock group Fleetwood Mac)
- Jim Gordon (drummer for the rock group Derek and the Dominos)
- Zelda Fitzgerald (painter and wife of F. Scott Fitzgerald)
- James Tilly Matthews (subject of first book-length psychiatric case study)
- William Chester Minor (army surgeon and major contributor to the Oxford English Dictionary)
- John Nash (mathematician and subject of the movie A Beautiful Mind)
- Vaslav Nijinsky (ballet dancer and choreographer)
- Gene Ray (self-proclaimed doctor of cubicism)
- Phil Spector (music producer)
- Mark Vonnegut (Son of the famous writer Kurt Vonnegut; "The Eden Express", his first book, is his personal account of his bout with schizophrenia which was precipitated in large part by heavy marijuana use.)
- Louis Wain (artist)
- Wesley Willis (musician)
- Brian Wilson (songwriter and member of the Beach Boys)
- Adolf Wölfli (artist, in the outsider art tradition)
- Nancy Spungen (girlfriend of Sid Vicious of the punk rock band The Sex Pistols)
- Bentall, R. (2003) Madness explained: Psychosis and Human Nature. London: Penguin Books Ltd. ISBN 0713992492
- Green, M.F. (2001) Schizophrenia Revealed: From Neurons to Social Interactions. New York: W.W. Norton. ISBN 0393703347
- Torrey, E.F. (2001) Surviving Schizophrenia: A Manual for Families, Consumers, and Providers (4th Edition). Quill (HarperCollins Publishers) ISBN 0060959193
- Vonnegut, M. The Eden Express. ISBN 0553027557
- Schizophrenia brain fault may have been found (http://news.bbc.co.uk/2/hi/health/3991925.stm)
- Understanding Schizophrenia: A Mind Factsheet (http://www.mind.org.uk/Information/Booklets/Understanding/Understanding+schizophrenia.htm)
- DSM-IV-TR Full diagnostic criteria for schizophrenia (http://www.behavenet.com/capsules/disorders/schiz.htm)
- World Health Organisation data on schizophrenia (http://www.who.int/whr2001/2001/main/en/chapter2/002e3.htm) from 'The World Health Report 2001. Mental Health: New Understanding, New Hope'
- Description of extrapyramidal side effects (http://www.futur.com/edu-info/nurses/leaf4/nur_4a.htm)
- Schizophrenia in history (http://www.hubin.org/facts/history/history_schizophrenia_en.html)
- Schizophrenia.com (http://www.schizophrenia.com/) A non-profit making information site (pharmaceutical company sponsored)
- National Institute of Mental Health (USA) Schizophrenia information (http://www.nimh.nih.gov/publicat/schizmenu.cfm)
- LONI: Laboratory of Neuro Imaging (http://www.loni.ucla.edu/Research/Projects/Schizophrenia.html)
- Schizophrenia On-Line News Articles (http://www.y.addr.com/mn/index.html)
- The current WHO definition of Schizophrenia (http://www.who.int/mental_health/management/schizophrenia/en/)
1 Evans, K., McGrath, J., & Milns, R. (2003) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12752027&dopt=Abstract) Searching for schizophrenia in ancient Greek and Roman literature: a systematic review. Acta Psychiatrica Scandinavica, 107(5), 323–330.
2 Kraepelin, E. (1907) Textbook of Psychiatry (7th ed) (trans. A.R. Diefendorf). London: Macmillan.
3 Turner, T. (1999) 'Schizophrenia'. In G.E. Berrios and R. Porter (eds) A History of Clinical Psychiatry. London: Athlone Press. ISBN 0485242117
4 Bertelsen, A. (2002) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12145490&dopt=Abstract) Schizophrenia and Related Disorders: Experience with Current Diagnostic Systems. Psychopathology, 35, 89–93.
5 Tsuang, M. T., Stone, W. S., & Faraone, S. V. (2000) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=10873908&dopt=Abstract) Toward reformulating the diagnosis of schizophrenia. American Journal of Psychiatry, 157(7), 1041–1050.
6 Verdoux, H., & van Os, J. (2002) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=11853979&dopt=Abstract) Psychotic symptoms in non-clinical populations and the continuum of psychosis. Schizophr Res, 54(1–2), 59–65.
7 Torrey, E.F., Bowler, A.E., Taylor, E.H. & Gottesman, I.I. (1994) Schizophrenia and manic depressive disorder. New York: Basic Books. ISBN 0465072852
8 Bebbington, P., Kuipers, L. (1994) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=7991753&dopt=Abstract) The predictive utility of expressed emotion in schizophrenia: an aggregate analysis. Psychological Medicine, 24(3), 707–718.
9 Day R, Nielsen JA, Korten A, Ernberg G, Dube KC, Gebhart J, Jablensky A, Leon C, Marsella A, Olatawura M et al (1987) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=3595169&dopt=Abstract) Stressful life events preceding the acute onset of schizophrenia: a cross-national study from the World Health Organization. Culture, Medicine and Psychiatry, 11(2), 123–205.
10 Susser E, Neugebauer R, Hoek HW, Brown AS, Lin S, Labovitz D, Gorman JM (1996) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=8540774&dopt=Abstract) Schizophrenia after prenatal famine. Further evidence. Archives of General Psychiatry, 53(1), 25–31.
11 Huttunen MO, Niskanen P. (1978) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=727894&dopt=Abstract) Prenatal loss of father and psychiatric disorders. Archives of General Psychiatry, 35(4), 429–431.
12 Healy, D. (2002) The Creation of Psychopharmacology. Cambridge, MA: Harvard University Press. ISBN 0674006194
13 Green, M.F. (2001) Schizophrenia Revealed: From Neurons to Social Interactions. New York: W.W. Norton. ISBN 0393703347
14 Goldner EM, Hsu L, Waraich P, Somers JM (2002) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12500753&dopt=Abstract) Prevalence and incidence studies of schizophrenic disorders: a systematic review of the literature. Canadian Journal of Psychiatry, 47(9), 833–843.
15 Üstün TB, Rehm J, Chatterji S, Saxena S, Trotter R, Room R, Bickenbach J, and the WHO/NIH Joint Project CAR Study Group (1999) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=10408486&dopt=Abstract) Multiple-informant ranking of the disabling effects of different health conditions in 14 countries. Lancet (http://www.thelancet.com/), 354(9173), 111–115.
16 Potkin SG, Saha AR, Kujawa MJ, Carson WH, Ali M, Stock E, Stringfellow J, Ingenito G, Marder SR (2003) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12860772&dopt=Abstract) Aripiprazole, an Antipsychotic With a Novel Mechanism of Action, and Risperidone vs Placebo in Patients With Schizophrenia and Schizoaffective Disorder. Archives of General Psychiatry, 60(7), 681–690.
17Cormac I, Jones C, Campbell C. (2002) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=11869579&dopt=Abstract) Cognitive behaviour therapy for schizophrenia. Cochrane Database of Systematic Reviews, (1), CD000524.
18Kulhara P. (1994) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=7893767&dopt=Abstract) Outcome of schizophrenia: some transcultural observations with particular reference to developing countries. European Archives of Psychiatry and Clinical Neuroscience, 244(5), 22735.
19Harding CM, Brooks GW, Ashikaga T, Strauss JS, Breier A. (1987) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=3591992&dopt=Abstract) The Vermont longitudinal study of persons with severe mental illness, II: Long-term outcome of subjects who retrospectively met DSM-III criteria for schizophrenia. American Journal of Psychiatry, 144(6), 72735.
20Radomsky ED, Haas GL, Mann JJ, Sweeney JA (1999) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=10518171&dopt=Abstract) Suicidal behavior in patients with schizophrenia and other psychotic disorders. American Journal of Psychiatry, 156(10), 15905.
21Caldwell CB, Gottesman II. (1990) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=2077636&dopt=Abstract) Schizophrenics kill themselves too: a review of risk factors for suicide. Schizophrenia Bulletin, 16(4), 57189.
22"Conditions in Occupational Therapy: effect on occupational performance." ed. Ruth A. Hansen and Ben Atchison (Baltimore: Lippincott Williams & Williams, 2000), 5474. ISBN 0-683-30417-8
23Psychiatrie. 8. Aufl., Bd. 1: Allgemeine Psychiatrie; Bd. 11: Klinische Psychiatrie, 1. Teil. Barth, Leipzig 1909. Bd. 111, 1913; Bd. IV, 1915. (Translation of section on the disease from the German)
24Polimeni J, Reiss JP. (2002) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12018978&dopt=Abstract) How shamanism and group selection may reveal the origins of schizophrenia. Medical Hypothesis, 58(3), 2448.
25Crow, T. J. (1997) (http://www.ncbi.nlm.nih.gov:80/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=9246721&dopt=Abstract) Schizophrenia as failure of hemispheric dominance for language. Trends in Neuroscience, 20(8), 339343.
26Harrison PJ, Owen MJ. (2003) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12573388&dopt=Abstract) Genes for schizophrenia? Recent findings and their pathophysiological implications. Lancet (http://www.thelancet.com/), 361(9355), 4179.
27Shifman S, Bronstein M, Sternfeld M, Pisante-Shalom A, Lev-Lehman E, Weizman A, Reznik I, Spivak B, Grisaru N, Karp L, Schiffer R, Kotler M, Strous RD, Swartz-Vanetik M, Knobler HY, Shinar E, Beckmann JS, Yakir B, Risch N, Zak NB, Darvasi A (2002) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12402217&dopt=Abstract) A highly significant association between a COMT haplotype and schizophrenia. American Journal of Human Genetics, 71(6), 1296302.
28Zammit S, Allebeck P, Dalman C, Lundberg I, Hemmingsson T, Lewis (2003) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=14638593&dopt=Abstract) Investigating the association between cigarette smoking and schizophrenia in a cohort study. American Journal of Psychiatry, 160 (12), 221621.
29Van Os J. (2004) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15056569) Does the urban environment cause psychosis? British Journal of Psychiatry, 184 (4), 287288.
30Sundquist K, Frank G, Sundquist J. (2004) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15056572) Urbanisation and incidence of psychosis and depression: Follow-up study of 4.4 million women and men in Sweden. British Journal of Psychiatry, 184 (4), 293298.
31Arseneault L, Cannon M, Witton J, Murray RM. (2004) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=14754822) Causal association between cannabis and psychosis: examination of the evidence. British Journal of Psychiatry, 184, 1107.
32Davies G, Welham J, Chant D, Torrey EF, McGrath J. (2003) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=14609251) A systematic review and meta-analysis of Northern Hemisphere season of birth studies in schizophrenia. Schizophrenia Bulletin, 29 (3), 58793.
33McGorry PD, Mihalopoulos C, Henry L, Dakis J, Jackson HJ, Flaum M, Harrigan S, McKenzie D, Kulkarni J, Karoly R. (1995) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=7840355) Spurious precision: procedural validity of diagnostic assessment in psychotic disorders. American Journal of Psychiatry, 152 (2), 2203.
34Read, J. (2004) Does 'schizophrenia' exist ? Reliability and validity. In J. Read, L.R. Mosher, R.P. Bentall (eds) Models of Madness: Psychological, Social and Biological Approaches to Schizophrenia. ISBN 1583919066
35Johnstone EC, Crow TJ, Frith CD, Husband J, Kreel L. (1976) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=62160) Cerebral ventricular size and cognitive impairment in chronic schizophrenia. Lancet, 30;2 (7992), 924-6.
36Flashman LA, Green MF (2004) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15062627) Review of cognition and brain structure in schizophrenia: profiles, longitudinal course, and effects of treatment. Psychiatric Clinics of North America, 27 (1), 1-18, vii.
37Koskenvuo M, Langinvainio H, Kaprio J, Lonnqvist J, Tienari P.(1984) (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=6540965) Psychiatric hospitalization in twins. Acta Genet Med Gemellol (Roma), 33(2),321-32. | 1 | 13 |
MacGyle History, Family Crest & Coats of Arms
The Gaelic name used by the MacGyle family in ancient Ireland was Mac Ciele, derived from the word ciele, meaning companion.
Early Origins of the MacGyle family
The surname MacGyle was first found in County Mayo (Irish: Maigh Eo) located on the West coast of the Republic of Ireland in the province of Connacht, from before the 12th century.
Important Dates for the MacGyle family
This web page shows only a small excerpt of our MacGyle research. Another 62 words (4 lines of text) are included under the topic Early MacGyle History in all our PDF Extended History products and printed products wherever possible.
MacGyle Spelling Variations
Scribes and church officials, lacking today's standardized spelling rules, recorded names by how they were pronounced. This imprecise guide often led to the misleading result of one person's name being recorded under several different spellings. Numerous spelling variations of the surname MacGyle are preserved in documents of the family history. The various spellings of the name that were found include MacHale, McHale, MacHail, McHail, McCale, MacCale and others.
Early Notables of the MacGyle family (pre 1700)
More information is included under the topic Early MacGyle Notables in all our PDF Extended History products and printed products wherever possible.
Migration of the MacGyle family
A massive wave of Irish immigrants hit North America during the 19th century. Although many early Irish immigrants made a carefully planned decision to leave Ireland for the promise of free land, by the 1840s immigrants were fleeing a famine-stricken land in desperation. The condition of Ireland during the Great Potato Famine of the late 1840s can be attributed to a rapidly expanding population and English imperial policies. Those Irish families that arrived in North America were essential to its rapid social, industrial, and economic development. Passenger and immigration lists have revealed a number of early Irish immigrants bearing the name MacGyle: Anthony, James, John, Martin, Patrick, Peter and Richard MacHale all arrived in Philadelphia between 1840 and 1860.
Glossary of terms
Technical terms simply explained
laser, light amplification by stimulated emission of radiation laser technical light source with special properties of the emitted rays: 1. monochromatic (extremely narrow-band wavelength range, to be limited exactly to 0.1 nanometers), 2. coherent (predictable route and interference properties through high constancy of the wavelength, important for example in distance measuring, holography); 3. parallel (ideal bundling capacity for high energy densities); thus numerous possible applications, including printing forme illustration, laser printing (photoconductive drum), marking plastic packages.
laser diode, LD diode laser semiconductor radiation source related to the LED, with emission ranges in the infrared, red, and violet ranges; applications include the illustration of printing plates.
LCD, liquid-crystal display écran à cristaux liquides see active matrix display
LCH, lightness–chroma–hue clarté – chroma – teinte see CIELAB
LED, light-emitting diode DEL, diode électroluminescente semiconductor light source which, depending on its design, can emit narrow to broadband wavelength ranges. Examples of applications: Imaging of photoconductive drums in digital printing systems, light source in spectral measuring devices as an alternative to the gas-filled light bulb, simulation of random illuminants in color-matching booths (patent: Just Normlicht), backlighting of LCD monitors and television sets.
light lumière wavelength range of the visible radiation of the electromagnetic spectrum, approx. 380 to 780 nanometers.
light fastness solidité à la lumière ageing stability of pigments and dyes vis-à-vis UV rays in natural daylight (simulated with xenon high-pressure lamps in accordance with ISO 12040, determined as fade resistance (résistance à la décoloration) of the Blue Wool Scale in accordance with DIN EN ISO 105-B01).
light gathering/trap captage de la lumière phenomenon in halftone printing in which incident light, in its reflection in the upper layers of the substrate, is prevented from leaving the substrate, because the printing ink layer of the halftone dot blocks the way; this occurs mainly in substrates with low opacity and increases the tonal value increase of the halftone print. The strength of the effect is also dependent on the screen ruling (screen resolution, l/cm or lpi) and the presence of a lacquer coating. For this reason, the light trap correction exponent (Yule/Nielsen) is handled empirically for the screen value formula of Murray-Davies; a light trap occurs in the formula as a divisor of the screen density and of the solid density in values between 1 (no light trap detectable) and 1.9.
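For reference, the Murray-Davies relation and its Yule-Nielsen modification mentioned above are commonly written as follows (standard notation, not taken from this glossary: a is the apparent dot area, D_t the density of the halftone tint, D_s the solid density, n the light trap correction exponent):

    a_{\mathrm{MD}} = \frac{1 - 10^{-D_t}}{1 - 10^{-D_s}}, \qquad a_{\mathrm{YN}} = \frac{1 - 10^{-D_t/n}}{1 - 10^{-D_s/n}}

With n = 1 the Yule-Nielsen form reduces to Murray-Davies (no detectable light trap); values up to about 1.9 model increasing light gathering in the substrate.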
light scattering dispersion de la lumière on rough/matt surfaces the effect of diffuse reflection, which causes a chaotic distribution of the radiated light. In the measurement of color and color density, only the share of scattered light is registered.
light source source lumineuse any self-luminant body
lighting conditions conditions d’éclairage technical conditions prevailing during a color stimulus; these include the illuminant of the source, the influence of the surroundings (neutral or colored, stray light) and the angle of observation (glare and reflection-free).
lightness, luminance clarté, luminance luminance L in candela per square meter [cd/m²], strength of a light perception. Each color sensation is also associated with a lightness sensation, i.e. the luminance channel light–dark supplements the two chrominance channels red–green and blue–yellow, e.g. L* in the CIELAB/CIELCH color space model.
line work, LW dessin au trait monochrome pixel graphics file with 1 bit color depth, i.e. binary status “hue” and “no hue”.
linearization linéarisation procedure in the calibration of digital printing systems and inkjet digital proof printers. Here, with the help of a spectrodensitometer, e.g. Techkon SpectroDens/SpectroJet, a control strip provided for the linearization, e.g. ECI Gray Control Strip, is recorded on a printed sheet in order to be able to adapt the actual values to the target values. This option is also available on offset-print copies printed under PSO/ISO-standardized conditions; there, the “GrayCon strips” are recorded for two purposes: 1. Quality monitoring of the printing press (with impakt medien iQIP and Techkon SpectroJet), 2. indirect RIP linearization through retroactive calculation of the print sheet measured values with respect to the imaging curve, which can then be corrected in the imagesetter RIP with a validation program, e.g. MMS/basICColor Calibri print control. So far it has been common practice to linearize the imagesetter RIP to a stepped wedge on the basis of a printing plate measurement, e.g. with the Techkon SpectroPlate image analyzer.
liquid-crystal color display écran à cristaux liquides see active matrix display
LMS color space espace LMC see fundamental stimulus
long color encre longue see ink stringing
long key/black noir long in the UCR-modified chromatic composition with gray component replacement GCR, the use of the achromatic process color black instead of the three chromatic colors CMY, in a wide section of the tone value range, that is, with a low starting point.
look-up table table de correspondance see CLUT
low-migration ink encre à faible migration classification of a printing ink when the permissible SML values (“specific migration limit”, in mg material per kg food) of all substances that have an adverse effect on the smell/taste/appearance of the packed substance (test substance: modified polyphenylene oxide, Tenax) fall below permissible limits (DIN EN 14338/1186-13).
lpi, lines per inch lignes par pouce see definition
lumen lumen see luminous flux
luminance luminance SI unit: candela per square meter [cd/m²]; non-SI units: Apostilb (asb), Blondel, Bril, Lambert (la), International Stilb (isb), Skot (sk), Stilb (sb); see also brightness
luminance factor facteur de luminance value A for marking the brightness of a surface color, independent of the color ordering system and thus comparable between systems; in CIEXYZ A is equated with the Y stimulus (green).
luminance factors facteurs de luminance standardized values for brightness signal formation in color television; the following applies for the 3 color channels RGB: “luma-red” = 0.299, “luma-green” = 0.587 and “luma-blue” = 0.114.
luminescence luminescence the ability of solids, liquids or gases to emit light of a certain wavelength spectrum during the transition from a stimulated state to a basic state. The emission of light always requires an internal or external stimulation: through electric current (LEDs, OLEDs), light particles (phosphorescence: afterglow; fluorescence: real-time luminescence), chemical reactions (luminol in forensic medicine, luciferin in glowworms), heat input (embers), X-rays (projection screen), amplification through stimulated emission (laser) etc.
luminescence scanner détecteur de luminescence detector for the presence of fluorescent pigments in printing inks and optical brighteners in paper, e.g. from Leuze, Sick.
luminescent colors couleurs luminescentes color stimuli of self-luminant bodies (lamps, monitors, data projectors); see also additive color blending; opposite: surface colors.
luminescent diode diode électroluminescente see LED
luminous efficiency function courbe d’efficacité lumineuse V(λ) describes the spectral luminous efficiency of the human eye in daylight for the 2° standard observer. The maximum in green is 555 nanometers. The luminous efficiency function also explains why human beings can distinguish more green color shades than red or blue, why “green” is good for the eyes and why green coats are usually worn in the operating theater.
luminous efficiency/yield efficacité lumineuse the minimum relationship between radiated and received luminous flux required for successful color measurement.
luminous energy quantité de lumière light energy times time in lumen seconds [lm·s]
luminous flux/power flux lumineux product of luminous intensity and radiated solid angle [candela times steradian, cd·sr]; the SI unit is lumen [lm]; the weakening of the luminous flux through absorption is measured as degree of remission and, as a spectral degree of remission, forms the basis for all colorimetric parameters over the entire visible wavelength range. With video projectors, the illuminance reaching a surface [lux times square meter, lx·m²] at a defined distance is designated “ANSI lumen” (IEC DIN EN 61947-1).
luminous intensity intensité lumineuse radiant power per solid angle [watt per steradian, W/sr] or luminous flux per solid angle [lumen per steradian, lm/sr], weighted with the luminous efficiency function V(λ); SI unit candela [cd]
LUT table de correspondance see CLUT
Luther’s demand condition de Luther demand to be met by the spectral characteristics of the overall system, formulated in 1927 by the color film pioneer Robert Luther, applied today to colorimetric measuring instruments/color densitometer. The components measured light source (spectral radiation characteristics), measuring filter (spectral transmission characteristics) and photorecipient (relative spectral sensitivity distribution) must match each other in such a way that when they interact, they allow the desired spectral evaluation, without the individual components having to meet certain requirements. | 1 | 4 |
What is Attention Deficit Hyperactivity Disorder?
Attention deficit hyperactivity disorder (ADHD) is one of the most common childhood disorders and it can continue through adolescence and into adulthood. Symptoms include difficulty staying focused and paying attention, difficulty controlling behaviour, and hyperactivity (over-activity).
ADHD has three subtypes:
- Predominantly inattentive type
- Predominantly hyperactive-impulsive type
- Combined inattention and hyperactive impulsive type.
Predominantly inattentive. The majority of symptoms (six or more) are in the inattention category and fewer than six symptoms of hyperactivity-impulsivity are present, although hyperactivity-impulsivity may still be present to some degree. Children with this subtype are less likely to act out or have difficulties getting along with other children. They may sit quietly, but they are not paying attention to what they are doing. Therefore, the child may be overlooked, and parents and teachers may not notice that he or she has ADHD.
Predominantly hyperactive-impulsive. The majority of symptoms (six or more) are in the hyperactivity-impulsivity category and fewer than six symptoms of inattention are present, although inattention may still be present to some degree.
Combined hyperactive-impulsive and inattentive. Six or more symptoms of inattention and six or more symptoms of hyperactivity-impulsivity are present. Most children have the combined type of ADHD.
Patients in Somerset Diagnosed with ADHD
The data below show the number of patients in Somerset at 2nd October 2014 who have received a diagnosis of ADHD, by the year in which they were diagnosed.
- Since 1995, there have been 843 diagnoses, of which 722 (86%) were made when the patient was a child (aged under 18)
- 269 patients are currently being treated by the Trust - as mental health referrals but not necessarily for ADHD
- 571 have no current 'open' mental health referrals and 3 patients have died
- Diagnoses for ADHD were relatively few in the 1990s, peaking for children in the 2005-08 period and for adults in more recent years
The following charts show the trends in year-by-year diagnoses and current status of the patient in terms of mental health referrals with the Somerset Partnership. There is a chart for each of the following:-
- those diagnosed as a child (aged 0-17)
- those who were diagnosed as an adult (aged 18+).
- the total diagnosed.
To understand the data, take the Child patients in 2004 as an example. There were 44 patients aged 0-17 who received a diagnosis of ADHD for the first time, of which seven are currently being treated by the Trust, 36 have no ongoing treatment with the Trust and one has died.
The following ICD 10 codes were used to determine if a patient had been diagnosed with ADHD:
- F90 - Hyperkinetic disorders
- F90.0 - Disturbance of activity and attention
- F90.1 - Hyperkinetic conduct disorder
- F90.8 - Other hyperkinetic disorders
- F90.9 - Hyperkinetic disorder, unspecified
For more information on ADHD, please see the ADHD Foundation Website | 1 | 6 |
Autism Spectrum Disorder (ASD) is a neurological and developmental disorder which affects communication and behavior. Autism can be diagnosed at any age, but it is still called a “developmental disorder” because symptoms generally appear in the first two years of life. Autism affects the overall cognitive, emotional, social and physical health of the affected individual.
Autism is called as a “spectrum” disorder because there is wide variation in the type and severity of symptoms people experience.
What Happens in Autism Spectrum Disorder?
The exact cause of Autism Spectrum Disorder is so far unknown. There is no one single cause. However, following aspects may increase the risk:
- Family history
- Genetic mutations
- Fragile X syndrome and other genetic disorders
- Being born to older parents
- Low birth weight
- Metabolic imbalances
- Exposure to heavy metals and environmental toxins
- Fetal exposure to the medications valproic acid (Depakene) or thalidomide (Thalomid)
Symptoms of Autism Spectrum Disorder
- Abnormal Body Posturing or Facial Expressions
- Abnormal Tone of Voice
- Avoidance of Eye Contact or Poor Eye Contact
- Behavioral Disturbances
- Deficits in Language Comprehension
- Delay in Learning to Speak
- Flat or Monotonous Speech
- Inappropriate Social Interaction
- Intense Focus on One Topic
- Lack of Empathy
- Lack of Understanding Social Cues
- Learning Disability or Difficulty
- Not Engaging in Play With Peers
- Preoccupation With Specific Topics
- Problems With Two-Way Conversation
- Repeating Words or Phrases
- Repetitive Movements
- Self-Abusive Behaviors
- Sleep Disturbances
- Social Withdrawal
- Unusual Reactions in Social Settings
- Using Odd Words or Phrases
Does Vaccination cause ASD?
No. A number of studies have been conducted to see whether there is a link between any vaccine and Autism. None of the studies showed any such link.
Can ASD be Prevented?
No. Autism cannot be prevented, but treatment can improve behavior, skills and language development.
Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5)
The DSM-5 was published on 18 May 2013. Autism and less severe forms of the condition, including Asperger syndrome and pervasive developmental disorder not otherwise specified (PDD-NOS), have been combined into the diagnosis of autism spectrum disorder (ASD).
Therefore, Asperger syndrome is no longer considered a separate syndrome; it is now part of ASD.
Population of People with Autism
- In 2015, globally more than 24.8 million people were estimated to be affected with Autism, while Asperger syndrome affected a further 37.2 million.
- Autism occurs four-to-five times more often in males than females.
- In the developed countries, about 1.5% of children are diagnosed with ASD as of 2017.
ICD-10 Code for Autism Spectrum Disorder
For Autism Spectrum Disorder, the ICD-10-CM Diagnosis Code is F84.0
F84.0 code is a billable/specific ICD-10-CM code that can be used to indicate a diagnosis for reimbursement purposes.
For Asperger’s syndrome the ICD-10 code is F84.5 | 1 | 3 |
"Once you start down the dark path, forever will it dominate your destiny" - Master Yoda
There are those that think working with surface mount devices (SMD) is a path to the dark side. The same has also been said as technology has changed in amateur radio, from tubes to transistors, and from AM to single side band, and now from FM to digital.
They are indeed tiny parts and you probably need to do it under magnification. You WILL see a part snap out of your tweezers, never to be found again. It’s all part of learning a new technique. But it’s not as difficult, or expensive, as many expect. It just takes a little patience and practice.
In some ways SMD soldering can be even easier to work with than conventional through hole soldering. No holes to drill or try to suck out during a change or repair. There are also no leads to clip off after soldering. If you have built anything with Manhattan style construction then you have already surface mount soldered, just with bigger parts.
A few other reasons to move to SMD include the obsolescence of popular through hole ICs and transistors: gone are the days of the MPF102, 40673, and MC1496, and it now appears the popular SA612/NE602 DIP ICs are going the way of the dinosaur. With all these going away there are still plentiful SMD counterparts available to use such as the SA612AD or MC1496D. In the case of resistors and capacitors the SMD versions are not only plentiful but far cheaper. Common PCB carrier boards exist (see picture above) that can turn almost all of these small parts into conventional through hole style.
SMD parts vary in size such as the 2512 size 1W resistors used on the SMD Dummy Load or the SMD version of a 4N26 opto-coupler which is just a DIP form with pads instead of legs. Of course it can also go down to the very tiny 0402, and smaller, sized resistor. The picture at the top is an example of common SMD style parts compared with their through hole cousins. Numbers such as 2512 represent the width and length of a part. For example a 1206 is .120″ x .060″. In metric this same 1206 is called a 3216 for 3.2mm x 1.6mm.
Personally, I prefer to focus on 1206 sized parts. Many common parts are 0805 and 0603 sized, a little trickier to deal with but still tolerable. I try to avoid 0402 and smaller parts, they are just too small for me even though at times I have successfully struggled with a few repairs under high power magnification. For IC’s I will usually go down to TSSOP sized (.026″ between pins). There are some other chips to avoid such as ball grid array (BGA) or quad flat no-lead (QFN) as they require special soldering techniques.
- Temperature controlled soldering iron with fine tip. My favorite tips are a 2x17mm angle chisel and a .8x17mm chisel.
- Thin Solder – .025″ (.6mm) – Kester SN60PB40 (or its Pb-free equivalent). You can still solder with the larger stuff but the smaller is a little easier.
- Tweezers – Find a comfortable pair that works well with your hands and holds the parts. A nice pair can be had for a couple of dollars. I found an excellent pair at an army surplus store, wish I picked up a second one when I was there. Do not use the type that is always closed and you squeeze to open. Those are the best to use as part launchers. I also have some Bamboo tweezers. They are a little larger and stiffer than my metal ones but seem to work fine picking up parts.
- Avoid cheap vacuum pickup tools – You might be tempted to get one of those cheap vacuum pickup tools for a few dollars. I actually find them mostly useless and many are defective from the start. For small projects I still find tweezers to be the best tool. Farther below we will talk a bit more about vacuum pickup tools.
- Board holder – With SMD you don’t even need a board holder! Just use a little blue painters tape on the board edges onto a piece of board or sheet metal. You can tape right to the bench but the advantage of a board is you can rotate it into position. I also use an inexpensive PCB holder picked up off from Amazon for around $12 (above photo) or a small bench vacuum vice picked up at Lowes for around $30 (no longer there but available on the internet and cheaper).
- Flux pen – Good fluxing is the key to soldering and these little flux pens can be had on the internet for just a dollar or two.
- Solder wick – .035″, .055″, & .1″ widths to help remove a solder short between IC pins or clean up a pad. Cost is around $5 for a small spool. Note there are both fluxed and un-fluxed varieties. I usually buy the un-fluxed and apply my own with flux paste or the flux pen.
- Unless you have better than perfect vision a magnifier is a must. Myself it’s needed even with through hole assembly. Consider a lighted bench 5″ magnifier of 3 to 5 diopter ($50-$200). They come in handy for more than electronics, hand splinters, repairing kids toys, etc. Another magnifier which comes in handy is a simple 3″ 10x eye loupe ($10) to spot check areas. Highly recommended is a set of magnifying glasses. Search on the big sites for “Head Mount Magnifier”, often found for under $15. These include up to 5 changeable lenses and an LED head lamp.
Photos: bench mounted magnifier lamp; $15 Head Mount Magnifier
- Hold-Down tool to keep a part in place while soldering. These can range from various bent pieces of metal to modified dental massage picks. Some swear by them but most of the time I don’t use them.
- Containers to hold small parts – Coin tubes from the hobby shop works great at holding small loose parts and small pieces of parts on tape. Some people will build inside a cake pan or cookie sheet in case the parts try to escape. Commonly found in a part of the house rarely seen called the kitchen.
You can solder a fairly large variety of SMD components with this simple technique:
- Apply a small amount of solder onto one of the PCB pads. Since I’m left handed I usually use a left side pad.
- Get your part ready to put in place.
- Heat the end of the pad until the solder melts. Then with your tweezers slide and align the part into position.
- Solder down the opposite corner (or side for resistors).
- For IC’s and other multi-legged parts solder up the remaining legs each point or by flux & drag soldering.
Below is a video soldering 2 resistors onto an SMD Dummy Load kit. Sorry about the poor quality. The microscope resolution is low and it is a bit difficult to solder between the board and microscope without melting anything.
- Dave Jones at EEVblog has a much more in-depth tutorial on this technique, including drag soldering an IC – highly recommended viewing.
To remove a bad SMD resistor or capacitor I just place the iron sideways across both ends at the same time; the part usually comes right off, stuck to the iron. There are a couple of techniques for un-soldering devices with more than 2 legs but we will not go into details here. Just do a YouTube search on “SMD Desoldering”. Using a hot air gun and a little flux is the quickest and easiest, but “Solder Flooding” and “Flossing” techniques also work well if you don’t have a gun handy. Once the device is removed just clean up the pads with some flux and solder wick and you are ready to put the new part in.
If you plan on investing a little more into SMD soldering I would recommend the next set of tools:
- Air Gun or SMD rework station – Nowadays you can get entire rework stations which contain both a hot air gun, various nozzles, soldering iron, and tips for under $100 (even under $50). DO NOT USE a hardware store hot air gun. Those will put out enough heat to melt solder but will also blow your parts to the next country. A rework station has a variable adjustment to control the flow of air. Yes, even with these you can crank up the air enough to blow the part across the board.
- Solder paste – Solder paste in a syringe will help speed up a large build. Apply just a small dab on each pad then set your part on. For ICs a thin line across the row of pads works great. The surface tension will help align the chip in place. Any shorts can be quickly removed with a little flux and solder wick. You can also use paste with a soldering iron. My favorites are: ChipQuik SMD291SNL10 (PB-Free) or SMD291AX10 (PB) and Kester EP256 NC – No Clean. Don’t forget the temperatures are higher for lead free solder – check the spec sheets that come with it. Solder paste seems expensive at $25-$30 for 10CC’s but it goes a long long way.
- Flux paste – Once in a while I need paste for a bigger (or stubborn) part and use a Flux paste also in a syringe. A little bit can also help tin wire leads. – MG Chemicals 8341-10ML (no clean) $11
- USB Microscope – Kids USB microscopes can be found starting at $15, just plug in to your PC/laptop and you have instant magnification. Below are pictures using one of these cheap units. For more money you can get higher resolutions and even ones with an LCD screen built in. I don’t recommend soldering under one, it can be difficult to maneuver and of course there is the risk of accidentally melting the unit. Sometimes soldering under a ’scope is the only way to get the job done. The only jobs where I had to solder under the scope were attaching wires into the RTL-SDR Chinese receiver kit and working with a couple of parts smaller than an 0402.
- Vacuum pickup tool (not pictured) – Those cheap $1 Chinese hand pump pickup tools DO NOT WORK WELL. Most even require a modification to even work at all! While a professional vacuum pickup tool can be had for big bucks I found a $40 “Pick-It-Up” tool for art and jewelry beads works like a charm! It’s a commercial version of the “Fish Pump Pick and Place” modification.
Photo: my old magnifier – a cheap $15 kids USB microscope. It works great with a 20″ monitor but takes poor video.
Photo: the new magnifier – still under $50; it gives a higher resolution video picture but about the same performance for static images.
Disclaimer – The photos taken above were done AFTER a good bench cleaning. My bench usually looks like a post-disaster scene. | 1 | 4 |
Acrylic glass is a transparent thermoplastic that is lightweight and shatter-resistant, making it an attractive alternative to glass. Although forms of man-made glass date back to 3500 BC, acrylic glass and its versatile uses are a more recent discovery.
In 1907, Dr. Otto Röhm teamed up with Otto Haas to create the Röhm and Haas chemical company, which initially focused on creating goods for the leather and textile industry. Despite their initial focus, Dr. Röhm was determined to expand on his doctoral research in acrylic acid ester polymerisate, a colorless and transparent material, and how it could be used commercially. In 1928, the Röhm and Haas chemical company used their findings to create Luglas, which was a safety glass used for car windows.
Dr. Röhm wasn’t the only one focusing on safety glass – in the early 1930s, British chemists at Imperial Chemical Industries (ICI) discovered polymethyl methacrylate (PMMA), also known as acrylic glass. They trademarked their acrylic discovery as Perspex.
The Röhm and Haas researchers followed closely behind; they soon discovered that PMMA could be polymerized between two sheets of glass and separated as its own acrylic glass sheet. Röhm trademarked this as Plexiglass in 1933. Around this time, the United States-born E.I. du Pont de Nemours & Company (more commonly known as DuPont) also produced their version of acrylic glass under the name Lucite.
Tensions between nations and the resulting shortage of raw materials during World War II boosted the demand for acrylic glass. Both Allied and Axis forces used acrylic glass for windshields, aircraft windows, periscopes, protective canopies, and gun turrets. Service members who were wounded by broken PMMA fared much better than those who were cut with shattered glass, demonstrating that “safety glass” was indeed much safer than splinters of real glass.
As World War II drew to an end, the companies who made acrylics faced a new challenge: what could they make next? Commercial uses of acrylic glass began to appear in the late 1930s and early 1940s. The impact and shatter resistant qualities that made acrylic great for windshields and windows have now expanded to helmet visors, the exterior lenses on cars, police riot gear, aquariums, and even the “glass” around hockey rinks. Acrylics are also found in modern medicine, including hard contacts, cataract replacements, and implants. Your home is most likely filled with acrylic glass as well: LCD screens, shatterproof glassware, picture frames, trophies, decorations, toys, and furniture are all often made with acrylic glass.
Since its creation, acrylic glass has proven itself to be an affordable and durable choice for building goods that last. Naturally, designers of trophies and awards have gravitated towards acrylic glass to create durable, lightweight, and affordable awards for consumers.
For over 15 years, the recognition experts at Acrylic Warehouse have been a leading provider of team, association, personal and corporate acrylic awards. Contact Acrylic Warehouse today to learn more about their custom designs, laser engraving, and full-color digital printing services for your acrylic award needs. | 1 | 3 |
The big focus in modern computing is to make the computer act as little as possible like a computer and as much as possible like some skewed representation of a physical desk. Each new version of any given window manager is tweaked and modified to be “intuitive” and “easy to use.” But what does that mean? Essentially that they have moved things around into a, possibly, more logical menu system. All this does is change the incorrect metaphor that the user is confronted with. But it doesn’t change the fact that most of the GUI is cruft. A large portion of what users face when they use the computer is a waste of time.
Over two decades ago they had it partly right. The command line was the primary way to interact with the computer; this interface is powerful, but limited. The GUI is useful sometimes, but not always. That is where the principle of dialectic comes in. Hegel proposed that everything progresses through dialectic to reach a state of equilibrium that is higher, or better, than the original state. There are three steps to this process:
- Thesis: The initial concept
- Antithesis: A reaction to the initial concept, usually strongly polarized against the thesis
- Synthesis: The Thesis and Antithesis merge to reach a higher level of thought or existence.
By applying this to the UI concept we have:
- Command Line Interface: Powerful, Straightforward, and nearly Cruft-less. Designed by hackers for hackers. This UI was designed to be powerful and intuitive to computer people.
- Graphical User Interface: Colorful, Vague, Metaphorical. Designed for the common, usually clueless, user. This UI was designed to allow the common person to use a computer.
- Graphical Command Line Interface: Powerful, Straightforward, as Cruft-less as is practical. This is a synthesis of the GUI and CLI. Designed to be powerful and naturally intuitive; ready to pave the way for natural speech commands. The UI is designed for the future, that is, everyone: hackers as well as common people.
What might a fusion of the CLI and the GUI act and/or look like? How would the user interact with his/her computer? This is my vision:
The Command Line makes a comeback
We have been using the command line for a long time. First to interact with our computers and later to find information on the Internet. The command line is now an everyday tool that nobody could live without. So why has this marvel of the modern world been abandoned on the desktop? The command line, on our new dialectic desktop, will always be available; this is how the user will launch applications and find help and information about his computer and various other topics. This command line, however, won’t be bash, or any other kind of standard command line interpreter. This will be more of a pseudo-command line. I would imagine that most of the standard commands used in current command line interpreters won’t work (ie, exist) directly. However, there should be some kind of easy, and logical, command to switch over to a “real” command line. The most important thing to consider is language. The command line, as it currently exists on the desktop, uses a language syntax that makes sense and works, but it isn’t natural language. Google, the now-ubiquitous command line, unlike the Linux terminal, understands, or, at the least, makes use of natural language. A good example of this is Ubiquity, the Firefox extension from Mozilla Labs. What I propose is that something that is equal in capability to Ubiquity become a central interaction point on the computer desktop.
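To make the idea concrete, here is a minimal, purely illustrative sketch of how such a verb-first pseudo-command line might dispatch typed commands to actions. It is my own example, not code from Ubiquity or any existing desktop, and the command names are hypothetical:

    # Illustrative verb-first dispatcher for a "graphical command line".
    # Command names and handlers are hypothetical, not a real API.
    def open_app(args):
        print("launching application:", " ".join(args))

    def search_web(args):
        print("searching the web for:", " ".join(args))

    COMMANDS = {
        "open": open_app,      # e.g. "open firefox"
        "search": search_web,  # e.g. "search cheap flights"
    }

    def dispatch(line):
        words = line.strip().split()
        if not words:
            return
        verb, *rest = words
        handler = COMMANDS.get(verb.lower())
        if handler is None:
            print("no command matches", repr(verb))  # fall back to help or suggestions
        else:
            handler(rest)

    dispatch("search graphical command line interface")

A real implementation would add fuzzy matching, noun suggestions and live previews, which is where the graphical half of the idea comes in.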
Pretty Doesn’t Have to Mean Crufty
People like pretty desktops because people like anything that is pretty, simply because it is pretty. Mac OS does a great job of creating a beautiful user experience, usually because the philosophy is this: make it look pretty at all costs, even if that means making something else more complex. I would propose something more moderate: make it look pretty, unless it interferes with the user experience (as in usability). But how do you make a pretty UI without cruft? Easy: Enlightenment. Enlightenment e17 is a great window manager that looks great and, as far as I have seen, isn’t particularly crufty or resource heavy and it has a heavy focus on eye candy which means a pretty (and nicely animated) UI. I recently updated to a more recent version of e17 and it appears that the project has taken a huge tangent. It is no longer the Window Manager that I fell in love with. Looks like my next best bet is KDE, although it’ll take a bit of work to make it light enough for my liking.
The ZUI
My concept of the ZUI is a spin-off of the GUI that I rather enjoy. It is a sort of fusion of a compositing window manager and a dynamic tiling window manager. All of your windows are shown in a grid on the screen, grouped by application; when you mouse over a window, a label appears with the name of the window, the name of the application, tags, and possibly other information. When you click on a window it zooms out and expands to fill most of the screen. A hotkey will un-zoom the window, and another will un-zoom it so that you can zoom into another window as well.
Integration
What’s the point of all this if there is no integration? The lack thereof is a big problem for me when I use Enlightenment because there isn’t any integration. GNOME uses additional packages to accomplish this. And although I’m not a big fan of GNOME (or KDE for that matter), the idea, and to an extent the application, is good. There needs to be some kind of integration package and it should be light and unobtrusive (way lighter than all the GNOME daemons you have scurrying around when you run GNOME). This includes:
- Keyring Manager (daemon)
- Settings Manager (rc files should take care of this)
- Clipboard (daemon)
- Drag ‘n Drop
GTK and Qt
As I said before, I don’t like GNOME, but most GUI applications for Linux use GTK or Qt as their UI engine. And as much as I dislike saying it, GTK has to have some kind of inclusion. The solution? A light-weight daemon that converts Enlightenment theme files into GTK or Qt settings. The same goes for icons.
The Proposed Result | 1 | 2 |
By Collin Greene, Manager of Product Security
Billions of people use Facebook to connect with the people who matter to them. We have a responsibility to build secure services that help keep people safe.
At Facebook we take what’s called a “defense-in-depth” approach to security, meaning we layer a number of protections to make sure we prevent and address vulnerabilities in our code from multiple angles. It is a massive, ongoing effort that spans teams, departments and time zones. Security engineers and practices are embedded throughout the company to help ensure that data protections are built into our code and designs from the get-go, rather than added on at the end.
As it’s practically impossible to write flawless code, it’s not uncommon for software to have bugs. While most bugs don’t have serious consequences, some can create security vulnerabilities that can potentially be exploited to gain access to data or user accounts. Because of this, Facebook is committed to finding, fixing and preventing those bugs. We work to continually improve our defenses so we can counter emerging threats and stay ahead of our adversaries, which means that this type of work is never finished.
In the graphic below, you can see how our “defense-in-depth” approach relies on a combination of technology, expert security teams and the wider security community to help protect our platform. In the following article, we’ll dive into each of these five components — secure frameworks, automated testing tools, peer and design reviews, red team exercises and our bug bounty program — in greater depth.
Secure frameworks: Reduce programming errors
Every engineer who joins Facebook goes through a comprehensive 6-week bootcamp, where they learn the foundational security processes described throughout this post. This ensures that our entire engineering workforce has training in information security.
We also invest heavily in building frameworks that help engineers prevent and remove entire classes of bugs when writing code. Frameworks are development building blocks, such as customized programming languages and libraries of common bits of code, that provide engineers with built-in safeguards as they write code.
One example of a framework we built is called Hack, an update of the popular programming language PHP. Hack helps developers avoid introducing errors by requiring them to explicitly define and type out certain variables and parameters in their code. The addition of this information allows the development software to flag potential errors as the developer is coding. You can think of these requirements like inflatable bumpers in a bowling alley: They channel and guide the actions a programmer can make, in effect limiting the number of errors that may be introduced. As a bonus, the additional information Hack developers must add to their code can make later analysis much easier and helps us develop the analysis tools we’ll describe in the next section. (Coders can read our developer blog post on Hack, which is open source, for more technical details.)
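Hack code itself is not shown in this post, but the underlying idea (explicit type annotations that let tooling reject a bad call before the code ever runs) can be illustrated by analogy with Python type hints; this is my analogy, not Facebook code:

    # Analogy only: type hints play a role similar to Hack's annotations for PHP.
    # A checker such as mypy flags the commented-out call below before runtime.
    def format_greeting(name: str, visits: int) -> str:
        return "Hello {}, visit number {}".format(name, visits)

    print(format_greeting("Ada", 3))   # OK
    # format_greeting(3, "Ada")        # swapped arguments: rejected by the type checker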
We also created XHP, an open-source augmentation to PHP/Hack that helps engineers integrate PHP and HTML code more seamlessly, reducing the likelihood that errors will inadvertently be introduced into the code. This helps prevent a common type of problem known as a cross-site scripting vulnerability.
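As a rough illustration of the class of bug XHP is designed to prevent (the sketch below is generic escaping in Python, not XHP itself), compare dropping untrusted input straight into markup with escaping it first:

    import html

    user_input = '<script>alert("xss")</script>'

    # Unsafe: the input lands in the page as live markup and would execute.
    unsafe_page = "<p>Hello " + user_input + "</p>"

    # Safer: escaping turns markup characters into inert text; frameworks like
    # XHP aim to make this kind of safe output the default rather than an afterthought.
    safe_page = "<p>Hello " + html.escape(user_input) + "</p>"

    print(unsafe_page)
    print(safe_page)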
Hack and XHP are examples of secure frameworks that help ensure our engineers build technology that is more secure from the very beginning, rather than requiring they write additional code.
Automated testing tools: Analyze code non-stop, automatically and at scale
Since secure frameworks alone can’t anticipate and prevent all issues, we also invest in building analysis tools that can inspect code and find security errors at scale and as quickly as possible.
We are continually learning from security incidents that affect both Facebook and other technology companies, discovering new types of software vulnerabilities and then using this knowledge to prevent similar issues in the future. This is where our preventative and detection-based tools come in.
There are many different types of tools we deploy across Facebook, including static analysis tools, which review written source code, and dynamic analysis tools, which run the code to observe errors as the program is running. These tools look for potential issues so they can either be fixed or flagged for further analysis.
No matter how we find a security bug, we respond by triaging the issue and then doing a root-cause analysis, which allows us to learn from each bug to prevent it — or errors like it — from occurring in the future. This analysis then feeds back into the other phases of our defense-in-depth approach. For example, it may lead to us building new coding frameworks, new tools or new training.
As an example, we built a unique tool that we regularly update to detect new types of bugs. That tool then continually analyzes Facebook’s entire codebase — currently more than 100 million lines of Hack code — to identify these vulnerabilities. It would be incredibly time- and resource-intensive to continually monitor that much code, which changes thousands of times a day, with manual reviewers. This tool allows us to automatically audit our code for certain types of bugs on an ongoing basis.
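Facebook's internal analyzers are proprietary, but the general shape of a static check is easy to sketch. The toy example below (mine, not Facebook's) parses a snippet of source code and flags calls to eval, the kind of pattern a security rule might look for; real tools track far more, such as tainted data flowing into unsafe sinks:

    import ast

    # Toy source to scan; real analyzers walk entire repositories continuously.
    SOURCE = 'x = eval(user_supplied)\ny = len("safe")\n'

    class EvalFinder(ast.NodeVisitor):
        def visit_Call(self, node):
            # Flag direct calls to the built-in eval().
            if isinstance(node.func, ast.Name) and node.func.id == "eval":
                print("possible issue: eval() call on line", node.lineno)
            self.generic_visit(node)

    EvalFinder().visit(ast.parse(SOURCE))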
Peer reviews, design reviews, red team exercises: Use human experts to find flaws technology misses
All code changes go through mandatory peer review in addition to the automated analysis described in the previous section. Certain new features will also undergo design reviews, in which Facebook’s security experts provide feedback to help engineers spot any deficiencies that could lead to security problems. These internal reviews provide another layer of scrutiny to help ensure we are following industry best practices.
To imagine how our products could be misused or attacked, we also run threat modeling exercises, in which we try to anticipate how malicious actors could abuse our systems or misuse our platform. We fix the issues that are surfaced in these exercises and also use these learnings to help design new coding frameworks and analysis tools.
The next layer in our defense-in-depth approach is to regularly test our protections, to verify that our code and our defense mechanisms are behaving as intended and that our response teams are ready and able to detect and investigate attacks. To do that, we have a so-called “red team” of internal security experts who plan and execute staged “attacks” on our systems. These unannounced exercises help provide us a realistic picture of our readiness as we stress-test our systems and processes.
We then take the red team’s findings and, along with other partner teams across the company, map out how we would respond to a similar security incident in what’s known as a “table-top” exercise. This helps us improve the coordination between our teams working on security, privacy, public policy, communications, product and legal and helps us exercise the organizational muscles we would need during a real incident.
Bug bounty program: Engage the global security community
As we face many of the same security challenges as the rest of the technology industry, we have long invested in sharing our tools and knowledge so we can improve our community’s collective defense. In turn, outside information security experts provide their expertise to us through Facebook’s bug bounty program, one of the longest running in the industry.
Since 2011, we have encouraged security researchers to responsibly disclose potential issues so we can fix the bugs, publicly recognize their work and pay them a bounty. Our bug bounty program has been instrumental in helping us quickly detect new bugs, spot trends and engage the best security talent outside of Facebook to help us keep the platform safe. The lessons learned from each report feed back into our larger security effort, making us better and faster at finding, fixing and preventing bugs. To date, we have paid over $7.5 million in bounties to researchers from more than 100 countries. We continue to innovate in this area by expanding the bug bounty program to include issues that can lead to data abuse and compromises of third-party apps on the platform.
A unique mission to protect over 2 billion people on Facebook
Supporting our global community is a great responsibility that has driven continuous improvement and investment in our security technology and talent. Our focus on finding, fixing and preventing security issues has allowed us to scale our defenses as Facebook has grown to support billions of people connecting with one another. At times this has meant adapting our strategies to protect our expanding global community, rewriting our widely-used coding frameworks and open-sourcing unique security tools. And because we know that security work is never 100% finished, our security team will continue to innovate as the Facebook community grows.
Source: Designing Security for Billions | Facebook Newsroom
In computing, the kernel is a computer program that manages input/output requests from software, and translates them into data processing instructions for the central processing unit and other electronic components of a computer. The kernel is a fundamental part of a modern computer's operating system.
Because of its critical nature, the kernel code is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by application programs. The kernel performs its tasks, such as executing processes and handling interrupts, in kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation is made in order to prevent user data and kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crashing).
When a computer program (in this context called a process) makes requests of the kernel, the request is called a system call. Various kernel designs differ in how they manage system calls and resources. For example, a monolithic kernel executes all the operating system instructions in the same address space in order to improve the performance of the system. A microkernel runs most of the operating system's background processes in user space, to make the operating system more modular and, therefore, easier to maintain.
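As a small, simplified illustration of a system call from user space, the low-level functions in Python's os module map closely onto the system calls a process makes; the kernel performs the work and hands the result back:

    import os

    # Each call below is a thin wrapper around a system call handled by the kernel.
    pid = os.getpid()                                            # getpid()
    os.write(1, "running as process {}\n".format(pid).encode())  # write() to file descriptor 1 (stdout)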
- 1 Functions of the kernel
- 2 Kernel design decisions
- 3 Kernel-wide design approaches
- 4 History of kernel development
- 5 See also
- 6 Notes
- 7 References
- 8 Further reading
- 9 External links
Functions of the kernel
The kernel's primary function is to mediate access to the computer's resources, including:
- The central processing unit
- This central component of a computer system is responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
- Random-access memory
- Random-access memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
- Input/output (I/O) devices
- I/O devices include such peripherals as keyboards, mice, disk drives, printers, network adapters, and display devices. The kernel allocates requests from applications to perform I/O to an appropriate device and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).
A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of inter-process communication (IPC) to allow processes to access the facilities provided by each other.
Finally, a kernel must provide running programs with a method to make requests to access these facilities.
The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.
On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging.
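As a rough user-space illustration (a sketch, assuming a Linux/POSIX system), the program below maps a large anonymous region with mmap; the kernel allocates physical frames only when individual pages are first touched and the resulting page faults are serviced:

    /* Demand paging as seen from user space: a large anonymous mapping is
     * created instantly, but physical frames are only allocated when each
     * page is first touched and the kernel services the resulting fault. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t length = 256UL * 1024 * 1024;          /* 256 MiB of virtual space */
        unsigned char *region = mmap(NULL, length, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

        long page = sysconf(_SC_PAGESIZE);
        /* Touch only every 64th page; only those pages get physical memory. */
        for (size_t off = 0; off < length; off += (size_t)page * 64)
            region[off] = 1;                           /* first write triggers a page fault */

        printf("Mapped %zu bytes; touched pages were faulted in on demand.\n", length);
        munmap(region, length);
        return 0;
    }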
Virtual addressing also allows creation of virtual partitions of memory in two disjointed areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel. This fundamental partition of memory space has contributed much to current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. A device driver is a computer program that enables the operating system to interact with a hardware device. It provides the operating system with information on how to control and communicate with a certain piece of hardware. The driver is an important and vital part of the overall system. The design goal of a driver is abstraction; the function of the driver is to translate the OS-mandated function calls (programming calls) into device-specific calls. In theory, a device should work correctly with a suitable driver. Device drivers are used for such things as video cards, sound cards, printers, scanners, modems, and LAN cards. The common levels of abstraction of device drivers are:
1. On the hardware side:
- Interfacing directly.
- Using a high level interface (Video BIOS).
- Using a lower-level device driver (file drivers using disk drivers).
- Simulating work with hardware, while doing something entirely different.
2. On the software side:
- Allowing the operating system direct access to hardware resources.
- Implementing only primitives.
- Implementing an interface for non-driver software (Example: TWAIN).
- Implementing a language, sometimes high-level (Example: PostScript).
For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel.
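As a hedged sketch of that request path on Linux, the program below draws a short white line by writing pixels through the framebuffer device; the display driver behind /dev/fb0 does the actual plotting. It assumes /dev/fb0 exists and is writable, uses 32 bits per pixel, and that the line stride equals the virtual width, none of which is guaranteed on a given machine (a robust program would also check the fixed screen info and every return value):

    /* Sketch: a user program asks the kernel to put pixels on the screen by
     * writing to the Linux framebuffer device; the display driver behind
     * /dev/fb0 performs the actual plotting. Error handling is minimal. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vinfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);        /* ask the driver for the current mode */

        /* Simplified size/stride calculation: assumes line length == xres_virtual * 4. */
        size_t size = (size_t)vinfo.yres_virtual * vinfo.xres_virtual * (vinfo.bits_per_pixel / 8);
        uint32_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* Draw a short white horizontal line near the top-left corner. */
        for (unsigned x = 10; x < 110 && x < vinfo.xres; x++)
            fb[10 * vinfo.xres_virtual + x] = 0x00FFFFFF;

        munmap(fb, size);
        close(fd);
        return 0;
    }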
A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play). In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.
As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.
In computing, a system call is how a program requests a service from an operating system's kernel that it does not normally have permission to run. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user level process, e.g. I/O performed with a device present on the system, or any form of communication with other processes requires the use of system calls.
A system call is a mechanism that is used by the application program to request a service from the operating system. They use a machine-code instruction that causes the processor to change mode, for example switching from user mode to supervisor (kernel) mode. This is where the operating system performs actions like accessing hardware devices or the memory management unit. Generally the operating system provides a library that sits between the operating system and normal programs, usually a C library such as Glibc or the Windows API. The library handles the low-level details of passing information to the kernel and switching to supervisor mode. System calls include close, open, read, wait and write.
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
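For example, on Linux the same kernel service can be reached through the ordinary glibc wrapper or through glibc's generic syscall() entry point with the raw system-call number; the short sketch below shows both (assuming a Linux system):

    /* Two ways to reach the kernel's write service on Linux: via the glibc
     * wrapper, and via the generic syscall() entry point with the raw
     * system-call number. Both end up executing the same kernel code. */
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char *msg1 = "via the glibc wrapper\n";
        const char *msg2 = "via syscall(SYS_write, ...)\n";

        write(STDOUT_FILENO, msg1, strlen(msg1));                 /* library wrapper */
        syscall(SYS_write, STDOUT_FILENO, msg2, strlen(msg2));    /* raw system call */
        return 0;
    }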
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:
- Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.
- Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the processor. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.
- Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some operating systems for PCs make use of them when available.
- Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
Kernel design decisions
Issues of kernel support for protection
An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviors (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.
The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; according to the protection principles they satisfy (e.g. Denning); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more.
Many kernels provide implementation of "capabilities", i.e. objects that are provided to user code which allow limited access to an underlying object managed by the kernel. A common example occurs in file handling: a file is a representation of information stored on a permanent storage device. The kernel may be able to perform many different operations (e.g. read, write, delete or execute the file contents) but a user level application may only be permitted to perform some of these operations (e.g. it may only be allowed to read the file). A common implementation of this is for the kernel to provide an object to the application (typically called a "file handle") which the application may then invoke operations on, the validity of which the kernel checks at the time the operation is requested. Such a system may be extended to cover all objects that the kernel manages, and indeed to objects provided by other user applications.
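A minimal POSIX sketch of this checking behavior: a descriptor opened read-only permits reads, but the kernel rejects a write through the same handle at the moment the write is requested. The path /etc/hostname is only an example; any readable file works:

    /* The kernel validates each operation against the rights attached to the
     * handle it issued: a descriptor opened read-only rejects writes. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/etc/hostname", O_RDONLY);     /* any readable file will do */
        if (fd < 0) { perror("open"); return 1; }

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);        /* permitted by the handle */
        printf("read %zd bytes\n", n);

        if (write(fd, "x", 1) < 0)                    /* not permitted: fails with EBADF */
            printf("write rejected: %s\n", strerror(errno));

        close(fd);
        return 0;
    }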
An efficient and simple way to provide hardware support of capabilities is to delegate to the MMU the responsibility of checking access-rights for every memory access, a mechanism called capability-based addressing. Most commercial computer architectures lack such MMU support for capabilities.
An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel then checks whether the application's capability grants it permission to perform the requested action, and if it is permitted performs the access for it (either directly, or by delegating the request to another user-level process). The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly. Approaches where protection mechanisms are not firmware supported but are instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that does not have direct support), are possible, but there are performance implications. Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.
An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels.
One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.
The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level. In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support.
Hardware-based protection or language-based protection
Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory, and so on). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.
An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.
Advantages of this approach include:
- No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
- Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.
Disadvantages of this approach include:
- Longer application start up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
- Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.
Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation. However this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible. A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared memory and remote procedure calls.
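A small sketch of the lock/unlock idea, using a POSIX semaphore initialized to 1 (a binary semaphore) so that two threads can safely update a shared counter; build with -pthread:

    /* A binary semaphore (initial value 1) used as Dijkstra-style lock/unlock
     * so two threads can safely update a shared counter. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t lock;
    static long counter;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&lock);      /* P(): acquire */
            counter++;            /* critical section */
            sem_post(&lock);      /* V(): release */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&lock, 0, 1);    /* binary semaphore */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        sem_destroy(&lock);
        return 0;
    }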
I/O devices management
The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes.
Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction, or the system to crash. Depending on the complexity of the device, programming it can become surprisingly complex and may involve several different controllers. Because of this, providing a more abstract interface to manage the device is important. This interface is normally provided by a device driver or hardware abstraction layer. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. This can be done through the BIOS, or through one of the various system buses (such as PCI/PCIE, or USB). When an application requests an operation on a device (such as displaying a character), the kernel needs to send this request to the current active video driver. The video driver, in turn, needs to carry out this request. This is an example of inter-process communication (IPC).
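The simplest kernel-mediated IPC primitive on Unix-like systems is the pipe; in the illustrative sketch below, a parent process hands a request string to a child and the kernel buffers and delivers the data (this is only an analogy, not a real driver protocol):

    /* Minimal kernel-mediated IPC: a parent sends a request string to a child
     * through a pipe; the kernel buffers and delivers the data. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                     /* child: acts as a tiny "server" */
            char req[64] = {0};
            close(fds[1]);
            read(fds[0], req, sizeof req - 1);
            printf("child handling request: %s\n", req);
            return 0;
        }

        close(fds[0]);                         /* parent: acts as the "client" */
        const char *req = "draw character 'A'";
        write(fds[1], req, strlen(req));
        close(fds[1]);
        wait(NULL);
        return 0;
    }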
Kernel-wide design approaches
Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.
The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation". For instance, a mechanism may provide for user log-in attempts to call an authorization server to determine whether access should be granted; a policy may be for the authorization server to request a password and check it against an encrypted password stored in a database. Because the mechanism is generic, the policy could more easily be changed (e.g. by requiring the use of a security token) than if the mechanism and policy were integrated in the same module.
In a minimal microkernel just some very basic policies are included, and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.). A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.
Per Brinch Hansen presented arguments in favor of separation of mechanism and policy. The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems, a problem common in computer architecture. The monolithic design is induced by the "kernel mode"/"user mode" architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems; in fact, every module needing protection is therefore preferably included into the kernel. This link between monolithic design and "privileged mode" can be traced back to the key issue of mechanism-policy separation; in fact the "privileged mode" architectural approach conflates the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design (see Separation of protection and security).
While monolithic kernels execute all of their code in the same address space (kernel space) microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase. Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.
In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is "easier to implement a monolithic kernel" than microkernels. The main disadvantages of monolithic kernels are the dependencies between system components — a bug in a device driver might crash the entire system — and the fact that large kernels can become very difficult to maintain.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). This is the traditional design of UNIX systems. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel related task. Every part which is to be accessed by most programs which cannot be put in a library is in the kernel space: Device drivers, Scheduler, Memory handling, File systems, Network stacks. Many system calls are provided to applications, to allow them to access all those services. A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as or faster than one that was specifically designed for the hardware, although this holds more in a general sense. Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space. In the monolithic kernel, some advantages hinge on these points:
- Since there is less software involved it is faster.
- As it is one single piece of software it should be smaller both in source and compiled forms.
- Less code generally means fewer bugs which can translate to fewer security problems.
Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially calls are made within programs and a checked copy of the request is passed through the system call. Hence, not far to travel at all. The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.
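A minimal sketch of such a runtime-loadable Linux module is shown below; it only logs a message when inserted and removed. Building it requires the kernel headers and a small kbuild Makefile (obj-m += hello.o), and it is loaded and removed with insmod and rmmod:

    /* A minimal Linux loadable kernel module: it only logs a message when
     * inserted into and removed from the running kernel. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded into the running kernel\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example of extending a monolithic kernel at runtime");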
These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware (whereas microkernels, discussed below, provide only a small set of simple hardware abstractions and use applications called servers to provide more functionality). This particular approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency and memory management in several modules that run in supervisor mode. This design has several flaws and limitations:
- Coding in the kernel can be challenging, in part because one cannot use common libraries (like a full-featured libc), and because one needs to use a source-level debugger like gdb. Rebooting the computer is often required. This is not just a problem of convenience to the developers. When debugging is harder and difficulties mount, it becomes more likely that code will be "buggier".
- Bugs in one part of the kernel have strong side effects; since every function in the kernel has all the privileges, a bug in one function can corrupt data structure of another, totally unrelated part of the kernel, or of any running program.
- Kernels often become very large and difficult to maintain.
- Even if the modules servicing these operations are separate from the whole, the code integration is tight and difficult to do correctly.
- Since the modules run in the same address space, a bug can bring down the entire system.
- Monolithic kernels are not portable; therefore, they must be rewritten for each new architecture that the operating system is to be used on.
Microkernel (also abbreviated μK or uK) is the term describing an approach to Operating System design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space". A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls.
Only parts which really require being in a privileged mode are in kernel space: IPC (Inter-Process Communication), basic scheduler or scheduling primitives, basic memory handling, basic I/O primitives. Many critical parts are now running in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to traditional "monolithic" kernel design, whereby all system functionality was put in one static program running in a special "system" mode of the processor. In the microkernel, only the most fundamental of tasks are performed such as being able to access some (not necessarily all) of the hardware, manage memory and coordinate message passing between the processes. Some systems that use microkernels are QNX and the HURD. In the case of QNX and HURD, user sessions can be entire snapshots of the system itself, or "views" as they are referred to. The very essence of the microkernel architecture illustrates some of its advantages:
- Maintenance is generally easier.
- Patches can be tested in a separate instance, and then swapped in to take over a production instance.
- Rapid development time and new software can be tested without having to reboot the kernel.
- More persistence in general; if one instance goes haywire, it is often possible to substitute it with an operational mirror.
Most micro kernels use a message passing system of some sort to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel. As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls. The rationale was that it would bring modularity in the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and better performing. They are part of operating systems like AIX, BeOS, Hurd, Mach, Mac OS X, MINIX, QNX, etc. Although micro kernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency. These types of kernels normally provide only the minimal services such as defining memory address spaces, Inter-process communication (IPC) and the process management. The other functions such as running the hardware processes are not handled directly by micro kernels. Proponents of micro kernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error.
Other services provided by the kernel such as networking are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started. The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of micro kernels in comparison with monolithic kernels.
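As a rough user-space analogy of this server model (not an actual microkernel API), the sketch below forks a "server" process that receives requests over a POSIX message queue while the parent acts as the client; in a real microkernel the kernel's job is essentially limited to moving such messages between address spaces. The queue name /demo_server is an arbitrary placeholder, and older glibc versions require linking with -lrt:

    /* User-space analogy of the microkernel server model: a forked "server"
     * process receives requests over a POSIX message queue; the parent is
     * the client. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define QUEUE_NAME "/demo_server"     /* hypothetical queue name for this sketch */

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
        mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        if (fork() == 0) {                                 /* "server" process */
            char req[128] = {0};
            mq_receive(q, req, sizeof req, NULL);
            printf("server handled request: %s\n", req);
            return 0;
        }

        const char *req = "lookup: network service";       /* "client" process */
        mq_send(q, req, strlen(req) + 1, 0);
        wait(NULL);
        mq_close(q);
        mq_unlink(QUEUE_NAME);
        return 0;
    }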
Disadvantages of the microkernel approach exist, however. Some are:
- Larger running memory footprint
- More software for interfacing is required, so there is a potential for performance loss.
- Messaging bugs can be harder to fix due to the longer trip they have to take versus the one-off copy in a monolithic kernel.
- Process management in general can be very complicated.
- The disadvantages for micro kernels are extremely context based. As an example, they work well for small single purpose (and critical) systems because if not many processes need to run, then the complications of process management are effectively mitigated.
A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel. It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.
Monolithic kernels vs. microkernels
As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support. To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.
By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers. As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum. There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate.
Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system. Some developers also maintain that monolithic systems are extremely efficient if well-written. The monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.
The performance of microkernels constructed in the 1980s and early 1990s was poor. Studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency. The explanations of this data were left to "folklore", with the assumption that they were due to the increased frequency of switches from "kernel-mode" to "user-mode", to the increased frequency of inter-process communication and to the increased frequency of context switches.
In fact, as guessed in 1995, the reasons for the poor performance of microkernels might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts. Therefore it remained to be studied if the solution to build an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.
On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel has a significant performance drawback each time there's an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.
By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels, optimized for performance, such as L4 and K42 have addressed these problems.
Hybrid (or Modular) kernels
Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT 3.1, NT 3.5, NT 3.51, NT 4.0, 2000, XP, Vista, 7, 8, and 8.1. Apple Inc.'s own Mac OS X uses a hybrid kernel called XNU which is based upon code from Carnegie Mellon's Mach kernel and FreeBSD's monolithic kernel. They are similar to micro kernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure micro kernels can provide high performance. These types of kernels are extensions of micro kernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. Hybrid kernels are micro kernels that have some "non-essential" code in kernel-space in order for the code to run more quickly than it would were it to be in user-space. Hybrid kernels are a compromise between the monolithic and microkernel designs. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.
Many traditionally monolithic kernels are now at least adding (if not actively exploiting) the module capability. The most well-known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it that are built into the core kernel binary or binaries that load into memory on demand. It is important to note that a code-tainted module has the potential to destabilize a running kernel. Many people become confused on this point when discussing micro kernels. It is possible to write a driver for a microkernel in a completely separate memory space and test it before "going" live. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, therefore, opening the doorway to possible pollution. A few advantages of the modular (or hybrid) kernel are:
- Faster development time for drivers that can operate from within modules. No reboot required for testing (provided the kernel is not destabilized).
- On demand capability versus spending time recompiling a whole kernel for things like new drivers or subsystems.
- Faster integration of third party technology (related to development but pertinent unto itself nonetheless).
Modules, generally, communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system) so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords. Essentially, it becomes two system calls, and often the safety checks that only have to be done once in the monolithic kernel now may be done twice. Some of the disadvantages of the modular approach are:
- With more interfaces to pass through, the possibility of increased bugs exists (which implies more security holes).
- Maintaining modules can be confusing for some administrators when dealing with problems like symbol differences.
A nanokernel delegates virtually all services — including even the most basic ones like interrupt controllers or the timer — to device drivers to make the kernel memory requirement even smaller than a traditional microkernel.
Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, providing no hardware abstractions on top of which to develop applications. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.
Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, providing application developers with the functionalities of a conventional operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API, for example one for high level UI development and one for real-time control.
History of kernel development
Early operating system kernels
Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels.
In 1969 the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus "upon which operating systems for different purposes could be built in an orderly manner", what would be called the microkernel approach.
Time-sharing operating systems
In the decade preceding Unix, computers had grown enormously in power — to the point where computer operators were looking for new ways to get people to use the spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine.
The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965. Another ongoing issue was properly handling computing resources: users spent most of their time staring at the screen and thinking instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.
The Commodore Amiga was released in 1985, and was among the first — and certainly most successful — home computers to feature a hybrid architecture. The AmigaOS kernel's executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, and user-mode applications can ask the operating system to execute their code in kernel mode.
Unix popularized the model of treating nearly every device and system resource as a file. For instance, printers were represented as a "file" at a known location — when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level — that is, both devices and files would be instances of some lower level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain.
In the Unix model, the operating system consists of two parts: first, the huge collection of utility programs that drive most operations; second, the kernel that runs the programs. Under Unix, from a programming standpoint, the distinction between the two is fairly thin; the kernel is a program, running in supervisor mode, that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space.
Over the years the computing model changed, and Unix's treatment of everything as a file or byte stream no longer was as universally applicable as it was before. Although a terminal could be treated as a file or a byte stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. It is also because the modularity of the Unix kernel is extensively scalable. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 13 million lines.
Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in its many distributions as well as the Berkeley software distribution variant kernels such as FreeBSD, DragonflyBSD, OpenBSD, NetBSD, and Mac OS X. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them.
Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, created by combining the 4.3BSD kernel and the Mach kernel.
Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, culminating with release of the Windows 9x series (upgrading the system's capabilities to 32-bit addressing and pre-emptive multitasking) through the mid-1990s and ending with the release of Windows Me in 2000. Microsoft also developed Windows NT, an operating system intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993, and has continued into the 2010s with Windows 8 and Windows Server 2012.
The release of Windows XP in October 2001 brought the NT kernel version of Windows to general users, replacing Windows 9x with a completely different operating system. The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Managers, with a client/server layered subsystem model.
Development of microkernels
Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow. Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.
- Wulf 74 pp.337–345
- cf. Daemon (computing)
- Roch 2004
- Silberschatz 1991
- Denning 1976
- Swift 2005, p.29 quote: "isolation, resource control, decision verification (checking), and error recovery."
- Schroeder 72
- Linden 76
- Stephane Eranian and David Mosberger, Virtual Memory in the IA-64 Linux Kernel, Prentice Hall PTR, 2002
- Silberschatz & Galvin, Operating System Concepts, 4th ed, pp445 & 446
- Hoch, Charles; J. C. Browne (University of Texas, Austin) (July 1980). "An implementation of capabilities on the PDP-11/45" (PDF). ACM SIGOPS Operating Systems Review 14 (3): 22–32.
- A Language-Based Approach to Security, Schneider F., Morrissett G. (Cornell University) and Harper R. (Carnegie Mellon University)
- P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998. .
- J. Lepreau et al. The Persistent Relevance of the Local Operating System to Global Applications. Proceedings of the 7th ACM SIGOPS European Workshop.
- Information Security: An Integrated Collection of Essays, IEEE Comp. 1995.
- J. Anderson, Computer Security Technology Planning Study, Air Force Elect. Systems Div., ESD-TR-73-51, October 1972.
- * Jerry H. Saltzer, Mike D. Schroeder (September 1975). "The protection of information in computer systems". Proceedings of the IEEE 63 (9): 1278–1308.
- Jonathan S. Shapiro; Jonathan M. Smith; David J. Farber (1999). "EROS: a fast capability system". Proceedings of the seventeenth ACM symposium on Operating systems principles 33 (5): 170–185.
- Dijkstra, E. W. Cooperating Sequential Processes. Math. Dep., Technological U., Eindhoven, Sept. 1965.
- Brinch Hansen 70 pp.238–241
- "SHARER, a time sharing system for the CDC 6600". Retrieved 2007-01-07.
- "Dynamic Supervisors – their design and construction". Retrieved 2007-01-07.
- Baiardi 1988
- Levin 75
- Denning 1980
- Jürgen Nehmer The Immortality of Operating Systems, or: Is Research in Operating Systems still Justified? Lecture Notes In Computer Science; Vol. 563. Proceedings of the International Workshop on Operating Systems of the 90s and Beyond. pp. 77–83 (1991) ISBN 3-540-54987-0 quote: "The past 25 years have shown that research on operating system architecture had a minor effect on existing main stream systems."
- Levy 84, p.1 quote: "Although the complexity of computer applications increases yearly, the underlying hardware architecture for applications has remained unchanged for decades."
- Levy 84, p.1 quote: "Conventional architectures support a single privileged mode of operation. This structure leads to monolithic design; any module needing protection must be part of the single operating system kernel. If, instead, any module could execute within a protected domain, systems could be built as a collection of independent modules extensible by any user."
- Open Sources: Voices from the Open Source Revolution
- Virtual addressing is most commonly achieved through a built-in memory management unit.
- Recordings of the debate between Torvalds and Tanenbaum can be found at dina.dk, groups.google.com, oreilly.com and Andrew Tanenbaum's website
- quote: "The tightly coupled nature of a monolithic kernel allows it to make very efficient use of the underlying hardware [...] Microkernels, on the other hand, run a lot more of the core processes in userland. [...] Unfortunately, these benefits come at the cost of the microkernel having to pass a lot of information in and out of the kernel space through a process known as a context switch. Context switches introduce considerable overhead and therefore result in a performance penalty."
- Liedtke 95
- Härtig 97
- Hansen 73, section 7.3 p.233 "interactions between different levels of protection require transmission of messages by value"
- The L4 microkernel family – Overview
- KeyKOS Nanokernel Architecture
- Ball: Embedded Microprocessor Designs, p. 129
- Hansen 2001 (os), pp.17–18
- BSTJ version of C.ACM Unix paper
- Introduction and Overview of the Multics System, by F. J. Corbató and V. A. Vissotsky.
- The UNIX System — The Single Unix Specification
- The highest privilege level has various names throughout different architectures, such as supervisor mode, kernel mode, CPL0, DPL0, Ring 0, etc. See Ring (computer security) for more information.
- Unix’s Revenge by Horace Dediu
- Linux Kernel 2.6: It's Worth More!, by David A. Wheeler, October 12, 2004
- This community mostly gathers at Bona Fide OS Development, The Mega-Tokyo Message Board and other operating system enthusiast web sites.
- XNU: The Kernel
- Windows History: Windows Desktop Products History
- The Fiasco microkernel – Overview
- L4Ka – The L4 microkernel family and friends
- QNX Realtime Operating System Overview
- Roch, Benjamin (2004). "Monolithic kernel vs. Microkernel" (PDF). Retrieved 2006-10-12.
- Ball, Stuart R. (2002) . Embedded Microprocessor Systems: Real World Designs (first ed.). Elsevier Science.
- Deitel, Harvey M. (1984) . An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673.
- ACM SIGOPS Operating Systems Review, v.31 n.5, p. 66–77, Dec. 1997
- Houdek, M. E., Soltis, F. G., and Hoffman, R. L. 1981. IBM System/38 support for capability-based addressing. In Proceedings of the 8th ACM International Symposium on Computer Architecture. ACM/IEEE, pp. 341–348.
- Intel Corporation (2002) The IA-32 Architecture Software Developer’s Manual, Volume 1: Basic Architecture
- Levin, R.; E. Cohen, W. Corwin, F. Pollack,
- Levy, Henry M. (1984). Capability-based computer systems. Maynard, Mass: Digital Press.
- Liedtke, Jochen. On µ-Kernel Construction, Proc. 15th ACM Symposium on Operating System Principles (SOSP), December 1995
- Linden, Theodore A. (December 1976). "Operating System Structures to Support Security and Reliable Software".
- Lorin, Harold (1981). Operating systems.
- Shaw, Alan C. (1974). The logical design of Operating systems. Prentice-Hall. p. 304.
- Baiardi, F.; A. Tomasi, M. Vanneschi (1988). Architettura dei Sistemi di Elaborazione, volume 1 (in Italian). Franco Angeli.
- Swift, Michael M.; Brian N. Bershad; Henry M. Levy. Improving the reliability of commodity operating systems.
- "Improving the reliability of commodity operating systems". Doi.acm.org.
- "ACM Transactions on Computer Systems (TOCS), v.23 n.1, p. 77–110, February 2005".
- Andrew Tanenbaum, Operating Systems – Design and Implementation (Third edition);
- Andrew Tanenbaum, Modern Operating Systems (Second edition);
- Daniel P. Bovet, Marco Cesati, The Linux Kernel;
- Morgan Koffman (ISBN 1-55860-428-6);
- B.S. Chalk, Computer Organisation and Architecture, Macmillan P.(ISBN 0-333-64551-0).
- Detailed comparison between most popular operating system kernels
Below you find the Sparton Rugged Electronics Knowledge Base / FAQ which contains answers to common questions as well as detailed explanations on technical terms and technology.
Human Machine Interface software gives machine operators a way to interact with and manage a system. This interaction is through a graphical user interface (GUI).
Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management, but uses other peripheral devices such as programmable logic controllers (PLCs) and discrete PID controllers to interface to the process plant or machinery.
The operator interfaces which enable monitoring and the issuing of process commands, such as controller set point changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to the field sensors and actuators.
Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands of control loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below:
- Industrial processes include manufacturing, Process control, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
- Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electric power transmission and distribution, and wind farms.
- Facility processes, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation, and air conditioning systems (HVAC), access, and energy consumption.
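The division of labor between the supervisory layer and the PLCs can be sketched as a simple polling loop. In the illustrative C fragment below, read_plc_register, write_plc_register and update_hmi are hypothetical stand-ins for whatever PLC or fieldbus API (for example, a Modbus library) a real installation would use:

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-ins for a real PLC / fieldbus API; here they just
     * return canned values and print, so the sketch compiles and runs. */
    static double read_plc_register(int reg) { (void)reg; return 87.5; }
    static void   write_plc_register(int reg, double v) { printf("set point: reg %d <- %.1f\n", reg, v); }
    static void   update_hmi(const char *tag, double v) { printf("HMI %s = %.1f\n", tag, v); }

    int main(void) {
        const double high_alarm = 85.0;                /* example alarm limit, deg C */
        for (int cycle = 0; cycle < 5; cycle++) {      /* supervisory polling loop */
            double temp = read_plc_register(40001);    /* poll a process value */
            update_hmi("TANK_TEMP", temp);
            if (temp > high_alarm)                     /* supervisory decision... */
                write_plc_register(40010, 60.0);       /* ...issue a set point change */
            sleep(1);                                  /* real-time control stays in the PLC */
        }
        return 0;
    }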
What are the key advantages of using an SBC?
Conventional desktop computers generally have most of their electronics on a single large “motherboard” or “main board” that mounts to the bottom of the computer’s chassis. While this is the least expensive packaging method, it leaves a lot to be desired when used in industrial applications. Replacing a motherboard necessitates complete disassembly, removing all cards and cables from the system. Downtime ranges from 30 minutes to several hours. This is unacceptable for many mission critical applications where downtime can cost thousands of dollars per hour. Since motherboard technology changes literally on a monthly basis, it is sometimes impossible to find an exact replacement. Using another motherboard often causes software problems due to BIOS changes, changing device drivers, and different timing and interface issues. These can take days or weeks to completely solve.
Another key area of concern is the availability of expansion slots. Many of today’s motherboard type systems do not offer as many ISA/PCI expansion slots as they did in the past. In fact, the ISA slot has virtually disappeared from the latest boards available today.
Enter – the SBC
The Single Board Computer or SBC contains all the functionality of a conventional motherboard only designed onto a single plug-in type card, which looks similar to a standard ISA/ PCI card. This SBC plugs directly into what is referred to as a “Passive Backplane”. The Backplane is simply a combination of ISA/PCI expansion slots into which the SBC and other cards are inserted. There are many different configurations of Backplanes available ranging from a couple of slots up to 20 or more.
Utilizing SBC and Backplane technology has several key and distinct advantages.
- Available ISA/PCI expansion slots for add-in cards
- Lower mean time to repair (MTTR). Systems can be upgraded or repaired in seconds
- Better configuration control and longer product life cycle and technical support
- Designed for 24 hour, 7 day operation
For more information on Industrial Computer standards, check out PICMG at: http://www.picmg.org
The viewing angle is the angle at which the image quality of an LCD degrades and becomes unacceptable for the intended application. Viewing angles are usually quoted in horizontal and vertical degrees, with their importance dependent on the specific application. As the observer physically moves to the sides of the LCD, the images will degrade in three ways. First, the luminance drops. Second, the contrast ratio usually drops off at large angles. Third, the colors may shift. Most modern LCDs have acceptable viewing angles even for viewing from the sides.
For LCDs used in outdoor applications, defining the viewing angle based on CR alone is not adequate. Under very bright ambient light conditions the display is hardly visible when the screen luminance drops below 200 nits. Therefore, the viewing angles are defined based on both the CR and the luminance.
The dot pitch specification for a display monitor tells you how sharp the displayed image can be. The dot pitch is measured in millimeters (mm) and a smaller number means a sharper image. In desktop monitors, common dot pitches are .31mm, .28mm, .27mm, .26mm, and .25mm. Personal computer users will usually want a .28mm or finer. Some large monitors for presentation use may have a larger dot pitch (.48mm, for example). Think of the dot specified by the dot pitch as the smallest physical visual component on the display. A pixel is the smallest programmable visual element and maps to the dot if the display is set to its highest resolution. When set to lower resolutions, a pixel encompasses multiple dots.
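A quick back-of-the-envelope calculation shows how dots and pixels relate when the display is run below its highest resolution; the screen width used here is an assumption chosen only for the example.

```python
# Illustrative numbers only: a 0.28mm dot pitch on an assumed 340mm-wide screen.
dot_pitch_mm = 0.28
screen_width_mm = 340.0

native_dots_across = screen_width_mm / dot_pitch_mm   # ~1214 physical dots
lower_resolution_px = 800                              # e.g. running at 800x600

dots_per_pixel = native_dots_across / lower_resolution_px
print(f"{native_dots_across:.0f} dots across the screen")
print(f"about {dots_per_pixel:.2f} dots per pixel at 800 pixels wide")
```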
Yes: any high-brightness backlight system will consume a significant amount of power, thereby increasing the LCD temperature. The brighter the backlight, the greater the thermal issue. In addition, if the LCD is used under direct sunlight, the sunlight exposure itself will generate additional heat. Temperature issues are handled through proper thermal management design incorporating passive and active cooling methods. This is extremely important in maintaining overall reliability and long-term operation.
First, the display screen on a sunlight readable/outdoor readable LCD should be bright enough so that the display is visible in direct or strong sunlight. Second, the display contrast ratio must be maintained at 5 to 1 or higher.
Although a display with less than 500 nits screen brightness and a mere 2 to 1 contrast ratio can be read in outdoor environments, the quality of the display will be dreadfully poor and not get the desired information across effectively. A true sunlight readable display is normally considered to be an LCD with at least 1000 nits of screen brightness and a contrast ratio greater than 5 to 1. In outdoor environments under the shade, such a display can provide an excellent image quality.
Applications will vary depending on the location of the LCD and how much ambient light is available that could cause the display to become washed out or unreadable. As a rule of thumb: notebook and desktop LCDs, which are generally used in office lighting conditions, are in the 200-250 nit range. For indoor use with uncontrolled or indirect sunlight, a display of 500-900 nits is recommended. If the application is outdoors or in direct sunlight, then at least 1000 nits and up should be considered.
Contrast ratio (CR) is the ratio of luminance between the brightest “white” and the darkest “black” that can be produced on a display. CR is another factor in perceived picture quality. If a picture has high CR, you will consider it to be sharper and crisper than a picture with lower CR. For example, a typical newspaper picture has a CR of about 5 to 7, whereas a high quality magazine picture has a CR that is greater than 15. Therefore, the magazine picture will look better even if the resolution is the same as that of the newspaper picture.
A typical AMLCD exhibits a CR of approximately 300 to 700 when measured in a dark room. The CR on the same unit measured under ambient illumination is drastically lowered due to surface reflection (glare). For example, a standard 200 nit LCD measured in a dark room has a 300 CR, but will have less than a 2.0 CR under intense direct sunlight. This is due to the fact that surface glare increases the luminance by over 200 nits both on the “white” and the “black” that are produced on the display screen. The result is that the luminance of the white is slightly over 400 nits, and the luminance of the black is over 200 nits. The CR then becomes less than 2, and the picture quality is drastically reduced and not acceptable.
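The arithmetic in that example is easy to reproduce; the sketch below uses the same figures quoted above (a 200 nit panel, a 300:1 dark-room CR, and roughly 200 nits of added glare).

```python
def contrast_ratio(white_nits: float, black_nits: float) -> float:
    return white_nits / black_nits

white = 200.0            # panel luminance of "white"
black = white / 300.0    # black level implied by a 300:1 dark-room CR
glare = 200.0            # luminance added by surface reflection in sunlight

print(f"dark room CR ≈ {contrast_ratio(white, black):.0f}:1")                   # ≈ 300:1
print(f"sunlight  CR ≈ {contrast_ratio(white + glare, black + glare):.1f}:1")   # < 2:1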
Luminance is the scientific term for “Photopic Brightness” which specifies the visual brightness of an object. In layman’s terms, it is commonly referred to as “brightness”. Luminance is specified in candelas per square meter (Cd/m2) or nits. In the US, the British unit Foot-lamberts (fL) is also frequently used. To convert from fL to nits, multiply the number in fL by 3.426 (i.e. 1 fL = 3.426 nits).
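The foot-lambert/nit conversion stated above is a single multiplication; a small helper makes the direction of the conversion explicit.

```python
FL_TO_NITS = 3.426   # 1 foot-lambert = 3.426 cd/m² (nits)

def foot_lamberts_to_nits(fl: float) -> float:
    return fl * FL_TO_NITS

def nits_to_foot_lamberts(nits: float) -> float:
    return nits / FL_TO_NITS

print(foot_lamberts_to_nits(100))             # 342.6 nits
print(round(nits_to_foot_lamberts(250), 1))   # ≈ 73.0 fL
```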
Luminance is an influential factor of perceived picture quality in an LCD. The importance of luminance is enhanced by the fact that humans will react more positively to a brightly illuminated screen. In indoor environments, a standard active-matrix LCD with a screen luminance of around 250 nits will look good. In the same scenario an LCD with a luminance of 1,000 nits or more will look utterly captivating.
A nit is a measurement of light in candelas per square meter (cd/m2).
For an LCD monitor it is the brightness out of the front panel of the display. A nit is a good basic reference when comparing brightness from monitor to monitor. Most desktop or notebook LCDs have a brightness of 200 to 250 nits. These standard LCDs are not readable in direct or even indirect sunlight as they become washed out.
Note – End of XP Support
April 8, 2014 marked the official end of Windows XP support from Microsoft. As of that date, security updates are no longer issued for the aging but still popular operating system.
Moving forward, Stealth recommends the Windows 7, 64-bit operating system as a base for all of its new and existing systems. If you still require Windows XP for your specific application, it can still be provided on new orders upon request by purchasing Windows 7 and exercising your Windows 7 Downgrade rights.
Stealth can provide Windows XP for systems powered by up to 3rd Generation Intel Core processors. We also highly recommend the use of up-to-date anti-virus protection to help protect against security threats, since Microsoft no longer supports the operating system.
How Does Downgrade Rights for Windows 7 Work?
Windows 7 is the latest offering in desktop operating systems from Microsoft. In some cases end-users still need to operate with previous versions of O/S such as Windows XP or Vista. As part of Microsoft’s End User License Agreement (EULA), Stealth Computer is an authorized Microsoft OEM and is permitted to offer its customers the ability to run previous versions of XP or Vista Operating Systems.
How does this work? – Downgrade to Microsoft XP as an example.
Windows XP is no longer available for purchase from Microsoft. What you are purchasing is a Windows 7 Pro license with product key, installation media along with a service that allows us (with your permission) to exercise your downgrade rights. You will need to purchase a separate license for each computer ordered.
When choosing the downgrade option Stealth will preinstall Windows XP Pro, 32-Bit, SP3 on your new machine. You will not be provided with Windows XP Pro media.
What happens if I need to re-load Windows XP Pro?
As per the terms of Microsoft’s downgrade rights, we are unable to provide you with the installation media or a product key for Windows XP Pro. This is supposed to be provided by you, the end user. Since we have media and a product key on file, we utilize these as a courtesy to do the initial installation.
If you need to reinstall Windows XP Pro there are two choices, detailed as follows:
- Return the PC directly to Stealth Computer (the authorized OEM) for a re-installation of the O/S. Note: nominal charges may apply.
- If you wish to reload Windows XP on your own, you must obtain a copy of the genuine Microsoft Windows XP Pro installation media and a previously activated Windows XP product key. You may use any Windows XP Pro key from a previously activated system. You must use this media and product key to reinstall Windows XP Pro.
After reinstalling please do not activate via the internet. You MUST activate your installation of Windows XP Pro via Microsoft’s telephone activation line. When speaking to a representative tell them that you are exercising your downgrade rights from your Windows 7 Pro license. They will then provide you with an installation ID for you to activate your Win XP Pro re-installation.
You can find additional information on Microsoft’s downgrade rights by clicking HERE.
If you have further questions please contact Stealth.
What is it and why I should care
by Ray Franklin
RoHS stands for Restriction of use of Hazardous Substances (ref. 1). The acronym is pronounced Rose, Roz, Ross, or is spelled out, depending on the speaker’s preference. RoHS is a directive issued January 27, 2003 by the European Commission (EC). It directs European Union (EU) member nations to enact local legislation by August 13, 2004, which will implement the RoHS directive as regulatory requirements before the activation date of July 1, 2006. And that means what?
The directive is a legally binding document, for the EU member nations. It establishes regulations at the EU level, which flow to each member nation. Each government must pass its own laws, patterned after the RoHS directive, and do so by a deadline.
RoHS is part of a growing wave of environmental regulations or green initiatives. In addition to RoHS for Europe, there are similar regulations being written in China and other Asian nations. Japanese companies have created a non-governmental group to standardize green procurement requirements. In the US, individual states are passing laws restricting some substances and requiring recycling of certain classes of products. A common theme is the so-called “take-back” feature that requires manufacturers to accept old products from consumers and reuse or recycle the items.
The RoHS directive requires that six hazardous substances be removed from all electrical and electronic equipment. The substances may be present incidentally at certain levels as long as they are declared. The six substances are Cadmium (Cd), hexavalent Chromium (CR VI), Lead (Pb), Mercury (Hg), polybrominated biphenyls (PBB) and polybrominated diphenyl ethers (PBDE). The maximum concentration of Cd is 0.01% by weight of homogeneous material, and 0.1% by weight for the other five substances. “Homogeneous material” means a material that cannot be mechanically disjointed into different materials (ref. 2). A substance is “present incidentally” if it was not intentionally added.
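As a rough illustration of how those limits are applied per homogeneous material, the sketch below checks a set of measured concentrations against the thresholds quoted above. It deliberately ignores the exemptions discussed next, and the sample data is invented for the example.

```python
ROHS_LIMITS_PCT = {
    "Cd": 0.01,     # cadmium
    "Cr VI": 0.1,   # hexavalent chromium
    "Pb": 0.1,      # lead
    "Hg": 0.1,      # mercury
    "PBB": 0.1,     # polybrominated biphenyls
    "PBDE": 0.1,    # polybrominated diphenyl ethers
}

def over_limit(concentrations_pct: dict) -> list:
    """Return the substances whose measured weight-% exceeds the RoHS limit."""
    return [s for s, pct in concentrations_pct.items()
            if pct > ROHS_LIMITS_PCT.get(s, float("inf"))]

sample_material = {"Pb": 0.15, "Cd": 0.005, "Hg": 0.0}   # hypothetical alloy
print(over_limit(sample_material))   # ['Pb'] -> this material is non-compliant
```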
Some exemptions are declared in the RoHS annex, such as Hg in fluorescent lamps, Pb in certain alloys, and Pb in solder for servers (until 2010). All the details are in the RoHS directive text, with discussion and explanation in the dti RoHS guidance notes.
It all sounds pretty straightforward. There are, however, some kinks. For one, the EU member nations have not followed through and produced legislation. The August 13 deadline is long past and only a few countries have passed legislation (ref 3). This delay is creating uncertainty among corporations striving for compliance. Compounding the confusion, local legislation could tighten restrictions and possibly remove exemptions. Any company counting on a particular exemption could run into trouble in countries that nullified the exemption. Furthermore, the EC failed to meet its own October 2004 deadline of finalizing the directive.
Though the regulatory climate is still unsettled, a few certainties have popped up. Compliance is not optional. If you don’t face regulation directly, your customers probably will, and they will push the requirements down to you. The safest strategy is to comply with the most stringent requirements – aim for RoHS with no exemptions. You are not alone. Every other business is in the same boat, and industry groups are working hard to formulate standards for compliance. Use the links on the RoHSwell home page to research RoHS in greater depth. Form your own plan, and get compliant.
1. From directive 2002/95/EC, of the European Parliament.
2. From dti RoHS Regulations, Government Guidance Notes, Consultation Draft, July 2004.
3. The Perchards Report, summary of the transposition of the WEEE and RoHS directives into law by EU member states, January 2005.
Intrinsic safety is a protection concept deployed in sensitive and potentially explosive atmospheres. Intrinsic safety relies on the equipment being designed so that it is unable to release sufficient energy, by either thermal or electrical means, to cause an ignition of a flammable gas.
Intrinsic safety is achieved by limiting the amount of power available to the electrical equipment in the hazardous area to a level below that which will ignite the gases.
In order to have a fire or explosion, fuel, oxygen and a source of ignition must be present. An intrinsically safe system assumes the fuel and oxygen are present in the atmosphere, but the system is designed so the electrical energy or thermal energy of a particular instrument loop can never be great enough to cause ignition.
Traditionally, protection from explosion in hazardous environments has been accomplished by either using explosion proof equipment which can contain an explosion inside an enclosure, or pressurization and/or purging which isolates the explosive gas from the electrical equipment.
Intrinsically safe equipment cannot replace these methods in all applications, but where possible can provide significant cost savings in installation and maintenance of the equipment in a hazardous area. The basic design of an intrinsic safety barrier uses Zener diodes to limit voltage, resistors to limit current, and a fuse.
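For a feel for the numbers, the sketch below works out the worst-case current and load power behind such a barrier. The 28V/300Ω values are assumptions picked only for illustration and do not represent a certified barrier design.

```python
zener_clamp_v = 28.0            # assumed maximum voltage passed by the Zener diodes
series_resistance_ohm = 300.0   # assumed current-limiting resistor

short_circuit_current_a = zener_clamp_v / series_resistance_ohm
matched_load_power_w = zener_clamp_v ** 2 / (4 * series_resistance_ohm)

print(f"worst-case current into the hazardous area ≈ {short_circuit_current_a * 1000:.0f} mA")
print(f"maximum power delivered to a matched load ≈ {matched_load_power_w:.2f} W")
```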
Most applications require a signal to be sent out of or into the hazardous area. The equipment mounted in the hazardous area must first be approved for use in an intrinsically safe system. The barriers designed to protect the system must be mounted outside of the hazardous area in an area designated as Non-hazardous or Safe in which the hazard is not and will not be present.
Intrinsic safety equipment must have been tested and approved by an independent agency to assure its safety. The customer should specify the type of approval required for their particular application. The most common Agencies involved are as follows:
USA – FM, UL
Canada – CSA
Great Britain – BASEEFA
France – LCIE
Germany – PTB
Italy – CESI
Belgium – INEX
As in 1U, 2U, 4U … etc
A “Rack Unit” or “U” is an Electronic Industries Alliance or more commonly “EIA” standard measuring unit for rack mount type equipment. This term has become more prevalent in recent times due to the proliferation of rack mount products showing up in a wide range of commercial, industrial and military markets. A “Rack Unit” is equal to 1.75″ in height. To calculate the internal useable space of a rack enclosure you would simply multiply the total amount of Rack Units by 1.75″. For example, a 44U rack enclosure would have 77″ of internal usable space (44 x 1.75).
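The rack-unit arithmetic above reduces to a single multiplication:

```python
RACK_UNIT_INCHES = 1.75   # one EIA "U"

def usable_height_inches(rack_units: int) -> float:
    return rack_units * RACK_UNIT_INCHES

print(usable_height_inches(44))   # 77.0 inches, as in the 44U example
print(usable_height_inches(2))    # 3.5 inches for a 2U chassis
```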
Stealth manufactures computers and peripherals that are designed to fit into a standard EIA size rack enclosures. Stealth’s Rackmount PCs, LCD Monitors and Keyboards are available in many sizes and configurations. The slim space saving series rack products are available in 1U (1.75″) and 2U (3.5″) in overall height. Since rack space is at a premium these slim products represent significant cost savings to the end user. Standard rackmount products are available in 1U, 2U, 4U, 5U and 6U configurations.
The four most common touch screen technologies include resistive, infrared, capacitive and SAW (surface acoustic wave). Each technology offers its own unique advantages and disadvantages as described below. Resistive and capacitive touch screen technologies are the most popular for industrial applications. They are both very reliable. If the application requires that operators be able to wear gloves when using the touch screen, then we generally recommend the resistive technology (capacitive does not support gloved operation). Otherwise the capacitive technology (better optical characteristics) is more often recommended.
A resistive touch screen typically uses a display overlay consisting of layers, each with a conductive coating on the inner surface. The conductive inner layers are separated by special separator dots, evenly distributed across the active area. Finger pressure causes internal electrical contact at the point of touch, supplying the electronic interface (touch screen controller) with vertical and horizontal analog voltages for digitization. For CRT applications, resistive touch screens are generally spherical (curved) to match the CRT and minimize parallax. The nature of the material used for curved (spherical) applications limits light throughput such that two options are offered: Polished (clear) or antiglare. The polished choice offers clarity but includes some glare. The antiglare choice will minimize glare, but will also slightly diffuse the light throughput (image). Either choice will demonstrate either more glare (polished) or more light diffusion (antiglare) than associated with typical non-touch screen displays. Despite the tradeoffs, the resistive touch screen technology remains a popular choice, often because it can be operated while wearing gloves (unlike capacitive technology). Note that resistive touch screen materials used for flat panel touch screens are different and demonstrate much better optical clarity (even with antiglare). The resistive technology is far more common for flat panel applications.
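A minimal sketch of the digitization step for a resistive panel is shown below. It assumes a controller that reports raw 10-bit ADC readings for the X and Y voltage dividers; the calibration constants are hypothetical values that would normally come from a corner-touch calibration routine.

```python
# Raw ADC extremes captured during a (hypothetical) calibration routine.
X_MIN, X_MAX = 75, 950
Y_MIN, Y_MAX = 110, 920

def raw_to_screen(raw_x: int, raw_y: int, width_px: int, height_px: int):
    """Map raw analog readings to pixel coordinates by linear interpolation."""
    x = (raw_x - X_MIN) / (X_MAX - X_MIN) * (width_px - 1)
    y = (raw_y - Y_MIN) / (Y_MAX - Y_MIN) * (height_px - 1)
    # Clamp so slight overshoot at the edges still lands on screen.
    x = min(max(x, 0), width_px - 1)
    y = min(max(y, 0), height_px - 1)
    return int(x), int(y)

print(raw_to_screen(512, 512, 1024, 768))   # -> roughly the middle of the screen
```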
A capacitive touch screen includes an overlay made of glass with a coating of capacitive (charge storing) material deposited electrically over its surface. Oscillator circuits located at corners of the glass overlay will each measure the capacitance of a person touching the overlay. Each oscillator will vary in frequency according to where a person touches the overlay. A touch screen controller measures the frequency changes to determine the X and Y coordinates of the touch. Because the capacitive coating is even harder than the glass it is applied to, it is very resistant to scratches from sharp objects. It can even resist damage from sparks. A capacitive touch screen cannot be activated while wearing most types of gloves (non-conductive).
An infrared touch screen surrounds the face of the display with a bezel of light-emitting diodes (LEDs) and diametrically opposing phototransistor detectors. The controller circuitry directs a sequence of pulses to the LEDs, scanning the screen with an invisible lattice of infrared light beams just in front of the surface. The controller circuitry then detects input at the location where the light beams become obstructed by any solid object. The infrared frame housing the transmitters can impose design constraints on operator interface products.
SAW (Surface Acoustic Wave)
A SAW touch screen uses a solid glass display overlay for the touch sensor. Two surface acoustic (sound) waves, inaudible to the human ear, are transmitted across the surface of the glass sensor, one for vertical detection and one for horizontal detection. Each wave is spread across the screen by bouncing off reflector arrays along the edges of the overlay. Two receivers detect the waves, one for each axis. Since the velocity of the acoustic wave through glass is known and the size of the overlay is fixed, the arrival time of the waves at the respective receivers is known. When the user touches the glass surface, the water content of the user’s finger absorbs some of the energy of the acoustic wave, weakening it. The controller circuitry measures the time at which the received amplitude dips to determine the X and Y coordinates of the touch location. In addition to the X and Y coordinates, SAW technology can also provide Z axis (depth) information. The harder the user presses against the screen, the more energy the finger will absorb, and the greater will be the dip in signal strength. The signal strength is then measured by the controller to provide the Z axis information. Today, few software applications are designed to make use of this feature.
Touch Screen Controllers
Most manufacturers offer two controller configurations: ISA bus and serial RS-232. ISA bus controllers are contained on a standard printed circuit plug-in board and can only be used on ISA or EISA PCs. Depending on the manufacturer they may be interrupt driven, polled, or configured as another serial port. Serial controllers are contained on a small printed circuit board and are usually mounted in the video monitor cabinet. They are then cabled to a standard RS-232 serial port on the host computer.
Most touch screen manufacturers offer some level of software support which include mouse emulators, software drivers, screen generators and development tools for Windows, OS/2, Macintosh and DOS. Most of the supervisory control and data acquisition (SCADA) software packages now available contain support for one or more touch technologies.
Cooling fans draw in dirt and dust from their operating environments, potentially causing catastrophic failures and/or costly interruptions and downtime. Stealth’s fanless PCs are engineered to dissipate heat by utilizing the rugged aluminum chassis, which acts as a heat sink; additionally, some models use the latest heat pipe technology to cool and provide noise-free operation.
How will fanless computers benefit you?
Save time/money and reduce the worry. No planned maintenance to clean cooling fans or discovering a clogged fan has shut down your application or process.
No noise! When used with SSD the fanless PC is completely noise free making it ideal for control rooms, audio recording, board rooms, deep thinking and other areas that need to be low in ambient noise.
Numerous designs and configurations to meet your application needs. Deployable in limited space or in rack packaging offering multi I/O, Sealed/Waterproof, DC power, Wireless and more.
Stealth’s fanless computer products are an excellent fit for many applications including: Embedded Control, Audio/Video Recording, Digital Signs, Interactive Kiosks, Thin-Clients, Human/Machine Interface and your next applications.
Built to meet your applications
- No noise and low power consumption
- Solid State Drives (SSD)
- Small Rugged Chassis Designs
- Mobile, DC Power Input, Multi I/O
- Sealed/Waterproof, Multi-LAN Networking Ports and 16×9 1080p models available
- Windows 7 & 10 Compatible, *other O/S Options Available
A watchdog timer is a piece of hardware, often built into a Single Board Computer (SBC) or embedded PC that can cause a reset when it determines that the system has either hung up or is no longer executing the correct sequence of code.
A properly designed watchdog mechanism should, at the very least, catch events that hang the system. In electrically noisy environments, a power glitch may corrupt the program counter, stack pointer, or data in RAM. The software could crash, even if the code is completely bug free. This is precisely the sort of transient failure that watchdogs will catch.
Bugs in software will cause systems to hang; it is therefore better to fix the root cause rather than relying on a watchdog timer. In complex embedded systems it may not be possible to guarantee that there are no bugs, but by using a watchdog you can prevent those bugs from hanging the system indefinitely.
A good watchdog system requires careful consideration of both software and hardware. Decide early in the design process how you intend to use it and what should happen when a failure is detected, and you will reap the benefits of a more robust system.
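The usual software pattern is to “kick” (reload) the timer only at a point the code reaches when it is healthy, so a hang stops the kicks and the hardware forces a reset. The Watchdog class below is a hypothetical stand-in for whatever register or driver interface the board actually exposes (on Linux this is often the /dev/watchdog device).

```python
import time

class Watchdog:
    """Placeholder for the board's real watchdog interface."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
    def enable(self) -> None:
        pass   # would start the hardware countdown
    def kick(self) -> None:
        pass   # would reload the countdown; if it ever expires, the board resets

def do_control_work() -> None:
    pass       # placeholder for the task sequence being supervised

def main_loop(iterations: int = 10) -> None:
    wdt = Watchdog(timeout_s=5.0)
    wdt.enable()
    for _ in range(iterations):
        do_control_work()   # if this hangs, kick() is never reached...
        wdt.kick()          # ...the timer expires, and the system resets
        time.sleep(0.5)

main_loop()
```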
Best known for his artistic innovations in theatre, radio, and film as an actor, director, writer, and producer, Orson Welles is remembered above all for the so-called “panic broadcast” of “The War of the Worlds” in 1938 and for “Citizen Kane” (1941), which consistently ranks as one of the all-time greatest films in cinematic history.
George Orson Welles was born May 6, 1915, in Kenosha, Wisconsin, son of Richard Head Welles and Beatrice Ives Welles. He was named after his paternal great-grandfather, the influential Kenosha attorney Orson S. Head, and his brother George Head.
Despite his family’s affluence, Welles encountered hardship in childhood. His parents separated and moved to Chicago in 1919. His father, who made a fortune as the inventor of a popular bicycle lamp, became an alcoholic and stopped working. Welles’s mother, a pianist, played during lectures by Dudley Crafts Watson at the Art Institute of Chicago to support her son and herself; the oldest Welles boy, “Dickie,” was institutionalized at an early age because he had learning difficulties. Beatrice died of hepatitis in a Chicago hospital on May 10, 1924, aged 42, just after Welles’s ninth birthday.
“During the three years that Orson lived with his father, some observers wondered who took care of whom”, wrote biographer Frank Brady.
“In some ways, he was never really a young boy, you know,” said Roger Hill, who became Welles’s teacher and lifelong friend.
Welles briefly attended public school in Madison, Wisconsin, enrolled in the fourth grade. In September of 1926, he entered the Todd Seminary for Boys, an expensive independent school in Woodstock, Illinois, that his older brother, Richard Ives Welles, had attended ten years before but was expelled for misbehavior. At Todd School Welles came under the influence of Roger Hill. Hill provided Welles with an ad hoc educational environment that proved invaluable to his creative experience, allowing Welles to concentrate on subjects that interested him. Welles performed and staged theatrical experiments and productions there.
On December 28, 1930, when Welles was 15, his father died of heart and kidney failure at the age of 58, alone in a hotel in Chicago. Shortly before this, Welles had told him that he would stop seeing him, believing it would prompt his father to refrain from drinking. As a result, Orson felt guilty because he believed his father had drunk himself to death because of him.
Following graduation from Todd in May 1931, Welles was awarded a scholarship to Harvard University, while his mentor Roger Hill advocated he attend Cornell College in Iowa. Rather than enrolling, he chose travel. He studied for a few weeks at the Art Institute of Chicago with Boris Anisfeld, who encouraged him to pursue painting.
Welles would occasionally return to Woodstock, the place he eventually named when he was asked in a 1960 interview, “Where is home?” Welles replied, “I suppose it’s Woodstock, Illinois, if it’s anywhere. I went to school there for four years. If I try to think of a home, it’s that.”
RKO Radio Pictures president George Schaefer offered Welles what is widely considered the greatest contract offered to a filmmaker, untried or otherwise. Engaging him to write, produce, direct and perform in two motion pictures, the contract subordinated the studio’s financial interests to Welles’s creative control, and granted Welles the right of final cut. The agreement was bitterly resented by Hollywood studios.
RKO rejected Welles’s first two movie proposals, but agreed on the third offer — Citizen Kane. Welles co-wrote, produced and directed the film, and performed the lead role. Welles conceived the project with screenwriter Herman J. Mankiewicz. Mankiewicz based the original outline on the life of William Randolph Hearst, whom he knew socially and came to hate after being ejected from Hearst’s circle.
After agreeing on the storyline and character, Welles supplied Mankiewicz with 300 pages of notes and put him under contract to write the first draft screenplay under the supervision of John Houseman. Welles wrote his own draft, then drastically condensed and rearranged both versions and added scenes of his own. Welles was accused of unfairly minimizing Mankiewicz’s contribution to the script, but Welles countered the attacks by saying, “At the end, naturally, I was the one making the picture, after all — who had to make the decisions. I used what I wanted of Mank’s and, rightly or wrongly, kept what I liked of my own.”
Welles’s project attracted some of Hollywood’s best technicians, including cinematographer Gregg Toland. For the cast, Welles primarily used actors from his Mercury Theatre. Filming Citizen Kane took ten weeks.
Hearst’s newspapers barred all reference to Citizen Kane and applied enormous pressure on Hollywood to force RKO to scuttle the film. RKO chief George Schaefer received a cash offer from MGM’s Louis B. Mayer and other major studio executives if he would destroy the negative and existing prints of Citizen Kane.
While waiting for Citizen Kane to be released, Welles directed the original Broadway production of Native Son, a drama written by Paul Green and Richard Wright based on Wright’s novel. Starring Canada Lee, the show ran March 24 – June 28, 1941, at the St. James Theatre. The Mercury Production was the last time Welles and Houseman worked together.
Citizen Kane was given a limited release and received overwhelming critical praise. It was voted the best picture of 1941 by the National Board of Review and the New York Film Critics Circle. The film racked up nine Academy Award nominations but won only for Best Original Screenplay, shared by Mankiewicz and Welles. Variety reported that block voting by screen extras deprived Citizen Kane of Oscars for Best Picture and Best Actor (Welles), and similar prejudices likely accounted for the film winning no technical awards.
The delay in the film’s release and uneven distribution contributed to mediocre box office earnings. After it ran its course theatrically, Citizen Kane was retired to the vault in 1942. In postwar France, however, the film’s reputation grew after it was seen for the first time in 1946. In the United States, it began to be re-evaluated after it began to appear on television in 1956. That year it was also re-released theatrically, and film critic Andrew Sarris described it as “the great American film” and “the work that influenced the cinema more profoundly than any American film since Birth of a Nation.” Citizen Kane has long been hailed as one of the greatest films ever made.
Welles’s second film for RKO was The Magnificent Ambersons, adapted by Welles from the Pulitzer Prize-winning novel by Booth Tarkington. Prior to production, Welles’s contract was renegotiated, revoking his right to control the final cut.
Throughout the shooting of the film Welles was also producing a weekly half-hour radio series, The Orson Welles Show. Many of the Ambersons cast participated in the CBS Radio series.
At RKO’s request, Welles worked on an adaptation of Eric Ambler’s spy thriller, Journey into Fear, co-written with Joseph Cotten. In addition to acting in the film, Welles was the producer. Direction was officially credited to Norman Foster. Welles later said that they were in such a rush that the director of each scene was determined by whoever was closest to the camera.
In July 1941, Welles conceived It’s All True as an omnibus film mixing documentary and docufiction in a project that emphasized the dignity of labor and celebrated the cultural and ethnic diversity of North America. It was to have been his third film for RKO, following Citizen Kane (1941) and The Magnificent Ambersons (1942). Duke Ellington was put under contract to score a segment with the working title, “The Story of Jazz”, drawn from Louis Armstrong’s 1936 autobiography, Swing That Music. Armstrong was cast to play himself in the brief dramatization of the history of jazz performance, from its roots to its place in American culture in the 1940s. “The Story of Jazz” was to go into production in December 1941.
In 1942 RKO Pictures underwent major changes under new management, taking control of Ambersons and editing the film into what the studio considered a commercial format. Welles’s attempts to protect his version failed.
In the fall of 1945 Welles began work on The Stranger (1946), a film noir drama about a war crimes investigator who tracks a high-ranking Nazi fugitive to an idyllic New England town. Edward G. Robinson, Loretta Young and Welles star.
Producer Sam Spiegel initially planned to hire director John Huston, who had rewritten Anthony Veiller’s screenplay. When Huston entered the military, Welles was given the chance to direct and prove himself able to make a film on schedule and under budget — something he was so eager to do that he accepted a disadvantageous contract. One of its concessions was that he would defer to the studio in any creative dispute.
The Stranger was Welles’s first job as a film director in four years. He was told that if the film was successful he could sign a four-picture deal with International Pictures, making films of his own choosing. Welles was given some degree of creative control, and he endeavored to personalize the film and develop a nightmarish tone. He worked on the general rewrite of the script and wrote scenes at the beginning of the picture that were shot but subsequently cut by producers. He filmed in long takes that largely thwarted the control given to editor Ernest J. Nims under the terms of the contract.
The Stranger was the first commercial film to use documentary footage from the Nazi concentration camps. Welles had seen the footage as a correspondent and discussion moderator at a UN Conference.
Completed a day ahead of schedule and under budget, The Stranger was the only film made by Welles to have been a bona fide box office success upon release. Its cost was $1.034 million; 15 months later it had grossed $3.216 million. Within weeks of the completion of the film, International Pictures backed out of its promised four-picture deal with Welles. No reason was given, but the impression was left that The Stranger would not make money.
The film that Welles was obliged to make in exchange for Harry Cohn’s help in financing the stage production Around the World was The Lady from Shanghai, filmed in 1947 for Columbia Pictures. Intended as a modest thriller, the budget skyrocketed after Cohn suggested that Welles’s then-estranged second wife Rita Hayworth co-star.
Cohn disliked Welles’s rough-cut, particularly the confusing plot and lack of close-ups. He ordered extensive editing and re-shoots. After heavy editing by the studio, approximately one hour of Welles’s first cut was removed, including much of a climactic confrontation scene in an amusement park funhouse. While expressing displeasure at the cuts, Welles was appalled particularly with the musical score. The film was considered a disaster in America at the time of release, though the closing shootout in a hall of mirrors has since become a touchstone of film noir. Not long after release, Welles and Hayworth finalized their divorce.
Prior to 1948, Welles convinced Republic Pictures to let him direct a low-budget version of Macbeth, which featured highly stylized sets and costumes, and a cast of actors lip-syncing to a pre-recorded soundtrack, one of many innovative cost-cutting techniques Welles deployed in an attempt to make an epic film from B-movie resources. The script, adapted by Welles, is a violent reworking of Shakespeare’s original, freely cutting and pasting lines into new contexts via a collage technique and recasting Macbeth as a clash of pagan and proto-Christian ideologies. Of all Welles’s post-Kane Hollywood productions, Macbeth is stylistically closest to Citizen Kane.
Republic initially trumpeted the film as an important work but decided it did not care for the Scottish accents and held up general release for almost a year after early negative press reaction, including Life’s comment that Welles’s film “doth foully slaughter Shakespeare.” Welles left for Europe, while co-producer and lifelong supporter Richard Wilson reworked the soundtrack. Welles returned and cut 20 minutes from the film at Republic’s request and recorded narration to cover some gaps. The film was decried as a disaster, although it did have influential fans in Europe.
During this time, Welles was channeling his money from acting jobs into a self-financed film version of Shakespeare’s play Othello. From 1949 to 1951, Welles worked on Othello, filming on location in Europe and Morocco. The film featured Welles’s friends, Micheál Mac Liammóir as Iago and Hilton Edwards as Desdemona’s father Brabantio. Suzanne Cloutier starred as Desdemona and Campbell Playhouse alumnus Robert Coote appeared as Iago’s associate Roderigo.
Filming was suspended several times as Welles ran out of funds and left for acting jobs. The American release prints had a flawed soundtrack, suffering from a drop-out of sound at every quiet moment (Welles’s daughter, Beatrice Welles-Smith, would eventually restore Othello in 1992 for a wide re-release). The restoration included reconstructing Angelo Francesco Lavagnino’s original musical score, which was originally inaudible, and adding ambient stereo sound effects, which were not in the original film. The restoration had a successful theatrical run in America.
Welles briefly returned to America to make his first appearance on television, starring in the Omnibus presentation of King Lear, broadcast live on CBS October 18, 1953. Directed by Peter Brook, the production costarred Natasha Parry, Beatrice Straight and Arnold Moss.
Welles’s next turn as director was the film Mr. Arkadin (1955), which was produced by his political mentor from the 1940s, Louis Dolivet. It was filmed in France, Germany, Spain and Italy on a very limited budget. Based loosely on several episodes of the Harry Lime radio show, it stars Welles as a billionaire who hires a man to delve into the secrets of his past. The film stars Robert Arden, who had worked on the Harry Lime series; Welles’s third wife, Paola Mori, whose voice was dubbed by actress Billie Whitelaw; and guest stars Akim Tamiroff, Michael Redgrave, Katina Paxinou and Mischa Auer. Frustrated by his slow progress in the editing room, producer Dolivet removed Welles from the project and finished the film without him. Eventually five different versions of the film would be released, two in Spanish and three in English. The version that Dolivet completed was retitled Confidential Report. In 2005 Stefan Droessler of the Munich Film Museum oversaw a reconstruction of the surviving film elements.
In 1955, Welles also directed two television series for the BBC. The first was Orson Welles’ Sketch Book, a series of six 15-minute shows featuring Welles drawing in a sketchbook to illustrate his reminiscences for the camera, and the second was Around the World with Orson Welles, a series of six travelogues set in different locations around Europe (such as Venice, the Basque Country between France and Spain, and England). Welles served as host and interviewer, his commentary including documentary facts and his own personal observations (a technique he would continue to explore in later works).
In 1956, Welles completed Portrait of Gina. The film cans would remain in a lost-and-found locker at the hotel for several decades, where they were discovered after Welles’s death.
In 1956, Welles returned to Hollywood, where he began filming a projected pilot for Desilu, owned by Lucille Ball and her husband Desi Arnaz, who had recently purchased the former RKO studios. The film was The Fountain of Youth, based on a story by John Collier. Originally deemed not viable as a pilot, it was not aired until 1958 — and won the Peabody Award for excellence.
Welles’s next feature film role was in Man in the Shadow for Universal Pictures in 1957, starring Jeff Chandler.
Welles stayed on at Universal to direct (and co-star with) Charlton Heston in the 1958 film Touch of Evil, based on Whit Masterson’s novel Badge of Evil. Originally only hired as an actor, Welles was promoted to director by Universal Studios at the insistence of Heston. The film reunited many actors and technicians with whom Welles had worked in Hollywood in the 1940s, including cameraman Russell Metty (The Stranger), makeup artist Maurice Seiderman (Citizen Kane), and actors Joseph Cotten, Marlene Dietrich and Akim Tamiroff. Filming proceeded smoothly, with Welles finishing on schedule and on budget, and the studio bosses praising the daily rushes. Nevertheless, after the end of production, the studio re-edited the film, re-shot scenes, and shot new exposition scenes to clarify the plot. Welles wrote a 58-page memo outlining suggestions and objections, stating that the film was no longer his version — it was the studio’s, but as such, he was still prepared to help with it.
In 1978, a longer preview version of the film was discovered and released.
As Universal reworked Touch of Evil, Welles began filming his adaptation of Miguel de Cervantes’ novel Don Quixote in Mexico, starring Mischa Auer as Quixote and Akim Tamiroff as Sancho Panza.
He continued shooting Don Quixote in Spain and Italy, but replaced Mischa Auer with Francisco Reiguera, and resumed acting jobs. In Italy in 1959, Welles directed his own scenes as King Saul in Richard Pottier’s film David and Goliath. In Hong Kong he co-starred with Curt Jürgens in Lewis Gilbert’s film Ferry to Hong Kong. In 1960, in Paris he co-starred in Richard Fleischer’s film Crack in the Mirror. In Yugoslavia he starred in Richard Thorpe’s film The Tartars and Veljko Bulajić’s Battle of Neretva.
Throughout the 1960s, filming continued on Quixote on and off as Welles evolved the concept, tone and ending several times. Although he had a complete version of the film shot and edited at least once, he continued toying with the editing well into the 1980s; he never completed a version of the film he was fully satisfied with, and he would junk existing footage and shoot new footage. (In one case, he had a complete cut ready in which Quixote and Sancho Panza end up going to the moon, but he felt the ending was rendered obsolete by the 1969 moon landings, and burned 10 reels of this version.) As the process went on, Welles gradually voiced all of the characters himself and provided narration. In 1992, the director Jesús Franco constructed a film out of the portions of Quixote left behind by Welles, though some of the film stock had decayed badly. While the Welles footage was greeted with interest, the post-production by Franco was met with harsh criticism.
In 1961, Welles directed In the Land of Don Quixote, a series of eight half-hour episodes for the Italian television network RAI. Similar to the Around the World with Orson Welles series, they presented travelogues of Spain and included Welles’s wife, Paola, and their daughter, Beatrice. Though Welles was fluent in Italian, the network was not interested in him providing Italian narration because of his accent, and the series sat unreleased until 1964, by which time the network had added Italian narration of its own. Ultimately, versions of the episodes were released with the original musical score Welles had approved, but without the narration.
In 1962, Welles directed his adaptation of The Trial, based on the novel by Franz Kafka and produced by Michael and Alexander Salkind. The cast included Anthony Perkins as Josef K, Jeanne Moreau, Romy Schneider, Paola Mori and Akim Tamiroff. While filming exteriors in Zagreb, Welles was informed that the Salkinds had run out of money, meaning that there could be no set construction. No stranger to shooting on found locations, Welles soon filmed the interiors in the Gare d’Orsay, at that time an abandoned railway station in Paris. Welles thought the location possessed a “Jules Verne modernism” and a melancholy sense of “waiting”, both suitable for Kafka. To remain in the spirit of Kafka, Welles set up the cutting room with the film editor, Frederick Muller (credited as Fritz Muller), in the old, unused, cold, depressing station master’s office. The film failed at the box office. Peter Bogdanovich would later observe that Welles found the film riotously funny. Welles also told a BBC interviewer that it was his best film. While filming The Trial, Welles met Oja Kodar, who later became his mistress and collaborator for the last 20 years of his life.
In 1966, Welles directed a film for French television, an adaptation of The Immortal Story, by Karen Blixen. Released in 1968, it stars Jeanne Moreau, Roger Coggio and Norman Eshley. The film had a successful run in French theaters. At this time Welles met Oja Kodar again, and gave her a letter he had written to her and had been keeping for four years; they would not be parted again. They immediately began a collaboration both personal and professional. The first of these was an adaptation of Blixen’s The Heroine, meant to be a companion piece to The Immortal Story and starring Kodar. Unfortunately, funding disappeared after one day’s shooting. After completing this film, he appeared in a brief cameo as Cardinal Wolsey in Fred Zinnemann’s adaptation of A Man for All Seasons—a role for which he won considerable acclaim.
In 1967, Welles began directing The Deep, based on the novel Dead Calm by Charles Williams and filmed off the shore of Yugoslavia. The cast included Jeanne Moreau, Laurence Harvey and Kodar. The project, personally financed by Welles and Kodar, could not obtain the funds needed for completion, and it was abandoned a few years later after the death of Harvey. The surviving footage was eventually edited and released by the Filmmuseum München. In 1968 Welles began filming a TV special for CBS under the title Orson’s Bag, combining travelogue, comedy skits and a condensation of Shakespeare’s play The Merchant of Venice with Welles as Shylock. In 1969 Welles again called on the film editor Frederick Muller to work with him on re-editing the material, and they set up cutting rooms at the Safa Palatino Studios in Rome. Funding for the show sent by CBS to Welles in Switzerland was seized by the IRS. Without funding, the show was not completed. The surviving film portions were eventually released by the Filmmuseum München.
Welles returned to Hollywood, where he continued to self-finance his film and television projects. While offers to act, narrate and host continued, Welles also found himself in great demand on television talk shows. He made frequent appearances for Dick Cavett, Johnny Carson, Dean Martin and Merv Griffin.
Welles’s primary focus during his final years was The Other Side of the Wind, an unfinished project that was filmed intermittently between 1970 and 1976. Written by Welles, it is the story of an aging film director (John Huston) looking for funds to complete his final film. The cast includes Peter Bogdanovich, Susan Strasberg, Norman Foster, Edmond O’Brien, Cameron Mitchell and Dennis Hopper. Financed by Iranian backers, ownership of the film fell into a legal quagmire after the Shah of Iran was deposed. While there have been several reports of all the legal disputes concerning ownership of the film being settled, enough disputes still exist to prevent its release.
In 1971, Welles directed a short adaptation of Moby-Dick, a one-man performance on a bare stage, reminiscent of his 1955 stage production Moby Dick—Rehearsed. Never completed, it was eventually released by the Filmmuseum München. He also appeared in Ten Days’ Wonder, co-starring with Anthony Perkins and directed by Claude Chabrol, based on a detective novel by Ellery Queen. That same year, the Academy of Motion Picture Arts and Sciences gave him an honorary award “For superlative artistry and versatility in the creation of motion pictures”. Welles pretended to be out of town and sent John Huston to claim the award, thanking the Academy on film. Huston criticized the Academy for awarding Welles, even while they refused to give Welles any work.
In 1973, Welles completed F for Fake, a personal essay film about art forger Elmyr de Hory and the biographer Clifford Irving. Based on an existing documentary by François Reichenbach, it included new material with Oja Kodar, Joseph Cotten, Paul Stewart and William Alland. An excerpt of Welles’s 1930s War of the Worlds broadcast was recreated for this film; however, none of the dialogue heard in the film actually matches what was originally broadcast.
In 1975, Welles narrated the documentary Bugs Bunny: Superstar, focusing on Warner Bros. cartoons from the 1940s. Also in 1975, the American Film Institute presented Welles with its third Lifetime Achievement Award (the first two going to director John Ford and actor James Cagney). At the ceremony, Welles screened two scenes from the nearly finished The Other Side of the Wind.
In 1979, Welles completed his documentary Filming Othello, which featured Michael MacLiammoir and Hilton Edwards. Made for West German television, it was also released in theaters. That same year, Welles completed his self-produced pilot for The Orson Welles Show television series, featuring interviews with Burt Reynolds, Jim Henson and Frank Oz and guest-starring The Muppets and Angie Dickinson. Unable to find network interest, the pilot was never broadcast. Also in 1979, Welles appeared in the biopic The Secret of Nikola Tesla, and a cameo in The Muppet Movie as Lew Lord.
Beginning in the late 1970s, Welles became a fixture in a series of famous television commercial advertisements. For two years he was on-camera spokesman for the Paul Masson Vineyards, and sales grew by one third during the time Welles intoned what became a popular catchphrase: “We will sell no wine before its time.” He was also the voice behind the long-running Carlsberg “Probably the best lager in the world” campaign, promoted Domecq sherry on British television and provided narration on adverts for Findus.
In 1981, Welles hosted the documentary The Man Who Saw Tomorrow, about Renaissance-era prophet Nostradamus. In 1982, the BBC broadcast The Orson Welles Story in the Arena series. Interviewed by Leslie Megahey, Welles examined his past in great detail, and several people from his professional past were also interviewed. It was reissued in 1990 as With Orson Welles: Stories of a Life in Film. Welles provided narration for the tracks “Defender” from Manowar’s 1987 album Fighting the World and “Dark Avenger” on their 1982 album, Battle Hymns. His name was misspelled on the latter album, as he was credited as “Orson Wells”.
During the 1980s, Welles worked on such film projects as The Dreamers, based on two stories by Isak Dinesen and starring Oja Kodar, and Orson Welles’ Magic Show, which reused material from his failed TV pilot. Another project he worked on was Filming The Trial, the second in a proposed series of documentaries examining his feature films. While much was shot for these projects, none of them was completed. All of them were eventually released by the Filmmuseum München.
In 1984, Welles narrated the short-lived television series Scene of the Crime. During the early years of Magnum, P.I., Welles was the voice of the unseen character Robin Masters, a famous writer and playboy. Welles’s death forced this minor character to largely be written out of the series. In an oblique homage to Welles, the Magnum, P.I. producers ambiguously concluded that story arc by having one character accuse another of having hired an actor to portray Robin Masters. In this penultimate year of his life he also released a music single, titled “I Know What It Is To Be Young (But You Don’t Know What It Is To Be Old)”, which he recorded under the Italian label Compagnia Generale del Disco.
His last television appearance was on the television show Moonlighting. He recorded an introduction to an episode entitled “The Dream Sequence Always Rings Twice”, which was partially filmed in black and white. The episode aired five days after his death and was dedicated to his memory.
In the mid-1980s, Henry Jaglom taped lunch conversations with Welles at Los Angeles’s Ma Maison as well as in New York. Edited transcripts of these sessions appear in Peter Biskind’s 2013 book My Lunches With Orson: Conversations Between Henry Jaglom and Orson Welles.
Orson Welles’s directing credits include…
- 2000: Moby Dick (Short)
- 1955: Around the World with Orson Welles (7 episodes)
- 1993: It’s All True (Documentary)
- 1992: Don Quixote (original footage)
- 1985: Orson Welles’ Magic Show (TV Short)
- 1984: The Spirit of Charles Lindbergh (Short)
- 1982: The Dreamers (Documentary short)
- 1981: Filming ‘The Trial’ (Documentary)
- 1979: The Orson Welles Show (TV Movie) (as G.O. Spelvin)
- 1978: Filming ‘Othello’ (Documentary)
- 1973: F for Fake (Documentary)
- 1970: The Golden Honeymoon (Short)
- 1969: The Merchant of Venice (TV Short)
- 1969: The Southern Star (opening scenes, uncredited)
- 1968: The Immortal Story (TV Movie)
- 1965: Treasure Island (Short)
- 1965: Chimes at Midnight
- 1964: Nella terra di Don Chisciotte (TV Series documentary)
- 1962: Sinners Go to Hell (uncredited)
- 1961: Tempo (TV Series) (1 episode)
- 1960: David and Goliath (his own scenes, uncredited)
- 1958: Orson Welles at Large: Portrait of Gina (TV Short documentary)
- 1958: Colgate Theatre (TV Series) (1 episode)
- 1958: The Fountain of Youth (TV Short)
- 1958: Touch of Evil
- 1956: Orson Welles and People (TV Movie)
- 1955: Moby Dick Rehearsed (TV Movie)
- 1955: Orson Welles’ Sketch Book (TV Series) (6 episodes)
- 1955: Three Cases of Murder (segment “Lord Mountdrago”)
- 1950: The Miracle of St. Anne (Short)
- 1949: Black Magic (uncredited)
- 1947: The Lady from Shanghai (uncredited)
- 1943: Journey Into Fear (uncredited)
- 1943: The Story of Samba (Short)
- 1942: The Magnificent Ambersons
- 1939: The Green Goddess (Short)
- 1938: Too Much Johnson
- 1934: The Hearts of Age (Short)
“Even if the good old days never existed, the fact that we can conceive such a world is, in fact, an affirmation of the human spirit.”
“I’m not very fond of movies. I don’t go to them much.”
[on Hollywood in the 1980s] “We live in a snake pit here… I hate it but I just don’t allow myself to face the fact that I hold it in contempt because it keeps on turning out to be the only place to go.”
“If there hadn’t been women we’d still be squatting in a cave eating raw meat, because we made civilization in order to impress our girlfriends. And they tolerated it and let us go ahead and play with our toys.”
“For thirty years, people have been asking me how I reconcile X with Y! The truthful answer is that I don’t. Everything about me is a contradiction and so is everything about everybody else. We are made out of oppositions; we live between two poles. There is a philistine and an aesthete in all of us, and a murderer and a saint. You don’t reconcile the poles. You just recognize them.”
“I made essentially a mistake staying in movies, because I… but it… it’s the mistake I can’t regret because it’s like saying, “I shouldn’t have stayed married to that woman, but I did because I love her.” I would have been more successful if I’d left movies immediately. Stayed in the theater, gone into politics, written–anything. I’ve wasted the greater part of my life looking for money, and trying to get along… trying to make my work from this terribly expensive paint box which is an… a movie. And I’ve spent too much energy on things that have nothing to do with a movie. It’s about 2% movie making and 98% hustling. It’s no way to spend a life.”
Welles’ Oscar statuette sold for $861,542, when it was auctioned by Nate D. Sanders Memorabilia on December 20, 2011.
H.G. Wells was driving through San Antonio, Texas, and stopped to ask the way. The person he happened to ask was none other than Orson Welles, who had recently broadcast “The War of the Worlds” on the radio. They got on well and spent the day together.
ABC-TV wanted him to play Mr. Roarke on Fantasy Island (1977), but the series’ producer, Aaron Spelling, insisted on Ricardo Montalban.
He died on the same day as his Battle of Neretva (1969) co-star Yul Brynner: October 10, 1985.
Despite his reputation as an actor and master filmmaker, he maintained his memberships in the International Brotherhood of Magicians and the Society of American Magicians (neither of which are unions, but fraternal organizations), and regularly practiced sleight-of-hand magic in case his career came to an abrupt end. Welles occasionally performed at the annual conventions of each organization, and was considered by fellow magicians to be extremely accomplished.
He became obese in his 40s, weighing over 350 pounds towards the end of his life.
The monitor is a part of computer hardware that displays the video and graphics information generated by the computer's graphics card. It connects via a cable to a port on the graphics card or motherboard. Most monitors are in a widescreen format and range in size from 17’’ to 24’’ or more. A monitor, no matter the type, usually connects to an HDMI, DVI or VGA port. Other connectors may include USB, DisplayPort and Thunderbolt.
A front view of a monitor
Refresh Rates (Hz)
The refresh rate of a monitor is the speed at which the monitor’s image changes. The faster the refresh rate, the more times the image can update every second and the smoother the image will look. This number of changes per second is measured in hertz (Hz). A typical PC monitor has a refresh rate of 60Hz, but the latest gaming displays can reach all the way to 240Hz. A fast refresh rate is crucial for gaming because it allows the screen to keep up with the rapid movements of a player.
Comparison of images captured at refresh rates of 60Hz, 144Hz and 240Hz
Response time (ms)
Response time is the time it takes for a pixel to change from one colour to another. Generally measured in milliseconds (ms), it is directly related to the refresh rate (Hz): a monitor can only refresh its image quickly if the pixels can respond quickly enough. For example, a 16ms response time corresponds to a theoretical maximum refresh rate of roughly 60Hz, since 1s / 60 ≈ 16.6ms per frame.
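As a rough check on these numbers, the frame period for a given refresh rate can be computed directly. The short Python sketch below is purely illustrative; the function name and example rates are not taken from any particular monitor specification.

```python
def frame_time_ms(refresh_hz: float) -> float:
    """Time available per frame (and roughly the pixel settling budget), in ms."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240):
    print(f"{hz:3d} Hz -> {frame_time_ms(hz):5.2f} ms per frame")
# 60 Hz -> 16.67 ms, 144 Hz -> 6.94 ms, 240 Hz -> 4.17 ms
```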
A high response time produces blur and ghosting, while a low response time produces a clear, sharp image
A screen’s brightness, or luminance, is measured in nits. A nit is another name for one candela per square metre (cd/m2). Nowadays, display manufacturers generally list nits in their spec sheets. The higher the nit rating, the brighter the display. Most LCD displays are 300 to 350 nits, which looks sharp and clear in a dimly lit room.
1800R or 2000R, which is better?
The R value refers to the curvature radius of a curved monitor in millimetres, so an 1800R panel has a radius of 1.8m and a 2000R panel a radius of 2m. For most users, 1800R is the better choice for daily use or gaming because it can reduce eye strain, as the user is likely to sit within about 1.8m of the monitor. For long sessions watching movies from further back, 2000R may be the better choice.
T lymphocyte cell
Scanning electron micrograph of a human T cell
A T cell is a type of lymphocyte which develops in the thymus gland (hence the name) and plays a central role in the immune response. T cells can be distinguished from other lymphocytes by the presence of a T-cell receptor on the cell surface. These immune cells originate as precursor cells, derived from bone marrow, and develop into several distinct types of T cells once they have migrated to the thymus gland - for which these cells are named. T cell differentiation continues even after they have left the thymus.
Groups of specific, differentiated T cells have an important role in controlling and shaping the immune response by providing a variety of immune-related functions. One of these functions is immune-mediated cell death, and it is carried out by T cells in several ways: CD8+ T cells, also known as "killer cells", are cytotoxic - this means that they are able to directly kill virus-infected cells as well as cancer cells. CD8+ T cells are also able to utilize small signalling proteins, known as cytokines, to recruit other cells when mounting an immune response. A different population of T cells, the CD4+ T cells, function as "helper cells". Unlike CD8+ killer T cells, these CD4+ helper T cells function by indirectly killing cells identified as foreign: they determine if and how other parts of the immune system respond to a specific, perceived threat. Helper T cells also use cytokine signalling to influence regulatory B cells directly, and other cell populations indirectly. Regulatory T cells are yet another distinct population of these cells that provide the critical mechanism of tolerance, whereby immune cells are able to distinguish invading cells from "self" - thus preventing immune cells from inappropriately mounting a response against oneself (which would by definition be an "autoimmune" response). For this reason these regulatory T cells have also been called "suppressor" T cells. These same self-tolerant cells are co-opted by cancer cells to prevent the recognition of, and an immune response against, tumour cells.
All T cells originate from c-kit+ Sca1+ haematopoietic stem cells (HSCs) which reside in the bone marrow. In some cases the origin might be the fetal liver during embryonic development. The HSCs then differentiate into multipotent progenitors (MPPs) which retain the potential to become both myeloid and lymphoid cells. The process of differentiation then proceeds to a common lymphoid progenitor (CLP), which can only differentiate into T, B or NK cells. These CLP cells then migrate via the blood to the thymus, where they engraft. The earliest cells which arrive in the thymus are termed double-negative, as they express neither the CD4 nor CD8 co-receptor. The newly arrived CLP cells are CD4-CD8-CD44+CD25-ckit+ cells, and are termed early thymic progenitor (ETP) cells. These cells will then undergo a round of division and downregulate c-kit, and are termed DN1 cells.
At the DN2 stage (CD44+CD25+), cells upregulate the recombination genes RAG1 and RAG2 and re-arrange the TCRβ locus, combining V-D-J and constant region genes in an attempt to create a functional TCRβ chain. As the developing thymocyte progresses through to the DN3 stage (CD44-CD25+), the T cell expresses an invariant α-chain called pre-Tα alongside the TCRβ gene. If the rearranged β-chain successfully pairs with the invariant α-chain, signals are produced which cease rearrangement of the β-chain (and silence the alternate allele). Although these signals require this pre-TCR at the cell surface, they are independent of ligand binding to the pre-TCR. If the pre-TCR forms, then the cell downregulates CD25 and is termed a DN4 cell (CD25-CD44-). These cells then undergo a round of proliferation and begin to re-arrange the TCRα locus.
Double-positive thymocytes (CD4+/CD8+) move deep into the thymic cortex, where they are presented with self-antigens. These self-antigens are expressed by thymic cortical epithelial cells on MHC molecules on the surface of cortical epithelial cells. Only those thymocytes that interact with MHC-I or MHC-II will receive a vital "survival signal". All that cannot (if they do not interact strongly enough) will die by "death by neglect" (no survival signal). This process ensures that the selected T cells will have an MHC affinity that can serve useful functions in the body (i.e., the cells must be able to interact with MHC and peptide complexes to effect immune responses). The vast majority of developing thymocytes will die during this process. The process of positive selection takes a number of days.
A thymocyte's fate is determined during positive selection. Double-positive cells (CD4+/CD8+) that interact well with MHC class II molecules will eventually become CD4+ cells, whereas thymocytes that interact well with MHC class I molecules mature into CD8+ cells. A T cell becomes a CD4+ cell by down-regulating expression of its CD8 cell surface receptors. If the cell does not lose its signal, it will continue downregulating CD8 and become a CD4+, single positive cell.
This process does not remove thymocytes that may cause autoimmunity. The potentially autoimmune cells are removed by the process of negative selection, which occurs in the thymic medulla (discussed below).
Negative selection removes thymocytes that are capable of strongly binding with "self" MHC peptides. Thymocytes that survive positive selection migrate towards the boundary of the cortex and medulla in the thymus. While in the medulla, they are again presented with a self-antigen presented on the MHC complex of medullary thymic epithelial cells (mTECs). mTECs must be AIRE+ to properly express self-antigens from all tissues of the body on their MHC class I peptides. Some mTECs are phagocytosed by thymic dendritic cells; this allows for presentation of self-antigens on MHC class II molecules (positively selected CD4+ cells must interact with MHC class II molecules, thus APCs, which possess MHC class II, must be present for CD4+ T-cell negative selection). Thymocytes that interact too strongly with the self-antigen receive an apoptotic signal that leads to cell death. However, some of these cells are selected to become Treg cells. The remaining cells exit the thymus as mature naïve T cells (also known as recent thymic emigrants). This process is an important component of central tolerance and serves to prevent the formation of self-reactive T cells that are capable of inducing autoimmune diseases in the host.
β-selection is the first checkpoint, where the T cells that are able to form a functional pre-TCR with an invariant alpha chain and a functional beta chain are allowed to continue development in the thymus. Next, positive selection checks that T cells have successfully rearranged their TCRα locus and are capable of recognizing peptide-MHC complexes with appropriate affinity. Negative selection in the medulla then obliterates T cells that bind too strongly to self-antigens expressed on MHC molecules. These selection processes allow for tolerance of self by the immune system. Typical T cells that leave the thymus (via the corticomedullary junction) are self-restricted, self-tolerant, and single positive.
About 98% of thymocytes die during the development processes in the thymus by failing either positive selection or negative selection, whereas the other 2% survive and leave the thymus to become mature immunocompetent T cells. The thymus contributes fewer cells as a person ages. As the thymus shrinks by about 3% a year throughout middle age, a corresponding fall in the thymic production of naïve T cells occurs, leaving peripheral T cell expansion and regeneration to play a greater role in protecting older people.
T cells are grouped into a series of subsets based on their function. CD4 and CD8 T cells are selected in the thymus, but undergo further differentiation in the periphery to specialized cells which have different functions. T cell subsets were initially defined by function, but also have associated gene or protein expression patterns.
T helper cells (TH cells) assist other lymphocytes, including maturation of B cells into plasma cells and memory B cells, and activation of cytotoxic T cells and macrophages. These cells are also known as CD4+ T cells as they express the CD4 on their surfaces. Helper T cells become activated when they are presented with peptide antigens by MHC class II molecules, which are expressed on the surface of antigen-presenting cells (APCs). Once activated, they divide rapidly and secrete cytokines that regulate or assist the immune response. These cells can differentiate into one of several subtypes, which have different roles. Cytokines direct T cells into particular subtypes.
| Cell type | Cytokines produced | Key transcription factor | Role in immune defence | Role in autoimmunity |
| --- | --- | --- | --- | --- |
| Th1 | IFNγ | Tbet | Produce an inflammatory response, key for defense against intracellular bacteria, viruses and cancer. | MS, Type 1 diabetes |
| Th2 | IL-4 | GATA-3 | Aid the differentiation and antibody production by B cells | Asthma and other allergic diseases |
| Th17 | IL-17 | RORγt | Defense against gut pathogens and at mucosal barriers | Rheumatoid Arthritis, Psoriasis |
| Th9 | IL-9 | IRF4, PU.1 | Defense against helminths (parasitic worms) | Multiple Sclerosis |
| Tfh | IL-21, IL-4 | Bcl-6 | Help B cells produce antibody | Asthma and other allergic diseases |
Cytotoxic T cells (TC cells, CTLs, T-killer cells, killer T cells) destroy virus-infected cells and tumor cells, and are also implicated in transplant rejection. These cells are defined by the expression of CD8 on the cell surface. These cells recognize their targets by binding to short peptides (8-11 amino acids) associated with MHC class I molecules, present on the surface of all nucleated cells. CD8+ T cells also produce the key cytokines IL-2 and IFNγ, which influence the effector functions of other cells, in particular macrophages and NK cells.
Antigen-naïve T cells expand and differentiate into memory and effector T cells after they encounter their cognate antigen within the context of an MHC molecule on the surface of a professional antigen presenting cell (e.g. a dendritic cell). Appropriate co-stimulation must be present at the time of antigen encounter for this process to occur. Historically, memory T cells were thought to belong to either the effector or central memory subtypes, each with their own distinguishing set of cell surface markers (see below). Subsequently, numerous new populations of memory T cells were discovered including tissue-resident memory T (Trm) cells, stem memory TSCM cells, and virtual memory T cells. The single unifying theme for all memory T cell subtypes is that they are long-lived and can quickly expand to large numbers of effector T cells upon re-exposure to their cognate antigen. By this mechanism they provide the immune system with "memory" against previously encountered pathogens. Memory T cells may be either CD4+ or CD8+ and usually express CD45RO.
Memory T cell subtypes include central memory, effector memory, tissue-resident memory (Trm), stem memory (TSCM), and virtual memory T cells.
Regulatory T cells are crucial for the maintenance of immunological tolerance. Their major role is to shut down T cell-mediated immunity toward the end of an immune reaction and to suppress autoreactive T cells that escaped the process of negative selection in the thymus.
Two major classes of CD4+ Treg cells have been described -- FOXP3+ Treg cells and FOXP3- Treg cells.
Regulatory T cells can develop either during normal development in the thymus, and are then known as thymic Treg cells, or can be induced peripherally and are called peripherally derived Treg cells. These two subsets were previously called "naturally occurring", and "adaptive" or "induced", respectively. Both subsets require the expression of the transcription factor FOXP3 which can be used to identify the cells. Mutations of the FOXP3 gene can prevent regulatory T cell development, causing the fatal autoimmune disease IPEX.
Several other types of T cell have suppressive activity, but do not express FOXP3. These include Tr1 cells and Th3 cells, which are thought to originate during an immune response and act by producing suppressive molecules. Tr1 cells are associated with IL-10, and Th3 cells are associated with TGF-beta. Recently, Treg17 cells have been added to this list.
Natural killer T cells (NKT cells - not to be confused with natural killer cells of the innate immune system) bridge the adaptive immune system with the innate immune system. Unlike conventional T cells that recognize peptide antigens presented by major histocompatibility complex (MHC) molecules, NKT cells recognize glycolipid antigen presented by CD1d. Once activated, these cells can perform functions ascribed to both Th and Tc cells (i.e., cytokine production and release of cytolytic/cell killing molecules). They are also able to recognize and eliminate some tumor cells and cells infected with herpes viruses.
MAIT cells display innate, effector-like qualities. In humans, MAIT cells are found in the blood, liver, lungs, and mucosa, defending against microbial activity and infection. The MHC class I-like protein, MR1, is responsible for presenting bacterially-produced vitamin B metabolites to MAIT cells. After the presentation of foreign antigen by MR1, MAIT cells secrete pro-inflammatory cytokines and are capable of lysing bacterially-infected cells. MAIT cells can also be activated through MR1-independent signaling. In addition to possessing innate-like functions, this T cell subset supports the adaptive immune response and has a memory-like phenotype. Furthermore, MAIT cells are thought to play a role in autoimmune diseases, such as multiple sclerosis, arthritis and inflammatory bowel disease, although definitive evidence is yet to be published.
Gamma delta T cells (γδ T cells) represent a small subset of T cells which possess a γδ TCR rather than the αβ TCR on the cell surface. The majority of T cells express αβ TCR chains. This group of T cells is much less common in humans and mice (about 2% of total T cells) and is found mostly in the gut mucosa, within a population of intraepithelial lymphocytes. In rabbits, sheep, and chickens, the number of γδ T cells can be as high as 60% of total T cells. The antigenic molecules that activate γδ T cells are still mostly unknown. However, γδ T cells are not MHC-restricted and seem to be able to recognize whole proteins rather than requiring peptides to be presented by MHC molecules on APCs. Some murine γδ T cells recognize MHC class IB molecules. Human γδ T cells which use the Vγ9 and Vδ2 gene fragments constitute the major γδ T cell population in peripheral blood, and are unique in that they specifically and rapidly respond to a set of nonpeptidic phosphorylated isoprenoid precursors, collectively named phosphoantigens, which are produced by virtually all living cells. The most common phosphoantigens from animal and human cells (including cancer cells) are isopentenyl pyrophosphate (IPP) and its isomer dimethylallyl pyrophosphate (DMAPP). Many microbes produce the highly active compound hydroxy-DMAPP (HMB-PP) and corresponding mononucleotide conjugates, in addition to IPP and DMAPP. Plant cells produce both types of phosphoantigens. Drugs activating human Vγ9/Vδ2 T cells comprise synthetic phosphoantigens and aminobisphosphonates, which upregulate endogenous IPP/DMAPP.
Activation of CD4+ T cells occurs through the simultaneous engagement of the T-cell receptor and a co-stimulatory molecule (like CD28, or ICOS) on the T cell by the major histocompatibility complex (MHCII) peptide and co-stimulatory molecules on the APC. Both are required for production of an effective immune response; in the absence of co-stimulation, T cell receptor signalling alone results in anergy. The signalling pathways downstream from co-stimulatory molecules usually engage the PI3K pathway, generating PIP3 at the plasma membrane and recruiting PH domain containing signaling molecules like PDK1 that are essential for the activation of PKCθ and eventual IL-2 production. Optimal CD8+ T cell response relies on CD4+ signalling. CD4+ cells are useful in the initial antigenic activation of naïve CD8 T cells, and sustaining memory CD8+ T cells in the aftermath of an acute infection. Therefore, activation of CD4+ T cells can be beneficial to the action of CD8+ T cells.
The first signal is provided by binding of the T cell receptor to its cognate peptide presented on MHCII on an APC. MHCII is restricted to so-called professional antigen-presenting cells, like dendritic cells, B cells, and macrophages, to name a few. The peptides presented to CD8+ T cells by MHC class I molecules are 8-13 amino acids in length; the peptides presented to CD4+ cells by MHC class II molecules are longer, usually 12-25 amino acids in length, as the ends of the binding cleft of the MHC class II molecule are open.
The second signal comes from co-stimulation, in which surface receptors on the APC are induced by a relatively small number of stimuli, usually products of pathogens, but sometimes breakdown products of cells, such as necrotic bodies or heat shock proteins. The only co-stimulatory receptor expressed constitutively by naïve T cells is CD28, so co-stimulation for these cells comes from the CD80 and CD86 proteins, which together constitute the B7 proteins (B7.1 and B7.2, respectively) on the APC. Other receptors are expressed upon activation of the T cell, such as OX40 and ICOS, but these largely depend upon CD28 for their expression. The second signal licenses the T cell to respond to an antigen. Without it, the T cell becomes anergic, and it becomes more difficult for it to activate in future. This mechanism prevents inappropriate responses to self, as self-peptides will not usually be presented with suitable co-stimulation. Once a T cell has been appropriately activated (i.e. has received signal one and signal two) it alters its cell surface expression of a variety of proteins. Markers of T cell activation include CD69, CD71 and CD25 (also a marker for Treg cells), and HLA-DR (a marker of human T cell activation). CTLA-4 expression is also up-regulated on activated T cells, which in turn outcompetes CD28 for binding to the B7 proteins. This is a checkpoint mechanism to prevent over activation of the T cell. Activated T cells also change their cell surface glycosylation profile.
The T cell receptor exists as a complex of several proteins. The actual T cell receptor is composed of two separate peptide chains, which are produced from the independent T cell receptor alpha and beta (TCRα and TCRβ) genes. The other proteins in the complex are the CD3 proteins: CD3εγ and CD3εδ heterodimers and, most important, a CD3ζ homodimer, which has a total of six ITAM motifs. The ITAM motifs on the CD3ζ can be phosphorylated by Lck and in turn recruit ZAP-70. Lck and/or ZAP-70 can also phosphorylate the tyrosines on many other molecules, not least CD28, LAT and SLP-76, which allows the aggregation of signalling complexes around these proteins.
Phosphorylated LAT recruits SLP-76 to the membrane, where it can then bring in PLC-γ, VAV1, Itk and potentially PI3K. PLC-γ cleaves PI(4,5)P2 on the inner leaflet of the membrane to create the active intermediaries diacylglycerol (DAG) and inositol-1,4,5-trisphosphate (IP3); PI3K also acts on PIP2, phosphorylating it to produce phosphatidylinositol-3,4,5-trisphosphate (PIP3). DAG binds and activates some PKCs. Most important in T cells is PKCθ, critical for activating the transcription factors NF-κB and AP-1. IP3 is released from the membrane by PLC-γ and diffuses rapidly to activate calcium channel receptors on the ER, which induces the release of calcium into the cytosol. Low calcium in the endoplasmic reticulum causes STIM1 clustering on the ER membrane and leads to activation of cell membrane CRAC channels that allows additional calcium to flow into the cytosol from the extracellular space. This aggregated cytosolic calcium binds calmodulin, which can then activate calcineurin. Calcineurin, in turn, activates NFAT, which then translocates to the nucleus. NFAT is a transcription factor that activates the transcription of a pleiotropic set of genes, most notably IL-2, a cytokine that promotes long-term proliferation of activated T cells.
PLC-γ can also initiate the NF-κB pathway. DAG activates PKCθ, which then phosphorylates CARMA1, causing it to unfold and function as a scaffold. The cytosolic domains bind an adapter BCL10 via CARD (caspase activation and recruitment domain) domains; that then binds TRAF6, which is ubiquitinated at K63. This form of ubiquitination does not lead to degradation of target proteins. Rather, it serves to recruit NEMO, IKKα and -β, and TAB1-2/TAK1. TAK1 phosphorylates IKKβ, which then phosphorylates IκB, allowing for K48 ubiquitination that leads to proteasomal degradation. RelA and p50 can then enter the nucleus and bind the NF-κB response element. This, coupled with NFAT signaling, allows for complete activation of the IL-2 gene.
While in most cases activation is dependent on TCR recognition of antigen, alternative pathways for activation have been described. For example, cytotoxic T cells have been shown to become activated when targeted by other CD8 T cells leading to tolerization of the latter.
In spring 2014, the T-Cell Activation in Space (TCAS) experiment was launched to the International Space Station on the SpaceX CRS-3 mission to study how "deficiencies in the human immune system are affected by a microgravity environment".
A unique feature of T cells is their ability to discriminate between healthy and abnormal (e.g. infected or cancerous) cells in the body. Healthy cells typically express a large number of self derived pMHC on their cell surface and although the T cell antigen receptor can interact with at least a subset of these self pMHC, the T cell generally ignores these healthy cells. However, when these very same cells contain even minute quantities of pathogen derived pMHC, T cells are able to become activated and initiate immune responses. The ability of T cells to ignore healthy cells but respond when these same cells contain pathogen (or cancer) derived pMHC is known as antigen discrimination. The molecular mechanisms that underlie this process are controversial.
Causes of T cell deficiency include lymphocytopenia of T cells and/or defects on function of individual T cells. Complete insufficiency of T cell function can result from hereditary conditions such as severe combined immunodeficiency (SCID), Omenn syndrome, and cartilage-hair hypoplasia. Causes of partial insufficiencies of T cell function include acquired immune deficiency syndrome (AIDS), and hereditary conditions such as DiGeorge syndrome (DGS), chromosomal breakage syndromes (CBSs), and B-cell and T-cell combined disorders such as ataxia-telangiectasia (AT) and Wiskott-Aldrich syndrome (WAS).
The main pathogens of concern in T cell deficiencies are intracellular pathogens, including Herpes simplex virus, Mycobacterium and Listeria. Fungal infections are also more common and severe in T cell deficiencies.
T cell exhaustion is a state of dysfunctional T cells. It is characterized by progressive loss of function, changes in transcriptional profiles and sustained expression of inhibitory receptors. At first cells lose their ability to produce IL-2 and TNFα, followed by the loss of high proliferative capacity and cytotoxic potential, eventually leading to their deletion. Exhausted T cells typically indicate higher levels of CD43, CD69 and inhibitory receptors combined with lower expression of CD62L and CD127. Exhaustion can develop during chronic infections, sepsis and cancer. Exhausted T cells preserve their functional exhaustion even after repeated antigen exposure.
T cell exhaustion can be triggered by several factors, such as persistent antigen exposure and lack of CD4 T cell help. Antigen exposure also has an effect on the course of exhaustion because longer exposure time and higher viral load increase the severity of T cell exhaustion. At least 2-4 weeks of exposure is needed to establish exhaustion. Other factors able to induce exhaustion are inhibitory receptors, including programmed cell death protein 1 (PD1), CTLA-4, T cell membrane protein-3 (TIM3), and lymphocyte activation gene 3 protein (LAG3). Soluble molecules such as the cytokines IL-10 or TGF-β are also able to trigger exhaustion. The last known factors that can play a role in T cell exhaustion are regulatory cells. Treg cells can be a source of IL-10 and TGF-β and therefore they can play a role in T cell exhaustion. Furthermore, T cell exhaustion is reverted after depletion of Treg cells and blockade of PD1. T cell exhaustion can also occur during sepsis as a result of cytokine storm. Later, after the initial septic encounter, anti-inflammatory cytokines and pro-apoptotic proteins take over to protect the body from damage. Sepsis also carries a high antigen load and inflammation. In this stage of sepsis, T cell exhaustion increases. Currently there are studies aiming to utilize inhibitory receptor blockades in the treatment of sepsis.
While T cell exhaustion can develop during infection following persistent antigen exposure, a similar situation arises with alloantigen presence after graft transplantation. It has been shown that the T cell response diminishes over time after kidney transplantation. These data suggest T cell exhaustion plays an important role in graft tolerance, mainly through depletion of alloreactive CD8 T cells. Several studies have shown a positive effect of chronic infection on graft acceptance and its long-term survival, mediated partly by T cell exhaustion. It has also been shown that recipient T cell exhaustion provides sufficient conditions for NK cell transfer. While there are data showing that induction of T cell exhaustion can be beneficial for transplantation, it also carries disadvantages, among which are an increased number of infections and the risk of tumor development.
During cancer, T cell exhaustion plays a role in tumor protection. According to research, some cancer-associated cells, as well as tumor cells themselves, can actively induce T cell exhaustion at the site of the tumor. T cell exhaustion can also play a role in cancer relapse, as was shown for leukemia. Some studies have even suggested that it is possible to predict relapse of leukemia based on expression of the inhibitory receptors PD-1 and TIM-3 by T cells. In recent years there have been many experiments and clinical trials with immune checkpoint blockers in cancer therapy. Some of these were approved as valid therapies and are now used in clinics. The inhibitory receptors targeted by these medical procedures are vital in T cell exhaustion, and blocking them can reverse these changes.
(See also Immunosenescence.)
Most key search machines are designed around similar ideas. A controller operates a number of independent search units. This controller usually interfaces with a general purpose computer. Each search unit contains a key generator, decryptor (or encryptor) and comparator. Some designs combine the key generator and decryptor modules to improve performance. The key generator produces trial keys that need to be checked. The decryptor decrypts the known ciphertext with the trial key. An encryptor can also be used with some limitations. The comparator checks the plaintext that is generated by the decryptor to see if it is correct. If it is, the controller is signalled.
Due to its complexity, the cipher module is usually considered to be the bottleneck in the system. All other modules must be able to operate at least as quickly.
Conceptual key search machine design
A counter is an obvious choice for a key generator. The count sequence is predictable, but performance is not adequate on all devices. Xilinx FPGAs provide a dedicated carry chain which improves performance significantly.
The EFF DES cracker uses a counter where the 24 most significant bits are held constant and the 32 least significant bits counted. This technique is useful to reduce a counter’s propagation delay. The most significant bits must be counted and loaded externally. This scheme introduces the idea of a “block” of keys – a subset of the key space which can be searched in a short period of time. The 24 constant bits can be viewed as the block number. There must be a mechanism with which the controller can detect the end of block condition and start the key generator on a new block.
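A software model of this block scheme might look like the sketch below, where the 24-bit block number forms the fixed high bits and a 32-bit counter sweeps the low bits; the function name and the 56-bit key width (as used for DES) are assumptions for illustration.

```python
def keys_in_block(block_number: int):
    """Yield every trial key in one block: the 24 most significant bits of a
    56-bit key are fixed to the block number, the low 32 bits are counted."""
    assert 0 <= block_number < (1 << 24)
    base = block_number << 32
    for low in range(1 << 32):      # counter wrap marks the end-of-block condition
        yield base | low
```

The controller hands out block numbers; when a search unit exhausts its block it signals the controller and is loaded with a fresh one. The same structure also covers the shared-counter-plus-unit-ID variant described next.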
One design uses a single counter that is shared between all of the available search units. Each unit adds or concatenates a unique ID to the counter value to obtain its trial key. This scheme works well when the number of search units is a power of two since the ID can simply be concatenated, saving resources.
A number of designs , , use Linear Feedback Shift Registers (LFSRs) to generate trial keys. The main advantage of an LFSR over a counter is its high speed; propagation delays remain constant regardless of the length of the LFSR. One disadvantage of LFSRs is that their count sequence is nonlinear. Evenly breaking up a large key space between search units requires more effort than with a linear counter. One simple scheme is to use a shorter LFSR than usual and set the remainder of the key bits to a constant value. This works similarly to the block scheme for linear counters described above; the LFSR can be 32 bits long, and the remaining 24 bits set by the controller.
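For illustration, a behavioural model of an LFSR-based key generator is sketched below, following the short-LFSR-plus-constant-bits scheme just described. The feedback taps (32, 22, 2, 1) are a standard maximal-length choice from common tap tables and are an assumption, not necessarily the taps used in the cited designs.

```python
def lfsr32_keys(seed: int, high_bits: int):
    """Galois LFSR over the low 32 key bits; the remaining bits are held
    constant (set by the controller, as in the block scheme above)."""
    state = seed & 0xFFFFFFFF
    assert state != 0                  # the all-zero state never changes
    while True:
        yield (high_bits << 32) | state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0x80200003        # taps 32, 22, 2, 1 (assumed polynomial)
```

Note that an n-bit LFSR visits 2^n - 1 states, so the all-zero value in each block has to be checked separately if complete coverage matters.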
When performing a known plaintext attack, the choice of encryptor or decryptor is dependent on which has higher performance. Most ciphers (including all stream ciphers) have identical performance regardless of their mode. Some may have a more efficient implementation when implemented in one way or another. RC5 is an example of this. An RC5 encryptor can operate more efficiently than a decryptor because the order that the S array is used in during key setup matches that used in the encryption stage, allowing the phases to overlap.
Most key search machines will use a decryptor. Ciphertext-only attacks require a decryptor. Known-plaintext attacks will also require a decryptor under some conditions. This allows a more flexible comparator scheme that can detect correct plaintext even when knowledge of the plaintext is imperfect.
Two major approaches are used when implementing the cipher module; a long pipeline or a small iterative module. Most ciphers are comprised of a number of round functions, making an iterative implementation natural. Pipelined approaches can achieve much greater speeds at the expense of FPGA resources. DES is frequently implemented as either a small iterated module or a long pipeline. The iterated version takes a multiple of 16 cycles (one for each application of the round function) to produce one block of output, while the pipelined version can produce one unit of output every clock cycle. The resource gains made by using an iterated cipher implementation almost never outweigh the loss of speed. Resource constraints may force a cipher to be implemented in iterative form.
An FPGA-Based Performance Evaluation of the AES Block Cipher Candidate Algorithm Finalists explores these issues in depth. It presents FPGA performance figures for the MARS, RC6, Rijndael, Serpent and Twofish ciphers. Loop unrolling, pipelining and sub-pipelining are investigated as architectural choices. In most cases, a pipelined implementation was fastest. Fast DES Implementation for FPGAs and Its Application to a Universal Key-Search Machine explores pipelined, combinatorial and iterative approaches for the DES cipher.
Some ciphers may benefit from having values precomputed during compilation time. This is usually used to achieve higher performance in systems that have very infrequent key changes. FPGAs fit well with this approach, allowing the programmed-in key to be changed with only a small period of downtime. The speed efficiency of an FPGA DES implementation was improved significantly using this technique . The utility of this technique in a key search machine is dependent on the cipher. The plaintext or ciphertext would be compiled into the design instead of the key. This may yield improvements for some ciphers, but exploratory experiments only showed very small resource savings.
The environment that the key search machine operates in determines the choice of comparator. If a perfect ciphertext/plaintext pair is known, simply checking for bit equality will be adequate. Ignoring certain bits in the trial plaintext may be a useful extension when only a portion of the sought plaintext is known.
If a ciphertext-only attack will be attempted or the plaintext is not precisely known, it may be necessary to implement a heuristic matching scheme. Such a scheme will generally flag a number of keys as potential matches and allow humans or software to check them further for correctness.
A simple scheme to detect ASCII text is to require that the most significant bit of each plaintext byte be 0. This can be further generalised into a statistical approach that scores each plaintext byte in the plaintext according to its probability of occurrence. A Programmable Plaintext Recognizer uses similar ideas to extend Wiener’s theoretical key search machine . Applying compression to a message before encrypting it causes their heuristics to fail. This is an effective countermeasure against any statistical comparator, since the compression makes the message “look like” random data.
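A minimal software model of both heuristics is sketched below; the score table and threshold are placeholders that would be tuned to the kind of plaintext being sought.

```python
def looks_like_ascii(block: bytes) -> bool:
    """Cheap filter: every byte has its most significant bit clear."""
    return all(b < 0x80 for b in block)

def looks_plausible(block: bytes, score: list, threshold: int) -> bool:
    """Statistical filter: sum per-byte scores and compare against a threshold.
    score[b] reflects how likely byte value b is in the expected plaintext."""
    return sum(score[b] for b in block) >= threshold
```

Trial keys whose plaintext passes the filter are flagged as potential matches and handed to software (or a human) for closer inspection.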
Some applications may also benefit from a specialist comparator. A machine designed to solve the Blaze Challenge would need a specialist comparator that will find a match on any block that fits the form of the solution (in this case, when the plaintext is composed only of a single repeated byte.)
At some point in a key search machine’s operation it will be necessary to return potential keys to the host computer. Several schemes have been used to achieve this goal.
Most key search machines simply stop running when a match is found and wait for the computer to read out the key value. This is simple and flexible, but inefficient when many keys need to be returned – while a search unit is waiting to release the key it halted on, it cannot be used to search the key space.
A hardware buffer can be used to reduce the waiting time. When a key needs to be returned it is read into the hardware buffer, and the controller can read the keys out. This has the advantage of improved efficiency, but costs hardware resources.
One novel approach is to measure the amount of time needed to find the key. Using knowledge of how quickly the key space can be searched, an approximate trial key can be found. A number of keys need to be checked to account for timer inaccuracies. This method removes the need for key storage and retrieval hardware.
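The idea can be modelled as below: given the elapsed time and the known search rate, the approximate position of the matching key is computed, and a small window around it is rechecked in software to absorb timer inaccuracy. The names and the window size are illustrative only.

```python
def candidate_keys(start_key: int, elapsed_s: float, rate_keys_per_s: float,
                   window: int = 1024):
    """Yield keys around the position implied by the elapsed search time."""
    centre = start_key + int(elapsed_s * rate_keys_per_s)
    for k in range(max(start_key, centre - window), centre + window + 1):
        yield k
```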
The programming interface for this design is supplied in Key search engine 1 interface.
To produce an FPGA-based key search machine which can operate independently of the cipher algorithm. It should communicate with a computer for instructions and data. It should be reasonably scalable for large key cracks, and be easily modifiable for ciphertext-only attacks. It should allow rapid prototyping of key search machines for different ciphers.
Initial key search machine top level design
The bus provided by the Pilchard interface runs synchronously at 100 or 133MHz. The remainder of the key search machine must operate at this speed.
The top-level Status register provides general status information for the entire key search machine.
The key buffer stores potentially correct keys for the computer to read out and check further. This prevents search units from being paused for very long when a potential key is located. It is particularly useful for ciphertext-only attacks, where there may be a large number of potentially correct keys. 256 keys can be stored; this figure fully utilises the four Block SelectRAM units that are needed to store a 64 bit word.
The controller operates the search bus. It relays commands from the computer to individual search units, stores the ciphertext and plaintext registers, and polls each search unit on the bus to see if there are any keys waiting. It uses a simple state machine. While there are no commands waiting to execute, it polls search units to see if there are any keys waiting. If a key is found, it is read into the key buffer. If a command arrives, it temporarily stops polling and executes the command.
Initial key search machine search unit design
Each search unit has its own status register which the controller uses to determine if a key has been located. The key generator provides a trial key to the decryptor, which uses the key and the supplied ciphertext to produce trial plaintext. This trial plaintext is compared with the known plaintext or has a set of heuristics applied to determine if it appears to be valid. If it is, the search unit is halted until it is instructed to restart by the controller.
Several problems were identified with the original key search machine that justified the design of a new one.
The new design and its driver software took approximately four days to implement and debug. The programming interface is described in Key search machine 2 interfaces.
Revised key search machine design
There are two controllers in this design; a master and a slave. The master handles all communication with the host computer and links to the slave with an asynchronous bus modelled closely on VME. The slave controller’s only purpose is to link the asynchronous bus and the search bus, which can run synchronously at any speed. This allows search units to run at any speed, simplifying cipher implementation.
The search unit is designed to use a block system for key allocation. Only the block number is transmitted to the search unit. When retrieving the key from the search unit, the least significant 32 bits of the key are returned. The software is expected to track which search unit is searching which block.
The search bus runs at the same speed as the search units. The two clock domains (SDRAM clock and search unit clock) are linked with an asynchronous bus using similar protocols to VME.
High speed search units were later identified as a problem; it was found to be difficult to route a wide high speed bus over the entire FPGA and still meet timing constraints. A potential improvement to the machine would be to decouple the clock rate of the search bus from that of the search units or make the bus completely asynchronous. The latter was the original intent of the asynchronous bus, but the resources required to implement it made it unwieldy to use on every search unit. It remains the best solution for a large-scale machine (at least between FPGA devices).
Two linear counters were implemented. The first was a simple 64 bit counter. The second was designed to work with block schemes and added functionality to allow counting to be inhibited for ciphers that do not need a new key every clock cycle. It counts through 32 bits of range and has a further 48 bits of range that is set externally.
A comparator that checks for exact bit equality was implemented. It was 64 bits wide. It flags a match when its two inputs are identical. To ensure that it runs quickly enough with high clock speeds, it was implemented as a short pipeline. On the first clock cycle, four 16 bit segments of the trial plaintext are compared individually. On the second cycle if the result of these four comparisons is true, a match is flagged.
A simple statistical comparator was implemented using some of the ideas within . Its purpose is to use the probabilities of different bytes within the produced plaintext to determine if the plaintext “looks right”. The definition of “looks right” varies depending on the attack scenario; English text would have different statistical properties to an executable file, for example.
The algorithm used is fairly simple. The comparator takes a 64 bit input and splits it into 8 bit bytes. Each byte value has an assigned “score” – higher scores correspond with more frequently occurring byte values. The scores for each byte are added and compared against a threshold value. If the threshold is exceeded, a match is flagged.
Implementation of the algorithm was more challenging, but still straightforward. The main design constraint was that the comparator be no slower than any decryption module – in this case, the DES module running at over 149MHz and producing one word of plaintext per cycle. In order to meet this timing requirement, steps of the algorithm were split up as much as possible.
The figure below shows the steps performed by the implementation. Four RAM blocks were used to store the byte value scores (8 bits each). Each RAM block has two ports, allowing a total of eight memory lookups every cycle. The scores are added in parallel in pairs to minimise delays on each cycle. Finally, the total score is compared with the threshold (which is set by the plaintext value). Splitting up the steps in this way produces a deep pipeline, but allows very high clock rates.
Statistical comparator design
The threshold comparison stage is the main timing bottleneck. Speed improvements can be made by reducing the comparison resolution. By only comparing the most significant four bits, synthesis reports a maximum speed of 181MHz. The required resolution depends on the statistical properties of the text being attacked.
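In software the whole comparator collapses to a few lines, but the staging described above can still be made explicit. The sketch below mirrors the figure: eight parallel score lookups, pairwise additions, a final sum, and a threshold compare whose resolution can be reduced as in the 4-bit variant. The function and parameter names are illustrative.

```python
def comparator_match(word: int, score: list, threshold: int,
                     compare_bits: int = 11) -> bool:
    """Behavioural model of the pipelined statistical comparator (one 64-bit word)."""
    data = word.to_bytes(8, "big")
    s = [score[b] for b in data]                   # stage 1: eight RAM lookups
    pairs = [s[0] + s[1], s[2] + s[3],
             s[4] + s[5], s[6] + s[7]]             # stage 2: pairwise adds
    total = sum(pairs)                             # stage 3: final sum (fits in 11 bits)
    shift = 11 - compare_bits                      # stage 4: reduced-resolution compare
    return (total >> shift) >= (threshold >> shift)
```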
A small C program was written to generate character scores from files. It counts the frequency of each character in the file. The scores are then normalised down to 8 bits and output in a format suitable for entry directly into the VHDL RAM initialisation code. This data could also be used to modify the bitstream after compilation if desired. This program allows the comparator to be “trained” on similar input data to what is expected.
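The original tool was written in C; the sketch below reproduces the same idea in Python, counting byte frequencies in a representative sample, normalising to 8 bits, and printing lines that could be pasted into RAM initialisation code. The file name and output format are placeholders.

```python
from collections import Counter

def build_score_table(sample_path: str) -> list:
    """Normalise byte frequencies in a sample file to scores in 0..255."""
    data = open(sample_path, "rb").read()
    counts = Counter(data)
    peak = max(counts.values())
    return [counts.get(b, 0) * 255 // peak for b in range(256)]

if __name__ == "__main__":
    table = build_score_table("training_sample.txt")    # illustrative file name
    for value, s in enumerate(table):
        print(f'{value:3d} => x"{s:02X}",')             # paste into VHDL RAM init
```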
The DES implementation is a modified copy of the DES demonstration provided with the Pilchard board, which is itself a modified version of the Xilinx optimised DES implementation in . The order of the round functions was reversed to convert the encryptor into a decryptor.
Registers were added to the key schedule logic, but later removed when the efficient keying system described in was implemented. This scheme integrated an LFSR key generator with the DES key schedule logic. A 72 bit LFSR with taps suitable for a 56 bit LFSR was used. As the previous keys were shifted through the LFSR they remain available to the key schedule logic, which can generate the necessary subkeys with rotations. This saved approximately 500 slices that were previously used for subkey registers. Subkey generation was essentially free, although fanout on the LFSR bits did reduce the speed slightly.
The possibility of attacking a key and its complement simultaneously was considered. This halves the search space, but not the search time. The decryption portion of DES comprised the bulk of the area requirement in a hardware implementation, and this improvement only saves key schedule logic. After implementing the LFSR keying scheme above, performance improvements would be negligible.
Using the XCV1000E, a key search machine containing controllers and five search units was operated at 100MHz, giving a total search rate of 500Mkeys/sec.
The A5/1 implementation was produced from scratch using the algorithm description given in . It aims to find the initial key state rather than the key itself. Time constraints did not allow the more efficient stream cipher attack in Stream Ciphers to be implemented, and so no further work was performed using this module.
The (already small) resources needed to implement the A5/1 module could be further reduced by configuring the Xilinx LUTs as shift registers . This would complicate key loading; the entire key state could no longer be loaded in a single cycle.
The RC5 implementation was produced completely from scratch using the algorithm description given by Rivest . It implemented RC5-32/12/9. It was intended to be used to complete the RSA Secret-Key Challenge contests . The possibility of connecting the complete key search machine to distributed.net was considered as an extension.
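As a software reference for the hardware described below, here is a minimal sketch of RC5-32/12 key expansion and single-block decryption, following Rivest's published description. The RSA challenge ciphertexts use CBC with an IV, which is omitted here for brevity, and the function names are my own.

```python
P32, Q32 = 0xB7E15163, 0x9E3779B9    # RC5 magic constants for w = 32
W, MASK = 32, 0xFFFFFFFF

def rotl(x, s):
    s %= W
    return ((x << s) | (x >> (W - s))) & MASK

def rotr(x, s):
    s %= W
    return ((x >> s) | (x << (W - s))) & MASK

def expand_key(key: bytes, r: int = 12):
    """Key expansion; for RC5-32/12/9 the mixing loop runs 3*max(26, 3) = 78 times,
    matching the 78 iterations of the hardware key mix phase."""
    c = max(1, (len(key) + 3) // 4)
    L = [0] * c
    for i in reversed(range(len(key))):            # pack key bytes into words
        L[i // 4] = ((L[i // 4] << 8) + key[i]) & MASK
    t = 2 * (r + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def decrypt_block(A: int, B: int, S: list, r: int = 12):
    for rnd in range(r, 0, -1):
        B = rotr((B - S[2 * rnd + 1]) & MASK, A) ^ A
        A = rotr((A - S[2 * rnd]) & MASK, B) ^ B
    return (A - S[0]) & MASK, (B - S[1]) & MASK
```

A software search simply loops expand_key and decrypt_block over trial keys and applies the comparator; the hardware described below reorganises the same steps around the dominant 78-iteration mixing loop.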
Few prior works in this area could be located. claims to have schematics for a functional RC5 implementation on Xilinx FPGAs, but they are no longer available. The author was not able to be contacted. contains a Verilog model which was not found to be useful.
A fully pipelined design similar to that used for DES was investigated. This possibility was considered to be impractical due to the large number of registers needed for the S array.
After implementing the iterative version, the possibility of implementing a pipelined version was considered again. This time, the number of LUTs required was identified as being excessive. A prototype implementation determined that each stage of the key mixing phase would require 256 LUTs, and each half-round of the decryption phase would require 192 LUTs. Given 78 mixing steps and 24 decryption half-rounds, the number of LUTs required is 24576 – coincidentally, the exact number of LUTs available on the Virtex 1000E. Many more would be required for state decoding, communication, key generation, comparisons, routing overhead and so on. This possibility was not investigated further, but would almost certainly be feasible given more hardware resources to work with. Such an implementation would be able to provide very high search rates on sufficiently large FPGA devices.
An iterative design for the RC5 implementation was used. Block SelectRAM memories within the FPGA were used to store the S array. The number of RAM blocks was anticipated to be the limiting factor, similar to the RC4 key search engine described in . The L array was stored in three rotating registers; this eased timing constraints and prevented reads and writes to the RAM becoming a bottleneck.
The key mixing phase of RC5 took the bulk of the time needed to check a key. It required 78 iterations, each of which consists of a read and a write to the S and L arrays. To minimise the time required per cycle, the key mixing stage of the algorithm was set up to operate continuously on two separate regions of RAM. The initialisation and decryption stages were arranged to work on the opposite region of RAM. When a key mix phase completes, the decryption and initialisation phases begin on that region of RAM. In this way, the average time required to check a key would effectively be the time required to perform the key mixing phase.
RC5 RAM timing
The key mix phase needs to be completed as quickly as possible. The decryption and initialisation phases are not timing critical, and can be completed more slowly in order to save FPGA resources. The decryption module takes advantage of this by performing twice as many rounds and interchanging the A and B registers at the end of each round. In this way the subtract, shift, XOR and RAM lookup resources can be reused. The initialisation module actually performs the additions required to initialise the S array, even though these results could be trivially precomputed. This saves FPGA resources.
The general goal for the key mix operation is to complete as quickly as possible. The general goal for the decryption and initialisation operations is to use as few resources as possible, so long as the time taken for these two operations does not exceed that needed by the key mix operation.
One problematic area in the implementation was the 32 bit barrel shifter required by RC5. The initial naïve implementation required 352 slices; with the help of this was improved to 80 slices. One shifter is required for each of the key mix stage and the decryption stages. These account for a significant amount of the resource usage. Some research and experimentation was conducted to find smaller or faster shifter designs, without success. Shrinking or speeding up the barrel shifters would provide large benefits to the overall performance of the design.
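The log2-staged structure that a compact barrel shifter relies on can be modelled as follows: each bit of the 5-bit rotate amount selects one fixed-distance stage (1, 2, 4, 8, 16), so five layers of 2:1 multiplexers cover every rotation. This is only a behavioural sketch of a 32-bit left rotator, not the optimised mapping from the cited note.

```python
MASK32 = 0xFFFFFFFF

def barrel_rotl32(x: int, amount: int) -> int:
    """Rotate left using five fixed-shift stages; each stage corresponds to one
    multiplexer layer in hardware, enabled by one bit of the rotate amount."""
    for bit, dist in enumerate((1, 2, 4, 8, 16)):
        if (amount >> bit) & 1:
            x = ((x << dist) | (x >> (32 - dist))) & MASK32
    return x
```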
Running the module at 100MHz proved difficult. Routing delays introduced after the place and route stage were the cause of the problem; congestion was present at one of the RAM blocks. The delay at this point increased when the number of search units was increased, suggesting that floorplanning may be useful to reduce the delay or at least make it consistent. A brief unsuccessful attempt at floorplanning was made.
To solve this problem, two approaches were used. Originally, two RAM blocks were used to provide a 32 bit wide RAM. One port was used by the key schedule module and the other by the decryptor and initialisation module. The number of RAM blocks was doubled and writes made to both pairs. Reads could be made from either pair of RAM blocks, allowing unrelated logic to be moved to different areas of the FPGA by the place and route tools. This helped to reduce delays. The RAM blocks were not being otherwise used. Adding a wait state after RAM access allowed the module to meet its timing requirements at the cost of reduced performance.
The total time required to check an RC5 key is 469 clock cycles. Each iteration needs 6 clock cycles, and 78 iterations are required. One cycle is needed for initialisation. At the target clock speed of 100MHz, this gives a search rate of 213,220 keys/sec. 16 search units could be fit into an XCV1000E device, giving an aggregate search rate of 3.4Mkeys/sec.
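The quoted throughput follows directly from the cycle counts and can be reproduced with a few lines of arithmetic:

```python
cycles_per_key = 78 * 6 + 1            # 78 mixing iterations at 6 cycles, plus 1 for init
clock_hz = 100_000_000
per_unit = clock_hz / cycles_per_key   # ~213,220 keys/sec per search unit
total = 16 * per_unit                  # ~3.4 Mkeys/sec for 16 units
print(cycles_per_key, round(per_unit), round(total))   # 469 213220 3411514
```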
The possibility of increasing the clock speed of the RC5 module was investigated, but found to be counterproductive. The intent was to balance the time spent in each pipeline stage better, hopefully overcoming the increase in resource usage and number of stages required. Registers were inserted at locations responsible for timing limitations. These registers did not increase resource usage significantly due to the structure of the Virtex slice . The number of cycles per round increased from 5 to 8 and the synthesis clock speed from 102MHz to 142MHz, which was not an effective tradeoff. Many previously trivial operations such as the comparison needed to be split into stages instead of being simple combinatorial operations, which greatly increased the complexity of the source code. The overall resource usage also increased.
Replacing each bit in the three registers used to implement the L array with a short LUT shift register would reduce the resources allocated and potentially ease routing.
Some work was conducted to see if it was possible to take shortcuts in the key mixing operation; this was unsuccessful.
Including the ciphertext and IV at synthesis time reduced resource usage for the search unit to 539 slices. This would be a worthwhile approach for an attack where the ciphertext and IV are known in advance. It would generally not be suitable for an ASIC implementation.
This module was implemented before the second key search machine. Performance could be improved by running at a lower clock speed with fewer pipeline stages.
Benchmarks were conducted on a number of different CPUs to measure how quickly they could perform key searches. Setting up and running the benchmarks was very rapid, so many different CPUs were tested to determine if any would provide significant price/performance advantages.
Pre-written benchmarks were used. These benchmarks were faster and more thoroughly tested than what could otherwise be produced in the available time.
Each benchmark was run at least three times until consistent results were achieved. Linux benchmarks were run as the root user, prefixing the benchmark command with nice -20 to ensure that the benchmark ran with the highest priority.
Tables containing the gathered results are given in CPU benchmark results. distributed.net maintains an online database of search rates for each CPU, allowing some of the benchmark results to be verified.
Two benchmark programs were used: the distributed.net client version 19991117 (which had to be compiled from source), and the SolNET DES client . The distributed.net client gave far better benchmark results, but could only be run on Linux machines with appropriate compiler versions. Neither DES client had been optimised for modern CPUs.
dnetc -benchmark des was used to run the distributed.net benchmarks, and desclient-x86-linux -m for the SolNET benchmarks. The SolNET client’s benchmark results were unstable on faster CPUs, requiring them to be run a large number of times.
distributed.net maintains an online database of search rates for each CPU . The DES benchmarks for newer CPUs could not be verified because the CPUs did not exist at the time that the online benchmarks were gathered. The results for older CPUs were far higher than those in the online database.
Benchmarks for Celeron, P4HT and Athlon XP (Barton) CPUs had to be inferred from others based on the same core. The Mkeys/sec/MHz ratios obtained for RC5 remained fairly constant under this assumption, and this is assumed to remain true for DES.
The distributed.net client version 03033120 was used to conduct RC5-72 benchmarks. Binaries from the distributed.net website were downloaded for the relevant platform, unpacked, and the benchmark executed from the command line with dnetc -benchmark rc5-72.
The RC5 benchmark results were verified against those in the distributed.net database. Confusion is apparent with the Athlon speed ratings; it is not obvious whether an entry marked “1900” refers to a 1900+ or a 1900MHz Athlon. Nevertheless, the RC5 benchmark results gathered were found to mesh well with those in the database.
No Celeron machines based on the Pentium IV core were available to run benchmarks on, so the online benchmark results were used for analysis. These appeared internally consistent, so a Mkeys/sec/MHz rating was determined and averaged across the available benchmark results to reduce error. This rating was used to infer the missing benchmark results.
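The inference used for the missing entries amounts to scaling by clock speed; the sketch below shows the calculation with made-up numbers that are purely illustrative.

```python
def infer_rate(known, target_mhz: float) -> float:
    """Average the Mkeys/sec/MHz ratio over benchmarked CPUs sharing a core,
    then scale by the clock speed of the CPU with no benchmark result."""
    ratio = sum(rate / mhz for mhz, rate in known) / len(known)
    return ratio * target_mhz

# (MHz, Mkeys/sec) pairs for two hypothetical benchmarked CPUs on the same core
print(infer_rate([(1700.0, 3.4), (2000.0, 4.0)], 2400.0))   # -> 4.8
```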
J. Keller and B. Seitz, “A hardware-based attack on the A5/1 stream cipher,” in APC 2001. VDE Verlag, 2001, pp. 155 158. [Online]. Available: http://www.informatik.fernuni-hagen.de/ti2/papers/apc2001-nal.pdf
P. Leong, M. Leong, O. Cheung, T. Tung, C. Kwok, M. Wong, and K. Lee, “Pilchard - a reconfigurable computing platform with memory slot interface,” in Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), April 2001. [Online]. Available: http://www.cse.cuhk.edu.hk/~phwl/papers/pilchard_fccm01.pdf
I. Goldberg and D. Wagner, “Architectural considerations for cryptanalytic hardware,” CS252 Report, 1996. [Online]. Available: http://www.cs.berkeley.edu/~iang/isaac/hardware/paper.ps
I. Hamer and P. Chow, “DES cracking on the Transmogrifier 2a,” in Lecture Notes in Computer Science, ser. Cryptographic Hardware and Embedded Systems. Springer-Verlag, 1999, no. 1717, pp. 13–24. [Online]. Available: http://www.eecg.toronto.edu/~pc/research/publications/des.ches99.ps.gz
K. L. K.H. Tsoi and P. Leong, “A massively parallel RC4 key search engine,” in Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), 2002, pp. 13–21. [Online]. Available: http://www.cse.cuhk.edu.hk/~phwl/papers/vrvw_fccm02.pdf
M. Blaze. (1997, June) A better DES challenge. [Online]. Available: http://www.privacy.nb.ca/cryptography/archives/cryptography/html/1997-06/0127.html
(2003, October) distributed.net: Node Zero. [Online]. Available: http://www.distributed.net/
The RSA Laboratories Secret-Key Challenge. RSA Security. [Online]. Available: http://www.rsasecurity.com/rsalabs/challenges/secretkey/index.html
A. Elbirt, W. Yip, B. Chetwynd, and C. Paar, “An FPGA-based performance evaluation of the AES block cipher candidate algorithm finalists,” IEEE Transactions on VLSI Systems, vol. 9, no. 4, August 2001.
J. Leonard and W. H. Mangione-Smith, “A case study of partially evaluated hardware circuits: Key-specific DES,” in Field-Programmable Logic and Applications. 7th International Workshop, W. Luk, P. Y. K. Cheung, and M. Glesner, Eds., vol. 1304. London, U.K.: Springer-Verlag, 1997, pp. 151–160. [Online]. Available: http://citeseer.nj.nec.com/leonard97case.html
D. Wagner and S. M. Bellovin, “A programmable plaintext recognizer,” 1994. [Online]. Available: ftp://ftp.research.att.com/dist/smb/recog.ps
C. Eilbeck. My crypto page. [Online]. Available: http://www.yordas.demon.co.uk/crypto/
Xilinx, Inc. SRL16 16-bit shift register look-up-table (LUT). [Online]. Available: http://toolbox.xilinx.com/docsan/xilinx5/data/docs/lib/lib0393_377.html
E. Soha. (1998, May) RC5 on FPGAs. No longer available from original source. [Online]. Available: http://web.archive.org/web/19981205053422/http://www-inst.eecs.berkeley.edu/~barrel/rc5.html
Xilinx, Inc., “XC4000E and XC4000X Series Field Programmable Gate Arrays,” May 1999. [Online]. Available: http://www.xilinx.com/bvdocs/publications/4000.pdf
Xilinx, Inc., “Virtex-E 1.8V Field Programmable Gate Arrays,” July 2002. [Online]. Available: http://direct.xilinx.com/bvdocs/publications/ds022.pdf
distributed.net. (2003, October) distributed.net: Client Speed Comparisons. [Online]. Available: http://n0cgi.distributed.net/speed/
(1997, May) SolNET DES Challenge Attack: Download Page. [Online]. Available: http://www.des.sollentuna.se/download.html | 1 | 7 |
Apparently innocuous nontuberculous mycobacteria (NTM) species, classified simply by their rapid or slower growth rates, can cause an array of illnesses, from skin ulceration to severe pulmonary and disseminated disease. A study was performed on more than half a million mycobacterial cultures to determine the annual incidence of culture-verified NTM disease from 1991 to 2015. In the analysis by Hermansen and colleagues, greater than expected incidence rates of NTM disease were seen in young children aged 0–4 years (5.36/10^5/yr) and in older people (those aged 65–69; 2.39/10^5/yr). In the United States, an elevated prevalence of NTM-associated lung disease in people above 65 years of age has also been observed. This bimodal age association with NTM incidence points to the significant contribution of an insufficient immune response to susceptibility. Adding to the overall burden of NTM disease is the rise in the direct medical costs associated with it, which are staggeringly high. In 2010 alone, 815 million dollars were spent to treat 86,244 cases of NTM disease in the United States. Furthermore, NTM infection frequently leads to chronic disease that requires extended, complex, and sometimes poorly tolerated drug regimens over many months to years, and following treatment, patients can experience relapse from incomplete treatment or reinfection [8–13]. These studies and intricacies underscore the need to develop effective vaccines and drug treatments for use in highly susceptible populations and settings of emerging drug resistance.

Fig 1. Body sites affected by NTM species. Pulmonary infections are generally due to inhalation from environmental sources. Disseminated infections are most prevalent in immunocompromised persons, such as those on anti-TNF antibody therapy or suffering from HIV. Cervical lymphadenitis presents most commonly in children. Bone and joint infections by NTM are usually introduced via trauma. Lastly, skin and soft tissue infections are initiated via surgery, trauma, or broken skin barriers contacting contaminated water. The figure represents the more commonly encountered species; some less-common species are not depicted. HIV, human immunodeficiency virus; MAC, Mycobacterium avium complex; NTM, nontuberculous mycobacteria; TNF, tumor necrosis factor.

Unfortunately, NTM infection and disease is not a reportable condition across much of the United States, and identification of NTM to the species level is not done routinely. Despite this variance in methodology and reporting across geographic areas, NTM prevalence has risen steadily since 1950 and is probably an underestimate. The most frequent NTM species to cause lung disease belong to the MAC. MAC species are the most abundant species cultured from pulmonary disease across the Americas (85%–35.4%), Australia (83%–67.3%), Europe (82%–22.4%), and parts of Asia (71.4%–39.7%), in comparison to other species causing pulmonary disease. Several other NTM species are also frequently cultured. Whole-genome sequencing (WGS) of isolates is advancing our understanding of epidemiology, geographic diversity, and transmissibility [19, 20], and this process could be applied to other clinical NTM isolates.

Despite an already enormous prevalence, the number of species in the genus is destined to increase further in the coming years; in fact, isolates not fitting any known species are frequently encountered in research laboratories. This review specifically highlights MAC and related species because they represent a substantial proportion of disease worldwide. See the excellent reviews highlighting issues surrounding the diagnosis of NTM [21, 22], including this varied representation of species in any given infection. Importantly, species-level identification of NTM has a large…
Everywhere you turn, people are talking green.
It is fun to watch as products and ideas continue to surface that are helping our planet become less wasteful in energy consumption. Items that we never dreamed could possibly be replaced are now slowly disappearing with new technology presenting, literally, a better light bulb.
The light bulb has been marveled at ever since Thomas Edison introduced it in 1879. At first an alternative to kerosene lamps, the light bulb seemed impossible to improve upon. Today, however, the regular incandescent light bulb is one of the most energy-wasting products there is.
Nonetheless, it is also the cheapest product of its kind.
During the 1920s, fluorescent lighting was used in garages and commercial buildings, where it was a good replacement for the regular light bulb. Fluorescent lighting produces up to 75% less heat than a regular light bulb, but it also had drawbacks that kept it from being a suitable replacement for residential use. The problems with fluorescent lighting were the large tubes filled with mercury, the temperature sensitivity, and the intermittent flickering, as well as the difficulty of disposing of the tubes.
It was only during the 1970s that people really started taking energy consumption into consideration, and that was when the tungsten, or quartz, lamp was developed. This bulb was popular because of its light clarity, which is much better than that produced by the incandescent light bulb. The new bulb also had a longer life span and required a lower voltage to work. Despite all the benefits and better quality, there was still a reluctance to switch away from the regular light bulb, since the new bulb was more expensive to buy. However, people who did go in the direction of the new technology were doing so to cut energy consumption.
But now, there is an option that we have all been waiting for.
The most awesome and creative alternative lighting technology was discovered in the 1960s and is known as the LED. This device is made from a semiconductor material that converts electrical energy into light energy without the drawbacks found in other lighting types. Its inventor, Nick Holonyak, known as ‘the father of the light-emitting diode’, has received many honors for an innovation that is changing the world.
The light-emitting diode, or LED, is different from all other light bulbs. LEDs are the same kind of light as the red indicator lights on electronic devices that show whether the device is on or off. What makes these lights different is that they give off very little heat and no distorted tint, they have no moving parts, and they have a long life. On top of that, they are also very energy efficient.
This technology is being improved all the time now, and as a result, LED bulbs are being used as replacements in street lights and decorative lighting, as well as in the LCD displays of flat-screen televisions. Although the costs associated with this technology are high at first, the savings and other benefits outweigh them. Pretty soon all light bulbs will be replaced by the energy-efficient LED. This only shows that it is possible to make a better light bulb.
Article source: https://www.allbestarticles.com | 1 | 3 |
Mary Rodwell is the founder and principal of Australian Close Encounter Resource Network (ACERN). Born in the United Kingdom (UK) and eventually migrating to Western Australia in 1991, she currently resides in Queensland.
On July 20th we will be celebrating the 50th Anniversary of the Apollo 11 Moon landing. This was an epic event in human history: in 1969 the first humans landed on the Moon, Commander Neil Armstrong and Buzz Aldrin, both from the USA. Being the first to land on the Moon, the pilots must have seen many great things, but did they see any UFOs?
Buzz Aldrin is quoted as saying:
“On Apollo 11, en route to the Moon, I observed a light out the window that appeared to be moving alongside us. There were many explanations of what that could be, other than another spacecraft from another country or another world – it was either the rocket we had separated from, or the 4 panels that moved away when we extracted the lander from the rocket and we were nose to nose with the two spacecraft. So in the close vicinity, moving away, were 4 panels. And I feel absolutely convinced that we were looking at the sun reflected off of one of these panels. Which one? I don’t know. So technically, the definition could be “unidentified.””
And Neil Armstrong is quoted as saying:
“It was incredible, of course we had always known there was a possibility, the fact is, we were warned off! (by the Aliens). There was never any question then of a space station or a moon city. I can’t go into details, except to say that their ships were far superior to ours both in size and technology – Boy, were they big! and menacing! No, there is no question of a space station.”
I will let you decide what went on with the Apollo 11 Moon landing. But from what the space pilots said, there is no doubt that there was, and still is, something strange happening up there.
Let us remember the 50th Anniversary Apollo 11 pilots.
On May 28th, 1996, a package of odd artifacts was received by popular radio talkshow host Art Bell. It contained what might turn out to be a piece of an alien spacecraft. The anonymous sender claims that his grandfather was on the crash recovery team at Roswell, and kept these samples of debris for himself. Since 1947, when the Roswell UFO crash occurred, the pieces had “sat for years inside a closet.” Of particular interest are the small broken pieces of what is purported to be from the shell or skin of the Roswell UFO.
Concerning these pieces, the anonymous sender told Mr. Bell, in a letter:
|"I now include the enclosed, and can only say that these scrapings came from the exterior underside of the Disc itself. It literally was a "shell-like" shielding of the Disc. Brittle and layered, almost with a prefabricated design and placing."|
The man who sent the material also mentioned that his grandfather ‘s diary, from which he is gathering the facts, mentions that the crashed craft was a “wedge shaped disc.”
Roswell UFO ©1994 William McDonald
These are interesting statements because Roswell UFO crash researchers had previously interviewed crash site witnesses, such as military personnel who were at the scene, and found that the underside of the craft was described as being covered with an unusual “tile,” possibly for heat protection, or some other purpose. The pieces of “shell-like shielding” sent to Art Bell might very well be what the Roswell UFO crash witnesses were describing. Note also the “wedge shaped disc” form, just as described in the diary.
“They are made up of dozens of microscopic layers of Bismuth, with thicker layers of Magnesium separating them – layers thinner than a sheet of paper”
“Bismuth, with a strong positive electrostatic charge applied to it, has also been shown to actually lose weight or mass, right down to ZERO.”
Interestingly, the initial report on the analysis of the “shielding” pieces indicates that they are made of dozens of microscopic (3-4 micron) layers of Bismuth, with thicker (20 micron) layers of Magnesium separating them – layers thinner than a sheet of paper. Bismuth is an interesting element, not ordinarily used in thin layers. No lab or technician has been able to explain how these pieces were made or what purpose they might have. They had never seen anything like them. Since the Bismuth is very dense (like lead) it makes sense to use thick layers of Magnesium (very light) to separate the Bismuth layers, and build a thick panel of material. Bismuth is also mentioned in the literature as being electrogravitic. Spinning discs of Bismuth (patented by GE Engineer Henry Wallace: Pat.#3626605, 3626606, & 3823570) have apparently been shown to DEFY GRAVITY. Bismuth, with a strong positive electrostatic charge applied to it, has also been shown to actually lose weight or mass, right down to ZERO. Spheres of Bismuth, when dropped, have reportedly fallen faster than they are supposed to (according to Newton’s “Laws”).
It is strange stuff, and this “UFO crash debris” not only contains ultra-thin layers (3 microns - much thinner than paper) of Bismuth, but it is absolutely PURE - no Oxygen, nothing (according to the reports coming in from researcher Linda Moulton Howe).
Perhaps even more significantly, Linda Howe has been speaking with a man called “Dan” who has verified that he worked at the Aeronautical Systems Division at Edwards AFB and also at Wright-Patterson AFB, back in the ’70s. He became very disturbed when he heard of these “shell” objects that Art Bell had been sent. Dan was worried because during his employment as a research physicist, working on Top Secret back-engineering projects, he says he was handed some pieces of material to analyze, and found, at the time, that they were made of about 30 ultra-thin layers of PURE BISMUTH, separated by thicker layers of Magnesium! In other words, the strange piece of material that the Air Force wanted him to study and identify was nearly identical to the pieces which Art Bell now has. Dan never did identify the purpose of the Air Force piece, but now thinks that the purpose might indeed be the levitation of a craft.
(Excerpt from the Periodic Table of the Elements.)
Adding to the mystery, the portion of the Periodic Table of the Elements shown above reveals how Bismuth (Bi) and the mysterious “element 115” lie right on top of each other. When elements appear in the same column (series), they tend to share characteristics (such as the inertness of the noble gases). Element 115 has yet to be discovered or created (by humans), but it is of interest because of the story that Robert Lazar and others tell, namely that alien spacecraft make use of it in their propulsion systems! Is it just a coincidence? Could it be that element 115 has even more potent “antigravity” properties than the far easier-to-obtain Bismuth? Come back to this site for further details, later.
Roswell Crash Anonymous Letters
April 10th, 1996
Dear Mr. Bell,
I’ve followed your broadcasts over the last year or so, and have been considering whether or not to share with you and your listeners, some information related to the Roswell UFO crash.
My grandfather was a member of the Retrieval Team, sent to the crash site, just after the incident was reported. He died in 1974, but not before he had sat down with some of us, and talked about the incident.
I am currently serving in the military, and hold a Security Clearance, and do NOT wish to “go public”, and risk losing my career and commission.
Nonetheless, I would like to briefly tell you what my own grandfather told me about Roswell. In fact, I enclose for your safekeeping “samples” that were in the possession of my grandfather until he died, and which I have had since his own estate was settled. As I understand it, they came from the UFO debris, and were among a large batch subsequently sent to Wright-Patterson AFB in Ohio from New Mexico.
My grandfather was able to “appropriate” them, and stated that the metallic samples, are “pure extract aluminum”. You will note that they appear old & tempered, and they have been placed in tissue-paper, and in baggies for posterity.
I have had them since 1974, and after considerable thought and reflection, give them to you. Feel free to share them with any of your friends in the UFO Research Community.
I have listened to many people over the years discuss Roswell and the crash events, as reported by many who were either there or who heard about it from eyewitnesses.
The recent Roswell movie, was similar to my grandfather’s own account, but a critical element was left out, and it is that element which I would like to share.
As my grandad stated, the Team arrived at the crash site just after the AAF/USAF reported the ground zero location. They found two dead occupants, hurled free of the Disc.
A lone surviving occupant, was found within the Disc, and it was apparant, it’s left leg was broken. There was a minimal radiation contamination, and it was quicky dispersed with a water/solvent wash, and soon the occupant was dispatched for medical assistance and isolation. The bodies were sent to the Wright-Patterson AFB, for dispersal. The debris was also loaded onto three trucks which finished the on-load just before the sunset.
Grandad was part of the Team that went with the surviving occupant. The occupant communicated via telepathic means. It spoke perfect english, and communicated the following:
The Disc was a “probeship” dispatched from a “launchship” that was stationed at the dimensional gateway to the Terran Solar System, 32 light years from Terra. They had been conducting operations on Terra for over 100 years.
Another group were exploring Mars, and Io.
Each “probeship” carried a crew of three. A “launchship” had a crew of (100) one-hundred.
The Disc that crashed, had collided with a meteor in orbit of Terra, and was attempting to compensate it’s flight vector, but because of the collision, the inter-atmospheric propulsion system malfunctioned, and the occupants sent out a distress signal to their companions on Mars. The “launchship” commander made the decision to authorize an attempted soft-landing on the New Mexican desert. At the same time, the inter-atmospheric propulsion system had a massive electrical burn-out, and the Disc was soon virtually helpless.
There was another option available to the occupants, but it involved activating the Dimensional powerplant for deep space travel. However, it opens an energy vortex around the Disc for 1,500 miles in all directions. Activating the Dimensional powerplant, would have resulted in the annihilation of the states of New Mexico, Arizona, California and portions of Mexico. Possibly even further states would have been affected.
Thus, the occupants, chose to ride the ship down, and hope for the best. They literally sacrificed their lives, rather than destroy the populations within their proximity.
The Dimensional powerplant was self-destructed, and the inter-atmospheric propulsion system was also deactivated, to prevent the technology from falling into the hands of the Terrans. This was done in accordance with their standing orders in regards to any compromise with contact experiences.
Grandad spent a total of 26 weeks in the Team that examined and debriefed the lone survivor of the Roswell crash. Grandad’s affiliation with the “project” ended, when the occupant was to be transported to a long-term facility. He was placed on-board a USAF Transport aircraft, that was to be sent to Washington, D.C.
The aircraft and all aboard disappeared under mysterious and disturbing circumstances, enroute to Washington, D.C.
It may interest you that three Fighter aircraft, dispatched to investigate a distress call from the Transport experienced many electrical malfunctioning systems failures, as they entered the airspace of the transports last reported location. No crash or debris of the Transport was ever found. The Team was disbanded.
Well, I realize I have likely shocked you with this bizzare and incredible account, and seeking to remain “unknown” likely doesn’t do anything for my credibility …eh? And the metal “samples” only will likely add to the controversy.
But, I know you will take this with a “grain of salt”, and I don’t blame you, Mr. Bell.
I just hope that you can understand, my reasons and my own desire to maintain my career and commission.
I am passing through South Carolina with an Operational Readiness Mobility Exercise, and will mail this just prior to this Exercise, possibly from the Charleston area.
I will listen to your broadcast, to receive any acknowledging or confirmation, that you have received this package.
This letter and the contents of the package are given to you, with the hope that it helps contribute to discussion on the subject of UFO Phenomena.
I agree with Neil Armstrong, a good friend of mine, who dared to say, at the WHITE HOUSE no less, that there are things “out there”, which boggle the mind and are far beyond our ability to comprehend.
April 22, 1996
At great risk, I am writing you in regards to the package sent your way. I had opportunity to listen to a tape recording of the radio broadcasts, when I returned home after having participated in the Mobility Readiness Exercises.
My son, a Senior in college recorded them for me.
I must say that I was somewhat surprised by the negative and closed-minded responses directed your way, by some of your own listeners.
You seemed to indicate that receiving the package has vastly upset your life, and in this I would like to say that wasn’t my intention, and I offer my apologies.
Further information regarding the Roswell Crash, and my own grandfathers affiliation would likely be potentially beneficial in your efforts at correlation & verification.
In this regards, I can only say, based on past conversations on the subject with Grandad, that the Retrieval Team, consisted of three segments. The On-site Team, the In-House Team, and the Security Team. The credentials of the team members weren’t only military related. There were individuals with backgrounds from The University of Colorado,Office of Naval Research, AAF/USAF & US Army, UCLA and Atomic Energy Commision & National Advisory Committee on Aeronautics & Office of Scientific Research and Developement. Additionally, there were Consultants from England, France and Russia involved.
Grandad stated their own analysis of the samples indicated it as pure extract aluminum, as a conductor for the electromagnetic fields created in the propulsion systems. However, critically-needed data was “eliminated” by the self-destruct mechanisms on the disc vehicle itself. Furthermore, the occupant-survivor of the crash, refused to disclose technical information, despite a series of interrogative attempts to extract technological data. No means could be found to secure the information.
There were always two Security Team members present at every face-to-face meeting with the survivor. The survivor had the ability to deduce thoughts and questions, prior to them being asked. Sometimes it became frustrating.
The Disc itself was literally dissected, and it was discovered that the propupsion system had actually fused together the many interior components. There were control-type devises forged in the shape of the alien hand, which were assumed as controls and activation surfaces.
What is today fiber-optic technology, was part and parcel of the alien technology within the control panels, albeit fused and melted when the self-destruct mechanism was activated.
There were Westinghouse-affiliated persons on the Team, and Grandad always thought, some of them had gone back, with the knowledge and incorporated it into the future research with the phone systems.
Of course the Military was concerned as to the ability of the Aliens to enter our atmosphere at will, undetected, and thusly they recommended to the President that a Space Program be set into motion, and that a system of satellites be placed into orbit by 1957, and this satellite system be patched into the then DEW Line early warning system…which became later NORAD.
Grandad stated that it was his opinion, that NORAD was formed not only to track possible ICBM’s from hostile nations, but as a established Detection System for UFO craft. That is why the NASA Space Agency has been “incorporated” by and large with our Armed Forces, and there are so many “classified” missions.
This is my oponion, but Grandad prophesied such occuring as far back as 1971.
Well, I am scheduled to travel back to Charleston AFB and then Pope AFB. I’ll mail this from somewhere in SC.
I’ll not likely communicate again. My wife is concerned, as am I that the Intelligence agencies will put two & two together, so it is inadvisable to further communicate this information. I hope you understand my position.
I could likely face a courts martial or sedition charges, for stating some of this information, and opinions.
You would be surprised at the extent of internal policies on this subject, and the consequences for current commissioned officers talking about UFO Phenomena.
I was surprised by Edgar Mitchell’s statements of recent date and I imagine there are many involved with Roswell, who are a bit upset at events underway.
However, he is a man of outstanding character & integrity, and knows whereof he speaks, as do quite a few other astronauts.
I wish you all the best, and will be listening.
I commend your courage and integrity.
I hope your listeners understand, that the subject of Roswell has great potential at extrapolating the truth on UFO’s, and what has come to be known as a cosmic watergate, is only the tip of an iceberg.
Grandad said that when the truth does come out, humanity will be changed beyond comprehension. He also said many on the In-House Team lobbied to release the information to the public. Not all of them were paranoid in trusting the public with the truth.
On May 19th, Linda Howe provided a followup report on Art's Parts. Here is a Real Audio file of the report.
No Date (arrived May 28, 1996)
I have listened with interests, to the ongoing reports on the samples I sent your way.
I noted that the Researcher discussing the testing of the samples noted that basically, it is merely Aluminum.
Slight variations on the testing, but indistinguishable from “normal” Aluminum.
Actually, this is precisely the same initial findings of Grandad’s Team. However, I neglected to include metallic samples of the exterior of the crashed Roswell disc.
I now include the enclosed, and can only say that these scrapings came from the exterior underside of the Disc itself. It literally was a “shell-like” sheilding of the Disc. Brittle and layered, almost with a prefabricated design and placing.
Keep in mind Mr Bell, that these are the last of Grandad’s samples. They have sat for years inside a closet, with his personal effects.
Because of certain concerns, I will not be contacting you on this matter. Perhaps, I am a bit paranoid, but I do have a family & career to think about. I hope you understand.
Hope these last samples are helpful.
Of course, I will be listening.
On May 31st, Linda Howe provided a further extensive report on Art's Parts. This program is captured in our Real-Audio Archives.
Letter to Linda Howe
Another letter appeared, this time sent to Linda Howe.
May 25, 1996
Dear Ms Howe,
I’m not certain if this is how your last name is spelled. Nonetheless, I wanted to “touch base” with you, personally. So I have taken this opportunity to do so.
I’m the individual who has dispatched the “artifacts” of Roswell Disc debris, to talk show Art Bell.
While I was on a flight to Hungary, with Joint Endeavor; my wife acquired your address on a “Dreamland” Broadcast. I’ve been able until this deployment to listen with interests to your comments & opinions, as to the “artifacts”, and noted some apprehension as to the materials, itself. This is fully understandable.
When I decided over 6 months ago, to turn these “artifacts” over to the UFO Research Community, I decided that it would be accomplished by allowing Art Bell to be the mediator/trustee of the “artifacts”. Partly because of his record of complete neutrality on these matters, but moreso, because I thought the “artifacts” safety would be insured completely. I regret the undue & unwarranted turmoil, caused to the Bell household, but still feel justified in my initial decision. I’ve been looking over Grandad’s journal, and wanted to share a few brief extractions with you and Mr Bell.
Firstly, I note that the intent of your own efforts as to the exams of the “artifacts” is to confirm some sort of other-world extraterrestial, source of the metals.
This was the initial aspect of the original exams. But, what came to light was that the metal was virtually indiscernable to Earth metals. According to Grandad’s notations: “this was to insure that in the event of crash or capture, that no verification as to the alien network or the homeworld confederation could be proven, and a compromise of the Alien directives transpired”. Apparantly, “the probeships were constructed with metallic base metals, indistinguishable from Terran metals, as a protective measure & security safeguard”.
There was a drawback to this, “a negative consequence was that the probeships would be vulnerable to radar interferance. The Aliens remedy to this was to insure the probeships maintained high velocity trajectories, and activated these velocities upon entering the Terran atmosphere”.
Most of the interatmosphere operations, occured in the Western Hemisphere, and in the areas of Europe & Asia, with populations consisting of “network selectees”. This was a term that the Aliens used to identify persons destined for contact. These contacts were “conceived & predestined prior to birth” and was “scheduled to be accelerated & increased throughout the three decades prior to 2000 A.C.E., when the network would initiate the “transition”.
“Transition” was a term that meant full-contact, occuring around the Terran year 2025.
All this information was acquired from the sole surviving Alien occupant, and was correlated with information retrieved from the databanks of the command console panels within the Disc command center. It took a full year and a half to decipher the Alien language, which resembled ancient Babylonian script. But, with the help of World War II cryptologists who worked with the War Department, the code texts were broken, and deciphered. All the results were shared with the Offices of those overseeing The “Project” as it was known, among them General Curtis Lemay, Secretary of State George Marshall, and President Truman, and the Intelligence operatives, who oversaw Security for the Roswell “Project”.
Most of these activities transpired at Wright-Patterson, and would result “in an extensive reformatting of the entire policy developments, regarding potentiol Alien Contact. Upon advice of the “Project” directors, the decision was made to incorporate a discredation campaign on those claiming Alien Contact, and those who advocated Alien existance.”
Furthermore, the “Project” directors, recommended that the U.N. be granted overall command authority over the “Project”. Both Secretary of Satate Marshall and his successor Dean Acheson argued against this recommendation. President Truman personally visited Wright-Patterson on a number of occasions.
So, you see Ms Howe, that there is alot more to this Roswell “Project” and “incident”, then many can even begin to discern or comprehend.
Try to imagine as Grandad, used to say; that it is 1947, and the USA has just survived a global struggle, millions had been killed or injured, and suddenly, even as the dawn of the Atomic age came to being, that the existance of Alien Life has been accidently discovered. Many facets of inquiry were taken into account, and some of the most brilliant & accomplished minds in Government would do anything to ascertain the exact extent and perspective purpose of these Aliens.
What was once considered fiction, was proven reality. Absolutely every attempt was made to exhaust the efforts at understanding & comprehending this extraordinary event. It was decided to keep the information secret, classified and beyond retrieval.
It has been almost 50 years, and the secret has been steadily maintained, protected, and insured. WHY?
Grandad was one who lobbied for full-disclosure. He understood that release of the information could result in chaos, of a sort in civilization. Because of his concern for the safety of his family, he complied with the secrecy oath.
It wasn’t until the mid 1960’s that he talked with anyone on the “Project”. He spoke with one particular person, who had promised confidentiality, and was faithful to that promise. You may have known that person: Dr J. Allen Hynek.
Other than that brief discourse with Dr Hynek, Grandad was silent.
The year 2000 approaches. It is time that the Roswell secret is disclosed. The weight of the evidence will demand a verdict. That verdict will be rendered by the people. Millions have witnessed the UFO Phenomena in the last 50 years.
Virtually thousands have claimed contact with Aliens. What happened at Roswell was a beginning. Not a beginning of the end of civilization, as some feared if the information was disclosed. But, actually a beginning of beginnings.
One can argue the impact of an accidental discovery of this magnitude.
50 years of UFO Phenomena have occurred for a purpose. A stellar purpose predetermined since that first A-Bomb was detonated in New Mexico.
Humanity has survived in spite of itself. Because of itself.
I believe that the “Transition” Grandad wrote of is underway. What I’ve shared with Art Bell, you, and the entire world was done in Honor of a quiet and courageous gentleman who was a witness to the wonders of this Universe.
Our world is changing.
Nothing can stop that. Nothing.
This concludes my own part in what lies ahead. I now go back to my career & responsibilities, to my family and my own life. I have dared to risk, and no matter the consequenced, I am confident Grandad would support this action.
You have the “evidence”, some have sought for and demanded.
Of course, for some it will not be enough. Discount it, discredit it, so be it. It matters little in the final analysis…sound & fury signifying nothing.
The Truth will come out about Roswell.
Indeeed, it has come out. Was it not a Honorable man who once stated: “Ye shall know the Truth, and it shall set you free”?
I wish you and Art Bell, all the Best!
Dreamland Audio Report
On the 6/23/96 Dreamland, Linda reports on the Hardness Testing of the Aluminum Pieces and some Telephone Research regarding the Skin Pieces. These are Real-Audio files. Get a Real-Audio Player from the Progressive Networks web site.
Dreamland Audio Report
On the 9/15/96 Dreamland, Linda reports on the Testing of the Skin Pieces by Travis. This is a Real-Audio file, approximately three and a half minutes long. Here are Linda and Art discussing reactions to the Video of Testing shown at a recent UFO conference, along with some discussion of the Finnish anti-gravity experiment. This runs 8 minutes.
Voltage Test Video Clips and Report
After the discovery of the Planet Pluto in 1930 CE, astronomers soon noted that Pluto alone could not account for the influences that earlier theories had predicted a hypothetical outer planet would exert on the orbits of the Planets Uranus and Neptune. So eventually, in the 1970s, after computers had become commonplace, a computer-generated model of this "Planet X," as it was called, was created. It was determined that Planet X would have to be at least five times bigger than the Planet Earth. They also calculated the length and shape of its orbit around the Sun, as well as the number of years necessary to complete such an orbit.
In 1983, with NASA’s cooperation, a group of astronomers began a comprehensive survey of the sky with the Infrared Astronomical Satellite (IRAS). In the fall of that year the IRAS discovered several moving objects in the vicinity of this solar system, including 5 previously unknown comets, a few “lost” comets, 4 new asteroids and “an enigmatic comet-like object.” Headlines read “Giant Object Mystifies Astronomers” and “Mystery Body Found in Space.” The Washington Post ran the headline and story “At Solar System’s Edge Giant Object is a Mystery — A heavenly body possibly as large as the giant planet Jupiter and possibly so close to Earth that it would be part of this solar system has been found in the direction of the Constellation Orion by an orbiting telescope called the IRAS. So mysterious is the object that astronomers do not know if it is a planet, a giant comet, a ‘protostar’ that never got hot enough to become a star, a distant galaxy so young that it is still in the process of forming its first stars, or a galaxy so shrouded in dust that none of the light cast by its stars ever gets through. ‘All I can tell you is that we don’t know what it is,’ said Gerry Neugebauer, chief IRAS scientist.”
The United States Government squashed the story immediately! For some arcane top-secret reason the government doesn’t want to alarm or panic the general public by disclosure of this discovery. Why? Because a race of super-beings inhabits that planet, and common knowledge of this fact would have people screaming in the streets. Listen to this story. . . .
Long, long ago, approximately 500,000 years back into the past, our Planet Earth/Tiamat was a quite different place than it is today. To clarify a point, this planet’s original name is Tiamat. The nickname “Earth,” from the Greek “Gaia,” is only a recent innovation. Here in these WebPages, this planet will always be referred to as Planet Tiamat.
Half a million years ago, Tiamat was not located in Space where it is today. It orbited farther out from the Sun, in between the orbits of Mars and Jupiter. Mars was orbiting at a distance much closer to the Sun than now and was quite habitable, with a temperate climate and liquid water. This fact has been verified numerous times by NASA and other scientific groups. What those groups do not acknowledge is the fact that Mars had a different orbital pattern than today. Establishment groups do not accept a cataclysmic evolution of this solar system, at least not publicly.
Then, too, Tiamat’s system was closer to the Star Sirius (or Sothis, as the ancient Egyptians called it). This solar system and the Sirian planetary system are part of a unit. The two systems are gravitationally connected to one another, a new fact that is now beginning to gain widespread consensus from the scientific community. Our “Sirian Regional System” as a unit revolves around the Central Sun Alcyone in the Pleiades Cluster, which might be termed “the Pleiades Quadrant.” This greater sector revolves around the Galactic Center, in the direction of the Stars of Sagittarius, once every 200,000,000 years or so. What is so significant about our present-day epoch is that certain great cycles relating to orbital alignments within the Pleiades Quadrant and between this Quadrant and the Galactic Center are starting to repeat themselves, and there’s nothing that we can do about it. It’s happening, folks! Prepare for the unexpected! This Great Cycle should be in full swing by 21 December 2012CE. Mark your calendars!
But returning to the history of Nibiru and Tiamat, our planet had a much colder environment than today. The humanoid population, which our paleontologists and the like call "Neanderthals," were hardier and hairier than we are. They lived in caves to take advantage of the natural internal warmth of Tiamat. These early Tiamatians were directly descended from ancestors in the Pleiades, a fact that can be amply verified by the history and mythology of the Mayans and the Polynesians, to name a couple. However, the Pleiadian origins of Tiamat in the first place will not be discussed in this brief history. That is for another time and space.
This solar system was internally stable 500,000 years ago. But an unanticipated, strange event occurred. One of the larger planets from the Sirian System strayed off course and drifted our way. This planet was unwittingly captured by our Sun and thrown into an extremely elongated comet-like orbit, lasting 3,600 of our current years, with its aphelion at the Oort Cloud, the very boundary of the Sun’s gravitational field. It is approximately the size of our Planet Neptune. It is populated with a reptilian super-race governed by an elite aristocracy known as “the Nefilim.” The general population is known as “Anunnaki” or “Anakim.” At the time of this event, the Planet Nibiru was being ruled by Emperor Alalu and Empress Lilitu.
After its capture by our Sun, the Planet Nibiru slowly began to suffer atmospheric deterioration. The governing “Council of Twelve” headed by Emperor Alalu met in emergency session and concluded that in order for their planet to survive, a heat-shield of gold dust would have to be constructed to protect their atmosphere and prevent a cooling off of the planet, which would have been disastrous for a reptilian-based existence constantly dependent upon external heat sources for bodily warmth. They began an immediate exploration of their new solar system. A fleet of spacecraft was dispatched to Tiamat and other planets in an attempt to search for gold.
Commander and Crown-Prince Anu, along with his two sons Enki and Enlil and daughter Ninkhursag, landed in what is now the Persian Gulf and first went ashore in what is now modern-day Kuwait. Eventually they established a spaceport in Mesopotamia at a place known in our mythology and religion as E-din or Eden. Guided by those early Tiamatians, they found abundant sources of gold on Tiamat and successfully created their Nibiruan heat-shield. They soon found out, however, that this heat-shield required periodic maintenance, in turn forcing them to always have a contingent of Anunnaki gold-miners stationed here on Tiamat, constantly shipping more gold home to Planet Nibiru.
After the passage of several Nibiruan orbits around their new Sun, perhaps 26,000 Tiamatian Years, as their planet’s orbit stabilized, Nibiru passed perilously close to Tiamat. One of its host of moons and moonlets crashed into Tiamat in what is now the Pacific Ocean, blowing the lost continent of Lemuria to smithereens, leaving its remnants floating about in Space in what we now so casually refer to as the “Asteroid Belt,” and knocking Tiamat out of it old orbit and into a newer one closer to the Sun. In the process, Tiamat captured one of Nibiru’s moons named “Kingu,” which is the correct term for our present Moon. Tiamat and Kingu finally settled down into their current positions, and in all of this chaos the Planet Mars got displaced to a new orbit farther out from the Sun, eventually causing that planet’s surface to freeze and die. On the up-side of all of this was the fact that this cosmic catastrophe made Nibiru’s orbit finally stabilize, so that no events of this awful magnitude have subsequently occurred.
In connection with this upheaval, Crown-Prince Anu and his consort Crown-Princess Antu became distressed with Emperor Alalu’s actions and decisions, so the next time Emperor Alalu and Empress Lilitu visited Tiamat to check for gold-mine damage that might have resulted from the collision, they staged a coup d’etat and proclaimed themselves Emperor Anu and Empress Antu. They banished deposed Emperor Alalu and Empress Lilitu to Tiamat and forbade them from ever returning to Nibiru. Alalu and Lilitu, believe it or not, are still alive and living in a palatial underground spaceport in the Grand Teton Mountains near Yellowstone National Park. They are surrounded by massive hordes of gold and precious gems, and they operate an enormous transmitter device (the source of the Taos Hum?) that allows them to stay in communication with Nibiru and its space patrols. It should be emphasized now that the Nibiruans are not the typical big-eyed “Greys” that we hear about so often in the media.
To digress for a moment at this point, the reptilian inhabitants of the Planet Nibiru are anywhere from 10-20 feet (3-6 meters) tall. They have elaborate cranial hair, often multi-colored, but very little, if any, body hair, although some of the males do have mustaches and beards. Many of the males have goat-like horns on their heads, and many of the females are winged. They do not sweat and have no body odors, which was one reason they didn’t let the Tiamatians do their gold-mining for them. They thought the hairy Tiamatian mammals stank, and they didn’t like to be around them. They stayed to themselves. They can have up to seven pupils in each eye, seven fingers on each hand, and seven toes on each foot. They regularly wear clothing made of pure goldleaf, and their diet mainly consists of liquids which provide them with their necessary nutrients without their having to consume many solid foods. They seeded Tiamat with many of our current fruits and vegetables, however, and they deserve credit for that.
Time passed. The Anunnaki gold-miners got restless, despising their drudgery in the mines, which were primarily located in what is now South Africa, under the command of Nefilim Duke Nergal and Duchess Ereskigal. Eventually they revolted and refused to do anymore mining. Alarmed, Emperor Anu summoned Queen Ninkhursag to the throne room. Queen Ninkhursag is Nibiru’s Chief Medical Officer and Geneticist. Emperor Anu implored Queen Ninkhursag to come up with a cloned hybrid Nibiruan-Tiamatian to act as a slave gold-miner. She accepted the challenge and eventually created a hybrid male from the egg of Tiamatian female and the sperm of her brother Prince Enki. She referred to this hybrid as the “Adamu.” At first, only males were produced, cloned one just like the other. A group of Nibiruan females was ordered to serve as incubation vessels for the Adamu clones. These female Anunnaki were known as the “Birth Goddesses.”
Life just went merrily along. Nibiru had its gold, the Anunnaki males were freed from the mines, the Adamu clones went into fullscale production and became excellent gold-mining slaves. But as always happens both here and on Nibiru, a hitch developed. The Birth Goddesses got fed up with sitting around pregnant all the time with little Adamu clones, so they revolted and refused to continue with the incubation process. Emperor Anu immediately had Queen Ninkhursag brought back to the throne room and this time ordered her to create a “female Adamu” that could mate with the males and produce their own clones. By now, for Queen Ninkhursag, this was a piece of cake. She created the female “Eva” prototype along with an absolutely perfect Adamu clone. These two were allowed to romp around naked on the palace grounds at Eden, in hopes that they would produce a child, a little-bitty-baby-slave gold-miner…, the PERFECT ROBOT!
But alas – they were infertile. Just as two mules cannot produce another mule, requiring instead that a horse mate with a donkey, these two hybrid clones could not reproduce. At this point Emperor Anu was beside himself. He had to have gold! So he ordered Queen Ninkhursag to undo the hybridization! She had her brother Prince Enki travel to the Eden Palace to do the job for her. Prince Enki had the hybrid Adamu and Eva ingest a chemical substance of some sort that caused them to revert back to more of a Tiamatian than a Nibiruan creation. They immediately shed their reptilian outer-skins and began to mate. Realizing that he’d been dealt a major blow, in that as a result of the dehybridization he’d lost power over these new creatures, Emperor Anu banished the Adamu and Eva, forbidding them from ever re-entering the Eden Palace grounds.
Cro-Magnon Man was born. It was 20,000 years ago. Neanderthal Man was slowly but inexorably going extinct as a result of Tiamat’s closer proximity to the Sun and its warmer climate. By 10,000BCE, they really no longer existed, leaving Cro-Magnon Man to dominate Tiamat. Certain pockets of these early Tiamatians are still around in parts of Melanesia and Amazonia, but they are few and far between.
(August 1996 Update Insert: In his book The Rainbow Conspiracy, Brad Steiger writes in connection with the Philadelphia Experiment that U.S. President Franklin Delano Roosevelt met with some "nearly human aliens" in the 1930s. These aliens had a greenish tint to their skin. In order to mingle unnoticed here on Tiamat, they use a bleaching solution to lighten their skin-tone. And then there are all those old drawings of "gods" from India, nearly human-looking "gods" who have a bluish skin-tone. Did you ever look closely at the skin color and quality of anole lizards (faux chameleons)? Their skin is so, so smooth and silky! And green! According to a report on TV in August, medical researchers, looking into better ways to develop medicine-delivery skin-patches for sick people, have discovered that snakeskin is extremely similar to human skin. They are developing new medicine-delivery skin-patches from plain old snakeskin! Why is this important? Well, R. A. Boulay in his book Flying Serpents and Dragons writes in connection with our Nibiruan ancestors that the snake was a correlative creation that occurred during the dehybridization process of the Adamu and Eva! Whatever unnecessary or useless reptilian characteristics had to be "shed" during dehybridization rematerialized as a "serpent." Thus, one might contemplate the following: if an anole or a chameleon can change colors, can a Nibiruan do the same? Isn't it amazing how modern scientific discoveries such as this verify our "mysticism" and mythology? . . . )
What happened next is our recorded history. Nibiru once again came too close to Tiamat, unleashing a worldwide catastrophe of floods and earthquakes. This time the Eden Palace and Spaceport themselves were inundated and destroyed. Nefilim Prince Utu was ordered to rebuild the Spaceport in what is now the Sinai Peninsula. Life on Tiamat eventually got back to normal, but then the “Pyramid Wars” broke out.
One of Emperor Anu’s favorites was Princess-Royal Inanna, whom he appointed as ruler of what is now known as India and Nepal. Her Hindi/Sanskrit title was Lakshmi. She is still worshipped there today. Her lover and consort was the most handsome man on Nibiru, the Duke Dumuzi. Duke Dumuzi became involved in a quarrel with Baron Marduk, resulting in the outbreak of the “Pyramid Wars.” Princess-Royal Inanna and Duke Dumuzi began a protracted power struggle with Baron Marduk and Baroness Sarpanit. In the process Duke Dumuzi was murdered by Baron Marduk; and Prince Utu, in cahoots with Princess-Royal Inanna, blew up the Sinai Spaceport along with the satellite R&R cities of Sodom and Gomorrah. The South African goldmines fell into disarray when Duke Nergal and Duchess Ereskigal allied themselves with Baron Marduk and Baroness Sarpanit. Once again there was chaos among the Nefilim Ruling Council.
Emperor Anu was forced to rebuild the Nibiruan Spaceport, this time putting it under the command of his son Prince Enki. Prince Enki and his consort Princess Ninki were sent to what is now the area around Lake Titicaca, Peru, where they rebuilt the complex on the Nazca Plain. Massive new sources of gold were found in the surrounding Andes Mountains, so the South African mining operation was moved to Lake Titicaca.
And that brings us up to about 3,000 years ago. At that point it seems that the Nibiruans began to abandon Tiamat. Perhaps their atmospheric heat-shield had "crystallized" by then, and they no longer had need of so much gold from Tiamat. This is unclear. The last time their planet passed by Tiamat was in 687 BCE when they once again headed out for their long winter hibernation at the Oort Cloud. But they have always continued to maintain some sort of presence here. In addition to the underground installation in the Grand Teton Mountains, they have other underground facilities in South America, in the Saudi Arabian Empty Quarter and in the Himalaya Mountains, to name a few. There's another underground chamber just southwest of the Great Pyramid of Egypt, which the Nefilim built in connection with the Sinai Spaceport; but there is a debate of sorts about which extraterrestrial race controls access to that most sacred, ancient chamber. The answer to that question, as well as many others, will undoubtedly be forthcoming in the very near future. One has only to read the works of Drunvalo Melchizedek, Valdamar Valerian, Michael Topper and John Baines to realize that point. Galaxy37 will continue to keep you updated.
Where is Planet Nibiru today? Well, it's still out there. Occasionally some astronomer will spot it and name it some enigmatic cosmic object, like an "Object Kowal" or a "mini-galaxy." Our governments know it's there but are keeping this knowledge secret from the general population. Ancient skywatchers in the Middle East are said to have looked for Nibiru's grand arrival in the Constellation of Cancer. When it periodically returns to this part of the solar system, it apparently sort of appears out of nowhere. Suddenly it's just up there, like a golden miniature sun with a long and fearsome cometary tail trailing along behind it and its host of satellites, hovering, dangling like a jewel over the Tiamatian North Pole for a "millennium of the gods." The World Mountain. Mount Olympus. Mount Meru. Hyperborea, the North Country, the nation beyond the land where the North Wind rises, the land of eternal springtime, the nation beyond the mountains of the God of the North Wind! Hyperborea!
The material presented here is only the tip of the iceberg of our available knowledge of the Planet Nibiru and her stormy interaction with Planet Tiamat. If you wish to obtain additional information, you are referred to the following books, including all of the Sitchin material in seven volumes consisting of more than 2,500 pages total.
Genesis Revisited by Zecharia Sitchin
Divine Encounters by Zecharia Sitchin
The Earth Chronicles by Zecharia Sitchin
1 - The Twelfth Planet
2 - The Stairway to Heaven
3 - The Wars of Gods and Men
4 - The Lost Realms
5 - When Time Began
The Gods of Eden by William Bramley
Flying Serpents and Dragons by R.A. Boulay <very hard to find!>
Matrix IV: The Equivideum by Val Valerian, Drunvalo Melchizedek & Michael Topper
The Secret Science by John Baines
The Stellar Man by John Baines
The Rainbow Conspiracy by Brad Steiger <hard to find - pulled from shelves!>
The Hidden History of the Human Race by Michael A. Cremo & Richard L. Thompson
1967 Picacho Peak, New Mexico, USA: At about 2:00pm on March 12th, 1967, a New Mexico State University student was hiking in a desert area near Picacho Peak, New Mexico, when he spotted a big round silvery object hovering in the air just above a rocky hill about 500 yards away. He prepared his 4″ X 5″ Press Camera, set it at F8 and 1/100 shutter speed, and snapped one good black and white picture of the object. It appeared stationary or was moving very little at the time of the photograph. He looked down to change the plates of his camera, needing only 3 seconds, but when he looked back to take another shot, the object was gone. He recalled smelling an electrical odor in the air too!
1968 Sicuani, Peru: December 6, Sr. Pedro Chavez, a photographer for La Prensa, on assignment in Sicuani, Peru, was in the cathedral square near the big church where he took this picture before the objects disappeared.
1973 Morelos Cocoyoc, Mexico: November 3, at 6:45 PM, with its arms splayed out evenly around it, an object descended and landed in a grass scrub area beside a road. It landed on the arms, which projected downward.
1977 Centeno, Argentina: Shaped like a giant sombrero, this UFO hovered outside of Centeno for an unknown period of time, and was photographed by an anonymous source. A similar object was witnessed in Ontario, Canada two years earlier.
1977 Floradad, Uruguay: A reporter from one of the larger Uruguayan cities, while on assignment, photographed this strange object as it circled numerous times around him.
1978 Rio de Janeiro, Brazil: At 5:10pm on June 20, 1978, Sr. Saul Janusas was able to snap two photographs of a dark metallic looking Saturn-shaped UFO. He could see it clearly in the reddish dusk haze peculiar to the Winter sunset.
1978 Kagawa, Miki-cho, Japan: At 3:00pm on February 4, 1978, Mr. Hirobumi Matsushita was taking snapshots with a friend when they noticed a shiny golden metallic colored object flying high above them in the clear blue sky. There were no clouds and a slight breeze was blowing, but not enough to raise debris that high in the air. The object seemed to be moving purposefully, obviously under intelligent control. Mr. Matsushita adjusted his camera and took several pictures of the strange object before it disappeared.
1980 Charleston, South Carolina, USA: At 5:30pm on April 4, 1980, William J. Herrmann, a local auto mechanic, saw and photographed a silvery disc-shaped object flying erratic maneuvers near Charleston Air Force Base.
1963 Peralta, New Mexico, USA: Paul Villa was in an area south of Albuquerque near the town of Peralta when he found a landed silver circular craft about 70ft in diameter. Nine human-like beings came out of the ship through a sealed door. They spoke to Villa in English and Spanish and could communicate telepathically. The visitors told Villa they had several discs that could pick up pictures and sounds from any place and relay their data back to the ship instantly. After a long conversation the ship took off, taking care not to harm the small creatures on the ground below, per Villa’s request.
1965 Tulsa, Oklahoma, USA: At 1:45am on August 2, 1965, young Alan Smith, then 14 years old, snapped two color photographs of a blinking, colored, luminous, discoid flying object that passed directly over his house there in Tulsa. It had alternating blue, orange, and white lights shining from its underside.
1965 Bernalillo, New Mexico, USA: On Easter Sunday Apolinar Villa was guided telepathically to this spot near where he spotted this craft hovering silently in the air and photographed it.
1965 Santa Ana, California, USA: While racing home to Santa Ana, Mr. Rex Heflin took several photographs of this hat-shaped saucer. Because of the presence of the telephone poles in the background, it has been estimated that this craft is more than 30ft in diameter. Mr. Heflin has donated these photographs to independent UFO researchers for no monetary compensation. Seven years and 7000 miles away, in Cluj, Romania, Mr. Amiel Barnea photographed a similar (or possibly the same) hat-shaped UFO. Six years after that occurrence, Belotie, Yugoslavia was the site of another similar sighting.
1966 Lake Tiorati, New York: On the west side of the Hudson River, three fishermen noticed an unusual circular metallic flying object. One of the men grabbed a box camera and managed to snap four pictures before it flew away over Stockbridge Mountain.
1967 Baton Rouge, Louisiana, USA: January 12, a fisherman sitting in his boat on the west side of Old River saw this object and only had time to take one picture before it shot off at a 45 degree angle at a great rate of speed. No sound was heard during the experience.
1967 Cumberland, Rhode Island, USA: At 7:15pm on July 3, Joseph L. Ferriere went to east Woonsocket to investigate reports of a strange object flying around in the area, where he spotted a large cigar-shaped object hovering in the sky. After taking four pictures of the object he noticed a smaller disc-shaped object coming out from the larger object and took a picture of the disc-shaped object also.
1910 France: A rare early UFO photograph, showing entry number 5 in the Catalan Cup Race at a track in France. This picture also included a strange unidentified flying object seen over the tree tops beyond the line of spectators watching this race. There is no indication whether the photographer saw this object and intended to get it in the picture.
1932 St. Paris, Ohio, USA: The unidentified flying object in this picture could not have been a street lamp, simply because there were no street lamps there at that time. There are no power poles or power lines visible anywhere in this picture. Summer 1932, Mid-day, St. Paris, Ohio. This picture of George Sutton of St. Paris, Ohio, taken near mid-day (as may be seen from the shadows on the ground) shows a vintage automobile with a 1932 license plate on the front bumper. Nobody has been able to account for the dark object seen over Sutton’s left shoulder in this photograph.
1942 Sea of Japan: An Imperial Japanese Sally bomber aircraft, on a mission over the Sea of Japan, was approached by a small dark spherical object which flew around and between the aircraft in the formation. An alert gun-cameraman snapped one photograph.
1950 Red Bud, Illinois, USA: 4:00pm on April 23, Mr. Dean Morgan, a part-time photographer, was coming down the south side of a wooded hill from Red Bud, when he saw this object and got one photograph of it.
1952 Puerto Maldonado, Peru: 4:30pm on July 19, the attention of Customs Inspector Sr. Domingo Troncoso, then with the Peruvian Customs Office at Puerto Maldonado on the jungle frontier with Bolivia, was called to a very strange cigar-shaped flying object over the river area. The big dirigible-shaped craft was flying horizontally and fairly low in the sky, passing from right to left from the observer's position. It was leaving a dense trail of thick smoke, vapor, or substance in its wake. This object was a real, structured, physical machine and may be seen from its reflection in the waters of the Madre de Dios river underneath it. The object was estimated to be over a hundred feet long.
1952 Passaic, New Jersey, USA: Mr. George J. Stock, in the yard working on his lawn-mower around 4:30pm, took this picture with his box reflex camera. The object was coming directly over his house from the IT&T tower, and was estimated to be 20 to 25ft above the ground. This sighting lasted only a couple of minutes.
1958 Trindade, Brazil: Off the Brazilian coast near Trindade, many of the ship's crew as well as a Mr. Barania, photographer for the Brazilian Navy, witnessed this saucer-shaped object that shadowed the movements of the ship. No warfare exercises were practiced. Instead, the Brazilian vessel was participating in the International Geophysical Year of 1958. Computer enhancement reveals that the craft is composed of a globe circled by a large, metallic ring much like the planet Saturn.
A Corona, New Mexico-area rancher named W.W. “Mac” Brazel heard a mighty explosion during a thunderstorm sometime between July 2nd and 4th, 1947. Thinking it was different from a normal clap of thunder, he ventured out the next day to check on the ranch and his sheep. While moving the sheep to a different location on the ranch, he came upon the now-famous Roswell crash debris: metallic wreckage scattered within a 200 to 300 yard area across a pasture about 75 miles northwest of Roswell. The sheep refused to walk through the debris and he was forced to lead them around it.
After consulting his neighbors the Proctors about the material that was found on the ranch, Mac Brazel went into town and met with Sheriff George A. Wilcox, who told him to contact the Roswell Army Air Field (RAAF). Major Jesse Marcel and a member of the Counter Intelligence Corps (CIC) from the base responded to the call and accompanied Brazel back to his ranch. After spending a somewhat uncomfortable night in a small ranch shack with no facilities, Marcel and the CIC agent accompanied Mac to the crash site and collected as much of the material as their two vehicles could carry. They tested the material they found back at the ranch house and noted that although it was very thin and light, they were unable to bend, tear or burn it.
On the way back to the RAAF, Marcel stopped at his home late at night, woke his wife and son, and showed them the material he had gathered. His son Jesse Jr. kept some of the debris, which was later collected by military officials. The base commander, Col. William Blanchard of the 509th, took the story public on the morning of July 8th, 1947, after his experts had examined the debris. 1st Lt. Walter Haut was the RAAF's public information officer, and he wrote the press release announcing the discovery even though he never saw the debris. That afternoon, the Roswell Daily Record carried a banner headline announcing: “RAAF Captures Flying Saucer On Ranch in Roswell Region.”
Also during all this excitement, a crashed UFO along with a number of alien bodies, four dead and one alive, had been discovered some distance away from the Brazel ranch. Come upon by rock hunters, a heel-shaped craft was embedded in a dried stream bank, with rips in the side and bodies lying around it, exposed to the hot dry temperatures. Before the onlookers could assess the crash site and damage, a military convoy approached the site and forced silence among the civilians using death threats against them and their families. It seemed an explosion had occurred aboard the craft, or it and another spacecraft had collided in midair, causing the debris to pour down on the Brazel pasture before the damaged craft crashed a few miles farther on. Air Corps personnel quickly sealed this second site, having tracked it on radar the night before from the White Sands missile base.
After the newspaper release of the Brazel discovery was made public, Marcel was ordered to load the material he'd collected aboard one of the 509th's B-29 Superfortress bombers and take it to Wright Field (now Wright-Patterson Air Force Base in Ohio), with a stop first at 8th Air Force headquarters in Fort Worth, Texas. It was at the Fort Worth Army Air Field that General Ramey constructed the weather balloon story, which the Air Force in 1997 admitted was actually Project Mogul, a top-secret balloon program for monitoring nuclear explosions in other countries, primarily Russia.
At Ramey's office, the general, a couple of his aides, Marcel and about three local newspaper reporters started taking pictures of this alleged crash debris and creating the new story about the weather balloon. San Antonian Irving “Newt” Newton was the only meteorologist working at the base's busy flight operations office the afternoon of July 8, and he was suddenly ordered, through the warrant officer, by an assistant to Gen. Roger Ramey to come to his office and confirm the weather balloon story. What Newton saw, he confirmed as a weather balloon with a Rawin target attached, which was what Ramey needed him to tell the press. Was the debris switched?
In later years, Marcel still claimed the material displayed in Ramey's office wasn't what he'd actually brought from Roswell but had been switched at some point. One of Marcel's duties in the military was to identify top secret crashed aircraft, and he was quite aware of what a weather balloon looked like; he retrieved many in his career after that July. What he found was obviously amazing enough that he had to wake his wife and son in the middle of the night and show them.
As for the bodies at the second crash site, they were air-lifted along with the damaged craft to Wright Air Base and stored in the famed Hangar 18 there. At that time Wright base was where the Foreign Technology Department was located, and their job was to reverse engineer new technology from other countries and see how it functioned. All the labs were already set up there and it was a perfect place to analyze the new material. Later, researchers have speculated, the main body of the craft and the surviving being, nicknamed “EBE” for “Extraterrestrial Biological Entity,” were transferred to Groom Lake, Nevada (Area 51) for further analysis.
And so the saga continues. New evidence about the famed pictures from Fort Worth is starting to shed light on whether or not the debris was switched. Digital photo analysis of the original pictures taken at Fort Worth is focusing on hieroglyphics on part of the debris and a memo that was in the General's hand. More and more witnesses are coming forward about the 1947 incident, which is making this granddaddy of all UFO incidents harder and harder to debunk. I personally met a gentleman in Sedona, Arizona, a couple of years ago, who told me his story from years before, in 1947, when he and his new wife were being relocated on board a military aircraft. A soldier confided in him about a crash site he had just left in New Mexico, where he had to pick up debris from a crashed craft. He said it was a craft like no type he had ever seen, which doesn't sound like a weather balloon story to me. Do your own research; there are plenty of books on the subject, starting with the ones referenced at the end of this site. This particular event is too important to get shoved under the carpet, and frankly UFO researchers are having good laughs listening to the Air Force's lame excuses. Remember the parachuting dummies the Air Force claims eyewitnesses mistook for little 4-foot-tall aliens? The dummies are on display in a museum near White Sands; they stand 6 feet tall and weren't in use until the mid 50s. The Air Force's explanation for that was, "Well, some of the witnesses must have their dates wrong." Ask your grandparents when they were born, ask them about important dates in their lives, then tell them to their face that they're just too old to remember the right dates. Well, that's what the Air Force said.
Last weekend, on Sunday 31 March 2019 around 12:00 AM, I had the craziest experience at Karekare beach. I am here looking for answers, but also to see if anyone else has seen these strange lights in the sky.
We were on the beach, my boyfriend and I, and saw two bright orange/red lights far on the other end of the beach. I pointed them out and said that they might be helicopters looking for someone. In a matter of seconds, one of the lights started moving at full speed, in a straight line, no more than 200m above the water, until it was directly in front of us. The UFO had changed from having a red light to a bright white light, somewhat like a torch, shining directly at us.
We presumed that it was a helicopter with a spotlight, but there was no sound at all coming from it, and it was so steady and barely moving. It stayed like that and followed us ever so slightly as we paced a few meters forward and back to see if it would follow. After about 2 minutes (or so I think, as we could not tell time at this point) it moved away very quickly, flying incredibly fast and with perfect precision, until disappearing behind a mountain.
The same thing happened with the next red light, which was on the other side of the beach. It moved as the other UFO had, until it was directly in front of us again with this bright light, and soon followed in the same direction as the first.
In total we experienced this happen with three others before they all disappeared for a while. We were lying down looking at the stars for a while and then stood up to leave, before realizing there were 5 surrounding us in different areas of the beach. One of them was flying up and down and another from side to side. The others were just still switching between red and white spotlights. No sound.
This was when we decided to leave, scared shitless. We drove back to Titirangi, about 30 mins from the beach, and up to a lookout spot with a great view of the city. At the top I noticed a red light, completely still in the sky. I remember saying to my boyfriend that it couldn't be one of those things. We got out of the car and the light again shifted from being red to a white spotlight sort of light, this time so bright it was like looking directly at the lights from a car in front of you, but much brighter; we could barely look at it. No sound coming from it. It then flew off, this time like a plane, very fast in the sky and shaped like a triangle with three lights. One white, one red and the other white or maybe blue.
The Boeing 777 is a long-range, wide-body twin-engine airliner built by Boeing Commercial Airplanes. The world's largest twinjet and commonly referred to as the "Triple Seven", it can carry between 283 and 368
passengers in a three-class configuration and has a range from 5,235 to 9,450 nautical miles (9,695 to 17,500 km). Distinguishing features of the 777 include the six wheels on each main landing gear, its circular fuselage cross section, the largest diameter turbofan engines of any aircraft, the pronounced "neck" aft of the flight deck, and the blade-like end to the tail cone.
As of August 2008, 56 customers had placed orders for 1,092 777s. Direct market competitors to the 777 are the Airbus A330-300, A340, and A350 XWB. The classic 777 family is to be replaced by a new set of aircraft, tentatively known as the 777-8X and 777-9X. The -8X will be approximately the same size as the -300, while the -9X will have similar range to the -300ER but with a longer fuselage, increasing passenger capacity. It will be the largest twinjet in the world in terms of length, maximum takeoff weight, and passenger capacity. Both variants will incorporate new engines and other advanced technologies from the 787. Boeing plans to replace the classic 777 series (the -200 and -300) with this next-generation 777, the 777X, while the -200ER, -200LR, -300ER, and 777F remain in production.
In the 1970s, Boeing unveiled new models: the twin-engine 757 to replace the venerable 727, the twin-engine 767 to challenge the Airbus A300, and a trijet 777 concept to compete with the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. Based on a re-winged 767 design, the proposed 275-seat 777 was to be offered in two variants: a 2,700 nautical mile (5,000 km) transcontinental and a 4,320 nmi (8,000 km) intercontinental.
The twinjets were a big success, due in part to the 1980s ETOPS regulations. However, the trijet 777 was cancelled (much like the trijet concept of the Boeing 757), in part because of the complexities of a trijet design and the absence of a 40,000 lbf (178 kN) engine. The cancellation left Boeing with a huge size and range gap in its product line between the 767-300ER and the 747-400. The DC-10 and L-1011, which entered service in the early 1970s, were also due for replacement. In the meantime, Airbus developed the A340 to fulfill that requirement and to compete with Boeing.
In the mid-1980s Boeing produced proposals for an enlarged 767, dubbed 767X. There were also a number of in-house designations for proposals, of which the 763-246 was one internal designation that was mentioned in public. The 767X had a longer fuselage and larger wings than the existing 767, and seated about 340 passengers with a maximum range of 7,300 nautical miles (13,500 kilometers). The airlines were unimpressed with the 767X: they wanted short to intercontinental range capability, a bigger cabin cross section, a fully flexible cabin configuration and an operating cost lower than any 767 stretch. By 1988 Boeing realized that the only answer was a new design, the 777 twinjet.
The design process for the 777 was different from that of previous Boeing jetliners. For the first time, eight major airlines (Cathay Pacific, American, Delta, ANA, BA, JAL, Qantas, and United) had a role in the development of the plane as part of a "Working Together" collaborative model employed for the 777 project.
At the first "Working Together" meeting in January 1990, a 23-page questionnaire was distributed to the airlines, asking each what it wanted in the new design. By March 1990 a basic design for the 767X had been decided upon; a cabin cross-section close to the 747's, 325 passengers, fly-by-wire controls, glass cockpit, flexible interior, and 10% better seat-mile costs than the A330 and MD-11. ETOPS was also a priority for United Airlines.
All software, whether produced internally to Boeing or externally, was to be written in Ada. The bulk of the work was undertaken by Honeywell who developed an Airplane Information Management System (AIMS). This handles the flight and navigation displays, systems monitoring and data acquisition (e.g. flight data acquisition).
United's replacement program for its aging DC-10s became a focus for Boeing's designs. The new aircraft needed to be capable of flying three different routes: Chicago to Hawaii, Chicago to Europe, and non-stop from the hot and high Denver to Hawaii.
In October 1990, United Airlines became the launch customer when it placed an order for 34 Pratt & Whitney-powered 777s with options on a further 34. Production of the first aircraft began in January 1993 at Boeing's Everett plant near Seattle. In the same month, the 767X was officially renamed the 777, and a team of United 777 developers joined other airline teams and the Boeing team at the Boeing Everett Factory. Divided into 240 design teams of up to 40 members, working on individual components of the aircraft, almost 1,500 design issues were addressed.
The 777 was the first commercial aircraft to be designed entirely on computer. Everything was created on a 3D CAD software system known as CATIA, sourced from Dassault Systemes. This allowed a virtual 777 to be assembled, in simulation, to check for interferences and to verify proper fit of the many thousands of parts before costly physical prototypes were manufactured. Boeing was initially not convinced of the abilities of the program, and built a mock-up of the nose section to test the results. It was so successful that all further mock-ups were cancelled.
Into production
The 777 included substantial international content, to be exceeded only by the 787. International contributors included Mitsubishi Heavy Industries and Kawasaki Heavy Industries (fuselage panels), Fuji Heavy Industries, Ltd. (center wing section), Hawker De Havilland (elevators), ASTA (rudder) and Ilyushin (jointly designed overhead baggage compartment).
On April 9 1994 the first 777, WA001, was rolled out in a series of fifteen ceremonies held during the day to accommodate the 100,000 invited guests. First flight took place on June 14 1994, piloted by 777 Chief Test Pilot John E. Cashman, marking the start of an eleven month flight test program more extensive than that seen on any previous Boeing model.
On May 15 1995 Boeing delivered the first 777, registered N777UA, to United Airlines. The FAA awarded 180-minute ETOPS clearance ("ETOPS-180") for PW4074-engined 777-200s on May 30 1995, making the 777 the first aircraft to carry an ETOPS-180 rating at its entry into service. The 777's first commercial flight took place on June 7 1995 from London's Heathrow Airport to Washington Dulles International Airport. The development, testing, and delivery of the 777 was the subject of the documentary series "21st Century Jet: The Building of the 777".
Due to rising fuel costs, airlines began looking at the Boeing 777 as a fuel-efficient alternative to other widebody jets. With modern engines having extremely low failure rates (as seen in the ETOPS certification of most twinjets) and increased power output, four engines are no longer necessary except for very large aircraft such as the Airbus A380 or Boeing 747.
Boeing employed advanced technologies in the 777. These features included:
- The largest and most powerful turbofan engines on a commercial airliner with a 128 inch (3.25 m) fan diameter on the GE90-115B1.
- Honeywell LCD glass cockpit flight displays
- Fully digital fly-by-wire flight controls with emergency manual reversion
- Fully software-configurable avionics
- Electronic flight bag
- Lighter design including use of composites (12% by weight)
- Raked wingtips
- Fiber optic avionics network
- The largest landing gear and the largest tires ever used in a commercial jetliner. Each main gear tire of a 777-300ER carries a maximum rated load of 64,583 lb (29,294 kg) when the aircraft is fully loaded, the heaviest load per tire of any production aircraft ever built (a quick arithmetic check of this figure follows this list).
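That per-tire rating falls directly out of the airplane's maximum take-off weight. Below is a minimal arithmetic check, assuming the full 775,000 lb MTOW is rated across the twelve main-gear tires (two six-wheel bogies) and ignoring any share carried by the nose gear:

```python
LB_TO_KG = 0.45359237            # exact pound-to-kilogram factor

mtow_300er_lb = 775_000          # 777-300ER maximum take-off weight
main_gear_tires = 12             # 2 main bogies x 6 wheels each

per_tire_lb = mtow_300er_lb / main_gear_tires
print(f"{per_tire_lb:,.0f} lb  (~{per_tire_lb * LB_TO_KG:,.0f} kg) per tire")
# -> 64,583 lb per tire, about 29,300 kg, matching the rated-load figure quoted above
```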
The 777 has the same Section 41 as the 767. This refers to the part of the aircraft from the tip of the nose, going to just behind the cockpit windows. From a head-on view, the end of the section is very evident. This is where the bulk of the aircraft's avionics are stored.
Boeing made use of work done on the cancelled Boeing 7J7, which had validated many of the chosen technologies. A notable design feature is Boeing's decision to retain conventional control yokes rather than fit sidestick controllers as used in many fly-by-wire fighter aircraft and in some Airbus transports. Boeing viewed the traditional yoke and rudder controls as being more intuitive for pilots.
Folding wingtips were offered when the 777 was launched; this feature was meant to appeal to airlines that might use the aircraft at gates made to accommodate smaller aircraft, but no airline has purchased this option.
The interior of the Boeing 777, also known as the Boeing Signature Interior, has since been used on other aircraft, including the 767-400ER, 747-400ER, and newer 767-200s and 767-300s. The interior on the Next Generation 737 and the Boeing 757-300 also borrows elements from the 777 interior, introducing larger, more rounded overhead bins than the 737 Classics and 757-200, and curved ceiling panels. The 777 also features larger, more rounded, windows than most other aircraft. The 777-style windows were later adopted on the 767-400ER and Boeing 747-8. The Boeing 787 and Boeing 747-8 will feature a new interior evolved from the 777-style interior and, in the case of the 787, will have even larger windows.
Some 777s also have crew rest areas in the crown area above the cabin. Separate crew rests can be included for the flight and cabin crew, with a two-person crew rest above the forward cabin between the first and second doors, and a larger overhead crew rest further aft with multiple bunks.
Boeing uses two characteristics to define its 777 models. The first is the fuselage size, which affects the number of passengers and amount of cargo that can be carried. The 777-200 and derivatives are the base size. A few years later, the aircraft was stretched into the 777-300.
The second characteristic is range. Boeing defined these three segments:
- A market: 3,900 to 5,200 nautical miles (7,223 to 9,630 km)
- B market: 5,800 to 7,700 nautical miles (10,742 to 14,260 km)
- C market: 8,000 nautical miles (14,816 km) and greater
When referring to variants of the 777, Boeing and the airlines often collapse the model (777) and the capacity designator (200 or 300) into a smaller form, either 772 or 773. Subsequent to that they may or may not append the range identifier. So the base 777-200 may be referred to as a "772" or "772A", while a 777-300ER would be referred to as a "773ER", "773B" or "77W". Any of these notations may be found in aircraft manuals or airline timetables.
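For illustration only, a tiny hypothetical helper (nothing Boeing or the airlines publish) can collapse the model names mechanically; note that the IATA code "77W" for the 777-300ER follows a separate convention and cannot be derived from this rule:

```python
def short_code(model: str) -> str:
    """Collapse a 777 model name such as '777-300ER' into the informal
    shorthand seen in manuals and timetables, e.g. '773ER'."""
    base, _, rest = model.partition("-")    # '777' and '300ER'
    capacity, suffix = rest[:3], rest[3:]   # '300' and 'ER'
    return base[:2] + capacity[0] + suffix  # '77' + '3' + 'ER'

assert short_code("777-200") == "772"
assert short_code("777-300ER") == "773ER"
```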
The 777-200 (772A) was the initial A-market model. The first customer delivery was to United Airlines in May 1995. It is available with a maximum take-off weight (MTOW) from 505,000 to 545,000 pounds (229 to 247 tonnes) and range capability between 3,780 and 5,235 nautical miles (7,000 to 9,695 km).
The -200 is currently powered by two 77,000 lbf (343 kN) Pratt & Whitney PW4077 turbofans, 77,000 lbf (343 kN) General Electric GE90-77Bs, or 76,000 lbf (338 kN) Rolls Royce Trent 877s.
The first 777-200 built was used by Boeing's non-destructive testing (NDT) campaign in 1994–1995, and provided valuable data for the -200ER and -300 programs (see below). This A market aircraft was sold to Cathay Pacific Airways and delivered in December 2000.
Originally known as the 777-200IGW (for "increased gross weight"), the longer-range B market 777-200ER (772B) features additional fuel capacity, with increased MTOW range from 580,000 to 631,000 pounds (263 to 286 tonnes) and range capability between 6,000 and 7,700 nautical miles (11,000 to 14,260 km). ER stands for Extended Range. The first 777-200ER was delivered to British Airways in February 1997, who also were the first carrier to launch, in 2001, a 10 abreast economy configuration in this airframe, which had originally been designed for a maximum 9 abreast configuration.
The 777-200ER can be powered by any two of a number of engines: the 84,000 lbf (374 kN) Pratt & Whitney PW4084 or Rolls-Royce Trent 884, the 85,000 lbf (378 kN) GE90-85B, the 90,000 lbf (400 kN) PW4090, GE90-90B1, or Trent 890, or the 92,000 lbf (409 kN) GE90-92B or Trent 892. In 1998 Air France took delivery of a 777-200ER powered by GE90-94B engines capable of 94,000 lbf (418 kN) thrust. The Rolls Royce Trent 800 is the leading engine for the 777 with a market share of 43%. The engine is used on the majority of 777-200s, ERs and 300s but is not offered for the 200LR and 300ER.
In March 1997, China Southern Airlines made history by flying the first scheduled transpacific Boeing 777 route, the flagship Guangzhou-Los Angeles service. On April 2 1997, a Boeing 777-200ER of Malaysia Airlines (dubbed the "Super Ranger") broke the great circle distance without landing record for an airliner by flying east (the long way) from Boeing Field, Seattle, to Kuala Lumpur, Malaysia, a distance of 20,044 km (10,823 nmi), in 21 hours, 23 minutes, farther than the listed range of the 777-200LR. The flight was non-revenue with no passengers on board. The -200ER is also recognized for another feat: the longest ETOPS-related emergency flight diversion (177 minutes under single-engine power) was conducted on a United Airlines Boeing 777-200ER carrying 255 passengers on March 17 2003 over the southern Pacific Ocean.
The direct equivalents to the 777-200ER from Airbus are the Airbus A340-300 and the proposed A350-900. As of August 2008, 407 777-200ERs had been delivered with 32 unfilled orders. As of August 2008, 397 Boeing 777-200ER aircraft were in airline service.
The stretched A market 777-300 (773A) is designed as a replacement for 747-100s and -200s. Compared to the older 747s, the stretched 777 has comparable passenger capacity and range, and also burns one third less fuel and has 40% lower maintenance costs.
It features a 33 ft 3 in (10.1 m) fuselage stretch over the baseline 777-200, allowing seating for up to 550 passengers in a single class high density configuration and is also 29,000 pounds (13 tonnes) heavier. The 777-300 has tailskid and ground maneuvering cameras mounted on the horizontal tail and underneath the forward fuselage to aid pilots during taxi due to the aircraft's length.
It was awarded type certification simultaneously from the U.S. FAA and European JAA and was granted 180 min ETOPS approval on May 4 1998 and entered service with Cathay Pacific later in that month.
The typical operating range with 368 three-class passengers is 6,015 nautical miles (11,135 km). It is typically powered by two of the following engines: 90,000 lbf (400 kN) PW4090 turbofans, 92,000 lbf (409 kN) Trent 892 or General Electric GE90-92Bs, or 98,000 lbf (436 kN) PW-4098s.
Since the introduction of the -300ER in 2004, all operators have selected the ER version of the -300 model, in some cases replacing 747-400 aircraft. The 777-300ER, with 365 seats, is capacious enough to displace a 747-400 configured with 416 seats, and burns 20% less fuel per trip than the latter. Operators try to maintain operating margins by retaining first-class and business-class seats and reducing economy seating on flights that previously were served by the 747; Japan Airlines is introducing semi-partitioned "suites" that offer each passenger 20% more space than current first class seating. Air New Zealand will replace all of its 747-400s with the 777-300ER.
This aircraft has no direct Airbus equivalent but the A340-600 is offered in competition. A total of 60 -300s have been delivered to eight different customers, and all were in airline service as of August 2008.
Longer range models
The 777-200LR (772C) ("LR" for "Longer Range") became the world's longest range commercial airliner when it entered service in 2006. Boeing named this plane the Worldliner, highlighting its ability to connect almost any two airports in the world, although it is still subject to ETOPS restrictions. It is capable of flying 9,450 nautical miles (17,501.40 km, equivalent to 7/16 of the earth's circumference) in 18 hours. Developed alongside the 777-300ER, the 777-200LR achieves this with either 110,000 lbf (489 kN) thrust General Electric GE90-110B1 turbofans, or as an option, GE90-115B turbofans used on the -300ER.
Rolls Royce originally offered the Trent 8104 engine with a thrust of 104,000 to 114,000 lbf (463 to 507 kN) that has been tested up to 117,000 lbf (520 kN). However, Boeing and Rolls Royce could not agree on risk sharing on the project so the engine was eventually not offered to customers. Instead GE agreed on risk-sharing for the development of long range derivatives of the Boeing 777. The agreement stipulated that only GE engines would be offered on the 777-200LR and 777-300ER.
The 777-200LR was initially proposed as a 777-100X. It would have been a shortened version of the 777-200, analogous to the Boeing 747SP. The shorter fuselage would allow more of the take-off weight to be dedicated to fuel tankage, increasing the range. Because the aircraft would have carried fewer passengers than the 777-200 while having similar operating costs, it would have had a higher cost per seat. With the advent of more powerful engines the 777-100X proposal was replaced by the 777X program, which evolved into the Longer Range 777-200LR.
The -200LR features a significantly increased MTOW and three optional auxiliary fuel tanks manufactured by Marshall Aerospace in the rear cargo hold. Other new features include raked wingtips, a new main landing gear and additional structural strengthening. The roll-out was on February 15 2005 and the first flight was on March 8, 2005. The second prototype made its first flight on May 24, 2005. The -200LR's entry into service was in January 2006. The only mass-produced aircraft with greater unrefueled range is the KC-10 Extender military tanker.
On November 10 2005, a 777-200LR set a record for the longest non-stop flight by passenger airliner by flying 11,664 nautical miles (13,422 statute miles, or 21,602 km) eastwards (the westerly great circle route is only 5,209 nautical miles) from Hong Kong, China, to London, UK, taking 22 hours and 42 minutes. This was logged into the Guinness World Records and surpassed the 777-200LR's design range of 9,450 nmi with 301 passengers and baggage.
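For context, the shorter westerly great-circle distance between the two cities can be checked with a standard haversine calculation; the airport coordinates below are approximate values assumed for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_nmi(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Haversine great-circle distance between two points, in nautical miles."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a)) / 1.852   # km -> nmi

# Hong Kong (~22.31 N, 113.92 E) to London Heathrow (~51.47 N, 0.46 W)
print(round(great_circle_nmi(22.31, 113.92, 51.47, -0.46)))
# ~5,200 nmi, consistent with the ~5,209 nmi westerly figure quoted above;
# the record flight deliberately flew the opposite way around, logging 11,664 nmi.
```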
On February 2 2006, Boeing announced that the 777-200LR had been certified by both FAA and EASA to enter into passenger service with airlines. The first Boeing 777-200LR was delivered to Pakistan International Airlines on February 26 2006 and the second on March 23 2006. PIA has at least nine 777s in service and the company plans to replace all of its older jets with the series.
Other customers include Air India and Turkmenistan Airlines. In November 2005, Air Canada confirmed an order for the jets. Also that month Emirates Airline announced they bought ten -200LRs as part of a larger 777 order (42 in all). On September 12 2006, Qatar Airways announced firm orders for the Boeing 777-200LR along with the Boeing 777-300ER. On October 10 2006, Delta Air Lines announced two firm orders of the aircraft to add to its long-haul routes and soon after announced three more orders. Air New Zealand looked at the possibility of using the 777-200LR variant, alongside its -200ERs, to begin an ultra-long-range Auckland to New York route. Later, Air New Zealand elected to focus on the Boeing 787 and 777-300ER for future plans instead.
The closest Airbus equivalent is the A340-500HGW. The proposed future A350-900R model, aims to have a range up to 9,500 nautical miles or 17,600 km. As of August 2008, 20 777-200LR aircraft had been delivered with 25 unfilled orders. A total of 19 -200LRs were in airline service as of August 2008.
The 777-300ER is the Extended Range (ER) version of the 777-300 and contains many modifications, including the GE90-115B engines, the world's most powerful jet engines, each rated at 115,300 lbf (513 kN) thrust. Other features include raked wingtips, a new main landing gear, extra fuel tanks (2,600 gallons), as well as a strengthened fuselage, wings, empennage, nose gear, engine struts and nacelles, and a higher MTOW, 775,000 lb versus 660,000 lb for the 777-300. The maximum range is 7,930 nautical miles (14,685 km). The 777-300ER program was launched by Air France, though for political reasons Japan Airlines was advertised as the launch customer. The first flight of the 777-300ER was February 24 2003. Delivery of the first 777-300ER to Air France occurred on April 29 2004.
The main reason for the 777-300ER's extra 1,935 nmi (3,550 km) range over the 777-300 is not merely the capacity for an extra 2,600 gallons of fuel (45,220 to 47,890 gal), but the increase in the maximum take-off weight (MTOW).
The -300ER is slightly less fuel efficient than the regular -300 because it weighs slightly more and has engines that produce more thrust. Both the -300 and -300ER weigh approximately 360,000 lb empty and have the same passenger and payload capacity, but the ER has a higher MTOW and therefore can carry about 110,000 lb more fuel than the -300. This enables the -300ER to fly roughly 34% farther with the same passengers and cargo. Without the increase in fuel capacity due to larger fuel tanks, the -300ER's range would still be 25% greater at equal payload. In a maximum payload situation, the -300 would only be able to fill its fuel tanks about 60%, while the -300ER could be filled to full capacity.
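Below is a rough back-of-the-envelope sketch of that comparison, using the round figures from the surrounding text and specification table (and assuming, as the text does, essentially identical empty weight and payload for both variants):

```python
# Figures quoted in the text / specification table (lb and nmi)
mtow_300, mtow_300er = 660_000, 775_000     # maximum take-off weights
range_300, range_300er = 6_015, 7_930       # typical 3-class ranges

extra_takeoff_weight = mtow_300er - mtow_300   # 115,000 lb of extra headroom
range_gain = range_300er / range_300 - 1       # ~0.32

print(f"{extra_takeoff_weight:,} lb extra MTOW -> {range_gain:.0%} more range")
# With empty weight and payload essentially unchanged, most of that headroom
# (~110,000 lb after the ER's slightly heavier structure) is available for fuel,
# which lines up with the roughly one-third range increase described above.
```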
Since the introduction of the -300ER, six years after the -300's first delivery, all orders for the -300 series have been the ER variant. The 777-300ER's direct Airbus equivalent is the A340-600HGW; however, as noted above, this model is also displacing the 747-400 as fuel prices rise, airline passenger traffic drops and airlines look for every opportunity to save fuel and fill airplanes with higher-margin customers.
The 777-300ER has been test flown with only one working engine for as long as six hours and 29 minutes (389 minutes) over the Pacific Ocean as part of its Extended-range Twin-engine Operational Performance Standards (ETOPS) trials. 180 minutes of successful and reliable operation on one workable engine are required for the ETOPS 180-minute certification.
As of August 2008, 156 777-300ERs had been delivered with 223 unfilled orders. Additional firm commitments are believed to have been signed, as some airlines intend to use the 777 as a stopgap while they await the arrival of the delayed Boeing 787.
The 777 Freighter (777F) is an all-cargo version of the 777-200LR. The 777F is expected to enter service in late 2008. It amalgamates features from the 777-200LR and the 777-300ER, using the -200LR's structural upgrades and 110,000 lbf (489 kN) GE90-110B1 engines, combined with the fuel tanks and undercarriage of the -300ER.
With a maximum payload of 103 tons, the 777F's capacity will be similar to the 112 tons of the 747-400F, with a nearly identical payload density. As Boeing's forthcoming 747-8 will offer greater payload than the -400F, Boeing is targeting the 777F as a replacement for older 747F and MD-11F freighters. It was launched on May 23 2005.
The 777F promises improved operating economics compared to existing 100+ ton payload freighters. With the same fuel capacity as the 777-300ER, the 777F will have a range of 4,895 nmi (9,065 km) at maximum payload, although greater range will be possible if less weight is carried. For example, parcel and other carriers which are more concerned with volume than weight will be able to make non-stop trans-Pacific flights.
Airbus currently has no comparable aircraft but is developing two models with similar specifications to the 777F. The A330-200F will carry less payload but is a smaller and a cheaper alternative. With a capacity of around 90 tons the proposed A350-900F will be a more capable competitor, although slightly smaller than the 777F. The MD-11F is another comparable aircraft but with less range than the 777F. When the 777F enters service in 2008, it is expected to be the longest-range freighter in the world. The 747-400ERF can carry more cargo and travel farther than the 777F, but the 747-8F replacing it will have less range than the 747-400ERF in the interest of more payload.
On November 7 2006, FedEx Express cancelled its order of ten Airbus A380-800Fs, citing the delays in delivery. FedEx Express said it would buy 15 777Fs instead, with an option to purchase 15 additional 777Fs. FedEx's CEO stated that "[t]he availability and delivery timing of this aircraft, coupled with its attractive payload range and economics, make this choice the best decision for FedEx."
Air France-KLM has signed on as the 777F launch customer. The order is for five aircraft with the first delivery in 2008. In May 2008, there were firm orders for 78 777 freighters from 11 airlines.
On May 19 2008, Boeing released a photo of the first 777 Freighter emerging from Boeing's paint hangar in Everett, Washington. On May 21 2008, the 777F made an official rollout ceremony in Everett, Washington. The first 777F took off on its inaugural flight at 10 AM July 14 2008 from Paine Field. A total of 75 777Fs are on order as of August 2008.
777 Tanker (KC-777)
The KC-777 is a proposed tanker version of the 777. In September 2006, Boeing publicly announced that it was ready and willing to produce the KC-777, if the USAF requires a bigger tanker than the KC-767. In addition the tanker will be able to transport cargo or personnel. Boeing instead offered its KC-767 Advanced Tanker for U.S. Air Force's KC-X competition in April 2007.
North America
- United Airlines
Europe
- Air Austral
- Air France
- Austrian Airlines
- British Airways
- DHL Aviation
- Turkish Airlines
Middle East
- Egypt Air
- El Al
- Etihad Airways
- Iran Air (to be a future operator)
- Iraqi Airways
- Kuwait Airways
- Qatar Airways
- Saudi Arabian Airlines
Asia Pacific
- Air China
- Air India
- All Nippon Airways
- Asiana Airlines
- Biman Bangladesh Airlines
- Cathay Pacific
- China Cargo Airlines
- China Southern
- EVA Air
- Garuda Indonesia
- Japan Airlines
- Jet Airways
- Korean Air
- Malaysia Airlines
- Pakistan International Airlines
- Philippine Airlines
- Singapore Airlines
- Thai International Airways
- Turkmenistan Airlines
- Vietnam Airlines
- Virgin Australia
Latin America
- Aero Mexico
- TAM Airlines
Africa
- Ethiopian Airlines
- Kenya Airways
- TAAG Angola Airlines
As of July 2014, 777s had been involved in 13 aviation accidents and incidents, including five hull-loss accidents, resulting in 540 fatalities.
- The first fatality involving a Boeing 777 occurred in a refueling fire at Denver International Airport on September 5, 2001, during which a ground worker sustained fatal burns. Although the aircraft's wings were badly scorched, it was repaired and put back into service with British Airways.
- On October 18, 2002, an Air France Boeing 777-200 en route from Paris to Los Angeles made an emergency landing in Churchill, Manitoba, when a small fire broke out by the front left windshield in the cockpit. Passengers in rows 42-44 were the first to notice the odor and alert the flight crew. The aircraft dumped fuel over Hudson Bay before landing at Churchill. Because Churchill's airport does not regularly handle aircraft the size of a 777-200, the passengers deplaned using the slides.
- On August 24, 2004, a Singapore Airlines Boeing 777-312 had an engine explosion on takeoff at Melbourne Airport. This was caused by erosion of the high pressure compression liners in the Rolls-Royce engines.
- On March 1, 2005, after a PIA Boeing 777-200ER landed at Manchester International Airport, UK, fire was seen around the left main landing gear. The crew and passengers were evacuated and the fire was extinguished. Some passengers suffered minor injuries and the aircraft sustained minor damage.
- On August 1, 2005, Malaysia Airlines Flight 124, a 777-200ER had instruments showing conflicting reports of low airspeed on climb-out from Perth, Western Australia en route to Kuala Lumpur, Malaysia, then overspeed and stalling. The aircraft started to pitch up at 41,000 feet, and the pilots disconnected the autopilot and made an emergency landing at Perth. No one was injured. Subsequent examination revealed that one of the aircraft's several accelerometers had failed some years before, and another at the time of the incident.
- On January 17, 2008, British Airways Flight 38, a 777-200ER flying from Beijing to London, crash-landed approximately 1,000 ft short of London Heathrow Airport's runway 27L, and slid onto the runway's threshold. There were thirteen injuries and no fatalities. This damaged the landing gear, wing roots and engines, resulting in the type's first hull loss. This is believed to have been caused by ice in the fuel system restricting fuel flow to both engines.
- On July 6, 2013, Asiana Airlines Flight 214, a 777-200ER, crashed while trying to land at San Francisco International Airport. Three people were killed and 182 were injured, making it the first fatal crash of a 777.
- On March 8, 2014, Malaysia Airlines Flight 370, a 777-200ER registered 9M-MRO, carrying 227 passengers and 12 crew en route from Kuala Lumpur to Beijing, was reported missing. Air Traffic Control's last reported coordinates for the aircraft were over the South China Sea at 6°55′15″N 103°34′43″E. A search for the aircraft began after it disappeared. On March 24, 2014, Malaysia's prime minister announced that, after analysis of fresh satellite data, it was to be assumed "beyond reasonable doubt" that the plane was lost and there were no survivors. The investigation has centered on the airplane's captain after police raided his home and found suspicious files on his flight simulator program.
- On July 17, 2014, a 777-200ER operating Malaysia Airlines Flight 17 was lost over eastern Ukraine, killing all 298 people on board. The investigation is ongoing, but the jet is widely believed in the West to have been shot down by a surface-to-air missile fired either by pro-Russian separatists or by the Russian military itself. The separatists and the Russian government deny this and blame the Ukrainian government, despite the airplane coming down (and later being recovered) in separatist-controlled territory.
Specifications

All 777 variants share a cabin width of 19 ft 3 in (5.86 m), a fuselage width of 20 ft 4 in (6.19 m), a cruising speed of Mach 0.84 (560 mph, 905 km/h, 490 knots) at a 35,000 ft cruise altitude, a maximum cruise speed of Mach 0.89 (587 mph, 945 km/h, 510 knots), and a service ceiling of 43,100 ft (13,140 m).

| | 777-200 | 777-200ER | 777-200LR | 777F | 777-300 | 777-300ER |
|---|---|---|---|---|---|---|
| Seating capacity | 305 (3-class) | 301 (3-class) | 301 (3-class) | N/A (cargo) | 368 (3-class) | 368 (3-class) |
| Length | 209 ft 1 in (63.7 m) | 209 ft 1 in (63.7 m) | 209 ft 1 in (63.7 m) | 209 ft 1 in (63.7 m) | 242 ft 4 in (73.9 m) | 242 ft 4 in (73.9 m) |
| Wingspan | 199 ft 11 in (60.9 m) | 199 ft 11 in (60.9 m) | 212 ft 7 in (64.8 m) | 212 ft 7 in (64.8 m) | 199 ft 11 in (60.9 m) | 212 ft 7 in (64.8 m) |
| Tail height | 60 ft 9 in (18.5 m) | 60 ft 9 in (18.5 m) | 61 ft 9 in (18.8 m) | 61 ft 1 in (18.6 m) | 60 ft 8 in | 61 ft 5 in |
| Cargo capacity | 5,655 ft³ (160 m³) | 5,655 ft³ (160 m³) | 5,302 ft³ (150 m³) | 22,455 ft³ (636 m³) | 7,080 ft³ (200 m³) | 7,080 ft³ (200 m³) |
| Empty weight | 307,000 lb | 315,000 lb | 326,000 lb | | 353,600 lb | 366,940 lb |
| Maximum take-off weight (MTOW) | 545,000 lb | 656,000 lb | 766,000 lb | 766,000 lb | 660,000 lb | 775,000 lb |
| Maximum payload range | 3,250 nmi | 5,800 nmi | 7,500 nmi | 4,895 nmi | 3,800 nmi | 5,500 nmi |
| Maximum range | 5,235 nmi | 7,700 nmi | 9,450 nmi | 4,885 nmi | 6,015 nmi | 7,930 nmi |
| Takeoff run at MTOW (ISA+15, sea level) | 8,200 ft | 11,600 ft | 11,200 ft | | 10,500 ft | |
| Maximum fuel capacity | 31,000 US gal | 45,220 US gal | 53,440 US gal | 47,890 US gal | 45,220 US gal | 47,890 US gal |
| Engine (x 2) | PW 4077 | PW 4090 | GE90-110B | GE90-110B | PW 4098 | GE90-115B |
| Thrust (x 2) | PW 77,000 lbf (330 kN); RR 77,000 lbf (330 kN); GE 77,000 lbf (330 kN) | PW 90,000 lbf (400 kN); RR 94,000 lbf (410 kN); GE 94,000 lbf (410 kN) | GE 110,000 lbf (480 kN) or 115,000 lbf (510 kN) | GE 110,000 lbf (480 kN) | PW 98,000 lbf (430 kN); RR 92,000 lbf (400 kN); GE 94,000 lbf (410 kN) | GE 115,000 lbf (510 kN) |
Sales and deliveries
- ↑ 1.0 1.1 Norris, Guy and Wagner, Mark. Boeing Jetliners. Zenith Imprint, 1996, p. 89. ISBN 0760300348
- ↑ Norris and Wagner (1996), p. 92
- ↑ 3.0 3.1 3.2 3.3 3.4 3.5 3.6 777 Model Orders and Deliveries summary, Boeing. July 2008. Retrieved 14 August 2008.
- ↑ http://www.time.com/time/magazine/article/0,9171,946981,00.html
- ↑ http://webarsiv.hurriyet.com.tr/2000/01/02/168791.asp
- ↑ Norris and Wagner (1996), p. 9-14
- ↑ Birtles, Philip (1998). MBI Publishing.
- ↑ Norris and Wagner (1996), p.13
- ↑ Norris and Wagner (1996), p.14
- ↑ http://www.time.com/time/magazine/article/0,9171,971474,00.html
- ↑ Sabbagh, p. 180.
- ↑ Norris and Wagner (1996), p.15
- ↑ Norris and Wagner (1996), p.20
- ↑ "Computing & Design/Build Processes Help Develop the 777." Boeing Commercial Airplanes.
- ↑ Norris and Wagner (1996), p.21
- ↑ http://www.boeing.com/companyoffices/aboutus/boejapan.html
- ↑ Sabbagh, p. 112-114.
- ↑ http://www.boeing.com/news/releases/1998/news_release_980609b.html
- ↑ Sabbagh, p. 281 - 284.
- ↑ http://www.boeing.com/commercial/777family/pf/pf_milestones.html
- ↑ 180 minutes ETOPS approval was granted to the General Electric GE90 powered 777 on October 3 1996, and to the Rolls-Royce Trent 800 powered 777 on October 10 1996.
- ↑ http://www.boeing.com/companyoffices/aboutus/boejapan.html
- ↑ "http://www.jas.co.jp/new777/e/g.htm
- ↑ http://www.jas.co.jp/new777/e/indexe.htm
- ↑ 25.0 25.1 25.2 http://www.theaustralian.news.com.au/story/0,25197,23853824-23349,00.html
- ↑ Singapore Airlines fleet listing, Plane-spotters.net. Retrieved 15 August, 2008.
- ↑ Emirates Airline fleet listing, Plane-spotters.net. Retrieved 15 August, 2008.
- ↑ http://www.boeing.com/commercial/787family/programfacts.html
- ↑ http://www.faa.gov/ATS/asc/publications/TACTICAL/LGATac.pdf
- ↑ https://www.caa.govt.nz/aircraft/Type_Acceptance_Reps/Boeing_777.pdf
- ↑ http://www.fromthecockpit.com/Gallery/displayimage.php?
- ↑ 32.0 32.1 32.2 http://airtransportbiz.free.fr/Aircraft/777X-1.html
- ↑ http://www.airlinecodes.co.uk/arctypes.asp
- ↑ 777-200/-200ER Technical Characteristics, Boeing.
- ↑ 35.0 35.1 35.2 35.3 "World Airliner Census", Flight International, 19-25 August 2008.
- ↑ 36.0 36.1 The Boeing 777 Program Background, Boeing
- ↑ http://www.rolls-royce.com/civil_aerospace/products/airlines/trent800/default.jsp
- ↑ http://www.alpa.org/DesktopModules/ALPA_Documents/ALPA_DocumentsView.aspx?itemid=1049&ModuleId=1284&Tabid=256
- ↑ JAL Is Upgrading Some Seats, Wall Street Journal, June 11, 2008, p.D2
- ↑ http://www.boeing.com/commercial/news/2005/q2/nr_050610g.html
- ↑ http://www.flightglobal.com/articles/1999/07/21/54116/777-operators-object-to-ge-as-sole-supplier.html
- ↑ "Boeing Looking Ahead to 21st century", Boeing, July 10, 1995.
- ↑ 777-200LR Auxiliary Fuel Tanks at Boeing.com
- ↑ "Boeing 777-200LR Sets New World Record for Distance", Boeing, November 10, 2005.
- ↑ "Boeing 777-200LR Worldliner Certified to Carry Passengers Around the World", Boeing, February 2, 2006.
- ↑ Qatar Airways confirms 777 orders.
- ↑ http://news.delta.com/article_display.cfm?article_id=10431
- ↑ http://www.flightglobal.com/articles/2006/10/30/210267/air-new-zealands-rob-fyfe-completes-restructuring-and-plots.html
- ↑ http://boeing.com/commercial/777family/news/2005/q2/nr_050524g.html
- ↑ 50.0 50.1 "FedEx Express cancels order for 10 Airbus A380s, orders 15 Boeing 777s". Frost, L. The San Diego Union-Tribune. November 7, 2006.
- ↑ http://www.flightglobal.com/articles/2008/05/23/224065/boeing-777f-makes-its-debut-ahead-of-flight-test-phase.html
- ↑ First Boeing 777 Freighter Leaves Paint Hangar, Boeing, May 19, 2008.
- ↑ "Boeing 777 Freighter Makes First Flight", Boeing, 14 July 2008.
- ↑ "Aerospace Notebook: Boeing now offers the 777 as a tanker", Seattle PI, September 27, 2006.
- ↑ "Ready to Fill 'er Up", Boeing November 2006.
- ↑ "Boeing Submits KC-767 Advanced Tanker Proposal to U.S. Air Force", Boeing, April 11, 2007.
- ↑ British Airways Flight 2019 ground fire, Aviation Safety Network.
- ↑ http://www.tsb.gc.ca/en/reports/air/2002/a02c0227/a02c0227.asp
- ↑ http://www.atsb.gov.au/publications/investigation_reports/2004/AAIR/pdf/Report_200403110.pdf
- ↑ http://www.aaib.dft.gov.uk/cms_resources/AP-BGL%201-06.pdf
- ↑ http://www.atsb.gov.au/publications/investigation_reports/2005/AAIR/pdf/aair200503722_001.pdf
- ↑ http://www.airlinesafety.com/faq/777DataFailure.htm
- ↑ http://www.investegate.co.uk/Article.aspx?id=200802010700330296N
- ↑ Factsheet Boeing 777-200
- ↑ Factsheet Boeing 777-300
- ↑ http://www.airliners.net/info/stats.main?id=106
- ↑ http://www.boeing.com/commercial/777family/specs.html
- ↑ http://www.boeing.com/commercial/airports/777.htm
- ↑ http://www.airliners.net/info/stats.main?id=106
- ↑ http://www.airliners.net/info/stats.main?id=107
- ↑ "Orders and Deliveries search page", The Boeing Company. Retrieved 27 August, 2008.
- ↑ Boeing 777 deliveries, construction numbers 400-599, The Boeing Company. Retrieved September 2008.
- ↑ Boeing 777 deliveries, construction numbers 600-799, The Boeing Company, http://www.seattle-deliveries.com/777deliveries600-799.htm. Retrieved September 2008.
- Norris, Guy and Wagner, Mark (1996), Motorbooks International, Boeing 777. ISBN 0-7603-0091-7.
- Norris, Guy and Wagner, Mark (1996), Zenith Imprint, Modern Boeing Jetliners. ISBN 0-7603-0034-8.
- Sabbagh, Karl (1995), Scribner, 21st Century Jet: The Making of the Boeing 777. ISBN 0-333-59803-2.
CO2 Concentration Changes
Do Not Drive Sea Levels
From about 7000 years ago to 2000 years ago, or from the Mid- to Late-Holocene, atmospheric CO2 concentrations varied between only about 260 and 270 parts per million, or ppm. Such low CO2 concentrations are believed to be “safe” for the planet, as they are significantly lower than today’s levels, which have eclipsed 400 ppm in recent years. These high CO2 concentrations are believed to cause dangerous warming, rapid glacier melt, and catastrophic sea level rise.
And yet, despite the surge in anthropogenic CO2 emissions and atmospheric CO2 since the 20th century began, the UN’s Intergovernmental Panel on Climate Change (IPCC) has concluded that global sea levels only rose by 1.7 mm/yr during the entire 1901-2010 period, which is a rate of less than 7 inches (17 cm) per century. A new paper even suggests the global trend is better represented as closer to 1.3 mm/yr, or about 5 inches per century:
McAneney et al., 2017 “Global averaged sea-level rise is estimated at about 1.7 ± 0.2 mm year−1 (Rhein et al. 2013), however, this global average rise ignores any local land movements. Church et al. (2006) and J. A. Church (2016; personal communication) suggest a long-term average rate of relative (ocean relative to land) sea-level rise of ∼1.3 mm year.”
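For reference, a quick unit-conversion sketch (standard conversion factors assumed) shows how these per-year rates map onto the "inches per century" figures used above:

```python
MM_PER_INCH = 25.4

for rate_mm_per_yr in (1.7, 1.3):                 # IPCC-cited and McAneney et al. rates
    per_century_mm = rate_mm_per_yr * 100         # mm of rise over 100 years
    print(f"{rate_mm_per_yr} mm/yr -> {per_century_mm / 10:.0f} cm "
          f"({per_century_mm / MM_PER_INCH:.1f} in) per century")
# 1.7 mm/yr -> 17 cm (6.7 in) per century; 1.3 mm/yr -> 13 cm (5.1 in) per century
```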
According to Wenzel and Schröter (2014), the acceleration rate for the sea level rise trend since 1900 has been just +0.0042 mm/yr², which is acknowledged by the authors to be “not significant” and well within the range of uncertainty (+ or – 0.0092 mm/yr²), putting the overall 20th/21st century sea level rise acceleration rate at zero.
Further complicating the paradigm that contends changes in CO2 concentrations drive sea levels is the fact that ice core evidence affirms CO2 levels remained remarkably constant (fluctuating around 255 to 260 ppm) during the same period that there was an explosively fast rate of sea level rise — between 1 and 2 meters per century (about 10 times today’s rates) — between 12,000 to 8,000 years ago. Sea levels rose by ~60 meters during those 4,000 years while CO2 levels effectively remained constant.
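As a quick check on the unit conversions used in the preceding paragraphs, the short Python sketch below converts the quoted mm/yr trends into inches per century and works out the average rate implied by roughly 60 meters of rise over about 4,000 years. The function names and rounding are illustrative only; the input numbers are simply those quoted above.

```python
# Unit conversions for the sea-level rates quoted above (illustrative only)
MM_PER_INCH = 25.4

def mm_per_yr_to_inches_per_century(rate_mm_per_yr):
    """Convert a sea-level trend from mm/yr to inches per century."""
    return rate_mm_per_yr * 100 / MM_PER_INCH

def mean_rate_m_per_century(total_rise_m, duration_yr):
    """Average rate, in meters per century, of a total rise spread over a duration."""
    return total_rise_m / duration_yr * 100

print(mm_per_yr_to_inches_per_century(1.7))  # ~6.7  -> "less than 7 inches per century"
print(mm_per_yr_to_inches_per_century(1.3))  # ~5.1  -> "about 5 inches per century"
print(mean_rate_m_per_century(60, 4000))     # 1.5 m/century, roughly 10x the modern rate
```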
And casting even more doubt on the assertion that variations in CO2 drive sea level rise is the fact that there is robust paleoclimate evidence to suggest that today’s mean sea levels as well as today’s sea level rise rates are both relatively low (from a historical standpoint) and also well within the range of natural variability. Nothing unusual is happening to sea levels today. For even though we have evidence that modern CO2 concentrations (~405 ppm) are historically high relative to the last 10,000 years, we also possess a growing body of evidence that modern sea levels are still about 1 to 2 meters lower than they have been for most of the last 7,000 years.
The fundamental problem for the CO2-rise-causes-sea-level-rise paradigm, then, is that rising CO2 concentrations have not been correlated with rising sea levels for nearly all of the last 12,000 years. In fact, the opposite has been observed during the last 2,000 years, or during the Late Holocene: CO2 levels have risen (gradually, then rapidly) while sea levels have fallen overall, with recent changes so modest (inches per century) that they do not override the overall trend. In the 8,000 years before that, sea levels rose rapidly while CO2 concentrations remained flat. Simply put, the supposed anthropogenic “signal” in sea level rise trends has largely gone undetected — a point that has been affirmed by more and more scientists.
Listed below are a collection of 35 scientific papers published since 2014 that indicate sea levels were, on average, about 1 to 2 meters higher than they are now throughout the Mid-Holocene (7,000-2,000 years ago) and even into the last millennium, with lower-than-now sea levels largely confined to the Little Ice Age period (~1300 to 1900 AD). Links to the papers are embedded in the authors’ names and the regional locations for mean sea level are notated.
Dechnik et al., 2017 (Tropical Western Pacific)
[I]t is generally accepted that relative sea level reached a maximum of 1–1.5 m above present mean sea level (pmsl) by ~7 ka [7,000 years ago] (Lewis et al., 2013)
Zondervan, 2016 (Great Barrier Reef, Australia)
Preserved fossil coral heads as indicators of Holocene high sea level on One Tree Island [GBR, Australia] … Complete in-situ fossil coral heads have been found on beach rock of One Tree Island, a small cay in the Capricorn Group on the Great Barrier Reef. Measurements against the present low-tide mark provide a [Holocene] high stand of at least +2.85 m [above present sea levels], which can be determined in great accuracy compared to other common paleo sea-level record types like mangrove facies. The sea level recorded here is higher than most recent findings, but supports predictions by isostatic adjustment models. … Although the late Holocene high stand has been debated in the past (e.g. Belperio 1979, Thom et al. 1968), more evidence now supports a sea level high stand of at least + 1- 2 m relative to present sea levels (Baker & Haworth 1997, 2000, Collins et al. 2006, Larcombe et al. 1995, Lewis et al. 2008, Sloss et al. 2007).
Prieto et al., 2016 (Argentina, Uruguay)
Analysis of the RSL [relative sea level] database revealed that the RSL [relative sea level] rose to reach the present level at or before c. 7000 cal yr BP, with the peak of the sea-level highstand c. +4 m [above present] between c. 6000 and 5500 cal yr BP [calendar years before present] … This RSL [relative sea level] curve was re-plotted by Gyllencreutz et al. (2010) using the same index points and qualitative approach but using the calibrated ages. It shows rising sea-levels following the Last Glacial Termination (LGT), reaching a RSL [relative sea level] maximum of +6.5 m above present at c. 6500 cal yr BP [calendar years before present], followed by a stepped regressive trend towards the present.
Hodgson et al., 2016 (East Antarctica)
Rapid early Holocene sea-level rise in Prydz Bay, East Antarctica … The field data show rapid increases in rates of relative sea level rise of 12–48 mm/yr [1.2 to 4.8 meters per century] between 10,473 (or 9678) and 9411 cal yr BP in the Vestfold Hills and of 8.8 mm/yr between 8882 and 8563 cal yr BP in the Larsemann Hills. … The geological data imply a regional RSL [relative sea level] high stand of c. 8 m [above present levels], which persisted between 9411 cal yr BP and 7564 cal yr BP [calendar years before present], and was followed by a period when deglacial sea-level rise was almost exactly cancelled out by local rebound.
Dura et al., 2016 (Vancouver)
In northern and western Sumatra, GIA models predict high rates (>5 mm/year) of RSL [relative sea level] rise from ∼12 to ∼7 ka [12000 to 7000 years ago], followed by slowing rates of rise (<1 mm/year) to an RSL [relative sea level] highstand of <1 m (northern Sumatra) and ∼3 m (western Sumatra) between 6 and 3 ka [6,000-3,000 years ago], and then gradual (<1 mm/ year) RSL fall until present.
Spotorno-Oliveira et al., 2016 (Brazil)
At ~7000 cal. years BP the sea level in the bay was approximately 4 m below the present sea level and the upper subtidal benthic community was characterised by fruticose corallines on coarse soft substrate, composed mainly of quartz grains from continental runoff input. The transgressing sea rapidly rose until reaching the ~ +4 m highstand [above present] level around 5000 years BP.
Lee et al., 2016 (Southeast Australia)
The configuration suggests surface inundation of the upper sediments by marine water during the mid-Holocene (c. 2–8 kyr BP), when sea level was 1–2 m above today’s level.
Yokoyama et al., 2016 (Japan)
The Holocene-high-stand (HHS) inferred from oyster fossils (Saccostrea echinata and Saccostrea malaboensis) is 2.7 m [above present sea level] at ca. 3500 years ago, after which sea level gradually fell to present level.
May et al., 2016 (Western Australia)
Beach ridge evolution over a millennial time scale is also indicated by the landward rise of the sequence possibly corresponding to the mid-Holocene sea-level highstand of WA [Western Australia] of at least 1-2 m above present mean sea level.
Mann et al., 2016 (Indonesia)
Radiometrically calibrated ages from emergent fossil microatolls on Pulau Panambungan indicate a relative sea-level highstand not exceeding 0.5 m above present at ca. 5600 cal. yr BP [calendar years before present].
Clement et al., 2016 (New Zealand)
In North Island locations the early-Holocene sea-level highstand was quite pronounced, with RSL [relative sea level] up to 2.75 m higher than present. In the South Island the onset of highstand conditions was later, with the first attainment of PMSL being between 7000–6400 cal yr BP. In the mid-Holocene the northern North Island experienced the largest sea-level highstand, with RSL up to 3.00 m higher than present.
Long et al., 2016 (Scotland)
RSL [relative sea level] data from Loch Eriboll and the Wick River Valley show that RSL [relative sea level] was <1 m above present for several thousand years during the mid and late Holocene before it fell to present.
Chiba et al., 2016 (Japan)
Highlights: We reconstruct Holocene paleoenvironmental changes and sea levels by diatom analysis. Average rates of sea-level rise and fall are estimated during the Holocene. Relative sea level during Holocene highstand reached 1.9 m [higher than today] during 6400–6500 cal yr BP [calendar years before present]. The timing of this sea-level rise is at least 1000 years earlier in the Lake Inba area by Holocene uplift than previous studies. The decline of sea-level after 4000 cal yr BP may correspond to the end of melting of the Antarctic ice sheet.
Leonard et al., 2016 (Great Barrier Reef, Australia)
Holocene sea level instability in the southern Great Barrier Reef, Australia … Three emergent subfossil reef flats from the inshore Keppel Islands, Great Barrier Reef (GBR), Australia, were used to reconstruct relative sea level (RSL). Forty-two high-precision uranium–thorium (U–Th) dates obtained from coral microatolls and coral colonies (2σ age errors from ±8 to 37 yr) in conjunction with elevation surveys provide evidence in support of a nonlinear RSL regression throughout the Holocene. RSL [relative sea level] was at least 0.75 m above present from ~6500 to 5500 yr before present (yr BP; where “present” is 1950). Following this highstand, two sites indicated a coeval lowering of RSL of at least 0.4 m from 5500 to 5300 yr BP which was maintained for ~200 yr. After the lowstand, RSL returned to higher levels before a 2000-yr hiatus in reef flat corals after 4600 yr BP at all three sites. A second possible RSL lowering event of ~0.3 m from ~2800 to 1600 yr BP was detected before RSL stabilised ~0.2 m above present levels by 900 yr BP. While the mechanism of the RSL instability is still uncertain, the alignment with previously reported RSL oscillations, rapid global climate changes and mid-Holocene reef “turn-off” on the GBR are discussed.
Sander et al., 2016 (Denmark)
The data show a period of RSL [relative sea level] highstand at c. 2.2 m above present MSL [mean sea level] between c. 5.0 and 4.0 ka BP [5,000 to 4,000 years before present]. After that, RSL drops by c. 1.3 m between c. 4.0 and 3.4 ka BP to an elevation roughly 1 m above present MSL. Since then, RSL has been falling at more or less even rates. … Yu et al. (2007) present evidence for a sea-level ‘jump’ of several meters occurring at 7.6 ka bp [7600 years before present] in SE Sweden, and data suggesting RSL changes with a similar timing and magnitude were obtained for a field site in the southern Gulf of Finland (Rosentau et al., 2013). The suddenness of the RSL change has been attributed to the collapse of parts of the Laurentide Ice Sheet (Blanchon and Shaw, 1995; Carlson et al., 2007), though the global indications and the potential triggers of such a eustatic event remain inconclusive (Törnqvist and Hijma, 2012).
Bradley et al., 2016 (China)
In general, the data indicate a marked slowdown between 7 and 8 kyr BP, with sea level rising steadily to form a highstand of ~2-4 m [above present sea level] between 6 and 4 kyr BP [6000 and 4000 years before present]. This is followed by a steady fall, reaching present day levels by ~1 kyr BP.
Accordi and Carbone, 2016 (Africa)
Then, the skeletal carbonate storage on the shelf reached its maximum 5 to 4 ka BP [5000 to 4000 years before present] (Ramsay, 1995) during a highstand about 3.5 m above the present sea level, when shallow marine accommodation space was greater than at present. … A detailed sea level curve of the last 9 ka BP is reported for the Southern African coastline by Ramsay (1995), who indicates a sea level similar to that of the present (at about 6.5 ka). Ramsay also indicates successive, frequent oscillations below and above the present sea level, between a maximum of +3.5 and a minimum of -2 m. Sea level positive pulses since 7 ka BP are also documented in Siesser (1974), Jaritz et al. (1977) and Norstrom et al. (2012) for the Mozambique coast. Along the Kenyan coast, a sea level stand above the present one during the mid-Holocene is documented in many places along the coast by various authors (Hori, 1970; Toyah et al., 1973; Åse, 1981, 1987; Oosterom, 1988), where the sea level might have reached +6 m above the Kenyan Datum between 2 and 3 ka BP [2000 and 3000 years before present].
Hansen et al., 2016 (Denmark)
Continuous record of Holocene sea-level changes … (4900 years BP to present). … The curve reveals eight centennial sea-level oscillations of 0.5-1.1 m superimposed on the general trend of the RSL [relative sea level] curve [relative sea levels ~1.5 m higher than present from 1400 to 1000 years ago].
Macreadie et al., 2015 (Australia, Eastern)
[R]esults from other studies … suggest that high-stand, at perhaps 2 m above present msl [mean sea level] was achieved as early as 7000 radiocarbon years BP [before present] (7800 cal. years BP) and that sea-level has exceeded the present value for much of the mid- to late-Holocene [~7000 to ~1000 years ago].
Lewis et al., 2015 (Australia, Northeastern)
Thick (> 10 cm) fossil oyster visors above the equivalent modern growth suggest higher relative sea-levels in the past (i.e. > 1200 cal. yr BP [prior to 1,200 years before present]). … [D]ata show a Holocene sea-level highstand of 1–2 m higher than present which extended from ca. 7500 to 2000 yr ago (Woodroffe, 2003; Sloss et al., 2007; Lewis et al., 2013). The hydro-isostatic adjustment is thought to account for these 1–2 m sea-level changes [falling] to present levels over the past 2000 yr (Lambeck and Nakada, 1990; Lambeck, 2002). … [R]eliable SLI data such as coral pavements and tubeworms from Western Australia suggest that relative sea-level was 0.86 m and 0.80 m above present at 1060 ± 10 and 1110 ± 170 cal. yr BP [~1100 calendar years before present], respectively (Baker et al., 2005; Collins et al., 2006).
Lokier et al., 2015 (Persian Gulf)
Late Quaternary reflooding of the Persian Gulf climaxed with the mid-Holocene highstand previously variously dated between 6 and 3.4 ka. Examination of the stratigraphic and paleoenvironmental context of a mid-Holocene whale beaching allows us to accurately constrain the timing of the transgressive, highstand and regressive phases of the mid- to late Holocene sea-level highstand in the Persian Gulf. Mid-Holocene transgression of the Gulf surpassed today’s sea level by 7100–6890 cal yr BP, attaining a highstand of > 1 m above current sea level shortly after 5290–4570 cal yr BP before falling back to current levels by 1440–1170 cal yr BP. These new ages refine previously reported timings for the mid- to late Holocene sea-level highstand published for other regions. By so doing, they allow us to constrain the timing of this correlatable global eustatic event more accurately.
Harris et al., 2015 (Great Barrier Reef, Australia)
This hiatus in sediment infill coincides with a sea-level fall of ∼1–1.3 m during the late Holocene (ca. 2000 cal. yr B.P.), which would have caused the turn-off of highly productive live coral growth on the reef flats currently dominated by less productive rubble and algal flats, resulting in a reduced sediment input to back-reef environments and the cessation in sand apron accretion. Given that relative sea-level variations of ∼1 m were common throughout the Holocene, we suggest that this mode of sand apron development and carbonate production is applicable to most reef systems.
Microatoll death was most likely caused by a fall in sea level that stranded the microatolls on the reef flat due to their location in open-water unmoated environments. This suggests that paleo–sea level between 3900 and 2200 cal. yr B.P. was 1–1.3 m higher than present (based on an offset from MLWS tidal level to fossil microatoll elevation; Fig. 2). This paleo–sealevel elevation is similar to the ranges of 1–1.5 m suggested by Lewis et al. (2013) and Sloss et al. (2007) and data from Moreton Bay in southern Queensland of an elevation of 1.3 m (Leonard et al., 2013).
Hein et al., 2015 (Brazil)
In southern Brazil, falling RSL [relative sea level] following a 2–4 m [above present sea level] highstand at 5 to 6 ka [5,000 to 6,000 years ago] forced coastal progradation. … Relative SL [sea level] along the southern Brazil coast reached a highstand elevation of 1–4 m above MSL [mean sea level] at ca. 5.8 ka [5800 years ago].
Barnett et al., 2015 (Arctic Norway)
Relative sea-level fell at −0.7 to −0.9 mm yr−1 over the past 3300 years in NW Norway. … Prior to 3000 cal yr BP the marine limiting date represents an important constraint for the late Holocene sea-level trend and yields a minimum RSL [relative sea level] decline of approximately 2.2 m over 3200 years when assuming a linear trend. The maximum possible linear decline constrained by the data is approximately 2.6 m in 2800 years, providing an estimated late Holocene sea-level trend of 0.7 to 0.9 mm yr−1 (shown by the grey shaded region in Fig. 8A). [Relative sea level was 2.2 to 2.8 m higher ~3,000 years ago in Arctic Norway]
Engel et al., 2015 (Western Australia)
The foredunes overlie upper beach deposits located up to >2 m above the present upper beach level and provide evidence for a higher mid-Holocene RSL [relative sea level]. … [O]bservations made near Broome by Lessa and Masselink (2006) [indicate] the deposition of backshore deposits up to c. 1.5 m above present MHW [mean high water] between c. 2100–800 cal BP [2100-800 calendar years before present].
Reinink-Smith, 2015 (Kuwait)
[B]ased on bottle characteristics, glass bottles within the debris zone were manufactured mostly between 1940 and 1960 (some as early as the 1920s), indicating high tides were more common in the recent past. … The normal tidal cycle affects only a narrow 0.6–0.7 km-wide band parallel to the coast when the prevailing wind (the Shamal) is from the northwest (Gunatilaka, 1986). Within this narrow zone, washed-up glass bottles were manufactured more recently than ~1960 and are not frosted. None of these new [made after 1960] bottles were found near the beach ridges … [A]ssuming the tidal ranges were similar in the middle Holocene, a rough estimate of the MSL [mean sea level] during the middle Holocene highstand is 5.2 m − 1.7 m = +3.5 m above the present MSL [mean sea level]. … The +3.5 m highstand estimate in northeastern Kuwait derived in this study is also higher than the previously reported maximum estimates of +2 to +2.5 m responsible for other Holocene beach ridges in the Arabian Gulf (Gunatilaka, 1986; Lambeck, 1996; Kennett and Kennett, 2007; Jameson and Strohmenger, 2012). Some beach ridges in Qatar and Abu Dhabi are at elevations of 2–4 m above MSL [present mean sea level] as far as 5-15 km inland (Alsharhan and Kendall, 2003).
Rashid et al., 2014 (French Polynesia)
Upon correction for isostatic island subsidence, we find that local relative sea level was at least ~1.5±0.4 m higher than present at ~5,400 years ago.
Strachan et al., 2014 (South Africa)
During the last 7000 years, southern African sea levels have fluctuated by no more than ±3 m. Sea-level curves based on observational data for southern Africa indicate that Holocene highstands occurred at 6000 and again at 4000 cal years BP, followed by a lowstand from 3000 to 2000 cal years B P. The mid-Holocene highstands culminated in a sea-level maximum of approximately 3 m above mean sea level (MSL) from 7300 to 6500 cal years BP [calendar years before present] and of 2 m above MSL at around 4000 cal years BP. Thereafter, RSL dropped to slightly below the present level between 3500 and 2800 cal years BP. Sea-level fluctuations during the late Holocene in southern Africa were relatively small (1-2 m); however, these fluctuations had a major impact on past coastal environments. Evidence from the west coast suggests that there was a highstand of 0.5 m above MSL from 1500 to 1300 cal years BP [calendar years before present] or possibly earlier (1800 cal years BP), followed by a lowstand (-0.5 m above MSL) from 700 to 400 cal years BP [during the Little Ice Age].
Yamano et al., 2014 (Southwest Pacific Ocean)
Mba Island initially formed around ~ 4500 cal yr B.P. [4500 calendar years before present], when sea level was ~ 1.1 m higher than at present.
Kench et al., 2014 (Central Pacific Ocean)
[T]he mid-Holocene [sea level] highstand is reported to have peaked at approximately +1.1 m above present and was sustained until approximately 2000 years B.P. [before present] in the Marshall Islands.
Hein et al., 2014 (Brazil)
Along the eastern and southern Brazilian coasts of South America, 6000 years of sea-level fall have preserved late-stage transgressive and sea-level highstand features 1–4 m above present mean sea level and several kilometers landward of modern shorelines.
Bracco et al., 2014 (Uruguay)
Highlights: We present a sea level change curve for mid Holocene in Uruguay. Sea level reached 4 m amsl [above present mean sea level] between 6000 and 5500 yr BP [before present]. A rapid sea level fall to about 1 m amsl [above present mean sea level] was inferred for 4700-4300 yr BP. A further sea level increase to about 3 m amsl [above present mean sea level] was inferred after 4300 yr BP. After 4300 yr BP there was a constant sea level decline.
Holocene Sea Levels Rose Much Faster With Stable CO2 Levels
Khan et al., 2017 (Caribbean)
Only Suriname and Guyana [Caribbean] exhibited higher RSL [relative sea level] than present (82% probability), reaching a maximum height of ∼1 m at 5.2 ka [5,200 years ago]. … Because of meltwater input, the rates of RSL change were highest during the early Holocene, with a maximum of 10.9 ± 0.6 m/ka [10.9 meters per 1000 years, 1.09 meters per century] in Suriname and Guyana and minimum of 7.4 ± 0.7 m/ka [7.4 meters per 1000 years, 0.74 meters per century] in south Florida from 12 to 8 ka [12,000 to 8,000 years ago].
Zecchin et al., 2015 (Mediterranean)
Episodic, rapid sea-level rises on the central Mediterranean shelves after the Last Glacial Maximum: A review … The evidence presented here confirms drowned shorelines documented elsewhere at similar water depths and shows that melt-water pulses have punctuated the post-glacial relative sea-level rise with rates up to 60 mm/yr. [6 meters per century] for a few centuries.
Boski et al., 2015 (Brazil)
Optical fiber connector
An optical fiber connector terminates the end of an optical fiber, and enables quicker connection and disconnection than splicing. The connectors mechanically couple and align the cores of fibers so that light can pass. Better connectors lose very little light due to reflection or misalignment of the fibers.
Optical fiber connectors are used to join optical fibers where a connect/disconnect capability is required. The basic connector unit is a connector assembly. A connector assembly consists of an adapter and two connector plugs. Due to the polishing and tuning procedures that may be incorporated into optical connector manufacturing, connectors are generally assembled onto optical fiber in a supplier’s manufacturing facility. However, the assembly and polishing operations involved can be performed in the field, for example, to make cross-connect jumpers to size.
Optical fiber connectors are used in telephone company central offices, at installations on customer premises, and in outside plant applications. Connectors are used to connect equipment and cables, or to cross-connect cables within a system.
Most optical fiber connectors are spring-loaded. The end faces of the fibers in the two connectors are pressed together, resulting in a direct glass to glass or plastic to plastic contact. This avoids a trapped layer of air between two fibers, which would increase connector insertion loss and reflection loss.
Every fiber connection has two loss values: insertion loss and return loss.
Measurements of these parameters are now defined in IEC standard 61753-1. The standard gives five grades for insertion loss from A (best) to D (worst), and M for multimode. The other parameter is return loss, with grades from 1 (best) to 5 (worst).
A variety of optical fiber connectors are available, but SC and LC connectors are the most common types of connectors on the market. Typical connectors are rated for 500–1,000 mating cycles. The main differences among types of connectors are dimensions and methods of mechanical coupling. Generally, organizations will standardize on one kind of connector, depending on what equipment they commonly use. Different connectors are required for multimode and for single-mode fibers.
In datacom and telecom applications nowadays small connectors (e.g., LC) and multi-fiber connectors (e.g., MTP) are replacing the traditional connectors (e.g., SC), mainly to provide a higher number of fibers per unit of rack space.
Features of a good connector design:
- Low Insertion Loss
- Low Return Loss
- Ease of installation
- Low cost
- Low environmental sensitivity
- Ease of use
Outside plant applications may involve locating connectors underground in subsurface enclosures that may be subject to flooding, on outdoor walls, or on utility poles. The closures that enclose them may be hermetic, or may be free-breathing. Hermetic closures will subject the connectors within to temperature swings but not to humidity variations unless they are breached. Free-breathing closures will subject them to temperature and humidity swings, and possibly to condensation and biological action from airborne bacteria, insects, etc. Connectors in the underground plant may be subjected to groundwater immersion if the closures containing them are breached or improperly assembled.
Depending on user requirements, housings for outside plant applications may be tested by the manufacturer under various environmental simulations, which could include physical shock and vibration, water spray, water immersion, dust, etc. to ensure the integrity of optical fiber connections and housing seals.
Fiber connector types:

| Short name | Long form | Coupling type | Ferrule diameter | Standard | Typical applications |
|---|---|---|---|---|---|
| Avio (Avim) | | Screw | | | Aerospace and avionics |
| ADT-UNI | | Screw | 2.5 mm | | Measurement equipment |
| Biconic | | Screw | 2.5 mm | | Obsolete |
| D4 | | Screw | 2.0 mm | | Telecom in the 1970s and 1980s, obsolete |
| Deutsch 1000 | | Screw | | | Telecom, obsolete |
| DIN (LSA) | | Screw | | IEC 61754-3 | Telecom in Germany in 1990s; measurement equipment; obsolete |
| DMI | | Clip | 2.5 mm | | Printed circuit boards |
| E-2000 (AKA LSH) | | Snap, with light and dust-cap | 2.5 mm | IEC 61754-15 | Telecom, DWDM systems |
| EC | | Push-pull | | IEC 1754-8 | Telecom & CATV networks |
| ESCON | Enterprise Systems Connection | Snap (duplex) | 2.5 mm | | IBM mainframe computers and peripherals |
| F07 | | | 2.5 mm | Japanese Industrial Standard (JIS) | LAN, audio systems; for 200 μm fibers, simple field termination possible, mates with ST connectors |
| F-3000 | | Snap, with light and dust-cap | 1.25 mm | IEC 61754-20 | Fiber To The Home (LC compatible) |
| FC | Ferrule Connector or Fiber Channel | Screw | 2.5 mm | IEC 61754-13 | Datacom, telecom, measurement equipment, single-mode lasers; becoming less common |
| Fibergate | | Snap, with dust-cap | 1.25 mm | | Backplane connector |
| FSMA | | Screw | 3.175 mm | IEC 60874-2 | Datacom, telecom, test and measurement |
| LC | Lucent Connector, Little Connector, or Local Connector | Snap | 1.25 mm | IEC 61754-20 | High-density connections, SFP transceivers, XFP transceivers |
| ELIO | | Bayonet | 2.5 mm | ABS1379 | PC or UPC |
| LuxCis | | | 1.25 mm | ARINC 801 | PC or APC configurations (note 3) |
| LX-5 | | Snap, with light- and dust-cap | | IEC 61754-23 | High-density connections; rarely used |
| MIC | Media Interface Connector | Snap | 2.5 mm | | Fiber distributed data interface (FDDI) |
| MPO / MTP | Multiple-Fibre Push-On/Pull-off | Snap (multiplex push-pull coupling) | 2.5×6.4 mm | IEC-61754-7; EIA/TIA-604-5 (FOCIS 5) | SM or MM multi-fiber ribbon. Same ferrule as MT, but more easily reconnectable. Used for indoor cabling and device interconnections. MTP is a brand name for an improved connector, which intermates with MPO. |
| MT | Mechanical Transfer | Snap (multiplex) | 2.5×6.4 mm | | Pre-terminated cable assemblies; outdoor applications |
| MT-RJ | Mechanical Transfer Registered Jack or Media Termination - recommended jack | Snap (duplex) | 2.45×4.4 mm | IEC 61754-18 | Duplex multimode connections |
| MU | Miniature unit | Snap | 1.25 mm | IEC 61754-6 | Common in Japan |
| NEC D4 | | Screw | 2.0 mm | | Common in Japan telecom in 1980s |
| Opti-Jack | | Snap (duplex) | | | |
| OPTIMATE | | Screw | | | Plastic fiber, obsolete |
| SC | Subscriber Connector, square connector, or standard connector | Snap (push-pull coupling) | 2.5 mm | IEC 61754-4 | Datacom and telecom; GBIC; extremely common |
| SMA 905 | Sub Miniature A | Screw | Typ. 3.14 mm | | Industrial lasers, military; telecom multimode |
| SMA 906 | Sub Miniature A | Screw | Stepped; typ. 0.118 in (3.0 mm), then 0.089 in (2.3 mm) | | Industrial lasers, military; telecom multimode |
| SMC | Sub Miniature C | Snap | 2.5 mm | | |
| ST / BFOC | Straight Tip / Bayonet Fiber Optic Connector | Bayonet | 2.5 mm | IEC 61754-2 | Multimode, rarely single-mode; APC not possible (note 3) |
| TOSLINK | Toshiba Link | Snap | | | Digital audio |
| VF-45 | | Snap | | | Datacom |
| 1053 HDTV Broadcast connector interface | | Push-pull coupling | Industry-standard 1.25 mm diameter ceramic ferrule | | Audio & Data (broadcasting) |
| V-PIN | V-System | Snap (duplex), push-pull coupling | | | Industrial and electric utility networking; multimode 200 μm, 400 μm, 1 mm, 2.2 mm fibers |
- Modern connectors typically use a "physical contact" polish on the fiber and ferrule end. This is a slightly curved surface, so that when fibers are mated only the fiber cores touch, not the surrounding ferrules. Some manufacturers have several grades of polish quality, for example a regular FC connector may be designated "FC/PC" (for physical contact), while "FC/SPC" and "FC/UPC" may denote "super" and "ultra" polish qualities, respectively. Higher grades of polish give less insertion loss and lower back reflection.
- Many connectors are available with the fiber end face polished at an angle to prevent light that reflects from the interface from traveling back up the fiber. Because of the angle, the reflected light does not stay in the fiber core but instead leaks out into the cladding. Angle-polished connectors should only be mated to other angle-polished connectors. Mating to a non-angle polished connector causes very high insertion loss. Generally angle-polished connectors have higher insertion loss than good quality straight physical contact ones. "Ultra" quality connectors may achieve comparable back reflection to an angled connector when connected, but an angled connection maintains low back reflection even when the output end of the fiber is disconnected.
- Angle-polished connections are distinguished visibly by the use of a green strain relief boot, or a green connector body. The parts are typically identified by adding "/APC" (angled physical contact) to the name. For example, an angled FC connector may be designated FC/APC, or merely FCA. Non-angled versions may be denoted FC/PC or with specialized designations such as FC/UPC or FCU to denote an "ultra" quality polish on the fiber end face.
- SMA 906 features a "step" in the ferrule, while SMA 905 uses a straight ferrule. SMA 905 is also available as a keyed connector, used e.g., for special spectrometer applications.
- LC connectors are sometimes called "Little Connectors".
- MT-RJ connectors look like a miniature 8P8C connector — commonly (but erroneously) referred to as RJ-45.
- ST connectors refer to having a "straight tip", as the sides of the ceramic (which has a lower temperature coefficient of expansion than metal) tip are parallel—as opposed to the predecessor bi-conic connector which aligned as two nesting ice cream cones would. Other mnemonics include "Set and Twist", "Stab and Twist", and "Single Twist", referring to how it is inserted (the cable is pushed into the receiver, and the outer barrel is twisted to lock it into place). Also they are known as "Square Top" due to the flat end face.
- SC connectors have a mnemonic of "Square Connector", and some people believe that to be the correct name, rather than the more official "Subscriber Connector". This refers to the fact the connectors themselves are square. Other terms often used for SC connectors are "Set and Click" or "Stab and Click".
- FC connectors' floating ferrule provides good mechanical isolation. FC connectors need to be mated more carefully than the push-pull types due to the need to align the key, and due to the risk of scratching the fiber end face while inserting the ferrule into the jack. FC connectors have been replaced in many applications by SC and LC connectors.
- There are two incompatible standards for key widths on FC/APC and polarization-maintaining FC/PC connectors: 2 mm ("Reduced" or "type R") and 2.14 mm ("NTT" or "type N"). Connectors and receptacles with different key widths either cannot be mated, or will not preserve the angle alignment between the fibers, which is especially important for polarization-maintaining fiber. Some manufacturers mark reduced keys with a single scribe mark on the key, and mark NTT connectors with a double scribe mark.
- SC connectors offer excellent packing density, and their push-pull design reduces the chance of fiber end face contact damage during connection; frequently found on the previous generation of corporate networking gear, using GBICs.
- LC connectors have replaced SC connectors in corporate networking environments due to their smaller size; they are often found on small form-factor pluggable transceivers.
- ST connectors have a key which prevents rotation of the ceramic ferrule, and a bayonet lock similar to a BNC shell. The single index tab must be properly aligned with a slot on the mating receptacle before insertion; then the bayonet interlock can be engaged, by pushing and twisting, locking at the end of travel which maintains spring-loaded engagement force on the core optical junction.
- In general the insertion loss should not exceed 0.75 dB and the return loss should be higher than 20 dB. Typical insertion repeatability, the difference in insertion loss between one plugging and another, is 0.2 dB.
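Because insertion loss and return loss are both expressed in decibels, a small worked example may help relate the figures above to actual optical power. This is only a sketch of the standard dB-to-linear conversion, using the 0.75 dB and 20 dB limits quoted in the note above as inputs.

```python
def db_loss_to_fraction(loss_db):
    """Fraction of optical power remaining after a loss expressed in dB."""
    return 10 ** (-loss_db / 10)

# Insertion loss of 0.75 dB: share of power transmitted through the joint.
transmitted = db_loss_to_fraction(0.75)   # ~0.84, i.e. roughly 16% of the power is lost

# Return loss of 20 dB: share of power reflected back toward the source.
reflected = db_loss_to_fraction(20)       # 0.01, i.e. 1% of the power is reflected

print(f"0.75 dB insertion loss -> {transmitted:.1%} transmitted")
print(f"20 dB return loss      -> {reflected:.1%} reflected")
```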
- On all connectors, cleaning the ceramic ferrule before each connection helps prevent scratches and extends the connector life substantially.
- Connectors on polarization-maintaining fiber are sometimes marked with a blue strain relief boot or connector body, although this is far from a universal standard. Sometimes a blue buffer tube is used on the fiber instead.
- MT-RJ (Mechanical Transfer Registered Jack) uses a form factor and latch similar to the 8P8C (RJ45) connectors. Two separate fibers are included in one unified connector. It is easier to terminate and install than ST or SC connectors. The smaller size allows twice the port density on a face plate than ST or SC connectors do. The MT-RJ connector was designed by AMP, but was later standardized as FOCIS 12 (Fiber Optic Connector Intermateability Standards) in EIA/TIA-604-12. There are two variations: pinned and no-pin. The pinned variety, which has two small stainless steel guide pins on the face of the connector, is used in patch panels to mate with the no-pin connectors on MT-RJ patch cords.
- Hardened Fiber Optic Connectors (HFOCs) and Hardened Fiber Optic Adapters (HFOAs) are passive telecommunications components used in an Outside Plant (OSP) environment. They provide drop connections to customers from fiber distribution networks. These components may be provided in pedestal closures, aerial and buried closures and terminals, or equipment located at customer premises such as a Fiber Distribution Hub (FDH) or an Optical Network Terminal or Termination (ONT) unit.
These connectors, which are field-mateable, and hardened for use in the OSP, are needed to support Fiber to the Premises (FTTP) deployment and service offerings. HFOCs are designed to withstand climatic conditions existing throughout the U.S., including rain, flooding, snow, sleet, high winds, and ice and sand storms. Ambient temperatures ranging from –40°C (–40°F) to +70°C (158°F) can be encountered.
Telcordia GR-3120 contains the industry’s most recent requirements for HFOCs and HFOAs.
Glass fiber optic connector performance is affected both by the connector and by the glass fiber. Concentricity tolerances affect the fiber, fiber core, and connector body. The core optical index of refraction is also subject to variations. Stress in the polished fiber can cause excess return loss. The fiber can slide along its length in the connector. The shape of the connector tip may be incorrectly profiled during polishing. The connector manufacturer has little control over these factors, so in-service performance may well be below the manufacturer's specification.
Testing fiber optic connector assemblies falls into two general categories: factory testing and field testing.
Factory testing is sometimes statistical, for example, a process check. A profiling system may be used to ensure that the overall polished shape is correct, and a good quality optical microscope to check for blemishes. Optical Loss / Return Loss performance is checked using specific reference conditions, against a "reference standard" single mode test lead, or using an "Encircled Flux Compliant" source for multi-mode testing. Testing and rejection ("yield") may represent a significant part of the overall manufacturing cost.
Field testing is usually simpler. A special hand-held optical microscope is used to check for dirt or blemishes, and an optical time-domain reflectometer may be used to identify significant point losses or return losses. A power meter and light source or loss test set may also be used to check end-to-end loss.
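As a rough illustration of the power-meter-and-light-source method mentioned above, end-to-end loss is simply the reference (source) power minus the power received through the link, both in dBm. The readings in this sketch are hypothetical and are not taken from any real link.

```python
def link_loss_db(reference_dbm, received_dbm):
    """End-to-end loss in dB: reference (source) power minus received power, both in dBm."""
    return reference_dbm - received_dbm

# Hypothetical readings: -7.0 dBm measured straight through the reference lead,
# -10.2 dBm measured through the cable plant under test.
print(f"Measured end-to-end loss: {link_loss_db(-7.0, -10.2):.1f} dB")  # 3.2 dB
```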
- Optical fiber cable: color coding of connector boot and fiber cable jackets
- Optical attenuator: fiber optic attenuator
- Gap loss: attenuation sources and causes
- Alwayn, Vivek (2004). "Fiber-Optic Technologies". http://www.ciscopress.com/articles/article.asp?p=170740&seqNum=8. Retrieved Aug. 15, 2011.
- Keiser, Gerd (August 2003). Optical Communications Essentials. McGraw-Hill Networking Professional. p. 132. ISBN 0071412042.
- Shimoji, Naoko; Yamakawa, Jun; Shiino, Masato (1999). "Development of Mini-MPO Connector". Furukawa Review (18): 92. http://www.furukawa.co.jp/review/fr018/fr18_16.pdf.
- "Frequently asked questions". US Conec. http://www.usconec.com/pages/faq/faqfrm.html. Retrieved 12 Feb 2009.
- Hayes, Jim (2005). "Connector Identifier". The Fiber Optic Association — Tech Topics. http://www.thefoa.org/tech/connID.htm. Retrieved Feb. 6, 2009.
- Sezerman, Omur; Best, Garland (December 1997). "Accurate alignment preserves polarization". Laser Focus World. http://www.laserfocusworld.com/display_article/31401/12/none/none/News/Accurate-alignment-preserves-polarization. Retrieved March 12, 2009.
- "Polarization maintaining fiber patchcords and connectors" (pdf). OZ Optics. http://www.ozoptics.com/ALLNEW_PDF/DTS0071.pdf. Retrieved Feb. 6, 2009.
- GR-3120, Issue 2, April 2010, Generic Requirements for Hardened Fiber Optic Connectors (HFOCs) and Hardened Fiber Optic Adapters (HFOAs).
- Fiber optic connectors
- More fiber optic connectors
- Fiber optic connector identifier (with pictures and more connectors)
The global financial crisis of 2007 to the present is a crisis caused by the liquidity shortfall in the United States banking system, which resulted in the collapse of large financial institutions, downturns in stock markets and the bailout of banks by national governments around the world (Ivry, 2008). The housing market in many areas has also suffered, showing the effects of numerous evictions and prolonged vacancies. Many economists consider this crisis the worst global financial crisis since the Great Depression of the 1930s (Pendery, 2009). The financial crisis brought about the collapse of key businesses, declines in consumer wealth estimated in the hundreds of millions of US dollars, a significant decline in economic activity and substantial financial commitments incurred by governments. Many causes have been suggested, with varying weight assigned by experts (Baily & Elliott, 2009).
Regulatory and market-based solutions have been implemented or are under consideration, while notable risks remain for the world economy over the 2010-2011 period (Roubini, 2008). The collapse of the housing bubble, which peaked in the U.S. in 2006, caused the values of securities tied to real estate prices to plunge thereafter, damaging financial institutions worldwide (Glass, 2009).
Global stock markets saw a major impact, with securities suffering huge losses during late 2008 and early 2009 amid questions about declining credit availability, bank solvency and damaged investor confidence. As credit tightened and international trade declined, economies worldwide slowed during the period (IMF, 2009).
According to some critics, investors and credit rating agencies failed to accurately price the risk involved with mortgage-related financial products, and governments were not able to adjust their regulatory practices to address twenty-first century financial markets. Central banks and governments reacted with monetary policy expansion, institutional bailouts and unprecedented fiscal stimulus (Declaration of G20, 2008).
The crisis spread around the world even as financial Armageddon was avoided, toppling banks across Europe and driving countries from Iceland to Pakistan to seek crisis assistance from the International Monetary Fund. At last, the world fell into recession as a vicious circle of tightening credit, reduced demand and rapid job cuts took hold (The New York Times, 2010).
1.1.1 Causes of the Financial Crisis
The immediate cause of the crisis was the bursting of the U.S. housing bubble, which peaked in about 2005-2006 (Lahart, 2007). Default rates on subprime and adjustable rate mortgages (ARM) began to rise rapidly after that. Large inflows of foreign funds and low interest rates created easy credit conditions for a number of years before the crisis, which encouraged debt-financed consumption and fueled a housing construction boom (The New York Times, 2008). Both this inflow of money and easy credit contributed to the U.S. housing bubble. Mortgage loans, credit cards and car loans were easily accessible to consumers, who took on an unprecedented debt load (Paul Krugman, 2009).
A number of financial agreements such as collateralized debt obligations (CDO) and mortgage-backed securities (MBS) derived their value from mortgage payments and housing prices, which had climbed higher than ever before. Such financial innovations allowed institutions and investors around the world to invest in the U.S. housing market. When housing prices fell, major financial institutions that had borrowed and invested heavily in subprime MBS reported large losses; falling prices also left homes worth less than the mortgage loan, providing a financial incentive to go through foreclosure. The ongoing epidemic of foreclosures that began in late 2006 in the U.S. continues to erode the financial strength of banking institutions and drain wealth from consumers. As the crisis spread from the housing market to other parts of the economy, defaults and losses on other types of loans also increased considerably. Total losses worldwide are projected in the millions of U.S. dollars (IMF, 2010).
As the credit and housing bubbles built up, a series of factors caused the world financial system to expand and become increasingly fragile. Policy makers did not acknowledge the important role played by financial institutions such as investment and hedge funds, which were supplying credit to the U.S. economy but were not subject to the same set of laws, policies and regulations (Timothy F. Geithner, 2008).
Economic activity declined as huge losses crippled the ability of financial institutions to lend. Governments also bailed out many large financial firms and implemented economic stimulus programs, assuming major additional financial obligations. The crisis culminated with Lehman Brothers, which filed for bankruptcy on 15th September 2008 (Financial Times, 2009). Large European and American banks lost more than $1 trillion on bad loans and toxic assets from January 2007 to September 2009. These losses are likely to rise to a peak of $2.8 trillion over 2007-10, with European banks' losses forecast to reach $1.6 trillion and American banks' losses $1 trillion (Stephen Mihm, 2008).
1.1.2 Impact of Financial Crisis on the Developing Countries
Developing countries are affected by the current financial crisis in two ways.
First, developing countries have been hit by the economic downturn in developed countries. The specific channels through which developing countries come under pressure include:
Trade and Trade Prices: Growth in India and China has increased imports and the demand for oil, copper and natural resources, which has led to greater exports and higher prices for African countries. Ultimately, poorer countries will feel knock-on effects as the growth of India and China is likely to slow down (Dirk Willem Te Velde, 2008).
Remittances: During a recession, fewer migrants will head towards developed states, so remittances will go down, and there will most likely be fewer remittances per migrant. Remittances flowing to developing countries will also fall (Dirk Willem Te Velde, 2008).
Foreign Direct Investment: This will come under pressure. 2007 was a peak year for FDI to emerging countries, but equity finance is under pressure and corporate and project finance is already weakening (Dirk Willem Te Velde, 2008).
Commercial Lending: In developed countries, even highly capable banks are under pressure, as they may not be able to lend to the extent that they have done before. Following the financial collapse of Iceland, investors are increasingly factoring in the risk of some emerging market countries failing to pay their debt. Investment would be limited in countries like Argentina, Greece, Iceland, Pakistan and Ukraine (Dirk Willem Te Velde, 2008).
Aid: Aid budgets are under pressure because of weak fiscal positions and debt problems. There are several reasons why developed countries provide aid to developing countries; these include the desire to promote global public goods and the need to reduce poverty in developing countries (Dirk Willem Te Velde, 2008).
Other Official Flows: The capital adequacy ratios (CAR) of financial institutions will come under tremendous pressure. On the other hand, these have been relatively high lately, so there is capacity for taking on extra risks (Dirk Willem Te Velde, 2008).
Second, there could be financial spillovers for stock markets in emerging markets. Since May 2008, stock markets across the globe have slumped significantly, in developed and developing countries alike (Dirk Willem Te Velde, 2008).
1.2 Global Financial Crisis & Pakistan
The world is entering a new period of complexity and trouble involving three critical components: food, finance and fuel. These three factors have diverse geographical origins and effects on different parts of the globe and their inhabitants, and those effects can be very harsh. The underdeveloped nature of the financial sector has been a saving grace for the Pakistan economy, because weaker linkages with international markets have meant that the direct impact of the financial crisis has not been felt by the Pakistani financial sector (Butt, 2009).
Pakistan, a fragile economy, has been facing both economic and political crises which predate the global financial crisis. Key indicators of Pakistan's economic crisis are the poor performance of the banking sector, the trade deficit, the balance of payments and foreign exchange reserves, inflation, circular debt and the political instability of stock markets. Pakistan is an interesting case since both the economy and politics are in crisis. The war on terror has become a deadly sword hanging overhead, as the rate of suicide bombings is increasing day by day (Butt, 2009). The GDP growth rate is a major gauge of the strength of an economy; it was 9.0% in 2004-05 and fell to 2.0% in 2008-09. Expected annual spending by the Government of Pakistan is about $26 billion against expected revenues of about $20 billion, incurring a huge balance of payments (BOP) gap at a time when the entire donor community was also going through financial disintegration (Saleem, 2009).
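A simple arithmetic sketch of the figures just cited may make the scale of the problem clearer; the calculation below is illustrative only and uses no data beyond the numbers quoted above.

```python
# Figures quoted above for Pakistan (Saleem, 2009)
expected_spending_bn = 26.0   # expected annual government spending, US$ billion
expected_revenue_bn = 20.0    # expected annual revenues, US$ billion
financing_gap_bn = expected_spending_bn - expected_revenue_bn
print(f"Financing gap: ${financing_gap_bn:.0f} billion per year")  # $6 billion

growth_2004_05, growth_2008_09 = 9.0, 2.0   # GDP growth rate, per cent
print(f"Fall in GDP growth: {growth_2004_05 - growth_2008_09:.1f} percentage points")  # 7.0
```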
So far, the impact of the global financial crisis has been very limited, but a few credible threats still remain. The external sector still faces various threats in the form of a further fall in international demand. With respect to external financing, if existing conditions in international markets persist, the government will have to increase its dependence on support from bilateral and multilateral agencies (Shahnaz, 2010).
Due to fiscal constraints, dealing with the crisis is difficult for Pakistan. Balance of payments deficits forced the government of Pakistan to resort to an IMF standby arrangement that imposed further conditions on the budget. Subsidies on electricity, wheat, oil and fertiliser had to be phased out, which in turn amplified the inflationary burden on consumers. There are some social safety nets at the federal and provincial levels; access to these has generally become more difficult (Shahnaz, 2010).
As part of monetary policy management, the SBP has also introduced a number of reforms in the foreign exchange market. Prominently, the SBP decided to slowly but surely phase out the provision of foreign exchange for oil imports. Currently, the inter-bank market is meeting the foreign exchange requirement for the import of furnace oil (Shahnaz, 2010).
1.2.1 Banking Sector
Pakistan's banking sector is made up of 55 banks, which include four public sector banks, four specialised banks, 20 private commercial banks, seven foreign banks, five Islamic banks, eight development finance institutions and seven micro-finance banks (SBP, 2010).
According to the State Bank of Pakistan, Pakistan's banking sector has remained extremely strong and resilient, despite facing pressures arising from the weakening macroeconomic situation since late 2007. According to Fitch Ratings, the international credit rating agency, over the last four decades the Pakistani banking system has gradually progressed from a weak nationalized system to a somewhat improved and dynamic private-sector-driven system. Liquidity is tight, certainly, but that has more to do with heavy government borrowing from the banking sector and little to do with the financial crisis (Saleem, 2009).
1.2.2 Circular Debt
The circular debt in Pakistan has arisen because the government of Pakistan owes, and is not able to pay, billions of rupees to independent power producers (IPPs) and oil marketing companies (OMCs). The circular debt problem is critically affecting the operation of the whole energy value chain. Due to low cash balances and liquidity as a consequence of the debt problem, the companies have to resort to short-term financing at high interest rates. Refineries are having problems opening LCs to import crude oil due to mounting payables and receivables. IPPs like HUBCO and KAPCO are also having trouble purchasing oil and continuing operations (Saleem, 2009).
1.2.3 Karachi Stock Exchange (KSE)
Pakistan's largest and most liquid exchange is the Karachi Stock Exchange (KSE). In 2002, Business Week cited it as the 'best-performing stock market in the world'. On the last trading day of December 2008, the KSE listed a total of 653 companies, with an accumulated market capitalisation of Rs1.85 trillion ($23 billion). On 26th December 2007, the KSE-100 Index had its highest close ever at 14,814 points with a market capitalisation of Rs4.57 trillion ($58 billion); by comparison, on 23rd January 2009 the KSE-100 Index stood at 4,929 points with a market capitalisation of Rs1.58 trillion ($20 billion), a loss of over 65% from its peak (KSE, 2009). Foreign investment in the KSE stands at around $500 million. During the 2006 and 2007 calendar years, overseas investors were aggressively investing in KSE-listed securities (SBP, 2008).
Source: Karachi Stock Exchange (KSE, 2009)
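The percentage declines quoted for the KSE can be reproduced directly from the index and market-capitalisation figures above; the following sketch simply repeats that arithmetic.

```python
# KSE-100 figures quoted above (KSE, 2009)
peak_index, later_index = 14814, 4929    # 26 Dec 2007 vs 23 Jan 2009
peak_mcap, later_mcap = 4.57, 1.58       # market capitalisation, Rs trillion

index_drop = (peak_index - later_index) / peak_index * 100
mcap_drop = (peak_mcap - later_mcap) / peak_mcap * 100

print(f"KSE-100 index decline:         {index_drop:.1f}%")  # ~66.7%, i.e. over 65%
print(f"Market capitalisation decline: {mcap_drop:.1f}%")   # ~65.4%
```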
1.2.4 International Monetary Fund (IMF)
In November 2008, the IMF agreed to bail out Pakistan through a Stand-By Arrangement (SBA) valued at $7.6 billion. Two conditions had to be met: Karachi must cut its budget deficit from around 7% of GDP to 4.2% of GDP, and slightly raise tax revenue from 10% of GDP to 10.5% of GDP. The increase in taxation would further slow the economy, and that would lead to greater unemployment than before. This additional unemployment could bring Pakistanis out into the streets, and that would signal a full-blown political crisis. There is no denying that the country is in an alarming financial mess (Saleem, 2009). According to the IMF, the SBA package assumes real GDP growth of 3% in 2009, with a further two to three million added to unemployment as an end result (IMF, 2009).
1.3 Impact on Migrant Workers and Migration
The global financial crisis has led to a grim slowdown in worldwide economic growth and to extensive loss of jobs. ILO predictions state that there might be a remarkable increase in unemployment globally and in the numbers of working poor; global unemployment could rise by 18 million to 30 million persons in 2009, and by a further 50 million if conditions continue to worsen (ILO, 2009).
1.3.1 Discrimination, Violence and Xenophobia against Migrant Workers
It is critical that migratory work force do non go whipping boies of the bing crisis in advancement. Past studies highlight a growing in xenophobic and racist attitudes towards migrators, chiefly migratory workers. In Malaysia, these sorts of attitudes have provided sufficient justification for favoritism, and improper extinction of employment without payment of rewards. Among others, similar inclination have been celebrated in Russia and Thailand and every bit good as in United States and the United Kingdom excessively. A Russian NGO provinces that around 113 migrators were killed during first 10 months of 2008, duplicate the rate of the predating twelvemonth ( The Economist, 2009 ) . In United Kingdom, a figure of xenophobic expostulations have taken topographic point together with a wildcat work stoppage alongside employment of abroad labour ( BBC News, 2009 ) .
1.3.2 Migrant Workers and Job Losses
Chiefly, the crisis in Europe and the United States may have affected skilled emigrant workers in the finance sector. The burden of retrenchment resulting from the global downturn in economic activity is borne by construction, manufacturing and services, which are also the sectors with a high level of migrant employment. In the developed countries of Europe, North America and the Arab region, manufacturing activity has fallen. Spain, Greece, Portugal and Italy have a relatively high share of migrant workers engaged in construction, so the economic downturn in these countries consequently has an adverse impact on migrant employment (Azfar Khan, 2009).
In East Asia, where migrant workers mainly work in manufacturing enterprises, the decline in global consumer demand may have led to huge retrenchment. Services, hotels and restaurants are sectors that are also negatively affected by the current crisis. While no specific statistics are available on the actual extent of job losses for migrant workers and their problems, a variety of media reports suggest that they are at the forefront of employment cuts (Azfar Khan, 2009).
1.3.3 Few Employment Opportunities
Future employment opportunities for the migrant workforce also appear to be falling. Several governments have given priority to their national workers by accommodating them first. The governments of the United Kingdom and Australia have both recently declared policies of reducing skilled foreign workers to guarantee jobs for local graduates. Under this plan, the expectation is that 87,000 will return to their countries of origin (Azfar Khan, 2009).
1.3.4 Falling Remittances
Remittance flows are also on the ebb after showing little growth over the past few years. Some expect a rise in the number of migrant workers in irregular status and suggest that informal transfers may partly compensate for the fall in official remittances. Yet even a small fall in emigrants' remittances is likely to have extensive effects, particularly in environments where such transfers form a key fortification against poverty (Saleem, 2009).
1.3.5 Worsening Conditions of Work
Migrant workers may be forced to accept lower wages and endure an inferior working environment in an attempt to keep hold of their employment. Migrant workers, particularly women employees and those in uncertain status, are among the hardest hit and most exposed during crisis circumstances (Juan Somavia, 2008).
During a crisis, the principles of equal treatment for migrant workers and a rights-based approach to managing labour immigration need support. Origin and destination countries should drive policies responsive to the needs of all workers that assure at least basic labour standards. Migrant workers' protection is a key policy concern in the wake of job losses, consistent with the upholding of their basic human and labour rights (Azfar Khan, 2009).
Remittances are the most tangible and visible benefits of labour migration. At the macro level they bring in much-needed foreign exchange and contribute to correcting current account balances in countries of origin. In many countries, remittances represent a high proportion of GDP; they sustain demand and thereby stimulate economic activity, so that employment is created as a result (Ibrahim Awad, 2009).
At the household level, remittances can help reduce poverty and contribute to human capital growth through expenditures on education and health care. This is notable for development in the countries of origin of migrant workers. A drop in remittances is consequently troublesome for migrant workers, their families and their countries. It also bears highlighting that even if their total global value is smaller than Foreign Direct Investment (FDI), remittances are better distributed. They are the primary source of external financing for a great number of developing countries (Ibrahim Awad, 2009).
The current crisis is likely to reduce considerably the growth and size of total remittances, as it would negatively affect both the size of the migrant population and the amount remitted per capita. Migration is driven by the difference between the expected wage obtainable in the destination country and the actual wage received in the origin country (Ibrahim Awad, 2009).
The current crisis would lower wages in developed countries, squeezing the wage differential and the level of migration flows. The migrant stock may also be affected as some migrants may lose their jobs, thus increasing the rate of return migration or the level of unemployed migrants. Furthermore, the downturn may force even those who keep their jobs to reduce the amounts remitted, due for instance to a decrease in real wages (Ibrahim Awad, 2009).
Growing reliance on remittances will help in the short term to sustain consumer spending, whereas in the long term it could increase the exposure of consumer spending to external events. On a regional basis, Pakistan receives a moderate amount of remittances, with Bangladesh and India receiving 8.8% of GDP and 2.8% of GDP in remittances respectively in 2006-07 during the crisis (Media Eghbal, 2008).
1.4.3 Workers' Remittances and Capital Flows
Foreign investment has been held back by an under-pressure international economic environment, showing a decline of 47.5% during 2008-09 compared to the equivalent period of the previous year. A large part of this decrease has come in the form of an outflow of private portfolio investment of US$1 billion. Investment from countries such as the United Kingdom, the United States, Hong Kong and Singapore, which have been at the apex of the international crisis, has dropped significantly. Some Asian economies have observed a predictable fall in workers' remittances because of the rise in unemployment in advanced host economies. However, workers' remittances to Pakistan remained dynamic and unaffected by the crisis, totalling US$6.36 billion in July-April 2008-09 as compared to US$5.32 billion in the corresponding period of the previous year, thus showing a rise of 19.5% (Mohammed, 2009).
During the first two months of the current fiscal year 2010-2011 (July-August), remittances worth US$1.742 billion were sent home by Pakistani migrants residing abroad, showing an increase of US$198.86 million, or 13%, when compared with US$1.525 billion in the same period of the preceding year (Daily Times, 2010).
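For transparency, the growth rates cited in the two preceding paragraphs can be reproduced with simple arithmetic; the short Python sketch below uses only the figures quoted above and is illustrative, not part of the original study.

```python
# Reproduce the year-on-year remittance growth figures quoted above.
fy09_growth = (6.36 - 5.32) / 5.32        # Jul-Apr FY2008-09 vs. the previous year
fy11_growth = 0.19886 / 1.525             # US$198.86m increase on a US$1.525bn base

print(f"FY2008-09 (Jul-Apr): {fy09_growth:.1%}")   # ~19.5%
print(f"FY2010-11 (Jul-Aug): {fy11_growth:.1%}")   # ~13.0%
```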
1.4.4 Pakistan Remittance Initiative (PRI)
On 22 August 2009, the Pakistan Remittance Initiative (PRI) was launched as a joint undertaking of the State Bank of Pakistan, the Ministry of Overseas Pakistanis and the Ministry of Finance, with the aim of maintaining an economical, convenient, faster and well-organized flow of emigrants' remittances into Pakistan. Among the many steps that PRI is taking to improve the flow of workers' remittances are the design, processing and execution of a remittance hub for Pakistan. The hub will ensure that all remittances are received, processed and disbursed in an appropriate and suitable manner, so that the remittance hub becomes the medium of choice for all migrant workers and remitters to send remittance transactions (PRI, 2009).
On behalf of the PRI, the State Bank of Pakistan is the acquiring organisation for this assignment. Every action of this assignment is being carried out by SBP for PRI, and all references to SBP in this statement are intended to reflect that same spirit (PRI, 2009).
1.5 Statement of the Problem
“Global Financial Crisis and its Impact on Remittances in Pakistan”
1.6 Significance of the Study
This study helped the researcher broaden his horizons and gain insight into remittances. It will also increase his knowledge regarding the impact of the financial crisis on remittances in Pakistan and how theory can be applied as a solution.
This research paper will guide the Pakistan Remittance Initiative to focus more closely and to take sensible actions concerning transparency and consumer protection, payment system infrastructure, the legal and regulatory environment, market structure and competition, governance and risk management, and altruism. In general, local remittance companies with similar characteristics can also gain some insight into the problem areas that might come up later within their organisations and be proactive in managing them. This study will also be an important document for general reading on the subject, as there is no research material available on the impact of the financial crisis on remittances in Pakistan.
It would also provide an opportunity for the Financial Institution Division and Centralized Operations Divisions of the banks to match theory with practice.
1.7 Scope of the Study
The scope of the research is limited to the Pakistan Remittance Initiative (PRI) and the State Bank of Pakistan (SBP) head office, and covers financial and economic data from the current decade.
1.8 Limitations of the Study
Certain assumptions have been made in this study which may change in the future. These include factors like economic and political stability, the government's lack of intervention, and the quality criteria that have been used.
It's a New Day in Public Health.
The Florida Department of Health works to protect, promote & improve the health of all people in Florida through integrated state, county & community efforts.
WIC is a federally funded nutrition program for Women, Infants, and Children.
WIC provides the following at no cost: healthy foods, nutrition education and counseling, breastfeeding support, and referrals for health care.
FLHealthCHARTS is your one-stop-site for Florida public health statistics and community health data.
Order birth, death, divorce, and marriage certificates from the Department of Health.
Influenza or 'flu' is a viral respiratory illness, mainly spread by droplets made when people with flu cough, sneeze, or talk. Influenza can cause mild to severe illness. Serious outcomes of flu infection are hospitalization or death. Florida is currently experiencing a moderately severe influenza season. The best way to protect yourself from flu is to get vaccinated, practice good hand washing hygiene, and stay home/keep children home when sick. To find a vaccine, please visit
Office of the CMS Managed Care Plan
4052 Bald Cypress Way, Bin A06
Tallahassee, FL 32399
Standards for Systems of Care for Children and Youth with Special Health Care Needs Version 2.0
You Get What You Pay for: Measuring Quality in Value-Based Payment for Children’s Health Care
Alternate Payment Models (APM) Whitepaper
Overview of PCMH
PCMH and the Impact of Cost and Quality
Child Health Data
National Performance Measures
State Strategies for Medicaid Quality Improvement for Children and Youth with Special Health Care Needs
PROMIS Pediatric Measures
"The Future of CMS" Presentation
As technology increases our capacity to observe tiny biological systems, communication among cells has been observed with ramifications everywhere in biology. Until recently, it has been very difficult to observe the behavior of individual cells everywhere in the body, but especially in the vast brain. Observing molecular signals that are often sent a great distance, such as immune cytokines, is even harder. These data are only now appearing. For many years the brain was considered to have very little immune activity under normal circumstances. But, previous posts have described how T cells travel in the brain and how vital they are to normal brain function, even sending signals that stimulate normal cognition or alter cognition to produce the "sick feeling".
Just as neurotransmitters are known to be one vocabulary of the brain, immune cytokine networks are now found to be another. Immune cytokine signals are part of the normal brain conversations among all neurons and glia, and they are very important in infections and abnormal brain states. These signals can be sent either in larger synapses that include brain cells as well as immune cells, or can be secreted into tissue and travel to an immune cell.
With a much smaller number of immune cells in the brain than anywhere else, cytokines can have outsized importance in abnormal states. It is the brain cells themselves that produce most of the cytokines causing inflammation in neurodegenerative diseases involving protein clumps (e.g., Alzheimer's and Parkinson's). Immune cells, often coming from outside of the brain, are mostly involved in other types of inflammation, such as the autoimmune disease multiple sclerosis and brain infections called encephalitides.
There are now more than 300 identified cytokine signals, and all have the property of producing different effects in different cells and situations. Some of these are also called growth factors, interferons (IFN), interleukins (IL), chemokines (travel signals), and lymphokines. One cytokine can cause a cell to grow or do the opposite. Making their effects more complex, various cytokines operate in the context of other signals at the same time, and these co-stimulators alter their effects, so that one cytokine can produce many different, even opposite, effects. Immune signals in the brain can have varied effects based on circumstances such as the cell producing them, other stimulated cofactors, and the stage of inflammation.
One example is a factor that triggers a T cell that regulates inflammation; but if it is combined with the cytokine IL-6, the two together produce a cytokine that does the opposite (IL-17). In reality, many cytokines operate at the same time, and this makes it very hard to understand what is occurring. Also, each organ responds differently to the same signals in different circumstances.
The brain is unique because of the relatively small number of immune cells present, as well as the unique supportive glial cells that also serve as immune cells. Each region of the brain can respond differently to a cytokine. Inflammation in degenerative illness comes from glial signals, whereas other infections are triggered by signals from immune cells that enter the brain from the blood. Many other effects occur in the special environments produced by cancer cells, with communication among all the local supportive cells.
Signals in the Developing Brain
Signals coming from all cells, and even from the extracellular matrix, direct brain development in the fetus. Cytokines help by supporting brain cells from very early in fetal life. Several families of signals, such as TGFβ (in a family including BMP, or bone morphogenic proteins) and IL-6 (a family that includes LIF, leukemia inhibitory factor, and CNTF, ciliary neurotrophic factor), each have many variations within their protein superfamilies. Both are highly involved in determining the outcomes of stem cells producing neurons, astrocytes, and other glia in the brain. These signals are a vast subject in themselves.
The cytokine IL-34, produced by neurons of the cortex, olfactory regions, and hippocampus, is vital for microglia in the adult, but its effect during development is still questioned. Similar but different signals regulate macrophages in other tissues, which are relatives of microglia. IL-34 may be involved in Alzheimer's and affects the tight junctions of the blood-brain barrier. It is vital for normal function but might also play a role in disease.
Neuro Inflammation and Degeneration
Previous posts have described how neurons can produce all of the symptoms of inflammation, not just pain, as a form of neuroplasticity. But the word neuroinflammation is also used for inflammation in the brain generally, which can be quite different. This can arise from strokes and trauma, where astrocytes and microglia produce a large number of different cytokines. In diseases like multiple sclerosis (MS), immune cells come from outside of the brain and produce cytokines and autoimmune damage. Cytokines can increase damage and can stop inflammation; when networks are altered, either can happen.
Damage to Brain Tissues
Alarmins are signal molecules that alert glial cells to damage in the brain. With damage from amyloid-β peptide (Aβ) and tau, a wide range of receptors on microglia respond (SRA1, SRA2, CD36, RAGE), along with the better-known Toll-like receptors (TLRs). TLRs bind to the amyloid clumps themselves. When blood cells produce inflammation, an array of cytokines is produced, and these also respond to tissue damage—IL-2, IL-23, IL-6, IL-1β.
IL-33 and 34 are triggered by tissue damage and neurodegeneration. These are inside oligodendrocytes while they are making myelin and then released with damage. Astrocytes and microglia respond to this and produce many other cytokines that increase repair and decrease damage. But, they can also promote damage.
In chronic situations, like degenerative diseases that develop over many years, cells can become exhausted and alter their responses. Many opposite findings have emerged where the same signals hurt rather than help the situation.
Too much of the signal IL-6 increased abnormal amyloid. But, it also increased microglia efforts to eat it. TNF is increased with Alzheimer’s. Too much IL-1β reduced some amyloid and increased microglia
But, other research showed the exact opposite. That is, getting rid of these cytokines helped reduce amyloid and tau and helped the brain function.
It is true that many cytokines are produced when accumulations of abnormal proteins form. The signals come from glia and they increase inflammation—IL-1, 6, 12, 23, and TNF. These seem to help at first and then when they become chronic, they hurt the tissues. At first, they clear some of the abnormal protein, then they increase it. They hurt by producing as yet unknown other factors.
Inflammation Disease in the Brain
In dementia, there are few outside immune cells in the brain. The activity is from the glia functioning as immune cells. When outside immune blood cells invade, the situation is different. Some responses are the same from glia when leukocytes enter the brain—IL-1, IL-6, and TNF. But, the leukocyte cytokines take prominence. They stimulate glia to alter their production of signals and they call for more leukocytes to enter the brain.
Experimentally produced inflammation of the brain in mice is called EAE (for experimental autoimmune encephalomyelitis). In some ways it is similar to MS. It can be produced from antigens to myelin and other parts of the brain. The disease is caused by Th cells (T helper cells) that are uniquely produced in this situation. These Th cells that respond can be injected into others and produce the same disease. These Th cells produce specific cytokines.
There are cytokines that stimulate the production of this specific type of T cells and there are special cytokines that are produced by the T cells.
IL-23 comes from both glia and APCs (antigen presenting cells). We are just learning that, like T cells, there are many different types of APCs with different actions. In the IL-12 cytokine superfamily, IL-12 is made of the subunits p35 and p40, while IL-23 is made up of the subunits p19 and p40. APCs are the cells that present antigens to T cells to activate them. This occurs in lymph tissues outside of the brain, and through this interaction the T cells are altered. IL-23 helps produce MS and EAE and stimulates the change to Th cells before they enter the brain. It is not known what the small amount of IL-23 produced by microglia does.
As well, IL-1 and 6 outside of the brain are both necessary to make Th cells and maintain them. IL-6 is necessary to stimulate EAE. In the brain these cytokines keep the Th cells in the state that causes damage. Their natural tendency without these cytokines is to help not hurt the brain inflammation.
T cells are very dynamic and can alter their appearance and function continually. They need stimulation to continue a particular activity. The environment with these many signals can allow T cells to totally change their types. Also, T cells can produce many different cytokines at once producing variable behaviors and effects.
In the brain, T cells can change to produce IL-17A and IL-17F, and IL-22, as well as others. The cells that pick up these signals are connective tissue cells, astrocytes, and lining cells. They produce holes in the blood-brain barrier, which allows more blood cells and T cells to come into the brain. Astrocytes then produce damaging signals and call for more blood cells to enter the brain.
Cytokines that Increase Inflammation – IFN and TNF
Although some cytokines associated with MS, for example, appear to increase inflammation, this is not certain since there are so many factors and each cytokine has multiple, opposite effects. Both injecting and blocking a factor increased inflammation. Results vary with the timing of injections—early application increased inflammation and later application did the opposite. IFNγ can stop Th cells and IL-17. Without this cytokine there are more Th cells. With it, astrocytes produce cytokines, and it also increases tight junctions in the blood-brain barrier (BBB). Surprisingly, the same IFNγ has different effects in the cortex and spinal cord, with opposite astrocyte signaling. In experiments, inflammation was down in the brain and up in the spinal cord. This could reflect different effects on the BBB versus the blood-CSF barrier.
One signal is produced only by lymphocytes and therefore doesn’t cause brain protein clumping. But, T cells with it can decrease abnormal amyloid. Meanwhile, other research showed increased amyloid plaques.
TNF is another signal that goes both ways and can be sent from many kinds of cells. It is soluble, found on membranes, and triggers many different receptors. It can kill oligodendrocytes possibly from less astrocyte uptake of glutamate. It helps T cells cross the BBB with many more adhesion molecules for the transfer with signals from lining cells including astrocytes.
When TNF was blocked with medications, MS relapsed more. TNF effects involve multiple receptors.
Conversations of T Helper Cells (Th) with Other Immune Cells
Although it has been shown that leukocytes have effects in the brain, the effects of helper T cells are less clear as they cause damage related to MS. These T cells seem to be the organizers of the other cells, including those that eat debris—phagocytes. These phagocytes can also cause damage and produce damaging reactive oxygen species (ROS). Microglia are closely related to the phagocytes that come from outside the brain. Neutrophils also help cause MS. Helper T cells can signal to all of these, increasing damage. Multiple different vital cytokines and chemokines regulate the actions of all of these cells.
Another cell type gets into the act when monocytes are attracted from the blood and become phagocytes in the brain; they can eat myelin.
Immune Signals in the Brain
Cytokines have been known to contribute to MS for some time. But, now these same cytokine pathways are shown to be involved in all degenerative brain diseases. MS has leukocytes producing cytokines causing more inflammation and destruction. Glia cells are now known to be involved. Now conversations are found using many different new cytokines with many different immune cells in the brain, notably helper T cells. These T cells talk with all other cells.
The question is not whether cytokines influence disease, but what the exact conversations are. The same signals from varied cells will cause different effects. Multiple overlapping conversations also influence the outcome. Finding the exact signals will determine future treatments for many different brain diseases.
- Research article
- Open Access
- Open Peer Review
Lifestyle choices and mental health: a longitudinal survey with German and Chinese students
BMC Public Health volume 18, Article number: 632 (2018)
A healthy lifestyle can be beneficial for one’s mental health. Thus, identifying healthy lifestyle choices that promote psychological well-being and reduce mental problems is useful to prevent mental disorders. The aim of this longitudinal study was to evaluate the predictive values of a broad range of lifestyle choices for positive mental health (PMH) and mental health problems (MHP) in German and Chinese students.
Data were assessed at baseline and at 1-year follow-up. Samples included 2991 German (Mage = 21.69, SD = 4.07) and 12,405 Chinese (Mage = 20.59, SD = 1.58) university students. Lifestyle choices were body mass index, frequency of physical and mental activities, frequency of alcohol consumption, smoking, vegetarian diet, and social rhythm irregularity. PMH and MHP were measured with the Positive Mental Health Scale and a 21-item version of the Depression Anxiety and Stress Scale. The predictive values of lifestyle choices for PMH and MHP at baseline and follow-up were assessed with single-group and multi-group path analyses.
Better mental health (higher PMH and fewer MHP) at baseline was predicted by a lower body mass index, a higher frequency of physical and mental activities, non-smoking, a non-vegetarian diet, and a more regular social rhythm. When controlling for baseline mental health, age, and gender, physical activity was a positive predictor of PMH, smoking was a positive predictor of MHP, and a more irregular social rhythm was a positive predictor of PMH and a negative predictor of MHP at follow-up. The good fit of a multi-group model indicated that most lifestyle choices predict mental health comparably across samples. Some country-specific effects emerged: frequency of alcohol consumption, for example, predicted better mental health in German and poorer mental health in Chinese students.
Our findings underline the importance of healthy lifestyle choices for improved psychological well-being and fewer mental health difficulties. Effects of lifestyle on mental health are comparable in German and Chinese students. Some healthy lifestyle choices (i.e., more frequent physical activity, non-smoking, regular social rhythm) are related to improvements in mental health over a 1-year period.
Mental health is recognized as a critical component of public health . The need for health promotion, prevention, and treatment programs for mental disorders is one of the primary health challenges of the twenty-first century . The World Health Organization (WHO) describes mental health as a “state of well-being in which every individual realizes his or her own potential, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to her or his community” . In line with this definition, mental health researchers increasingly acknowledge that the absence of mental illness does not necessarily imply a state of psychological well-being [4,5,6]. Mental health problems (MHP) can be defined as the presence of psychopathological symptoms (e.g., depressive mood, excessive anxiety, or compulsive behavior) that indicate mental disorders defined in the classification systems of the American Psychiatric Association or the WHO . Two theoretical approaches are most relevant for positive mental health (PMH), as it is interpreted in this study: From a hedonic perspective, PMH includes positive affect, positive mood, and high life-satisfaction, whereas from an eudaimonic perspective, PMH is an optimal functioning of an individual in everyday life (for more information, please see [4, 9, 10]. Thus, PMH is defined as the presence of general emotional and psychological well-being . For the current study, both hedonic and eudaimonic approaches were taken into account and assessed with the Positive Mental Health Scale .
Lifestyle and health
It is commonly known that leading a healthy life can be beneficial for one’s well-being. But what exactly does a healthy lifestyle entail? According to the WHO, a healthy lifestyle means to engage in regular physical activity, to refrain from smoking, to limit alcohol consumption, to eat healthy food in order to prevent overweight. These behaviors should lead not only to better physical health, but also foster mental well-being . The WHO fact sheet is supported by research data showing that engaging in sports or moderate to rigorous physical activity , partaking in cultural or mental activities [15, 16], refraining from smoking , practicing moderation in alcohol consumption , maintaining a body mass index (BMI) within the range of normal weight , and following a healthy diet can have positive health effects and reduce the risk of various somatic diseases, including cancer , heart disease , or stroke . In addition to the relevance of lifestyle for physical health, findings concerning the importance of healthy lifestyle choices for both PMH and MHP are accumulating, with prospective studies consistently finding a bidirectional relationship between lifestyle and mental health variables [24, 25]. More precisely, studies showed that lifestyle can have a positive effect on symptoms of depression and anxiety [25, 26], life satisfaction , and self-perceived general mental health [28,29,30]. In order to investigate the relevance of lifestyle on mental health we selected seven lifestyle factors that, according to the WHO fact sheet and our own data [31,32,33], show significant associations to mental health outcomes.
Which lifestyle choices are beneficial to mental health?
Body mass index
Obesity—which describes severe overweight with a BMI higher than 30 —is associated with worse MHP, especially with self-reported symptoms of depression, anxiety, and stress . In a sample of 886 midlife women, higher BMI was associated with symptoms of anxiety and depression and also with lower PMH, measured with a mental health subscale of the Medical Outcomes Study Short-Form 36 (SF-36; ), a questionnaire that measures well-being and quality of life . In another population-based study of 7937 adults, higher BMI was associated with higher MHP, namely symptoms of depression, anxiety, and stress, and also predictive of lower PMH measured with the Satisfaction with Life Scale , a short measure designed to assess the judgmental component of personal well-being . A meta-analysis of longitudinal studies that assessed the relationship between overweight—a BMI between 25 to 29.99—obesity, and mental health in 58,745 participants showed that both overweight and obesity predicted future symptoms of depression as well as onset of a depressive disorder .
In a review of prospective studies, physical activity was identified as a protective factor against the risk of developing depression . In patients with chronic medical conditions, exercise training interventions reduced symptoms of anxiety. Compared to non-exercise control conditions, a small reduction in anxiety symptoms was found (d = 0.29) . Individuals diagnosed with Major Depression participating in an aerobic exercise intervention showed significant improvements in depression comparable to participants receiving pharmacological treatment . In a meta-analysis of intervention studies in older adults, aerobic exercise (d = 0.29) and moderate intensity training (d = 0.34) were most beneficial for participants’ psychological well-being .
Mental or cultural activities
Receptive (e.g., visiting museums or concerts) and creative (e.g., playing an instrument or painting) cultural or mental activities were associated with lower MHP, namely symptoms of anxiety and depression, and higher PMH, namely life satisfaction [33, 43]. In a large longitudinal study including more than 16,000 middle-aged individuals, cultural leisure-time activities were predictive of lower MHP at follow-up 5 years later . Not all studies supported the relevance of mental activities for mental health. In a prospective study with first-year medical students, leisure-time activities such as playing music or being active in a religious community were not predictive of self-reported mental health at follow-up 1 year later . As bivariate correlations and non-significant regression coefficients for cultural activities and mental health were not reported, the findings of this study should be interpreted with caution.
The relationship between alcohol consumption and mental health is quite controversial. While some studies identified a nonlinear relationship—with elevated risks for depression and anxiety for abstainers and heavy drinkers as compared to light/moderate drinkers —other studies did not find a meaningful correlation between alcohol consumption and symptoms of MHP . Individuals who identify themselves as alcohol abstainers show higher levels of anxiety and depression in comparison to those who do not report alcohol consumption but do not label themselves abstainers . It is, however, problematic to interpret these findings to support the positive effects of moderate alcohol consumption, as many individuals who abstain from alcohol do so for a reason, such as a history of alcohol abuse or other health issues . To identify the relevance of other confounding variables in the relationship between a facet of PMH, life satisfaction, a series of analyses was conducted using a large population sample in Russia . The u-shaped relationship between alcohol consumption and life satisfaction that was found in men and women, was flattened when sociodemographic factors (e.g., age, gender, and occupational status), smoking, and BMI where included as control variables. In other words, the positive relationship between moderate alcohol consumption and mental health is likely to be influenced by other sociodemographic characteristics or lifestyle factors and not caused by the alcohol consumption itself .
Smoking has been identified as a risk factor for MHP [49, 50]. A meta-analysis of prospective studies with follow-up periods between 7 weeks and 9 years showed that individuals who quit smoking experience a significant decrease in MHP—symptoms of depression, anxiety, and stress—and an increase in PMH—psychological quality of life and positive affect—compared to continuing smokers . In line with these findings, a Dutch study with more than 5000 participants showed that individuals who quit smoking do not experience a loss in life satisfaction, but rather an increase in well-being . During early adolescence, smoking is associated with MPH and especially symptoms of depression . The relationship between mental health and smoking is bidirectional: Young smokers with anxiety and depression are more likely to develop a nicotine addiction in early adulthood .
Compared to other lifestyle choices, a vegetarian diet has only rarely been investigated in the context of mental health. In a prospective study of more than 9000 young women in Australia, vegetarians and semi-vegetarians (i.e., individuals who excluded red meat, but would eat other meat, poultry, and fish) were more likely to experience mental health problems such as symptoms of depression and anxiety, or sleeplessness . They also reported lower PMH as measured by the SF-36. In a representative study of German adults, individuals who reported a vegetarian diet were more likely to be diagnosed with depressive, anxiety, somatoform, or eating disorders . These findings are in stark contrast to the proposed positive effects of a vegetarian diet on physical health (e.g., ). Authors propose that these findings may be caused by psychological traits that influence both eating habits and mental health such as perfectionism. An alternative explanation is based on the finding that mental health difficulties often precede a vegetarian diet. Individuals who experience mental health difficulties may try to modify their behavior in a way that is perceived as healthier or their mental health issues may sensitize them to the suffering of animals . The relevance of a vegetarian diet for PMH and MHP has not been investigated in a prospective study including a variety of other lifestyle choices. As prevalence of a vegetarian diet varies drastically between, for example Asian and European countries , the lack of cross-cultural studies including more than one country is another shortcoming in the literature .
Social rhythm irregularity
The association between MHP and disturbances of circadian rhythms is documented, especially for schizophrenia, bipolar disorder, and depression. Disruptions of the circadian rhythm may trigger or exaggerate manic episodes. There is evidence that the circadian system also influences one's capacity for mood regulation. Additionally, an irregular social rhythm, which includes social contacts, is also associated with mood disorders in elderly patients. In one of the first population-based studies investigating rhythm irregularity and mental health, irregular circadian and social rhythms were both associated with more MHP and lower life satisfaction in a German sample. This finding was replicated cross-culturally in representative Russian and Chinese samples. Prospective studies concerning the relevance of social rhythm irregularity for future MHP and PMH are still lacking.
While some longitudinal studies indicate the relevance of lifestyle factors for future MHP, evidence for the prediction of future PMH is much rarer. In addition, most studies focus on one or two lifestyle choices and do not investigate the relative importance of a broad range of behaviors for PMH and MHP. As many lifestyle choices are interrelated—for example, individuals who exercise regularly are less likely to be overweight or obese—those studies do not explain the individual contribution of certain lifestyle choices on mental health outcomes. Lastly, the majority of studies only include U.S. American or European participants. The focus on Western samples precludes cross-cultural conclusions about the relevance of lifestyle choices for PMH and MHP. Germany is an individualistic Western country which has undergone structural changes in the 1990s (specifically the reunification of West and East Germany) . In contrast, China is a collectivistic Asian country [63, 64] in which old values and traditions interact with a rapid economical and technical development (e.g., ). Thus, for this study, data from German and Chinese students were analyzed as both countries differ regarding various cultural, historical, social, and geographical conditions.
The present study
The aim of this study was to overcome these limitations by investigating the impact of seven major lifestyle factors—BMI, physical and mental activities, alcohol consumption, smoking, vegetarianism, and social rhythm irregularity—on concurrent and future PMH and MHP in two large student samples from Germany and China. We predicted that a lower BMI, a higher frequency of physical and mental activities, a lower frequency of alcohol consumption, non-smoking, a non-vegetarian diet, and a more regular social rhythm would be predictive of better mental health at baseline, operationalized as higher PMH and fewer MHP. As longitudinal studies including a variety of lifestyle factors are lacking, no predictions about the relevance of specific lifestyle factors for the prediction of future mental health were made. We expected, however, that mental health at baseline would predict mental health at follow-up.
The present study was conducted as part of the Bochum optimism and mental health studies (BOOM-studies), which aim to investigate risk and protective factors of mental health in population-based and student samples with cross-sectional and longitudinal assessments across different cultures. Data presented in this study were collected in different student cohorts between 2012 and 2016 in a German (Ruhr-Universität Bochum) and three Chinese universities (Capital Normal University Beijing, Hebei United University, and Nanjing University). Participants in both countries were sent an invitation to the study via email. They received no incentives for participation. Data were assessed through self-administered surveys (i.e., paper-pencil and online surveys in China and an online survey in Germany). Depending on the assessment method, participants gave their informed consent written or online after being informed about anonymity and voluntariness of the survey. Language specific versions of psychometric instruments were administered using the forward-backward-translation method . In case of discrepancies, the procedure was repeated until complete agreement was achieved. All procedures were carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki (2013). The Ethics Committee of the Faculty of Psychology of the Ruhr-Universität Bochum approved the study in total (Reference number 073). Since the data were anonymized from the beginning of data collection, no statement by an ethics committee was required in China. The participating Chinese Universities were, however, informed and acknowledged the approval by the German ethics board. The Chinese sample included students below age 18. Chinese laws grant inscribed university students of all ages the rights to decide for themselves about study-related issues including participation in studies. Thus, no consent to participate was collected from the parents or guardians.
In total, 2,991 German and 12,405 Chinese participants had valid data at baseline, which was defined as having data for at least half of lifestyle predictors (see Additional file 1). German participants, Mage = 21.69, SD = 4.07, range = 15 to 65, were significantly older than Chinese participants, Mage = 20.59, SD = 1.58, range = 15 to 36, t(14461) = 23.05, p < .001, d = 0.47. Both samples included more female than male participants (German sample: 58.9% female; Chinese sample: 61.9% female). In the German sample, 50.6% reported being in a committed partnership, while in the Chinese sample 20.8% did so, χ2(1) = 1083.37, p < .001, d = 0.55. Six-hundred and thirty-six German and 8,933 Chinese students had at least one valid value at follow-up.
Body mass index
Body mass index (BMI) was calculated from weight and height as weight divided by height squared (kg/m2). According to the WHO, overweight is a BMI greater than or equal to 25, and obesity is a BMI greater than or equal to 30 . Height and weight were assessed via self-report. Self-reported measurements of height and weight have been found to be very reliable, except for highly obese individuals. For this group, a slight underestimation of weight has been reported .
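The BMI computation and the WHO cut-offs described above can be summarized in a few lines of code. The sketch below (Python) is illustrative only; the function names and category labels are ours and are not taken from the study's analysis scripts.

```python
# Minimal sketch of the BMI computation and the WHO cut-offs cited above.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def who_category(value: float) -> str:
    if value >= 30:
        return "obesity"
    if value >= 25:
        return "overweight"
    return "below overweight threshold"  # finer cut-offs exist but are not used here

example = bmi(70.0, 1.75)
print(round(example, 1), who_category(example))  # 22.9 below overweight threshold
```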
Frequency of physical and mental activity
Frequency of physical as well as cultural activities was assessed using items rated on a scale ranging from 0 (none) to 3 (more than 4 times a week): “Do you exercise/engage yourself in a mental activity regularly? If yes, with what intensity have you done so in the last 12 months?” Participants were provided with examples for physical (i.e., sports and intensive physical work) and mental (i.e., reading, going to the movies/theater, making music) activities. Single-item measures of those activities are characterized by an acceptable reliability and construct validity compared to objective measurement methods and perform at least as well as longer questionnaires . Research has also shown that such items show especially high criterion validity in younger and female participants .
Alcohol consumption
Frequency of alcohol consumption was assessed using the item “How often do you drink alcohol?” Answer categories were never, once a month, 2 to 4 times a month, 2 to 3 times a week, and 4 times a week and more. While there is an on-going debate about the validity of self-reported alcohol consumption compared to objective data, recent studies conclude that self-report can reliably estimate alcohol consumption in low to moderate drinkers. Such items are regularly used in epidemiological studies on alcohol consumption.
Smoking
Current smoking was assessed using one item: “Do you smoke regularly?” Answer categories were ‘no’, ‘yes, sometimes’, and ‘yes, regularly’. For the present analyses, the two latter categories were combined into ‘yes’, which was coded as 1. ‘No’ was coded as 2.
Vegetarian diet
A vegetarian diet pattern was assessed with the item “Do you currently follow a vegetarian diet?” Answer categories were ‘no’, ‘yes (no meat and no fish)’, ‘yes (no meat)’, and ‘vegan’. For this study, the latter three categories were combined into ‘yes’. Similar items were used in previous large-scale studies on eating habits and mental health.
Social rhythm irregularity
Irregularity of social rhythm was assessed using the Brief Social Rhythm Scale which includes 10 items measuring an individual’s perceived social rhythm regarding sleep, meals, wake-up time, and social contacts on a scale ranging from 1 (very regularly) to 6 (very irregularly). In the present study, internal consistency was acceptable to good with α = .78 and .90 in the German and Chinese sample, respectively. Validity evidence comes from data indicating that the full scale is related to physical health, consistent with past research on rhythmicity and certain aspects of mental health [31, 33].
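For readers unfamiliar with the internal-consistency coefficient reported here, the following Python sketch shows how Cronbach's alpha is typically computed from a participants-by-items response matrix; the toy data are invented for illustration and are unrelated to the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Cronbach's alpha for a participants x items response matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Toy data: 4 respondents x 3 items (real use: N x 10 for the BSRS, rated 1-6)
toy = np.array([[1, 2, 1],
                [3, 3, 4],
                [5, 4, 5],
                [6, 6, 5]])
print(round(cronbach_alpha(toy), 2))  # 0.96 for this toy matrix
```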
Positive mental health
The unidimensional Positive Mental Health Scale was used to assess PMH. The PMH-scale is a self-report instrument consisting of nine non-specific judgments and was developed to measure eudaimonic and hedonic aspects of well-being. Targeted at general psychological functioning, subjects were asked to indicate their agreement on a 4-point Likert scale ranging from 0 (do not agree) to 3 (agree). Example items are “I am often carefree and in good spirits” and “I feel that I am actually well equipped to deal with life and its difficulties”. High internal consistency, good retest-reliability, as well as convergent and discriminant validity were confirmed in a series of six studies comprising samples of students, patients, and general populations. Cronbach’s alpha in the present study was α = .93 in the German and .92 in the Chinese sample.
Mental health problems
The negative emotional states of depression, anxiety, and stress—which are subsumed under MHP in this study—were assessed with the Depression Anxiety Stress Scales-21 (DASS-21) , a short version of the Depression Anxiety Stress Scales . The DASS-21 consists of 21 items, 7 for each subscale, rated on a 4-point Likert scale ranging from 0 (did not apply to me at all) to 3 (applied to me very much, or most of the time). Summing across the complete scale yields a total score ranging from 0 to 63. The DASS-21 is a widely used instrument with good psychometric properties . Cronbach’s alpha for the complete scale was α = .92 in the German and α = .95 in the Chinese sample.
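A hedged sketch of the total-score computation described above is given below (Python). Only the overall sum is shown; the subscale item keys are not reproduced here because they are not listed in the text.

```python
# Illustrative DASS-21 total scoring: 21 items rated 0-3, total range 0-63.
from typing import Sequence

def dass21_total(responses: Sequence[int]) -> int:
    if len(responses) != 21 or not all(0 <= r <= 3 for r in responses):
        raise ValueError("Expected 21 ratings, each between 0 and 3.")
    return sum(responses)

print(dass21_total([1] * 21))  # 21
```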
Data were analyzed using SPSS 24 and Mplus 7.4 . Missing data analysis at baseline showed < 1% of missing data concerning most outcome and predictor variables. Three percent of participants had missing values for MHP, 6.1% did not report their age, and 6.9% did not provide valid height and weight estimates to calculate BMI. Missing values were not replaced. For the primary hypotheses, path analysis was used to examine whether the lifestyle factors explained current and predicted future PMH and MHP. To evaluate whether the proposed model would fit the data of German and Chinese students equally well, a multi group path analysis was conducted. To control for differences in sample size, a case-control sample of the Chinese students, matched for age and gender, was randomly drawn and included in this analysis. Cohen’s d was calculated as the effect size measure (small effect: d ≥ .20, medium effect: d ≥ .50, large effect: d ≥ .80) .
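As an illustration of the effect-size measure mentioned above, the snippet below computes Cohen's d with a pooled standard deviation (one common variant; the authors do not state which formula they used), applied to the age statistics reported earlier for the two samples.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2) -> float:
    """Cohen's d using a pooled standard deviation (one common variant)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Mean age of German vs. Chinese students as reported in the sample description
d = cohens_d(21.69, 20.59, 4.07, 1.58, 2991, 12405)
print(round(d, 2))  # ~0.48, close to the reported d = 0.47 (rounding/formula differences)
```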
To assess whether the proposed model would fit the data, three fit indices were inspected . The comparative fit index (CFI) compares the hypothesized model’s χ2 with that resulting from the independence model. For an acceptable fit, CFI values above .90 are recommended; a good model fit requires values above .95 . The Root Mean Square Error of Approximation (RMSEA) measures the difference between the reproduced covariance matrix and the population covariance matrix, with values less than .06 reflecting a small approximation error, indicating a good model fit, values between .08 and .10 a mediocre fit and values above .10 a poor model fit . For the standardized root-mean-square residual (SRMR), values smaller than .09 indicated a good fit . Due to the large sample size, the χ2-values were not interpreted.
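The cut-offs just described can be expressed as a small helper for readers who want to apply them. This is an illustrative sketch only; in particular, the handling of RMSEA values between .06 and .08, which the text does not classify, is our assumption.

```python
def fit_summary(cfi: float, rmsea: float, srmr: float) -> dict:
    """Classify model fit using the cut-offs quoted in the text."""
    if cfi >= .95:
        cfi_label = "good"
    elif cfi >= .90:
        cfi_label = "acceptable"
    else:
        cfi_label = "poor"

    if rmsea < .06:
        rmsea_label = "good"
    elif rmsea < .08:
        rmsea_label = "acceptable"  # assumption: this range is not classified in the text
    elif rmsea <= .10:
        rmsea_label = "mediocre"
    else:
        rmsea_label = "poor"

    srmr_label = "good" if srmr < .09 else "poor"
    return {"CFI": cfi_label, "RMSEA": rmsea_label, "SRMR": srmr_label}

# Values reported for the multi-group model in the Results section
print(fit_summary(cfi=.955, rmsea=.043, srmr=.048))
# {'CFI': 'good', 'RMSEA': 'good', 'SRMR': 'good'}
```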
Table 1 shows the sample characteristics regarding lifestyle variables and mental health outcomes for Chinese and German participants. In the German sample, no significant differences in baseline PMH, t(2977) = 0.08, p = .934, d = 0.00, or MHP, t(2622) = 0.08, p = .935, d = 0.00, emerged between baseline-only and other participants, meaning that participation at follow-up was not associated with baseline mental health. In the Chinese sample, baseline-only participants were comparable concerning baseline PMH, t(12737) = −1.59, p = .111, d = 0.03, but showed slightly higher levels of baseline MHP, t(12304) = 2.21, p = .027, d = 0.04, than Chinese students who participated at both assessment points. This group difference, however, was very small. For descriptive purposes, Table 2 shows non-parametric bivariate correlations between all predictor and outcome variables.
Lifestyle choices and mental health in German students
PMH and MHP were negatively correlated at baseline, r = −.55 (2960), p < .001, and follow-up, r = −.50 (2960), p < .001. Explained outcome variance by lifestyle factors at baseline was 13.2% for MHP, and 13.4% for PMH. Baseline lifestyle explained 10.8% of variance in PMH, and 10.5% of variance in MHP. Table 3 shows the results of the regression paths of the path analysis for the prediction of current and future mental health by lifestyle choices in German students.
Positive mental health
Female gender, a higher body mass index, smoking, vegetarian diet, and social rhythm irregularity were predictive of lower PMH at baseline. Higher frequency of physical activity, mental activity, and alcohol consumption were positive predictors of baseline PMH. Effects were mostly small. Somewhat larger effects were found for physical activity (d = 0.29) and social rhythm irregularity (d = 0.59). PMH and MHP were both predictive of PMH at follow-up. In addition, only social rhythm irregularity was a negative predictor of future PMH.
Mental health problems
Male gender, a higher body mass index, smoking, a vegetarian diet, and an irregular social rhythm were positive, physical and mental activity were negative predictors of baseline MHP. Effects were small, except for social rhythm irregularity which had a medium-sized effect (d = 0.68). PMH and MHP were both predictive of MHP at follow-up. In addition, being female and having a more irregular social rhythm at baseline were positive predictors of future MHP.
Lifestyle and mental health in Chinese students
PMH and MHP were negatively correlated at baseline, r = −.39 (10803), p < .001, and follow-up, r = −.36 (10803), p < .001. Explained outcome variance by lifestyle factors at baseline was 12.8% for PMH, and 16.1% for MHP. Baseline lifestyle explained 6.4% of variance in PMH, and 8.1% of variance in MHP. Table 4 shows the results of the regression paths of the path analysis for the prediction of current and future mental health in Chinese students.
Positive mental health
At baseline, a higher frequency of alcohol consumption, smoking, and a higher social rhythm irregularity were predictive of lower PMH. Being older and reporting a higher frequency of physical and mental activities were positive predictors of PMH. Effects were mostly small. A medium effect was found for social rhythm irregularity (d = 0.62). PMH and MHP were both predictive of PMH at follow-up. In addition, male gender, older age, and higher frequency of physical and mental activities were positive predictors of future PMH. Vegetarian diet and a more irregular social rhythm were negative predictors of future PMH. All effects of lifestyle factors on future PMH were small.
Mental health problems
At baseline, female gender, a higher body mass index, more frequent alcohol consumption, smoking, a vegetarian diet, and an irregular social rhythm were positive, frequency of physical and mental activities as well as higher age were negative predictors of MHP. Effects were small, except for social rhythm irregularity which had a medium-sized effect (d = 0.68). PMH and MHP were both predictive of MHP at follow-up. In addition, younger and female participants, those with fewer mental activities, more frequent alcohol consumption, smokers, vegetarians, and those having a more irregular social rhythm reported higher MHP at follow-up. All effects of lifestyle factors on future MHP were small.
Lifestyle and mental health across samples
To evaluate the impact of lifestyle choices for PMH and MHP across samples, a multi-group path analysis was conducted including all German participants (n = 2800) as well as a randomly selected sample of Chinese students (n = 2745) that was matched for age and gender (see Additional file 2). While correlations between PMH and MHP, intercepts, and residual variances were allowed to differ, all regression paths were set equal between countries. Different indices supported a good fit of this model (RMSEA = .043, CFI = .955, SRMR = .048). In German and Chinese students, explained outcome variance by lifestyle factors at baseline was 12.5% and 11.2% for PMH, and 11.9% and 13.1% for MHP, respectively. Lifestyle at baseline explained 5.8% and 6.0% of variance in future PMH, and 7.6% and 8.5% of variance in future MHP in German and Chinese students. Table 5 shows the results of the regression paths of the path analysis for the prediction of current and future mental health in our multi-group model of German and Chinese students.
Positive mental health
At baseline, female gender, a higher body mass index, smoking, a vegetarian diet, and a more irregular social rhythm were predictive of lower PMH. In addition, higher frequencies of physical and mental activities were positive predictors of PMH. Effects were small, except for social rhythm irregularity which had a medium effect on PMH (d = 0.56). PMH was a positive and MHP were a negative predictor of PMH at follow-up. In addition, reporting higher frequency of physical activities was a positive predictor, while a more irregular social rhythm was a negative predictor of future PMH. All effects of lifestyle factors on future PMH were small.
Mental health problems
At baseline, female gender, higher body mass index, more frequent alcohol consumption, smoking, a vegetarian diet, and an irregular social rhythm were positive, older age and a higher frequency of physical and mental activities were negative predictors of MHP. Effects were small, except for social rhythm irregularity which had a medium-sized effect on MHP (d = 0.56). PMH was a negative and MHP were a positive predictor of MHP at follow-up. In addition, female gender, younger age, smoking, and having a more irregular social rhythm at baseline were positive predictors of future MHP. All effects of lifestyle factors on future MHP were small.
The primary objective of this study was to evaluate the predictive value of a broad range of lifestyle choices for PMH and MHP in a cross-cultural study using two longitudinal student samples from Germany and China. In both samples, the lifestyle factors under investigation explained a substantial amount of variance in mental health outcomes at baseline and at follow-up. In a multi-group model including samples of German and Chinese students that were matched for gender and age, some lifestyle choices—physical activities, smoking, and social rhythm irregularity—were predictive of future PMH and/or MHP even when controlling for age, gender, and baseline mental health. A good fit of this multi-group model indicated that, overall, the impact of lifestyle on PMH and MHP was comparable across countries. These findings suggest that choosing healthier lifestyle behaviors can increase psychological well-being and reduce symptoms of depression, anxiety, and stress. We found, however, some differences in lifestyle between German and Chinese students as well as a differential impact of certain lifestyle choices on the mental health of the two groups. In the following section, we will first describe differences in lifestyle and mental health between the two student samples and proceed to discuss which hypotheses were supported cross-culturally or within one of our samples.
Lifestyle choices in German and Chinese students
German and Chinese students lead different lifestyles. Medium to large group differences were found for BMI and alcohol consumption, with higher values for German compared to Chinese students. Despite increasing levels of overweight and obesity in Asian children, adolescents, and adults, high BMI values are still much more prevalent in Europe than in Asia. In addition, Germany is among the countries with the highest alcohol consumption rates worldwide. China has undergone substantial social and economic changes with increasing urbanization, changes to traditional family structure, and developments towards a free market, which have been accompanied by an increase in alcohol consumption and changed dietary habits shifting away from high-carbohydrate foods toward high-fat, high-energy-density foods. Our findings, however, suggest that on an absolute level, German students still surpass their Chinese counterparts with respect to these two lifestyle factors.
The remaining differences, even though statistically significant, were minimal. Chinese students reported engaging in mental and physical activities more frequently, as well as having a more regular social rhythm. Fewer Chinese students were smokers. Overall, Chinese students lead a somewhat healthier lifestyle than German students, except that more Chinese participants indicated a vegetarian diet. It may, however, be an oversimplification to label a meat-free diet an unhealthy lifestyle choice, as the positive effects of a vegetarian diet on physical health are well documented and research on the relationship between vegetarianism and mental health is still in its infancy (see the following section for more information).
Mental health in German and Chinese students
Chinese students showed higher levels of PMH at baseline than their German counterparts (d = 0.27). When comparing these findings to a previous study that included three population-based samples in Germany, Russia, and the United States, both student samples scored significantly lower than all of these samples. This finding cannot be explained solely by the small but significant correlation between age and PMH. In other words, these findings support the notion that college students report higher mental disorder prevalence rates than the general population. Across birth cohorts, psychopathology among college students has increased.
With respect to MHP at baseline, differences between German and Chinese students were even larger (d = 0.42). Again, German students showed values that were substantially higher than those reported in a representative study in Germany. Surprisingly, Chinese students reported far fewer MHP than expected based on previous studies using the DASS-21 in Asian samples. A potential explanation could be the face-to-face assessment method used in our Chinese sample, which might lead to more socially desirable responding and lower levels of MHP compared to online surveys.
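The between-country differences reported here (d = 0.27 for PMH, d = 0.42 for MHP) are standardized mean differences. As a generic illustration (not the authors' analysis code), Cohen's d for two independent groups can be computed as the mean difference divided by the pooled standard deviation; the sketch below assumes plain arrays of individual scores.

```python
# Generic Cohen's d for two independent groups (illustrative only).
import numpy as np

def cohens_d(group1, group2) -> float:
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# e.g. cohens_d(chinese_pmh, german_pmh); values near 0.2, 0.5, and 0.8 are
# conventionally read as small, medium, and large effects.
```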
Lifestyle choices and mental health across countries
Lifestyle choices assessed in this study explained a small to medium amount of PMH and MHP variance in both German and Chinese students at baseline. Effect sizes were comparable to those found in another cross-sectional study that investigated a similar set of predictors of life satisfaction and MHP in a population-based sample of German adults. Lifestyle choices explained a small amount of variance in PMH and MHP at follow-up even when controlling for baseline mental health. Thus, other variables that were not included in the study, such as socioeconomic factors, could explain more variance. Next to lifestyle choices that are manifested in behaviors, internal factors such as personality (e.g., neuroticism or extraversion), positive factors (e.g., resilience or social support), and cognitive as well as emotional processes (e.g., rumination or other emotion-regulation strategies) [92, 93] also influence PMH and MHP.
Body mass index
The predictive value of BMI for PMH differed between samples: while a higher BMI was indicative of more MHP in both countries, only in Germany was it also associated with lower PMH. While all effect sizes were small, the predictive value of BMI for mental health at both time points was somewhat higher in German than in Chinese students. These findings might be related to the, on average, higher level of BMI and its greater variance in Germany. A substantial number of Chinese students were underweight (BMI < 18.5; 21.7% vs. 6.4% in the German sample), which is a known risk factor for MHP. To reduce complexity, a quadratic polynomial of BMI was not added to our analyses. In order to further investigate the impact of underweight, normal weight, and overweight on PMH and MHP in (Chinese) college students, future studies should include different BMI variables (i.e., linear and quadratic terms).
Physical and mental activities
As hypothesized, both physical and mental activities were independently predictive of higher PMH and fewer MHP at baseline. Both variables were also predictive of future PMH in Chinese students above and beyond baseline mental health. In our multi-group model, physical activity was associated with increases in PMH from baseline to follow-up. Our findings underline the importance of exercise, sports, and cultural or mental leisure-time activities [33, 39, 43] for psychological well-being as well as for the prevention of mental disorders. Effects were small, but compared to other lifestyle choices, the impact of these two variables on PMH and MHP was the second and third largest in size. These findings imply that lifestyle interventions that aim to increase physical activity could not only strengthen students' physical health (i.e., reduce overweight), but may also improve mental health outcomes.
Although previous evidence concerning alcohol consumption and mental health was mixed [25, 46], we assumed that more frequent alcohol consumption would be associated with more MHP and lower PMH in our student samples. Our hypothesis was supported only in the Chinese sample. In German students, more frequent drinking was in fact predictive of higher PMH and was not a predictor of MHP. In other words, German students who reported more frequent drinking showed greater psychological well-being. Previous studies indicated that positive associations between alcohol intake and mental health are moderated by other confounding variables. Thus, it seems unlikely that alcohol intake itself is responsible for improved mental health. A possible explanation might be that German students who consume alcohol on a regular basis exhibit other social or trait characteristics (e.g., higher socioeconomic status, more social support, more openness to experience) that are related to higher PMH and fewer MHP.
Our assumption that smoking would be associated with more MHP and lower PMH was supported. Using longitudinal data, smoking was a positive predictor of MHP in Chinese students as well as in our multi-group model. In other words, students who were smokers at baseline reported an increase in symptoms of depression, anxiety, and stress from baseline to follow-up. In both samples and across both assessment points, smoking was more strongly associated with MHP than with PMH. Further dissemination of smoking cessation programs might help more college students to cease smoking and thereby improve not only their physical, but also their mental health.
As hypothesized, students who reported a vegetarian diet had lower PMH and more MHP compared to other participants. This finding is in line with a previous study in German adults and is especially interesting as the rates of vegetarians were high in both of our student samples (16% of German and 22% of Chinese students). Although effects were small, in Chinese students a vegetarian diet was also a significant predictor of future PMH and MHP when controlling for other lifestyle choices, age, gender, and baseline mental health. This finding contradicts the proposition that many individuals become vegetarians in order to cope with ongoing mental health issues. A possible explanation is that—similar to the positive effect of alcohol consumption on German students' mental health—a vegetarian diet might be related to other factors not assessed in this study (e.g., a lower socioeconomic status or worrying and rumination about animal suffering) which mediate this relationship.
Irregular social rhythm
As expected, a more irregular social rhythm—not going to bed or eating meals at a similar time every day—was predictive of lower PMH and more MHP in both samples. Compared to other lifestyle choices, social rhythm irregularity showed the strongest associations with mental health across samples (d > 0.55). In line with social zeitgeber theory (Zeitgeber is German for "time-giver"), disruptions in the time cues that trigger the body's patterns of biological and social behavior can lead to increased MHP [97, 98]. Our findings suggest that more irregular social rhythms—which may be typical in college students who must deal with varying course schedules, periods of intensive learning, part-time jobs, and non-lecture periods—can be detrimental to students' mental health across countries.
This study supports the cross-cultural relevance of lifestyle for students' mental health by suggesting that some lifestyle choices may be more relevant for PMH and MHP than others. As a more regular social rhythm was predictive of future mental health in Chinese and German students, ambulatory assessment methods should be applied to measure this factor more precisely and to minimize recall biases. Use of a more precise instrument, such as the Social Rhythm Metric, can help distinguish between the frequency and regularity of social activities. The same is true for mental/cultural activities, which were also predictive of future mental health in Chinese students. As most studies of receptive (e.g., reading, listening to music) and creative (e.g., playing an instrument) leisure-time activities have been conducted in Western countries (i.e., Norway), more studies are needed to identify which cultural/mental activities are accessible and beneficial to mental well-being in Asian students. For many individuals, young adulthood is a phase of transition characterized by the pursuit of educational opportunities and employment prospects as well as the development of personal relationships, which can foster personal growth but may also lead to stress that precipitates the onset or recurrence of MHP. From a public health perspective, it may be promising for campuses to promote low-barrier lifestyle interventions to improve not only students' physical, but also their mental health. Such programs or interventions may be especially useful for reaching the large number of students with MHP who do not seek out services that directly address mental health issues (i.e., psychological counselling centers).
Some limitations reduce the validity and reliability of our findings. The use of student samples precludes generalization of our findings to other populations (e.g., older individuals or those with lower education levels). While the baseline mental health of individuals who did not participate at follow-up was similar to that of individuals who took part at both assessment points, it is possible that individuals whose mental health worsened over the course of the study period were less inclined to participate again. Lifestyle choices explained only a small to medium amount of variance in MHP and PMH. Thus, other factors not included in this study might also be relevant for students' mental health. Although we aimed to include a broad range of lifestyle factors, some aspects such as religious activities, game playing, or travel were not assessed. Furthermore, we did not ask the participants whether they carried out their activities alone or with others, a factor that impacts the relationship between lifestyle choices and mental health, especially in men. While our outcome variables as well as social rhythm irregularity were assessed with carefully constructed psychometric instruments, body mass index, physical and mental activities, alcohol consumption, smoking, and vegetarian diet were assessed with only one item each. In a study combining daily diary entries as well as retrospective single-item self-reports of different behaviors, students' alcohol use was reliably assessed via single-item self-report. Similar single-item measures have been used in several previous studies and have exhibited sufficient reliability and validity; memory biases and socially desirable responding may, however, have had an additional effect on these items.
Investigating a broad range of lifestyle choices in German and Chinese students showed that lifestyle has a significant impact on students' psychological, emotional, and social well-being as well as on symptoms of mental health difficulties. Across samples, a lower body mass index, more frequent physical and mental activities, non-smoking, a non-vegetarian diet, and a more regular social rhythm were associated with better mental health. While some differences between German and Chinese students emerged, a multi-group model showed that most lifestyle factors predict students' mental health similarly across countries. As some lifestyle choices—physical activity, non-smoking, and a more regular social rhythm—predicted future mental health even when controlling for age, gender, and baseline mental health, interventions promoting lifestyle changes in students may be effective in improving students' mental health.
BMI: Body mass index
CFI: Comparative Fit Index
DASS-21: Depression Anxiety Stress Scales 21
MHP: Mental health problems
PMH: Positive mental health
RMSEA: Root Mean Square Error of Approximation
SRMR: Standardized Root-Mean-Square Residual
Herrman H, Saxena S, Moodie R. Promoting mental health: concepts, emerging evidence, practice: a report of the World Health Organization, Department of Mental Health and Substance Abuse in collaboration with the Victorian Health Promotion Foundation and the University of Melbourne: World Health Organization; 2005.
Wittchen HU, Jacobi F, Rehm J, Gustavsson A, Svensson M, Jönsson B, et al. The size and burden of mental disorders and other disorders of the brain in Europe 2010. Eur Neuropsychopharmacol. 2011;21:655–79.
World Health Organization. Mental health: strengthening our response. Fact sheet Nr. 220. Geneva: World Health Organization; 2014.
Keyes CLM. Mental illness and/or mental health? Investigating axioms of the complete state model of health. J Consult Clin Psychol. 2005;73:539–48.
Vaillant GE. Positive mental health: is there a cross-cultural definition? World Psychiatry. 2012;11:93–9.
Schönfeld P, Brailovskaia J, Bieda A, Zhang XC, Margraf J. The effects of daily stress on positive and negative mental health: mediation through self-efficacy. Int J Clin Heal. 2016;16:1–10.
American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th revision ed. Washington, DC; 2013.
World Health Organization. The ICD-10 classification of mental and behavioural disorders: diagnostic criteria for research. Geneva: World Health Organization; 1993.
Keyes CLM. Social well-being. Soc Psychol. 1998:121–40.
Waterman AS. Two conceptions of happiness: contrasts of personal expressiveness (eudaimonia) and hedonic enjoyment. J Pers Soc Psychol. 1993;64:678–91.
Keyes CLM, Shmotkin D, Ryff CD. Optimizing well-being: the empirical encounter of two traditions. J Pers Soc Psychol. 2002;82:1007–22.
Lukat J, Margraf J, Lutz R, van der Veld WM, Becker ES. Psychometric properties of the positive mental health scale (PMH-scale). BMC Psychol. 2016;4:8.
World Health Organization. What is a healthy lifestyle? 1999. p. 1–24. Available from: http://apps.who.int/iris/bitstream/10665/108180/1/EUR_ICP_LVNG_01_07_02.pdf
Fogelholm M. Physical activity, fitness and fatness: relations to mortality, morbidity and disease risk factors. A systematic review. Obes Rev. 2010;11:202–21.
Bygren LO, Weissglas G, Wikström B-M, Konlaan BB, Grjibovski A, Karlsson A-B, et al. Cultural participation and health: a randomized controlled trial among medical care staff. Psychosom Med. 2009;71:469–73.
Bygren LO, Konlaan BB, Johansson S. Attendance at cultural events, reading books or periodicals, and making music or singing in a choir as determinants for survival: Swedish interview survey of living conditions. BMJ Br Med J. 1996;313:1577–80.
Schane RE, Ling PM, Glantz SA. Health effects of light and intermittent smoking a review. Circulation. 2010;121:1518–22.
Ronksley PE, Brien SE, Turner BJ, Mukamal KJ, Ghali WA. Association of alcohol consumption with selected cardiovascular disease outcomes: a systematic review and meta-analysis. BMJ Br Med J. 2011;342:d671.
Harriss DJ, Atkinson G, George K, Tim Cable N, Reilly T, Haboubi N, et al. Lifestyle factors and colorectal cancer risk (1): systematic review and meta-analysis of associations with body mass index. Color Dis. 2009;11:547–63.
Scarborough P, Nnoaham KE, Clarke D, Capewell S, Rayner M. Modelling the impact of a healthy diet on cardiovascular disease and cancer mortality. J Epidemiol Community Heal. 2012;66:420–6.
Harriss DJ, Atkinson G, Batterham A, George K, Tim Cable N, Reilly T, et al. Lifestyle factors and colorectal cancer risk (2): a systematic review and meta-analysis of associations with leisure-time physical activity. Color Dis. 2009;11:689–701.
Sattelmair J, Pertman J, Ding EL, Kohl HW, Haskell W, Lee I-M. Dose response between physical activity and risk of coronary heart disease a meta-analysis. Circulation. 2011;124:789–95.
He FJ, Nowson CA, MacGregor GA. Fruit and vegetable consumption and stroke: meta-analysis of cohort studies. Lancet. 2006;367:320–6.
Jonsdottir IH, Rödjer L, Hadzibajramovic E, Börjesson M, Ahlborg G. A prospective study of leisure-time physical activity and mental health in Swedish health care workers and social insurance officers. Prev Med. 2010;51:373–7.
Xu Q, Courtney M, Anderson D, Courtney M. A longitudinal study of the relationship between lifestyle and mental health among midlife and older women in Australia: findings from the healthy aging of women study. Health Care Women Int. 2010;31:1082–96.
Scott KM, Bruffaerts R, Simon GE, Alonso J, Angermeyer M, de Girolamo G, et al. Obesity and mental disorders in the general population: results from the world mental health surveys. Int J Obes. 2008;32:192–200.
Headey B, Muffels R, Wagner GG. Choices which change life satisfaction: similar results for Australia, Britain and Germany. Soc Indic Res. 2013;112:725–48.
Chaney EH, Chaney JD, Wang MQ, Eddy JM. Lifestyle behaviors and mental health of American adults. Psychol Rep. 2007;100:294–302.
Hamer M, Stamatakis E, Steptoe A. Dose-response relationship between physical activity and mental health: the Scottish health survey. Br J Sport Med. 2009;43:1111–4.
Rohrer JE, Pierce JR Jr, Blackburn C, et al. Lifestyle and mental health. Prev Med. 2005;40:438–43.
Margraf J, Lavallee KL, Zhang XC, Schneider S. Social rhythm and mental health: a cross-cultural comparison. PLoS One. 2016;11:e0150312.
Cai D, Zhu M, Lin M, Zhang XC, Margraf J. The bidirectional relationship between positive mental health and social rhythm in college students: a three-year longitudinal study. Front Psychol. 2017;8:1–7.
Velten J, Lavallee KL, Scholten S, Meyer AH, Zhang XC, Schneider S, et al. Lifestyle choices and mental health: a representative population survey in Germany. BMC Psychol. 2014;2:2.
World Health Organization. Obesity: preventing and managing the global epidemic. Geneva: World Health Organization; 2000.
Kelly SJ, Daniel M, Dal Grande E, Taylor A. Mental ill-health across the continuum of body mass index. BMC Public Health. 2011;11:765.
McCallum J. The SF-36 in an Australian sample: validating a new, generic health status measure. Aust N Z J Public Health. 1995;19:160–6.
Pavot W, Diener E. Review of the satisfaction with life scale. Psychol Assess. 1993;5:164–72.
Luppino FS, de Wit LM, Bouvy PF, Stijnen T, Cuijpers P, Penninx BW, Zitman FG. Overweight, obesity, and depression. Arch Gen Psychiatry. 2010;67:220–9.
Mammen G, Faulkner G. Physical activity and the prevention of depression: a systematic review of prospective studies. Am J Prev Med. 2013;45:649–57.
Herring MP, O’Connor PJ, Dishman RK. The effect of exercise training on anxiety symptoms among patients: a systematic review. Arch Intern Med. 2010;170:321–31.
Babyak M, Blumenthal JA, Herman S, Khatri P, Doraiswamy M, Moore K, et al. Exercise treatment for major depression: maintenance of therapeutic benefit at 10 months. Psychosom Med. 2000;62:633–8.
Netz Y, Wu M-J, Becker BJ, Tenenbaum G. Physical activity and psychological well-being in advanced age: a meta-analysis of intervention studies. Psychol Aging. 2005;20:272–84.
Cuypers K, Krokstad S, Lingaas Holmen T, Skjei Knudtsen M, Bygren LO, Holmen J. Patterns of receptive and creative cultural activities and their association with perceived health, anxiety, depression and satisfaction with life among adults: the HUNT study, Norway. J Epidemiol Community Heal. 2012;66:698–703.
Takeda F, Noguchi H, Monma T, Tamiya N. How possibly do leisure and social activities impact mental health of middle-Aged adults in Japan? An evidence from a national longitudinal survey. PLoS One. 2015;10:e0139777.
Kötter T, Tautphäus Y, Obst KU, Voltmer E, Scherer M. Health-promoting factors in the freshman year of medical school: a longitudinal study. Med Educ. 2016;50:646–56.
Rodgers B, Korten AE, Jorm AF, Christensen H, Henderson S, Jacomb PA. Risk factors for depression and anxiety in abstainers, moderate drinkers and heavy drinkers. Addiction. 2000;95:1833–45.
Skogen JC, Harvey SB, Henderson M, Stordal E, Mykletun A. Anxiety and depression among abstainers and low-level alcohol consumers. The Nord-Trøndelag health study. Addiction. 2009;104:1519–29.
Massin S, Kopp P. Is life satisfaction hump-shaped with alcohol consumption? Evidence from Russian panel data. Addict Behav. 2014;39:803–10.
Kinnunen T, Haukkala A, Korhonen T, Quiles ZN, Spiro A, Garvey AJ. Depression and smoking across 25 years of the normative aging study. Int J Psychiatry Med. 2006;36:413–26.
Lien L, Satatun Å, Heyerdahl S, Søgaard AJ, Bjertness E. Is the relationship between smoking and mental health influenced by other unhealthy lifestyle factors? Results from a 3-year follow-up study among adolescents in Oslo, Norway. J Adolesc Health. 2009;45:609–17.
Taylor G, McNeill A, Girling A, Farley A, Lindson-Hawley N, Aveyard P. Change in mental health after smoking cessation: systematic review and meta-analysis. BMJ. 2014;348:g1151.
Weinhold D, Chaloupka FJ. Smoking status and subjective well-being. Tob Control. 2017;26:195–201.
Tjora T, Hetland J, Aaroe LE, Wold B, Wiium N, Oeverland S. The association between smoking and depression from adolescence to adulthood. Addiction. 2014;109:1022–30.
McKenzie M, Olsson CA, Jorm AF, Romaniuk H, Patton GC. Association of adolescent symptoms of depression and anxiety with daily smoking and nicotine dependence in young adulthood: findings from a 10-year longitudinal study. Addiction. 2010;105:1652–9.
Baines S, Powers J, Brown WJ. How does the health and well-being of young Australian vegetarian and semi-vegetarian women compare with non-vegetarians? Public Health Nutr. 2007;10:436–42.
Michalak J, Zhang XC, Jacobi F. Vegetarian diet and mental disorders: results from a representative community survey. Int J Behav Nutr Phys Act. 2012;9:67.
Refsum H, Yajnik CS, Gadkari M, Schneede J, Vollset SE, Örning L, et al. Hyperhomocysteinemia and elevated methylmalonic acid indicate a high prevalence of cobalamin deficiency in Asian Indians. Am J Clin Nutr. 2001;74:233–41.
Jagannath A, Peirson SN, Foster RG. Sleep and circadian rhythm disruption in neuropsychiatric illness. Curr Opin Neurobiol. 2013;23:888–94.
McClung CA. Circadian genes, rhythms and the biology of mood disorders. Pharmacol Ther. 2007;114:222–32.
McClung CA. How might circadian rhythms control mood? Let me count the ways. Biol Psychiatry. 2013;74:242–9.
Lieverse R, de Vries R, Hoogendoorn AW, Smit JH, Hoogendijk WJG. Social support and social rhythm regularity in elderly patients with major depressive disorder. Am J Geriatr Psychiatry. 2013;21:1144–53.
Sutherland C. Introduction: German politics and society from a cosmopolitan perspective. Ger Polit Soc. 2011;29:1–19.
Markus HR, Kitayama S. Culture and the self: implications for cognition, emotion, and motivation. Psychol Rev. 1991;98:224–53.
Gorber SC, Tremblay M, Moher D, Gorber B. A comparison of direct vs. self-report measures for assessing height, weight and body mass index: a systematic review. Obes Rev. 2007;8:307–26.
Jin S, Zheng J, Xin Z. The structure and characteristics of contemporary Chinese values. Acta Psychol Sin. 2009;41:1000–14.
Brislin RW. Back-translation for cross-cultural research. J Cross-Cult Psychol. 1970;1:185–216.
Milton K, Bull FC, Bauman A. Reliability and validity testing of a single-item physical activity measure. Br J Sport Med. 2011;45:203–8.
Wanner M, Probst-Hensch N, Kriemler S, Meier F, Bauman A, Martin BW. What physical activity surveillance needs: validity of a single-item questionnaire. Br J Sports Med. 2014;48:1570–6.
Gmel G, Rehm J. Measuring alcohol consumption. Contemp Drug Probl. 2004;31:467.
Skogen JC, Knudsen AK, Hysing M, Wold B, Sivertsen B. Trajectories of alcohol use and association with symptoms of depression from early to late adolescence: the Norwegian longitudinal health behaviour study. Drug Alcohol Rev. 2016;35:307–16.
Henry JD, Crawford JR. The short-form version of the depression anxiety stress scales (DASS-21): construct validity and normative data in a large non-clinical sample. Br J Clin Psychol. 2005;44:227–39.
Lovibond PF, Lovibond SH. The structure of negative emotional states: comparison of the depression anxiety stress scales (DASS) with the Beck depression and anxiety inventories. Behav Res Ther. 1995;33:335–43.
Shea TL, Tennant A, Pallant JF. Rasch model analysis of the depression, anxiety and stress scales (DASS). BMC Psychiatry. 2009;9:21.
IBM. IBM SPSS statistics for Macintosh. Armonk, NY: IBM Corp; 2012.
Muthén LK, Muthén BO. Mplus User's Guide. 8th ed. Los Angeles: Muthén & Muthén; 1998–2017.
Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Erlbaum; 1988.
Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6:1–55.
MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1:130–49.
Hu L, Bentler PM. Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Meth. 1998;3:424–53.
NCD Risk Factor Collaboration. Worldwide trends in body-mass index, underweight, overweight, and obesity from 1975 to 2016: a pooled analysis of 2416 population-based measurement studies in 128.9 million children, adolescents, and adults. Lancet. 2016;390:2627–42.
Yusuf S, Hawken S, Ounpuu S, Bautista L, Franzosi MG, Commerford P, et al. Obesity and the risk of myocardial infarction in 27,000 participants from 52 countries: a case-control study. Lancet. 2005;366:1640–9.
World Health Organization. Global status report on alcohol and health, 2014. Geneva: World Health Organization; 2014.
Cochrane J, Chen H, Conigrave KM, Hao W. Alcohol use in China. Alcohol. 2003;38:537–42.
Du S, Mroz TA, Zhai F, Popkin BM. Rapid income growth adversely affects diet quality in China - particularly for the poor! Soc Sci Med. 2004;59:1505–15.
Appleby L, Warner R, Whitton A, Faragher B. A controlled study of fluoxetine and cognitive-behavioural counselling in the treatment of postnatal depression. BMJ. 1997;314:932.
Alonso J, Angermeyer MC, Bernert S, Bruffaerts R, Brugha TS, Bryson H, et al. Prevalence of mental disorders in Europe: results from the European study of the epidemiology of mental disorders (ESEMeD) project. Acta Psychiatr Scand. 2004;109:21–7.
Twenge JM, Gentile B, DeWall CN, Ma D, Lacefield K, Schurtz DR. Birth cohort increases in psychopathology among young Americans, 1938-2007: a cross-temporal meta-analysis of the MMPI. Clin Psychol Rev. 2010;30:145–54.
Norton PJ. Depression anxiety and stress scales (DASS-21): psychometric analysis across four racial groups. Anxiety Stress Coping. 2007;20:253–65.
Zhang XC, Kuchinke L, Woud ML, Velten J, Margraf J. Survey method matters: online/offline questionnaires and face-to-face or telephone interviews differ. Comput Human Behav. 2017;71:172–80.
Costa PT, McCrae RR. Influence of extraversion and neuroticism on subjective well-being: happy and unhappy people. J Pers Soc Psychol. 1980;38:668–78.
Brailovskaia J, Schönfeld P, Zhang XC, Bieda A, Kochetkov Y, Margraf J. A cross-cultural study in Germany, Russia, and China: are resilient and social supported students protected against depression, anxiety, and stress? Psychol Rep. 2017:1–19.
Berking M, Wupperman P. Emotion regulation and mental health. Curr Opin Psychiatry. 2012;25:128–34.
Zawadzki MJ. Rumination is independently associated with poor psychological health: comparing emotion regulation strategies. Psychol Health. 2015;30:1146–63.
Molarius A, Berglund K, Eriksson C, Eriksson HG, Linden-Bostrom M, Nordstrom E, et al. Mental health symptoms in relation to socio-economic conditions and lifestyle factors – a population-based study in Sweden. BMC Public Health. 2009;9:302.
Stathopoulou G, Powers MB, Berry AC, Smits JAJ, Otto MW. Exercise interventions for mental health: a quantitative and qualitative review. Clin Psychol Sci Pract. 2006;13:179–93.
Hutton HE, Wilson LM, Apelberg BJ, Tang EA, Odelola O, Bass EB, et al. A systematic review of randomized controlled trials: web-based interventions for smoking cessation among adolescents, college students, and adults. Nicotine Tob Res. 2011;13:227–38.
Grandin LD, Alloy LB, Abramson LY. The social zeitgeber theory, circadian rhythms, and mood disorders: review and evaluation. Clin Psychol Rev. 2006;26:679–94.
Ehlers CL, Kupfer DJ, Frank E, Monk TH. Biological rhythms and depression: the role of zeitgebers and zeitstorers. Depression. 1993;1:285–93.
Monk TH, Kupfer DJ, Frank E, Ritenour AM. The social rhythm metric (SRM): measuring daily social rhythms over 12 weeks. Psychiatry Res. 1991;36:195–207.
Blanco C, Okuda M, Wright C, Hasin DS, Grant BF, Liu S-M, et al. Mental health of college students and their non–college-attending peers. Arch Gen Psych. 2008;65:1429–37.
Hunt J, Eisenberg D. Mental health problems and help-seeking behavior among college students. J Adolesc Health. 2010;46:3–10.
Jopp DS, Hertzog C. Assessing adult leisure activities: an extension of a self-report activity questionnaire. Psychol Assess. 2010;22:108–20.
The current study was financially supported through the Alexander von Humboldt Professorship awarded to Jürgen Margraf by the Alexander von Humboldt-Foundation. We also acknowledge support by the DFG Open Access Publication Funds of the Ruhr-Universität Bochum.
Availability of data and material
All data generated or analyzed during this study are included in this published article [and its Additional files].
Ethics approval and consent to participate
Depending on the data assessment method, participants gave their informed consent in writing or online after being informed about the anonymity and voluntariness of the survey. All procedures were carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki (2013). The Ethics Committee of the Faculty of Psychology of the Ruhr-Universität Bochum approved the study in its entirety. Since the data were anonymized from the beginning of data collection, no statement by an ethics committee was required in China. The participating Chinese universities were informed and acknowledged the approval by the German ethics board. The Chinese sample included students below age 18. Chinese law grants enrolled university students of all ages the right to decide for themselves about study-related issues, including participation in studies. Thus, no consent to participate was collected from parents or guardians.
The author(s) declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Velten, J., Bieda, A., Scholten, S. et al. Lifestyle choices and mental health: a longitudinal survey with German and Chinese students. BMC Public Health 18, 632 (2018) doi:10.1186/s12889-018-5526-2
- Positive mental health
- Physical activity
- Body mass index
- Social rhythm | 2 | 9 |
Accident insurance was first offered in the United States by the Franklin Health Assurance Company of Massachusetts. This firm, founded in 1850, offered insurance against injuries arising from railroad and steamboat accidents. Sixty organizations were offering accident insurance in the US by 1866, but the industry consolidated rapidly soon thereafter. While there were earlier experiments, sickness coverage in the US effectively dates from 1890. The first employer-sponsored group disability policy was issued in 1911, but this plan's primary purpose was replacing wages lost because the worker was unable to work, not medical expenses.
Health benefits are provided to active duty service members, retired service members and their dependents by the Department of Defense Military Health System (MHS). The MHS consists of a direct care network of Military Treatment Facilities and a purchased care network known as TRICARE. Additionally, veterans may also be eligible for benefits through the Veterans Health Administration.
Shortly after his inauguration, President Clinton offered a new proposal for a universal health insurance system. Like Nixon's plan, Clinton's relied on mandates, both for individuals and for insurers, along with subsidies for people who could not afford insurance. The bill would have also created "health-purchasing alliances" to pool risk among multiple businesses and large groups of individuals. The plan was staunchly opposed by the insurance industry and employers' groups and received only mild support from liberal groups, particularly unions, which preferred a single payer system. Ultimately it failed after the Republican takeover of Congress in 1994.
Private health insurance may be purchased on a group basis (e.g., by a firm to cover its employees) or purchased by individual consumers. Most Americans with private health insurance receive it through an employer-sponsored program. According to the United States Census Bureau, some 60% of Americans are covered through an employer, while about 9% purchase health insurance directly. Private insurance was billed for 12.2 million inpatient hospital stays in 2011, incurring approximately 29% ($112.5 billion) of the total aggregate inpatient hospital costs in the United States.
Insurance companies are not allowed to have co-payments, caps, or deductibles, or to deny coverage to any person applying for a policy, or to charge anything other than their nationally set and published standard premiums. Therefore, every person buying insurance will pay the same price as everyone else buying the same policy, and every person will get at least the minimum level of coverage.
In 2003, according to the Heartland Institute's Merrill Matthews, association group health insurance plans offered affordable health insurance to "some 6 million Americans." Matthews responded to the criticism that said that some associations work too closely with their insurance providers. He said, "You would expect the head of AARP to have a good working relationship with the CEO of Prudential, which sells policies to AARP's seniors."
A health maintenance organization (HMO) is a type of managed care organization (MCO) that provides a form of health care coverage that is fulfilled through hospitals, doctors, and other providers with which the HMO has a contract. The Health Maintenance Organization Act of 1973 required employers with 25 or more employees to offer federally certified HMO options. Unlike traditional indemnity insurance, an HMO covers only care rendered by those doctors and other professionals who have agreed to treat patients in accordance with the HMO's guidelines and restrictions in exchange for a steady stream of customers. Benefits are provided through a network of providers. Providers may be employees of the HMO ("staff model"), employees of a provider group that has contracted with the HMO ("group model"), or members of an independent practice association ("IPA model"). HMOs may also use a combination of these approaches ("network model").
As per the Constitution of Canada, health care is mainly a provincial government responsibility in Canada (the main exceptions being federal government responsibility for services provided to aboriginal peoples covered by treaties, the Royal Canadian Mounted Police, the armed forces, and Members of Parliament). Consequently, each province administers its own health insurance program. The federal government influences health insurance by virtue of its fiscal powers – it transfers cash and tax points to the provinces to help cover the costs of the universal health insurance programs. Under the Canada Health Act, the federal government mandates and enforces the requirement that all people have free access to what are termed "medically necessary services," defined primarily as care delivered by physicians or in hospitals, and the nursing component of long-term residential care. If provinces allow doctors or institutions to charge patients for medically necessary services, the federal government reduces its payments to the provinces by the amount of the prohibited charges. Collectively, the public provincial health insurance systems in Canada are frequently referred to as Medicare. This public insurance is tax-funded out of general government revenues, although British Columbia and Ontario levy a mandatory premium with flat rates for individuals and families to generate additional revenues - in essence, a surtax. Private health insurance is allowed, but in six provincial governments only for services that the public health plans do not cover (for example, semi-private or private rooms in hospitals and prescription drug plans). Four provinces allow insurance for services also mandated by the Canada Health Act, but in practice there is no market for it. All Canadians are free to use private insurance for elective medical services such as laser vision correction surgery, cosmetic surgery, and other non-basic medical procedures. Some 65% of Canadians have some form of supplementary private health insurance; many of them receive it through their employers. Private-sector services not paid for by the government account for nearly 30 percent of total health care spending.
The national system of health insurance was instituted in 1945, just after the end of the Second World War. It was a compromise between Gaullist and Communist representatives in the French parliament. The Conservative Gaullists were opposed to a state-run healthcare system, while the Communists were supportive of a complete nationalisation of health care along a British Beveridge model.
PPO (Preferred Provider Organization) - A type of insurance plan that offers more extensive coverage for the services of healthcare providers who are part of the plan's network, but still offers some coverage for providers who are not part of the plan's network. PPO plans generally offer more flexibility than HMO plans, but premiums tend to be higher.
Supporters of a public plan, such as Washington Post columnist E. J. Dionne, argue that many places in the United States have monopolies in which one company, or a small set of companies, control the local market for health insurance. Economist and New York Times columnist Paul Krugman also wrote that local insurance monopolies exist in many of the smaller states, accusing those who oppose the idea of a public insurance plan as defenders of local monopolies. He also argued that traditional ideas of beneficial market competition do not apply to the insurance industry given that insurers mainly compete by risk selection, claiming that "[t]he most successful companies are those that do the best job of denying coverage to those who need it most."
According to a 2000 Congressional Budget Office (CBO) report, Congress passed legislation creating "two new vehicles, Association Health Plans (AHPs) and HealthMarts, to facilitate the sale of health insurance coverage to employees of small firms" in response to concerns about the "large and growing number of uninsured people in the United States."
Health insurance premiums have risen dramatically over the past decade. In the past, insurers would price your health insurance based on any number of factors, but after the Affordable Care Act, the number of variables that impact your health insurance costs has been reduced dramatically. We conducted a study to look at how health insurance premiums vary based on these characteristics. In our data, we illustrate these differences using an example 21-year-old. Older consumers will see higher rates, with 30-year-olds paying 1.135 times, 40-year-olds 1.278 times, 50-year-olds 1.786 times, and 64-year-olds 2.714 times the cost listed.
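As a rough arithmetic illustration of how those age factors scale a base premium, the sketch below applies the multipliers quoted above to a hypothetical 21-year-old base rate. The $250 base rate, the lookup table, and the function itself are illustrative assumptions rather than actual ACA rating rules.

```python
# Illustrative only: scaling a hypothetical 21-year-old base premium by the
# age factors quoted above. The $250 base rate is an assumed example value.
AGE_FACTORS = {21: 1.0, 30: 1.135, 40: 1.278, 50: 1.786, 64: 2.714}

def monthly_premium(age: int, base_rate_21: float = 250.0) -> float:
    # Use the factor for the closest listed age at or below the given age.
    eligible = [a for a in AGE_FACTORS if a <= age]
    factor = AGE_FACTORS[max(eligible)] if eligible else 1.0
    return round(base_rate_21 * factor, 2)

print(monthly_premium(40))  # 250 * 1.278 -> 319.5
```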
Since 1974, New Zealand has had a system of universal no-fault health insurance for personal injuries through the Accident Compensation Corporation (ACC). The ACC scheme covers most of the costs related to the treatment of injuries acquired in New Zealand (including overseas visitors) regardless of how the injury occurred, and also covers lost income (at 80 percent of the employee's pre-injury income) and costs related to long-term rehabilitation, such as home and vehicle modifications for those seriously injured. Funding from the scheme comes from a combination of levies on employers' payroll (for work injuries), levies on an employee's taxable income (for non-work injuries to salary earners), levies on vehicle licensing fees and petrol (for motor vehicle accidents), and funds from the general taxation pool (for non-work injuries to children, senior citizens, unemployed people, overseas visitors, etc.)
Lifetime Health Cover: If a person has not taken out private hospital cover by 1 July after their 31st birthday, then when (and if) they do so after this time, their premiums must include a loading of 2% per annum for each year they were without hospital cover. Thus, a person taking out private cover for the first time at age 40 will pay a 20 percent loading. The loading is removed after 10 years of continuous hospital cover. The loading applies only to premiums for hospital cover, not to ancillary (extras) cover.
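A minimal sketch of the loading rule described above, assuming the loading is simply 2% for each year of age past 30 when hospital cover is first taken out. The 70% ceiling in the code is an added detail of the Australian scheme that the paragraph does not mention, and the 10-year removal rule is not modeled.

```python
# Sketch of the Lifetime Health Cover loading rule described above.
# Assumes loading = 2% per year of age past 30 at first hospital cover,
# capped at 70% (the scheme's statutory maximum, not stated in the text).
def lhc_loading(age_at_first_cover: int) -> float:
    years_without_cover = max(0, age_at_first_cover - 30)
    return min(0.70, 0.02 * years_without_cover)

print(lhc_loading(40))  # 0.20 -> the 20 percent loading from the example
```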
Still, private insurance remained unaffordable or simply unavailable to many, including the poor, the unemployed, and the elderly. Before 1965, only half of seniors had health care coverage, and they paid three times as much as younger adults, while having lower incomes. Consequently, interest persisted in creating public health insurance for those left out of the private marketplace.
Your ParTNers EAP provides confidential financial, legal and emotional counseling at no cost to members and their dependents. EAP services are offered to all full-time state and higher education employees and their eligible family members (at no cost), regardless of whether they participate in the State's Group Insurance Program. Members may receive up to five sessions per issue. Just a few of the many issues EAP can help with:
Cost assistance is available to help lower the monthly expense of health insurance. Known as a tax credit or tax subsidy, federal money helps those who make between 100% and 400% of the Federal Poverty Level. (For an individual, that is between $11,770 and $47,080, depending on the state.) With cost assistance, individuals paid an average of less than $100 a month for a plan on the marketplace in 2015. That is a $268 savings each month.
Public programs provide the primary source of coverage for most seniors and also low-income children and families who meet certain eligibility requirements. The primary public programs are Medicare, a federal social insurance program for seniors (generally persons aged 65 and over) and certain disabled individuals; Medicaid, funded jointly by the federal government and states but administered at the state level, which covers certain very low income children and their families; and CHIP, also a federal-state partnership that serves certain children and families who do not qualify for Medicaid but who cannot afford private coverage. Other public programs include military health benefits provided through TRICARE and the Veterans Health Administration and benefits provided through the Indian Health Service. Some states have additional programs for low-income individuals. In 2011, approximately 60 percent of stays were billed to Medicare and Medicaid—up from 52 percent in 1997.
Coinsurance: Instead of, or in addition to, paying a fixed amount up front (a co-payment), the co-insurance is a percentage of the total cost that the insured person may also pay. For example, the member might have to pay 20% of the cost of a surgery over and above a co-payment, while the insurance company pays the other 80%. If there is an upper limit on coinsurance, the policy-holder could end up owing very little, or a great deal, depending on the actual costs of the services they obtain.
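The cost-sharing split described above can be expressed as a small calculation. In the sketch below, the 20%/80% split comes from the example in the text, while the copay amount, surgery cost, and function shape are illustrative assumptions; out-of-pocket maximums are not modeled.

```python
# Worked version of the coinsurance example above: the member pays a fixed
# copay plus a percentage (coinsurance) of the remaining allowed cost, and
# the plan pays the rest.
def member_share(allowed_cost: float, copay: float = 0.0,
                 coinsurance_rate: float = 0.20) -> float:
    remaining = max(0.0, allowed_cost - copay)
    return round(copay + coinsurance_rate * remaining, 2)

# e.g. a $10,000 surgery with a $250 copay and 20% coinsurance:
print(member_share(10_000, copay=250))  # 250 + 0.20 * 9,750 = 2,200.0
```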
In the late 1990s, federal legislation was proposed to create federally recognized Association Health Plans, referred to in some bills as "Small Business Health Plans." The National Association of Insurance Commissioners (NAIC), the standard-setting and regulatory body of chief insurance regulators from all states, the District of Columbia and territories, cautioned against implementing AHPs, citing "plan failures like we saw [with] the Multiple Employer Welfare Arrangements (MEWAs) in the 1990s." Small businesses in California such as dairy farmers, car dealers, and accountants created AHPs "to buy health insurance on the premise that a bigger pool of enrollees would get them a better deal." A November 2017 article in the Los Angeles Times described how there were only four remaining AHPs in California. Many of the AHPs filed for bankruptcy, "sometimes in the wake of fraud." State legislators were forced to pass "sweeping changes in the 1990s" that almost made AHPs extinct.
The 1960 Kerr-Mills Act provided matching funds to states assisting patients with their medical bills. In the early 1960s, Congress rejected a plan to subsidize private coverage for people with Social Security as unworkable, and an amendment to the Social Security Act creating a publicly run alternative was proposed. Finally, President Lyndon B. Johnson signed the Medicare and Medicaid programs into law in 1965, creating publicly run insurance for the elderly and the poor. Medicare was later expanded to cover people with disabilities, end-stage renal disease, and ALS. | 2 | 4 |
Video conferencing, up until a few years ago, was traditionally based exclusively on hardware (codecs) produced by companies such as Tandberg, Polycom, HP, Sony, Aethra, Lifesize and Radvision, to mention a few. Software video conference solutions such as Bluejeans, Vidyo, Pexip, Zoom and Slack are progressively making inroads on this traditional technology turf.
Video communication/conference history
- AT&T experiments with video phones.
- The Gegensehn-Fernsprechanlagen, the world’s first video telephone service launches in Germany. This device is mostly used by the German post office, establishing a channel of coaxial cables running a hundred miles between Berlin and Leipzig.
- NASA uses video communications to keep in touch with its astronauts during its first manned space missions. It used two radio links set up for video conferencing through UHF/VHF.
- AT&T uses video communications for its Picturephone service. It is able to transmit very crude images in two directions via standard telephone lines. (1964)
- Ethernet 10 Mbit/s protocol is released.
- ARPANET serves as a backbone for the interconnection of regional academic and military networks
- The development of personal computers contributes to a wider demand for video communication. Sending video images becomes practical now that the data communications components are in place, such as video codecs, along with the introduction of broadband services such as ISDN.
- PictureTel creates one of the first real-time video conferencing systems (1986)
- Tandberg develops its first picture telephone for ISDN (1989)
- The linking of commercial networks and enterprises marks the beginning of the transition to the modern Internet and generates sustained exponential growth as generations of institutional, personal, and mobile computers are connected to the network. Although the Internet had been widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
- Cornell University develops CU-SeeMe video conferencing software for Mac in 1992 and Windows in 1994.
- JCU, from its location at Cairns TAFE, uses PictureTel ISDN videoconferencing to connect to CRC in Townsville. The unit is relocated to the Smithfield campus the following year (1994)
- The first commercial webcam, QuickCam, is introduced (1994)
- PictureTel launches Concord 4500 and Venue 2000 (1995)
- Fast Ethernet (100Mb/s) is introduced (1996)
- JCU - acquires another 5 PictureTel ISDN videoconference units, 2 for Cairns and 3 for Townsville (1998)
- Polycom introduces the ViewStation product line which includes models with multipoint capabilities and content sharing (1998)
- Gigabit Ethernet (1Gb/s protocol) becomes available (1999)
- Internet connections become faster and webcams are more common place.
- Tandberg introduces IP-capable H.323 videoconferencing codecs
- Polycom acquires PictureTel Corp
- Skype, one of the first software based video chat services, provides communication free of charge over the Internet by voice, video and instant messaging using a personal computer.
- JCU is one of the first to use Tandberg Movi (now Cisco Jabber), which uses ActiveX and allows standards-based SIP videoconferencing from desktop PCs.
- Cisco acquires Tandberg
- JCU - The JCU Singapore videoconferencing endpoints are now on the JCU network enabling connection over IP
- The standard for Audio Video Bridging (AVB) is established, paving the way for the development of audio and video distribution without compression in real-time over IP.
- JCU - Videoconferencing system upgrades commence, migrating from XGA resolution with a 4:3 aspect ratio to high definition (HD) 16:9
- Zoom launches its service
- Microsoft announces that Skype for Business is replacing Lync in 2015
- 65,000 organizations subscribe to Zoom Meetings
- Zoom in partnership with Polycom introduces features such as multiple screen and BYOD meetings, HD and wireless screen sharing, and calendar integration with Outlook, Google Calendar, and iCal
- Microsoft announces that Microsoft Teams will eventually replace Skype for Business.
- JCU - launches Zoom for all staff and students
Refinement and improvement by the major video conference hardware manufacturers continued throughout the 2000s, with more and more software-based video conference solutions appearing. Cisco, a network hardware manufacturer, progressively became a major video conference player after the Tandberg takeover in 2010.
The problem facing many organizations, including government departments, educational institutions and global corporations, is how to transition cost-effectively, seamlessly and without a degradation in quality from a hardware-based endpoint system for video conferencing to a software/cloud solution. JCU has embarked on this journey and currently uses Zoom Meetings and a mix of existing hardware, with Tandberg and Cisco codec endpoints deployed in fit-for-purpose video conference rooms. The university is currently experimenting with Zoom Rooms and has turned a previously traditional hardware-based room system into a fully functional room at the Singapore Campus, controlled from an iPad. Other Zoom Rooms have been set up at both the Cairns and Townsville campuses but have not yet been deployed in a common teaching space.
Zoom was introduced at JCU in March 2018 as the preferred software-based video conference solution. Some of the major advantages of Zoom are:
- interoperates with the existing video conference hardware endpoints
- can be used as an external host in TelePresence Management Suite (TMS), the video conference scheduler
- meetings can be scheduled via Outlook
- Zoom Meetings — A collaborative cloud-based video and web conferencing product.
- Zoom Rooms — A low-cost, room-based video conference system running on Apple and PC hardware, with touch screen compatibility and support for multi-screen setups
- Zoom Video Webinar — A version of video conferencing that allows up to 100 active and 10,000 passive participants
Other software systems used for collaboration by JCU staff and students | 1 | 3 |
While impetigo, the horrendous-looking, vile and contagious skin infection, usually troubles little rug rats, it can certainly target grownups as well. It's unexpected and atypical, but not utterly impossible.
Pro Tip: Impetigo is coded in the ICD-10 manual as L01.0, which is not billable. Further classifications are exclusive to children. Impetigo in adults would likely be coded L01.00, which signifies an overarching diagnosis of the unspecified sort, established by the World Health Organization (WHO) in November 2018.
Adults with children in the home are more susceptible to acquiring the infection.
What Causes Impetigo in Adults?
t’s precisely the same two pathogenic bacteria that make impetigo an issue in children are what causes impetigo in adults—staph and strep. Both of these harmlessly live and thrive everywhere on the epidermis, or more specifically, the stratum corneum.
Stratum Corneum: The outermost skin layer, made up of all the dead cells (keratinocytes) that have yet to shed; it serves as a protective barrier against bacteria like staph and strep. This layer also supports the containment of moisture. Gazing into a mirror at your bare body, it's both the epidermis and the stratum corneum that you'll see staring back. An aging stratum corneum is responsible for the dry, flaky, scaly appearance in older people.
Infectious problems arise where the skin is broken apart (even the minutest graze) and the pathogens are able to sneak inside, gaining entry to the world beneath your sheathing.
There are typically only two types of impetigo that garner notoriety; however, there is a third which should be talked about too. All three kinds of impetigo can be caused by either staph (Staphylococcus aureus) or strep (Streptococcus pyogenes).
Non Bullous Impetigo
This kind of impetigo is the one most often seen in pediatric patients. Non-bullous impetigo was originally referred to as impetigo contagiosa, but that moniker has since become antiquated.
It starts out as a single reddish papule that rapidly transforms into a vesicle. After the vesicles pop, the oozing innards become encrusted, the color changes from red to honey, and pruritus ensues.
Bullous impetigo classically presents with blisters in formation. These raised, pustular sacs have defined edges, and hyperemia and erythema don't usually develop around the lumps. They enlarge fast and are prone to breakage, as the covering is thin and its integrity is easily compromised. When they burst, a yellowy liquid oozes out, subsequently crusting or scabbing over.
Of the impetigo categories, ecthyma is the only one that leaves everlasting scars in its wake. The ulcerative pyoderma is considered the “deepest” version of impetigo. Because of its depth in the integumentary system and dermis, this penetrative and invasive impetigo is very painful for the patient.
The signs, symptoms, and onset of impetigo in adults can be broken down into two main categories: blistering and non-blistering.
With blistering impetigo, itty-bitty blisters abruptly appear on miscellaneous parts of the body, whether as facial macules (most common) or on the torso or extremities. These pustules are often painless, though general dread, feelings of unwellness, and aches may coincide with the infection's manifestation. The blisters will be less than an inch in total diameter and are delicate, spontaneously bursting at a moment's notice.
Non-blistering impetigo, on the other hand, presents more like pimples than blisters. These smallish papules are crimson red like a McIntosh apple, but not for long. Soon after they reveal themselves, they too will burst and dribble an amber fluid that quickly hardens and crusts over. This version is most apparent on the face and on regions of skin where there's been recent trauma. Most adult sufferers of non-blistering impetigo report overwhelming itchiness and weakness. The lymph nodes sometimes protrude as they become swollen.
Impetigo can be mixed up with many other skin conditions, so seeing a healthcare professional is suggested to ensure an accurate diagnosis.
Irrespective of the prognosis, avoid neurotic excoriations as this can promote the rapid spreading of bacteria and infectious pathogens. Seek a medical opinion ASAP to begin treatment, and once an antibiotic therapy has been instituted, you can count on the pruritic sensations to cease.
If left untreated, most instances of impetigo will subside without consequence or sequelae. However, due to impetigo’s incredibly infectious nature, promptly starting a course of antibiotics (topicals or orals) is the safest bet.
Indeed, certain strains of the affliction, such as MRSA, are unresponsive to common antibiotic treatments. Mild cases can be treated in an outpatient setting, but they should be attended to with haste.
Effective Prevention Methods
If your child brings back an impetigo rash from daycare, impetigo in the household's adults can still be prevented by maintaining a few hygienic practices:
- Thoroughly clean and completely cover any open skin lacerations on all household members.
- Don’t lend or divvy up any linen or clothes amongst eachother.
- Wash the infected child’s clothing in hot water and strong detergent daily.
- When applying topical ointments to their impetigo ulcers, always wear disposable latex (or equivalent) gloves.
- Make perfectly sure your kid isn’t picking at or scratching at their impetigo sores—this fosters infection spreading.
How to Make a 3D Printer
The most fascinating three-dimensional (3D) printer design to watch print is the delta printer. The delta design is quite different from most 3D printers and is best known for its vertical orientation.
The world of 3D printing is growing in popularity. This blog explains how to make a 3D printer and also explores how a delta 3D printer works.
A delta 3D printer (hereafter, delta printer) is a type of parallel robot that uses geometric algorithms to position each of three vertical axes simultaneously, moving the nozzle to any position in a cylindrical build area. Thus, when the printer is printing, all three axes move in a mesmerizing ballet of mathematical magic.
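To make that "mathematical magic" concrete, here is a minimal sketch of the inverse-kinematics calculation a delta machine performs for every move: for each tower, the carriage height is the nozzle height plus the vertical leg of a right triangle whose hypotenuse is the fixed arm length. The radius, arm length, and tower angles below are made-up example values, not the geometry of any particular printer.

```python
import math

# Example geometry (hypothetical values, not from any specific printer)
DELTA_RADIUS = 100.0   # horizontal distance from center to each tower (mm)
ARM_LENGTH   = 215.0   # length of the parallel delta arms (mm)

# The three towers sit 120 degrees apart around the build circle
TOWER_ANGLES = [90.0, 210.0, 330.0]
TOWERS = [(DELTA_RADIUS * math.cos(math.radians(a)),
           DELTA_RADIUS * math.sin(math.radians(a))) for a in TOWER_ANGLES]

def carriage_heights(x, y, z):
    """Return the height each carriage must reach so the nozzle sits at (x, y, z)."""
    heights = []
    for tx, ty in TOWERS:
        horizontal_sq = (x - tx) ** 2 + (y - ty) ** 2      # squared distance to the tower
        vertical = math.sqrt(ARM_LENGTH ** 2 - horizontal_sq)
        heights.append(z + vertical)
    return heights

# Example: move the nozzle to X=10, Y=20, Z=5
print(carriage_heights(10.0, 20.0, 5.0))
```

Even a simple straight-line move in X changes all three carriage heights at once, which is why the towers appear to dance.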
If all this sounds too fantastic, don’t worry; Before we jump into how the hardware mechanisms work, let’s take a short tour of what 3D printing is all about. A firm understanding of the concepts of 3D printing is essential to getting the most out of your 3D printer investment.
Even if you are already a 3D printing enthusiast (and especially if you have never used a delta printer), you may want to read the following sections because I present the material with delta printers in mind.
How to make a 3D Printer
The world of 3D printing is growing in popularity as more people find creative ways to use 3D printers. People buy 3D printers for creating solutions for the home, gifts, artistic expression, and of course, for rapid prototyping of components for manufacture.
I have even seen 3D printers used in architectural firms to replace the somewhat tedious art of 3D modeling—from scale models of buildings to elaborate terrain maps.
The major contributor to this expansion is that 3D printers are getting easier to find and afford. While we're far from the point of finding a 3D printer at your local small retailer or getting one as a bonus for buying a new mattress, you don't have to look very far to find a 3D printer manufacturer or reseller. Even printing supplies are getting easier to find.
In fact, some of the larger retailers such as Home Depot are starting to stock 3D printers and supplies. For some time now, MakerBot Industries has sold their products on the Microsoft online store, as well as at their own retail stores. Similarly, other 3D printer suppliers have opened retail stores.
Naturally, nearly all 3D printing retailers have an online store where you can order anything from parts to build or maintain your own, to printing supplies such as filament and other consumables. So the problem that you are most likely to encounter is not finding a 3D printer, but rather it is choosing the printer that is best for you.
Unless you have spent some time working with 3D printers and have mastered how to use them, the myriad of choices may seem daunting and confusing.
I have encountered a lot of people who, despite researching their chosen printer, have many questions about how the printer works, what filament to use, and even how to make the printer do what they want it to.
Too often I have discovered people selling their 3D printer because they cannot get decent print quality, or it doesn’t print well, or they don’t have the time or skills to complete the build, or they have had trouble getting the printer calibrated. Fortunately, most of these issues can be solved with a bit of knowledge and some known best practices.
This section will help you avoid these pitfalls by introducing you to the fundamentals of 3D printing with a specific emphasis on delta printers. You will learn that there are several forms of 3D printing and be provided with an overview of the software you can use with your printer. You will also learn about the consumables used in 3D printing, including the types of filament available. To round out the discussion on getting started, I present a short overview on buying a delta printer, including whether to build or buy and what to consider when buying a used printer.
What is 3D Printing?
Mastering the mysteries of 3D printing should be the goal of every 3D printing enthusiast. But where do you find the information and how do you get started?
This section presents the basics of 3D printing, beginning with the process of 3D printing and followed by a discussion on how the printer assembles or prints an object, and finally, it takes a look at the consumables involved in 3D printing.
The 3D Printing Process
The 3D printing process, also called a workflow, involves taking a three-dimensional model and making it ready for print. This is a multistep process, starting with a special form of the model and software to break the model into instructions the printer can use to make the object.
The following provides an overview of the process, classifying each of the steps by software type.
An object is formed using computer-aided design (CAD) software. The object is exported in a file format that contains the Standard Tessellation Language (STL) for defining a 3D object with triangulated surfaces and vertices.
The resulting .stl file is split or sliced into layers, and a machine-level instruction file is created (called a .gcode file) using computer-aided manufacturing (CAM) software.
The file contains instructions for controlling the axes, the direction of travel, the temperature of the hot end, and more. In addition, each layer is constructed as a map of traces (paths for the extruded filament) for filling in the object outline and interior.
The printer uses its own software (firmware) to read the machine-level file and print the object one layer at a time.
This software also supports operations for setting up and tuning the printer. Now that you understand how a 3D printer puts the filament together to form an object, let’s take a look at how the object is printed by the printer.
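Before moving on to how the object is physically built, it may help to see what the CAM (slicing) step actually produces. The sketch below writes the perimeter of a single square layer as G-code; real slicers add infill, temperatures, retraction, and acceleration control, and the feedrate and extrusion numbers here are placeholders rather than tuned values.

```python
def square_layer_gcode(size=20.0, z=0.2, feedrate=1800):
    """Emit G-code for the outline of one square layer (placeholder numbers)."""
    corners = [(0.0, 0.0), (size, 0.0), (size, size), (0.0, size), (0.0, 0.0)]
    x0, y0 = corners[0]
    lines = [
        f"G1 Z{z:.2f} F{feedrate}",    # move the nozzle to the layer height
        f"G0 X{x0:.2f} Y{y0:.2f}",     # travel (non-printing) move to the start corner
    ]
    e = 0.0
    for (xa, ya), (xb, yb) in zip(corners, corners[1:]):
        length = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
        e += length * 0.05             # crude extrusion estimate, not a tuned value
        lines.append(f"G1 X{xb:.2f} Y{yb:.2f} E{e:.4f} F{feedrate}")
    return "\n".join(lines)

print(square_layer_gcode())
```

Running it prints a handful of G0/G1 moves, which is all a layer really is: a list of coordinates with feedrates and extrusion amounts.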
How an Object is Printed
It is important to understand the process by which objects are built. Knowing how the printer creates an object will help you understand the hardware better, as well as help you tune and maintain your printer.
That is, it will help you understand topics such as infill, shells (outer layers), and even how parts need to be oriented for strength.
The process is called additive manufacturing and is used by most 3D printers available to the consumer. Conversely, computer numeric control (CNC) machines start with a block of material and cut away parts to form the object. This is called subtractive manufacturing.
Both forms of manufacturing use a Cartesian coordinate system (X, Y, and Z axes) to position the hardware to execute the build. Thus, the mechanical movements for 3D printing are very similar to the mechanisms used in CNC machines.
In both cases, there are three axes of movement controlled by a computer, each capable of very high-precision movement.
Additive manufacturing has several forms or types that refer to the material used and the process used to take the material and form the object. However, they all use the same basic steps (called a process or workflow, as described earlier) to create the object.
When a 3D printer creates an object, the material comes in filament form on a large spool, which makes it easier for the printer to draw material.
The filament is then loaded into an extruder that has two parts: one to pull the filament off the spool and push it into a heating element, and another to heat the filament to its melting point.
The part that pulls the filament and feeds it to the heating element is called the cold end, whereas the heating element is called the hot end. Sometimes manufacturers refer to both parts as the extruder, but others distinguish the extruder from the hot end (but they sometimes don’t call it a cold end).
Delta printers typically separate the parts with the first part fixed to the frame and the second on the axis mechanism (called the effectors). Just one of the many nuances to 3D printing I hope to explain!
Delta Printer Hardware
The Delta 3D printer design, despite the radically different axes arrangement, uses the same basic hardware as a Cartesian printer.
The hardware and materials used to construct delta printers vary greatly, despite some fundamental concepts. You can find printers that are made from wood, others constructed with major components made from plastic, and some that are constructed from a sturdy metal frame, but most will be made using a combination of these materials.
Not only do the materials used in constructing the frame vary; the mechanisms used to move the print head and extrude filament vary as well, though not nearly as much as the frame.
A delta printer is a special type of machine called a robot. You may think of robots as anthropomorphic devices that hobble around bleeping and blinking various lights (or bashing each other to scrap in an extremely geeky contest), but not all robots have legs, wheels, or other forms of mobility.
Indeed, according to Wikipedia, a robot is “a mechanical or virtual agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry”. As you will see, delta printers fit that description quite well.
The following sections introduce the hardware as follows: the hardware used in extruding plastic (extruder or cold end and the hot end), delta arms, axes, types of electric motors used, build platform, electronics, and finally, the frame. Each section describes some of the variants you can expect to find, and some of the trade-offs for certain options.
The extruder is the component that controls the amount of plastic used to build the object. On a delta printer, the extruder is normally mounted in a fixed position on the frame (also called the cold end) and connected to the hot end via a Bowden tube.
I have seen at least one delta printer that mounted the extruder on the effector, but that design is an exception because the goal is normally to reduce the weight of the effector, delta arms, and hot end to allow for faster movement.
When the hot end is at the correct temperature for the filament used, the extruder pushes the filament through the Bowden tube, and as the effector is moved, the plastic is extruded through the nozzle.
The hot end, if you recall, is responsible for accepting the filament fed from the extruder body, and heating it to its melting point. You can see one of the latest hot-end upgrades for 3D printers.
This hot end can be used with higher heat ranges and provides a very good extrusion rate for PLA, ABS, and other filaments. Notice the fan and shroud. This hot end, like most all-metal hot ends, must have a fan blowing across the cooling fins at all times. That is, the fan is always running.
There are dozens of hot end designs available for 3D printers. This may seem like an exaggeration, but it isn't. My research revealed several sites listing 10 or even 20 designs, and the most complete site I found listed more than 50 hot ends; even that list is a bit out of date because there are even more available.
With so many hot-end designs, how do you know which one to choose or even which one is best for your printer, filament, and object design choices? Fortunately, most hot-end designs can be loosely categorized by their construction into two types: all metal and PEEK body with PTFE (polytetrafluoroethylene) liner.
There are some exceptions, like an all-metal design with a PTFE liner; but in general, they either are made from various metals, such as brass for the nozzle, aluminum for the body (with cooling fins), and stainless steel for the heat barrier or use a liner of some sort.
The most popular PEEK variant is called a J-head hot end. Most delta printers come with either an all-metal hot end or a J-head hot end.
The earliest form of vertical delta printer axis mechanism used a pair of smooth rods with linear roller bearings, brass bushings, or even bushings made from Nylon or PLA (and thus printed).
While this solution results in a very secure mechanism, rods can be more expensive, depending on the quality of the material.
That is, precision ground smooth rods may be too expensive and a bit of overkill for most home 3D printers. Cheaper drill rod quality items may be much more economical. In fact, you can often find lower prices for drill rods if bought in bulk.
In addition, the linear bearings can be expensive too. Even if you use printed bearings or less expensive bushings, smooth rods have an inherent problem. The longer the rod, the more likely the rod will have a slight bend or imperfection that causes the carriage to ride unevenly, which is transmitted to the effector.
While this is not a serious problem unless the bend is significant, it can result is slightly lower print quality. You can check this easily by rolling the rods on a flat, smooth surface looking for gaps between the rod and the surface.
Another disadvantage of smooth rods is that the frame formed by the rods is not stiff enough and can twist unless braced with wood or metal supporting framework. Thus to get the frame stiff enough to remove flexing, you end up with a much bulkier frame with more material.
Linear rails are much more rigid than smooth rods. Linear rails use a thick 15×10mm steel bar with grooves milled on each side. A carrier is mounted on the rail, suspended by a set of steel ball bearings (most use recirculating arrangements, but some versions use linear ball bearings).
The rail is drilled so that it can be mounted to the frame rail using a number of bolts. Linear rails are very rigid and can provide additional rigidity to the frame of a delta printer.
This is advantageous for the Kossel Pro because it uses the same 1515 extrusions as the Mini Kossel, which can flex if used in longer segments. The linear rails help stiffen the frame greatly.
Notice that there is an additional carriage that mounts to the linear rail carrier. The added complexity is the delta arm mount point positioned farther from the frame rail than the smooth rod version. This only means the offset is a bit larger, but otherwise isn’t a problem.
Linear rails are also very precise and do not require any adjustment other than periodic cleaning and a small amount of lubrication. However, linear rails are the most expensive option among the popular options for delta axis mechanisms. You can get linear rails in a variety of lengths.
An alternative to the expensive linear rods is the use of Delrin-encased bearings that ride in the center channel of an aluminum frame extrusion. Some solutions use Nylon rollers. 3D printer enthusiasts have also had success using hardware-store-quality shower and screen door rollers.
Notice that there are four rollers (two in the front, two in the rear). The pair of rollers on one side is fixed, and the pair on the other side use concentric cams to allow adjusting the tension of the rollers. Also, notice that the roller carriage is larger than the linear rail.
Indeed, the linear rail is mounted on the inside of the frame rail, leaving the sides and rear open; whereas the roller carriage requires the use of both sides, as well as the inside of the frame rail. This limits the types of attachments that you can use on the frame.
While this solution is a lot cheaper (perhaps by half), it is harder to set up because the rollers must be adjusted so that they press against the rails with enough tension to prevent the carriage from moving laterally or rotating, and yet not so much as to bind when moving along the channel.
I have found that the rollers need adjusting periodically and can be affected by environmental changes. This is made worse by carriages made from materials such as wood. Thus, while linear rails need occasional cleaning and lubrication, roller carriages require more frequent adjustment.
They may or may not need periodic lubrication but that depends on whether the roller bearings used. Since delta printer axes are vertical, cleaning the channel isn’t normally an issue but is something you should inspect from time to time.
Recall that each axis is connected to the effector via a carriage and a set of parallel arms (delta arms). The delta arms of most new delta printers are either 3D printed, injection molded or assembled from joints glued to carbon fiber tubes.
There are several types of rod ends that are used. These include ball ends (e.g., Traxxas), concave magnets with steel balls (or similar), 3D printed or injection-molded joints, and joints with captured bearings. Except for the magnet option most delta printers include one of these types of rod ends.
For example, the Mini Kossel design typically uses the Traxxas rod ends, the SeeMeCNC printers use injection molded parts (arms and joints), and the Kossel Pro uses injection molded parts with captured roller bearings.
The original Rostock and variants typically have printed arms. The Rostock Max v2 and Orion have injection-molded arms. The Kossel Pro, OpenBeam Kossel RepRap, and many Mini Kossel kits have carbon arms made with carbon fiber tubes. I have seen some examples with threaded rods, but these tend to be pretty heavy and may limit movement speed.
A stepper motor is a special type of electric motor. Unlike a typical electric motor that spins a shaft, the stepper is designed to turn in either direction a partial rotation (or step) at a time.
Think of them as having electronic gears where each time the motor is told to turn, it steps to the next tooth in the gear. Most stepper motors used in 3D printers can “step” 1.8 degrees at a time.
Another aspect of stepper motors that makes them vital to 3D printers (and CNC machines) is the ability to hold or fix the rotation. This means that it is possible to have a stepper motor turn for so many steps, and then stop and keep the shaft from turning.
Most stepper motors have a rating called holding torque that measures how much torque they can withstand and not turn. Four stepper motors are used on a typical delta printer. One each is used to move the X, Y, and Z axes, and another is used to drive the extruder (E axis).
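Because the firmware thinks in steps rather than millimeters, one of the first numbers calculated for any axis is its steps-per-millimeter value. Here is a minimal sketch for a belt-driven delta tower; the motor, microstepping, belt, and pulley figures are common example values, so check your own hardware before relying on them.

```python
def steps_per_mm(full_steps_per_rev=200,   # 1.8 degrees per step
                 microstepping=16,         # driver microstep setting
                 belt_pitch_mm=2.0,        # GT2 belt tooth spacing
                 pulley_teeth=20):
    """Steps the controller must issue to move a belt-driven carriage 1 mm."""
    mm_per_rev = belt_pitch_mm * pulley_teeth          # carriage travel per motor revolution
    return (full_steps_per_rev * microstepping) / mm_per_rev

print(steps_per_mm())   # 80.0 steps/mm for these example values
```

For these example numbers the result is 80 steps/mm, a figure you will often see in delta firmware configurations.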
The build platform or build plate (sometimes called print bed) can be made from glass, wood, Lexan, aluminum, and composite materials. Glass is the most common choice.
It is to this surface that the first layer of the object is printed.

Electronics

The component responsible for reading the G-codes and translating them into signals to control the stepper motors is a small microcontroller platform utilizing several components.
Most notably is the microprocessor for performing calculations, reading sensors (endstops, temperature), and controlling the stepper motors. Stepper motors require the use of a special board called a stepper driver. Some electronics packages have the stepper drivers integrated, and others use pluggable daughterboard’s.
Most delta printers use a commodity-grade electronics board (RAMPS, Rambo, etc.). The most common choice for smaller delta printers such as the Mini Kossel is RAMPS, which uses an Arduino Mega, a special daughterboard (called a shield), and separate stepper driver boards.
The electronics board is where you load the firmware, which contains the programming necessary for the printer to work. This is either a variant of Marlin (Mini Kossel, Kossel Pro) or Repetier-Firmware (Orion, Rostock Max v2). As discussed previously, this source code is compiled and then uploaded to the electronics board.
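If you are curious how host software talks to this board, the exchange is simply plain-text G-code over a USB serial link. The sketch below uses the pyserial library (assumed to be installed); the port name, baud rate, and the expectation of an "ok"-style reply are illustrative and vary between boards and firmware builds.

```python
import serial
import time

# Port name and baud rate are examples; yours will differ.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as printer:
    time.sleep(2)                    # many boards reset when the port opens
    for command in ["G28", "M105"]:  # home all axes, then report temperatures
        printer.write((command + "\n").encode("ascii"))
        reply = printer.readline().decode("ascii", errors="replace").strip()
        print(command, "->", reply)  # most firmware replies with 'ok' or a status report
```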
Now that I have discussed the axes and how they are moved, as well as the electric motors that move the components, the extruder, the hot end, the build platform used to form the object, and the electronics, it is time to discuss how a frame holds all these parts together.
Delta printers share a common design for the frame. While there are some differences in how the top and bottom portions are constructed and that there are several types of materials used, most designs use metal beams (sometimes called rods) or aluminum extrusions for the vertical frame components.
I have seen at least one design that used an all-wood frame, but that was a custom design and not a popular choice.
Recall that the delta printer has a base that secures the build platform, steppers for the axes, as well as a top section that holds the idler pulleys for the axes. Most designs incorporate the electronics, power supply, and other electronics in the lower section.
While the vertical axes use aluminum extrusions, the choice of frame material can vary among delta designs. The best frames are those that are rigid and do not flex when the extruder is moving or when the printer moves an axis in small increments. As you can imagine, this is very important to a high-quality print.
Some printers, such as the Mini Kossel and Kossel Pro, use the same aluminum extrusions to form the base and top of the printer. The Mini Kossel uses printed vertices bolted to the extrusions. The Kossel Pro uses aluminum vertices bolted to the extrusions. While both form very stiff components, the Kossel Pro is noticeably stiffer.
In this section, I present three popular delta variants that represent excellent examples of good delta printers. Here I present more information about each printer, including its capabilities, and a short review.
Keep in mind these are only three examples of delta printers. While there are many others available, most are some variant of a Rostock or Mini Kossel. Thus, these printers represent what I consider the best examples of delta printers available.
SeeMeCNC Rostock Max v2
The Rostock Max v2 is an iteration of the original Rostock by Johann C. Rocholl, manufactured and sold by SeeMeCNC. The most significant aspect of this variant is the massive build volume: up to 1,300 cubic inches (an 11-inch diameter and a height of 14-3/4 inches).
Indeed, with a spool of the filament on the top-mounted spool holder, the printer itself is over 4 feet tall—presenting a very impressive profile.
I mentioned previously that the printer is constructed using laser-cut frame pieces bolted to large aluminum extrusions for the axes. However, the printer also uses injection-molded delta arms, joints, carriage mounts, and effector.
There are also Lexan panels covering the upper and lower axis towers, making the overall package clean and modern looking.
The Rostock Max v2 comes in kit form only. It is a nontrivial build given the number of parts and the moderately complicated hot-end assembly. Soldering, mechanical, and general electronics skills are required. That is, you should be familiar with using crimping tools, stripping, and soldering wires.
Although that may sound challenging, and it can be for those who have never built a 3D printer, SeeMeCNC provides a detailed, lengthy assembly manual with all the steps explained in clear language and reinforced with photos. I printed the manual so that I could make notes, and I was impressed by the size of the manual. It is very well done.
SeeMeCNC also hosts one of the best user forums I've seen (forum.seemecnc.com). If you get stuck or have questions, one visit, and a short query later, you will have your answers. Furthermore, the customer and technical support are also among the best in the business, easily overshadowing the larger vendors.
For example, I made a mistake assembling one of the Lexan panels and managed to break it. SeeMeCNC sent me a new one the very next day and I received it only two days later. It doesn’t get better than that.
Calibration is easy and the manual makes the steps very simple. In fact, SeeMeCNC uses macros to help set the endstops and calibrate the axes. The hot end has operated flawlessly without extrusion failures or extraneous artifacts, and mechanical noise is moderate. It just works.
This printer has everything you need for great-looking prints. The only thing I found missing is an auto bed leveling (Z-probe) feature. However, I found there was no need for this, as the print surface is very flat with no visible imperfections. Indeed, when testing the maximum build diameter, I found that the hot end tracked evenly across the entire print bed.
To understand this significance, consider that I have spent countless hours tuning and adjusting print beds on other printers, whereas the Rostock Max v2 was dead-on without any bed adjustment whatsoever!
In fact, I found the Rostock Max v2 to be a high-quality, professional-grade delta printer (despite the kit factor). If you need a delta printer with a large build volume and you can handle the assembly, the Rostock Max v2 will provide many hundreds of hours of reliable printing.
The Kossel Pro is a consumer-grade edition of the Mini Kossel made by OpenBeamUSA and sold by MatterHackers (matterhackers.com/store/printer-kits). But it is much more than that. The printer is of very high quality and in all ways is a step up from its RepRap Mini Kossel ancestor.
Interestingly, OpenBeamUSA also offers a lower-cost version called the OpenBeam Kossel RepRap, which is a smaller version of the Kossel Pro that uses more plastic parts and fewer specialized parts.
However, the OpenBeam Kossel RepRap shares many of the same parts as the Kossel Pro and it can be upgraded to the Kossel Pro with the purchase of an upgrade kit. Both printers are sold in kit form.
The most significant feature of this printer is the frame. As I mentioned previously, it uses the same OpenBeam 1515 aluminum extrusion that is popular with the Mini Kossel, as well as milled aluminum vertices and linear rails.
There are very few plastic parts, all of which are high quality and injection molded. Another impressive feature is the effector, which incorporates a Z-probe, an always-on fan for the hot end, two part-cooling fans, as well as an LED light ring.
At first glance, this printer has all the most-requested features. Indeed, the Kossel Pro has an impressive list of features. The only thing I found oddly missing is a spool holder; there is not even a mount. However, due to the origins of the OpenBeamUSA components, it isn't hard to find a spool holder that works.
Despite that this printer comes only in kit form, the build is very easy. In fact, one of the objectives of OpenBeamUSA is to make the printer easy to build quickly. This is achieved by using only bolt-on or plug-in wiring and components.
In fact, the main wiring harness for the hot end, Z-probe, and light ring use a single wiring bundle with molded connectors eliminating the need for any soldering. Furthermore, most of the tools you need are included in the kit.
The Kossel Pro and OpenBeam Kossel RepRap are very new designs and are only just now moving from Kickstarter funding to full-on production. Thus, some things are still evolving and may change slightly.
Fortunately, many of the minor issues in the first run of production kits have been solved. There is a small army of eager enthusiasts sharing the latest information about the printers.
Unlike the Rostock Max v2, there is no fine adjustability in the axes (i.e., endstops). This is partly because the assembly requires precise placement of the components during the build, and it is bolstered by superior-quality components and auto bed leveling (Z probing).
I really like this printer. It is a sophisticated black-on-black, serious-looking delta printer that always gets a look when friends stop by. If you want a high-quality delta printer that has a moderate-sized build volume.
And you want to experience building your own from a kit without the tedious soldering and electronics work, the Kossel Pro is an excellent choice.
Given the documentation and the ever-expanding and improving user forums, I expect this printer to be a very popular choice for those who want a printer with better quality, reliability, and maintainability than the RepRap variants.
The Mini Kossel is one of the newest RepRap delta printers, also designed by Johann C. Rocholl. The Mini Kossel is a hobbyist-grade (RepRap) printer that is entirely DIY. While you can buy kits that include all the parts, most people source their own parts or buy subcomponents from various vendors.
In fact, the Mini Kossel has been copied and modified by many people. I find a new variant of the Mini Kossel almost weekly. Some have minor changes, like using a different extrusion for the frame or a different carriage mechanism (see the earlier discussion on axis movement).
But others have more extensive changes, such as alternative frame vertices and use of injection-molded parts, and a few have even increased the build volume.
Since this design is a pure RepRap, DIY endeavor, listing standard features isn’t helpful because there are so many options that you can choose.
For example, you can choose your own hot end (1.75mm or 3mm, all-metal, peek, etc.), optionally add a heated build plate, add cooling fans, and so on. Indeed, there is almost no limit to what you can do with this little printer.
About the only thing I can say that is standard on most variants of the Mini Kossel is a small size. The printer is only a little over two feet tall and takes up very little room. Although the build volume is quite small (about 5 to 6 inches in diameter and 6 to 8 inches tall), it is large enough to print most moderate-sized parts one at a time.
Building the printer is pretty easy if you have basic mechanical and electrical skills. The build time is only slightly less than what would be required for the Rostock Max v2, but because there are fewer parts, the build is a bit faster and the frame is less complicated to assemble.
While some vendors offer the Mini Kossel in kit form, few offer any form of help beyond the basic assembly. Fortunately, there are numerous articles, blogs, and independent forums that offer a lot of help. I would start with a visit to the Mini Kossel wiki and then search for topics you need help with, such as “Z-probe assembly” or “Mini Kossel calibration”.
Another helpful forum is the RepRap general forum, but enthusiasts building all manner of delta printers use this forum, so be aware that some of the information may not apply to the Mini Kossel. You can also check out the delta bot mailing list.
Not surprisingly, given its small size and use of standard components, the Mini Kossel is the cheapest of the delta printers available today. If you opt to go without a heated print bed and use the less expensive axis components, you can find kits for under $500, and even a few around $400.
I managed to source one of my Mini Kossel printers at about $300, but I opted to use some used parts from other Cartesian printers.
If you are looking for a delta printer to start with, or want to experience building a delta printer from scratch or as a companion to a fleet of Cartesian printers, the Mini Kossel is an excellent choice. Build one for yourself or invite a friend to build one together.
Applications of 3D Printing
Every new invention is motivated by the desire to do something that was never done before or improve on currently existing ways to solve a problem. Since the 1990s, the applications for 3-D printing have literally exploded as size limitations and costs have dropped and the list of materials that can be used with this technology has expanded dramatically.
The applications can be grouped into several broad categories:
Let’s look like a few examples in each category.
One attraction of 3D printing for commercial applications is the ability to make complex 3D prototypes or finished products that are not easily manufactured by conventional means.
At their present stage of development, 3D printers cannot crank out large quantities of identical parts at costs as low as can be achieved through mass production.
There are other features of 3D printing that are appealing in situations where time and cost are important. Compared to conventional (subtractive) manufacturing methods there is less wasted material.
Conventionally manufactured products are often transported long distances, even across continents before reaching their final destination. With 3D printing, production and assembly can be local. When unsold products are discontinued, they often wind up in landfills. With 3D printing, they can be made as needed.
Rapid prototyping is still the main attraction of 3D printing for industrial applications. Slowly, that is changing. Today, it is estimated that about 28% of the money spent on printing things is for the final product, as opposed to a prototype.
An alternate approach to a huge printer is a series of industrial size printers which can produce components of an object which can then be assembled to make the whole, something larger than the capacity of an individual printer.
Nozzles are relatively simple devices, specially shaped tubes through which hot gases flow. All jet engines use nozzles to produce thrust, conduct exhaust gases out of the nozzle, and to set the mass flow rate through the engine.
GE Aviation is pulling 3D printing out of the laboratory and installing it in the world’s first factory to use this technology in the manufacture of jet engine fuel nozzles.
The LEAP fuel nozzles are 5 times more durable than previous models. 3D printing allowed GE Aviation engineers to design them as one part rather than the 20 individual parts required by conventional manufacturing techniques.
Employing additive manufacturing also enabled engineers to redesign the complex internal structure required for this critical part, making it both lighter and more efficient.
GE is also developing 3D-printed parts for the GE9X engine, the world’s largest jet engine which will be installed in the next generation Boeing 777X long-haul passenger jet.
Notable is the close relationship between Ford’s research and development and its 3D manufacturing facility. CAD files are often sent back and forth between the two so that a design for a prototype can be built as a physical model, examined, tweaked as needed and then returned for fine-tuning of the CAD model.
Engineers are constantly finding practical applications for the 3D print technology. An example, reported in The Economist, deals with the physical principle that "…fluids flow more efficiently through rounded channels than they do around sharp corners, but it is very difficult to make such channels inside a solid metal structure by conventional means, whereas a 3D printer can do this easily."
3D Printing in Space
The National Aeronautics and Space Administration (NASA) has statically tested a 3D printed fuel injector for rocket engines. Existing fuel injectors were made by traditional manufacturing methods.
This required making 163 individual components and assembling them. Using 3D print technology, only 2 parts needed to be made, saving time and money as well as allowing engineers to build parts that enhance rocket engine performance and are less prone to failure.
The injector performed exceedingly well. Nicholas Case, the propulsion engineer leading the testing, summed up the case for including 3D print technology as part of the manufacture of rocket components:
“Having an in-house additive manufacturing capability allows us to look at test data, modify parts or the test stand based on the data, implement changes quickly and get back to testing.
This speeds up the whole design, development and testing process and allows us to try innovative designs with less risk and cost to projects.”
In September 2014 NASA launched its first 3D printer into space. Before the launch, it had to be tested and modified to work in a low-gravity environment. Its short-term application will be for building tools for the International Space Station astronauts.
In the longer term, 3D printers may be used to supplement the rations carried on space missions by printing food. NASA is exploring ways to develop food that is safe, acceptable and nutritious for long missions.
Current food systems don’t meet the nutritional needs and 5-year shelf life required for a Mars mission. Because refrigeration and freezing require significant spacecraft resources, NASA is exploring alternatives.
Telescopes seem like simple devices but they are made up of many parts, are hard to build and hard to operate in space. Jason Budinoff of NASA Goddard is simplifying the process while working on the first space telescope made entirely of 3D printed parts.
3D printing has been used for some time to build architectural models. These help clients visualize the design, reduce the hours spent on crafting models and create a library of reusable designs.
Thousands more can be found with a quick Google search. These models are not limited to a building here or a stadium there but include scale models of cities.
The question naturally arises: if one can build a model of a house, can one build a full-scale house? The short answer is – almost. There are house printers on the market.
The procedure is to build one level of a house at a time. Once the first level is completed, the machine can be moved upward to build the next level and so on until the desired height is reached. It can be yours for a little over $15,000 (not including the concrete).
Billed as the world’s first 3D printed house, the Canal House in the Netherlands is under construction and is expected to be completed in 2015. It is being built at ¼ scale entirely from bioplastics.
It is not expected to be an actual residence but a proof-of-concept undertaking. Each of the rooms will have furniture that illustrates the capabilities of 3D printing.
The 3D printer used to build the Canal House, called the Kamermaker, is an upscale version of the Ultimaker 3D desktop printer. The material used is a bioplastic made with 80% vegetable oil.
As a brief aside, we should mention that people are looking into alternatives to concrete for building materials.
A leading candidate is the humble soybean. Used both as food and as an ingredient in non-food products, soybeans have now been turned by students at Purdue University into a material that can be used for 3D printing. Called Filasoy, it is a low-energy, low-temperature, renewable and recyclable filament created with a mixture of soy, tapioca root, cornstarch, and sugar cane.
The aim is to provide an alternative to plastics for 3D printing as plastics are petroleum based and not derived from renewable resources. The walls of the small fortress, as well as the tower tops, were fabricated separately and then assembled into a free-standing structure.
Ideally, 3D printing could be used to build entire full-scale houses or groups of houses. Prof. Behrockh Khoshnevis of the University of Southern California has developed a process called contour crafting which might make that possible.
Contour crafting is a fabrication process by which large-scale parts can be fabricated quickly in a layer-by-layer fashion.
For now, 3D printing has attracted the attention of the fashion industry by way of a fashion-as-art concept. The dresses consist of 3D printer fabricated components and the completed garment and its accessories are then finished by hand. A fascinating example of this approach is the Spire Dress designed by Alexis Walsh and Ross Leonard.
It is made up of 400+ individual pieces, some in the form of spires, using nylon plastic. Then the individual pieces are assembled by hand to form the finished product.
Two major drawbacks to printing full-size garments are the size of the printers and the ability to print natural materials. There are 2D printers which, given a t-shirt, can print virtually any imaginable design on it.
Shoe manufacturers have taken an interest in 3D printing. NIKE has used the technology for a football cleat. A prototype for a lightweight plate attached to the shoe was made with a 3D printer. The company was able to manufacture the plates using 3D technology as well.
New Balance has done customization for runners as a pilot program to test the utility of 3D printing for athletic shoes. To make the shoe, New Balance fits the runner with a pair of shoes that use sensors to record data under simulated race conditions.
The healthcare sector has become a major user of 3D print technology. They range from creating customized crowns and braces for teeth, shells for hearing aids, various prosthetics, and implantable devices, and models of various body organs to allow surgeons to refine their approaches and reduce the time needed for operations.
Bioprinting, still in its infancy, will eventually allow customizing the delivery of medicines to specific organs, print human tissue, and even cosmetics. Some recent articles review aspects of the medical applications of 3D printers.
There is some speculation that the dental laboratory as we know it today may be replaced by 3D printing in the future. Traditionally, crowns are made in a dental lab.
In addition, the traditional materials used in dentistry expand and shrink with exposure to temperature and moisture. This is difficult to control. The result of this on the patient is more time in the chair.
A few days or a few weeks later the crown is sent to the dentist. Another visit is scheduled for the patient to fit the crown. Depending on the fit, a third visit may be required for a final adjustment.
It is now possible for a dentist to make a 3D scan of the tooth (or crown) and print it on the spot. Since it is made to measure, less time is required in the chair. As with almost all medical applications, the process is still in its infancy but shows great promise for increased patient comfort and (eventually) reduced cost.
Maxillofacial prosthetics (eyes, noses, ears, facial bones) are very laborious and expensive to produce. Ears and noses can cost up to $4,000 each. An impression is taken of the damaged area, the body part is then sculpted out of wax and that shape is cast in silicone.
Using 3D print technology, digital cameras are used to scan the injured area. A digital model is then created for the part, which incorporates the patient’s skin tone. This information is sent to a 3D color printer.
The cost of the printed part is about the same as that of a handcrafted prosthetic. The advantage lies in the fact that now a digital model exists. In the future, when replacements are needed for whatever reason, they can be made very cheaply.
3D printing has been effectively used to customize mechanical limbs. The usual goals are to add a capability missing due either to a birth defect or injury. Other reasons include improving the comfort and fit of an existing prosthetic device. As an example, consider a prosthetic hand designed for a man who, since birth, was missing a large part of his left hand.
A high-tech prosthetic which cost over $40,000 was replaced with a 3D printed hand which provided him a stronger grip and cost much less.
Brain surgery requires drilling holes in skulls. Cranial plugs made on 3D printers can fill those holes. Cranial plates can replace large sections of a skull lost due to head trauma or cancer.
The replacement joints were for a former athlete who had suffered for more than ten years with bowlegged legs bent six degrees out of alignment.
The motivation for using 3D technology to create replacement joints was to minimize the amount of bone that had to be shaved off to install each implant.
3D printed replica models of body parts and organs have proven to be valuable in medical applications. They allow analysis of complexities and alternative approaches prior to a patient’s surgery.
In the new approach, a custom-made stent graft, based on measurements from CT scans (computerized tomography, a technique combining a series of X-ray views taken from many different angles with computer processing to create cross-sectional images of the bones and soft tissue inside the body), is placed on the aneurysm by going through the groin.
Most patients go home after a few days with minimal pain or discomfort. The patients experience less blood loss. A shorter ICU (Intensive Care Unit) stay and a quicker return to a normal diet and regular activities than those who undergo an open procedure to fix this problem.
A number of bioprinters have appeared recently. According to one common definition, bioprinting is "using a specialized 3D printer to create human tissue. Instead of depositing liquid plastic or metal powder to build objects, the bioprinter deposits living cells layer by layer".
The goal, still at least a decade away, is to build human tissues for surgical therapy and transplantation. Many laboratories are testing the concept by printing tissue for research and drug testing. The speculation is that patching damaged organs with strips of human tissue will occur in the near future.
Before the more than 250 3D printers now on the market became available, those interested in the subject had to build their own from scratch, known as DYI (Do It Yourself) or from kits.
The only materials available then were easy-to-melt plastics. 3D printing was the province of hobbyists interested in learning the new technique. The items printed were small, mono-colored, and the end product of a learning process.
With the greater selection of printers available today, a vastly increased selection of materials, and greatly reduced costs, there now exists a very large number of things that people routinely print, such as:
Statuettes and figurines (bunnies, birds, cats, dogs, characters from movies or video animations, game pieces, etc.)
Jewelry of every description, limited only by the artist’s imagination
Toys and action figures
Art (practical things like vases to abstract art)
Gadgets – phone cases are especially popular
Household tools – screwdrivers, wrenches, broken part replacements
Sunglasses of every description and on and on and on.
There is constant experimentation with an expansion of the capabilities of 3D printers. The main constraints are:
The cost of the printer (fully assembled desktop units are available at prices ranging from under $400 to over $10,000)
The cost of material (plastics for printing are not cheap. Don’t be surprised if in the future the machines themselves will be available at giveaway prices because the profit will be in the materials they use)
The size of the object to be printed (this can be overcome by printing individual parts and then assembling them into a finished whole)
Constraints on printed objects by intellectual property laws. Remember, the lawyers haven’t paid much attention to the 3D printing community yet, but that is bound to change.
Food printing is being explored now that a larger selection of foods is available for that purpose. Small and large food printers are available for specialized purposes.
3D scanners have been around since the 1970s. They have been used for a variety of applications, including surveying, terrain mapping, documenting construction and mining projects.
Scans are regularly made of ships, consumer products, coins, medical devices, and dental appliances, among many other items.
A 3D scanner creates a digital representation of a physical object. Therefore, if it exists and is accessible, it can be scanned. The data collected by the scanning process is called a point cloud.
This is an intermediate step for the creation of a mesh, also called a 3D model, a digital representation of the scanned object. The mesh can be used for:
Creating models for rapid prototyping or milling
Analysis of structures under a variety of internal or external forces using finite element or finite difference methods
Computational fluid dynamics
After some additional processing, this is the information sent to a 3D printer which creates a physical object.
For 3D printing, a solid model is the goal and the expected input for a 3D printer. The challenge lies in mesh design and the compromises needed to get a sufficiently accurate mesh at an affordable computational price.
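Since STL is the lingua franca between the modeling world and the printer, it is worth seeing how simple the format really is. The sketch below writes a single triangle to an ASCII STL file; a real scanned part would contain many thousands of such facets, and the facet normal is left at zero because most slicers recompute normals anyway.

```python
def write_ascii_stl(path, triangles, name="scanned_part"):
    """Write triangles (each a tuple of three (x, y, z) vertices) as ASCII STL."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v1, v2, v3 in triangles:
            f.write("  facet normal 0 0 0\n")     # slicers usually recompute normals
            f.write("    outer loop\n")
            for x, y, z in (v1, v2, v3):
                f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One example facet: a right triangle lying in the XY plane
write_ascii_stl("example.stl", [((0, 0, 0), (10, 0, 0), (0, 10, 0))])
```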
Methods of Data Collection
Both contact and non-contact methods are used to collect information about 3D objects. Depending on the nature of the object these can be used individually or combined.
In contact-based procedures, a probe touches various points of an object’s surface to produce a data point (x, y, z coordinates of the location). Probes can be hand-held or part of a machine referred to as a coordinate measurement machine or CMM. Such machines can be stationary or be in the form of portable arms.
Sometimes physical contact with an object is impossible, impractical or undesirable. Then it becomes necessary to resort to non-contact methods. These involve using lasers, ultrasound or CT machines.
In laser scanning, a laser (red, white or blue depending on the application) passes over the surface of an object to record 3D information. As it strikes the object’s surface, the laser illuminates the point of contact.
A camera mounted in the laser scanner records the 3D distribution of the points in space. The more points recorded, the greater the accuracy. The greater the accuracy, the more time is required to complete the scan and the greater the complexity of the 3D model created from the data. With this method, it is possible to have very accurate data without ever touching the object.
Generally, contact digitization is more accurate in defining geometric forms than organic, free-form shapes. In case it has been some time since you've taken an art class: geometric shapes have clearly defined edges, typically achieved with tools. Crystals also fall into this category even though they are created by nature.
Examples include spheres, squares, triangles, rectangles, tetrahedra, etc. Organic shapes are typically irregular and asymmetric. They have a natural look and a curving, flowing appearance. Organic shapes are associated with things from nature such as plants, animals, fruits, rivers, leaves, mountains, etc.
Laser scans produce good representations of an object’s exterior but cannot record interior or covered surfaces. As a simplistic example, consider a hollow sphere. The laser scanner will accurately describe its general shape, but if there is anything inside the sphere, this would have to be detected with an ultrasound or a CT scan.
The choice of which scanning technology to use will depend on the attributes of what you are attempting to scan, such as its shape, size, and fragility. As a general rule, laser scanning is better for organic shapes. It is also used for high-volume work – scans of cars, planes, buildings, terrain, etc.
It is the method of choice if an object cannot be touched, e.g., in documenting important artifacts. Digitizing is used in engineering projects where precise measurements of geometrically shaped objects are required.
Both methods can be combined when necessary. The end result of a scan, regardless of which method is used, is called a point cloud.
From Point Cloud to 3D Model
The nice thing about point clouds is that they can be measured and dimensioned. This makes them valuable to architects and engineers. For the former, the ability to view and measure their project directly from their computer reduces the number of trips needed to the job site and thus reduces cost.
For engineers, point clouds can be converted to surface models for visualization or animation and to 3D solid models for use in 3D printing, manufacturing and engineering analyses, including finite element analyses and computational fluid dynamics.
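As a small illustration of why measurable point clouds matter, the sketch below loads a cloud of XYZ points with NumPy and reports the overall dimensions it spans. It assumes a plain-text file with one "x y z" triple per line, which is a common but by no means universal export format.

```python
import numpy as np

def cloud_dimensions(path):
    """Return the width, depth, and height spanned by an XYZ point cloud."""
    points = np.loadtxt(path)        # expects whitespace-separated columns: x y z
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    return maxs - mins               # bounding-box size along each axis

# Example usage (the file name is hypothetical):
# print(cloud_dimensions("scan.xyz"))
```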
Now that we have hundreds of thousands if not millions of data points obtained from 3D scans, we can go on to print our object, right?
The points have disappeared and have been replaced by a reasonable facsimile of a structure. There are two considerations before creating a 3D model:
First, the point cloud data must be cleaned up a bit. A number of point clouds are generated in a scan to fully represent a 3D object. For purposes of analysis, these clouds must be merged into a single point cloud. This process is referred to as registration. Then some housekeeping must be performed on the consolidated cloud.
Second, CAD software, which produces the 3D model suitable for various applications, doesn’t know what to do with point clouds. Since CAD software expects to receive data in the form of surface representations of geometric forms and mathematical curves, the data must be translated into a form that it can interpret.
The software packages that create 3D models from point cloud data consist of both proprietary packages and commercially available ones. Among the popular commercial packages are (in no particular order):
and many, many others. These software packages input the point cloud data, clean and organize it, and produce 3D models in a wide variety of file formats, including the .STL file format popular in the 3D printing community. The final mesh can be achieved by way of a polygonal mesh model or a NURBS (Non-Uniform Rational B-Spline) model.
Before continuing, we should point out that the business of going from point cloud data to a CAD mesh is a non-trivial exercise. There is no unique path for this. It is, in fact, an area of active research.
With polygonal modeling, one works primarily with faces, edges, and vertices of an object. To make desired changes to a model, vertices can be repositioned, new edges inserted to establish additional rows of vertices and branching structures created. With polygon models, the process is easier to grasp.
However, as polygons are faceted, it can take quite a few of them to create a smooth surface. The more polygons, the greater the storage requirements. Even without this consideration, polygon modeling creates much larger files than NURBS modeling because the software keeps track of points and shapes in 3D space rather than mathematical formulas.
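As a rough illustration of how quickly polygon counts inflate file sizes, the sketch below estimates the size of a binary STL file for a sphere approximated by repeatedly subdividing an icosahedron (20 faces, with each refinement pass quadrupling the face count). The 84-byte header and 50 bytes per triangle follow the standard binary STL layout; the subdivision levels shown are just illustrative.

```python
def binary_stl_size(triangle_count: int) -> int:
    # Binary STL: 80-byte header + 4-byte triangle count + 50 bytes per triangle.
    return 84 + 50 * triangle_count

for level in range(6):
    triangles = 20 * 4 ** level      # icosahedron refined 'level' times
    size_kib = binary_stl_size(triangles) / 1024
    print(f"subdivision level {level}: {triangles:>6} triangles, ~{size_kib:8.1f} KiB")
```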
With NURBS modeling, one obtains smoother results. A NURBS object has only four sides. These are manipulated to create surfaces. This approach requires less storage than polygonal models.
NURBS surfaces can be deformed, have shapes cut out of them, and be stitched and blended together to form complex shapes. Because they are defined by mathematically continuous curves, NURBS surfaces are inherently smooth, offering an easy way to maintain smoothness within a model.
The requirements of your project – especially the time available for it - and the capability of the software package you are using – will guide your choice of the method you use to achieve the final mesh. In some cases, both will come into play. For low-resolution polygon models, NURBS smoothing can be applied to provide a nice finish, polygon control, and small file sizes.
Software for 3D Printing
The key element in that process is the software. In 3D printing, as in other fields of engineering, the hardware development tends to precede the creation of software. Almost weekly there is an announcement of a new 3D printer either being developed or being brought to market.
By contrast, some of the Computer-Aided-Design (CAD) software used to develop 3D models for those printers dates back to the 1970s. Software packages developed later tended to be evolutionary rather than revolutionary, so the philosophy that guided software development some 40 years ago is still embedded in newer software, albeit with much-improved user interfaces.
There are a number of reviews and listings of software for developing 3D models. Here we follow the approach in the cited reference and categorize these as:
CAD programs
Freeform modeling tools
Print Preparation and Slicing software
All will produce a 3D model suitable for printing. Each category, however, is aimed at a different audience. CAD programs typically deal with hard geometries and are well suited for engineering applications.
Freeform and sculpting tools are aimed more at artists and creative modelers interested in animation, visual effects, simulation, rendering and modeling. Because 3D printers are very picky about the input they will accept, we need a third software category that checks the model and generates the G-code used by printers (print preparation and slicing software).
There appears to be general agreement that, for beginners, the easiest programs to use are:
Autodesk 123D
TinkerCAD
SketchUp
They have several characteristics in common. In most cases, the basic software is free, with an option to purchase an advanced version as your capabilities and design needs increase. All are browser-based (requiring a reasonably current version of Google Chrome or Firefox). Tutorials are available to help you get started.
In the case of 123D and TinkerCAD, these are extensive. All come with a library of primitive shapes (cones, spheres, cubes, cylinders, toruses) which can be imported to the workspace and, by addition or subtraction, combined to form almost arbitrary shapes.
SketchUp has extensive capabilities and documentation for applications to architecture and interior design in addition to civil and mechanical engineering. Most of the free versions and all of the advanced versions allow export of STL files to 3D printers, either your own or to a 3D print service.
The above is good for learning the basics of CAD software. There are several options available once you decide you need additional computing power. Some developers of 3D printers have proprietary 3D modeling software which is geared to their hardware and is not available unless you purchase their printer.
The alternative is commercially available software, except as noted. These packages come with a price tag ranging from moderate to expensive. The learning curve rises steeply because of the added capability, and therefore complexity, of these packages.
Most are intended for engineering applications. All come at a minimum with tutorials on various aspects of the software. Training and consultation services are also available, at additional cost, for some of the software packages.
A partial list of software for intermediate, advanced and professional users includes:
SolidWorks
Autodesk Inventor
Rhino3D
CorelCAD
OpenSCAD
Free 3D CAD
PTC Creo Elements/Direct Modeling Express
As full descriptions of these software packages are available in the cited references, only a few comments need be made here. All but Free 3D CAD are available for purchase, the cost depending on the capabilities required.
Most offer a free download of a downscale version to allow potential users to decide whether the program meets their needs.
SolidWorks and Inventor are comprehensive programs for engineering design and analysis. SolidWorks comes in three versions – Standard, Professional, and Premium. There is a wide range of books, tutorials, guides, project files and videos to assist in various aspects of its use, from setup to engineering design applications.
Training is available at an additional cost. Inventor is available in two versions: Inventor and Inventor Professional. Extensive support is available, ranging from help with installation, online tutorials and a community forum through 24/5 direct contact with and assistance from support staff.
Rhino3D most likely belongs in the next section dealing with software aimed at artists yet it is frequently used to develop models for 3D printing. Rhino is a NURBS program for creation, editing, analyzing and translating NURBS surfaces.
Like other programs in this category, it supports a variety of file formats. Rhino3D runs under Windows.
CorelCAD 2015 has both 2D drafting and 3D design tools. It runs on both Mac and Windows computers. OpenSCAD is a 3D solid modeling program running under Linux/Unix, Windows and Mac OS X operating systems. Being a Unix-based system it is, in our opinion, not particularly easy to use unless one has first mastered Unix.
Free 3D CAD is an open source, modular program that is designed as a parametric modeler – it allows modification of a design by going back into the model history and changing its parameters. The program is intended primarily for mechanical design. It is still in the early stages of development.
CAD software is not the only path to the creation of a 3D digital model. For some time, 3D computer graphics software has been available offering features which permit 3D computer animation, modeling, simulation and rendering for games, film and motion graphics artists.
The basic principles are taught at universities and the subject of many texts, among them the book by Vaughan.
Since the tools employed by graphic artists include polygons, NURBS and subdivisions, a 3D model suitable for printing is created in the process. As we haven’t mentioned it before, a subdivision surface is a method of representing a smooth surface using a piecewise linear polygonal mesh.
A mesh is piecewise linear if every edge is a straight line and every surface is a plane. The most common examples are triangles in two dimensions and tetrahedra in three dimensions, though other options are possible.
Given a mesh, it is refined by subdividing it, creating new meshes and vertices. The result is a finer mesh than the original one, containing more polygonal faces. This can be done over and over until the desired degree of surface refinement is achieved.
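A minimal sketch of the idea, assuming a triangle mesh stored as plain vertex and face arrays: each refinement pass splits every triangle into four by inserting edge midpoints (the smoothing step used by schemes such as Loop subdivision is deliberately omitted here).

```python
import numpy as np

def subdivide_once(vertices, faces):
    """Split every triangle into four using edge midpoints (no smoothing)."""
    vertices = list(map(tuple, vertices))
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = tuple((np.array(vertices[i]) + np.array(vertices[j])) / 2.0)
            vertices.append(m)
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices), new_faces

# A single triangle refined twice: 1 -> 4 -> 16 faces.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = [(0, 1, 2)]
for _ in range(2):
    verts, faces = subdivide_once(verts, faces)
print(len(faces), "faces after two refinement passes")
```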
The software in this category includes:
Autodesk Maya
3ds Max
Cinema4D
Blender
Both Autodesk Maya and 3ds Max are the standards for the gaming and film industries. They used to be competing pieces of software, but are now owned by the same company.
The only difference between these two is the layout and inclusion of certain tools. 3ds Max works well with motion capture tools, while Maya allows you to import various plug-ins to create realistic effects.
Many artists have a favorite between the two and will swear up and down by it. For the purposes of 3D printing, they are identical. The full versions of both are free if you are a student. Otherwise, they are expensive.
Cinema4D is another 3D modeling, rendering, and animation tool. It has a large number of sculpting tools which let you mold the model as if it were clay to create shapes and contours.
Unlike all of the other tools mentioned in this section Cinema 4D has a tool called the PolyPen Tool which lets you draw polygons on the screen, instead of starting with a base shape and working from there.
This lets you create complex shapes very quickly and easily. For those who also work in gaming, it’s designed to import seamlessly with the Unity 3D game engine, as well as auto-update in Unity when changes to a model are made in Cinema4D. Just like Maya and 3ds Max, it is very expensive.
Blender is the free alternative to Maya/3ds Max. It’s an open source program that aims to be just as good as its Autodesk cousins. Unlike the others, it comes with its own game engine and video editor included, which is useful for anyone looking to create a game on a budget.
For 3D modeling, it’s a good option to use if you aren’t a student and want access to advanced software. If you need help getting your model from Blender to your 3D printer, Shapeways has a tutorial on how to export from Blender to a .stl file.
Remember when you were young and played with modeling clay? You started with a ball of the stuff and then by pushing, pulling, pinching and squeezing you made a figure of some kind.
Well, now you can do the same thing on a computer with the help of sculpting software. Make a model, export it as an STL file, send it to a 3D printer and you can relive the glory days of youth with greater accuracy and without getting your fingers dirty. What software lets you do this? Some examples are:
Digital sculpting is relatively new but gaining in popularity. Many of the programs available use polygons to represent an object. Others use voxel-based geometry, in which the volume of the object is the basic element. Each has advantages and disadvantages for particular applications.
Almost But Not Quite
Let’s assume that you have used one of the methods described above and have designed an object for 3D printing. Naturally, you are anxious to send it to your own printer or to a 3D print service. First, though, there are a few things to consider:
Each printer bed has a finite size. If your object is bigger, you can always scale it down. Reality will intrude again if you do not ensure that critical dimensions such as wall thickness are of the minimum size required by the printer.
If you are working with metals you might get away with thin elements but plastics are much weaker and far less forgiving, especially when heated. Even if you succeed in printing a thin member of your object, it may break during handling or shipment.
Be careful of units. If your design is in millimeters, be sure that the printer does not expect centimeters or inches.
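The checks described above are easy to automate. Below is a minimal sketch, assuming the model's vertices are already loaded into an (N, 3) NumPy array in millimeters and that the printer bed dimensions and minimum wall thickness are the hypothetical values shown; it reports whether the part fits and, if not, the uniform scale factor needed.

```python
import numpy as np

BED_MM = np.array([200.0, 200.0, 180.0])   # assumed build volume (x, y, z) in millimeters
MIN_WALL_MM = 1.0                           # assumed minimum printable wall thickness

def fit_report(vertices_mm, wall_thickness_mm):
    extents = vertices_mm.max(axis=0) - vertices_mm.min(axis=0)
    scale = float(np.min(BED_MM / extents))
    if np.all(extents <= BED_MM):
        print(f"Model {extents.round(1)} mm fits the bed as-is.")
    else:
        print(f"Model {extents.round(1)} mm is too big; scale by {scale:.2f} to fit.")
        if wall_thickness_mm * scale < MIN_WALL_MM:
            print("Warning: scaled walls would fall below the printer's minimum thickness.")

# Hypothetical model: a 250 mm tall vase with 1.2 mm walls, modeled in millimeters.
vertices = np.random.rand(5000, 3) * np.array([120.0, 120.0, 250.0])
fit_report(vertices, wall_thickness_mm=1.2)

# If the file were actually modeled in inches, convert before checking: 1 inch = 25.4 mm.
vertices_mm_from_inches = (vertices / 25.4) * 25.4
fit_report(vertices_mm_from_inches, wall_thickness_mm=1.2)
```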
To overcome the limitations of the printer bed or for artistic reasons, an object can be built out of dozens or hundreds of separate pieces. Hair, buttons on a coat, different components of an object such as attire or accessories can be created as separate components in a 3D model.
This won’t work for 3D printing. Unless the individual parts are to be glued together after printing, the model received by the printer needs to be a single seamless mesh. Attaching a few parts to the object after printing is a nuisance. Attaching hundreds can be painful.
By default, your object will print as a solid model. If this is what you want, fine. A solid model, however, requires significantly more material to print than a hollow one. Most printing services charge by volume.
It is in your financial interest to print a hollow figure instead of a solid one if this is feasible. When hollowing a model, be aware of the minimum wall thickness that the printer you are using is capable of producing.
Other checks, such as verifying watertightness and other geometric factors, are performed by print preparation software or printer frontends. This type of software is a collection of utilities that load STL files and check your 3D model.
The programs in this category have an integrated slicing capability to create the layers in the z-direction and send the resulting G-code to the printer. Examples include:
MakerWare (for MakerBot printers)
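To give a feel for what this class of software produces, here is a deliberately toy sketch that "slices" a simple cylinder into layers and emits a few lines of G-code. Real slicers compute each layer's perimeter and infill from the mesh, which is far more involved; the feed rates and extrusion values below are placeholders.

```python
import math

def slice_cylinder_to_gcode(radius_mm=10.0, height_mm=5.0, layer_mm=0.2, segments=36):
    """Emit toy G-code: one circular perimeter per layer, no infill or supports."""
    lines = ["G21 ; units in millimeters", "G28 ; home all axes"]
    layers = int(round(height_mm / layer_mm))
    for layer in range(1, layers + 1):
        z = layer * layer_mm
        lines.append(f"G1 Z{z:.2f} F600 ; move to layer {layer}")
        for step in range(segments + 1):
            angle = 2 * math.pi * step / segments
            x = radius_mm * math.cos(angle)
            y = radius_mm * math.sin(angle)
            lines.append(f"G1 X{x:.2f} Y{y:.2f} E0.05 F1200")
    return "\n".join(lines)

print(slice_cylinder_to_gcode()[:300], "...")
```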
Finally, you’ve reached the stage where you can make a printed object. But when it comes out of the printer or is returned by the print service, you see changes you’d like to make. Do you start the model creation process all over again? No. The software comes to the rescue again.
Depending on the source you consult, there are now between 200 and 300 3D printers on the market. These range from machines for industrial applications and manufacturing, through specialized printers for medical research (bioprinters) and housing (concrete printers), down to consumer-oriented desktop-sized machines or smaller.
Many of the consumer-oriented machines have been developed by small companies (20 or fewer employees) who are very good at building 3D printers but are in no position to provide extensive support and training to their customers.
Eventually, 3D printers will reach plug-and-play status just as 2D printers have, but it will not happen soon and it won’t require 300 of them. Before that happens, though, 3D modeling software needs to be drastically improved for the non-engineer, non-artist market.
There are a number of reviews and evaluations of 3D printers. Almost all are focused on the hobbyist or home user. Of necessity, this limits consideration to FFF and SLA printers since currently, these are the only ones safe for use in a home environment.
Industrial additive manufacturing machines and materials are listed in the Senvol Database. We will not repeat here material that is readily available on the internet. Instead, we point out some salient features involving the use of 3D printers.
Nice to Know
Before tackling the decision about which printer to get, if any, let’s cover a little background, starting with Expectations: There are a countable infinity of multicolored 3D printed images to be found on the internet. Now is the time to remember a few facts:
(1) SLA printers print in only one color; (2) an FFF multi-nozzle printer will give you some color but not the quality or resolution you get from a machine that costs 80 times more than your desktop; (3) you need a 3D model to even think of printing.
2D printing is far more advanced technologically than 3D printing, but even in 2D printing you can’t turn on your equipment, press “PRINT” and expect a fully composed letter to come out. It won’t happen unless you first use the word processing software to compose the letter. A similar situation exists in 3D.
Materials for 3D FFF printing are not cheap. If you are making small objects the cost is small. If you make large objects, the cost is naturally higher. If you plan to make things in bulk, the cost can be very high, so much so that it would behoove you to look into alternative methods of manufacture.
3D printers are good for awakening one’s creativity. They allow tinkering with new designs and are good for learning about technology. They are especially useful for making unique designs of objects that would be prohibitively costly and slow to make by other methods.
Are the materials you will need for your project readily available and economical? Home printers work with a single material. If you plan to use multiple materials, you may need a more expensive machine. Are the materials available locally? If not, factor in shipping costs and time delays into your project.
Home printers have nozzles that clog, moving parts that break down. Can you repair the problems that will inevitably occur? If not, can you get support for your printer, either from the manufacturer, the retailer or locally? How long will service take – a day, a week, several months? If it breaks down and you need to ship it for repairs, who pays for the shipping?
Since you are printing layer by layer in the z-direction, the bonding will be imperfect so that, in effect, you are dealing with a laminated object. Laminates are inherently weaker than an equivalent object that is machined. Will your object have the necessary strength for its intended application?
From a mechanical engineering standpoint, the greater the number of layers, the greater the degradation in strength. If the object is to sit on a shelf, no problem. If it is to be put to a use where bending is involved, will it be strong enough for all practical purposes?
3D Printing Without a 3D Printer
So far we’ve talked about the different kinds of 3D printers, accessories, and applications of 3D printing; now it’s time to get into more detail on 3D printing itself.
To Buy or Not to Buy
First, the big question – should you buy a 3D printer yourself or use one of the many 3D printing services that are available? For some, the choice to buy or not to buy is a bit easier.
Many companies can invest in a 3D printer (if not an entire line of 3D printers) because they have the capital to afford the printer(s) and know that they will be getting constant use out of the machine. The overhead cost is recouped in manpower savings by letting the printers run overnight.
This allows them to create prototypes and finished products quickly. They can also recoup the cost of the printer(s) and supplies by working with other companies and individuals to offer 3D printing services. On the other hand, a hobbyist looking to purchase a 3D printer without that pool of funds needs to think more carefully about such a purchase.
For both a large company and the hobbyist there are several factors to consider when making the decision to buy a 3D printer. What kind of 3D printer do you want/need? How often are you going to use it? Also, how much are you willing and/or able to spend on the machine?
Let’s start with the kind of printer. We’ve gone over the various types of 3D printers in previous blogs, so you have an idea of what each kind can do. You need to select the printer you want based on what you are going to print.
Are you making mostly household décor and gifts or other highly detailed finished products ready to be shipped when printed, or are you printing prototypes of ideas before sending them off to be printed on a commercial-strength 3D printer?
These have different requirements for the kind of filament needed, the space of the print bed, and the print resolution. If you buy before you know what you want to print you might spend a lot of money to find out your printer can’t do what you need it to do.
What kinds of things you want to print will determine what kind of printer you need. After that, you need to look at how much you’re willing to spend. Speaking of money, you can judge the quality of a 3D printer by its cost.
If the price is too good to be true, it usually is. Printers under $600 often require assembly like IKEA furniture (except it’s even less intuitive), have a small print space, and/or have low-quality prints for various reasons.
If you’re trying to figure out how much a 3D printer is going to cost, assume an average of $1200 for an at home non-industrial strength printer. Add on the cost for the filament, usually $30 per roll for ABS/PLA plastics, and you have a large start-up cost.
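One way to frame the buy-or-not decision is a simple break-even estimate. The sketch below compares the running cost of owning a printer against paying a service per print; the dollar figures are only the rough assumptions quoted above plus a hypothetical per-print service price and prints-per-roll estimate, so substitute your own numbers.

```python
PRINTER_COST = 1200.0            # assumed up-front cost of a home FFF printer
FILAMENT_ROLL = 30.0             # assumed cost per roll of ABS/PLA
PRINTS_PER_ROLL = 15             # assumed number of mid-sized prints per roll
SERVICE_PRICE_PER_PRINT = 35.0   # hypothetical average quote from a print service

def own_cost(num_prints: int) -> float:
    rolls = -(-num_prints // PRINTS_PER_ROLL)   # ceiling division
    return PRINTER_COST + rolls * FILAMENT_ROLL

def service_cost(num_prints: int) -> float:
    return num_prints * SERVICE_PRICE_PER_PRINT

n = 1
while own_cost(n) > service_cost(n):
    n += 1
print(f"Owning the printer breaks even after roughly {n} prints "
      f"(own: ${own_cost(n):.0f}, service: ${service_cost(n):.0f})")
```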
When choosing a 3D printer you need to decide if having a smaller print space and limited color options (smaller printers usually have only one print nozzle) is worth saving a few hundred dollars, or if you want to spend more for a larger machine with more options. Another financial consideration is determining what conditions your printed objects will encounter.
3D print services will have commercial strength printers that are far too expensive for most people to buy and can print in materials that can take a beating and keep on going. On the other hand, if you’re printing out something that doesn’t need to be super strong you can get by with the plastic filament that is used by less expensive printers.
Lastly, how often you intend to use the printer is important. If you will be turning out multiple prints on a regular basis, or need to be able to print on your own schedule, then the upfront cost is worth it.
The convenience of not having to wait for your print to be shipped to you, the quick turnaround of designs, and the savings of being able to print your creations (as opposed to wasting expensive material when creating designs by traditional methods), makes having your own printer a sound investment.
If, on the other hand, you’re new to 3D printing and looking to see what the buzz is about, wanting to print the perfect gift for someone, or doing any kind of work that will only occasionally use a 3D printer, then it’s better to start off with any of the large number of 3D printing services available.
There is no point in investing a few thousand dollars in machine and filament only to have it sit around collecting dust. Meanwhile, using a 3D print service will allow you to test out the quality of various machines and filaments with a much smaller upfront cost.
In summary, if you plan to use a 3D printer frequently, as either part of your work or hobbies, and have a clear idea of what you want to do with your printer then it’s worth the investment.
If you only plan to use a 3D printer occasionally and/or are still figuring out what you want to do with the technology then we highly recommend using one of the many 3D printing services available.
3D Printing Services
If you want to get a model printed but don’t have a 3D printer there are several places that will handle the printing for you. The time and cost to have your model printed will vary from place to place.
Time is usually determined by how complex and how large the print is, as well as what other projects are in the queue at the print location.
Cost is also based on the size of the print, but just as important is the material the model is being printed in. Gold, stainless steel, and other metals will be more expensive than plastics.
In determining price there is the base cost for printing, based on the material, and then an additional cost determined by the volume of the model. If you need more details on pricing, all of the services listed below allow you to upload your model and get a quick price quote.
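As an illustration of that pricing structure, here is a small sketch that estimates a quote from a base handling fee plus a per-cubic-centimeter material rate. The rates are made-up placeholders, since every service publishes its own.

```python
# Hypothetical per-material pricing: (base handling fee in USD, USD per cm^3 of volume).
MATERIAL_RATES = {
    "PLA plastic":          (5.00, 0.30),
    "full-color sandstone": (8.00, 0.75),
    "stainless steel":      (10.00, 8.00),
    "gold plated brass":    (20.00, 17.50),
}

def quote(volume_cm3: float, material: str) -> float:
    base, per_cm3 = MATERIAL_RATES[material]
    return base + per_cm3 * volume_cm3

model_volume = 12.5   # cm^3, e.g. reported by the service after the STL upload
for material in MATERIAL_RATES:
    print(f"{material:>22}: ${quote(model_volume, material):7.2f}")
```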
A simple Google search will give you several results for places that offer 3D printing services, with more being added all the time. Below is a list of some of the more popular 3D printing options. Some of these are designed for industrial use, while others are geared toward the hobbyist.
Shapeways is a large 3D printing service. In addition to printing, it has both a 3D model database and a service that connects you to 3D modelers you can hire to design a custom model to be printed.
Shapeways uses industrial strength 3D printers to create high-quality prints, shipped to your door. Each print goes through a checking, post-production, finishing, and quality control phase before it is packed and shipped.
There is a wide variety of materials available, including brass, bronze, gold, full-color sandstone, and of course various kinds of plastics.
They have also spent the past year and a half testing porcelain as a material and are currently working with experienced designers in a pilot program. If those tests succeed, 3D printed porcelain will be available to everyone.
Not sure what material you want? Shapeways can send you material kits so you can see and feel samples of what each material is like. There are three different kits that can be purchased, and each ships in 3 business days.
Solid Concepts focuses on printing for industrial prototypes and components. They have a wide variety of 3D printers including PolyJet, SLA, full 3D color machines, FDM, and cast urethane.
The number of materials available varies for each printer type. Their website details the best applications for models created with each type of printer. In addition, Solid Concepts offers finishing services to smooth out the final product and remove the visible layers.
3D Hubs connects people with 3D printers to people who want something 3D printed. Upload the file you want to be printed and you can get a quote from several different places that can print the file for you, based on your location.
Each print location has reviews, just like Amazon, that rate the printer so you know what to expect. PLA and ABS filament are the most common print material; however, depending on what locations are nearby, you might be able to get a different material.
Each print location also has a Hub Profile which details what 3D printer(s) they use, the cost, the print resolution, material, and colors available for print, and the delivery time.
Once an order is placed, both 3D Hubs and the print location are notified, and a 3D Hubs representative is assigned to the order to ensure that everything goes smoothly.
i.materialise is another 3D printing service geared toward the hobbyist. There are 17 different materials you can choose from, including ceramics, alumide (a combination of aluminum and polyamide powder), various kinds of resin, ABS plastics, and a rubber-like material.
3D Model Repositories
So you know where you want to print your model – now you just need the model. There are several places online that will allow you to download 3D models that can be printed.
From there you can either print the files on your own 3D printer, send the file off to be printed by someone else, or in some cases order the file to be printed from the same site that is offering it.
The major differences between the various repositories are the models they have available. If you can’t find something you like from one location, look at another. Every site gives you the .stl file for the model, .stl being the universal “this is an object that is made to be 3D printed” file type.
A .stl file can be read by all 3D printers, so once you have that there is no limitation on what kind of printer you can use.
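Since the whole pipeline revolves around .stl files, it helps to know how simple the binary variant of the format is: an 80-byte header, a 4-byte triangle count, then 50 bytes per triangle (a normal, three vertices, and a 2-byte attribute). The sketch below reads those fields with Python's standard struct module; the file name is hypothetical.

```python
import struct

def read_binary_stl(path):
    """Return (triangle_count, list of triangles) from a binary STL file."""
    triangles = []
    with open(path, "rb") as f:
        f.read(80)                                   # header, usually ignored
        (count,) = struct.unpack("<I", f.read(4))    # little-endian triangle count
        for _ in range(count):
            record = struct.unpack("<12fH", f.read(50))
            normal = record[0:3]
            verts = [record[3:6], record[6:9], record[9:12]]
            triangles.append((normal, verts))
    return count, triangles

count, tris = read_binary_stl("wrench.stl")          # hypothetical downloaded model
print(f"{count} triangles; first vertex of first triangle: {tris[0][1][0]}")
```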
We’ve already mentioned i.materialise and Shapeways as places that offer both 3D models and 3D printing services. Another newer site along the same lines is Pinshape.
One of the most popular sites for finding 3D models is Thingiverse. Operated by MakerBot, Thingiverse is one of the largest repositories of 3D models designed for printing. There is a wide range of models from the very simple to the very complex. You can download models designed for gaming, household décor, fashion, and other uses.
If any of the models need to be printed in certain ways then instructions are provided. All of the models can be downloaded for free; however, you either need a 3D printer or must send the files to a printing service to get the final printed model. Other sites that also offer 3D models for download are (but not limited to):
My Mini Factory: This site has a combination of free and paid models available for download. Unlike other repositories, they have a props and costumes section. This gives you access to 3D replicas of swords, guns, and various items from popular movies and games.
Cults: This French site also contains a mix of free and paid models. While it has a smaller selection than others, many of the models are very unique (such as a necklace that looks like a window plant) and are worth checking out.
Autodesk 123D: Another large repository like Thingiverse and YouMagine. All of the models on this site were made using various Autodesk 3D modeling software, such as 123D Design and Meshmixer.
If the 3D model you select is designed correctly, downloading the .stl is all you need to do. However, you may need to clean up the model using software such as Meshmixer and Autodesk 123D. More information on how to use that kind of software will be covered in the next blog.
For now, all you need to know is that depending on how the model was designed and how large the print space of the 3D printer is, you may have to make some modifications.
For example, you may need to cut the model in half so that it fits in the print space and then glue the pieces together once it has been printed. You may also need to add a support structure to hold up pieces of the model that hang out from the rest.
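Whether a model needs supports can be estimated from its face normals: any downward-facing triangle that leans past roughly 45 degrees from vertical usually has to be propped up. A minimal sketch, assuming the mesh is available as NumPy vertex and face arrays:

```python
import numpy as np

def overhang_faces(vertices, faces, max_angle_deg=45.0):
    """Return indices of faces likely to need support structures."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # A face overhangs if its normal points downward more steeply than the threshold.
    threshold = -np.cos(np.radians(max_angle_deg))
    return np.where(normals[:, 2] < threshold)[0]

# Hypothetical tiny mesh: the same triangle with two winding orders.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2],    # counter-clockwise: normal +z, no support needed
                  [0, 2, 1]])   # clockwise: normal -z, flagged as an overhang
print("faces needing support:", overhang_faces(verts, faces))
```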
3D Printing Considerations
When choosing which 3D print service to use there are several things you need to take into account; the time needed to print, the cost of printing, and the quality of the print.
First, the time needed to print. If you need your print done in a hurry that may limit what materials you can use, as some materials can be printed more quickly than others. In general, the larger the model the longer time you will need to account for. Secondly is cost. The higher the quality of the material being printed, the more expensive it will be.
A plastic model will be less expensive than one made in bronze or gold. Lastly, there is quality. Some materials are higher quality than others. Along with quality is choosing the right material.
Some materials are better suited for household décor than for constant wear and tear. You may need to use a higher quality (and possibly more expensive) material if you need your print to be able to withstand some damage.
A 3D Printing Example
To put some of the information we’ve gone over so far in perspective, let’s go over a practical example of using a 3D printing service. For this, we used a 3D model of a ratchet wrench NASA recently sent to their astronauts aboard the International Space Station.
The wrench was designed by Noah Paul-Gin, approved by NASA, and then sent up to the specifically designed 3D printer which operates in a zero-gravity environment on the space station. It’s the first object to be 3D printed in space on request from one of the astronauts. The .stl file for the wrench is available as a free download on NASA’s website.
After downloading the .stl file from the NASA website we uploaded the file to two different 3D printing services, Shapeways and i.materialise. Both sites asked if the model was created using inches or millimeters for measurements. This is done so that the website can accurately determine the dimensions of your model.
We chose millimeters and were next given the option to select the material for the wrench and to make any size adjustments we might want. Along with the list of materials was the price of printing the model in each type of material.
Shapeways also had another step in the process. Their site has a “3D Tool” menu that opens when you upload the model and can be viewed again later if you want to make another print.
This is a good lesson in working with 3D printing – the first print will often not be perfect. You will need to make modifications after seeing the first result in order to refine the print and get your print to come out exactly as you want. This is a problem you will encounter if you use a printing service or own your own printer.
Sometimes the internal temperature of the printer was off. Other times the model had walls that were too thin and the support structure melded together with the model.
One small model of Cinderella’s Castle in Disney World had too fine a detail for its size, and so the area around the towers looked like a mess. Whether the fault lies with the machine or the design of the model, mistakes will happen. 3D printing is an iterative process. Be prepared to have a few messes when you start out. | 1 | 3 |
The long awaited Integrated Resource Plan published on Friday, 18 October, affirms the increasingly important contribution of renewables to South Africa’s energy mix.
By 2030, the government intends to procure almost 25% of the country’s electricity from power plants driven by our abundant sun and wind resources. However, a question regularly asked is how much do renewables cost relative to coal-fired power, and what has the impact been of renewables on Eskom’s deepening financial crisis?
A misleading narrative peddled by uninformed commentators is that the costs relating to Eskom’s current fleet of coal-fired plants built in the 1960s and 70s, and which are fast approaching the end of their intended operational life spans, should determine the cost benchmark for renewables. The correct answer requires a far wider perspective on the matter.
In fact, it is more appropriate to compare the cost of constructing new power plants for each technology, in today’s terms. Then the fuel supply and the ongoing repairs and maintenance should be added, when calculating the all-in cost of producing a unit of electricity from renewables and from coal respectively. Lastly, the party who bears these costs is critical to the comparison.
It is instructive that over the past 10 years both forms of energy have been developed in South Africa. Medupi and Kusile are Eskom’s new flagship coal-fired plants, whilst some 92 Independent Power Producers (IPPs) have signed contractual agreements to build renewable energy projects and supply electricity to Eskom under the Renewable Energy Independent Power Producer Programme (REIPPP).
The following key differences between the coal and renewable developments are clear:
- Construction cost versus budget
Medupi and Kusile are still in construction, and although some units are supplying power to the grid, full commercial operation is at least four years overdue. Consequently, their cost to date of just over R400 billion, excluding the interest incurred on the debt used to build the plants, has ballooned to more than double the original estimates.
In contrast, most renewable energy projects have been built on time and within budget, and critically, the delivery risk has been carried entirely by the IPPs and not by Eskom. On some days the IPPs in operation generate almost as much energy as Medupi and Kusile combined, owing to the latter’s frequent operational failures. This is despite the IPPs’ construction cost being less than 50% of that of the new coal-fired plants, and with no hint of corruption, theft or malfeasance.
- Responsibility for maintenance and repairs
Most of the risk of design and construction of Medupi and Kusile has been assumed by Eskom. Already, there have been equipment breakdowns prior to full commissioning of the plants, which has resulted in added strain to Eskom’s finances.
In the case of renewables, the IPPs are responsible for maintenance and repairs of the projects over their contracted 20-year operational life span. The financial risk to Eskom of covering these costs, plus increases in other overhead expenses, is completely outsourced to the IPPs.
- Cost of energy produced
As Medupi and Kusile are owned by Eskom, the cost of each unit of electricity produced should include the fuel required to operate these plants. Together with the construction cost over-runs, massive increases in the price of coal supply contracts have made Medupi and Kusile the most expensive coal-fired plants in the world. This is before the cost of their harmful carbon emissions is taken into account.
Conversely, the IPPs are paid only for the environmentally clean electricity they produce, at an inflationary linked tariff that was set at the inception of their contracts.
The most striking feature of the REIPPP has been the significant decline in the tariffs bid in each successive round of the programme.
Many people forget that almost 10 years ago, renewable energy technology was not well known in South Africa. The costs of the technology (i.e. the solar PV panels and the wind turbines) were much higher than current levels, and the investment case for introducing billions of Rands into new projects was unproven. The tariffs bid by early investors reflected this elevated risk and the commensurate rates of return were obtained in an open market, that was transparently and competitively run by the government’s IPP office. This paved the way for future projects to be bid, and the establishment of one of the most successful renewable investment programmes in the world.
The widely acclaimed success and growth of investor confidence in the REIPPP, together with the downward trend of international prices for renewable technology, has resulted in a competitor driven decline in the tariffs bid in each consecutive bid window. The weighted average tariff across all technologies awarded in the latest bid window of the REIPPP was R0.70 per kWh, a reduction of almost 66% on the weighted average tariff of R2.02 per kWh that was awarded in the first bid window in 2012.
[Chart: weighted average REIPPP tariffs by bid window. Source: Antone Eberhard and Rain Naude]
Despite the higher weighted average unit cost of electricity paid by Eskom to the initial IPPs, it is still cheaper than the electricity produced by Medupi and Kusile, in view of the massive increase in the construction costs, the delays in completing the plants, the environmental cost and the costs of the massive debt incurred by Eskom in implementing these two projects.
Furthermore, a fact often overlooked is the full pass-through to Eskom’s customers of the cost of electricity supplied by the IPPs. The National Energy Regulator of South Africa has approved annual electricity price increases so that Eskom achieves full recovery of tariffs paid to the IPPs, as well as the costs of Eskom’s own generated electricity. In the context of IPPs, Eskom plays the role of a cash collector, thus ensuring the cost neutrality of its operations.
However, perhaps the biggest difference between renewables and coal-fired energy is the positive impact the former has had on investor confidence in South Africa since 2012. More than R200 billion of fixed investment under the REIPPP has been made by the public and private sector, which has in turn contributed directly to GDP growth. It is revealing that the slump in GDP since 2016 has coincided with government’s stalling of the REIPPP and has undermined future investments in the country’s renewable energy sector.
Renewables are the cheapest form of new energy generation available today. They are also quicker to construct and, given the shortage of electricity experienced recently, are an obvious answer to ensure new generation capacity is brought online in the shortest possible time frame. Relative to the direct costs incurred by Eskom for Medupi and Kusile, plus the environmental costs of this “dirty” technology, and the indirect cost to the economy from load-shedding as a result of the dire state of Eskom’s coal fleet, renewable energy has to be the pre-eminent solution to South Africa’s new energy requirements in future.
Download the PDF version of Setting the record straight: the cost of renewable versus coal-fired energy. | 1 | 3 |
From quantum cryptography to the quantum Internet: fundamental research in the quantum world promises several new technical possibilities, and that future is approaching fast. Chunli Bai, President of the Chinese Academy of Sciences, his colleague Anton Zeilinger, President of the Austrian Academy of Sciences, and Heinz W. Engl, Rector of the University of Vienna, talked face to face in a secure video conference. Quantum technology encrypted this call. The first cryptographically protected quantum video call took place on September 29, 2017, between Vienna and Beijing, spanning two continents. The experiment was conducted in the presence of representatives of the media and scientists from the Austrian Academy of Sciences, together with their colleagues from China. Quantum cryptography made this conversation at least a million times safer than conventional encryption methods.
The first test for orbital quantum encryption
International cooperation with researchers from the Chinese Academy of Sciences made the world’s first intercontinental quantum-encrypted communication possible. Anton Zeilinger and his former doctoral student Jian-Wei Pan initiated the research project called QUESS (Quantum Experiments at Space Scale) in 2013. To implement the project, a Chinese satellite was used, which was launched into space last year for experiments in quantum physics. The scientists demonstrated the extraordinary practical potential of this orbital quantum technology for the future development of global communication. The decisive advantage of quantum technology, in comparison with traditional communication technologies, is that it cannot be broken by hackers, thanks to the laws of quantum physics.
«The exchange of encrypted quantum information at intercontinental distances confirms the potential of quantum communication technologies discovered by fundamental research,» says Anton Zeilinger. He convinced: «This is a very important step towards a worldwide and safe quantum Internet.»
Quantum encryption between Vienna and Beijing
To create the quantum key used for the video call between the two scientific institutions, researchers from the Austrian and Chinese academies of sciences first worked with the quantum satellite Micius. Named after the ancient Chinese philosopher, Micius circles the Earth at an altitude of about 500 km. From orbit, it sends light particles, so-called photons, to ground stations in China and Europe, including a long-range satellite laser station used by the Space Research Institute in Graz.
On the eve of the video call, the satellite first generated light particles with a random direction of oscillation, the so-called polarization. Then single photons with their various polarizations were transmitted as a sequence of ones and zeros to a ground station, where the polarization states were measured and compared at randomly chosen positions with the sequence sent by the satellite.
Inaccessible quantum key
Johannes Rudneiner explained that if someone tries to intercept photons exchanged between a satellite and a ground station and measure their polarization, the photon quantum state will instantaneously change, thereby exposing the hackers. Any deviation of the data measured at the transmitter and receiver allows immediate detection of any attempt at listening. If the measured data matches, then the sender and receiver have a single quantum key.
The keys generated at the two ground stations were then combined into a single shared code for encrypting and decrypting information. This key was used to securely encrypt the world’s first quantum-protected video call between Vienna and Beijing. Practically, it is impossible to crack such a quantum video call.
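A toy simulation makes the idea easier to follow. The sketch below mimics the sift-and-compare step for polarization measurements and then uses the resulting shared bits as a one-time pad. It is a drastic simplification of the actual satellite protocol, and all the numbers and the sample message are purely illustrative.

```python
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

# Satellite sends photons: a random bit encoded in a randomly chosen polarization basis.
n = 4096
sent_bits, sent_bases = random_bits(n), random_bits(n)

# Ground station measures each photon in its own randomly chosen basis.
recv_bases = random_bits(n)
recv_bits = [b if sb == rb else secrets.randbelow(2)      # wrong basis -> random outcome
             for b, sb, rb in zip(sent_bits, sent_bases, recv_bases)]

# Sifting: keep only positions where both sides happened to use the same basis.
# (In the real protocol a random subset is also compared publicly to detect eavesdropping.)
key = [rb for rb, sb, rbas in zip(recv_bits, sent_bases, recv_bases) if sb == rbas]

def xor_crypt(data: bytes, key_bits) -> bytes:
    """One-time-pad style XOR; the same call decrypts because XOR is its own inverse."""
    key_bytes = bytes(int("".join(map(str, key_bits[i:i + 8])), 2)
                      for i in range(0, 8 * len(data), 8))
    return bytes(d ^ k for d, k in zip(data, key_bytes))

message = b"Hello from Vienna"
cipher = xor_crypt(message, key)
print(cipher)
print(xor_crypt(cipher, key))   # decrypting with the same shared key restores the message
```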
It seems that soon such quantum video calls will become as familiar as conversations and video calls in Skype.
Last week, IMF chief Christine Lagarde spoke at a Bank of England conference. In her speech, she predicted that cryptocurrencies could seriously compete with the current financial system led by central banks.
How will fintech change the system led by central banks during the next generation? Let’s start with virtual currencies. I will clarify that this is not about digital payments in existing currencies. For example, through PayPal and other providers of electronic money, like Alipay in China or M-Pesa in Kenya.
Virtual currencies are in a different category because they provide their unit of payment and payment systems. These systems allow you to conduct p2p transactions without central banks.
Virtual currencies such as Bitcoin currently pose little or no threat to the existing system of currencies and central banks. Why? Because they are too volatile, too risky, too energy-intensive, and the underlying technologies are not yet scalable. Many of them are too opaque for regulators, and some have even been hacked.
But many of these technological challenges can be resolved over time. Not so long ago, experts argued that personal computers would never be popular among the general population, and that tablets would become nothing more than expensive coffee trays. Therefore, I think it is not very wise to dismiss virtual currencies.
More value for the same money?
For example, think about countries with weak institutions and an unstable national currency. Instead of introducing currencies of another country like the US dollar, some of these economies can see the increasing use of cryptocurrencies. Call it dollarization 2.0.
The IMF experience shows that there is a turning point after which coordination around the new currency occurs exponentially. In the Seychelles, for example, the use of the dollar jumped from 20% in 2006 to 60% in 2008.
But still, why should citizens use virtual currency instead of physical dollars, euros or sterling? Because one day it can become simpler and cheaper than getting paper notes, especially in remote regions. And because virtual currencies can become more stable.
For example, they can be issued one-to-one to a dollar or a stable currency basket. The issue can be completely transparent, controlled by a reliable, predetermined rule, an algorithm that can be monitored. Or even a «smart rule» that can reflect changing macroeconomic circumstances.
Therefore, in many cases, virtual currencies can compete with existing currencies and monetary policy. The best answer for central banks is to continue to pursue an effective monetary policy, but be open to fresh ideas, new demand arising from the development of the economy.
The best payment services?
Look at the growing demand for new payment services in countries where the decentralized sharing economy is growing. This is an economy built on frequent peer-to-peer payments in small amounts, often across borders.
Four dollars for advice on garden care from a lady from New Zealand, three euros for an expert translation of a Japanese poem, 80 pence for a virtual render of Fleet Street. These payments can be through credit cards and other forms of electronic money. But the commission is large relative to their size, especially if the payment is cross-border.
Instead, citizens can sometimes prefer virtual currencies, because they potentially provide the same costs and amenities as cash. There are no settlement risks, no clearing delays, no central registration, no intermediary for checking accounts and identification. If privately issued virtual currencies remain risky and unstable, citizens can even ask central banks to provide digital forms of legal tender.
New models of financial intermediation
This leads us to new models of financial intermediation. One of the possibilities is the division of banking services. In the future, we can keep the minimum balance for payment services on electronic wallets.
The remaining money can be stored in mutual funds or invested in p2p-credit platforms that use the benefits of large data and artificial intelligence for automatic credit scoring.
This is the world of six-month product development cycles and constant updates, primarily software, with a special focus on simple user interfaces and trusted security. A world where data reigns. The world of a lot of new players, not burdened by a lot of branches.
Some will argue that this puts into question the banking model of partial reservation, with which we are working today if the volume of deposits decreases and money flows through new channels.
Future of monetary policy
Today’s central banks usually affect asset prices through primary dealers or large banks, to which they provide liquidity at fixed prices, in so-called open market operations. But if such banks become less relevant in the new financial world, and demand for central bank balances falls, will the transmission of monetary policy remain as effective?
In any case, central banks will need to increase the number of counterparties of their operations. This, of course, has regulatory consequences. More counterparties — more firms are falling under the regulatory umbrella of the Central Bank. This is the price to pay for liquidity on a rainy day. Whether there will be more rain or less in the future is an open question.
We will also see a shift in regulatory practices. Traditionally, regulators have focused on the supervision of clearly defined entities. But with the arrival of new service providers in new forms, putting them in a simple basket can be a daunting task. Think about social media companies that offer payment services without active balance management. What label should be on them?
Regulators, most likely, will have to expand their focus from financial institutions to financial activities further. Also, you will have to become experts in assessing the reliability and safety of algorithms. Easier said than done.
The new macOS High Sierra, which was released yesterday, is available on all computers that support the previous version. The complete list is as follows:
MacBook (end of 2009 and newer)
MacBook Air (late 2010 and later)
Mac mini (mid-2010 and newer)
iMac (end of 2009 and newer)
MacBook Pro (mid-2010 and newer)
Mac Pro (mid-2010 and newer)
As the name implies, the update is an improved version of the previous macOS.
Nevertheless, it contains many changes and new functions. Many of them are invisible at first glance. However, this does not detract from their usefulness.
The APFS file system
The new file system is just one of those «invisible» improvements. Apple’s development is aimed at increasing the efficiency of disks and is designed to increase their speed, security, and reliability.
At the moment, APFS is only available on solid-state SSDs. Support for traditional hard drives and hybrid Fusion Drive will appear in the following updates. When you install macOS High Sierra automatically converts the drive from HFS + to APFS while saving all the files. It works for both a clean install and for upgrading from macOS Sierra or previous versions.
It seems that Apple listened to criticism from users and made the standard photo-viewing application more functional. In macOS High Sierra, «Photos» received advanced editing tools and several useful innovations that make it more similar to its predecessors.
It became possible to change many parameters of the images, including correction of saturation and contrast of the image, as well as curves. The sidebar contains more information, allowing you to find and organize collections quickly. Now you can simultaneously change several photos at once and apply effects to Live Photos. Also, there were new themes for the albums «Memories».
HEIF and HEVC Codecs
New codecs for photos and video are now supported in iOS and macOS, allowing you to use device resources and disk space more efficiently. Using the HEIF compression format, images take up almost half the disk space. The HEVC codec is designed to compress 4K video more efficiently with minimal loss of quality.
At the same time, the company took care of backward compatibility, so when you transfer content to devices without HEIF and HEVC support, it will automatically be converted to JPG and H.264, respectively.
With the new version of macOS, support for the Metal 2 graphics API, previously available on mobile devices, arrives on Apple computers. In the future, this will bring performance gains in games and resource-intensive applications, as well as new opportunities for developers. For now, only iMac owners will really feel the benefits of Metal 2: on computers with a large display, the system interface now works much more smoothly.
Also, Metal 2 will provide improved support for virtual reality on the Mac. And it concerns not only the work with VR-headsets but also software with the possibility of creating such kind of content.
With each new update of macOS, Safari is getting better. In addition to the overall performance increase, the browser has received several useful functions.
Apple turned off the automatic playback of media content on the pages of the sites, protecting users from annoying advertising and various promos. There is an Intelligent Tracking Prevention feature, which disables tracking scripts that store what you are looking for and offer contextual advertising.
Also, Safari now remembers your display settings for different sites and automatically opens pages in Reader mode on sites that support it.
The standard email client in macOS High Sierra has received an even more intelligent search. It analyzes user actions and learns to produce the most relevant results. Mail remembers which contacts you write to more often and what documents you use in attachments to offer exactly what you need.
Owners of MacBooks with a small screen diagonal will appreciate another innovation: the application now automatically opens the reply form in Split View when working in full-screen mode.
Following iOS, «Notes» on macOS now also lets you pin important entries to the top of the list for quick access, and adds a table-insertion function that is very useful for creating small layouts right in a note without having to use Numbers.
Siri and Spotlight
With each new version, Siri becomes smarter, and her voice is becoming more human. In macOS High Sierra, the voice assistant became a real DJ. Now he knows how to pick up music for you and knows a lot about the performers.
And Spotlight has learned to search for information about planes on the flight number. Enter the number, and the search will show you the departure time, terminals, delays and even a flight map.
Apple continues to implement support for Live Photos in its ecosystem. Now on the Mac, you can not only view animated photos but also create them. The corresponding function appeared in FaceTime, which allows you to save memorable moments right during the conversation. Of course, your interlocutor will be notified of the recording and, if desired, can disable this feature.
Google tested the web browsers for security: Safari has the most vulnerabilities in the DOM engine.
The Google team specializing in zero-day vulnerabilities («Project Zero») analyzed the most popular browsers for vulnerabilities in their DOM engines. Safari showed the worst result, with 17 vulnerabilities. The safest was Google's own Chrome, with only two bugs.
The team used the Domato utility for testing, which Google researcher Ivan Fratric developed specifically for testing DOM engines. It is a fuzzing tool for security testing: it feeds the application under test randomly generated data and analyzes anomalies in the resulting behavior.
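In spirit, DOM fuzzing is a loop that generates random but syntactically plausible HTML, loads it in the target, and watches for crashes. The sketch below is a heavily simplified illustration of that idea, not how Domato itself works; the command line for the target browser is a placeholder.

```python
import random
import subprocess
import tempfile

TAGS = ["div", "span", "table", "svg", "video", "canvas", "iframe"]

def random_document(depth=4, breadth=5):
    """Generate a small random DOM tree as an HTML string."""
    def node(level):
        if level == 0:
            return "x" * random.randint(0, 64)
        tag = random.choice(TAGS)
        children = "".join(node(level - 1) for _ in range(random.randint(1, breadth)))
        return f"<{tag} id='n{random.randint(0, 999)}'>{children}</{tag}>"
    return f"<!DOCTYPE html><html><body>{node(depth)}</body></html>"

def run_one_case(target_cmd):
    with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
        f.write(random_document())
        path = f.name
    try:
        # A crash or abnormal exit code flags the test case for later triage.
        result = subprocess.run(target_cmd + [path], capture_output=True, timeout=10)
        return result.returncode, path
    except subprocess.TimeoutExpired:
        return None, path   # hangs are interesting too, but handled separately

for _ in range(100):
    code, case = run_one_case(["./headless_browser", "--single-process"])  # placeholder target
    if code not in (0, None):
        print("possible crash, keeping test case:", case)
```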
The team chose five browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge, and Apple's Safari. Using Domato, the team ran about 100 million tests. The results showed that the first three are the safest.
The results showed that Safari has the worst DOM engine with 17 security errors. On the second place from the end is the Edge with six problems.
Google reported the bugs found to the developers of each browser. Also, it provided them with copies of Domato so that everyone can independently perform more extensive tests of their products. The source code for Domato was also published on GitHub so that anyone can use it or adapt it to work with other applications, not just with DOM engines of web browsers.
Fratric emphasized that this experiment focuses on the security of only one component (the DOM engine), so its results cannot be taken as an indicator of browser security in general, although historically vulnerabilities in the DOM have been the source of many security problems. «This experiment does not take into account other aspects such as the existence and security of the sandbox, or errors in other components such as script engines. I also cannot ignore the possibility that my fuzzer is better at finding certain types of problems in the DOM, which can affect the overall statistics», wrote the developer.
DOM (Document Object Model) engines are browser components that read HTML and organize it into the Document Object Model, which is then rendered and displayed in the browser window as the image users see on their screens. According to Fratric, developers rarely release updates that do not contain fixes for critical problems in DOM engines, so the problem is quite significant, particularly given that after Flash technology (currently the leading source of exploited vulnerabilities) is finally retired in 2020, DOM engines will become one of the main targets of attacks.
Uber's problems have grown to the size of an iceberg. Will it stay the leader of the ride-sharing market?
Long vacation for Kalanick
On June 11, the board of directors held an urgent meeting to discuss the problems the company had created. On the agenda was the future of the company's top managers — in particular, Uber founder Travis Kalanick and his first deputy, vice-president Emil Michael.
The meeting was devoted to the report by former US Attorney General Eric Holder on corporate culture and leadership, and the board voted to adopt Holder's recommendations in full. The directors did not demand Kalanick's immediate dismissal, but the founder of Uber will apparently take a long vacation. People close to him say he plans to spend time with his family, which recently suffered a great grief: Kalanick's parents were in a boating accident.
Scandals, scandals, scandals
Uber is experiencing a severe crisis, with scandal after scandal breaking out around the company. Management hired the law firm Perkins Coie to examine complaints of harassment and discrimination among Uber employees; based on the findings, more than 20 employees were dismissed. Another scandal concerns access to the medical records of a woman in India who had accused an Uber driver of rape.
Unethical business practices and the company's ambiguous corporate culture are only part of Uber's problems. The company keeps growing and scaling; it received its latest investment in 2016, when Saudi Arabia put $3.5 billion into Uber.
Drivers are unhappy with the ever-changing rules of the game. On top of that, there is a lawsuit from Waymo accusing Uber of stealing its technology; the judge barred Uber from using the designs a former Waymo employee had brought with him.
At first glance, then, Uber has more than enough problems. But is anything actually threatening Uber's hegemony in the market? How real are these threats, and do they pose serious business risks?
Competitors are coming
According to the company's latest financial statements, Uber's growth continues to outpace its losses — though those losses are enormous. In 2016 the company lost $2.8 billion, more than any other start-up in history, which led some industry analysts to predict that Uber would collapse under its financial obligations. Yet last year the company doubled its turnover, bringing it to $20 billion, with net revenue of $6.8 billion (all figures exclude the Chinese market). So, despite everything, the company continues to grow steadily.
Much of the negative news about Uber stemmed from management's ties to the Trump administration. It is obviously too early to judge the company's financial state in 2017, but Uber's supporters predict it will keep growing despite its many problems. Most importantly, the product remains popular with ordinary consumers — and that has not changed even in the face of serious efforts by the startup's competitors, Lyft in particular.
Lyft's image as a safe and friendly alternative to Uber has helped it attract investment and reach a valuation of $7 billion. But it is still a pale copy of its main competitor: it operates only in the US, while Uber works in 600 cities around the world. Lyft has about 1.6 thousand employees — incomparably fewer than Uber's army of some 12 thousand people.
Still the best and unique
Harry Campbell, a former driver and blogger at The Rideshare Guy, agrees that as long as the company's main product — its ride-hailing app — outperforms competitors' apps, the company will continue to dominate the market. "All the bad rumors can damage morale within the company. But its customers can still travel safely and cheaply from point A to point B. Few customers are aware of the company's problems; most of them interact with drivers, and the global issues are of little interest to them."
The real risks for the company concern employee morale, recruiting and partner relations. Investors and partners are the ones with the most at stake in the company's long-term prospects: they believed in the company and put money into it, but the return on their investment now seems to be heading in the wrong direction.
Ordinary consumers read the news, but they usually make purchasing decisions based on their immediate needs and their preferences. Now, this startup remains the most affordable alternative to other services.
Despite the scandals, its leader's controversial personality and the risks analysts warn about, nothing seriously threatens Uber yet. As long as people need to get around, drivers are willing to earn extra money and — most importantly — the app stays on users' smartphones, the company will live.
A new technology platform can change the market. It has happened before: Microsoft Visual Basic in the 1990s, then the .NET Framework and C# on Windows in the 2000s.
The difference is that nowadays Microsoft no longer controls the operating system. To deal with this, there is cross-platform tooling that gives programmers the opportunity to break free from Windows and move to other operating systems for their development needs.
Nowadays programmers use C# and .NET instead of AWS or Java, because for the past decade C# has been one of the most popular programming languages. It is also the main language that enterprises pay programmers to use to build serious apps. Geolance C# and .NET Core developers have all the skills necessary to create everything you need for your website or application.
Anders Hejlsberg created C#. He is a well-known programming language creator who has co-designed several languages, like Delphi and Turbo Pascal.
Hejlsberg brought his knowledge of and passion for languages to Microsoft as the chief architect of C# and TypeScript. He has almost 30 years of experience designing programming languages, so you can be sure his creation is high quality and professionally made.
Working with it is a real pleasure
C# is well-implemented and well-designed, so it does language features properly. From asynchronous support to generics, from tuples to LINQ, C# incorporates modern thinking in programming languages without overfilling its syntax with odd bits and awkward hacks.
The best way to build a cross-platform programming stack is to build it in the open, and the C# compiler and .NET Core are open source. This allows real-time engagement and transparency with users, and it makes C# and .NET Core some of the best tools available for an agile, reliable response.
The best interactive development environments
Here at Geolance, we reckon that .NET Core and C# have the best IDEs. Visual Studio 2017 offers every feature you could want or need, and there are also tools such as Visual Studio Code — super-fast, lightweight and cross-platform — plus extensions like ReSharper.
Everything is possible
Both .NET and C# are very productive and flexible for creating any kind of app. From websites to mobile apps running on iOS and Android to cloud services, .NET and C# are agile enough for a wide range of projects. The base class libraries also offer a rich spectrum of functionality.
C# – means skills
You get improved support
The larger a language's community is, the better support you will get. C# and .NET have a large, long-established ecosystem: C#, .NET and their IDEs are well supported by third-party tools and class libraries, and the whole ecosystem helps you build better software.
Catch the future
C# has a bright future. Statistics show that C# continues to be one of the most popular languages among developers, and we can expect more advanced features to be added to it. It is likely to stay as significant as ever thanks to its key role in Unity, as virtual reality continues to trend in the world of programming.
C# and .NET remain some of the most fundamental tools a programmer needs to know today, even though they appeared almost 15 years ago.
They have been around long enough to prove themselves on the market. That's why our development company chooses C# and .NET Core for big projects.
The flashcards below were created by a user on FreezingBlue Flashcards.
What is the most important factor resulting in thrombosis?
Reduction in speed of blood flow.
Cardiac failure is a result of stenosis of which valve in the heart?
- Mitral valve stenosis: narrowing of the left AV valve.
- = more blood accumulates in the left atrium = decreased speed of blood leaving the heart = thrombus formation.
- Ball-valve thrombus: a thrombi in circulation (thromboembolite) gets stuck in the atria, blocking the AV valve.
True or false: polycythemia results in decreased blood viscosity.
- False: polycythemia results in increased blood viscosity.
- A/k/a: erythremia = overproduction of RBC's.
- 7-8 million in 1 square mm of blood (vs the normal 5 million).
Is blood flow slower in veins or arteries?
- Veins: lower blood pressure = slower rate.
- Veins are dependent upon muscular contractions to get the blood back to the heart.
What are the 3 risk factors for thrombosis?
- 1. Physical inactivity.
- 2. Varicose veins.
- 3. Blood hypercoagulation.
Sequelae of thrombosis:
- 1. Resolution: dissolving of a thrombus = most benign.
- 2. Organization: elimination of blood clot & tissue debris via phagocytosis = replaced by CT.
- 3. Recanalization: formation of canals through a thrombus (angiogenesis).
- 4. Propagation: enlargement of thrombus in the veins, occurring near the branching of veins, (most commonly in the legs).
- 5. Infarction: an area of necrosis due to hypoxia.
What type of necrosis occurs in the heart?
Strokes = liquefactive necrosis: replaced with neural glia = area of gliosis.
- Coagulative necrosis: preserves the size & shape of the necrotic tissue, allowing for healing to occur.
- The necrotic tissue is replaced with CT.
Which step in the sequelae of thrombosis is responsible for returning function back to normal?
- Resolution: dissolving of a thrombus activates the fibrinolytic anticoagulation system (plasminogen -> plasmin) to take care of the thrombus.
What is the most common cause of infarction?
- Myocardial infarction - coagulative necrosis.
- Ischemic stroke (brain infarct) = liquefactive necrosis.
- Stroke + myocardial infarct = most common cause of death in the U.S.
Name 2 examples of vasculitis.
- 1. Temporal arteritis ("giant cell arteritis," "horton's disease").
- 2. Polymyalgia rheumatica: dramatic pain in the upper & lower extremities, commonly associated with Horton's Disease.
Horton's Disease generally involves which 6 arteries?
- 1. Superficial temporal artery.
- 2. Cerebral artery.
- 3. Opthalmic artery.
- 4. Vertebral artery.
- 5. Arch of aorta.
- 6. Thoracic aorta.
- The most common type of vasculitis.
- Chronic granulomatous inflammation of the vascular wall.
- =headaches & blindness in the elderly.
Where do fat emboli usually wind up?
- In the lungs, causing minor respiratory problems.
- Fat embolus: fat gets into the venous system via fractured bones releasing yellow marrow.
Define air lock.
Air lock: following an air embolus, compressible air does not leave the heart upon contraction, also reducing the flow of blood leaving the heart.
*Rest patient on their RIGHT side*
Which type of hernia protrudes through the esophageal hiatus of the diaphragm?
- Paraesophageal hernia.
- = diaphragm may pinch veins to the stomach thus preventing blood flow = gangrenous necrosis.
What is another name for Sheehan syndrome?
Post-partum syndrome: after delivery leads to anterior pituitary infarct.
True or false: the liver is most vulnerable to hypoxia.
False: the brain & myocardium are most vulnerable... the liver is not vulnerable.
Why are the lungs well protected from infarction?
- 1. Clot retraction: reduction in size of the thrombus.
- 2. Double blood supply to the lungs: supplied from the pulmonary system & the independent bronchial arteries.
- 3. Fibrinolytic activity.
Which vitamin maintains membrane conduction of neuron cells & axons?
- Thiamin (Vitamin B1).
- Think (th1am1n).
What are the 3 diseases associated with B1 deficiency?
- 1. Dry beriberi, peripheral polyneuropathy: symmetrical loss of peripheral NS myelination = wrist drop, foot drop, & first toe drop.
- 2. Wet beriberi, cardiovascular syndrome: peripheral vasodilation (loss of sympathetic vasoconstriction).
- 3. Wernicke-Korsakoff syndrome: ophthalmoplegia, apathy, listlessness, disorientation... Korsakoff's psychosis, retrograde amnesia, confabulation.
What is the term used to describe retrograde amnesia, the inability to accept new information, & non-stop talking (confabulation)?
Beriberi is associated with a deficiency of which vitamin?
Dry Beriberi is associated with which pathological condition?
Wernicke-Koraskoff Syndrome is associated with deficiency of which vitamin?
Which vitamin is also known as riboflavin?
Ariboflavinosis is characterized by what 4 symptoms?
- Ariboflavinosis: lack of B2.
- 1. Cheilosis (chelitis): cracks in corner of mouth which get infected.
- 2. Glossitis: inflammation of tongue.
- 3. Superficial interstitial keratitis: scar tissue in the corneas.
- 4. Dermatitis: rash on cheeks, behind the ears, around the naso-labial folds, scrotum, & vulva.
True or false: vitamin B2 is associated with diseases of the nervous system.
False: B2 & B6 are not associated with diseases of the nervous system.
Name 3 a/k/a's for Vitamin B3.
- 1. Niacin.
- 2. Nicotinic acid.
- 3. Nicotinamide.
B3 is synthesized in small amounts by the human body. True or false?
Name 3 functions of Niacin.
- 1. Vasodialation.
- 2. Prevention of LDL production in the liver (sometimes used to atherosclerosis).
- 3. Antioxidant.
Which disease is also known as the "4 D's?"
*May show symptoms as VB2 deficiency.
- Pellagra: lack of B3 (niacin) = rough/dry skin.
- 1. Dermatitis: rough red skin... Casal's necklace.
- 2. Diarrhea: resulting in atrophy of columnar epithelial cells, submucosal inflammation, ulcerations.
- 3. Dementia: affects grey matter cerebral cells resulting in weakness, dizziness, headache, & depression.
- 4. Death.
Which vitamin is associated with Pellagra?
What is the major function of Pyridoxine?
- Pyridoxine (B6): metabolism of epithelial cells.
- Made of pyridoxine, pyridoxal, & pyridoxamine.
- If you heat the food, B6 undergoes destruction.
Which patients are at risk for developing a B6 deficiency?
- 1. Alcoholics.
- 2. Pregnant women.
- 3. Medicated patients... TB, birth control pills, Wilson's Disease, Systemic Sclerosis.
Which drug interferes with the functions of both B3 & B6?
Izoniazid: medication used to treat TB.
Name the 8 pathologies associated with B6 deficiencies.
- 1. Cheilosis: corners of mouth turn white (also seen with B2).
- 2. Glossitis: (also seen with B2&3).
- 3. Brain growth impairment.
- 4. Seborrheic dermatitis: wet scales in hair.
- 5. Peripheral polyneuropathy: (also seen w/ B1&12).
- 6. Convulsive seizures in infants.
- 7. Promotion of oxalate kidney stones: decreased epithelial cell metabolism results in a nidus (network of organic material).
- 8. Hypochromic anemia: less hemoglobin in RBC's.
Cheilosis & glossitis result from deficiency of which 2 vitamins?
- 1. B2 (riboflavin).
- 2. B6 (pyridoxine).
Name 2 a/k/a's for Vitamin B12.
- Vitamin B12: Cyan, Cobalamin.
- B12 is bound to proteins.
Is B12 absorbed into the body via an intrinsic or extrinsic factor?
- Intrinsic factor (of Cassel): B12 is picked up in the duodenum via the intrinsic factor.
- Intrinsic factor is produced by parietal cells in the stomach wall, flows into duodenum, SI, then the ileum… the ileum has receptors for the intrinsic factor, allowing release into the blood flow.
What are 6 predisposing factors for a B12 deficiency?
- 1. Malabsorption.
- 2. Vegetarians.
- 3. Chronic gastritis.
- 4. Following gastrectomy.
- 5. Regional enteriris.
- 6. Tropical sprue.
Name 3 ways a B12 deficiency can occur.
- 1. Blocking antibodies: prohibits intrinsic factor from binding to B12 or blocks the intrinsic factor from binding to receptors in the ileum... Type II hypersensitivity reaction... Autoimmune.
- 2. Different antibodies: blocks binding of intrinsic factor & B12... Type II hypersensitivity reaction.
- 3. Chronic autoimmune gastritis: inflammation of the stomach wall, selectively killing PARIETAL cells of the stomach... no intrinsic factor can produce this.
Name 2 a/k/a's for Pernicious Anemia.
- Pernicious anemia: one of the two megaloblastic anemias, interfering with hematopoiesis.
- 1. Malignant Anemia.
- 2. B12 Deficient Anemia.
How does B12 play into pathology of the CNS?
- Lack of B12 = demyelination of the posterior & lateral tracts of the spinal cord.
- B12 maintains the membranes of the nervous system.
- Deficiency = demyelination of the posterior & lateral tracts.
- Results in ataxia in lower extremities initially.
Cobalamin deficiency leads to pathology of which 2 systems of the body?
- 1. Nervous system.
- 2. Hematopoietic system: macrocytosis, rigid cell membranes, hypersegmented neutrophils.
B12 promotes the reduction of folic acid (folate, B9) into tetrahydrofolate.
THF promotes hematopoiesis.
*Note that B9 (folic acid) plays a role in the development of the neural tube in the embryo.
- monofolate → reductase → tetrahydrofolate (THF) → DNA.
- **Vitamin B12: can restore THF so it's active again.
- (Methotrexate: inhibits reductase for treatment of cancer, Hodgkin's disease, etc).
- (folate → dihydrofolate → tetrahydrofolate ↔ methylene-THF → methyl-THF).
Deficiency of which 2 vitamins can lead to megaloblastic anemia?
- 1. B12 deficiency.
- 2. B9 (folic acid) deficiency.
Megaloblastic anemia: production of RBC's with very rigid membranes... RBC's are much larger & they are unable to bend to fit through capillaries.
- WBC's are also involved: hypersegmented neutrophils (too many lobes); the cells are immature & undergo death earlier.
- Pancytopenia; also glossitis, cheilosis (cheilitis).
Name 4 functions of Vitamin C (ascorbic acid).
- 1. Hydroxylation of procollagen (proline into hydroxyproline).
- 2. Stimulation of protein formation.
- 3. Provides strength for collagen fibers (via hydroxyproline).
- 4. Antioxidant (free radical scavenger along with selenium & vitamin E).
Name 3 results from ascorbic acid deficiency.
- 1. Scurvy.
- 2. Skeletal changes.
- 3. Decreased wound healing: delayed wound healing because it takes a longer time to build the necessary connective tissue.
Impairment of wound healing is associated with the deficiency of which vitamin?
Which pathology is characterized by bone abnormalities in children, leg bowing (genu varus), hemorrhages, improper healing, loss of teeth/gingivitis, death of pupils, & an inward depression of the sternum of babies, subperiostal hematoma, & hemarthrosis?
Development of subperiosteal hematomas are a manifestation of which vitamin deficiency?
Vitamin C (ascorbic acid): scurvy.
Vitamin C deficiency results in impaired function of which substance?
- Retinol (Vit A) = rhodopsin.
- Thiamine (B1) = myelin.
- Vit K = prothrombin.
Which vitamin actively participates in collagen synthesis (through the synthesis of its precursor, procollagen)?
Ascorbic acid (vitamin C).
Skeletal changes in scurvy are associated with disturbance of which process?
Formation of osteoid matrix.
Vitamin A consists of retinol, retinal, & reinoic acid, thus making it fat insoluble. True or false?
False: vitamin A is fat soluble.
90% of Vitamin A is stored in the liver. This gives humans enough storage for how long?
- 6 months.
- Vitamin A is stored in the liver is Retinol.
Which food is Provitamin A abundantly found in?
Carrots via beta-carotene (a carotenoid).
The synthetic substance 'Retenoid' has a close chemical formula to Vitamin A & is used to treat which pathology?
- Psoriasis: the proretenoid activity serves as an anti-inflammatory medication.
- However, Retenoids are actually very dangerous medications.
What are the 3 main functions of Vitamin A?
- 1. Maintaining normal vision in reduced light.
- 2. Differentiation of specialized epithelial cells: via retinoic acid.
- 3. Enhances immunity to infections.
Which component of Vitamin A is responsible for maintaining normal vision in reduced light?
- Retinal: used as a material for rhodopsin synthesis in the rods of the eye.
- Rhodopsin: most light sensitive light pigment.
Which component of Vitamin A is responsible for potentiating the differentiation of specialized epithelial cells in the human body?
Retenoic acid: potentiates the differentiation of specialized epithelial cells (mainly mucus-secreting cells).
Name 3 manifestations of Vitamin A deficiencies.
- 1. Poor night vision.
- 2. Squamous cell metaplasia: the replacement of normal epithelial cells with kereatinizing epithelia which cannot preform normally.
- 3. Measles, pneumonia, or infectious diarrhea: 30% more likely to kill a patient with a Vitamin A deficiency.
What are the 4 organs typically affected by squamous cell metaplasia?
- 1. Eye: resulting in xerophthalmia (dry eyes), Bitot's spots, corneal erosion, keratomalacia, & blindness.
- 2. Urinary Tract: normal cells are sloughed off being replaced by keratinizing cells, thus forming a nidus (stone).
- 3. Respiratory tract: resulting in an increased susceptibility for secondary pulmonary infections.
- 4. Adnexal glands: sebaceous glands resulting in follicular or papular dermatosis.
Why does squamous cell metaplasia increase vulnerability to secondary pulmonary infections?
Because normal epithelial cells have an immune function of brushing away foreigners with their villi, but keratinized epithelia have lost this function.
Which is worse, hypervitaminoses A or hypovitaminoses A?
Hypervitaminoses A: Vitamin A is not supposed to be in the body in high amounts, especially if it is synthetic.
Name 7 manifestations of hypervitaminoses A.
- 1. Increased ICP = headache, nausea/vomiting, papilledema.
- 2. Weight loss.
- 3. Bone pain & muscle pain. 4. Skin rash/dermatosis.
- 5. Nausea/vomiting.
- 6. Hyperostosis: too much bone growth (DISH: calcification/ALL).
- 7. Hepatomegaly (enlarged liver) = liver fibrosis.
Overdose of which vitamin results in hyperostosis?
- Vitamin A.
- Hyperostosis = DISH.
Which vitamin is responsible for lowering mortality rate due to infectious diseases in children?
Bitot's spots are characteristic with deficiency of which vitamin?
The deficiency of which 2 vitamins results in kidney & bladder stones?
What are the 2 a/k/a's for Vitamin E?
- 1. Tocopherol.
- 2. Alfa-tocopherol.
What 2 things does Vitamin E specialize in protecting as a free radical scavenger?
- 1. Nervous system: deficiency results in demyelenation of peripheral sensory nerves & the posterior columns.
- 2. RBC's: deficiency results in anemia in children & babies.
- May also result in impaired eye movements.
Which 2 vitamins are considered antioxidants & free radical scavengers?
- 1. Vitamin E.
- 2. Vitamin C.
- Sometimes B3 also.
Vitamin E can be regenerated by which vitamin?
Which vitamin is characterized by damage to the posterior column of the spinal cord, atrophy of DRG, demyelenation of peripheral sensory fibers, damage to the spinocerebellar tract leading to ataxia, abscence of DTR's, & loss of pain sensation?
- Vitamin E.
- DRG atrophy.
- Peripheral sensory fiber demyelination.
- Absence of DTR's.
- Loss of pain sensation.
Vitamin E prevents the formation of HDL's along with B3, C, A, & D. True or false?
False: prevents the formation of LDL's.
Vitamin K plays a role in Koagulation by promoting the formation of which clotting factors in the liver?
II (prothrombin), VII, IX (Christmas), & X.
Which 2 vitamins are produced by the gut flora?
Body = K, B2/3.
- 1. Vitamin K.
- 2. B2.
- B3 is also synthesized in small amounts by the human body, via tryptophan.
How does Vitamin K play a role in calcification of bone proteins?
- Vitamin K promotes the formation of osteocalcin.
- Vitamins involved with bone formation: C, K.
Deficiency of Vitamin K may result from which 3 things?
- 1. Fat malabsorption: vitamin K is fat soluble.
- 2. Diffused liver disease.
- 3. Long term intake of antibiotics.
Vitamin K deficiencies are result in which 6 manifestations?
- 1. Hemorrhagic disease of newborns: in first week of life since the gut flora is still producing vitamin K.
- 2. Bleeding diathesis.
- 3. Hemorrhages.
- 4. Brusies (ecchymoses)/ hematomas.
- 5. Melena: black poo.
- 6. Gum bleeding.
Hemorrhagic disease of newborns is associated with deficiency of which vitamin?
Which vitamin accounts for the production of osteocalcin?
Vitamin K: osteocalcin promotes calcification of bones.
Deficiency of which vitamin results in a reduction of clotting factors synthesized in the liver?
*Look @ diagram of vitamins!*
- NS: B1, B12, E.
- Body: B2, B3, K.
- Epithelia: B6, A.
- Anemia: B9, B12.
- Free Rad's: B3, C, E.
- C&G: B2, B6.
- Kidneys: B6, A.
- LDL: A, B3, C, D.
- Bone: C, K.
What does PEM stand for?
- PEM: protein-energy malnutrition.
- Range of clinical syndromes characterized by inadequate dietary intake of proteins & calories to meet the body’s needs.
If a person is down to 80% of their normal body weight they have which condition?
- Marasmus: a person is down to 60% or less of their body weight.
- It is caused by lack of energy intake & protein containing foods.
- The body begins to steal protein from the somatic protein compartment (skeletal muscles).
- Results in emaciation (too thin) & growth retardation.
- The head will appear to be too large for the body.
- Serum albumin (protein carriers in blood) levels are normal, or slightly lower.
Kwashiorkor patients have higher or lower protein deprivation in relation to calories?
There is enough energy intake, but not enough protein intake.
- Higher protein deficiency.
- More common in Africa & SE Asia due to diets high in carbs & low in protein.
Which protein compartment is affected in Kwashiorkor patients?
- The visceral protein compartment.
- Results in a decreased amount of albumins in the blood, thus reducing oncotic pressure resulting in generalized edema.
Which comes first in Kwashiorkor patients, hypoalbumenia or fatty liver?
- Steps of Kwashiorkor:
- 1. Hypoalbuminemia.
- 2. Generalized edema.
- 3. Skin lesions: zones of hyperpigmentation, desqumation, hypopigmentation... flaky paint appearance.
- 4. Hair changes: loss of color, straightening, fine texture.
- 5. Fatty liver (steatosis): without proteins the liver cannot produce LDL's so FFA's build up causing fibrosis.
- 6. Apathy, listlessness, anorexia.
- 7. Defects in immunity, secondary infections.
The prognosis is worse than with marasmus.
Which disorder is caused predominately by insufficient dietary intake of calories?
Which disease is characterized by amenorrhea, thyroid pathology, arrhythmias, hypokalemia, osteoporosis, hypoalbuminemia, constipation, & sudden death?
- Anorexia nervosa.
- Note there are significant decreases in secretion of GRH, LH, & FSH.
- Hypothyroidism = cold, listlessness, weakness, dry hair, etc.
Which disease is characterized by amenorrhea 50% of the time, normal weight & GRH levels, esophageal cancer, espohageal rupture, hypokalemia, & pulmonary aspirations?
Which type of hypersensitivity reaction is characterized by the release of histamine & other vasoactive substances that are derived from mast cells?
- Type I Hypersensitivity Reaction: affects vascular permeability & smooth muscles in various organs.
- Associated with mast cells & basophils.
- Mast cells: basophils fixed in the tissue, subepithelial areas.
Which hypersensitivity reaction is associated with anaphylactic shock?
- Type 1 hypersensitivity reaction: anaphylactic (allergic) reaction… anaphylactic shock.
- Release of vasoactive amines & other mediators derived from the mast cells or basophils.
- Affects vascular permeability & smooth muscles in various organs.
Will the first exposure to an allergen result in a physiological reaction?
No: the first reaction results in the degranulation of the cytosome within mast cells, thus allowing for allergic reactions in the future.
- Type I Hypersensitivity Reaction Mechanism
- Allergen goes into body --> activates CD4 & T Helper cells (TH2) --> secretion of IL-4 & 5 cytokines --> IgE antibody production & recruitment of eosinophils (& mast cells).
What are the 4 functions of histamine?
- 1. Vasodialation.
- 2. Increased vessel permeability = swelling.
- 3. Bronchospasm.
- 4. Increased mucous production.
Systemic anaphylaxis is characterized by a long onset. True or false?
False: systemic anaphylaxis has a rapid onset of 1-2 minutes.
A patient is getting a cavity filled. Unbeknownst to the doctor, they are allergic to novocaine. Name the 6 clinical manifestations which would occur rapidly. Is this reaction systemic or local?
- 1. Itching.
- 2. Hives (urticaria).
- 3. Bronchospasm.
- 4. Laryngeal edema.
- 5. Nausea/vomiting/diarrhea.
- 6. Vascular shock.
A patient comes in to your office in the spring, complaining of seasonal allergies. Which type of reaction is this? And what are the possible 5 symptoms?
- 1. Itching of the skin.
- 2. Hives (urticaria).
- 3. Hay fever.
- 4. Atopic bronchial asthma.
- 5. Rhinitis (runny nose).
Which of the following disorders is an example of Type I Hypersensivity Reactions?
A. Hemotransfusion reactions.
B. Laryngeal edema.
D. Graft rejection.
- Which of the following disorders is an example of Type I Hypersensivity Reactions?
- A. Hemotransfusion reactions: type II, complement dependent.
- B. Laryngeal edema.
- C. TB: type IV, delayed type hypersensitivity.
- D. Graft rejection: type IV, cell-mediated cytotoxicity.
Type I Hypersensitivity Reactions are associated with which class of immunoglobulins?
Type II Hypersensitivity Reactions are mediated by which 2 types of immunoglobulins?
- 1. IgG.
- 2. IgM.
- They are both directed at antigens on the body's own cells.
How many types of reactions are associated with Type II Hypersensitivity Reactions?
- 1. Complement dependent: hemotransfusion reactions, erythroblastosis fetalis, auto-immune diseases, & certain drug reactions.
- 2. Antibody-dependent cell-mediated cytotoxicity: parasites, tumors.
- 3. Antibody-mediated cellular dysfunction: Myasthenia Gravis, Hashimoto's, Grave's Disease.
Which 4 cells are associated with antibody-dependent cell-mediated cytotoxicity?
- 1. Monocytes.
- 2. Eosinophils.
- 3. Neutrophils.
- 4. Natural killers.
Name 3 pathologies associated with Type II Antibody-Dependent Hypersensitivy Reactions.
- 1. Myasthenia Gravis.
- 2. Hashimoto's.
- 3. Grave's Disease.
- Antibody dependent = antibody-mediated cellular dysfunction.
What is the most common subtype for Type II Hypersensitivity Reactions?
- Complement dependent: antibody attaches to the antigen & fragments into an Ab & Fc fragment.
- Opsonization stops @ C3b, & is more favorable for phagocytosis.
Name 2 pathologies associated with Type II Hypersensitivity Reactions.
- 1. Erythroblastosis fetalis.
- 2. Pernicious anemia.
What is the name of the injection given to a Mom after delivery of her first Rh+ child to prevent erythroblastosis fetalis?
- Rhogam: needs to be given after abortions as well.
- 1st pregnancy: Mom Rh-, Baby Rh+... blood mixes @ birth... Mom develops Rh+ antibodies within the first 72 hours, & her body kills the cells.
- 2nd pregnancy: Mom Rh-, Baby Rh+... blood mixes @ birth, antibodies get into baby = baby need blood transfusion.
True or false: erythroblastosis fetalis could result from a scenario when mother is Rh-, & fetus is Rh+.
Type II & Type III Hypersensitivity Reactions are very similar. How can you tell the difference?
- Type II: fixed in tissue.
- Type III: circulating in blood.
What is the a/k/a for Type III Hypersensitivity Reactions?
Immune Complex Mediated Type.
Name 2 examples of Immune Complex Mediated Type Reactions:
(Type III Hypersensitivity Reaction)
- 1. Serum sickness.
- 2. Arthus reaction: local Type III... Farmer's Lung.
Vasculitis is usually associated with which type of hypersensitivity reactions?
Type III. (Immune complex mediated type).
Sidenote on trace elements: yay!
- Iron: hypochromic, microcytic anemia.
- Iodine: hypothyroidism, goiter.
- Selenium: Keshan disease, myopathy, congestive cardiomyopathy, (also a free radical scavenger along with vitamins C & E).
- Zinc: distinctive rash, acrodermatitis enteropathica, anorexia/diarrhea, growth retardation, hypogonadism/infertility, impaired wound healing, impaired night vision, impaired immune function, depressed mental function.
- Copper: muscle weakness, hypopigmentation, neurological defects.
What is an a/k/a for Type IV Hypersensitivity Reaction?
- Cell Mediated = antibody independent mechanism.
- Involves CD4 & CD8 lymphocytes.
What are the 2 subdivision of Cell Mediated Hypersensitivity Reactions (Type IV)?
- 1. Delayed Type Hypersensitivity: TB, contact dermatitis.
- 2. Cell-Mediated Cytotoxicity: anti-viral, graft rejection, tumors.
Which pathology is characteristic of Delayed Type (Type IV) Hypersensitivity Reactions?
- TB (mycobacterium tuberculosis).
- CD4 (T4) T-helper cells are activated by antigens.
- T4 cells release cytokines which recruit macrophages.
- The macrophages result in a granulomatous reaction.
Describe the mechanism associated with Delayed Type (IV) Hypersensitivity Reactions.
- CD4+ T Cells of the TH1 type
- Recruitment of macrophages
What are the 3 main functions of cell-mediated cytotoxicity reactions?
- 1. Anti-tumor activity.
- 2. Anti-viral activity.
- 3. Graft rejection.
- *also immune suppression.
Delayed-type hypersensitivity reactions are induced by sensitization of which of the following cells?
- Delayed-type hypersensitivity reactions are induced by sensitization of which of the following cells?
- A. Neutrophils.
- B. B-cells.
- C. T-helpers: type IV, delayed type hypersensitivity.
- D. T-cytotoxic: type IV, cell-mediated cytotoxicity.
Anaphylactic: Type I.
- 1.1) Systemic: bee sting... rapid developing.
- 1.2) Local: asthma attack.
Cytotoxic: "Antibody Dependent": Type II.
Fixed in tissue.
Triggered when anti-bodies attach to the surface of individual cells.
- 2.1) Complement Dependent: most common of type II.
- i) Blood transfusion.
- ii) Erythroblastosis fetalis.
- iii) Auto-immune hemolytic reactions.
- iv) Certain drug reactions.
- 2.2) ADCM (Anti-body Dependent Cell Mediated): involves NK & MAC.
- i) Parasites.
- ii) Tumors.
- iii) Cells.
- 2.3) AMCD (Antibody-Mediated Cellular Dysfunction):
- i) MG.
- ii) Grave's Disease.
- iii) Hashimoto's.
Immune Complex Mediated: Type III.
In blood circulation.
Glomerulonephritis, RA, Lupus, Autoimmune Diseases.
- 3.1) Local reaction:
- i) Arthus reaction: Farmer's lung.
- ii) Vasculitis.
- 3.2) Systemic reaction:
- i) Serum sickness.
Delayed Hypersensitivity "Cell Mediated": Type IV.
- 4.1) Delayed type: CD4/T4, macrophages, granulomatous reaction.
- i) TB.
- ii) Contact dermatitis.
- 4.2) Cell-mediated cytotoxicity: CD8/T8, Gamma IFN.
- i) Anti-tumor.
- ii) Anti-viral.
- iii) Graft rejection.
- iv) Immune suppression.
Which cell growth process occurs in response to increased demands?
- Hypertrophy: process of cell & organ enlargement that occurs in response to increased demands.
- Ex) Left ventricular hypertrophy resulting from systemic hypertension.
Which term means shrinking of the cell (or organ) due to a lack of neuronal or endocrinological stimulation?
- May also result from disease.
True or false; the term "trophy" refers to the number of cells present.
- False: trophy refers to the cell size.
- Ex) Hypertrophy: the amount of cells are the same but they are bigger than they were before which makes the whole organ larger.
Which term refers to mitosis?
- Plasia: referring to mitosis (new cells forming).
- The formation of new cells makes the organ bigger.
Does hyperplasia involve the enlargement of cell size?
- No... Hyperplasia: the process of mitosis producing new cells, but only in quantities needed to meet a particular demand.
- Hyperplasia is still considered normal or ok.
Name 2 examples of hyperplasia.
- 1. Increased amounts of glandular tissue in the female breast during pregnancy.
- 2. Restoration of liver cells following liver resection.
Which form of cell growth is characterized by a change in the type of cell?
Metaplasia: a reversible condition in which there is replacement of normal cells by cells that aren't supposed to be there.
Name 2 examples of metaplasia.
- 1. Chronic gastritis: replacement of stomach columnar epithelial cells by intestinal type cells.
- 2. Heavy smokers: replacement of columnar bronchial epithelial cells by stratified squamous cells.
- *Squamous epithelial cells: most important predisposing factor for lung cancer.
Which form of cell growth is a precursor for neoplasm?
Which type of cell growth is characterized by a loss in uniformity of the individual cells, as well as a loss of their architectural orientation?
- Dysplasia: very close to neoplasia.
Which type of cell growth do you see both normal & abnormal cells?
What is the term used to describe the variability of cell size & shape, in contrast to regularity of the cell structure seen in normal tissue?
- Larger more darkly stained nuclei.
- Increased mitosis rates.
Which 2 forms of cell growth display signs of pleomorphism?
- 1. Neoplasia.
- 2. Dysplasia.
Name 3 characteristics of neoplastic dysplasia.
- 1. Larger more darkly stained nuclei.
- 2. Increased mitosis rates = higher risk for cancer.
- 3. Irreversible alteration of cell growth patterns.
What is the term used to describe the uncontrolled mitosis of cells beyond normal anatomical boundaries due to the irreversible alteration of cell growth pattern?
What is an a/k/a for neoplasm?
- Neoplasia: new tissue formation, involving the overgrowth of a tissue to form a neoplastic mass, or neoplasm, which is called a tumor.
- Cells are abnormal.
- Polymorphic cells.
What does aplasia mean?
- Aplasia: the complete lack of organ development.
- All cells are abnormal.
Which type of cell growth results in structures that are immature & functionally deficient?
?????? : development that is inadequate, so that the resulting structure is immature & functionally deficient.
Hypoplasia: inadequate development = immature & functionally deficient.
????? : the process of cell & organ enlargement that occurs in response to increased demands.
Hypertrophy: cellular & organ enlargement due to increased demands.
????? : a loss in uniformity of the individual cells as well as a loss in their architectural orientation.
Dysplasia: loss in uniformity & architectural orientation.
????? : lack of organ development.
????? : production of new cells, but only in quantities needed to meet a particular demand.
Hyperplasia: new cells to meet a demand.
????? : change of the cell type.
????? : lack or reverse of cell differentiation.
Anaplasia: the less mature (lack of cell differentiation) the tissue organ, the more malignant the cells are.
Which of the following is NOT a characteristic of pleomorphism?
A. Increased mitosis rates.
B. Larger, more darkly stained nuclei.
C. Uniformity of cell size & shape.
D. Variability of cell size & shape.
- Which of the following is NOT a characteristic of pleomorphism?
- A. Increased mitosis rates.
- B. Larger, more darkly stained nuclei.
- C. Uniformity of cell size & shape.
- D. Variability of cell size & shape.
Pleomorphism is typical for which of the following?
A. Benign tumor.
B. Malignant tumor.
C. This is characteristic of normal cell growth.
D. None of the above.
- Pleomorphism is typical for which of the following?A. Benign tumor.
- B. Malignant tumor.
- C. This is characteristic of normal cell growth.
- D. None of the above.
Name the 2 characteristics which separate benign & malignant tumors.
- 1. Pattern of growth.
- 2. Tissue of origin.
Are benign tumors more or less pleomorphic?
- Benign tumors are less pleomorphic, meaning that their architecture looks the same as the surrounding cells.
- Slow growth.
- Orderly growth.
Which type of tumor is named according to the tissue that the tumor originated from, followed by the suffix "oma" to the end?
- Benign tumors.
- Ex) Osteoma: bone tumor.
- Ex) Adenoma: tumor from glandular tissues.
Malignant tumors are named according to what?
Their embryonic origin.
Carcinomas originate from where?
Carcinoma: from ectodermal or endodermal tissue.
Where do sarcomas originate?
- Sarcomas: from mesoderm.
- Ex) Fibrosarcoma: from fibrous connective tissue.
- Ex) Chondrosarcoma: from cartilage.
Name the 7 tissues which arise from mesoderm & are thus associated with sarcomas.
- 1. Connective tissues.
- 2. Muscles.
- 3. Skeletal system.
- 4. Circulatory system.
- 5. Lymphatic system.
- 6. Urogenital system.
- 7. Linings of the body cavities.
Which is malignant, leiomyoma or leiomyosarcoma?
Leiomyosarcoma: malignant smooth muscle tumor.
Exceptions to the tumor naming rules:
- 1. Melanoma: appears to be benign by name, but is the most malignant tumor occurring in the body.
- 2. Lymphoma: should be called lymphosarcoma.
- 3. Hepatoma: occurs in people have Hep B or C.
Which type of connective tissue is associated with neoplasm?
What is stroma comprised of?
- Connective tissue (which is made of fibers).
- If a tumor has a lot of stoma it will be soft & fleshy.
What is the term used to describe a tissue that is growing rapidly through mitosis, but the the cells are not differentiating?
- Anaplasia: the lack of cell differentiation (or even a reverse of cell differentiation).
- The more anaplastic a tissue is, the more malignant it will become.
- Does not mean a lack of cell growth through mitosis, it means a lack of cell differentiation.
Does development from more basic structures make the tumor more or less malignant?
- (If cancer starts from a very premature blood cell it will be very malignant).
What type of cancer is made up of more than 90% stroma?
- Scirrhous cancer (scirr).
- Ex) Breast adenocarcinoma: scirrous cancer of the breast in which the nipple invaginates due to the connective tissue pulling on it.
- Ex) Scirrhous cancer of the stomach (Leather Bottle Stomach).
In scirr (scirrhous cancer) the majority of tumor mass consists of which type of tissues?
Supportive (connective) tissue.
What do tumors undergo once they are larger than 1mm?
Malignant tumors secrete tumor angiogenesis factor (TAF) in order to start angiogenesis. True or false?
- The vessels will have larger gaps in between them, & some may be lacking a basement membrane = increased permeability.
Hemoptysis may be associated with which form of cancer?
- Lung cancer.
- Hemoptysis: blood in sputum.
- Resulting from the inability of the necrosis to keep up with the tumor, leading to necrosis of the tumor within the lungs.
Which type of cancer has the presence of a CT capsule around it?
- Benign tumors.
- Ex) Leiomyoma: smooth muscle of the stomach.
Name 3 reasons why malignant tumors are so invasive.
- 1. Reduced adhesiveness: cannot be kept within the mass of the tumor.
- 2. The malignant cells are attracted to normal cells as a source of nutrients.
- 3. Autocrine motility factors: chemotactic communication between malignant cells signaling to sources of food = path of destruction to get to the nutrients.
What 3 areas of the body are resistant to invasion?
- 1. Pleura.
- 2. Pericardium.
- 3. Fibrous layer.
True or false: generally speaking, the first symptoms of cancer are due to secondary tumors following metastasis.
Which area of the circulatory system do cancer cells attack first?
What is the normal path of metastasis?
- Capillaries --> venous system (veins/lymph vessels) --> multiplication & breaking --> embolus --> gets stuck in vessel --> lytic enzymes break through & cancer is spread to tissue.
- Metastasis rarely travels through the arterial system.
Where do venous system metastases generally wind up?
- In the lungs.
- Ex) Breast cancer.
- Ex) Osteosarcoma.
- Ex) Melanoma.
What is the most common primary bone malignant tumor in young people, characterized by a cannon ball tumor?
Osteosarcoma: starts in the tibia or fibula & metastasizes to the lungs via the venous system.
Tumors starting in the alimentary tract wind up in the heart. True or false?
False: they wind up in the liver via the portal venous system.
Which form of cancer spreads in the arterial system?
Lung cancer: winds up in the brain, spleen, or kidneys.
Metastases spreading through the veins are usually found in which organ?
Malignant tumors of the intestines & stomach usually metastasize into which organ?
What is another name for lymph node enlargement?
- Lymphadenopathy: results from a tumor spreading via the lymphatic system.
- Ex) from stomach cancer to supraclavicular nodes.
- Ex) Virchow's nodes.
What are the 3 possible manifestations of a lymphadenopathy?
- 1. Normal lymph tissue may be completely replaced by neoplastic tissue.
- 2. Lymph flow is blocked, collateral vessels take over leading to further metastasis.
- 3. Once large enough, it may metastasize some cells into blood circulation do to the lymphatic & circulatory systems' close connections.
Where do Virchow's Nodes generally metastasize from?
What does large cell lung carcinoma lead to?
Regional metastasis into the peritracheal lymph nodes = no cure.
What is the name for the secondary ovarian tumor which is composed of stomach cancer cells?
- Krukenberg tumor: secondary ovarian tumor that is composed of stomach cancer cells that spread from the stomach to the ovary via the abdominal cavity.
- Note it is very rare for cancer to metastasize through a cavity.
Where do prostatic carcinomas spread to?
Prostatic carcinoma: spreads to bones & spine as a result of lung cancer.
What does cachexia mean?
- Cachexia: generalized weakness, fever, anorexia, wasting, pallor, & fever in the late stages of malignant cancer.
- Infection = fever & weakness.
- Hemorrhage & anemia = weakness & pallor.
- Pain & depression = anorexia.
- Bowel tumors = wasting.
- Cachectin & TNF = wasting & pain.
- (Cachectin & TFN: chemicals produced by tumors).
What type of tumors is ectopic secretion associated with?
- Most common location: bronchial mucosa.
- Generally malignant.
Ectopic secretions associated with lung cancer result in overproduction of what? And with breast cancer?
- Lung cancer = overproduction of ADH.
- Breast cancer = overproduction of PTH = osteoporosis.
Name 2 examples of adenomas.
- 1. Somatotroph cell pituitary adenoma: causes acromegaly in adults (gigantism in children) due to excessive growth hormone.
- 2. Pheochromocytoma: benign tumor of the adrenal medulla that causes excessive production of NorEpi = secondary hypertension.
Name 2 manifestations of hypertrophic pulmonary osteoarthropathy (malignant tumors from chest organs).
- 1. Hyperostosis: abnormal growth of bone beyond its anatomical boundaries.
- 2. Clubbing of fingers (distal phalanxes enlarges).
True or false: tumors most often form from labile tissues.
- Labile tissues are continually going through mitosis to replace old cells.
- Blood cells, spermatozoa, ova, epithelia, & endothelial cells.
Name 3 types of cancer which can be genetically transmitted from parents to children (hereditarily).
- 1. Retinoblastoma: malignant tumor of the retina in new borns = death within first few years of life.
- 2. Polyposis coli: polyps form on the colon & become cancerous.
- 3. Xeroderma pigmentosum: skin cancer.
- *Bronchogenic carcinoma & breast cancer have a degree of genetic predisposition.
What are the 3 environmental factors which contribute to cancer?
- 1. Physical factors: ionizing radiation & ultraviolet radiation.
- 2. Viruses: HPV, HEP B&C.
- 3. Chemicals.
Which cancer causing chemicals form from the combustion of organic materials?
- Polycyclic hyrocarbons: formed from the combustion of organic materials.
- 1. Benzopryene: cigarette smoke = lung cancer.
- 2. Polycyclic hydrocarbons = scrotal cancer in chimney sweepers.
Aromatic amines are used in food coloring & fabric dyes which promote cancer in which 2 organs?
Nitrosamines are found in food preservatives for bacon, ham, sausage, & canned meat. Which 2 conditions cause these substances to interfere with cellular DNA?
- 1. Acidity of the stomach.
- 2. Cooking food (high temperatures).
- *Vitamin C prevents nitrosamine formation.
Hepatocellular carcinomas grow on improperly stored vegetables, & are associated with which chemicals?
Aflatoxins: produced by fungi of the genus Aspergillus.
Nickel, cadmium, lead, cobalt, & asbestos are inorganic carcinogens which are associated with pleural cancers. True or false?
- Asbestos = mesothelioma.
Which of the following substances in proven to promote development of liver cancer?
- Which of the following substances in proven to promote development of liver cancer?
- A. Benzpyrene - lung cancer.
- B. Asbestos - pleural cancer.
- C. Bacon - interferes with DNA = predisposing factor for cancer.
- D. Aflatoxin.
Smoking is a source of which of the following carcinogenic chemicals?
A. Aromatic amines.
- Smoking is a source of which of the following carcinogenic chemicals?
- A. Aromatic amines - from food coloring & fabric dyes = liver & bladder cancer.
- B. Aflatoxin - from fungi on veggies = hepatocellular carcinoma.
- C. Nitrisamines - from meat preservatives = predisposing factor for cancer.
- D. Benzpyrene.
What type of disease is characterized by the formation of autoantibodies against the body's own connective tissue?
Name 4 examples of autoimmune diseases.
- 1. Systemic lupus erythematosus (SLE).
- 2. Scleroderma (systemic sclerosis): fibrosis throughout the body due to overproduction of collagen.
- 3. Dermatomyositis (polymyositis).
- 4. Sjogren's syndrome.
What form of antibodies are associated with SLE?
Antinuclear antibodies: attack the nuclei.
Which 2 pathognomonic antibodies are associated with SLE?
- Pathognomonic: specific to one disease.
- 1. ANA's against double stranded DNA.
- 2. ANA's against Smith's antigens.
SLE is found mostly in young girls. Which 4 organs are generally attacked?
- 1. Kidneys: lupus nephritis.
- 2. Lungs: lupus pneumonitis.
- 3. Cerebral vessels = stroke.
- 4. Skin = butterfly rash.
Which autoimmune disease is characterized by the overproduction of collagen by fibroblasts?
Systemic sclerosis (scleroderma) = fibrous tissue slowly replacing the parenchyme (functional tissue) thus reducing function.
Name the 5 tissues which are most affected by scleroderma.
- 1. Skin.
- 2. GI-tract.
- 3. Vessels.
- 4. Kidneys.
- 5. Lungs.
True or false: diffused scleroderma is more benign & is associated with the mnemonic "C-R-E-S-T."
False: localized scleroderma is much more benign & is associated with "C-R-E-S-T."
- Raynaud's phenomenon: vasoplastic reactions of fingers & toes = white, blue, red.
- Esophogeal dysmotility: loss of peristaltic activity.
- Sclerodactyly: capillaries in fingers become fibrous & dysfunction = dry gangrene.
- Telangiectasia: vascular lesions of the skin due to dilation of the capillaries... fine irregular lines.
True or false: Diffused scleroderma is less benign and promptly leads to the involvement of the internal organs.
- (less benign = more malignant).
True or false: the most prominent clinical sign of Scleroderma is a butterfly rash.
- False: the most prominent clinical sign associated with scleroderma is a mask-like appearance.
- Change in skin results in mummification.
What 2 autoimmune diseases are characterized by damage to the capillary nets of skeletal muscles?
- 1. Dermatomyositis.
- 2. Polymyositis.
What do patients with Dermatomyositis & Polymyositis wind up dying from?
- Atrophy of breathing musculature.
- These diseases cause skeletal muscles to undergo atrophy.
- 60% undergo paraneoplastic reactions: a tumor causes them to spontaneously occur.
- Treatments = very large doses of corticosteroids.
Where do autoantibodies generally attack with Sjogren's syndrome?
Tubular epithelium of external secretion glands.
What are the 2 a/k/a's for Sjogren's syndrome?
Destruction of glands = drying out of salivary glands, tear glands, mucous glands of GI-tract, mucous glands of the tracheobronchial tree, & mucous glands of the vagina.
- 1. Dry syndrome.
- 2. Sicca syndrome.
What are the 3 symptoms associated with Sjogren's syndrome?
- 1. Xerostomia: dry mouth = loss of teeth.
- 2. Xerophthalmia: dry eyes = corneal ulceration = blindness.
- 3. Joint pain: this disease is associated with rheumatic diseases.
Imagine having a high-definition TV that is 80 inches wide and less than a quarter-inch thick, consumes less power than most TVs on the market today and can be rolled up when you're not using it. What if you could have a "heads up" display in your car? How about a display monitor built into your clothing? These devices may be possible in the near future with the help of a technology called organic light-emitting diodes (OLEDs).
OLEDs are solid-state devices composed of thin films of organic molecules that create light with the application of electricity. OLEDs can provide brighter, crisper displays on electronic devices and use less power than conventional light-emitting diodes (LEDs) or liquid crystal displays (LCDs) used today.
In this article, you will learn how OLED technology works, what types of OLEDs are possible, how OLEDs compare to other lighting technologies and what problems OLEDs need to overcome.
Like an LED, an OLED is a solid-state semiconductor device that is 100 to 500 nanometers thick or about 200 times smaller than a human hair. OLEDs can have either two layers or three layers of organic material; in the latter design, the third layer helps transport electrons from the cathode to the emissive layer. In this article, we'll be focusing on the two-layer design.
An OLED consists of the following parts:
- Substrate (clear plastic, glass, foil) - The substrate supports the OLED.
- Anode (transparent) - The anode removes electrons (adds electron "holes") when a current flows through the device.
- Organic layers - These layers are made of organic molecules or polymers.
- Conducting layer - This layer is made of organic plastic molecules that transport "holes" from the anode. One conducting polymer used in OLEDs is polyaniline.
- Emissive layer - This layer is made of organic plastic molecules (different ones from the conducting layer) that transport electrons from the cathode; this is where light is made. One polymer used in the emissive layer is polyfluorene.
- Cathode (may or may not be transparent depending on the type of OLED) - The cathode injects electrons when a current flows through the device.
The biggest part of manufacturing OLEDs is applying the organic layers to the substrate. This can be done in three ways:
- Vacuum deposition or vacuum thermal evaporation (VTE) - In a vacuum chamber, the organic molecules are gently heated (evaporated) and allowed to condense as thin films onto cooled substrates. This process is expensive and inefficient.
- Organic vapor phase deposition (OVPD) - In a low-pressure, hot-walled reactor chamber, a carrier gas transports evaporated organic molecules onto cooled substrates, where they condense into thin films. Using a carrier gas increases the efficiency and reduces the cost of making OLEDs.
- Inkjet printing - With inkjet technology, OLEDs are sprayed onto substrates just like inks are sprayed onto paper during printing. Inkjet technology greatly reduces the cost of OLED manufacturing and allows OLEDs to be printed onto very large films for large displays like 80-inch TV screens or electronic billboards.
How Do OLEDs Emit Light?
OLEDs emit light in a similar manner to LEDs, through a process called electrophosphorescence.
The process is as follows:
- The battery or power supply of the device containing the OLED applies a voltage across the OLED.
- An electrical current flows from the cathode to the anode through the organic layers (an electrical current is a flow of electrons). The cathode gives electrons to the emissive layer of organic molecules. The anode removes electrons from the conductive layer of organic molecules. (This is equivalent to giving electron holes to the conductive layer.)
- At the boundary between the emissive and the conductive layers, electrons find electron holes. When an electron finds an electron hole, the electron fills the hole (it falls into an energy level of the atom that's missing an electron). When this happens, the electron gives up energy in the form of a photon of light (see How Light Works).
- The OLED emits light.
- The color of the light depends on the type of organic molecule in the emissive layer. Manufacturers place several types of organic films on the same OLED to make color displays.
- The intensity or brightness of the light depends on the amount of electrical current applied: the more current, the brighter the light.
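As a rough sketch of the physics behind the color dependence noted above: the photon released when an electron fills a hole carries an energy roughly equal to the energy gap of the emissive molecule, and that energy fixes the wavelength. The relation and the numeric values below are illustrative, not measurements of any specific OLED material:

```latex
E_{\text{photon}} = \frac{hc}{\lambda}
\quad\Longrightarrow\quad
\lambda \approx \frac{1240\ \text{eV}\cdot\text{nm}}{E_{\text{photon}}}
```

So an emissive molecule with an energy gap near 2.7 eV emits around 460 nm (blue light), while one with a gap near 2.0 eV emits around 620 nm (red light).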
Types of OLEDs: Passive and Active Matrix
There are several types of OLEDs:
- Passive-matrix OLED
- Active-matrix OLED
- Transparent OLED
- Top-emitting OLED
- Foldable OLED
- White OLED
Each type has different uses. In the following sections, we'll discuss each type of OLED. Let's start with passive-matrix and active-matrix OLEDs.
Passive-matrix OLED (PMOLED)
PMOLEDs have strips of cathode, organic layers and strips of anode. The anode strips are arranged perpendicular to the cathode strips. The intersections of the cathode and anode make up the pixels where light is emitted. External circuitry applies current to selected strips of anode and cathode, determining which pixels get turned on and which pixels remain off. Again, the brightness of each pixel is proportional to the amount of applied current.
PMOLEDs are easy to make, but they consume more power than other types of OLED, mainly due to the power needed for the external circuitry. PMOLEDs are most efficient for text and icons and are best suited for small screens (2- to 3-inch diagonal) such as those you find in cell phones, PDAs and MP3 players. Even with the external circuitry, passive-matrix OLEDs consume less battery power than the LCDs that currently power these devices.
Active-matrix OLED (AMOLED)
AMOLEDs have full layers of cathode, organic molecules and anode, but the anode layer overlays a thin film transistor (TFT) array that forms a matrix. The TFT array itself is the circuitry that determines which pixels get turned on to form an image.
AMOLEDs consume less power than PMOLEDs because the TFT array requires less power than external circuitry, so they are efficient for large displays. AMOLEDs also have faster refresh rates suitable for video. The best uses for AMOLEDs are computer monitors, large-screen TVs and electronic signs or billboards.
Types of OLEDs: Transparent, Top-emitting, Foldable and White
Transparent OLEDs have only transparent components (substrate, cathode and anode) and, when turned off, are up to 85 percent as transparent as their substrate. When a transparent OLED display is turned on, it allows light to pass in both directions. A transparent OLED display can be either active- or passive-matrix. This technology can be used for heads-up displays.
Top-emitting OLEDs have a substrate that is either opaque or reflective. They are best suited to active-matrix design. Manufacturers may use top-emitting OLED displays in smart cards.
Foldable OLEDs have substrates made of very flexible metallic foils or plastics. Foldable OLEDs are very lightweight and durable. Their use in devices such as cell phones and PDAs can reduce breakage, a major cause for return or repair. Potentially, foldable OLED displays can be attached to fabrics to create "smart" clothing, such as outdoor survival clothing with an integrated computer chip, cell phone, GPS receiver and OLED display sewn into it.
White OLEDs emit white light that is brighter, more uniform and more energy efficient than that emitted by fluorescent lights. White OLEDs also have the true-color qualities of incandescent lighting. Because OLEDs can be made in large sheets, they can replace fluorescent lights that are currently used in homes and buildings. Their use could potentially reduce energy costs for lighting.
In the next section, we'll discuss the pros and cons of OLED technology and how it compares to regular LED and LCD technology.
OLED Advantages and Disadvantages
The LCD is currently the display of choice in small devices and is also popular in large-screen TVs. Regular LEDs often form the digits on digital clocks and other electronic devices. OLEDs offer many advantages over both LCDs and LEDs:
- The plastic, organic layers of an OLED are thinner, lighter and more flexible than the crystalline layers in an LED or LCD.
- Because the light-emitting layers of an OLED are lighter, the substrate of an OLED can be flexible instead of rigid. OLED substrates can be plastic rather than the glass used for LEDs and LCDs.
- OLEDs are brighter than LEDs. Because the organic layers of an OLED are much thinner than the corresponding inorganic crystal layers of an LED, the conductive and emissive layers of an OLED can be multi-layered. Also, LEDs and LCDs require glass for support, and glass absorbs some light. OLEDs do not require glass.
- OLEDs do not require backlighting like LCDs (see How LCDs Work). LCDs work by selectively blocking areas of the backlight to make the images that you see, while OLEDs generate light themselves. Because OLEDs do not require backlighting, they consume much less power than LCDs (most of the LCD power goes to the backlighting). This is especially important for battery-operated devices such as cell phones.
- OLEDs are easier to produce and can be made to larger sizes. Because OLEDs are essentially plastics, they can be made into large, thin sheets. It is much more difficult to grow and lay down so many liquid crystals.
- OLEDs have large fields of view, about 170 degrees. Because LCDs work by blocking light, they have an inherent viewing obstacle from certain angles. OLEDs produce their own light, so they have a much wider viewing range.
Problems with OLED
OLED seems to be the perfect technology for all types of displays, but it also has some problems:
- Lifetime - While red and green OLED films have longer lifetimes (46,000 to 230,000 hours), blue organics currently have much shorter lifetimes (up to around 14,000 hours [source: OLED-Info.com]).
- Manufacturing - Manufacturing processes are expensive right now.
- Water - Water can easily damage OLEDs.
In the next section, we'll talk about some current and future uses of OLEDs.
Current and Future OLED Applications
Currently, OLEDs are used in small-screen devices such as cell phones, PDAs and digital cameras. In September 2004, Sony Corporation announced that it was beginning mass production of OLED screens for its CLIE PEG-VZ90 model of personal-entertainment handhelds.
Kodak was the first to release a digital camera with an OLED display in March 2003, the EasyShare LS633 [source: Kodak press release].
Several companies have already built prototype computer monitors and large-screen TVs that use OLED technology. In May 2005, Samsung Electronics announced that it had developed a prototype 40-inch, OLED-based, ultra-slim TV, the first of its size [source: Kanellos]. And in October 2007, Sony announced that it would be the first to market with an OLED television. The XEL-1 will be available in December 2007 for customers in Japan. It lists for 200,000 Yen -- or about $1,700 U.S. [source: Sony].
Research and development in the field of OLEDs is proceeding rapidly and may lead to future applications in heads-up displays, automotive dashboards, billboard-type displays, home and office lighting and flexible displays. Because OLEDs refresh faster than LCDs -- almost 1,000 times faster -- a device with an OLED display could change information almost in real time. Video images could be much more realistic and constantly updated. The newspaper of the future might be an OLED display that refreshes with breaking news (think "Minority Report") -- and like a regular newspaper, you could fold it up when you're done reading it and stick it in your backpack or briefcase.
For more information on OLEDs and related technologies, check out the links on the next page.
More Great Links
- Antoniadis, Homer, Ph.D. "Overview of OLED Display Technology." Osram Optical Semiconductors. http://www.ewh.ieee.org/soc/cpmt/presentations/cpmt0401a.pdf
- "Brilliant Plastics." Siemens Webzine. http://w4.siemens.de/FuI/en/archiv/pof/heft2_03/artikel18/
- "DuPont shows new AMOLED materials and OLED displays." OLED-Info.com. 3/6/2007. (10/9/2007). http://www.oled-info.com/tags/lifetime/blue_color
- Howard, Webster E. "Better Displays with Organic Films." Scientific American. http://www.sciam.com/print_version.cfm?articleID=0003FCE7-2A46-1FFB-AA4683414B7F0000
- Kanellos, Michael. "New Samsung panel pictures inch-thick TV." CNET News.com. 5/18/2005. (10/8/2007). http://www.news.com/New-Samsung-panel-pictures-inch-thick-TV/2100-1041_3-5712842.html
- Kodak: OLED Tutorial.
- "Kodak Unveils World's First Digital Camera with OLED Display." Eastman Kodak. 3/2/2003. (10/8/2007). http://www.kodak.com/US/en/corp/pressReleases/pr20030302-13.shtml
- Felton, Michael J. "Thinner lighter better brighter." Today's Chemist at Work, 2001; 10(11): 30-34. http://pubs.acs.org/subscribe/journals/tcaw/10/i11/html/11felton.html
- "OLED." AUO. http://www.auo.com/auoDEV/technology.php?sec=OLED
- Universal Display Corporation: Technology. http://www.universaldisplay.com/tech.htm
- Wave Report: OLED Tutorial.
- Williams, Martyn. "Sony Readies OLED TV." PC World. 4/12/2007. (10/8/2007). http://www.pcworld.com/article/id,130653-pg,1/article.html
<urn:uuid:9c6d1c2d-cd74-4d7b-82a8-e3b4c4a0956e> | Emerging Pathogens and the Safety of the Blood Supply
Emerging pathogens pose a significant risk to blood safety worldwide. Throughout history, global trade, migration and changes to the environment have all contributed to the spread of pathogens within endemic areas, and the introduction of these formerly localized pathogens and/or their arthropod vectors into new areas.
In the 21st century, advances in molecular technologies and genetics have contributed to the rapid identification of novel species and the ability to conduct surveillance for emerging pathogens.
- Recognize the definition and understand the history of emerging pathogens.
- Describe the factors that drive the spread of emerging pathogens.
- Understand the risks emerging pathogens pose to the blood supply.
- Describe methods to prevent emerging pathogens from entering the blood supply.
Level of Instruction
Author / Speaker
John Pitman, PhD
Medical Science Liaison, Cerus Corporation
John Pitman is an epidemiologist specializing in blood safety. He currently works as a Medical Science Liaison with Cerus Corporation, providing blood safety education to clinicians and serving as a conduit for further research and education specific to pathogen reduced blood products. From 2005-2016 he worked on global health programs with the US Centers for Disease Control and Prevention, including 10 years as a researcher and project leader with the blood safety team in the Division of Global HIV/AIDS.
He has contributed to more than twenty journal articles on blood safety and blood banking issues related to developing countries, and holds a PhD in blood bank epidemiology from the University of Groningen in the Netherlands. Prior to joining CDC, John worked as a journalist for more than a decade, mostly as an anchor and correspondent for the Voice of America. John also earned a Masters degree in Epidemiology from Yale University and a Masters in Journalism from Columbia University. John has also been licensed as an Emergency Medical Technician.
Cerus Corporation is approved as a provider of continuing education programs in the clinical laboratory sciences by the ASCLS P.A.C.E.® Program.
Contact Hour: 1.0
Expires on: Dec 11, 2021 at 11:59 pm CT
Blitz + Associates, Inc. is approved by the California Board of Registered Nursing, provider number 15148 for 1.0 contact hours.
Contact Hour: 1.0
Expires at: Dec 13, 2020 at 11:59 pm CT
This program Has Been Approved by the American Association of Critical-Care Nurses (AACN) for 1.00 CERP, Synergy CERP Category A, File Number 21905.
Expires on: Dec 11, 2018 at 11:59 pm CT
CERP Renewal Pending
Technical Requirements and Assistance
Minimum software requirements:
- Windows XP SP2
- Mac OS X 10.9
- iOS 7
- Android 4
- Learning activities may be in the Adobe Flash format. The latest version of Adobe Flash is always recommended to view this type of activity.
- Learning activities may be encoded as MP4 videos. Your browser must support MP4 playback natively or through the operating system to view these videos.
Minimum hardware requirements:
- 512MB of RAM
- 200MB of hard drive space
- Intel x86 processor
- Device capable of running iOS 7
- Device capable of running Android 4
Minimum connection speed:
- A broadband connection is required for Adobe Flash and MP4 learning activities. | 1 | 3 |
<urn:uuid:162b1745-1dae-4049-b358-da7eeacb40e0> | It is perceived that economic nationalism has slowed the meteoric rise of global trade. Since the Uruguay Round created the World Trade Organization (WTO) in 1995, trade of goods and services has become a dominant feature in global economic growth. As a result, hundreds of millions of people in developing countries have graduated from subsistence living to middle-class status. The accession of China into the World Trade Organization in 2001 accelerated both the volume and character of global trade. By 2008, Global Value Chains (GVCs) have come to explain up to 70% of global trade volumes.1 GVCs optimize comparative advantage across borders and have enabled innovation in trade logistics and services technologies, in addition to a general WTO commitment by member states to facilitate trade.
However, the renegotiation of liberal free trade agreements, such as Brexit and the reconsideration of NAFTA, and the concomitant shift of manufacturing jobs (on-shoring) back into developed economies have accelerated. In the latest WTO Report on G20 Trade Practices, $481 billion in new import restrictive measures were imposed by G20 members in 2018.2 This is the largest increase of such measures ever recorded by the WTO and six times larger than last year’s. Also, according to the World Bank, the growth in GVC’s has stalled.3 The WTO appears unable to broker more ambitious global agreements among member states, and there is a perceptible decline in confidence in the organization’s ability to evolve the rules-based global trading system.
The question of Africa’s ability to adapt to these shifting trends in trade must be analyzed in light of its participation in the global economy and its ability to adopt the tools to become more competitive in a world of rapidly evolving technology and supply chains. It should be noted that this analysis will concentrate on sub-Saharan Africa (SSA) and will disaggregate data accordingly, whenever possible. Africa has enormous diversity amongst its 54 nations and even among its regions. This chapter will examine the political economy of Africa’s trade and identify constraints and opportunities that will define its future, including the adoption of artificial intelligence.
Africa and Global Trade
Pliny the Elder is known for two contributions to global learning. One is the discovery of hops as an essential ingredient for brewing beer and the other for the phrase: “ex Africa semper aliquid novi” (always something new out of Africa).4
What is new about Africa has been the African rising story. While this optimistic narrative is a departure from the doom and gloom scenario of the past, there are storms on the horizon.
The first known evidence of trans-Africa trade dates from the 4th century BC, when the Axumite kingdom traded with the Ptolemaic dynasty of Egypt.5 Later, caravan routes emerged linking trading centers across the Sahel. Yet, African trade has been restricted by its physical and human geography, namely vast deserts, few navigable rivers, and dispersed population.
After centuries of slave traders and Portuguese explorers, European countries began to take a colonizing interest in Africa and by the end of the 19th century every sub-Saharan country but Ethiopia had been colonized by a European power. In 1885, the European powers divided Africa into an illogical array of scattered colonies. This “Scramble for Africa” was fueled by aspirations of global hegemony and a thirst for natural resources to fuel the industrial revolution.6 In some instances, these aspirations resulted in the establishment of settler communities. These colonial regimes built infrastructure to extract raw natural resources, which were then finished in the respective mother country, a “Colonial Value Chain”! Worse yet, colonists established state monopolies that ruled domestic markets and limited by law the emergence of a local business class. Local human capacity development was often limited to coopted elites recruited to administer (and police) these rapacious regimes.
The dawn of independence in Africa occurred from 1957 to 1975, when war-shattered European economies and global public opinion (including from the United States) coalesced to accelerate the exit of colonialism. Due to shortages in both human capacity and business acumen, the economic road to the Kwame Nkrumah’s “Political Kingdom” soon became a dead end. Perhaps as a product of the ideological tendencies of the Cold War era manifested by the adoption of “scientific socialism,” many of Africa’s early leaders eschewed business friendly policies in favor of command economies. One iteration of these doomed practices was the establishment of import substitution regimes, which aimed to create indigenous infant industries through high tariffs and impregnable non-tariff barriers against all imports, whether from neighboring countries or afar. These policies were an abject failure as the contribution of manufacturing to GDP at the beginning of the 21st century was the same as it was in 1970: 10%!7 In the meantime, local consumers were gouged, productivity plunged, and the schemes became vehicles for the destructive rent-seeking and clientelism that define many Africa economies today.
The 1970s saw the promise of independence fade into dysfunction, predation, and manipulation by super powers fighting proxy wars across Africa. South Africa, the continent’s most advanced economy, was roiled by Apartheid and the opposition to it, which spread beyond its borders and stifled trade.8 Nigeria, Africa’s most populous country, was riven by a succession of weak civil governments succeeded by oppressive military regimes, all marked by odious levels of corruption. Development assistance from multilateral and bilateral sources contributed to economic distortion9 and suffocating levels of indebtedness. As a result, Africa’s portion of global trade had fallen from 3.5% in 1971 to 1.5% in 1999 and consisted mostly of unprocessed goods.10 Adding insult to injury, Africa was largely left out of the global trade negotiations under GATT and WTO and thus unable to shape its own economic future. When coupled with the devastation of the HIV/AIDS pandemic, by the 1990s, Africa was on the economic ropes. Famously, in May 2000, The Economist published a feature on Africa entitled “The Hopeless Continent.”11 African trade statistics notoriously fail to quantify the size of the informal economy and the volume of informal trade of goods and services both internal and externally. A recent IMF study revealed that in Benin, Nigeria, and Tanzania about 65% of the economy is informal.12
Current Trade Situation
Nearly twenty years since The Economist declared it doomed, sub-Saharan Africa remains the most under-connected region in the world. While absolute trade has increased, the region represents about 2–3% of global trade volume and intra-Africa trade accounts for about 11% of total exports, as seen in the chart below comparing Africa's leading Regional Economic Communities (RECs). These include the East Africa Community (EAC), Economic Community of West African States (ECOWAS), Southern Africa Development Community (SADC) and the Common Market for East and Southern Africa (COMESA). SSA represents Sub-Saharan Africa. [See Figure 1]
Figure 1. African intra-regional trade, as a percentage of trade with the world14
Sadly, what little trade has occurred remains in raw commodities, mostly agricultural and mineral products. Although economic orthodoxy has long concluded that open markets beget economic growth, our evaluation of World Bank data has shown that there is a very weak correlation between economic growth and merchandise trade. [See Figure 2]
Figure 2. Correlation between GDP growth and merchandise trade.15,16
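For readers who want to reproduce this kind of check, a natural first pass is the standard Pearson correlation coefficient computed over country-year observations of GDP growth (x) and merchandise trade as a share of GDP (y); the formula below is the textbook definition rather than anything specific to the World Bank series used here:

```latex
r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}
```

Values of r close to zero across sub-Saharan country-year pairs are what underlie the "very weak correlation" reading of Figure 2.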
The most compelling explanation of this is that most external trade from Africa is tied to raw commodities and offers few forward and backward linkages to the local economy. This is in contrast to intra-Africa trade, which favors manufactured, fast moving, and consumer goods.17 There are many reasons for the lack of intra-African trade, including:
- Weakness of physical and human infrastructure (more on this later)
- Small size of individual African country markets
- Residual tariffs and onerous non-tariff measures (NTM) on processed and semi-processed African products by both developed and emerging markets18
- Export constraints and other pre-border barriers19
- Absence of trade finance
- Institutional constraints on enterprise growth and inability to achieve scale
- Currency risk
- Corruption and rent-seeking clientelism
- Civil disruption
It is beyond the scope of this chapter to delve into each of the above factors, however a couple of items are worthy of note.
As mentioned earlier, prior to independence, physical infrastructure was designed to satisfy the security concerns of competing European powers and related commercial interests seeking access to Africa's natural resource bounty. While multilateral and bilateral development assistance fueled a surge of investment in infrastructure in the early years of statehood, many of these investments suffered from poor design and lack of maintenance. In other regions, private investment in infrastructure provided higher yields. For Africa to enhance its trade competitiveness internally and externally, trade-related physical and human infrastructure must be enhanced. This is no mean task. In its most recent analysis, the Africa Development Bank has estimated that Africa needs approximately $170 billion per year in infrastructure investment development, of which 20% is available from African sources.20
Another obstacle is the resistance of foreign markets to open themselves to African value-added exports. These constraints can occur in the form of tariff biases. For example, cocoa is offered duty free access into the U.S. market, but chocolate is subject to duty. While meat products could enjoy access to European markets, Sanitary and Phytosanitary (SPS) measures thwart these opportunities, often at the behest of protectionist interests. And despite all the rhetoric toward South–South cooperation, China provides duty free access to fewer African products than the United States, and it only does so for those countries that fall under the UN's least developed country definitions, thereby excluding the most export-ready economies. Non-tariff measures equally restrict South-South trade and South-North trade.21
In order to partly remedy these deficiencies and respond to world opinion, G-8 countries have enacted several trade initiatives in the past twenty years, including the U.S. African Growth and Opportunity Act (AGOA) and the European Union's Economic Partnership Agreements (EPA). AGOA was passed in 2000 and expanded upon the Generalized System of Preferences (GSP) by allowing over 6,000 items from qualifying sub-Saharan African nations into the U.S. market on a zero-duty and nonreciprocal basis. AGOA was supported by over $1 billion of trade-related development assistance, largely through USAID's Africa Trade Hubs. This market access requires minimal compliance with various standards such as labor and human rights and general business norms. In 2015, AGOA was extended until 2025, when it is assumed that a more reciprocal agreement is likely. In the past few months, the Trump administration has indicated its intent to negotiate bilateral Free Trade Agreements (FTAs) with willing African nations. As seen in Figure 3, AGOA has achieved modest direct and indirect results. While total two-way trade between Africa and the United States has trebled between 2000 and 2017, the vast majority of the trade has been in petroleum or mineral related products, with most manufactured and agricultural goods limited to a few countries. [See Figure 3]
Figure 3. Aggregate two-way goods trade between AGOA countries and the United States22
The EU’s EPAs are neither as generous nor comprehensive as AGOA. These agreements are an extension of the Lomé agreement of the 1980s and are available to all qualifying African, Pacific, and Caribbean (APC) countries. While they have much more generous provisions than the Lomé agreement, they also require qualifying countries (which include all but the poorest APC countries) to open their markets to European exporters. As seen in Figure 4, the results have been inconclusive. [See Figure 4]
Figure 4. EU trade in goods with Africa, 2006–2016 (in € billions)23
[i] Eurostat Database, available at https://ec.europa.eu/eurostat/data/database
One of the barriers to intra-Africa trade has been the evolution of a system of regional trade agreements with often conflicting and always confusing results. In order to define its own economic future and accelerate intra-Africa Trade, in 2018, the leaders of 44 African countries signed an agreement to establish the African Continental Free Trade Agreement (CFTA). This agreement aims to establish the world’s largest geographic free trade arrangement by 2019. When in force, the CFTA will remove trade obstacles such as tariffs, quotas, and NTMs to accelerate the flow of goods and services amongst member states. Such integration is also aimed at increasing Africa’s partnership in GVCs. So far, only 14 countries have ratified the agreement and Nigeria has voiced opposition to the agreement. While the achievement evidences a monumental shift in ambition, it remains to be seen whether this will be transformative as Africa’s supply capacity has always limited the impact of market access agreements.
One area where supply constraints have been less daunting is the services sector. Services are less limited by physical barriers and have been greatly impacted by the growth of telecommunication and internet access across Africa. The impact has been dramatic, and there is evidence of a positive correlation to GDP growth. [See Figure 5]
Figure 5. Correlation between GDP growth and services growth.24,25
According to a Brookings report, services exports grew six times faster than merchandise exports between 1998 and 2015.26 According to the World Bank’s most recent data, 53.2% of sub-Saharan African GDP is attributed to services.27 And in Ethiopia and Ivory Coast, the services sector grew respectively by 8.59% and 9.15% in 2016.28 Kenya, Rwanda, and South Africa have undertaken special measures to grow their services sectors and become knowledge based economies. Botswana has become the global center for diamond sales. However, services expansion is dependent upon access to media and the Internet, and some African countries have put constraints on access in order to suppress political dissent.
A component crucial to fostering a prosperous economy is a country's ability to develop its human capital: the value people can contribute to an economy given their knowledge, skills, and work ethic. Unfortunately, many African countries are far behind their developed counterparts in this area, greatly hurting their economy in the present and likely in the future. This can be attributed to the collapse of post-secondary education institutions, lack of STEM (Science, Technology, Engineering, and Mathematics) related programs, and the limited possibilities for those who are educated. Although there is some hope on the horizon for these struggling countries, they still have a long way to go.
It is no secret that many African countries, predominantly in sub-Saharan Africa, fall short in their academic programs (see Figure 6). Research by the World Bank found that fewer than half the secondary school students in these developing countries could meet the benchmark minimum, set by the Programme for International Student Assessment (PISA), and only 26% of South African students reached it.29 Not only do SSA countries struggle to meet requirements, but they also have difficulty getting young children into classrooms. It was estimated that about 30 million children in SSA (1 in 4) are not enrolled in school and that only 28% of youth are enrolled in secondary school.30 Such dwindling numbers of children in school can in part be due to the wealth inequality throughout Africa. Many marginalized groups do not receive the same opportunities as the rich in the area and therefore do not have the means to send their children to school. In sub-Saharan Africa, 68% of children from poor families lack education, compared to only 13% from rich families.31 This is generally due to the rich families’ access to privatized schools, which are overall better than the public school system, which is chronically underfunded and lacks qualified teachers. Research found that in seven SSA countries, 3 in 10 fourth-grade teachers had not mastered the language curriculum they were teaching. It is highly unlikely that children can learn in an environment where their teachers are not even comfortable with the material. Such disparities can be detrimental considering they lead to fewer skilled workers for the continent as a whole. The societal implications could be significant further dividing economies into haves and have nots. These patterns inevitably continue through adulthood and result in workers who cannot provide as much value as their better-educated counterparts, thus leading to overall inefficiencies.
An additional problem for Africa’s education system is that the students who are in school are not studying disciplines that are central to the development of the continent. Technological advancements are central to any country trying to increase its economic activity, which is why they need human capital to excel in this area and help the country prosper. However, most students who study at higher levels of education (secondary and post-secondary) specialize in the areas of humanities and social sciences, not STEM fields.32 This lack of STEM education can be attributed to a variety of factors including inadequate funding for science and technology, small numbers of professionals in the field, and a lack of priority compared to other issues such as poverty and starvation.33 The disadvantages presented are a huge contributing factor to the cycle of underdevelopment. Funds are consistently transferred to sectors other than STEM education and development because the returns on STEM investments are too long-term and there are more pressing priorities. This leaves STEM underfunded, despite its potential to drastically help the economy. Therefore, SSA countries are less likely to get the opportunity to develop further, even though this development can lead to huge benefits in the future if given the chance to grow. In order to improve the STEM areas in Africa, governments need to enforce policy changes and the expansion of educational systems. Through this, they can divide funds up to help those with short-term needs, as well as plan ahead for future prosperity. [See Figure 6]
Figure 6. Median percentage of students in late primary schools who score above a minimum proficiency level on a learning assessment, by income group and region34
Due to the economic downfalls for the majority of the continent, Africa is seeing a huge brain drain among those with higher education; qualified professionals are leaving their home country to pursue better opportunities in other countries, typically in the United States or Europe. Those who do receive a good education in Africa are often incentivized to leave for higher paying jobs, a better quality of life, or more opportunities in other countries, thus leaving their home country worse off. The loss of potential workers leaves Africa increasingly reliant on bringing in workers from outside countries, which then hinders building up local skills in the community.35 However, there has been progress as many Africans have begun creating and innovating products and presenting them at world conferences, such as HackForGood, an annual hack-a-thon partnered with Nigeria that helps teach students computer skills and challenges them to innovate computer products.36 More events like this need to be in place so that students can feel challenged to keep improving and get a sense of gratification that their hard work is being recognized and encouraged. Improved public policy and increased business development would make African countries better able to provide the means for their citizens to study STEM fields and want to stay to continue their work.
Despite all that Africa has stacked against it, there is hope for its developing countries. Africa has the fastest growing middle class in the world, with the potential to grow its labor force immensely. It has a huge population of untapped potential that could be fixed with innovations and proper allocation of funding. Two changes Africa needs to make are improving school systems and pursuing technological advancements. Innovation is crucial for an economy to prosper, and the way to do that is through improved technology. Access to communications technology has the ability to dramatically improve the efficiency across all countries and give people the means to connect to one another. Similarly, growing the education systems will allow students to get a better education and grow up to contribute to the community. Young adults will be better prepared to enter the workforce and have the tools they need to push the economy into a place that can be beneficial for all. These adjustments will not be easy, but if governments and citizens can come together and develop better policies at all levels, change can happen.
Investment: Africa Rising
A brief mention should be made of Africa's changing investment landscape. The Africa Rising scenario has become a popular mantra over the past decade and a departure from the Natural Resource Curse scenario that dominated African inward investment for decades. With a growing middle class, a growth in demand in Asia for commodities, and a suite of economic reforms, Africa has once again become a more attractive investment destination. While currency and limits on repatriation of earnings still exist, private equity funds are searching for the higher yields that Africa can offer. Private investment and money remittances from the African diaspora now both exceed the amount of inward development assistance flows. China has made a contribution, first by its state-owned companies and now by 10,000 private companies operating in the continent.37 However, much of the Chinese capital is in the form of medium- and long-term debt or is bartered in exchange for access to desired natural resources. Surprisingly, according to EY, the United States is still the leading investor into Africa outside of the African continent as measured by the quantity of investments. Within Africa, South Africa and Kenya have become leading investors as they bring not only capital but also access to world-class capital markets. The Johannesburg Stock Exchange is the world's fourth oldest capital market. Yet in many countries other than these, the institutional and regulatory environments remain challenged.38
Although low investment returns in the developed world have incentivized investors to look for opportunities in emerging markets, Africa has set up many obstacles to FDI. First, with the exception of Kenya and South Africa, Africa’s capital markets are weak and offer few opportunities for investors to find attractive exits or raise complimentary capital. Second, Africans need more ambition with regards to the reform agenda, especially as there is compelling evidence that coherent, predictable, and transparent enabling environments are a precondition for investment. The World Bank’s annual doing business survey provides ample evidence of an ongoing African reform agenda. While four of the world’s top ten reforming economies are located in Africa,39 six of the bottom ten countries in the ease of doing business rank are located in Africa.40 While some African countries benchmark against other African countries, the reality is that, in a world of GVCs, each country in Africa competes with not only fellow Africa countries but also emerging markets everywhere.
Automation & Artificial Intelligence (AI) in Africa
Enabled by artificial intelligence (AI), machines can learn from patterns in data and proactively improve themselves, bringing human-like cognition to industrial automation and disrupting modes of production and the delivery of services. Global investment in AI has rapidly increased to between $20 billion USD and $30 billion USD in 2016, with 90% allocated to research, development, and deployment, and 10% to acquisitions.41 Much of this capital comes from companies like Google, Amazon, and Baidu. But there is also a growing contingent of private equity and venture capital investors, which spent between $5 and $8 billion in 2016.42 While some recipients of these funds are building AI systems to filter emails and provide legal advice, others are applying automation and AI to improve agriculture and manufacturing.
The abundance of interest, capital, opportunities, and promises reminds one of mobile technology just 10 years ago. Will automation and AI do to African nations over the next decade what mobile technology did to them in the last one, fueling a dramatic rise in connectivity and unlocking significant gains in economic development? Like mobile technology and communication capabilities, will automation and AI permit African nations to dramatically increase their research, development, and production capabilities? Will automation and AI give African nations even more power to leapfrog the need for old-fashioned infrastructure and outdated strategies of industrialization?
Some see application of automation and AI in Africa as “a chimera, not a reality”—a thing hoped or wished for but in fact illusory or impossible to achieve.43 On the contrary, entrepreneurs, startups, and multinational stalwarts alike are already investing in identifying and putting together the policy, regulatory, and investment components needed to create an enabling environment for automation and AI to not only take root but to scale as well. This is especially true in Kenya, South Africa, Nigeria, Ghana, Botswana, and Ethiopia.44
African leaders, entrepreneurs, investors, and policymakers have the opportunity to leverage automation and AI to improve agriculture and manufacturing in particular. With greater productivity, efficiency, and safety, these high-growth sectors enabled by innovative technology and human capacity can advance sustainable development, maintain inclusive growth, and connect supply chains regionally and globally. PricewaterhouseCoopers estimated that AI technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030, of which $1.2 trillion would be added for Africa.45 But these outcomes are not possible without considerable challenge and significant investment.
Critical components necessary for automation and AI to take hold are missing across most of the continent except in a handful of countries—namely Kenya, South Africa, Nigeria, Ghana, Ethiopia, and Botswana. For one, the lack of quality first-generation internet and communications infrastructure leaves little to replace and a heavy burden for mobile networks to process the computationally dense work of AI at scale.
Internet connectivity still costs too much for most people; and many of those that can afford it still complain of poor service. Plus, further improvements to human capacity are also required to speed uptake and adoption of the new features afforded to users by automation and AI. Finally, many African countries remain incapable of requisite reforms in the areas of data collection and data privacy, infrastructure, education, and governance.
But there are also opportunities at large and tools available to solve problems and accelerate progress. Mobile technology should be seen as the foundation for automation and AI. A youth culture of strong interest in business creation is a driving force of technology adoption and development, with many young entrepreneurs looking for global opportunities. Internet connectivity has nonetheless provided unprecedented access to information, partnership, and capital.
Automation & Artificial Intelligence in Agriculture
The Global Opportunity Network’s 2016 Report identified smart agriculture as the opportunity with the biggest potential for a positive impact on society—ahead of the digital labor market and closing the skills gap.46 After all, with 2.3 billion more people in the world by 2050, we will need to produce 70 percent more food than we do today amidst climate change, resource scarcity, and growing inequality.47
Nowhere are these pressures felt more sensitively than in Africa. Agriculture employs 60 percent of Africa's workers and produces nearly a third of its GDP.48 [See Figure 7]
Figure 7. Sectoral composition of GDP in Africa, 2000-201649
Until 2025, agriculture will create more jobs than the rest of the economy combined.50 Buoyed by the increasing demands of population growth and the increasing supply pressures of climate change, agriculture will continue to be a critical pillar of Africa’s economic growth and meaningful participation in the changing global trade landscape. As such, improving productivity and efficiency of agriculture and food processing is an important objective for countries in Africa, who can accomplish this goal with the help of automation and AI.
But the challenges are plenty. For one, there is more uncultivated arable land in sub-Saharan Africa than there is cultivated farmland in the United States, and that land needs to be utilized more effectively and efficiently.51 Barriers to accessing financing for modernization persist, so it remains difficult and expensive to do so. Young working-age people are leaving rural homes for cities, attracted away from farming to what are perceived to be more ‘innovative’ industries. Climate change, disease, and drought also remain formidable and will almost certainly become more severe in the future. Nevertheless, automation and AI can help solve these, too.
According to IBM, problems related to weather cause 90 percent of all crop losses. Artificial intelligence tools can help farmers analyze data like humidity, soil pH, air pressure, precipitation, temperature, and weather in real-time and help determine plant strength, predict the best time for planting and irrigation, and increase food production in a time when the world really needs it.
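To make the idea concrete, the sketch below shows the simplest possible decision layer over farm sensor readings; the class names, fields, and thresholds are hypothetical illustrations (not drawn from IBM or any vendor product), and a real AI system would replace the hand-written rules with models learned from historical yield and weather data:

```java
// Illustrative sketch only: a hypothetical sensor record and a naive rule-based
// irrigation recommendation. Real systems would learn these thresholds from data.
public class IrrigationAdvisor {

    // One reading from a (hypothetical) field sensor station.
    record SensorReading(double soilMoisturePct, double soilPh,
                         double airTempC, double forecastRainMm) {}

    // Recommend irrigation when the soil is dry and either no rain is expected
    // or temperatures are high enough to stress the crop.
    static boolean shouldIrrigate(SensorReading r) {
        boolean drySoil = r.soilMoisturePct() < 25.0;      // illustrative threshold
        boolean noRainExpected = r.forecastRainMm() < 2.0;  // illustrative threshold
        boolean heatStress = r.airTempC() > 30.0;           // illustrative threshold
        return drySoil && (noRainExpected || heatStress);
    }

    public static void main(String[] args) {
        SensorReading morning = new SensorReading(18.0, 6.4, 33.5, 0.0);
        SensorReading evening = new SensorReading(42.0, 6.4, 24.0, 12.0);
        System.out.println("Morning reading: irrigate? " + shouldIrrigate(morning)); // true
        System.out.println("Evening reading: irrigate? " + shouldIrrigate(evening)); // false
    }
}
```

The value of AI in this setting is precisely that the thresholds above stop being guesses: they are fitted to local soil, crop, and weather history and updated as new data arrive.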
Powered by AI, satellite image analysis can optimize use of uncultivated arable land, recommending locations and schedules for planting, irrigation, fertilization, and harvest. Powered by AI, financial transactions can be more secure, and lenders can more effectively assess risk with less information in order to widen access to capital. Powered by AI, food processors and wholesalers can improve supply chain efficiency and increase profitability. Powered by AI, robots are also attracting significant agricultural interest, namely stationary robots, non-humanoid land robots, and fully automated aerial drones. Between 23 and 37 percent of the companies surveyed said they plan to make investment of this sort (depending on the industry).52
Who is Leading the Way?
Kitovu (Nigeria) – a web/mobile based decentralized fertilizer/seedling warehousing system based in Nigeria that matches the right inputs to different farm locations owned by smallholder farmers in distant pocket locations, using geo-location and soil data collected by the mobile app.
Syecomp (Ghana) – specializes in the acquisition, processing, analysis and synthesis of imagery from remotely sensed satellites and multispectral image data from drones to monitor field crops/vegetative status and identify and mitigate potential diseases across fields in Sub-Saharan Africa on farms in Ghana.
Yellow Beast (South Africa) – develops, manufactures, and installs an easy-to-use, precision micro-irrigation device, called Nosets Simplified Irrigation, which senses and interprets the most favorable, in situ conditions in the soil-crop system, using known information on the crop and soil properties, and automatically irrigates the crop.
Automation & Artificial Intelligence in Manufacturing
Manufacturing represents almost a third of African countries' GDP (see Figure 7). Overseas Development Institute data show that between 2005 and 2014 manufacturing production within Africa more than doubled from $73 billion to $157 billion, growing 3.5% annually in real terms.53 Uganda, Tanzania, and Zambia have achieved more than 5% annual growth in the recent past.54 Manufacturing exports from sub-Saharan African markets almost tripled between 2005 and 2015 to more than $140 billion.55 Foreign Direct Investment (FDI) in African manufacturing is increasing and increasingly comes from other parts of Africa.56 Manufacturing investments represent a quarter of all FDI in Mozambique and Tanzania and more than 40 percent in Rwanda.57 Indeed, The Economist called Africa "an awakening giant" in 2014; and Irene Yuan Sun, author and consultant, writing in Harvard Business Review in 2017 considered Africa "the world's next great manufacturing center."58,59 [See Figure 8]
Figure 8. Average annual growth in value of African manufactures exports by sector, 2005-2016 (%)60
Improvements to manufacturing through automation and artificial intelligence (AI) can accelerate diversification of African economies, increase resilience to economic and climate shocks, and decrease dependence on natural resource exports. Automation and AI have the potential to expand manufacturing capabilities for aerospace, military, medical, dental, textiles, and automotive, all high-growth, high-value sub-sectors. Egypt, Tanzania, Morocco, South Africa, Tunisia, and Kenya are stand-out leaders in terms of pro-manufacturing policy, incentives for investment, and environments for experimentation and commercialization of automation and AI in manufacturing.
In fact, Morocco has been able not only to diversify its economy into manufacturing, but also to move higher up into the value chain.61,62 In 2016 slightly more than 13% of Morocco's total exports were textile manufacturing products, while exports of higher value-added electrical machinery and mechanical appliances represented nearly 18% of exports. Morocco adopted policies to develop its automotive and aerospace sectors, which are now starting to bear fruit. The automotive industry represents nearly 14% of its exports, and the aerospace sector has grown by 216 times over the last 16 years to reach about $443 million worth of exports, 2% of the total. This success was in large part the result of Morocco's Plan d'Acceleration Industrielle (PAI), which aimed to modernize, stimulate growth, and encourage competition in key export industries including aeronautics, auto-industry, agro-industry, offshoring, textiles, and pharmaceuticals.63
Tunisia is another example of a country steering its economy away from oil and into higher value-added manufactured products. Some of its major non-oil exports were textile products and electrical machinery and mechanical appliances, representing about 22% and 31% of its total exports respectively in 2016. Similar to Morocco, it is also moving into the automotive and aerospace sectors. In 2016, they represented about 3.5% and 1.7% of its exports respectively.
In East Africa, Kenya is also developing its textile manufacturing capabilities. Over the last 15 years, this sector has increased its size 59 times. In 2016, its textile exports reached about $146 million, representing almost 10% of its total exports. Kenya is also expanding its automotive industry. Toyota and Volkswagen have invested in their own assembly plants in the country. In partnership with local companies, Hino Motors, Hyundai, and Tata Motors are also assembling their cars, trucks, and buses locally. Although this sector is still relatively small, locally assembled vehicles represent more than 50 percent of new vehicles sold, and Kenya’s automotive exports have grown more than 20-fold since 2001. The Kenya Center of Excellence in the Africa Center of Tech Studies and Kenyatta University helped incubate manufacturing in the country.
Automation & Artificial Intelligence in Other Growth Sectors
Occupational safety is a significant issue in Africa as most employed individuals work in informal sectors and for MSMEs generally without any form of occupational hazard prevention or control procedures, legal protection, or health care coverage. Over 59,000 work-related fatalities and 4 million non-fatal accidents occur across the African continent each year according to the International Labor Organization (ILO). Promising applications of automation and AI to robotics can protect and safeguard employees by going into dangerous spaces in mines, where they perform tasks such as scouting, operating drills, and capturing detailed information. Multinational mining company Anglo American, which operates mines in multiple African countries, suggested in Bloomberg in 2017 that “robots will run mines within the next decade.”64
Technologies powered by AI and automation can boost healthcare efficiency, freeing doctors to treat those who actually need in-person care and providing increased access for all. In addition, IBM Watson is partnering with Kenya Medical Supplies Agency and local businesses to monitor medical product supply chains. Services with intelligent chatbots interact with healthcare workers to submit supply chain data, which are analyzed by a trusted advisor to identify the causes of stockouts, preventative measures, and relevant resources to improve the availability of product by 50 percent. In 2016, the Rwandan government signed a deal with Zipline, a drone delivery service that delivers medicine and blood to otherwise difficult-to-reach areas and has put every Rwandan within 30 minutes of life-saving medical supplies.65,66
Finally, AI can help protect businesses' financial security. AI programs such as Ayasdi can take massive data sets, discover discrepancies, and predict when financial hiccups will arise. As companies increasingly focus on improving their risk profiles in our interconnected world, AI security programs will become increasingly common. HSBC is already using Ayasdi to transform its approach to financial crime risk and protect against money laundering, fraud, and other threats. De Beers has implemented a blockchain system that will track a diamond from mine to ring, thereby granting certainty of origin.67
Automation and artificial intelligence (AI) require large amounts of data from which to find patterns and make predictions. While mobile phones and the growing popularity of social media and messaging applications across Africa have made data more readily available, there remain shortages and barriers. Even in countries where automation and AI hold promise, data are often poor in quality, untimely, or missing altogether.68
In sub-Saharan Africa, Kenya leads in internet penetration, mobile density, and in trade in ICT services. Expanding internet access on the continent has led to a 25% increase in GDP, worth $2.2 trillion, and 140 million jobs.69 Even more fundamental than internet access, automation and AI no matter how innovative and disruptive still require basic ICT infrastructure, education, and improved cost and reliability of electricity.
Developing, Not Automating, Human Capacity and Skill
More than just big data, automation and artificial intelligence (AI) rely on significant know-how among its human adopters. No one knows for certain if productivity gains from automation will create more jobs than it destroys, as has occurred during previous technological shifts. The labor effects of automation and AI are often painted as a zero sum, but the truth is more nuanced. The Future of Jobs Report in 2018 predicts the loss of 75 million jobs by 2022 and the creation of 133 million jobs over the same period, for a net increase of 58 million jobs.70 In other words, innovative technology like automation and AI may create more jobs than it destroys.
Rather than displacing employees, machines can empower low-skilled workers and equip them to take on more-complex responsibilities.71 This, in turn, can help meet an urgent need for countries lacking widespread access to education and skills training.72 AI and web-based training programs, for example, could teach more complex skills to a low-skilled worker and respond by adjusting its settings as the worker expresses understanding and knowledge.73 This type of technology is especially applicable in countries like South Africa, where unemployment remains high and employers cannot fill vacancies due to a dearth of skilled workers.
Similarly, a World Economic Forum report suggests new technologies have the capacity to both disrupt and create new ways of working, similar to previous periods of economic history such as the Industrial Revolution, when the advent of steam power and then electricity helped spur the creation of new jobs and the development of the middle class. But workers and the systems that educate and train them need to be ready. Significant investment should be maintained in developing human capacity from primary school to university. For instance, backed by Google and Facebook, the African Institute for Mathematical Sciences (AIMS) has created the first dedicated master’s degree program for machine intelligence in Africa.
Way Forward to Enabling Automation, Artificial Intelligence, and Their Benefits
Will automation and AI fuel economic development and research as mobile technologies did in the past decade? Will they allow African nations to leap ahead, skipping traditional industrialization steps? The answers to these questions remain elusive, and in many respects it remains too early to tell. But for any chance at positive outcomes—that is, for African countries to fully leverage the power of automation and AI to change sectors like agriculture and manufacturing into globally-connected productivity powerhouses—governments and private sector alike must fully understand the advantages and consequences of this technology and deliberately respond to integrating it into national strategy.74
Government leaders should focus on three (3) key activities:
- Increase financing of internet and communications technology (ICT) infrastructure development;
- Integrate technology education into curricula of primary and secondary schools;
- Implement reforms to data collection and data privacy policies.
There is an urgent need to accelerate improvements to the agriculture and manufacturing sectors to increase their value alongside other efforts to diversify African economies, strengthen growth, and build resilience. Given the leapfrogging lessons of mobile technology on the continent over the last two decades, many African countries are positioned well to leverage automation and AI with agility and innovation. Automation and AI in agriculture and manufacturing would unlock tremendous value, connecting these markets to new regional and global marketplaces and allowing them to compete more efficiently and effectively. The challenge lies not in maneuvering these technologies as a vehicle but in ensuring that government, industry, and civil society contribute to creating an enabling environment.
Anthony Carroll serves as a vice president at Manchester Trade Ltd. Eric Obscherning is an associate consultant at C&M International. | 1 | 5 |
<urn:uuid:0e5d904f-db07-448e-8740-0bfaf529e5d7> | Comparing Large Files
It is possible to compare very large datasets using XML Compare; example test data has demonstrated that files over 1GB in size can be loaded from disk and compared in around 7 minutes.
There are many different factors that affect performance with large files apart from the CPU type and speed, and the amount of physical memory, both of which must be adequate for the job. Some of the more important are discussed in the following sections. We also provide some typical metrics for DeltaXML on basic machines.
White space nodes are generally significant in XML files. Each sequence of white space characters, e.g. a newline or spaces between elements, can be treated as a node in the XML file. This can increase the memory image size and slow the comparison process. It can also result in differences being identified that are not significant.
In many situations white space nodes are not important and can be ignored. If a file has a DTD, an XML parser can use this as the file is read in to identify whether white space is ignorable or not. If a white space node is ignorable, for example because it appears between two markup tags, DeltaXML will ignore it in the comparison process.
If there is no DTD, white space nodes should be removed, either using an editor or by processing with an XSL filter such as normalize-space.xsl, though using XSL can be time consuming for large files. The delta files generated by DeltaXML have no white space added to them: if you look at them in an editor you will see that new lines are added only inside tags. This may look strange at first but it is an effective way to have shorter lines without adding white space nodes to an XML file. White space inside tags will be ignored by any XML parser.
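As an illustration of the same idea, the short standalone Java program below (it is not part of the DeltaXML API, and it is not the normalize-space.xsl filter mentioned above - just a sketch using the standard JAXP and SAX APIs) streams a document through a filter that drops text nodes consisting only of white space, then writes the result back out. Because it is stream based it avoids building the whole document in memory, which matters at these file sizes.

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLFilterImpl;

public class WhitespaceStripFilter extends XMLFilterImpl {
    private final StringBuilder buffer = new StringBuilder();

    public WhitespaceStripFilter(XMLReader parent) { super(parent); }

    // Text may arrive in several chunks, so buffer it until the next element event.
    @Override public void characters(char[] ch, int start, int len) {
        buffer.append(ch, start, len);
    }

    // Forward buffered text only if it contains something other than white space.
    private void flush() throws SAXException {
        if (buffer.length() > 0) {
            String text = buffer.toString();
            if (text.trim().length() > 0) {
                super.characters(text.toCharArray(), 0, text.length());
            }
            buffer.setLength(0);
        }
    }

    @Override public void startElement(String uri, String local, String qName, Attributes atts)
            throws SAXException {
        flush();
        super.startElement(uri, local, qName, atts);
    }

    @Override public void endElement(String uri, String local, String qName) throws SAXException {
        flush();
        super.endElement(uri, local, qName);
    }

    public static void main(String[] args) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true);
        XMLReader reader = factory.newSAXParser().getXMLReader();
        SAXSource source = new SAXSource(new WhitespaceStripFilter(reader),
                new InputSource(new File(args[0]).toURI().toString()));
        TransformerFactory.newInstance().newTransformer()
                .transform(source, new StreamResult(new File(args[1])));
    }
}

Run it as, for example, java WhitespaceStripFilter input.xml cleaned.xml before passing the cleaned files to the comparator. Note that it drops whitespace-only text nodes everywhere, so it is only appropriate where such nodes are known to be insignificant.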
Remember also that indentation of PCDATA within a file has an effect: often white space in PCDATA and attributes should be normalized before comparison. Otherwise, again, there will be a lot of differences reported that are not important.
XML file structure
There is a performance difference between comparing 'flat' XML files, i.e. files with a large number of records at one level, and more deeply nested files; the latter tend to require less processing because there are fewer nodes at each level. Comparison of orderless data is generally slower.
Number of differences
The performance is affected by the number of differences: it is quickest when there are no differences! The more differences there are, the slower the comparison process, because the software is trying to find a best match between the files. The LCS algorithm used in DeltaXML for pattern matching ordered sequences has optimal performance for small numbers of differences and slows significantly for large numbers of differences.
DeltaXML shares text strings, so many different text strings will result in a larger memory image and may cause the program to hit memory size limitations sooner. On the other hand, files with many identical strings will be stored very efficiently.
Changes only or Full delta
The DeltaXML API has the ability to generate a delta file with 'changes-only' or a 'full delta' that includes unchanged data as well.
The time for comparison and the memory required are independent of the type of delta being produced. However, the full-context delta output is typically larger and will require more disk IO and CPU time to write to disk.
Java Heap Size
The size of the JVM heap is one of the main factors which determines the size of datasets which DeltaXML can process. The size of the heap, amount of available RAM and other JVM configuration options affect both capacity and performance (too small a heap will result in excess garbage collection; similarly, not enough RAM will cause performance degradation). The following guidelines are suggested:
The java -Xmx option can be used to increase the fairly small default JVM heap size. For example, invoking using the java -Xmx512m ... command line argument will allocate half a gigabyte of RAM to the JVM heap.
Performance is poor if there isn't enough RAM available to support the requested JVM heap size. Using disk based swapping to support the heap exhibited significant slow downs. We suggest ensuring that the heap size specified with the java -Xmx argument is available as free RAM (see the small example below).
The J2SE server JVM can provide much better performance than the client JVM (in some cases twice as fast), but at the expense of increased memory consumption. If enough RAM is available, adding java -server ... is recommended for best performance.
32 bit Operating Systems and processors can limit the process virtual address space and thus the amount of memory that you can dedicate to JVM heap usage. Some Operating Systems divide the 32 bit process address space into space for system/kernel and space for user code. For example, Windows™, Linux™ (most distributions/kernels) and MacOSX™ do a 50/50 split, leaving on a 32 bit machine around 2GBytes of space available for the Java heap, even on machines which have larger amounts of RAM installed. 32 bit processes on Solaris Sparc™ (7, 8 & 9) avoid the 50/50 split and make most of the 4Gbytes available to the java heap, for example java -Xmx3900m is possible.
To exceed the 2 or 4GByte Java heap size limits, a 64 bit JVM is usually required. However, for this to work usefully and to see benefits, 64 bit processors, corresponding Operating System support (some Operating Systems available for 64 bit processors only support a 32 bit address space, for example MacOSX™ 10.3) and more than 4 GBytes of RAM will be needed.
The use of Multiple Page Size Support (java -XX:+UseMPSS ...) on Solaris provided a 5% runtime improvement in testing, with no measurable memory overhead.
Using the incremental garbage collector (java -Xincgc ...) showed no benefit when tested.
It was hoped the use of the Parallel Garbage collector (java -XX:+UseParallelGC ...) would provide improved run times on multiprocessors as garbage collection could occur concurrently on a separate CPU. It actually had the opposite effect, doubling the elapsed runtime and trebling the CPU time consumed.
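As a quick way of confirming that the requested heap really is available to the JVM that will run the comparison, the following small utility (not part of XML Compare; it only uses the standard Java runtime API) prints the maximum heap the JVM will use together with the JVM name and architecture:

public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use (reflects the -Xmx setting).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap     : " + (maxBytes / (1024 * 1024)) + " MB");
        System.out.println("JVM          : " + System.getProperty("java.vm.name"));
        System.out.println("Architecture : " + System.getProperty("os.arch"));
    }
}

Running, for example, java -server -d64 -Xmx6g HeapCheck before a large comparison makes it easy to spot that a 32 bit JVM has been picked up or that less heap is available than expected.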
SAX input sources
Reading from disk based files, for example, using the command-line interpreter, is typically slower than processing SAX events produced from an existing in memory data representation. As well as the reduced disk IO a more significant speedup arises from the lack of lexical analysis/tokenization that is otherwise performed by a SAX parser. We also recommend testing different SAX parsers and comparing their performance using your data if you need to read XML files from disk.
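If you do need to read from disk, a rough timing harness along the following lines can be used to compare candidate SAX parsers on your own data (this is an illustration only; the factory class name in the comment is just an example and depends on which parsers are on your classpath):

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class ParserTiming {
    public static void main(String[] args) throws Exception {
        // Optionally select a parser implementation, e.g.
        // java -Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl ParserTiming file.xml
        XMLReader reader = SAXParserFactory.newInstance().newSAXParser().getXMLReader();
        reader.setContentHandler(new DefaultHandler());  // discard events; measure parse time only
        long start = System.currentTimeMillis();
        reader.parse(new InputSource(new File(args[0]).toURI().toString()));
        System.out.println(reader.getClass().getName() + " parsed " + args[0] + " in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}

A full comparison will of course also include tree building, matching and output time, but parser differences measured this way give a useful first indication.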
It is difficult to give accurate performance metrics for the reasons outlined above. But some examples may help as an indication.
There are very few large XML datasets which are publicly available so we have used the XMark benchmark generator - from http://xmlbench.sourceforge.net. This is typically used for testing XML content repositories/databases and XQuery implementations. Suggestions for alternative benchmark data and particularly documents are welcome.
Test file generation
Example files of test data were generated using the following command-lines with the xmlgen application; the intention was to generate files around 1GByte in size:
$ ./xmlgen -v
This is xmlgen, version 0.92 by Florian Waas ([email protected])
$ ./xmlgen -f 10.0 -d -o f10.xml
$ ./xmlgen -e -o auction.dtd
$ ls -l f10.xml
-rw-rw-r-- 1 nigelw staff 1172322571 Nov 7 16:36 f10.xml
Test file characteristics
Some characteristics of the generated file are described using the following XPaths and their result values:
While the shortest runtime is from performing an identity comparison (ie comparing the same file or data with itself) we wanted a more realistic test with perhaps a small number of changes. To achieve this we deleted 7 random grand-children elements in the data, from the elements with large numbers of children.
The following XPaths describe the elements which were deleted:
This 'trimmed' file was called f10t.xml and was slightly smaller than the original input.
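If you want to reproduce this kind of test on your own data, a trimmed copy can be produced with a few lines of standard JAXP code. The sketch below is an illustration only - it is not the script used for the benchmark above, and the XPath shown in the comment is just an example based on the XMark document structure. It removes the elements matched by a supplied XPath expression and writes the document back out; note that loading a multi-gigabyte file as a DOM needs a correspondingly large -Xmx setting.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class TrimElements {
    public static void main(String[] args) throws Exception {
        // args: input.xml output.xml xpath   e.g. "/site/regions/africa/item[1]"
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(args[2], doc, XPathConstants.NODESET);
        for (int i = 0; i < hits.getLength(); i++) {
            Node n = hits.item(i);
            n.getParentNode().removeChild(n);   // delete each matched element
        }
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(new File(args[1])));
    }
}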
Test Hardware and Software
Test hardware was a Sun x4100, with:
2 AMD Opteron 275 CPUs (providing 4 cores at 2.2GHz)
12 GBytes of RAM
Internal 73GByte, 10k rpm SAS disks
Solaris 10 Update 3 (11/2006)
Test software was XML Compare 5.2
The command-line driver was used to run and time the tests. The following command was used:
$ time java -server -d64 -Xmx6g -jar /usr/local/deltaxml/DeltaXMLCore-5_2/command.jar compare delta f10.xml f10t.xml f10x.xml
DeltaXML Command Processor, version: 1.5
Copyright (c) 2000-2008 DeltaXML Ltd. All rights reserved.
Using: XML Compare, version: 5.2
real 7m28.145s
user 10m7.796s
sys 1m13.039s
The above command represents the basic, default command-line usage. The UNIX time results show a comparison time of around 7.5 minutes for these 1GByte data files. Faster times were obtained with the following techniques:
Turning off output indentation by adding "Indent=no" to the command-line saves around 10 seconds from both the CPU and elapsed times.
Turning off enhanced match 1 by adding "Enhanced Match 1=false" to the command line reduces the times to 6m38s real/9m05s user.
Comparing the f10.xml file with itself (an identity comparison which reads the files, traverses them and writes a delta result) gives a lower-bound comparison time for the data of 5m33s real/8m17s user.
Some further issues relating to performance include:
The times generated above have a larger amount of CPU or 'user' time compared to the elapsed or 'real' time of the command. This is due to the introduction of multithreading in the XML Compare 5.2 release.
A substantial proportion of these times is spent doing disk IO and therefore the performance of the IO system will matter. Using Java code it is possible to send/receive SAX events from the comparator; this will be faster as it avoids both the disk IO and the parsing time. However, such tests are harder to configure and time.
It is possible to tune certain aspects of the JVM operation such as threading and garbage collection. Such results are often specific to the JVM being used (Sun and IBM for example offer different options) and may also give different results/benefits on different hardware/OS platforms.
We welcome feedback on these results and are prepared to look at tuning and performance issues for customers through our normal support channels. Any suggestions for large XML datasets which can be used for benchmarking and performance testing would also be welcomed.
How to test DeltaXML on your own large files
We often have enquiries about handling large files, 500Mb to several Gb. Here are a few other comments and suggestions.
Download an evaluation of XML Compare to try it on your own files. The Professional Named User edition has a 1M node limit - it will tell you if you hit this. The Professional Server and Enterprise editions do not have this limit. If you do word-by-word comparison the node count goes up a lot because the text is split into words and a word is counted as a node in the XML tree.
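To get a feel for whether a file is likely to approach such a limit, something like the following sketch counts elements, attributes and non-whitespace text runs with a plain SAX parse. It is an illustration only; the exact node counting rules applied by XML Compare, in particular when word-by-word comparison splits text into words, may differ.

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class NodeCount extends DefaultHandler {
    private long elements, attributes, textRuns;

    @Override public void startElement(String uri, String local, String qName, Attributes atts) {
        elements++;
        attributes += atts.getLength();
    }

    @Override public void characters(char[] ch, int start, int len) {
        // Parsers may split text into several chunks, so this is only an estimate.
        if (new String(ch, start, len).trim().length() > 0) {
            textRuns++;
        }
    }

    public static void main(String[] args) throws Exception {
        NodeCount handler = new NodeCount();
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true);
        factory.newSAXParser().parse(new File(args[0]), handler);
        System.out.println("elements=" + handler.elements
                + " attributes=" + handler.attributes
                + " non-whitespace text runs=" + handler.textRuns);
    }
}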
You would need to have sufficient memory - exactly how much depends on the nature of the data but for 500Mb files we would initially suggest somewhere in the 4 to 8 Gb range.
DeltaXML will work fine in a 64 bit environment, provided that you:
1. use a 64 bit OS and hardware
2. use a 64 bit JVM and invoke it appropriately.
For example, use the command line access like this, replacing x.y.z with the major.minor.patch version number of your release e.g. command-10.0.0.jar
$ java -Xmx4g -jar command-x.y.z.jar compare delta file1.xml file2.xml file1-2.xml
The java -version command will often report the use of a 32 or 64 bit JVM, for example:
$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
Use the -d64 argument if your default JVM is reported as 32 bit, and use the -Xmx argument to adjust the heap size should you get any Java memory exceptions. The example above was for 4Gbytes which works well on a Mac desktop with 8GB of RAM.
We spend time optimizing our products and associated XML tools for lower memory footprints - some recent work was reported in the paper "XML Pipeline Performance", which describes advanced methods for optimizing XML pipeline performance and was presented at XML Prague 2010, held March 13th and 14th, 2010, Prague, CZ. (Please note that the performance figures presented predate the Saxon 9.3 release, which addresses some of the issues discussed.)
Be sure to remove white space from large input files. Performance depends on file structure and text content, so it needs to be evaluated on your own data.
However, it is clear from the above that XML Compare can be used successfully with very large XML datasets.
Objectives—The aim was to determine in New Zealand: (1) to what degree the International Classification of Diseases Supplementary Classification (ICD) external cause (E) codes for drowning identify all deaths involving drowning; (2) how the other drowning deaths are distributed across E codes; and (3) whether the proportion of drownings not identified by traditional ICD E codes has changed over time.
Methods—Mortality files for the period 1977–92, which were coded in the range E800-E999 (external causes of injury and poisoning), were searched electronically using the keyword “drown”.
Results—2718 cases that involved drowning were identified. This represents a 17.7% increase in the number of cases one would identify using ICD drowning E codes alone. The majority (65%) of the 408 drownings not coded as such were coded as motor vehicle traffic crashes. The number of drownings that were not identified by ICD E codes remained relatively constant over time, although the number of deaths E coded as drowning declined significantly in recent years.
Conclusion—Standard ICD E codes for drowning do not identify all drowning related deaths, which may make comparisons of injury rates between countries difficult, especially for injuries such as drownings and burns that can be both nature of injury and external cause codes. Multiple cause coding and the inclusion of free text narratives are an important tool to improve the value of a country's vital statistics for injury prevention, and facilitate comparisons with other countries.
- multiple cause of death statistics
- free text
Most studies of injury fatalities rely on use of standard groupings of codes to identify the injuries of interest. Little consideration is given to cases that may be missed by this approach, due either to misclassification or to the coding rules needed to define a single underlying cause even when there are multiple events in the sequence of events leading to death. The most widespread coding system for studying injury causes is the International Classification of Diseases Supplementary Classification (ICD) of external causes of injury and poisoning (E) codes.1 As part of the International Collaborative Effort (ICE) on Injury Statistics,2, 3 studies are being undertaken that seek to evaluate and compare differences in vital statistics using specific injury types. One such study is on drownings, and participating countries are being asked to use the following standard codes to identify drownings and compare rates between countries.
E830: Accident to watercraft causing submersion
E832: Other accidental submersion or drowning in water transport accident
E910: Accidental drowning and submersion
E954: Suicide and self inflicted injury by submersion (drowning)
E964: Assault by submersion (drowning)
E984: Submersion (drowning) undetermined whether accidentally or purposely inflicted
Although this is the complete range of specific E codes for submersion injuries listed in ICD, it does not identify all cases of drowning in a country because there are drownings that occur under other circumstances and are thus coded with other E codes. There are well defined coding rules that give precedent to the initiating event for certain causes such as transportation, for example a motor vehicle going off the highway into the water. A single code does not capture that the person may then have drowned from what may have been a very minor traffic crash. An insight into the potential significance of such cases is provided by reference to some of the exclusions for E910 listed in ICD 9.1 The full list of exclusions is:
Diving accident (NOS) resulting in injury except drowning (E883.0)
Diving with insufficient air supply (E913.2)
Drowning and submersion due to cataclysm (E908–E909)
Machinery accident (E919.0–E919.9)
Transport accident (E800.0–845.9)
Effect of high or low pressure (E902.2)
Injury from striking against objects while in running water (E917.2)
In the case where a drowning resulted from a single motor vehicle incident in which the vehicle failed to take a corner and crashed into a river, this would be coded as E816: Motor vehicle traffic accident due to loss of control, without collision on the highway. In this case when a single underlying cause of death statistic is used there is no way to determine that this death occurred due to drowning. Thus, rates based on the E codes listed for the proposed international drowning study will underestimate the frequency of all deaths where drowning was involved. The degree of the underestimate would depend on the physical environment in a given country, such as the length of roadway alongside lakes and rivers.
The aim of this study was to examine the following issues for New Zealand: (1) to what degree the ICD drowning E codes identify all deaths involving drowning; (2) how the drowning deaths are distributed across the full range of E codes; and (3) whether the proportion of drownings not identified by traditional ICD codes has changed over time.
An accurate assessment of all drowning and drowning related deaths is important to understand the true extent of the problem and ensure compatibility when comparing data between countries. In addition by focusing analyses on just single underlying causes of death we may be missing important opportunities for prevention, such as surviving an automobile crash into the water by ensuring a timely escape from the submerged vehicle, or installation of barriers to prevent drowning suicides resulting from jumping from high bridges.
New Zealand maintains an electronic national mortality data file. All injury deaths are coded according to the ICD E code rules.1 Injury diagnoses and multiple causes of death are not coded. For each injury death there is an electronic field of up to 95 characters of narrative that is used to briefly describe the circumstances of death, including the nature of injury. There are no specific guidelines for completing this field. Information for this field is obtained from a variety of sources, including death certificates, coroner reports, and hospital files.
Mortality files for the period 1977–92, coded in the range E800–E999, were electronically searched using the key word “drown”. In addition the standard ICD codes for drowning (E 830, 832, 910, 954, 964, and 984) were also examined. Trends in drowning deaths were evaluated using the GENMOD procedure.
For the period 1977–92, 2310 drownings were recorded under the drowning codes listed above (E830, E832, E910, E954, E964, E984) of which only 397 did not have the word “drown” specifically listed in the free text. By searching for the term “drown” we identified 2321 cases. The narrative search thus revealed an additional 408 cases or 17.7% more cases than those recorded in drowning codes, resulting in a total of 2718 drowning related deaths identified by our study.
Table 1 shows the distribution of the drownings identified by the narrative search according to the E code groupings under which they were classified.
The majority (65%) of the extra 408 drownings discovered in the narrative search (hereafter referred to as “other” drownings) were coded as E810–E819: Motor vehicle traffic accidents. These incidents represent 11.4% of the drownings. The remainder of the “other” drownings were evenly distributed over a range of E code groupings (table 1).
Table 2 provides greater detail of the classification of the other drowning deaths, by listing the most common three digit E code categories to which they were coded. Three findings are of note. First, single vehicle crashes (E816) accounted for just over half of all cases and represent 9.4% of the drownings. For the same period there was a total of 2233 single vehicle crashes (E816), and drowning was mentioned as an outcome in the free text in 9.8% of these. Second is the large number of suicide and self inflicted injuries by jumping from a high place (E957). Reference solely to E954 (suicide by drowning) will underestimate the size of all suicides involving drowning by 7%. Finally, a similar problem, although less significant, arises when seeking to determine the incidence of drownings associated with water transport. The drowning codes in table 1 suggest there are 521 cases (E830, E832). Reference to non-drowning codes in table 1, however, suggests there were an additional 16 cases.
Analysis of the free text data provided more detail as to the exact circumstances of the injuries that were not E coded as drowning but mentioned “drown . . ..” in the free text. Among the 30 additional suicides (table 1) identified by free text the largest category was jumping from a bridge with no other injury mentioned (14 cases). A further eight cases fell into the water but clearly had other injuries, four cases were suicides by driving automobiles into the water, three were associated with drug overdoses, and one case involved a self inflicted bullet wound to the chest and subsequent drowning in the water. Similarly for the 19 unintentional falls that had drowning mentioned in the free text, seven had no mention of other injuries but fell from places such as bridges, wharves or into pools, seven had clear evidence of head injuries from falls from rocks or cliffs into the water, three were swept off rocks and fell into rock pools with no mention of injury, one case was described as drowning after breaking his jaw in a fall and another after a spinal cord injury from diving into shallow water.
Analysis of drowning deaths over time (fig 1) found no change in the number of drowning related deaths that were coded as other injuries (non-drowning ICD E codes), although as a proportion of all drowning deaths they increased. There was however a significant drop in drowning deaths overall by 7% per year (p < 0.0001), due to the decline in the number of deaths E coded as drowning.
The analyses presented here show that estimates of the incidence of all drowning related deaths in New Zealand will be underestimated by reference solely to the specific drowning E codes because deaths involving drowning can be assigned to other external causes such as motor vehicle crashes. It is likely that a similar situation exists in other countries. As indicated in our study, the actual rate for all types of drownings in New Zealand is increased by almost 18% if we use the free text search for the word “drown” to identify those cases classified elsewhere in the ICD coding system. The majority of the drownings that were not identified by standard ICD drowning codes were due to transportation related deaths (312 of 408 or 76.5%), of which most were due to motor vehicles (table 1). While the problem of drownings coded as traffic injuries has been recognised, to date, other than a study of drownings in one California County4 we are unaware of any studies in peer reviewed literature that examine transportation related drownings. Baker et al estimated that about 5% of all drownings in the USA are due to motor vehicles,5 although it is not described how these were determined. In Canada motor vehicles are estimated to be responsible for 7% of drownings6 compared with our estimate of 11% for New Zealand. The proportion is likely to vary by country depending on both the completeness of case ascertainment and proximity of highways to bodies of water.
Many injuries, including drowning, are multifactorial and not well described by a single cause. In addition many may be more preventable early in the sequence of events (for example barriers to prevent motor vehicles going off an unprotected highway into the ocean or a depressed person jumping off a high bridge and drowning). In these circumstances it would be appropriate to code the initiating event as the primary or underlying cause of death. However, much important information is lost by relying on a single cause of death. The ICD coding rules recognise that there may be multiple factors responsible for a death and have defined rules to select a single underlying cause of death for the purpose of tabulating causes of death.1 There are clear rules that give preference for transportation over other causes such as fires or drowning. However the rules for deciding on other single underlying causes are much less clear for injuries such as suicide, the only exception being for natural and environmental factors (for example floods or tidal waves: E908–E909), which take precedence over drowning. No guideline is provided in the ICD, for example, to clarify if a drowning resulting from a suicide jump from a high place, such as a bridge, should be classified as E954, suicide by submersion (drowning), or E957, suicide by jumping from a high place.
The determination of underlying cause of death is also confounded by the fact that blunt physical trauma may indeed be the actual cause of death for some falls/jumps from a high place. It is often difficult to determine if the subject dies from a ruptured aorta or from drowning, especially if necropsies are not done. The free text in our study did not mention the presence of other injuries for any of the suicide falls. However, it is well known that falls/jumps, such as those occurring from the Auckland Harbour Bridge, can result in serious blunt trauma injuries. Even in these cases however, drowning may be the actual cause of death. In almost half the drownings coded as unintentional falls there were descriptions of obvious associated injuries such as those to the head.
The multiple cause of death classification system7, 8 is used by some countries in an effort to recognise that multiple diseases or conditions are often related to a person's death (for example diabetes and heart disease). This system relies on the selection of one single underlying cause but also codes other related conditions. In the case of injuries this means that the external cause or E code is used as the underlying cause and that other conditions written on the death certificate can be coded. However, these can only be nature of injury codes and the system does not allow for multiple E codes. Drownings, burns, and poisonings are, however, injuries where the cause and nature of injury are similar. Drownings for example can be identified by the four digit nature of injury code ICD 994.1, drowning and non-fatal submersion. We found only one preliminary report that examined both N and E codes for drownings using 1987 data from Canada. The study found that 15% of all drowning deaths were not identified by drowning E codes but only by the nature of injury code N 994.1.6 The distribution of other E codes was similar to our study. The study did however raise concerns that some of the deaths may have been miscoded or due to data entry errors. The author did not have access to free text or death certificates to check the event descriptions. Unfortunately, multiple cause of death data are not available in New Zealand nor in most other countries. In the absence of multiple cause data, the free text provides a useful alternative that is probably as valuable and has the added advantage of providing more insight into the circumstances surrounding the drowning.
Many other studies have shown the value of multiple cause of death data and that even when a single underlying cause of death is used there can be wide differences in practices in certifying the single underlying cause.8–14 In addition, several articles have discussed the problem of under counting injury deaths (especially in the elderly) through the selection of associated diseases or complications (for example pneumonia) as the underlying cause of death.15, 16 However, we found only one other article that specifically examined discrepancies in coding causes for injury deaths and these compared death certificates only with more in-depth case records.17 For our study it should be emphasised that the revised estimate of intentional self drownings (n = 116, 4.9%) probably remains an underestimate. For example, some of the single vehicle motor vehicle crashes may be intentionally self inflicted. Support for this view is provided by a detailed investigation into drownings in the Auckland area. The study estimated that 28% of all adult drownings were intentionally self inflicted.18
It is also recognised that when multiple causes of death are recorded the tendency to select specific conditions as the single underlying cause may vary both over time, within a country, and between countries.8 We were also concerned in our study that changes may have occurred over time in coding practice for drownings. Our analyses revealed that the number of “other” drowning deaths changed little and that there had been an overall decline in deaths coded as due to drownings (fig 1). Similar declines have been noted in other countries.3, 19
Considerable attention has been given to improvement of rules for selecting a single underlying cause of death for diseases but not for injuries. A recent report for the US National Center for Health Statistics has attempted to define these rules for injury hospitalisation but they are not yet internationally recognised.20 To date there have not been published validation studies of the consistency of injury cause of death coding either between coders in the same country or between countries.
Other studies have documented the value of free text to both more fully understand the circumstances surrounding an injury and to use it to identify specific injuries missed by conventional coding. The National Traumatic Occupational Fatalities surveillance system in the United States, for example, records the limited free text description of “how the injury occurred” from work related death certificates.21 This narrative has been used to identify “missed” tractor related deaths because these can be correctly coded as motor vehicles according to ICD 9 rules if they occur on a public roadway. The addition of free text on the vital statistics data also provides a valuable opportunity to verify the coding of specific causes of death and is the reason the system was implemented in New Zealand. It also gives the potential for going back easily and reclassifying causes of death if coding rules change. It is also possible to “train” computers to code narrative data through pattern recognition of key strings of text. Future use of such methods for coding free text could lead to improved means to code key variables.22
In the absence of specific guidelines on the contents of the free text field, it seems likely that the estimates of drowning produced in our study are underestimates because limited space may have precluded mention of injury diagnosis in some cases. Adding a mandatory diagnostic field that addresses the type of injury causing death would remedy this problem and will be available in New Zealand for 1995 mortality data onwards. An indication of the potential significance of further drownings is provided by reference to the New Zealand Water Safety Councils' (NZWSC) independent estimates. These are obtained from a variety of sources in New Zealand. For the period 1980–92 inclusive, they estimated there were 2297 drownings, 35% more than that derived from our free text analyses (n = 1706) for the same period, and still 217 cases more than the estimate of 2080 from combining free text and E codes for drowning. Differences in coding frames and potentially different criteria for counting a case preclude a simple determination of why this discrepancy exists. From a national perspective the discrepancy is of considerable concern given that the NZWSC is seen as the principal organisation for promoting water safety. Future research in this area should give priority to matching the two data files with a view to determining where the discrepancies occur and arriving at an accurate estimate of the incidence of all drownings and specific drowning types.
Given the considerable observed and potential undercount of drownings in New Zealand based on standard ICD codes, caution should also be exercised in comparing injuries, such as drownings, between countries. The likelihood of coding some drownings to another underlying cause is not unique to New Zealand as the same coding rules apply elsewhere. While some countries cause of death statistics allow for coding multiple diseases relating to the cause of death, including several nature of injury codes, only a single underlying external cause of injury is usually reported. The recording of multiple E codes has been recommended for injury hospitalisations23 to enable the sequence of injury causes to be better described but, as yet, the rules to describe the sequencing of several causes such as a jump from a high place followed by drowning (for suicide) have not been determined. However, the expansion of multiple E codes to the well established tradition for disease deaths would allow for a more complete understanding of injury causes, especially in the absence of free text fields such as exist in New Zealand.
Much important information on injury aetiology is lost by reliance on a single underlying cause of death to describe the series of events that ultimately leads to an injury death. Some effective prevention strategies may in fact be missed by focusing only on underlying causes of injury. Examples include erecting barriers around hazardous roadways in close proximity to water, or preventing an epileptic seizure from resulting in a drowning death. Multiple cause coding and inclusion of free text narratives are an important tool available to any country wishing to improve the value of their vital statistics for injury prevention, and comparisons with other countries.
We wish to thank Lois Fingerhut of the US National Center for Health Statistics for her encouragement and assistance in preparing this paper. Comments from David Chalmers, Geraldine Whyte, and Margaret Warner on an earlier version of this paper are appreciated.
Sources of support: Dr Gordon Smith was supported by a grant from the National Institute of Alcohol Abuse and Alcoholism (R29AA07700) and by a grant from the Centers for Disease Control and Prevention (R49/CCR302486). The Injury Prevention Research Unit is funded by the Accident Rehabilitation and Compensation Insurance Corporation, and the Health Research Council (HRC) of New Zealand. Dr Langley is an HRC Principal Research Fellow. The New Zealand Health Information Service provided the fatality data files.
This paper is based on a presentation at the International Collaborative Effort on Injury Statistics meeting in Melbourne on 23 February 1996.
Human immunodeficiency virus (HIV) and Mycobacterium tuberculosis annually cause 3 million and 2 million deaths, respectively. Last year, 600,000 individuals, doubly infected with HIV and M. tuberculosis, died. Since World War I, approximately 150 million people have succumbed to these two infections—more total deaths than in all wars in the last 2,000 years. Although the perceived threats of new infections such as SARS, new variant Creutzfeldt-Jakob disease and anthrax are real, these outbreaks have caused less than 1,000 deaths globally, a death toll AIDS and tuberculosis exact every 2 h. In 2003, 40 million people were infected with HIV, 2 billion with M. tuberculosis, and 15 million with both. Last year, 5 million and 50 million were newly infected with HIV or M. tuberculosis, respectively, with 2 million new double infections. Better control measures are urgently needed.
The two culprits
M. tuberculosis infection does not necessarily transform into disease. Of the more than 2 billion individuals infected with M. tuberculosis, only approximately 10% will develop tuberculosis1,2. The pathogen is not eradicated, but is contained in distinct foci by the immune system. Critical for protection are CD4+ T lymphocytes. When disease develops, it generally manifests in the lung and is transmitted through the air. Hence, active tuberculosis is highly contagious and prevention of infection is nearly impossible. A vaccine against tuberculosis was developed in the early twentieth century by the French scientists Albert Calmette and Camille Guérin3, consisting of an attenuated strain of M. bovis, the etiologic agent of tuberculosis in cattle. The bacille Calmette-Guérin (BCG) vaccine protects against severe forms of childhood tuberculosis, but unfortunately, does not lead to eradication of M. tuberculosis and protective activity of the vaccine weakens during adolescence. As a consequence, BCG does not protect against the most prevalent disease form, pulmonary tuberculosis, in adults. Tuberculosis can be cured by chemotherapy, but the complex and long-lasting treatment, involving at least three drugs for 6 months, means compliance is often incomplete, resulting in a rising incidence of multidrug-resistant (MDR) strains. In several countries such as Estonia, the Russian Federation, Israel and Uzbekistan, more than 10% of all tuberculosis cases are caused by MDR strains1. In low-income countries MDR tuberculosis results in no treatment and often leads to death and further spread.
Infection with HIV, in contrast, consistently transforms into disease, and is contagious at all stages. HIV predominantly resides in CD4+ T lymphocytes. Infected and noninfected T cells are damaged, causing immunodeficiency. With new antiretroviral drugs, HIV infection can be controlled, but not eradicated. Treatment, however, is expensive and less than 5% of individuals infected with HIV in developing countries have access to effective treatment. Anti-HIV drugs target HIV reverse transcriptase, protease, integrase or the coiled coil domain of gp41. Used in combinations of three or more, they are very effective at treating HIV infection: patients with advanced AIDS have shown dramatic albeit incomplete recovery of CD4+ T cell counts and immune function, though sometimes the revitalized immune response causes severe problems by reacting to previously silent infecting pathogens. Yet even when virus loads have become undetectable for several years, cessation of drugs results in rapid rebound of virus and return to the pretreatment levels4. This implies that patients need to take these drugs for life. But because of serious side effects and expense, indefinite treatment is not possible for most of the world. Drug resistance by virus mutation is well described5 and although triple therapy reduces the risk substantially, imperfect adherence to treatment regimes may result in selection of resistant virus. As is the case for tuberculosis, this could limit treatment options in the near future. Transmission, however, can be reduced by 'safer sex' practices. Such behavior changes may account for the reduction in prevalence of HIV-1 infection from >20% to 8% in Uganda over 10 years6.
The dangerous liaison between AIDS and tuberculosis
In South Africa, more than 10% of the 40 million inhabitants are infected with HIV and more than 5% suffer from active tuberculosis. In this country, more than half of all individuals with tuberculosis have concurrent HIV infection. Immunodeficiency caused by HIV infection increases the risk of developing tuberculosis dramatically—10% of double-infected individuals will develop active tuberculosis within a year of coinfection, as compared to the 10% lifelong risk in individuals infected with M. tuberculosis alone. Despite the availability of effective chemotherapy to cure tuberculosis and the successful development of antiretroviral drugs that control HIV, vaccines provide the only realistic hope of effective prevention.
Host response to tuberculosis
Tuberculosis most frequently develops in the lung (Fig. 1). Once tubercle bacilli have reached the lung alveoli, they are taken up by alveolar macrophages and probably dendritic cells (DCs) in the lung interstitium (Fig. 1). M. tuberculosis is protected by a recalcitrant cell wall rich in waxes and other glycolipids7, which is responsible for the unique staining characteristics of these microbes and contributes significantly to resistance against host defense. Infected macrophages and DCs, therefore, do not destroy their bacterial prey and thus serve as a mobile habitat, transporting the microbes to the lung parenchyma and eventually to the draining lymph nodes8,9. Infected macrophages attract monocytes and a lesion develops. DCs in the lymph nodes present mycobacterial antigens to T cells. At this early stage of infection, secreted proteins such as early secreted antigen for T cells (ESAT-6) and antigen 85 (Ag85) presumably serve as dominant antigens for CD4+ T cells, which accumulate in great numbers in lesions early after infection. At later stages of infection, CD8+ T cells also become stimulated10. In addition, unconventional T cells are activated, including T cells expressing a γδ T cell receptor with specificity for small phosphorylated ligands and T cells with specificity for glycolipids, which are presented by major histocompatibility complex (MHC) class I–like molecules of the CD1 family11,12,13,14. Studies in nonhuman primates support a role for γδ T cells in protective immunity against tuberculosis13. The CD1-restricted T cells presumably evolved as a result of the evolutionary cross-talk between mycobacteria and the mammalian host, as they are specific for glycolipids abundant in mycobacteria but not found in other microbial pathogens11.
Antigen-specific T cells induce the formation of a granuloma around infected macrophages, primarily composed of monocyte-derived macrophages (some of which transform into multinucleated giant cells), CD4+ T cells and an outer ring of CD8+ T cells. At later stages, the granuloma is surrounded by a fibrotic wall and lymphoid follicular structures develop in the vicinity, probably orchestrating the local immune response15.
In the granulomatous lesion, macrophages are activated by T lymphocytes through production of type I cytokines, notably interferon (IFN)-γ and tumor necrosis factor (TNF)-α16. IFN-γ activates the capacity to control M. tuberculosis in macrophages17,18. Moreover, IFN-γ and TNF-α are instrumental in walling off M. tuberculosis inside granulomatous lesions. The importance of these cytokines in the containment of mycobacteria is well illustrated by the observation that individuals with deficient IFN-γ signaling suffer from rapid onset of mycobacterial diseases, and that treatment with antibodies that neutralize TNF-α in individuals with rheumatoid arthritis reactivates tuberculosis in those with latent infection19.
The granuloma persists for years and efficiently contains tubercle bacilli as long as the individual remains immunocompetent. The granuloma deprives the arrested mycobacteria of oxygen and nutrients and as a direct consequence, the microbes survive, probably in a state of dormancy20,21,22,23. The gene expression profile of M. tuberculosis is altered in response to the restricted growth conditions in the granulomatous lesion. It is highly probable that some of these dormancy-related gene products serve as the major antigens of dormant M. tuberculosis and hence represent promising candidate antigens for a postexposure vaccine aimed at preventing reactivation tuberculosis. The best-known dormancy antigen is the α-crystallin, a small heat-shock protein (Hsp) also termed HspX23.
Host resistance, latency and tuberculosis outbreak
The mechanisms that determine latency and disease outbreak in tuberculosis remain elusive. Although the risk of reactivation is understood to be determined by both host and microbial factors in addition to environmental components, virtually nothing is known about the responsible genes. Increasing epidemiologic evidence that strains of the M. tuberculosis Beijing/W genotype family, many of which are of the MDR type, are spreading throughout the world suggests that this family of M. tuberculosis evolved against the evolutionary pressure of BCG vaccination and chemotherapy24,25.
A careful histologic analysis of lung autopsy material performed on 284 cadavers more than 100 years ago in Zürich showed almost 100% M. tuberculosis infection in individuals over 21 years of age who had died of unrelated causes26. We can assume similarly high incidences nowadays for individuals living in tuberculosis 'hot spots' (e.g., prisons in some countries) and infection rates between 10% and 50% in areas with high M. tuberculosis incidences.
How and why disease outbreak occurs after a period of latency remains unclear27,28. A considerable proportion of individuals with tuberculosis develop disease within the first year after infection. Moreover, superinfection as well as reinfection after successful tuberculosis chemotherapy have been described30. It is probable that these individuals develop an insufficient immune response against natural infection with M. tuberculosis because of genetic host susceptibility, immune suppression by the pathogen or both. Elucidation of the genetic mechanisms underlying susceptibility and protective immune mechanisms in resistant individuals that prevent disease outbreak in the face of ongoing infection, as well as identification of the pathogen genes that promote transformation of latent infection into active tuberculosis, will facilitate rational design of a postexposure vaccine27,28,31.
Of equal importance will be the careful analysis of the efficacy of BCG vaccination. BCG was found to protect against adult tuberculosis in the UK and against leprosy in Malawi, but did not provide any protection in the World Health Organization trial in south India32. Overall, a 50% protective efficacy has been estimated for BCG33. Although environmental factors, notably frequent infection with atypical mycobacteria, probably have a role, contribution of genetic host factors should be assessed more carefully. It is possible that a small proportion of BCG-vaccinated individuals develops a long-lasting immune response against tuberculosis, whereas the majority does not. Because any new vaccine has to protect individuals who do not mount an appropriate immune response to natural infection with M. tuberculosis or to vaccination with BCG, the susceptible and resistant target populations and the criteria that distinguish them from each other need to be defined carefully.
Vaccination strategies against tuberculosis
Subunit vaccines given alone or in addition to BCG. New subunit vaccines against M. tuberculosis comprise antigen-adjuvant formulations, naked DNA vaccine constructs or recombinant carriers expressing antigen (Table 1)34. They are mainly based on protein antigens. Unfortunately, reliable rules for defining protective antigens for T cells do not exist. Most vaccine antigens investigated thus far are early produced secreted antigens such as Ag85, a shared antigen with BCG, and Esat-6, which is an antigen unique to M. tuberculosis (Fig. 2)35,36. Such antigens are probably adequate for pre-exposure vaccination. For individuals latently infected with M. tuberculosis, a postexposure vaccine composed of dormancy-related antigens such as HspX seems more suitable (Fig. 2)21,22,23.
A recent screen has assessed the protective efficacy in mice of more than 30 antigens that are differentially expressed in the proteomes of M. tuberculosis versus BCG37. From these, only one antigen was identified that induced a robust protection similar to that of BCG, arguing against a unique role in protection of proteins exclusively present in the proteome of M. tuberculosis.
Although proteins will constitute the major antigens of subunit vaccines, glycolipid antigens can be included in these vaccine formulations. Such glycolipids could serve both as antigen in the context of CD1 and as adjuvant for Toll-like receptors (TLRs)11. Generally, subunit vaccines crucially depend on appropriate adjuvants38 that stimulate T helper type 1 (TH1) immune responses by the different T cell populations required for protection against tuberculosis.
Naked DNA constructs can stimulate CD4+ and CD8+ T cells. Although they have proven their high immunogenicity in small-rodent animal models, human studies are thus far mostly disappointing. But new carrier systems such as polylactide glycolyde microparticles may facilitate application of naked DNA vaccines in humans39. Recombinant bacterial and recombinant viral carriers expressing defined mycobacterial antigens have been constructed and their vaccine efficacy against tuberculosis have been determined in experimental animal models. These vectors induce potent CD8+ and CD4+ TH1 responses. The most advanced vaccine of this type is recombinant modified vaccinia virus Ankara (r-MVA) expressing Ag85, which has already entered phase 1 clinical trials40,41.
Vaccination with Mtb 72F protein in AS02A adjuvant or in the form of naked DNA induces protective immunity in both mice and guinea pigs42. Mtb 72F is a recombinant fusion protein composed of Rv0125 and Rv1196 with a predicted size of 70 kDa. The adjuvant AS02A is composed of a nontoxic lipopolysaccharide derivative and QS21 from quillaja saponaria (a triterpene glycoside) as an oil-in-water emulsion. The Mtb 72F-AS02A vaccine formulation has now entered a phase 1 clinical trial in healthy M. tuberculosis-uninfected volunteers (Table 2). Another protein-based vaccine comprising a fusion protein of Esat-6 and Ag85 in a mixture of oligodeoxynucleotides and a polycationic peptide (IC31) as an adjuvant has shown promising results in experimental animals and hence is aimed at entering a phase 1 clinical trial later in 2005 (Table 2)36.
Generally heterologous prime-boost schemes are composed of a relatively simple first component (e.g., DNA) that primes a focused immune response, and a second boost vaccine that induces an inflammatory response (e.g., using a recombinant virus) to magnify the initial response. To achieve a proper prime-boost effect, shared antigens are required for the subunit boost vaccination. In tuberculosis, however, heterologous prime-boost vaccination schemes starting with a BCG prime followed by a subunit boost are considered most promising, not only for scientific reasons but also because BCG vaccination of newborns cannot be given up prematurely. The recently discovered antigen Rv3407 is encoded in the BCG genome but is undetectable in its proteome37. The increased protection seen in BCG-primed mice after boost vaccination with naked DNA encoding Rv3407 indicates that this protein is an important antigen of M. tuberculosis that is suboptimally expressed in BCG. Similarly, boost with Mtb 72F in AS02A adjuvant increased protection induced by BCG prime43. The r-MVA expressing Ag85 induced substantial protection both when given alone or as booster vaccination on top of BCG prime41. A recent clinical phase 1 trial showed that r-MVA-Ag85 induced Ag85-specific IFN-γ–secreting T cells (Table 2)40. More profound effects were observed in the BCG prime–r-MVA-Ag85 boost group, underlining the importance of heterologous prime-boost vaccination in the rational delineation of future tuberculosis vaccination strategies.
Attenuated gene deletion mutants of M. tuberculosis as vaccine candidates. Vaccine efficacy data of mutant M. tuberculosis are still limited. During the early stage of infection, M. tuberculosis replicates actively. Accordingly, M. tuberculosis knockout mutants, in which synthesis of essential amino acids is prohibited, will not grow in lungs of experimentally infected mice. Typically, these are auxotrophic mutants that die of starvation in the absence of the required nutrients (Table 1)44,45. A second group of M. tuberculosis knockout mutants show reduced growth in lungs of experimentally infected mice, but persist at a lower level than wild-type microorganisms. These mutations include deficient regulatory proteins, such as the phoP/phoQ two-component regulatory system and a variety of mutants with deficient lipid metabolism or transport46,47. Although genes of this group include virulence factors, it is also possible that the gene products are needed for persistence rather than harming the host directly. M. tuberculosis knockout mutants deficient in genes expressed during dormancy replicate in the lungs of experimental mice in an unconstrained manner early after infection, but cease to grow at later stages because of their impaired dormancy gene program. One of the best described genes of this group is that which encodes isocitrate lyase48. M. tuberculosis knockout mutants, which grow as well as wild-type microorganisms but induce weaker pathology, lack true virulence factors. Some knockout mutants behave like wild-type M. tuberculosis in the lung but do not disseminate to other organs. The heparin-binding hemagglutinin adhesion molecule seems to participate in the dissemination of M. tuberculosis to extrapulmonary sites and hence qualifies as a member of this group49. Factors that facilitate dissemination to other tissues should be deleted in a viable vaccine in order to focus the response on the lymph nodes. A recently defined group of M. tuberculosis knockout mutants comprises strains that grow as well as wild-type microorganisms in normal mice, but do not persist in mouse knockout mutants with defined immunodeficiency50.
Vaccine candidates based on M. tuberculosis knockout mutants will probably face strong objections against use in human clinical trials largely because of the genetic stability of the mutants. Reversion is best prevented by having two independent chromosomal gene deletions. The advantage of M. tuberculosis knockout mutants is their profound stimulation of all T cell populations relevant to protective immunity. On the other hand, M. tuberculosis is known to impair the host immune response. Therefore, it is probably not sufficient to reduce the virulence of M. tuberculosis knockout mutants. Additional genetic modifications improving the immune response and reducing pathology will be required. Although these vaccines will have a long way to go, a rationally designed M. tuberculosis mutant lacking a defined set of virulence genes in the long run could have great potential as a tuberculosis vaccine.
Improved recombinant BCG as vaccine candidates. As compared to M. tuberculosis, the current vaccine BCG lacks approximately 130 genes that are clustered in 16 regions of difference (RD)51 and are at least in part involved in pathogenicity and persistence. Reintroduction of selected genes may increase immunogenicity, antigenicity or both of BCG without reverting it to a pathogen (Table 1). Improved immunogenicity results in the stimulation of a profound immune response. Antigenicity can be improved by introduction of additional antigens or by overexpression of antigens expressed at suboptimal level.
Pym et al. have introduced the entire RD1 region of M. tuberculosis into BCG, comprising at least 11 genes52. Although it remains to be clarified whether the recombinant BCG (r-BCG) expressing RD1 genes is endowed with improved immunogenicity, antigenicity or both, it was claimed that this vaccine candidate provided slightly better protection than parental BCG in mice. Not unexpectedly, however, this r-BCG candidate showed higher virulence in immunocompromised mice as compared to wild-type BCG.
Reintroduction of genes missing in BCG that do not confer virulence might increase vaccine efficacy of BCG without decreasing its safety. Horwitz et al. introduced the gene encoding Ag85 into BCG, an abundant secreted protein in M. tuberculosis53. Although the gene is also expressed in BCG, it was assumed that overexpression of this immunodominant antigen would increase protective immune responses. Indeed, in guinea pigs, r-BCG expressing Ag85 induced substantially higher protection than parental BCG against tuberculosis. This vaccine was at least as safe as BCG and has recently entered a phase 1 clinical trial (Table 2).
The r-BCG vaccine candidates with higher immunogenicity comprise strains that express human cytokines such as IFN-γ, IFN-α, interleukin (IL)-2, IL-12 and granulocyte macrophage colony stimulating factor (GM-CSF)54,55,56. However, these cytokine-secreting r-BCG have yet to prove superior protection in animal models against tuberculosis, as they have not yet been compared to parental strains.
Another approach to improving immunogenicity of BCG takes advantage of the biological activity of listeriolysin (Hly)57. In its natural host, Listeria monocytogenes, this cytolysin forms pores in the phagosomal membrane, promoting listerial egression into the cytosol58. In the cytosol, listerial antigens are readily introduced into the MHC class I pathway, causing preferential stimulation of CD8+ T cells. Compelling evidence suggests that BCG is insufficiently equipped for stimulating CD8+ T cells, whereas CD4+ T cell stimulation is satisfactory59. Therefore, the gene encoding Hly was introduced into BCG to improve CD8+ T cell stimulation. Indeed, such vaccine constructs induced better protection than parental BCG in mice (Grode, L. et al.; unpublished data). Production of this vaccine under good manufacturing practices (GMP) conditions has been initiated and this candidate is aimed at entering a phase 1 clinical trial in 2005 (Table 2).
Pros and cons of subunit and viable vaccines
The use of subunit vaccines is based on the assumption that one or a few protective antigens and T cell clones will suffice for efficacious protection16. In contrast, attenuated viable vaccines utilize the whole spectrum of expressed antigens to stimulate an optimal combination of T cell subtypes. Genetically engineered viable vaccines are therefore best suited as replacements for BCG. Despite previous reluctance, a recent expert group meeting has strongly advocated development of viable recombinant vaccines against tuberculosis because they are the most potent stimulators of protective immune responses and perform better than BCG in experimental animal models60. As an essential requirement, any new viable vaccine should be at least as safe as BCG in immunocompetent individuals and safer than BCG in immunocompromised individuals.
Cross-reactive immune responses against atypical mycobacteria might prematurely eradicate an improved r-BCG or attenuated recombinant M. tuberculosis and thereby impair efficacy of genetically engineered viable vaccines. Although subunit vaccines are probably not affected by preexisting cross-reactive immunity to environmental bacteria, immunity induced by subunit vaccines is generally short-lived. Given the complementary strengths of both types of new vaccine candidates, a heterologous prime-boost regimen comprising a prime with a viable vaccine candidate superior to BCG and a boost with a subunit vaccine candidate will probably be the most promising combination to ensure long-lasting efficacy.
AIDS: infection, pathogenesis, immune response and disease
HIV is a member of the lentivirus subgroup of the retrovirus family, so named because of a long period of clinical latency after infection. Its envelope is derived from cell membrane and expresses the glycoproteins gp120 (responsible for virus attachment to cells) and gp41 (responsible for membrane fusion). Internally, matrix (gag p17) and capsid (gag p24) proteins enclose the viral RNA. Other viral proteins include the reverse transcriptase, protease, integrase and regulatory proteins nef, tat, rev, vif, vpr and vpu. The virus infects by attaching gp120 to the CD4 molecule and either the chemokine receptor CCR5 or CXCR4 present on T cells61. Gp41 mediates fusion and in the cytosol the RNA is reverse transcribed and the preintegration complex moves to the nucleus, where the cDNA is integrated as provirus into host DNA in activated cells. Gene expression is normally immediate; first regulatory and then structural proteins are made, regulated by tat and rev proteins. New virus particles bud from the surface of infected cells about 24 h after infection. Virus is shed for a further day or so before the cell dies62.
Pathogenesis, immune response and disease. AIDS is insidious. HIV-1 infection often causes a transient acute febrile illness shortly after infection, when virus levels in the blood peak, but after that, infection is silent. A degree of immune control by CD8+ T cells is achieved, and neutralizing antibodies eventually appear, although too late to prevent infection, and are easily evaded by viral mutation. T cell immunity seems to control the infection over several years but with a dynamic process of immune selection and escape occurring63. CD4+ T cells are both deleted and functionally impaired early64, with HIV-specific T cells preferentially infected and depleted65. However, in some cells (e.g., long-lived memory T cells and macrophages), gene expression is delayed and the infection becomes latent. The progressive loss of CD4+ T cells leads to clinical immunodeficiency, which is manifested by opportunistic infections and, often, reactivation of tuberculosis. These infections rapidly cause death if untreated.
HIV infects CD4+ T cells and monocytes and macrophages. The virus is cytopathic in activated T cells, but less so in macrophages, and not at all in latently infected cells with integrated provirus. These different forms of infection present problems for vaccines. Those that stimulate and maintain high levels of neutralizing antibodies could prevent initial infection, but once that has occurred, it is almost impossible to eliminate HIV. Under prolonged antiretroviral drug therapy, virus is eliminated from the activated T cells that express the enzymes targeted by the drugs, but not from cells in which virus turns over slowly or is latent66. Thus if an HIV vaccine does not prevent infection, it is unlikely to eliminate the virus but may be able to control the ongoing infection and prevent disease.
There is no known protective immunity against HIV, in the sense that virus is never eradicated from those infected so that it is impossible to find people protected by previous infection, as is the case for many other viruses, such as measles, mumps and influenza. This poses serious problems for vaccine development, although it is not unique (Epstein-Barr virus is another such example). Adult monkeys infected with attenuated SIV (e.g., with deletions in nef) do not become sick, and are protected from challenge with pathogenic SIV67. This protection is probably, but not certainly, immunological68. Some highly HIV-exposed individuals resist HIV infection; others develop T cell responses to HIV69 but not serum antibodies, and their immunity depends on continuing exposure to the virus70. This may represent immune resistance, although it is hard to prove. Another clue may come from the rarity of superinfection in people infected with HIV who are repeatedly exposed to further infection. Although there are elegant studies of superinfection71, it is suspected that it is rare; if so, its rarity probably reflects immune protection.
The spread of HIV-1 infection around the world in 20 years has been alarming. The worst epidemics are in sub-Saharan Africa, where more than 50% of young adults are infected. Mortality from HIV infection without drug treatment is close to 100%, although infected people usually survive from 5 to 10 years before developing AIDS. The introduction of effective antiretroviral drugs in high-income countries has reduced mortality without reducing infection.
Virus variability. HIV is the most variable virus known. There are six major subtypes, A, BC, D, E, F and G, which are 'ancient' (in the HIV context of 30–50 years) and have distinct geographical distribution (Fig. 3)72. The complexity is even greater because of recombinations between subtypes and within subtypes. Each subtype differs by >20% in amino-acid sequence, so that T cells specific for one subtype are unlikely to cross-react substantially with the others; each epitope of 9–15 amino acids will probably have two or more mutations, and this level of variability is known to adversely affect T cell recognition73. Although there are a few cross-reactive T cell epitopes, as well as conserved segments of HIV, it may be necessary to match any virus subtype with that of the circulating viruses in the population to be vaccinated. It is less certain that subtype-specific vaccines will be necessary for stimulating neutralizing antibodies, although sequence variability is known to be a major problem.
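As a rough back-of-the-envelope illustration of the arithmetic behind this estimate, the short sketch below applies the >20% per-site divergence figure to epitopes of 9–15 residues. The assumption that positions vary independently and uniformly is a simplification made only for illustration and is not part of the cited analyses.

```python
# Rough illustration: expected amino-acid differences per T cell epitope between
# HIV-1 subtypes, assuming ~20% per-site divergence applied independently at each
# position (a simplification; real divergence is not uniform across the genome).

divergence = 0.20  # per-site probability that two subtypes differ (figure from the text)

for length in (9, 12, 15):                        # typical T cell epitope lengths
    expected_diffs = length * divergence          # mean number of changed residues
    p_conserved = (1 - divergence) ** length      # chance the epitope is fully conserved
    print(f"{length}-mer epitope: ~{expected_diffs:.1f} expected differences, "
          f"P(fully conserved) = {p_conserved:.2f}")

# A 9-mer is expected to carry ~1.8 changes and has only ~13% chance of being
# identical between subtypes, consistent with the estimate that most epitopes of
# 9-15 residues will carry two or more mutations.
```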
A further complication is the variability within each subtype and within each individual. It is clear that both antibodies and CD8+ T cells are very effective at selecting virus escape mutants74,75,76,77. As human leukocyte antigen (HLA) type controls the epitopes recognized by T cells, it is not surprising that HLA type influences virus sequence in individuals as a consequence of T cell–driven escape78. However, the mutant virus may be less fit and the escape can offer advantages to the host. The constraints on the virus probably account for the overall conservation of virus sequence in primary infection, before the immune response evolves. Therefore, vaccines that focus on conserved consensus sequences may have the best chance, at minimum selecting slightly less fit viruses during primary infection.
A T cell vaccine that permits infection and induces selection of escape mutants could undermine vaccine effectiveness, as has been seen in macaques79. But selection of less fit virus has also been observed in macaques in whom SIV infection remains controlled, so this could favor the host80.
Vaccination strategies against AIDS
Challenges. For HIV there are two major challenges: to find (i) immunogens that can stimulate broadly cross-reacting neutralizing antibodies and (ii) immunogens that stimulate high levels of persisting CD8+ and CD4+ T cells. Both humoral and cellular immunity may be needed, but they require different types of immunogens; eventually vaccines could be mixed to achieve both (Fig. 4).
A prophylactic vaccine will be given months or years before exposure and, hopefully, prevent or ameliorate infection. An antibody-inducing vaccine could prevent infection81, but as indicated above, a vaccine that stimulates T cell immunity cannot stop virus infecting cells. The latter vaccination might enable the host to suppress the infection for long periods and prevent disease. Thus, macaques vaccinated to stimulate T cell immunity against SIV became infected when challenged with virus, but had very low virus loads and experienced little effect on their health as compared to unvaccinated controls82,83,84. Because it is almost impossible to eliminate HIV infection, this may be the best that can be achieved. However, many chronic virus infections as well as latent M. tuberculosis infection are controlled effectively in the long term by cellular immunity without causing disease. HIV-2 does not cause disease in more than 80% of those infected in West Africa85, suggesting that the immune response and the virus are in balance in the infected but healthy individual.
Antibody-stimulating vaccines. At present there is no realistic antibody-based HIV vaccine candidate. It was thought that gp120 preparations would stimulate neutralizing antibodies, and they do against tissue culture–adapted HIV strains. However, all such vaccines have failed to neutralize primary virus isolates and one tested in two large efficacy trials failed to protect (NAM, http://www.aidsmap.com/). Now the aim is to design HIV envelope protein immunogens that will stimulate protective antibodies. Unfortunately, the dice are loaded against this: (i) the envelope is heavily glycosylated, making it largely nonimmunogenic; (ii) it is conformationally variable, so that the most sensitive site, the chemokine receptor binding site, is not exposed unless CD4 has bound; (iii) the CD4 binding site is deeply recessed and hard for antibodies to access; (iv) besides the carbohydrates, crucial parts of the gp120 surface are protected by hypervariable loops, which can and do vary by mutation at no cost to the virus, thereby escaping neutralizing antibodies86,87. Real novelty in immunogen design is needed to get around these problems.
T cell–stimulating vaccines. As an alternative, much effort is being made to create vaccines that stimulate T cell immunity, particularly CD8+ T cells. In mice, several approaches comprising plasmid DNA and various recombinant viruses and recombinant bacteria have given promising results88,89,90. Also particulate proteins and peptide-pulsed DCs can efficiently stimulate CD8+ T cells91,92. But these findings are not easily transferable to primates. DNA vaccines stimulate only weak responses in humans93 and recombinant viruses give mixed results. The poxviruses (canary pox, the attenuated vaccinia virus (NYVAC) and MVA) induce weak primary responses94,95, possibly because the recombinant immunogen is swamped by more than 200 poxvirus proteins. Also, these viruses may be too attenuated, with little or no proliferation after infection. However, they may be better at boosting T cell responses that have already been primed (e.g., BCG; Table 1). The most promising results to date come from recombinant adenovirus 5, which has stimulated strong and sustained T cell responses, but its applicability is constrained by the high frequency of antibodies to this virus in most human populations. Alternate candidates in trial include other strains of adenovirus, adeno-associated virus, alphaviruses and r-BCG as vectors for HIV.
As for tuberculosis, heterologous prime-boost regimes offer enhanced T cell immune responses96,97 and have been observed in macaques and humans94. It seems that in humans, recombinant adenoviruses are good at both priming and boosting, whereas recombinant poxviruses such as MVA are better at boosting than priming (McMichael, A.J., Goonetilleke, N., Guimaeres-Walker, A., Dorrell, L., Yan, H., Hanke, T., unpublished data).
Vaccine-induced T cell responses should include both CD4+ and CD8+ T cells, even though the former are targets for the virus. CD4+ T cell help is important for optimal CD8+ T cell responses98,99,100,101—an issue of particular importance to HIV vaccine development because natural infection results in impaired CD4+ T cell immunity.
Protein immunogens typically elicit antibody responses and T helper cell responses, but the latter may be of the TH2 type, especially when alum is used as an adjuvant102; TH2 responses have little or no antiviral activity. Live intracellular infections generally elicit TH1 responses and CD8+ T cell responses, hence the use of live attenuated vectors to mimic a virus infection. These deliver the required immunogen into the cytosol, from which it can enter the MHC class I antigen presentation pathway directly, or indirectly when the cell dies and virus protein is taken up by DCs for 'cross-priming' of CD8+ T cells103,104,105,106.
People who are homozygous for the δ32 variant of CCR5 do not express an intact chemokine receptor and are resistant to HIV infection107. Immune resistance is suspected for people who are highly exposed, yet uninfected. Many generate CD8+ and/or CD4+ T cell responses, but without any serum antibody69. It is not certain that the T cell responses are responsible for resistance. If they are, it is encouraging because the levels of T cell response seen in current vaccine trials could be sufficient.
HIV vaccines and mucosal immunity
HIV enters the host through (usually genital) mucosa. HIV infects, or attaches to, mucosal Langerhans cells first and then migrates to local lymphoid sites108 to set up systemic infection with a predominance of early virus in intestinal tract CD4+ T cells109. Mucosal IgA and T cells may have an important role in control of this infection.
An HIV vaccine should stimulate B and T cell as well as mucosal and innate immunity. Live attenuated HIV probably provokes all of these, but because it will set up a persistent infection, there are insoluble safety issues that render its use unlikely in humans. Therefore, it may be necessary to build a set of vaccines to stimulate each component separately and administer them in combination.
Degree of protection
If control equivalent to that of chronic Epstein-Barr virus or cytomegalovirus or latent M. tuberculosis infection could be achieved for HIV, with reduced or negligible further transmission, it would represent a considerable advance. Measuring the efficacy of the vaccine would be problematic, being dependent on reduced virus load at set point, the time at around 6 months after primary infection when virus level stabilizes. The set-point level is inversely correlated with time of survival. Only neutralizing antibodies, stimulated by vaccines, offer any real chance of sterile immunity and such vaccines are a long way off.
A prophylactic vaccine for HIV is badly needed. Unless current trials of vaginal microbicides are unexpectedly successful, there is no other way of controlling the infection and both antibody- and T cell–inducing vaccines need a total rethink. The most advanced of these vaccines is the Merck recombinant adenovirus 5 (Ad5) vaccine that is about to enter clinical trials. But even if successful, a different vector will be needed to deal with the problem of pre-existing antibody immunity to Ad5. The Merck trial may, however, indicate whether vaccines that stimulate T cell immunity offer any benefit, and that type of news is to be welcomed. Could AIDS and tuberculosis vaccines be given together? Probably, but this may be decades away. In parts of Africa, BCG is given at birth. Combining BCG with an AIDS vaccine, with a boost at puberty, may be feasible in the distant future.
From animal models to clinical trials
Preclinical studies: tuberculosis and AIDS vaccine testing. Most tuberculosis vaccine candidates have been tested in mice or guinea pigs. Studies in mice take advantage of the in-depth knowledge of the mouse immune system, whereas guinea pigs have the advantage of developing a pathology that highly resembles human tuberculosis. These experimental animal studies form the starting point of the tedious pipeline leading to a new vaccine candidate110. The US National Institutes of Health, through their preclinical tuberculosis screening program, and the European Union, through its TBVac integrated project in the Framework Program 6, sponsor vaccine testing in experimental animals under standardized conditions. Thus far, the two programs utilize different screening procedures, which need to be harmonized in order to achieve comparability of different vaccine candidates. However, animal models cannot unequivocally predict the efficacy of a vaccine candidate against tuberculosis or HIV, and immunogenicity results in mice often do not predict responses in humans. Although nonhuman primate studies can be bypassed before phase 1 and phase 2 clinical trials, they may become necessary before embarking on phase 3 trials.
Phase 1 and phase 2 clinical trials should also be exploited for determination of immunological correlates of protection (i.e., markers that unequivocally predict protective efficacy of a vaccine candidate). Currently, IFN-γ produced by T cells with specificity for defined antigens is probably the best marker of protection111 for tuberculosis, but this is much less clear for HIV. In the most impressive vaccine protection studies, which used attenuated SIV as a vaccine, it is not known what confers protection68,112, and the IFN-γ ELISPOT assay for T cells does not correlate with protection in vaccine-protected macaques68. Additional markers are required.
The SIV challenge model in macaques has been very useful for HIV vaccine development67, but there are limitations. In these experiments, the virus challenge dose is always large, compared to repeated low-dose exposures in humans. In addition, the vaccine is nearly always homologous to the challenge virus, giving good chances of success that are unrealistic in humans. Attempts are being made to introduce more representative challenges113. Finally, the SIV or SIV-HIV hybrid (SHIV) challenges may be slightly misleading; it seems easier to protect against the aggressive hybrid virus SHIV 89.6P than against the less virulent SIVmac239. It is unclear which will be closer to an HIV exposure in humans. Despite these problems, the existing data suggest vaccine-induced protection is possible.
Therapeutic vaccination for HIV is rarely discussed, but must be considered as recent studies in macaques and humans114,115,116 give it some credibility. In humans, therapeutic vaccination would probably be used to stimulate T cell responses in HIV-infected people whose virus was well controlled by antiretroviral drugs, with the aim of terminating antiretroviral therapy (ART) once the T cell levels were boosted. The rationale is that T cells, particularly CD8+ T cells, are known to be effective in long-term control of the virus in the absence of drug treatment, but T cell responses decline in patients on effective ART117. When ART is stopped, virus rebounds to pretreatment levels4. If the T cell levels were already boosted, this might lead to lower virus levels at the new set point. At its best, this treatment could enable withdrawal of ART for prolonged periods and offer new options in low-income countries in particular. Patients whose virus was controlled in this way might also be less likely to transmit virus.
Clinical trials. The earliest vaccine trials are small and are concerned with vaccine safety and immunogenicity. Then immunogenicity is optimized by finding the best vaccine dose and route of delivery. Finally, the vaccine efficacy trials involve several thousand volunteers who are at high risk for infection. Translation of vaccine candidates from bench science into clinical trials is relatively straightforward scientifically but complicated logistically and extremely expensive (Box 1).
Phase 1 trials can be done in the country of origin of the vaccine, but increasingly good sites are being established in developing countries and several countries have HIV vaccine trial experience. In contrast, experience with tuberculosis vaccine trials is marginal and currently only a few phase 1 trials have been initiated. Two phase 3 trials of Vaxgen gp120 in 10,000 people showed clearly that the AIDS vaccine candidate offered no protection (e.g., reported at http://www.hivandhepatitis.com/vaccines/022503a.html). A phase 2b trial has now been proposed before phase 3. Here, between 1,000 and 2,000 high-risk volunteers are given placebo or vaccine and all are followed for evidence of infection. Once 30 trial participants are infected, a detailed analysis of the relationship between infection and immunity is made. In the case of AIDS, virus load, subtype and T cell and humoral immune responses are determined and correlated and the degree of protection ascertained. It is arguable that such an approach might have obtained a result for the Vaxgen trial in a shorter time at lower cost. It would be worthwhile to consider a similar strategy for tuberculosis vaccine efficacy trials.
Vaccine recipients for phase 1 trials are volunteers who are at negligible risk of infection. Nevertheless, they have to be counseled about the implications of testing, which adds some burden to all parties. It has proved relatively easy to recruit such volunteers in the UK and in Kenya and Uganda for AIDS vaccine testing. For efficacy trials, phase 2b and phase 3, it will be necessary to work with individuals at high risk for HIV infection. Sex-worker cohorts are usually too small and follow-up can be difficult. In a high-risk region in Africa, the annual incidence may be 1–10%, but the implementation of trials inevitably raises awareness and the best possible advice must be given to volunteers, particularly concerning condom use. Therefore, it should be expected that incidence might decrease by half. This is important to realize in determining statistical power before embarking on the trial. The population of high-risk volunteers is less well circumscribed for tuberculosis than for AIDS.
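A minimal sketch of the arithmetic behind these design considerations is shown below. The cohort size, the 30-infection analysis point and the 1–10% incidence range are taken from the text; the specific incidence values, the assumed halving effect of counseling and the standard two-proportion power approximation are illustrative assumptions, not the statistical plan of any actual trial.

```python
# Illustrative back-of-the-envelope calculations for a phase 2b HIV vaccine trial.
# From the text: 1,000-2,000 volunteers, analysis once ~30 infections have occurred,
# background annual incidence 1-10%, possibly halved by risk-reduction counseling.
# The two-proportion power approximation and all specific numbers are illustrative.

from math import sqrt

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 0.80

def years_to_30_infections(n_volunteers, annual_incidence):
    """Expected follow-up until ~30 infections accrue, assuming constant incidence."""
    return 30 / (n_volunteers * annual_incidence)

def volunteers_per_arm(p_placebo, efficacy):
    """Two-proportion sample size per arm to detect the given vaccine efficacy."""
    p_vaccine = p_placebo * (1 - efficacy)
    p_bar = (p_placebo + p_vaccine) / 2
    numerator = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar)) +
                 Z_BETA * sqrt(p_placebo * (1 - p_placebo) +
                               p_vaccine * (1 - p_vaccine))) ** 2
    return numerator / (p_placebo - p_vaccine) ** 2

# If counseling halves a 5% annual incidence to 2.5%, a 2,000-person cohort takes
# roughly twice as long (about 0.3 vs 0.6 years) to reach the 30-infection analysis:
print(years_to_30_infections(2000, 0.05), years_to_30_infections(2000, 0.025))

# ...and the number of volunteers needed per arm to detect 50% efficacy roughly
# doubles (about 900 vs 1,850):
print(round(volunteers_per_arm(0.05, 0.5)), round(volunteers_per_arm(0.025, 0.5)))
```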
There are major ethical issues in efficacy trials. Double blinding and placebo controls are essential, but a difficult question is what to offer in terms of chemotherapy to trial participants in low-income countries, for example in Africa. It is now generally agreed that the best possible (rather than best regional) therapy should be offered to all trial participants who become infected; the unresolved questions are for how long and when, given that drugs may not be required for 5–10 years. Although it is desirable to insist on the highest safety standards for any vaccine candidate as defined by the FDA and the EMEA, it may be worthwhile to carefully assess whether risk-benefit relationships differ for countries with low or high tuberculosis or AIDS prevalence. Side effects of new vaccine candidates may be tolerable for a vaccine that can prevent tuberculosis or AIDS mortality and morbidity in countries where incidences are skyrocketing, but not for countries with low tuberculosis or AIDS prevalence118.
HIV and tuberculosis vaccine trials will probably need multiple sites. A trial involving 2,000 volunteers with detailed prescreening, individual counseling and careful follow up will probably necessitate at least 20 sites. Although in theory such sites would have the infrastructure and could also conduct trials for both tuberculosis and HIV, it is unlikely that this would be possible at the same time because of high cost, a requirement for multiple vaccine groups and complicated data analysis.
AIDS and tuberculosis rampage most freely in low-income countries where the total annual budget for public health is below US $5 per capita119,120,121,122. Currently available therapeutic regimes exceed such budgets by several fold. Similarly, vaccine development from the bench to the field consumes huge amounts of financial resources. Consequently, development of vaccines against tuberculosis and AIDS will be most successful as a joint public-private enterprise with support from governmental and nongovernmental funding organizations and philanthropic foundations. Such a consortium would be best equipped to bring forward vaccine development by a combination of 'push' and 'pull' programs. Although support for preclinical research will be best directed by classical push programs, incentives should be made available for alternative approaches118. The removal of roadblocks serves as a good example of a support strategy that emphasizes innovation without restricting the experimental approach. Unconventional research areas, such as therapeutic HIV-1 and tuberculosis vaccination, could be further stimulated by pull programs rewarding a vaccine candidate that proves successful in preclinical studies. At the transition from the preclinical to the clinical phase, pull programs providing incentives for the market-ready final product would be most appropriate118. These include tax reduction for vaccine production, secured future markets in developing countries, a tiered price system, and reduced interest rates or debt release by the World Bank for vaccine purchase and distribution. In the absence of such incentives, even the best vaccine candidate may fail to reach the desired goal of unrestricted distribution in countries affected most by AIDS and tuberculosis.
Lack of financial incentive may provide some unique opportunities for vaccine development against AIDS and tuberculosis. Notably, low-to-absent competition for financial profits can promote comparative clinical trials under the umbrella of governmental, nongovernmental or philanthropic organizations either alone or together with the aim first to select the most promising candidates, and second to proceed by 'learning by doing' (i.e., continuous vaccine improvement of candidates in iterative clinical trial processes). Analysis of the immune responses of vaccine trial participants may provide new information that can be further explored in experimental animal models and help to define the next clinical trial step. Strong efforts are required to harmonize vaccine trials and accompanying vaccine efficacy testing. It is rewarding that several major organizations, including the US National Institutes of Health through their intramural and extramural programs, the Bill and Melinda Gates Foundation through Aeras for tuberculosis and their Vaccine Enterprise for AIDS, as well as the European Union through its European & Developing Countries Clinical Trials Partnership program, have committed themselves to bringing vaccines against AIDS and tuberculosis from the bench to the field.
Vaccination strategies against these two diseases need to be integrated as soon as possible, considering the intimate interdependence of the two deadly pathogens and the consequences of their liaison. Obviously, numerous hurdles need to be overcome. Yet, even if only partially protective vaccines can be developed, return on investment will be enormous. This is not only true for the most impoverished countries, for which a rapid solution is vital, but also for industrialized countries, which may suffer significantly from global economic and social regressions and further weakening of unstable states.
World Health Organization. Global Tuberculosis Control: Surveillance, Planning, Financing (World Health Organization, Geneva, 2004).
Kaufmann, S.H.E. Is the development of a new tuberculosis vaccine possible? Nat. Med. 6, 955–960 (2000).
Calmette, A. Die Schutzimpfung gegen Tuberkulose mit “BCG” (F.C.W. Vogel, Leipzig, 1928).
Rosenberg, E.S. et al. Immune control of HIV-1 after early treatment of acute infection. Nature 407, 523–526 (2000).
Dybul, M., Fauci, A.S., Bartlett, J.G., Kaplan, J.E. & Pau, A.K. Guidelines for using antiretroviral agents among HIV-infected adults and adolescents. Ann. Intern. Med. 137, 381–433 (2002).
White, R.G. et al. Can Population Differences Explain the Contrasting Results of the Mwanza, Rakai, and Masaka HIV/Sexually Transmitted Disease Intervention Trials?: A Modeling Study. J. Acquir. Immune Defic. Syndr. 37, 1500–1513 (2004).
Kremer, L. & Besra, G.S. A waxy tale by Mycobacterium tuberculosis. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 287–305 (ASM Press, Washington, DC, 2005).
Chan, J., Silver, R.F., Kampmann, B. & Wallis, R.S. Intracelluar models of Mycobacterium tuberculosis infection. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 437–449 (ASM Press, Washington, DC, 2005).
Russell, D.G. Mycobacterium tuberculosis: the indigestible microbe. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 427–435 (ASM Press, Washington, DC, 2005).
Kaufmann, S.H.E. & Flynn, J.L. CD8 T cells in tuberculosis. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 465–474 (ASM Press, Washington, DC, 2005).
Dascher, C.C. & Brenner, M.B. CD1 and tuberculosis. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 475–487 (ASM Press, Washington, DC, 2005).
Kaufmann, S.H.E. gamma/delta and other unconventional T lymphocytes: What do they see and what do they do? Proc. Natl. Acad. Sci. USA 93, 2272–2279 (1996).
Shen, Y. et al. Adaptive immune response of V gamma 2V delta 2(+) T cells during mycobacterial infections. Science 295, 2255–2258 (2002).
Fischer, K. et al. Mycobacterial phosphatidylinositol mannoside is a natural antigen for CD1d-restricted T cells. Proc. Natl. Acad. Sci. USA 101, 10685–10690 (2004).
Ulrichs, T. et al. Human tuberculous granulomas induce peripheral lymphoid follicle-like structures to orchestrate local host defence in the lung. J. Pathol. 204, 217–228 (2004).
Kaufmann, S.H.E. How can immunology contribute to the control of tuberculosis? Nat. Rev. Immunol. 1, 20–30 (2001).
Cooper, A.M. et al. Disseminated tuberculosis in interferon gamma gene-disrupted mice. J. Exp. Med. 178, 2243–2247 (1993).
Flynn, J.L. et al. An essential role for interferon gamma in resistance to Mycobacterium tuberculosis infection. J. Exp. Med. 178, 2249–2254 (1993).
Maini, R. et al. Infliximab (chimeric anti-tumour necrosis factor alpha monoclonal antibody) versus placebo in rheumatoid arthritis patients receiving concomitant methotrexate: a randomised phase III trial. ATTRACT Study Group. Lancet 354, 1932–1939 (1999).
Chan, J. & Flynn, J. The immunological aspects of latency in tuberculosis. Clin. Immunol. 110, 2–12 (2004).
Boon, C. & Dick, T. Mycobacterium bovis BCG response regulator essential for hypoxic dormancy. J. Bacteriol. 184, 6760–6767 (2002).
Voskuil, M.I. et al. Inhibition of respiration by nitric oxide induces a Mycobacterium tuberculosis dormancy program. J. Exp. Med. 198, 705–713 (2003).
Sherman, D.R. et al. Regulation of the Mycobacterium tuberculosis hypoxic response gene encoding alpha -crystallin. Proc. Natl. Acad. Sci. USA 98, 7534–7539 (2001).
Bifani, P.J., Mathema, B., Kurepina, N.E. & Kreiswirth, B.N. Global dissemination of the Mycobacterium tuberculosis W-Beijing family strains. Trends Microbiol. 10, 45–52 (2002).
Glynn, J.R., Whiteley, J., Bifani, P.J., Kremer, K. & van Soolingen, D. Worldwide occurrence of Beijing/W strains of Mycobacterium tuberculosis: a systematic review. Emerg. Infect. Dis. 8, 843–849 (2002).
Naegeli, O. Ueber Häufigkeit, Localisation und Ausheilung der Tuberculose. Archiv für pathologische Anatomie und Physiologie und für klinische Medizin 160, 426–472 (1900).
Casanova, J.-L. & Abel, L. Genetic dissection of immunity to Mycobacteria: the human model. Annu. Rev. Immunol. 20, 581–620 (2002).
Cooke, G.S. & Hill, A.V. Genetics of susceptibility to human infectious disease. Nat. Rev. Genet. 2, 967–977 (2001).
Schaible, U.E. & Kaufmann, S.H. Iron and microbial infection. Nat. Rev. Microbiol. 2, 946–953 (2004).
Van Rie, A. et al. Exogenous reinfection as a cause of recurrent tuberculosis after curative treatment. N. Engl. J. Med. 341, 1174–1179 (1999).
Hingley-Wilson, S.M., Sambandamurthy, V.K. & Jacobs, W.R., Jr. Survival perspectives from the world's most successful pathogen, Mycobacterium tuberculosis. Nat. Immunol. 4, 949–955 (2003).
Fine, P.E.M. The BCG story - lessons from the past and implications for the future. Rev. Infect. Dis. 11, S353–S359 (1989).
Colditz, G.A. et al. Efficacy of BCG vaccine in the prevention of tuberculosis. Meta-analysis of the published literature. JAMA 271, 698–702 (1994).
Okkels, L.M., Doherty, T.M. & Andersen, P. Selecting the components for a safe and efficient tuberculosis subunit vaccine—recent progress and post-genomic insights. Curr. Pharm. Biotechnol. 4, 69–83 (2003).
Huygen, K. et al. Immunogenicity and protective efficacy of a tuberculosis DNA vaccine. Nat. Med. 2, 893–898 (1996).
Olsen, A.W., van Pinxteren, L.A.H., Okkels, L.M., Rasmussen, P.B. & Andersen, P. Protection of mice with a tuberculosis subunit vaccine based on a fusion protein of antigen 85B and ESAT-6. Infect. Immun. 69, 2773–2778 (2001).
Mollenkopf, H.J. et al. Application of mycobacterial proteomics to vaccine design: improved protection by Mycobacterium bovis BCG prime-Rv3407 DNA boost vaccination against tuberculosis. Infect. Immun. 72, 6471–6479 (2004).
Kaufmann, S.H.E. (ed.) Novel Vaccination Strategies (John Wiley & Sons, Weinheim, 2004).
O'Hagan, D.T. & Singh, M. Microparticles as vaccine adjuvants and delivery systems. in Novel Vaccination Strategies (ed. Kaufmann, S.H.E.) 147–172 (John Wiley & Sons, Weinheim, 2004).
McShane, H. et al. Recombinant modified vaccinia virus Ankara expressing antigen 85A boosts BCG-primed and naturally acquired antimycobacterial immunity in humans. Nat. Med. 10, 1240–1244 (2004).
McShane, H., Brookes, R., Gilbert, S.C. & Hill, A.V. Enhanced immunogenicity of CD4(+) t-cell responses and protective efficacy of a DNA-modified vaccinia virus Ankara prime-boost vaccination regimen for murine tuberculosis. Infect. Immun. 69, 681–686 (2001).
Skeiky, Y.A. et al. Differential immune responses and protective efficacy induced by components of a tuberculosis polyprotein vaccine, Mtb72F, delivered as naked DNA or recombinant protein. J. Immunol. 172, 7618–7628 (2004).
Brandt, L. et al. The protective effect of the Mycobacterium bovis BCG vaccine is increased by coadministration with the Mycobacterium tuberculosis 72-kilodalton fusion polyprotein Mtb72F in M. tuberculosis-infected guinea pigs. Infect. Immun. 72, 6622–6632 (2004).
Guleria, I. et al. Auxotrophic vaccines for tuberculosis. Nat. Med. 2, 334–337 (1996).
Smith, D.A., Parish, T., Stoker, N.G. & Bancroft, G.J. Characterization of auxotrophic mutants of Mycobacterium tuberculosis and their potential as vaccine candidates. Infect. Immun. 69, 1142–1150 (2001).
Perez, E. et al. An essential role for phoP in Mycobacterium tuberculosis virulence. Mol. Microbiol. 41, 179–187 (2001).
Cox, J.S., Chen, B., McNeil, M. & Jacobs, W.R. Complex lipid determine tissue specific replication of Mycobacterium tuberculosis in mice. Nature 402, 79–83 (1999).
McKinney, J.D. et al. Persistence of Mycobacterium tuberculosis in macrophages and mice requires the glyoxylate shunt enzyme isocitrate lyase. Nature 406, 735–738 (2000).
Pethe, K. et al. The heparin-binding haemagglutinin of M. tuberculosis is required for extrapulmonary dissemination. Nature 412, 190–194 (2001).
Hisert, K.B. et al. Identification of Mycobacterium tuberculosis counterimmune (cim) mutants in immunodeficient mice by differential screening. Infect. Immun. 72, 5315–5321 (2004).
Behr, M.A. et al. Comparative genomics of BCG vaccines by whole-genome DNA microarray. Science 284, 1520–1523 (1999).
Pym, A.S. et al. Recombinant BCG exporting ESAT-6 confers enhanced protection against tuberculosis. Nat. Med. 9, 533–539 (2003).
Horwitz, M.A., Harth, G., Dillon, B.J. & Maslesa-Galic, S. Recombinant bacillus Calmette-Guerin (BCG) vaccines expressing the Mycobacterium tuberculosis 30-kDa major secretory protein induce greater protective immunity against tuberculosis than conventional BCG vaccines in a highly susceptible animal model. Proc. Natl. Acad. Sci. USA 97, 13853–13858 (2000).
Murray, P.J., Aldovini, A. & Young, R.A. Manipulation and potentiation of antimycobacterial immunity using recombinant bacille Calmette-Guerin strains that secrete cytokines. Proc. Natl. Acad. Sci. USA 93, 934–939 (1996).
Wangoo, A. et al. Bacille Calmette-Guerin (BCG)-associated inflammation and fibrosis: modulation by recombinant BCG expressing interferon-gamma (IFN-gamma). Clin. Exp. Immunol. 119, 92–98 (2000).
Luo, Y., Chen, X., Han, R. & O'Donnell, M.A. Recombinant bacille Calmette-Guerin (BCG) expressing human interferon-alpha 2B demonstrates enhanced immunogenicity. Clin. Exp. Immunol. 123, 264–270 (2001).
Hess, J. et al. Mycobacterium bovis bacille Calmette-Guerin strains secreting listeriolysin of Listeria monocytogenes. Proc. Natl. Acad. Sci. USA 95, 5299–5304 (1998).
Portnoy, D.A., Chakraborty, T., Goebel, W. & Cossart, P. Molecular determinants of Listeria monocytogenes pathogenesis. Infect. Immun. 60, 1263–1267 (1992).
Kaufmann, S.H.E. & Nasser, A. Improved protection by recombinant BCG. Microbes Infect. (in press).
Kamath, A.T. et al. New live mycobacterial vaccines: The Geneva consensus on essential steps towards clinical development. Vaccine (in press).
Markovic, I. & Clouse, K.A. Recent advances in understanding the molecular mechanisms of HIV-1 entry and fusion: revisiting current targets and considering new options for therapeutic intervention. Curr. HIV Res. 2, 223–234 (2004).
Simon, V. & Ho, D.D. HIV-1 dynamics in vivo: implications for therapy. Nat. Rev. Microbiol. 1, 181–190 (2003).
McMichael, A.J. & Phillips, R.E. Escape of human immunodeficiency virus from immune control. Annu. Rev. Immunol. 15, 271–296 (1997).
Younes, S.A. et al. HIV-1 viremia prevents the establishment of interleukin 2-producing HIV-specific memory CD4+ T cells endowed with proliferative capacity. J. Exp. Med. 198, 1909–1922 (2003).
Douek, D.C. et al. HIV preferentially infects HIV-specific CD4+ T cells. Nature 417, 95–98 (2002).
Blankson, J.N., Persaud, D. & Siliciano, R.F. The challenge of viral reservoirs in HIV-1 infection. Annu. Rev. Med. 53, 557–593 (2002).
Daniel, M.D., Kirchhoff, F., Czajak, S.C., Sehgal, P.K. & Desrosiers, R.C. Protective effects of a live attenuated SIV vaccine with a deletion in the nef gene. Science 258, 1938–1941 (1992).
Desrosiers, R.C. Prospects for an AIDS vaccine. Nat. Med. 10, 221–223 (2004).
Rowland-Jones, S.L. et al. Cytotoxic T cell responses to multiple conserved HIV epitopes in HIV-resistant prostitutes in Nairobi. J. Clin. Invest. 102, 1758–1765 (1998).
Kaul, R. et al. Late seroconversion in HIV-resistant Nairobi prostitutes despite pre-existing HIV-specific CD8+ responses. J. Clin. Invest. 107, 341–349 (2001).
Altfeld, M. et al. HIV-1 superinfection despite broad CD8+ T-cell responses containing replication of the primary virus. Nature 420, 434–439 (2002).
Korber, B. et al. Timing the ancestor of the HIV-1 pandemic strains. Science 288, 1789–1796 (2000).
Lee, J.K. et al. T cell cross-reactivity and conformational changes during TCR engagement. J. Exp. Med. 200, 1455–1466 (2004).
Altman, J.D. & Feinberg, M.B. HIV escape: there and back again. Nat. Med. 10, 229–230 (2004).
Derdeyn, C.A. et al. Envelope-constrained neutralization-sensitive HIV-1 after heterosexual transmission. Science 303, 2019–2022 (2004).
Goulder, P.J. & Watkins, D.I. HIV and SIV CTL escape: implications for vaccine design. Nat. Rev. Immunol. 4, 630–640 (2004).
Richman, D.D., Wrin, T., Little, S.J. & Petropoulos, C.J. Rapid evolution of the neutralizing antibody response to HIV type 1 infection. Proc. Natl. Acad. Sci. USA 100, 4144–4149 (2003).
Moore, C.B. et al. Evidence of HIV-1 adaptation to HLA-restricted immune responses at a population level. Science 296, 1439–1443 (2002).
Barouch, D.H. et al. Eventual AIDS vaccine failure in a rhesus monkey by viral escape from cytotoxic T lymphocytes. Nature 415, 335–339 (2002).
Friedrich, T.C. et al. Reversion of CTL escape-variant immunodeficiency viruses in vivo. Nat. Med. 10, 275–281 (2004).
Nishimura, Y. et al. Transfer of neutralizing IgG to macaques 6 h but not 24 h after SHIV infection confers sterilizing protection: implications for HIV-1 vaccine development. Proc. Natl. Acad. Sci. USA 100, 15131–15136 (2003).
Amara, R.R. et al. Control of a mucosal challenge and prevention of AIDS by a multiprotein DNA/MVA vaccine. Science 292, 69–74 (2001).
Barouch, D.H. et al. Control of viremia and prevention of clinical AIDS in rhesus monkeys by cytokine-augmented DNA vaccination. Science 290, 486–492 (2000).
Shiver, J.W. et al. Replication-incompetent adenoviral vaccine vector elicits effective anti-immunodeficiency-virus immunity. Nature 415, 331–335 (2002).
Alabi, A.S. et al. Plasma viral load, CD4 cell percentage, HLA and survival of HIV-1, HIV-2, and dually infected Gambian patients. AIDS 17, 1513–1520 (2003).
Burton, D.R. et al. HIV vaccine design and the neutralizing antibody problem. Nat. Immunol. 5, 233–236 (2004).
Wyatt, R. et al. The antigenic structure of the HIV gp120 envelope glycoprotein. Nature 393, 705–711 (1998).
Aldovini, A. & Young, R.A. Humoral and cell-mediated immune responses to live recombinant BCG-HIV vaccines. Nature 351, 479–482 (1991).
Liu, M.A. Vaccine developments. Nat. Med. 4, 515–519 (1998).
Sutter, G., Wyatt, L.S., Foley, P.L., Bennink, J.R. & Moss, B. A recombinant vector derived from the host range-restricted and highly attenuated MVA strain of vaccinia virus stimulates protective immunity in mice to influenza virus. Vaccine 12, 1032–1040 (1994).
Huang, X.L. et al. Priming of human immunodeficiency virus type 1 (HIV-1)-specific CD8+ T cell responses by dendritic cells loaded with HIV-1 proteins. J. Infect. Dis. 187, 315–319 (2003).
Nabel, G.J. HIV vaccine strategies. Vaccine 20, 1945–1947 (2002).
MacGregor, R.R., Boyer, J.D., Ciccarelli, R.B., Ginsberg, R.S. & Weiner, D.B. Safety and immune responses to a DNA-based human immunodeficiency virus (HIV) type I env/rev vaccine in HIV-infected recipients: follow-up data. J. Infect. Dis. 181, 406 (2000).
McConkey, S.J. et al. Enhanced T-cell immunogenicity of plasmid DNA vaccines boosted by recombinant modified vaccinia virus Ankara in humans. Nat. Med. 9, 729–735 (2003).
Paris, R. et al. HLA class I serotypes and cytotoxic T-lymphocyte responses among human immunodeficiency virus-1-uninfected Thai volunteers immunized with ALVAC-HIV in combination with monomeric gp120 or oligomeric gp160 protein boosting. Tissue Antigens 64, 251–256 (2004).
Hanke, T. et al. Enhancement of MHC class I-restricted peptide-specific T cell induction by a DNA prime/MVA boost vaccination regime. Vaccine 16, 439–445 (1998).
Schneider, J. et al. Enhanced immunogenicity for CD8+ T cell induction and complete protective efficacy of malaria DNA vaccination by boosting with modified vaccinia virus Ankara. Nat. Med. 4, 397–402 (1998).
Sun, J.C., Williams, M.A. & Bevan, M.J. CD4+ T cells are required for the maintenance, not programming, of memory CD8+ T cells after acute infection. Nat. Immunol. 5, 927–933 (2004).
Shedlock, D.J. & Shen, H. Requirement for CD4 T cell help in generating functional CD8 T cell memory. Science 300, 337–339 (2003).
Ridge, J.P., Di Rosa, F. & Matzinger, P. A conditioned dendritic cell can be a temporal bridge between a CD4+ T-helper and a T-killer cell. Nature 393, 474–478 (1998).
Liu, H., Andreansky, S., Diaz, G., Hogg, T. & Doherty, P.C. Reduced functional capacity of CD8+ T cells expanded by post-exposure vaccination of gamma-herpesvirus-infected CD4-deficient mice. J. Immunol. 168, 3477–3483 (2002).
Brewer, J.M., Conacher, M., Satoskar, A., Bluethmann, H. & Alexander, J. In interleukin-4-deficient mice, alum not only generates T helper 1 responses equivalent to freund's complete adjuvant, but continues to induce T helper 2 cytokine production. Eur. J. Immunol. 26, 2062–2066 (1996).
Norbury, C.C. et al. CD8+ T cell cross-priming via transfer of proteasome substrates. Science 304, 1318–1321 (2004).
Wolkers, M.C., Brouwenstijn, N., Bakker, A.H., Toebes, M. & Schumacher, T.N. Antigen bias in T cell cross-priming. Science 304, 1314–1317 (2004).
Beignon, A.S., Skoberne, M. & Bhardwaj, N. Type I interferons promote cross-priming: more functions for old cytokines. Nat. Immunol. 4, 939–941 (2003).
Le Bon, A. et al. Cross-priming of CD8+ T cells stimulated by virus-induced type I interferon. Nat. Immunol. 4, 1009–1015 (2003).
Liu, R. et al. Homozygous defect in HIV-1 coreceptor accounts for resistance of some multiply-exposed individuals to HIV-1 infection. Cell 86, 367–377 (1996).
Haase, A.T. The pathogenesis of sexual mucosal transmission and early stages of infection: obstacles and a narrow window of opportunity for prevention. AIDS 15 Suppl 1, S10–S11 (2001).
Brenchley, J.M. et al. CD4+ T cell depletion during all stages of HIV disease occurs predominantly in the gastrointestinal tract. J. Exp. Med. 200, 749–759 (2004).
Orme, I.M. & Izzo, A.A. in Tuberculosis and the Tubercle Bacillus (eds. Cole, S.T., Eisenach, K.D., McMurray, D.N. & Jacobs, W.R., Jr.) 561–571 (ASM Press, Washington, DC, 2005).
Pai, M., Riley, L.W. & Colford, J.M. Jr. Interferon-gamma assays in the immunodiagnosis of tuberculosis: a systematic review. Lancet Infect. Dis. 4, 761–776 (2004).
Stebbings, R.J. et al. Mechanisms of protection induced by attenuated simian immunodeficiency virus. Virology 296, 338–353 (2002).
McDermott, A.B. et al. Repeated low-dose mucosal simian immunodeficiency virus SIVmac239 challenge results in the same viral and immunological kinetics as high-dose challenge: a model for the evaluation of vaccine efficacy in nonhuman primates. J. Virol. 78, 3140–3144 (2004).
Hel, Z. et al. Viremia control following antiretroviral treatment and therapeutic immunization during primary SIV251 infection of macaques. Nat. Med. 6, 1140–1146 (2000).
Lu, W., Arraes, L.C., Ferreira, W.T. & Andrieu, J.M. Therapeutic dendritic-cell vaccine for chronic HIV-1 infection. Nat. Med. 10, 1359–1365 (2004).
Tryniszewska, E. et al. Vaccination of macaques with long-standing SIVmac251 infection lowers the viral set point after cessation of antiretroviral therapy. J. Immunol. 169, 5347–5357 (2002).
Ogg, G.S. et al. Decay kinetics of human immunodeficiency virus-specific effector cytotoxic T lymphocytes after combination antiretroviral therapy. J. Virol. 73, 797–800 (1999).
Kremer, M. & Glennerster, R. Strong Medicine: Creating Incentives For Pharmaceutical Research on Neglected Diseases (Princeton University Press, Princeton, Oxford, 2004).
World Health Organization. Global Atlas of infectious diseases. http://globalatlas.who.int/. (World Health Organization, Geneva, 2004).
UNAIDS. 2004 Report on the Global AIDS Epidemic (UNAIDS, Geneva, 2004).
World Health Organization. The World Health Report 2004 - Changing History (World Health Organization, Geneva, 2004).
Corbett, E.L. et al. HIV-1/AIDS and the control of other infectious diseases in Africa. Lancet 359, 2177–2187 (2002).
Temmerman, S. et al. Methylation-dependent T cell immunity to Mycobacterium tuberculosis heparin-binding hemagglutinin. Nat. Med. 10, 935–941 (2004).
Lowrie, D.B. et al. Therapy of tuberculosis in mice by DNA vaccination. Nature 400, 269–271 (1999).
Brooks, J.V., Frank, A.A., Keen, M.A., Bellisle, J.T. & Orme, I.M. Boosting vaccine for tuberculosis. Infect. Immun. 69, 2714–2717 (2001).
S.H.E. Kaufmann acknowledges financial support for his work on tuberculosis vaccination from: the Fonds Chemie, EU FP6 TB-VAC, EU FP6 MUVAPRED, Bundeministerium für Bildung und Forschung Competence Network “Structural Genomics of M. tuberculosis”, BMBF joint project “Genomics of M. tuberculosis”, EU project “X-TB”, Deutsche Forschungsgemeinschaft Priority Program “Novel Vaccination Strategies”. A.J. McMichael acknowledges grant support from the UK Medical Research Council and the International AIDS Vaccine Initiative. We thank A. von Gabain, J. Cohen, H. McShane, P. Andersen, T. Ottenhoff, M. Klein, T. Hanke and L. Castello-Branco for discussions, T. Walker and A. Ozin for reading this manuscript, D. Schad for graphical work and S. Sibaei for secretarial help.
The authors declare no competing financial interests.
Cite this article
Kaufmann, S., McMichael, A. Annulling a dangerous liaison: vaccination strategies against AIDS and tuberculosis. Nat Med 11 (Suppl 4), S33–S44 (2005). https://doi.org/10.1038/nm1221
About the power bank circuit
In this project, we will build a power bank using a pre-built power bank circuit and 18650 Li-ion cells. The circuit has an onboard LCD that shows the remaining charge, the input and output voltages, and the current. It has two USB ports for charging your mobile phone: one supplies 5V at 1A and the other 5V at 2.1A, so it can charge practically any phone. A micro-USB port is also present on the board; this is used to charge the power bank itself from a 5V 1A adapter.
One power switch is also present on the board for turning ON/OFF the power bank.
For this project, we are using Samsung 18650 Li-ion cells, which are among the best and most widely used Li-ion cells. A single cell outputs 4.2V when fully charged and has a capacity of 2200mAh. We will be using three such cells. If you want to increase the capacity of the power bank, simply add more cells.
Features of Power bank circuit
- Dual USB outputs: one 5V/1A and one 5V/2.1A
- Stable output current and voltage
- Compatible with all kinds of 18650 Li-ion cells
- Input charging voltage: 5V
- Input charging current: 1A
- Output current and voltage: 5V/1A and 5V/2.1A
- Over-charge protection
- Over-discharge protection
Features of Samsung 18650 Li-Ion Cell
- Long life
- Excellent charge and discharge rate
- Small in size
- Capacity: 2200mAh
- Maximum discharge current: 4400mA
- Internal impedance: <1 ohm
- Nominal voltage: 3.7V
- Fully charged voltage: 4.2V
- Discharge cut-off voltage: 2.46V
Working of this project
We will use three Li-ion cells and connect them in parallel to get a capacity of 6600mAh. If you want to increase the capacity of the power bank, add more cells; each additional cell increases the capacity by 2200mAh.
After connecting all the cells we are left with two wires, which we connect to the power bank circuit. The circuit has a built-in boost converter that steps the 3.7V from the Li-ion cells up to 5V, giving 5V 1A on one USB port and 5V 2.1A on the other. Both ports are used for charging the mobile phone.
Similarly, a buck converter on the board converts the 5V from the micro-USB port down to 4.2V, which is used to charge the cells.
The circuit board also monitors the remaining charge of the power bank and displays it on the LCD as a percentage.
The circuit also protects the Li-ion cells from over-charge and over-discharge, which helps extend the life of each cell.
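To put rough numbers on the description above, here is a small back-of-the-envelope sketch. The cell count, cell capacity, nominal voltage and the 5V 1A charging input come from this article; the boost converter efficiency and the example phone battery size are assumptions chosen only for illustration.

```python
# Back-of-the-envelope figures for this 3-cell power bank. Cell capacity (2200mAh),
# nominal voltage (3.7V) and the 5V/1A charge input are from the article; the boost
# converter efficiency (~90%) and the 3000mAh example phone battery are assumptions.

cells = 3
cell_capacity_mah = 2200
nominal_v = 3.7
output_v = 5.0
boost_efficiency = 0.90        # assumed typical efficiency
charge_input_ma = 1000         # 5V 1A adapter
phone_battery_mah = 3000       # assumed example phone battery

pack_mah = cells * cell_capacity_mah                              # 6600 mAh at 3.7V
pack_wh = pack_mah / 1000 * nominal_v                             # ~24.4 Wh stored
usable_mah_at_5v = pack_wh * boost_efficiency / output_v * 1000   # ~4400 mAh at 5V
phone_charges = usable_mah_at_5v / phone_battery_mah
recharge_hours = pack_mah / charge_input_ma                       # ignores charging losses

print(f"Pack capacity: {pack_mah} mAh ({pack_wh:.1f} Wh)")
print(f"Usable at 5V: ~{usable_mah_at_5v:.0f} mAh, about {phone_charges:.1f} charges "
      f"of a {phone_battery_mah} mAh phone")
print(f"Recharge time from a 1A adapter: at least ~{recharge_hours:.1f} hours")
```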
Circuit diagram of this project
Steps to create a power bank
Step1: Connect all the cells in parallel as shown in the picture. Use a nickel strip and a spot welder to join the cells. If you solder wires directly to a Li-ion cell with a soldering iron, the heat can damage the cell and shorten its life.
If you don't have a spot welder and nickel strip, you can use a soldering iron, but try not to keep the iron on the surface of the cell for long.
Make sure you connect all the positive terminals together and all the negative terminals together. Do not connect the positive terminal of one cell to the negative terminal of another; this will short the cells and could cause a fire.
Step2: Connect the common positive terminal wire of the cells to the B+ terminal of the power bank circuit board. Before soldering, double-check that you are connecting the positive terminal of the cells.
Step3: Connect the common negative terminal of the Li-ion cells to the B- terminal of the power bank circuit board.
Step4: Connect the 5V 1A charger to the board to charge all the cells and wait until they are fully charged. You can also use your mobile phone charger to charge this power bank. While the power bank is charging, the LCD shows the battery percentage and the word IN.
Step5: The power bank is now complete, and you can charge your phone by connecting it to whichever USB port suits it. Press and hold the power button for a few seconds to turn the power bank on. While the phone is charging, the LCD shows the battery percentage and the word OUT.
So, the project is completed here.
Settled State Science
Posted by Bill Storage in History of Science on August 15, 2022
Science is belief in the ignorance of experts, said Richard Feynman in 1969. Does that describe science today, or is science a state and/or academic institution that dispenses truth?
State science and science herded by well-funded and tenured academics has led to some mammoth missteps, muddles and misdeeds in the application of that science to policy.
It may be inaccurate to say, as Edwin Black did in War Against the Weak, that Hitler's racial hygienics ideology was imported directly and solely from American eugenics. But that Hitler's race policy was heavily inspired by American eugenics is very well documented, as is the overwhelming support given eugenics by our reigning politicians and educators. For example, Hitler absolutely did borrow much of the 1933 Law for the Prevention of Hereditarily Defective Offspring from the draft sterilization law drawn up for US states by Harry Laughlin, then Superintendent of the Eugenics Record Office in affiliation with the American Association for the Advancement of Science (AAAS).
Academic progressives and the educated elite fawned over the brilliance of eugenics too. Race Science was deemed settled, and that science had to be embodied in policy to change the world for the better. John Maynard Keynes and Woodrow Wilson loved it. Harvard was all over it too. Stanford's first president, David Starr Jordan, and Yale's famed economist and social reformer Irving Fisher were leaders of the eugenics movement. All the trappings of science were there; impressive titles like "Some Racial Peculiarities of the Negro Brain" (American Journal of Anatomy, 1906) appeared in sciencey journals. In fact, the prestigious journal Science covered eugenics in its lead story of Oct. 7, 1921.
But 1906 was oh so very long ago, right? Was eugenics a one-off? The lobotomy/leucotomy craze of the 1950s saw similar endorsement from the political and academic elite. More recent, less grotesque, but equally bad and unjustified state science was the low-fat craze of the 1980s and the war on cholesterol.
Last month the California Assembly passed AB 2098, designating "the dissemination or promotion of misinformation or disinformation related to the SARS-CoV-2 coronavirus, or COVID-19 as unprofessional conduct," for which MDs could be subjected to loss of license. The bill defines misinformation as "false information that is contradicted by contemporary scientific consensus."
The U.S. Department of Health & Human Services (HHS) now states, “If your child is 6 months or older, you can now help protect them from severe COVID illness by getting them a COVID vaccine.” That may be consensus alright. I cannot find one shred of evidence to support the claim. Can you?
“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.” – Paul Feyerabend, Against Method, 1975
Fashionable Pessimism: Confidence Gap Revisited
Posted by Bill Storage in Uncategorized on August 19, 2021
Most of us hold a level of confidence about our factual knowledge and predictions that doesn’t match our abilities. That’s what I learned, summarized in a recent post, from running a website called The Confidence Gap for a year.
When I published that post, someone linked to it from Hacker News, causing 9000 people to read the post, 3000 of whom took the trivia quiz(es) and assigned confidence levels to their true/false responses. Presumably, many of those people learned from the post that each group of ten questions in the survey had one question about the environment or social issues designed to show an extra level of domain-specific overconfidence. Presumably, those readers are highly educated. I would have thought this would have changed the gap between accuracy and confidence in those who used the site before and after that post. But the results from visits after the blog post and Hacker News coverage, even in those categories, were almost identical to those of the earlier group.
Hacker News users pointed out several questions where the Confidence Gap answers were obviously wrong. Stop signs actually do have eight sides, for example. Readers also reported questions with typos that could have confused the respondents. For example, "Feetwood Mac" did not perform "Black Magic Woman" prior to Santana, but Fleetwood Mac did. The site called the statement with "Feetwood" true; it was a typo, not a trick.
A Hacker News reader challenged me on the naïve averaging of confidence estimates, saying he assumed the whole paper was similarly riddled with math errors. The point is arguable, but it is not a math error. Philip Tetlock, Tony Cox, Gerd Gigerenzer, Sarah Lichtenstein, Baruch Fischhoff, Paul Slovic and other heavyweights of the field used the same approach I used. Thank your university system for teaching that the interpretation of probability is a clear-cut matter of math as opposed to a vexing issue in analytic philosophy (see The Trouble with Probability and The Trouble with Doomsday).
Those criticisms acknowledged, I deleted the response data from the questions where Hacker News reported errors and typos. This changed none of the results by more than a percentage point. And, as noted above, prior knowledge of “trick” questions had almost no effect on accuracy or overconfidence.
On the point about the media’s impact on people’s confidence about fact claims that involve environmental issues, consider data from the 2nd batch of responses (post blog post) to this question:
According to the United Nations the rate of world deforestation increased between 1995 and 2015
This statement about a claim made by the UN is false, and the UN provides a great deal of evidence on the topic. The average confidence level given by respondents was 71%. The fraction of people answering correctly was 29%. The average confidence value specified by those who answered incorrectly was 69%. Independent of different interpretations of probability and confidence, this seems a clear sign of overconfidence about deforestation facts.
Compare this to the responses given for the question of whether Oregon borders California. 88% of respondents answered correctly and their average confidence specified was 88%.
According to OurWorldInData.org, the average number of years of schooling per resident was higher in S. Korea than in USA in 2010
The statement is false. Average confidence was 68%. Average correctness was 20%.
For all environmental and media-heavy social questions answered by the 2nd group of respondents (who presumably had some clue that people tend to be overconfident about such issues) the average correctness was 46% and the average confidence was 67%. This is a startling result; the proverbial dart throwing chimps would score vastly higher on environmental issues (50% by chimps, 20% on the schooling question and 46% for all “trick” questions by humans) than American respondents who were specifically warned that environmental questions were designed to demonstrate that people think the world is more screwed up than it is. Is this a sign of deep pessimism about the environment and society?
For comparison, average correctness and confidence on all World War II questions were both 65%. For movies, 70% correct, 71% confidence. For science, 75% and 77%. Other categories were similar, with some showing a slightly higher overconfidence. Most notably, sports mean accuracy was 59% with mean confidence of 68%.
Richard Feynman famously said he preferred not knowing to holding as certain the answers that might be wrong (No Ordinary Genius). Freeman Dyson famously said, “it is better to be wrong than to be vague” (The Scientist As Rebel). For the hyper-educated class, spoon-fed with facts and too educated to grasp the obvious, it seems the preferred system might now be phrased “better wrong than optimistic.”
. . . _ _ _ . . .
The man who despairs when others hope is admired as a sage. – John Stuart Mill. Speech on Perfectibility, 1828
Optimism is cowardice. – Oswald Spengler. Man and Technics, 1931
The U.S. life expectancy will drop to 42 years by 1980 due to cancer epidemics. – Paul Ehrlich. Ramparts, 1969
It is the long ascent of the past that gives the lie to our despair. – H.G. Wells. The Discovery of the Future, 1902
Despising Derrida, Postmodern Scapegoat
Posted by Bill Storage in Philosophy on May 10, 2021
There is a trend in conservative politics to blame postmodernism for everything that is wrong with America today. Meanwhile conservatives say a liberal who claims that anything is wrong with the USA should be exiled to communist Canada. Postmodernism in this context is not about art and architecture. It is a school of philosophy – more accurately, criticism – that not long ago was all the rage in liberal education and seems to have finally reached business and government. Right-wing authors tie it to identity politics, anti-capitalism, anti-enlightenment and wokeness. They’re partly right.
Postmodernism challenged the foundations of knowledge, science in particular, arguing against the univocity of meaning. Postmodernists think that there is no absolute truth, no certain knowledge. That’s not the sort of thing Republicans like to hear.
Deconstruction, an invention of Jacques Derrida, was a major component of heyday postmodernism. Deconstruction dug into the fine points of the relationship between text and meaning, something like extreme hermeneutics (reading between the lines, roughly). Richard Rorty, the leading American postmodernist, argued that there can be no unmediated expression of any non-linguistic entity. And what does that mean?
It means in part that there is no “God’s eye view” or “view from nowhere,” at least none that we have access to. Words cannot really hook onto reality for several reasons. There are always interpreters between words and reality (human interpreters), and between you and me. And we really have no means of knowing whether your interpretation is the same as mine. How could we test such a thing? Only by using words on each other. You own your thoughts but not your words once they’ve left your mouth or your pen. They march on without you. Never trust the teller; trust the tale, said D.H. Lawrence. Derrida took this much farther, exploring “oppositions inside text,” which he argued, mostly convincingly, can be found in any nontrivial text. “There is nothing outside the text,” Derrida proclaimed.
Derrida was politically left but not nearly as left as his conservative enemies pretended. Communists had no use for Derrida. Conservatives outright despised him. Paul Gross and Norman Levitt spent an entire chapter deriding Derrida for some inane statements he made about Einstein and relativity back before Derrida was anyone. In Higher Superstition – The Academic Left and its Quarrels with Science, they attacked from every angle, making much of Derrida’s association with some famous Nazis. This was a cheap shot having no bearing on the quality of Derrida’s work.
Worse still, Gross and Levitt attacked the solid aspects of postmodern deconstruction:
"The practice of close, exegetical reading, of hermeneutics, is elevated and ennobled by Derrida and his followers. No longer is it seen as a quaint academic hobby-horse for insular specialists, intent on picking the last meat from the bones of Jane Austen and Herman Melville. Rather, it has now become the key to comprehension of the profoundest matters of truth and meaning, the mantic art of understanding humanity and the universe at their very foundation."
There was, and is, plenty of room between Jane Austen hermeneutics and arrogantly holding that nothing has any meaning except that which the god of deconstruction himself has tapped into. Yes, Derrida the man was an unsavory and pompous ass, and much of his writing was blustering obscurantism. But Derrida-style deconstruction has great value. Ignore Derrida’s quirks, his arrogance, and his political views. Ignore “his” annoying scare quotes and “abuse” of “language.” Embrace his form of deconstruction.
Here’s a simple demo of oppositions inside text. Hebrews 13 tells us to treat people with true brotherly love, not merely out of adherence to religious code. “Continue in brotherly love. Do not neglect to show hospitality to strangers, for by so doing some have entertained angels without knowing it.” The Hebrews author has embedded a less pure motive in his exhortation – a favorable review from potential angels in disguise. Big Angel is watching you.
Can conservatives not separate Derrida from his work, and his good work from his bad? After all, they are the ones who think that objective criteria can objectively separate truth from falsehood, knowledge from mere belief, and good (work) from bad.
Another reason postmodernists say the distinction between truth and falsehood is fatally flawed – as is in fact the whole concept of truth – is the deep form of the “view from nowhere” problem. This is not merely the impossibility of neutrality in journalism. It is the realization that no one can really evaluate a truth claim by testing its correspondence to reality – because we have no unmediated access to the underlying reality. We have only our impressions and experience. If everyone is wearing rose colored glasses that cannot be removed, we can’t know whether reality is white or is rose colored. Thus falls the correspondence theory of truth.
Further, the coherence theory of truth is similarly flawed. In this interpretation of truth, a statement is judged likely true if it coheres with a family of other statements accepted as true. There’s an obvious bootstrapping problem here. One can imagine a large, coherent body of false claims. They hang together, like the elements of a tall tale, but aren’t true.
Beyond correspondence and coherence, we’re basically out of defenses for Truth with a capital T. There are a few other theories of truth, but they more or less boil down to variants on these two core interpretations. Richard Rorty, originally an analytic philosopher (the kind that studies math and logic and truth tables), spent a few decades poking all aspects of the truth problem, borrowing a bit from historian of science Thomas Kuhn. Rorty extended what Kuhn had only applied to scientific truth to truth in general. Experiments – and experience in the real world – only provide objective verification of truth claims if your audience (or opponents) agree that it does. For Rorty, this didn’t mean there was no truth out there, but it meant that we don’t have any means of resolving disputes over incompatible truth claims derived from real world experience. Applying Kuhn to general knowledge, Truth is merely the assent of the relevant community. Rorty’s best formulation of this concept was that truth is just a compliment we pay to claims that satisfy our group’s validation criteria. Awesome.
Conservatives cudgeled the avowed socialist Rorty as badly as they did Derrida. Dinesh D'Souza saw Rorty as the antichrist. Of course conservatives hadn't bothered to actually read Rorty any more than they had bothered to read Derrida. Nor had conservatives ever read a word from Michel Foucault, another postmodern enemy of all things seen as decent by conservatives. Foucault was once a communist. He condoned sex between adults and consenting children. I suspect some religious conservatives secretly agree. He probably had Roman Polanski's ear. He politicized sexual identity – sort of (see below). He was a moral relativist; there is no good or bad behavior, only what people decide is good for them. Yes, Foucault was a downer and a creep, but some of his ideas on subjectivity were original and compelling.
The conservatives who hid under their beds from Derrida, Rorty and Foucault did so because they relied on the testimony of authorities who otherwise told them what they wanted to hear about postmodernism. Thus they missed out on some of the most original insights about the limitations of what it is possible to know, what counts as sound analytical thinking, and the relationship between the teller and the hearer. Susan Sontag, an early critic of American exceptionalism and a classic limousine liberal, famously condemned interpretation. But she emptied the tub leaving the baby to be found by nuns. Interpretation and deconstruction are useful, though not the trump card the postmodernism founders originally thought they had. They overplayed their hand, but there was something in that hand.
Postmodernists, in their critique of science, thought scientists were incapable of sorting through evidence because of their social bias, their interest, as Marxists like to say. They critically examined science – in a manner they judged to be scientific, oddly enough. They sought to knock science down a peg. No objective truth, remember. Postmodern social scientists found that interest pervaded hard science and affected its conclusions. These social scientists, using scientific methods, were able to sort through interest in the way that other scientists could not sort through evidence. See a problem here? Their findings were polluted by interest.
When a certain flavor of the vegan, steeped in moral relativism, argues that veganism is better for your health, and, by the way, it is good for the planet, and, by the way, animals have rights, and, by the way, veganism is what our group of social activists do…, then I am tempted to deploy some deconstruction. We can’t know motives, some say. Or can we? There is nothing outside the text. Can an argument reasonably flow from multiple independent reasons? We can be pretty sure that some of those reasons were backed into from a conclusion received from the relevant community. Cart and horse are misconfigured.
Conservatives didn’t read Derrida, Foucault and Rorty, and liberals only made it through chapter one. If they had read further they wouldn’t now be parroting material from the first week of Postmodernism 101. They wouldn’t be playing the village postmodernist.
Foucault, patron saint of sexual identity among modern liberal academics, himself offered that to speak of homosexuals as a defined group was historically illiterate. He opined that sexual identity was an absurd basis to form one’s personal identity. They usually skip that part during protest practice. The political left in 2021 exists at the stage of postmodern thought before the great postmodernists, Derrida and crew, realized that the assertion that it is objectively true that nothing is objectively true is more than a bit self-undermining. They missed a boat that sailed 50 years back. Postmodern thought, applied to postmodernism, destroys postmodernism as a program. But today its leading adherents don’t know it. On the death of Postmodernism with a capital P we inherited some good tools and perspectives. But the present postmodern evangelists missed the point where logic flushed the postmodern program down the same drain where objective truth had gone. They are like Sunday Christians, they’re the Cafeteria Catholics of postmodernism.
Richard Rorty, a career socialist, late in his life, using postmodern reasoning, took moral relativism to its logical conclusion. He realized that the implausibility of moral absolutism did not support its replacement by moral relativism. The former could be out without the latter being in. If two tribes hold incommensurable “truths,” it is illogical for either to conclude the other is equally correct. After all, each reached its conclusion based on the evidence and what that community judged to be sound reasoning. It would be hypocritical or incoherent to be less resolved about a conclusion, merely by knowing that a group with whom you did not share moral or epistemic values, concluded otherwise. That reasoning has also escaped the academic left. This was the ironic basis for Rorty’s intellectual defense of ethnocentrism, which got him, once the most prominent philosopher in the world, booted from academic prominence, deleted from libraries, and erased from history.
Rorty's 1970s socialist side does occasionally get trotted out by The New Yorker to support identity politics whenever needed, despite his explicit rejection of that concept by name. His patriotic side, which emerged from his five-decade pursuit of postmodern thought, gets no coverage in The New Republic or anywhere else. National pride, Rorty said, is to countries what self-respect is to individuals – a necessary condition for self-improvement. Hearing that could put some freshpersons in the campus safe space for a few days. Are the kittens ready?
Postmodern sock puppets, Derrida, Foucault, and Rorty are condemned by conservatives and loved by liberals. Both read into them whatever they want and don’t want to hear. Appropriation? Or interpretation?
Derrida would probably approve. He is “dead.” And he can make no “claim” to the words he “wrote.” There is nothing outside the text.
Smart Folk Often Full of Crap, Study Finds
Posted by Bill Storage in Uncategorized on April 22, 2021
For most of us, there is a large gap between what we know and what we think we know. We hold a level of confidence about our factual knowledge and predictions that doesn’t match our abilities. Since our personal decisions are really predictions about the future based on our available present knowledge, it makes sense to work toward adjusting our confidence to match our skill.
Last year I measured the knowledge-confidence gap of 3500 participants in a trivia game with a twist. For each True/False trivia question the respondents specified their level of confidence (between 50 and 100% inclusive) with each answer. The questions, presented in banks of 10, covered many topics and ranged from easy (American stop signs have 8 sides) to expert (Stockholm is further west than Vienna).
I ran this experiment on a website using 1500 True/False questions, about half of which belonged to specific categories including music, art, current events, World War II, sports, movies and science. Visitors could choose the category "Various" or a specific category. I asked for personal information such as age, gender, current profession, title, and education. About 20% of site visitors gave most of that information. 30% provided their professions.
Participants were told that the point of the game was not to get the questions right but to have an appropriate level of confidence. For example, if your average confidence value is 75%, then 75% of your answers should be correct. If your confidence and accuracy match, you are said to be calibrated. Otherwise you are either overconfident or underconfident. Overconfidence – sometimes extreme – is more common, though a small percentage of people are significantly underconfident.
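To make the arithmetic concrete, here is a minimal sketch in Python of the calibration measure described above. The responses are invented for illustration; the real site stored one (correct, confidence) pair per answered question.

```python
# Minimal sketch of the calibration arithmetic described above.
# The responses below are invented for illustration.
responses = [  # (was the answer correct?, stated confidence)
    (True, 0.95), (False, 0.90), (True, 0.60), (True, 0.75),
    (False, 0.85), (True, 0.55), (True, 1.00), (False, 0.70),
]

accuracy = sum(correct for correct, _ in responses) / len(responses)
mean_confidence = sum(conf for _, conf in responses) / len(responses)
overconfidence = mean_confidence - accuracy   # positive means overconfident

print(f"accuracy:        {accuracy:.0%}")
print(f"mean confidence: {mean_confidence:.0%}")
print(f"overconfidence:  {overconfidence:+.0%}")
```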
Overconfidence in group decisions is particularly troubling. Groupthink – collective overconfidence and rationalized cohesiveness – is a well known example. A more common, more subtle, and often more dangerous case exists when social effects and the perceived superiority of judgment of a single overconfident participant lead to unconscious suppression of valid input from a majority of team members. The latter, for example, explains the Challenger launch decision more than classic groupthink does, though groupthink is often cited as the cause.
I designed the trivia quiz system so that each group of ten questions under the Various label included one that dealt with a subject about which people are particularly passionate – environmental or social justice issues. I got this idea from Hans Rosling’s book, Factfulness. As expected, respondents were both overwhelmingly wrong and acutely overconfident about facts tied to emotional issues, e.g., net change in Amazon rainforest area in last five years.
I encouraged people to take a few passes through the Various category before moving on to the specialty categories. Assuming that the first specialty category that respondents chose was their favorite, I found them to be generally more overconfident about topics they presumably knew best. For example, those that first selected Music and then Art showed both higher resolution (correctness) and higher overconfidence in Music than they did in Art.
Mean overconfidence for all first-chosen specialties was 12%. Mean overconfidence for second-chosen categories was 9%. One interpretation is that people are more overconfident about that which they know best. Respondents’ overconfidence decreased progressively as they answered more questions. In that sense the system served as confidence calibration training. Relative overconfidence in the first specialty category chosen was present even when the effect of improved calibration was screened off, however.
For the first 10 questions, mean overconfidence in the Various category was 16% (16% for males, 14% for females). Mean overconfidence for the nine question in each group excepting the “passion” question was 13%.
Overconfidence seemed to be constant across professions, but increased about 1.5% with each level of college education. PhDs are 4.2% more overconfident than high school grads. I’ll leave that to sociologists of education to interpret. A notable exception was a group of analysts from a research lab who were all within a point or two of perfect calibration even on their first 10 questions. Men were slightly more overconfident than women. Underconfidence (more than 5% underconfident) was absent in men and present in 6% of the small group identifying as women (98 total).
The nature of overconfidence is seen in the plot of resolution (response correctness) vs. confidence. Our confidence roughly matches our accuracy up to the point where confidence is moderately high, around 85%. After this, increased confidence occurs with no increase in accuracy. At a confidence level of 100%, respondents were, on average, less correct than they were at 95% confidence. Much of that effect stemmed from the one "trick" question in each group of 10; people tend to be confident but wrong about hot topics with high media coverage.
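For readers who want to reproduce that kind of plot from their own data, here is a minimal sketch (again in Python, with invented responses) of binning answers by stated confidence and comparing each bin's confidence to its observed accuracy.

```python
# Sketch of a calibration curve: bin answers by stated confidence and
# compare each bin's confidence to its observed accuracy.
# The responses are invented for illustration.
from collections import defaultdict

responses = [  # (was the answer correct?, stated confidence)
    (True, 0.55), (False, 0.60), (True, 0.75), (True, 0.80),
    (False, 0.85), (True, 0.90), (False, 0.95), (True, 1.00),
]

bins = defaultdict(list)
for correct, conf in responses:
    bins[round(conf, 1)].append(correct)   # 0.1-wide confidence bins

for b in sorted(bins):
    accuracy = sum(bins[b]) / len(bins[b])
    print(f"confidence ~{b:.0%}: accuracy {accuracy:.0%} over {len(bins[b])} answer(s)")
```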
The distribution of confidence values expressed by participants was nominally bimodal. People expressed very high or very low confidence about the accuracy of their answers. The slight bump in confidence at 75% is likely an artifact of the test methodology. The default value of the confidence slider (a website user interface element) was 75%. On clicking the Submit button, users were warned if most of their responses specified the default value, but an acquiescence effect appears to have been present anyway. In Superforecasting, Philip Tetlock observed that many people seem to have a "three settings" (yes, no, maybe) mindset about matters of probability. That could also explain the slight peak at 75%.
I’ve been using a similar approach to confidence calibration in group decision settings for the past three decades. I learned it from a DoD publication by Sarah Lichtenstein and Baruch Fischhoff while working on the Midgetman Small Intercontinental Ballistic Missile program in the mid 1980s. Doug Hubbard teaches a similar approach in his book The Failure of Risk Management. In my experience with diverse groups contributing to risk analysis, where group decisions about likelihood of uncertain events are needed, an hour of training using similar tools yields impressive improvements in calibration as measured above.
The website I used for this experiment (https://www.congap.com/) is still live with most of the features enabled. It's running on a cheap hosting platform and may be slow to load (time to spin up an instance) if it hasn't been accessed recently. Give it a minute. Performance is good once it loads.
Risk Neutrality and Corporate Risk Frameworks
Posted by Bill Storage in Uncategorized on October 27, 2020
Wikipedia describes risk-neutrality in these terms: “A risk neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes, so a risk-neutral party is indifferent between choices with equal expected payoffs even if one choice is riskier”
While a useful definition, it doesn't really help us get to the bottom of things, since we don't all remotely agree on what "riskier" means. Sometimes, by "risk," we mean an unwanted event: "falling asleep at the wheel is one of the biggest risks of nighttime driving." Sometimes we equate "risk" with the probability of the unwanted event: "the risk of losing in roulette is 35 out of 36." Sometimes we mean the statistical expectation. And so on.
When the term “risk” is used in technical discussions, most people understand it to involve some combination of the likelihood (probability) and cost (loss value) of an unwanted event.
We can compare both the likelihoods and the costs of different risks, but deciding which is “riskier” using a one-dimensional range (i.e., higher vs. lower) requires a scalar calculus of risk. If risk is a combination of probability and severity of an unwanted outcome, riskier might equate to a larger value of the arithmetic product of the relevant probability (a dimensionless number between zero and one) and severity, measured in dollars.
But defining risk as such a scalar (area under the curve, therefore one dimensional) value is a big step, one that most analyses of human behavior suggests is not an accurate representation of how we perceive risk. It implies risk-neutrality.
Most people agree, as Wikipedia states, that a risk-neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes. On that view, a risk-neutral party is indifferent between all choices having equal expected payoffs.
Under this definition, if risk-neutral, you would have no basis for preferring any of the following four choices over another:
1) a 50% chance of winning $100.00
2) An unconditional award of $50.
3) A 0.01% chance of winning $500,000.00
4) A 90% chance of winning $55.56.
If risk-averse, you’d prefer choices 2 or 4. If risk-seeking, you’d prefer 1 or 3.
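As a quick sanity check, a few lines of Python (illustrative only) confirm that the four gambles above share the same expected payoff:

```python
# Quick check that the four choices above share the same expected payoff.
choices = {
    "1: 50% chance of $100.00":       (0.50,   100.00),
    "2: sure $50.00":                 (1.00,    50.00),
    "3: 0.01% chance of $500,000.00": (0.0001, 500_000.00),
    "4: 90% chance of $55.56":        (0.90,    55.56),
}

for label, (p, payoff) in choices.items():
    print(f"{label:33s} expected value = ${p * payoff:,.2f}")
```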
Now let’s imagine, instead of potential winnings, an assortment of possible unwanted events, termed hazards in engineering, for which we know, or believe we know, the probability numbers. One example would be to simply turn the above gains into losses:
1) a 50% chance of losing $100.00
2) An unconditional payment of $50.
3) A 0.01% chance of losing $500,000.00
4) A 90% chance of losing $55.56.
In this example, there are four different hazards. Many argue that rational analysis of risk entails quantification of hazard severities, independent of whether their probabilities are quantified. Above we have four risks, all having the same $50 expected value (cost), labeled 1 through 4. Whether those four risks can be considered equal depends on whether you are risk-neutral.
If forced to accept one of the four risks, a risk-neutral person would be indifferent to the choice; a risk seeker might choose risk 3, etc. Banks are often found to be risk-averse. That is, they will pay more to prevent risk 3 than to prevent risk 4, even though they have the same expected value. Viewed differently, banks often pay much more to prevent one occurrence of hazard 3 (cost = $500,000) than to prevent 9000 occurrences of hazard 4 (cost = $500,000).
Businesses compare risks to decide whether to reduce their likelihood, to buy insurance, or to take other actions. They often use a heat-map approach (sometimes called risk registers) to visualize risks. Heat maps plot probability vs severity and view any particular risk’s riskiness as the area of the rectangle formed by the axes and the point on the map representing that risk. Lines of constant risk therefore look like y = 1 / x. To be precise, they take the form of y = a/x where a represents a constant number of dollars called the expected value (or mathematical expectation or first moment) depending on area of study.
By plotting the four probability-cost vector values (coordinates) of the above four risks, we see that they all fall on the same line of constant risk. A sample curve of this form, representing a line of constant risk, appears below on the left.
In my example above, the four points (50% chance of losing $100, etc.) have a large range of probabilities. Plotting these actual values on a simple grid isn’t very informative because the data points are far from the part of the plotted curve where the bend is visible (plot below on the right).
Students of high-school algebra know the fix for the problem of graphing data of this sort (monomials) is to use log paper. By plotting equations of the form described above using logarithmic scales for both axes, we get a straight line, having data points that are visually compressed, thereby taming the large range of the data, as below.
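Here is a minimal sketch of that log-log plot, assuming Python with numpy and matplotlib available; the four points are the loss risks listed above and the line is p × cost = $50.

```python
# Sketch: the four loss risks above all sit on the same line of constant
# risk, p * cost = $50, which plots as a straight line on log-log axes.
import matplotlib.pyplot as plt
import numpy as np

probs = [0.50, 1.00, 0.0001, 0.90]
costs = [100.00, 50.00, 500_000.00, 55.56]

p = np.logspace(-5, 0, 200)                      # probability axis
plt.loglog(p, 50.0 / p, label="constant risk: p x cost = $50")
plt.loglog(probs, costs, "o", label="the four risks")
plt.xlabel("probability of loss")
plt.ylabel("cost of loss ($)")
plt.legend()
plt.show()
```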
The risk frameworks used in business take a different approach. Instead of plotting actual probability values and actual costs, they plot scores, say from one to ten. Their reason for doing this is more likely to convert an opinion into a numerical value than to cluster data for easy visualization. Nevertheless, plotting scores – on linear, not logarithmic, scales – inadvertently clusters data, though the data might have lost something in the translation to scores in the range of 1 to 10. In heat maps, this compression of data has the undesirable psychological effect of implying much smaller ranges for the relevant probability values and costs of the risks under study.
A rich example of this effect is seen in the 2002 PmBok (Project Management Body of Knowledge) published by the Project Management Institute. It assigns a score (which it curiously calls a rank) of 10 for probability values in the range of 0.5, a score of 9 for p=0.3, and a score of 8 for p=0.15. It should be obvious to most having a background in quantified risk that differentiating failure probabilities of .5, .3, and .15 is pointless and indicative of bogus precision, whether the probability is drawn from observed frequencies or from subjectivist/Bayesian-belief methods.
The methodological problem described above exists in frameworks that are implicitly risk-neutral. The real problem with the implicit risk-neutrality of risk frameworks is that very few of us – individuals or corporations – are risk-neutral. And no framework is right to tell us that we should be. Saying that it is somehow rational to be risk-neutral pushes the definition of rationality too far.
As proud king of a small distant planet of 10 million souls, you face an approaching comet that, on impact, will kill one million (10%) in your otherwise peaceful world. Your scientists and engineers rush to build a comet-killer nuclear rocket. The untested device has a 90% chance of destroying the comet but a 10% chance of exploding on launch thereby killing everyone on your planet. Do you launch the comet-killer, knowing that a possible outcome is total extinction? Or do you sit by and watch one million die from a preventable disaster? Your risk managers see two choices of equal riskiness: 100% chance of losing one million and a 10% chance of losing 10 million. The expected value is one million lives in both cases. But in that 10% chance of losing 10 million, there is no second chance. It’s an existential risk.
If these two choices seem somehow different, you are not risk-neutral. If you’re tempted to leave problems like this in the capable hands of ethicists, good for you. But unaware boards of directors have left analogous dilemmas in the incapable hands of simplistic and simple-minded risk frameworks.
The risk-neutrality embedded in risk frameworks is a subtle and pernicious case of Hume's Guillotine – an inference from "is" to "ought" concealed within a fact-heavy argument. No amount of data, whether measured frequencies or subjective probability estimates, whether historical expenses or projected costs, even if recorded as PmBok's scores and ranks, can justify risk-neutrality to parties who are not risk-neutral. So why embed it in the frameworks our leading companies pay good money for?
The Dose Makes the Poison
Posted by Bill Storage in Uncategorized on October 19, 2020
Toxicity is binary in California. Or so says its governor and most of its residents.
Governor Newsom, who believes in science, recently signed legislation making California the first state to ban 24 toxic chemicals in cosmetics.
The governor’s office states “AB 2762 bans 24 toxic chemicals in cosmetics, which are linked to negative long-term health impacts especially for women and children.”
The “which” in that statement is a nonrestrictive pronoun, and the comma preceding it makes the meaning clear. The sentence says that all toxic chemicals are linked to health impacts and that AB 2762 bans 24 of them – as opposed to saying 24 chemicals that are linked to health effects are banned. One need not be a grammarian or George Orwell to get the drift.
California continues down the chemophobic path, established in the 1970s, of viewing all toxicity through the beloved linear no-threshold lens. That lens has served gullible Californians well since 1974, when the Sierra Club, which had until then supported nuclear power as "one of the chief long-term hopes for conservation," teamed up with the likes of Gov. Jerry Brown (1975-83, 2011-19) and William Newsom – Gavin's dad, investment manager for Getty Oil – to scare the crap out of science-illiterate Californians about nuclear power.
That fear-mongering enlisted Ralph Nader, Paul Ehrlich and other leading Malthusians, rock stars, oil millionaires and overnight-converted environmentalists. It taught that nuclear plants could explode like atom bombs, and that anything connected to nuclear power was toxic – in any dose. At the same time Governor Brown, whose father had deep oil ties, found that new fossil fuel plants could be built "without causing environmental damage." The Sierra Club agreed, and secretly took barrels of cash from fossil fuel companies for the next four decades – $25M in 2007 from subsidiaries of, and people connected to, Chesapeake Energy.
What worked for nuclear also works for chemicals. “Toxic chemicals have no place in products that are marketed for our faces and our bodies,” said First Partner Jennifer Siebel Newsom in response to the recent cosmetics ruling. Jennifer may be unaware that the total amount of phthalates in the banned zipper tabs would yield very low exposure indeed.
Chemicals cause cancer, especially in California, where you cannot enter a parking garage, nursery, or Starbucks without reading a notice that the place can "expose you to chemicals known to the State of California to cause birth defects." California's litigator-lobbied legislators authored Proposition 65 in a way that encourages citizens to rat on violators, the "citizen enforcers" receiving 25% of any penalties assessed by the court. The proposition led chemophobes to understand that anything "linked to cancer" causes cancer. It exaggerates theoretical cancer risks, stymying the ability of the science-ignorant educated class to make reasonable choices about actual risks like measles and fungus.
California’s linear no-threshold conception of chemical carcinogens actually started in 1962 with Rachel Carson’s Silent Spring, the book that stopped DDT use, saving all the birds, with the minor side effect of letting millions of Africans die of malaria who would have survived (1, 2, 3) had DDT use continued.
But ending DDT didn't save the birds, because DDT wasn't the cause of US bird death as Carson reported, because the bird death at the center of her impassioned plea never happened. This has been shown by many subsequent studies; and Carson, in her work at the Fish and Wildlife Service and through her participation in Audubon bird counts, certainly had access to data showing that the eagle population had doubled, and robin, catbird, and dove counts had increased by 500%, between the time DDT was introduced and her eloquent, passionate telling of the demise of the days that once "throbbed with the dawn chorus of robins, catbirds, and doves."
Carson also said that increasing numbers of children were suffering from leukemia, birth defects and cancer, and of “unexplained deaths,” and that “women were increasingly unfertile.” Carson was wrong about increasing rates of these human maladies, and she lied about the bird populations. Light on science, Carson was heavy on influence: “Many real communities have already suffered.”
In 1969 the Environmental Defense Fund demanded a hearing on DDT. After eight months of testimony, the examiner's verdict concluded DDT was not mutagenic or teratogenic. No cancer, no birth defects. It found no "deleterious effect on freshwater fish, estuarine organisms, wild birds or other wildlife."
William Ruckelshaus, the first director of the EPA, didn't attend the hearings or read the transcript. Pandering to the mob, he chose to ban DDT in the US anyway. It was replaced by more harmful pesticides in the US and the rest of the world. In praising Ruckelshaus, who died last year, NPR, the NY Times and the Puget Sound Institute described his having a "preponderance of evidence" of DDT's damage, never mentioning the verdict of that hearing.
When Al Gore took up the cause of climate, he heaped praise on Carson, calling her book “thoroughly researched.” Al’s research on Carson seems of equal depth to Carson’s research on birds and cancer. But his passion and unintended harm have certainly exceeded hers. A civilization relying on the low-energy-density renewables Gore advocates will consume somewhere between 100 and 1000 times more space for food and energy than we consume at present.
California’s fallacious appeal to naturalism regarding chemicals also echoes Carson’s, and that of her mentor, Wilhelm Hueper, who dedicated himself to the idea that cancer stemmed from synthetic chemicals. This is still overwhelmingly the sentiment of Californians, despite the fact that the smoking-tar-cancer link now seems a bit of a fluke. That is, we expected the link between other “carcinogens” and cancer to be as clear as the link between smoking and cancer. It is not remotely. As George Johnson, author of The Cancer Chronicles, wrote, “as epidemiology marches on, the link between cancer and carcinogen seems ever fuzzier” (re Tomasetti on somatic mutations). Carson’s mentor Hueper, incidentally, always denied that smoking caused cancer, insisting toxic chemicals released by industry caused lung cancer.
This brings us back to the linear no-threshold concept. If a thing kills mice in high doses, then any dose to humans is harmful – in California. And that's accepting that what happens in mice happens in humans, but mice lie and monkeys exaggerate. Outside California, most people are at least aware of certain hormetic effects (U-shaped dose-response curves). Small amounts of Vitamin C prevent scurvy; large amounts cause nephrolithiasis. Small amounts of penicillin promote bacteria growth; large amounts kill them. There is even evidence of biopositive effects from low-dose radiation, suggesting that 6000 millirems a year might be best for your health. The current lower-than-baseline levels of cancers in 10,000 residents of Taiwan accidentally exposed to radiation-contaminated steel, in doses ranging from 13 to 160 mSv/yr for ten years starting in 1982, is a fascinating case.
Radiation aside, perpetuating a linear no-threshold conception of toxicity in the science-illiterate electorate for political reasons is deplorable, as is the educational system that produces degreed adults who are utterly science-illiterate – but “believe in science” and expect their government to dispense it responsibly. The Renaissance physician Paracelsus knew better half a millennium ago when he suggested that that substances poisonous in large doses may be curative in small ones, writing that “the dose makes the poison.”
To demonstrate chemophobia in 2003, Penn Jillette and an assistant effortlessly convinced people in a beach community, one after another, to sign a petition to ban dihydrogen monoxide (H2O). Water is of course toxic in high doses, causing hyponatremia, seizures and brain damage. But I don't think Paracelsus would have signed the petition.
The Prosecutor’s Fallacy Illustrated
Posted by Bill Storage in Probability and Risk on May 7, 2020
“The first thing we do, let’s kill all the lawyers.” – Shakespeare, Henry VI, Part 2, Act IV
My last post discussed the failure of most physicians to infer the chance a patient has the disease given a positive test result where both the frequency of the disease in the population and the accuracy of the diagnostic test are known. The probability that the patient has the disease can be hundreds or thousands of times lower than the accuracy of the test. The problem in reasoning that leads us to confuse these very different likelihoods is one of several errors in logic commonly called the prosecutor’s fallacy. The important concept is conditional probability. By that we mean simply that the probability of x has a value and that the probability of x given that y is true has a different value. The shorthand for probability of x is p(x) and the shorthand for probability of x given y is p(x|y).
"Punching, pushing and slapping is a prelude to murder," said prosecutor Scott Gordon during the trial of OJ Simpson for the murder of Nicole Brown. Alan Dershowitz countered with the argument that the probability of domestic violence leading to murder was very remote. Dershowitz (not the prosecutor but a defense advisor in this case) was right, technically speaking. But he was either as ignorant as the physicians interpreting the lab results or was giving a dishonest argument, or possibly both. The relevant probability was not the likelihood of murder given domestic violence; it was the likelihood of murder given domestic violence and murder. "The courtroom oath – to tell the truth, the whole truth and nothing but the truth – is applicable only to witnesses," said Dershowitz in The Best Defense. In Innumeracy: Mathematical Illiteracy and Its Consequences, John Allen Paulos called Dershowitz's point "astonishingly irrelevant," noting that utter ignorance about probability and risk "plagues far too many otherwise knowledgeable citizens." Indeed.
The doctors’ mistake in my previous post was confusing
P(positive test result | disease) vs.
P(disease | positive test result)
Dershowitz’s argument confused
P(husband killed wife | husband battered wife) vs.
P(husband killed wife | husband battered wife and wife was murdered)
In Reckoning With Risk, Gerd Gigerenzer gave a 90% value for the latter Simpson probability. What Dershowitz cited was the former, which we can estimate at 0.1%, given a wife-battery rate of one in ten, and wife-murder rate of one per hundred thousand. So, contrary to what Dershowitz implied, prior battery is a strong indicator of guilt when a wife has been murdered.
As mentioned in the previous post, the relevant mathematical rule does not involve advanced math. It’s a simple equation due to Pierre-Simon Laplace, known, oddly, as Bayes’ Theorem:
P(A|B) = P(B|A) * P(A) / P(B)
If we label the hypothesis (patient has disease) as D and the test data as T, the useful form of Bayes’ Theorem is
P(D|T) = P(T|D) P(D) / P(T) where P(T) is the sum of probabilities of positive results, e.g.,
P(T) = P(T|D) * P(D) + P(T | not D) * P(not D) [using “not D” to mean “not diseased”]
Cascells’ phrasing of his Harvard quiz was as follows: “If a test to detect a disease whose prevalence is 1 out of 1,000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease?”
Plugging in the numbers from the Cascells experiment (the parameters Cascells provided come first; the correct answer appears in the last line):
- P(D) is the disease frequency = 0.001 [ 1 per 1000 in population ] therefore:
- P(not D) is 1 – P(D) = 0.999
- P(T | not D) = 5% = 0.05 [ false positive rate also 5%] therefore:
- P(T | D) = 95% = 0.95 [ i.e., the false negative rate is 5% ]
P(T) = .95 * .001 + .999 * .05 = 0.0509 ≈ 5.1% [ total probability of a positive test ]
P(D|T) = .95 * .001 / .0509 = .0019 ≈ 2% [ probability that patient has disease, given a positive test result ]
I hope this seeing-is-believing illustration of Cascells' experiment drives the point home for those still uneasy with equations. I used Cascells' rates and a population of 100,000 to avoid dealing with fractional people:
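For those who prefer code to diagrams, here is a minimal Python sketch of the same numbers, done both with Bayes' Theorem and with whole-person counts in a population of 100,000:

```python
# The Cascells numbers, via Bayes' Theorem and via counts of people.
p_d      = 0.001   # disease prevalence, 1 in 1000
p_t_d    = 0.95    # P(positive | diseased): sensitivity
p_t_notd = 0.05    # P(positive | not diseased): false positive rate

p_t   = p_t_d * p_d + p_t_notd * (1 - p_d)        # total P(positive), ~0.051
p_d_t = p_t_d * p_d / p_t                         # Bayes' Theorem,    ~0.019
print(f"P(T) = {p_t:.4f}, P(D|T) = {p_d_t:.4f}")

# The same thing with whole people:
pop = 100_000
diseased        = pop * p_d                       # 100 people
true_positives  = diseased * p_t_d                # 95 people
false_positives = (pop - diseased) * p_t_notd     # 4,995 people
share = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} "
      f"positives actually have the disease = {share:.1%}")
```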
Extra credit: how exactly does this apply to Covid, news junkies?
Edit 5/21/20. An astute reader called me on an inaccuracy in the diagram. I used an approximation without identifying it. P = r1/r2 is a cheat for P = 1 – Exp(-r1/r2). The approximation is more intuitive, though technically wrong. It's a good cheat for P values less than 10%.
Note 5/22/20. In response to questions about how this sort of thinking bears on coronavirus testing -what test results say about prevalence – consider this. We really have one equation in 3 unknowns here: false positive rate, false negative rate, and prevalence in population. A quick Excel variations study using false positive rates from 1 to 20% and false neg rates from 1 to 3 percent, based on a quick web search for proposed sensitivity/specificity for the Covid tests is revealing. Taking the low side of the raw positive rates from the published data (1 – 3%) results in projected prevalence roughly equal to the raw positive rates. I.e., the false positives and false negatives happen to roughly wash out in this case. That also leaves P(d|t) in the range of a few percent.
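For the curious, the kind of variations study described in that note can be sketched in a few lines of Python. The equation being solved is raw positive rate = prevalence × (1 − FNR) + (1 − prevalence) × FPR; the 2% raw positive rate and the error-rate grid below are illustrative assumptions, not published figures for any particular Covid test.

```python
# Sketch of the variations study described above: solve
#   raw = prev * (1 - fnr) + (1 - prev) * fpr
# for prevalence over a grid of assumed error rates.
# The 2% raw positive rate and the grids are illustrative assumptions.
raw_positive_rate = 0.02

for fpr in (0.01, 0.02, 0.05, 0.10):      # assumed false positive rates
    for fnr in (0.01, 0.03):              # assumed false negative rates
        prev = (raw_positive_rate - fpr) / (1 - fnr - fpr)
        # A negative result means that assumed FPR is inconsistent with
        # the observed raw rate (prevalence is effectively near zero).
        print(f"FPR={fpr:.0%} FNR={fnr:.0%} -> implied prevalence {prev:+.1%}")
```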
Innumeracy and Overconfidence in Medical Training
Posted by Bill Storage in History of Science on May 4, 2020
Most medical doctors, having ten or more years of education, can’t do simple statistics calculations that they were surely able to do, at least for a week or so, as college freshmen. Their education has let them down, along with us, their patients. That education leaves many doctors unquestioning, unscientific, and terribly overconfident.
A disturbing lack of doubt has plagued medicine for thousands of years. Galen, at the time of Marcus Aurelius, wrote, “It is I, and I alone, who has revealed the true path of medicine.” Galen disdained empiricism. Why bother with experiments and observations when you own the truth. Galen’s scientific reasoning sounds oddly similar to modern junk science armed with abundant confirming evidence but no interest in falsification. Galen had plenty of confirming evidence: “All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases.”
Galen was still at work 1500 years later when Voltaire wrote that the art of medicine consisted of entertaining the patient while nature takes its course. One of Voltaire’s novels also described a patient who had survived despite the best efforts of his doctors. Galen was around when George Washington died after five pints of bloodletting, a practice promoted up to the early 1900s by prominent physicians like Austin Flint.
But surely medicine was mostly scientific by the 1900s, right? Actually, 20th century medicine was dragged kicking and screaming to scientific methodology. In the early 1900’s Ernest Amory Codman of Massachusetts General proposed keeping track of patients and rating hospitals according to patient outcome. He suggested that a doctor’s reputation and social status were poor measures of a patient’s chance of survival. He wanted the track records of doctors and hospitals to be made public, allowing healthcare consumers to choose suppliers based on statistics. For this, and for his harsh criticism of those who scoffed at his ideas, Codman was tossed out of Mass General, lost his post at Harvard, and was suspended from the Massachusetts Medical Society. Public outcry brought Codman back into medicine, and much of his “end results system” was put in place.
20th century medicine also fought hard against the concept of controlled trials. Austin Bradford Hill introduced the concept to medicine in the mid 1920s. But in the mid 1950s Dr. Archie Cochrane was still fighting valiantly against what he called the God Complex in medicine, which was basically the ghost of Galen; no one should question the authority of a physician. Cochrane wrote that far too much of medicine lacked any semblance of scientific validation and knowledge of what treatments actually worked. He wrote that the medical establishment was hostile to the idea of controlled trials. Cochrane fought this into the 1970s, authoring Effectiveness and Efficiency: Random Reflections on Health Services in 1972.
Doctors aren't naturally arrogant. The God Complex is passed along during the long years of an MD's education and internship. That education includes rites of passage in an old boys' club that thinks sleep deprivation builds character in interns, and that female med students should make tea for the boys. Once on the other side, tolerance of archaic norms in the MD culture seems less offensive to the inductee, who comes to accept the system. And the business of medicine, the way it's regulated, and its control by insurance firms, pushes MDs to view patients as a job to be done cost-effectively. Medical arrogance is in a sense encouraged by recovering patients who might see doctors as savior figures.
As Daniel Kahneman wrote, "generally, it is considered a weakness and a sign of vulnerability for clinicians to appear unsure." Medical overconfidence is encouraged by patients' preference for doctors who communicate certainties, even when uncertainty stems from technological limitations, not from doctors' subject knowledge. MDs should be made conscious of such dynamics and strive to resist inflating their self-importance. As Allan Berger wrote in Academic Medicine in 2002, "we are but an instrument of healing, not its source."
Many in medical education are aware of these issues. The calls for medical education reform – both content and methodology – are desperate, but they are eerily similar to those found in a 1924 JAMA article, Current Criticism of Medical Education.
Covid19 exemplifies the aspect of medical education I find most vile. Doctors can’t do elementary statistics and probability, and their cultural overconfidence renders them unaware of how critically they need that missing skill.
A 1978 study, brought to the mainstream by psychologists like Kahneman and Tversky, showed how few doctors know the meaning of a positive diagnostic test result. More specifically, they're ignorant of the relationship between the sensitivity and specificity (true positive and true negative rates) of a test and the probability that a patient who tested positive has the disease. This lack of knowledge has real consequences in certain situations, particularly when the base rate of the disease in a population is low. The resulting probability judgements can be wrong by factors of hundreds or thousands.
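To see how much the base rate matters, here is a minimal Python sketch that holds the test fixed at 95% sensitivity and 95% specificity and varies only the prevalence (the figures are illustrative):

```python
# Sketch: the same 95%-sensitive, 95%-specific test yields very different
# posteriors depending on the base rate of the disease.
sens, spec = 0.95, 0.95

for prevalence in (0.0001, 0.001, 0.01, 0.1, 0.5):
    p_pos = sens * prevalence + (1 - spec) * (1 - prevalence)
    posterior = sens * prevalence / p_pos
    print(f"prevalence {prevalence:7.2%} -> P(disease | positive) = {posterior:.1%}")
```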
In the 1978 study (Cascells et al.) doctors and medical students at Harvard teaching hospitals were given a diagnostic challenge. "If a test to detect a disease whose prevalence is 1 out of 1,000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease?" As described, the true positive rate of the diagnostic test is 95%. This is a classic conditional-probability quiz from the second week of a probability class. Being right requires a) knowing Bayes' Theorem, and b) being able to multiply and divide. Not being confidently wrong requires only one thing: scientific humility – the realization that all you know might be less than all there is to know. The correct answer is 2% – there's a 2% likelihood the patient has the disease. The most common response, by far, in the 1978 study was 95%, which is wrong by 4750%. Only 18% of doctors and med students gave the correct response. The study's authors observed that in the group tested, "formal decision analysis was almost entirely unknown and even common-sense reasoning about the interpretation of laboratory data was uncommon."
As mentioned above, this story was heavily publicized in the 80s. It was widely discussed by engineering teams, reliability departments, quality assurance groups and math departments. But did it impact medical curricula, problem-based learning, diagnostics training, or any other aspect of the way med students were taught? One might have thought yes, if for no reason than to avoid criticism by less prestigious professions having either the relevant knowledge of probability or the epistemic humility to recognize that the right answer might be far different from the obvious one.
Similar surveys were done in 1984 (David M Eddy) and in 2003 (Kahan, Paltiel) with similar results. In 2013, Manrai and Bhatia repeated Cascells' 1978 survey with the exact same wording, getting trivially better results: 23% answered correctly. They suggested that medical education "could benefit from increased focus on statistical inference." That was 35 years after Cascells, during which the phenomenon was popularized by the likes of Daniel Kahneman, from the perspective of base-rate neglect, by Philip Tetlock, from the perspective of overconfidence in forecasting, and by David Epstein, from the perspective of the tyranny of specialization.
Over the past decade, I’ve asked the Cascells question to doctors I’ve known or met, where I didn’t think it would get me thrown out of the office or booted from a party. My results were somewhat worse. Of about 50 MDs, four answered correctly or were aware that they’d need to look up the formula but knew that it was much less than 95%. One was an optometrist, one a career ER doc, one an allergist-immunologist, and one a female surgeon – all over 50 years old, incidentally.
Despite the efforts of a few radicals in the Accreditation Council for Graduate Medical Education and some post-Flexnerian reformers, medical education remains, as Jonathan Bush points out in Where Does It Hurt?, basically a 2000 year old subject-based and lecture-based model developed at a time when only the instructor had access to a book. Despite those reformers, basic science has actually diminished in recent decades, leaving many physicians with less of a grasp of scientific methodology than that held by Ernest Codman in 1915. Medical curriculum guardians, for the love of God, get over your stodgy selves and replace the calculus badge with applied probability and statistical inference from diagnostics. Place it later in the curriculum than pre-med, and weave it into some of that flipped-classroom, problem-based learning you advertise.
55 Saves Lives
Posted by Bill Storage in Systems Engineering on April 15, 2020
Congress and Richard Nixon had no intention to pull a bait-and-switch when they enacted the National Maximum Speed Law (NMSL) on Jan. 2, 1974. An emergency response to an embargo, the NMSL (Public Law 93-239) specified that it was “an act to conserve energy on the Nation’s highways.” Conservation, in this context, meant reducing oil consumption to prevent the embargo proclaimed by the Organization of Arab Petroleum Exporting Countries in October 1973 from seriously impacting American production or causing a shortage of oil then used for domestic heating. There was a precedent. A national speed limit had been imposed for the same reasons during World War II.
By the summer of 1974 the threat of oil shortage was over. But unlike the case after the war, many government officials, gently nudged by auto insurance lobbies, argued that the reduced national speed limit would save tens of thousands of lives annually. Many drivers conspicuously displayed their allegiance to the cause with bumper stickers reminding us that “55 Saves Lives.” Bad poetry, you may say in hindsight, a sorry attempt at trochaic monometer. But times were desperate and less enlightened drivers had to be brought onboard. We were all in it together.
Over the next ten years, the NMSL became a major boon to jurisdictions crossed by interstate highways, some earning over 80% of their revenues from speeding fines. Studies reached conflicting findings over whether the NMSL had saved fuel or lives. The former seems undeniable at first glance, but the resulting increased congestion caused frequent brake/stop/accelerate effects in cities, and the acceleration phase is a gas guzzler. Those familiar with fluid mechanics note that the traffic capacity of a highway is proportional to the speed driven on it. Some analyses showed decreased fuel efficiency (net miles per gallon). The most generous analyses reported a less than 1% decrease in consumption.
No one could argue that 55 mph collisions were more dangerous than 70 mph collisions. But some drivers, particularly in the west, felt betrayed after being told that the NMSL was an emergency measure (”during periods of current and imminent fuel shortages”) to save oil and then finding it would persist indefinitely for a new reason, to save lives. Hicks and greasy trucker pawns of corporate fat cats, my science teachers said of those arguing to repeal the NMSL.
The matter was increasingly argued over the next twelve years. The states’ rights issue was raised. Some remembered that speed limits had originally been set by a democratic 85% rule. The 85th percentile speed of drivers on an unposted highway became the limit for that road. Auto fatality rates had dropped since 1974, and everyone had their theories as to why. A case was eventually made for an experimental increase to 65 mph, approved by Congress in December 1987. The insurance lobby predicted carnage. Ralph Nader announced that “history will never forgive Congress for this assault on the sanctity of human life.”
Between 1987 and 1995, 40 states moved to the 65 limit. Auto fatality rates continued to decrease as they had done between 1973 and 1987, during which time some radical theorists had argued that the sudden drop in fatality rate in early 1974 had been a statistical blip that regressed to the mean a year later, and that better cars and seat belt usage accounted for the decreased mortality. Before 1987, those arguments were commonly understood to be mere rationalizations.
In December 1995, more than twenty years after being enacted, Congress finally undid the NMSL completely. States had the authority to set speed limits. An unexpected result of increasing speed limits to 75 mph in some western states was that, as revealed by unmanned radar, the number of vehicles driving above 80 mph dropped by 85% compared to when the speed limit was 65.
From a systems-theory perspective, it’s clear that the highway transportation network is a complex phenomenon, one resistant to being modeled through facile conjecture about causes and effects, naive assumptions about incentives and human behavior, and ivory-tower analytics.
The Covid Megatilt
Posted by Bill Storage in Uncategorized on April 3, 2020
Playing poker online is far more addictive than gambling in a casino. Online poker, and other online gambling that involves a lot of skill, is engineered for addiction. Online poker allows multiple simultaneous tables. Laptops, tablets, and mobile phones provide faster play than in casinos. Setup time, for an efficient addict, can be seconds per game. Better still, you can rapidly switch between different online games to get just enough variety to eliminate any opportunity for boredom that has not been engineered out of the gaming experience. Completing a hand of Texas Holdem in 45 seconds online increases your chances of fast wins, fast losses, and addiction.
Tilt is what poker players call it when a particular run of bad luck, an opponent’s skill, or that same opponent’s obnoxious communications put you into a mental state where you’re playing emotionally and not rationally. Anger, disgust, frustration and distress are precipitated by bad beats, bluffs gone awry, a run of dead cards, losing to a lower ranked opponent, fatigue, or letting the opponent’s offensive demeanor get under your skin.
Tilt is so important to online poker that many products and commitment devices have emerged to deal with it. Tilt Breaker provides services like monitoring your performance to detect fatigue and automated stop-loss protection that restricts betting or table count after a run of losses.
A few years back, some friends and I demonstrated biometric tilt detection using inexpensive heart rate sensors. We used machine learning with principal dynamic modes (PDM) analysis running in a mobile app to predict sympathetic (stress-inducing, cortisol, epinephrine) and parasympathetic (relaxation, oxytocin) nervous system activity. We then differentiated mental and physical stress using the mobile phone’s accelerometer and location functions. We could ring an alarm to force a player to face being at risk of tilt or ragequit, even if he was ignoring the obvious physical cues. Maybe it’s time to repurpose this technology.
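As a rough illustration only – not the PDM machine-learning model described above, which is considerably more involved – a crude stress proxy can be computed from beat-to-beat heart intervals. RMSSD is a standard, simple indicator of parasympathetic activity, and a drop well below a player’s personal baseline could trigger a tilt warning. Function names, the threshold, and the sample numbers here are hypothetical.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms).
    Lower values generally indicate reduced parasympathetic activity (more stress)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def tilt_alert(recent_rr_ms, baseline_rmssd, threshold=0.6):
    """Flag possible tilt when short-term RMSSD falls well below the player's baseline."""
    return rmssd(recent_rr_ms) < threshold * baseline_rmssd

# Hypothetical example: a resting baseline vs. a fast, low-variability run of beats
baseline = rmssd([820, 810, 835, 800, 845, 815, 830])
stressed = [710, 705, 712, 708, 704, 709, 706]
print(tilt_alert(stressed, baseline))  # True -> ring the alarm
```

A real system would also need the accelerometer and location checks mentioned above to separate mental stress from simply climbing the stairs.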
In past crises, the flow of bad news and peer communications was limited by technology. You could not scroll through radio programs or scan through TV shows. You could click between the three news stations, and then you were stuck. Now you can consume all of what could be home, work, and family time with up-to-the-minute Covid death tolls while blasting your former friends on Twitter and Facebook for their appalling politicization of the crisis.
You yourself are of course innocent of that sort of politicizing. As a seasoned poker player, you know that the more you let emotions take control of your game, the farther your judgments will stray from rational ones.
Still, what kind of utter moron could think that the whole response to Covid is a media hoax? Or that none of it is?
Dr. Myles Mac Evilly.
As early as April 1918 the newly appointed Lord Lieutenant and Military Commissioner in Ireland, Lord French, in a letter to Prime Minister Lloyd George during the Conscription Crisis, advocated the use of airpower “that ought to put the fear of God into those playful young Sinn Feiners” (1), notwithstanding that military operations by Irish Republicans had not by then even begun. Lacking, he thought, enough troops on the ground, he recommended the establishment of strongly entrenched ‘Air Camps’, one in each province in Ireland, out of which military aircraft could “play about” with either bombs or machine-guns “if the locals got out of hand”. However, permission to attach machine-guns to operational aeroplanes in Ireland was not given by Lloyd George until the end of March 1921 (2), on the grounds that their use could be deemed indiscriminate and disproportionate, thus relegating the R.F.C./R.A.F. in the intervening period to a primarily air transport, communications and reconnaissance role in support of the British Army, whose complement in Ireland had increased from 53,000 troops in May 1919 to 83,000 by July 1921 (2b). In effect, the British Army’s role was considered a ‘policing’ one on behalf of the civil powers. Military support against the burgeoning Irish Republican movement was provided to the RIC throughout this period by the British Army, to which the R.F.C./R.A.F. in turn provided air support.
By March 1921, however, with a deteriorating security situation and under pressure from Churchill and Bonham Carter (Officer Commanding R.A.F. Ireland) together with the Chief of the General Staff, General Macready, the R.A.F. now had political permission to become more aggressive against the I.R.A. (3). Co-operation between the British Army/RIC and the R.A.F. was by then better integrated by having an R.A.F. liaison officer embedded in each Brigade headquarters for advice on reconnaissance duties (4), while air-to-ground communication technology and aerial photography were improving.
The degree to which the R.F.C./R.A.F. contributed to the overall British military effort in Ireland during the War of Independence has been questioned in the past (5), but recently a more favourable analysis of their role, primarily in the South, has been made (6). This article attempts to assess whether RAF air power had a similarly influential role on the activities of the I.R.A. in West Mayo, bearing in mind that the West Mayo Brigade did not become a functionally active unit until late in the War of Independence, and only then with the formation of the Flying Column in March 1921.
Organisation and Deployment.
By May 1917 various airfield sites in Ireland were selected by the Lands Officer at Irish Command H.Q. (Army) to facilitate the training of new pilots for the expanding R.F.C. These sites were confirmed by Major Sholto Douglas, O.C. 84 Squadron and later Marshal of the Royal Air Force, while on recuperation leave in Ireland after crashing into a plough-horse on taking off at Treizennes in Northern France. His father, Professor Robert Langton Douglas, was Director of the National Gallery of Ireland in Dublin at the time. In his autobiography ‘Years of Combat’ (Collins, London, 1963) Major Douglas recalls landing his aeroplane in the Phoenix Park, close to the present-day Áras an Uachtaráin, on his arrival in Dublin!
In November 1917 five substantial Training Depot Stations (TDS) were built in Ireland: at Baldonnel (No. 23 TDS, closed 1922) and Tallaght (No. 25 TDS) in west Dublin, Gormanstown (No. 22 TDS) in north Dublin (compulsorily purchased from four local families!), Aldergrove outside Belfast, and Collinstown in north Dublin (No. 24 TDS), as well as the Curragh (No. 19 TDS) in Kildare, which had had a landing field since 1913 and which was also the seat of British military power in Ireland. In 1915 an aerodrome had been commissioned near the Curragh Camp for the R.F.C., which was at that time a corps of the Army. By December 1917 the 19th Training Squadron, with a complement of 24 aircraft in canvas hangars and 328 support personnel, was based there. Later, in 1918, a new unit, the Irish Flying Instructors School, was founded there. These Training Depot Stations were uniform in design and were intended to be permanent structures that included administrative buildings, aeroplane hangars (six in each Station, erected in three pairs), aeroplane maintenance areas, officers’ mess, stores, a wireless station and women’s (WRAF) quarters. Over the period of WW1 (1914-1918) a total of 23 airship stations and airfields were developed throughout the island of Ireland on behalf of the R.F.C./R.A.F., R.N.A.S. and the US Naval Air Service. In addition, as many as 60 emergency landing sites were created in the countryside on the grounds of the landed gentry, at or near a military or R.I.C. barracks, and marked with a large cross. The different aviation schools then in Ireland used training methods pioneered by Lieut.-Colonel Robert Smith-Barry, who had Irish (Cork) heritage.
In April 1918 both 106 and 105 Squadrons, each flying RE8 biplanes, were selected to move to Ireland in support of the British Army in a ‘policing’ role and were fully operational by late May, at Fermoy (106 Sqdn.) and at Strathroy House outside Omagh (105 Sqdn.), the latter at the request of Lord French, ostensibly in an anti-submarine role but in reality for aerial reconnaissance of Nationalist activities as the Conscription Crisis approached. (His militaristic policies later led to his family home in Fermanagh being destroyed by the I.R.A. in April 1921 in a counter-reprisal, prior to his recall to London.) This was the first deployment of military aircraft in Ireland since a flight of No. 2 Sqdn. R.F.C. from Montrose, Scotland, commanded by Major Charles James Bourke and made up of six BE2s and a Farman Longhorn aeroplane, participated in joint Inter-Divisional and Command manoeuvres with the British Army at Rathbane Camp, Co. Limerick, in early September 1913, in the first overseas flight for the R.F.C.
Between January 1918 and October 1919 landing sites were newly developed at Ballylin House near Ferbane, Co. Offaly (so-called ‘RAF Athlone’), at Crinkill House near Birr (53°6′00″N, 7°52′12″W; 1919) on a 14-acre estate, where detachments from 106 Squadron were deployed in January 1918 from Fermoy as part of 11 (Irish) Group, joined by 141 Sqdn. from March 1919, and at Oranmore near Galway City, though not all airfields were operational at the same time. In August 1918, 105 Squadron from Omagh had detachments at Castlebar (C Flight, consisting of six RE8 biplanes commanded by Captn. W. U. Dykes), at Oranmore (A and B Flights under Major Joy), and at Collinstown, the Curragh, Baldonnel and Fermoy (7). In January 1919, because of flooding concerns on the local airfield, 106 Sqdn.’s detached flight at Ferbane moved to Crinkill, Birr, with a reduced complement of 8 officers and 55 men. Castlebar, Fermoy, Omagh and Oranmore were by now designated as 6th Brigade Stations, the 6th Brigade R.F.C. being responsible for air defence of the British Isles during WW1. By January 1919 both Squadrons were re-equipped with the new Bristol F2B Fighters to replace the older and slower RE8 aeroplanes.
No. 141 Squadron, newly equipped with Bristol F2B Fighter aeroplanes, was deployed in March 1919 from Biggin Hill in Kent to Tallaght airfield near Dublin, with Nos. 117 and 149 Squadrons also arriving at Tallaght later in the month, the latter squadron with DH6 aeroplanes serving in an anti-submarine role over the sea approaches to Dublin Port, but arriving six months too late to save the R.M.S. Leinster, torpedoed by UB-123 on 10 October 1918. Tallaght airfield closed in the 1920s. By July 1919 the total complement of R.A.F. personnel in Ireland amounted to 1,789 officers and men. In August 1919 two Army co-operation squadrons, each with a distinguished war-time background, arrived in Ireland as part of a ‘Defence of Ireland Scheme’ (a comprehensive plan devised in October 1918 by the British Government for Ireland in anticipation of a German invasion or a generalised uprising), resulting in the disbandment of 106 Squadron (hitherto a training squadron) at Fermoy on the 8th of October 1919, though some of its aircraft were transferred to 2 Squadron, newly formed at Oranmore on 1st February 1920. Personnel and aircraft from 105 Squadron remained operational in a close air support role in Ireland until 1st February 1920, when it also was disbanded and reconfigured as 2 Squadron at Oranmore. 105 Squadron would later adopt an emerald-coloured battle-axe on its badge to commemorate the squadron’s service in Ireland.
On arrival in Ireland in May 1918 the R.A.F. were legally accountable to the British Army on a day-to-day basis, i.e. to the Competent Military Authority (CMA), a device the British Army used subsequently in other colonial ‘air policing’ actions in Mesopotamia (Iraq), the Transjordan, Somaliland against the ‘Mad Mullah’, and much later against the Mau Mau in Kenya in the 1950s (8). The Armistice came in November 1918, resulting in many skilled aviation personnel being demobbed or sent abroad. During the War of Independence seven flights from 2 Squadron and 100 Squadron (flying Bristol Fighters in late 1920) were stationed at Castlebar, Baldonnel, Fermoy, Oranmore, Omagh and Aldergrove, all 6th Brigade Stations, the latter airfield having been especially constructed for the purpose of test-flying small numbers of the Handley Page V/1500 bombers, amongst others, built by Harland and Wolff between 1917 and 1920. Also at Castlebar were 2 Sqdn.’s detached flight from Baldonnel (since renamed Casement Aerodrome), from July 1920 to January 1921, under the command of Flight Commander (and much later Air Commodore) H. G. Bowen, and 100 Sqdn., commanded by James Leonard Neville Bennett-Baggs.
In 1918 the Oranmore airfield was situated 5 miles east of Renmore Barracks, a Connaught Rangers training and parade ground; it was 600 by 400 yards in dimension, with a good surface and a slight slope to the south and east. It consisted of two aeroplane sheds, an officers’ mess, personnel quarters, an armoury, a hospital and a wireless shed. The Oranmore airfield was involved in an aerial ‘drive’ or search of the east Galway area after the killing of an R.I.C. inspector, Captain Cecil Blake, and a lady friend, Eliza Williams, together with two British Army officers, Captain Cornwallis and Lieut. Mc Creery (both of the 17th Lancers), at Ballyturin House near Gort on the 15th May 1921, as well as in aerial reconnaissance after the Tourmakeady and Carrowkennedy ambushes.
Communication between the ground and aeroplanes was an issue that improved as the War of Independence continued. In April 1922, with hostilities over and the risk of civil war looming, the Irish Flight of four Bristol Fighters commanded by Group Captain Bonham Carter was retained at Baldonnel to supervise the withdrawal of British ground forces from Ireland (9), escorting troop trains as they evacuated the British Army from the Curragh and Dundalk to Dublin Port, an evacuation that had been delayed following a petition by Collins to Macready in case the anti-Treaty group succeeded (9b). The Irish Flight ceased flying on 14th October 1922.
Aircraft and Equipment.
Following upon the South African General Jan Smuts’ report to the War Council in August 1917, the R.A.F. was formally established by the amalgamation of the Royal Flying Corps and the Royal Naval Air Service on the 1st April 1918. By December 1919 each squadron was equipped with the two-seater Bristol F2B Fighter biplane, developed by Frank Barnwell in October 1916 and built by the British and Colonial Aeroplane Company Ltd (later the Bristol Aeroplane Company Ltd), having previously flown the RE8, an armed reconnaissance aircraft. Ultimately these Bristol F2B Fighters saw service in Nos. 100, 105, 106 and 141 Squadrons and the Irish Flight R.A.F., 1919-1921, operating out of Baldonnel, Castlebar, Tallaght and Collinstown airfields. They were armed with two machine-guns and were considered versatile fighter and ground attack aircraft (having a fixed, front-firing 0.303 in. (7.7 mm) synchronised Vickers machine-gun for the pilot and a 0.303 Lewis gun in the observer’s cockpit to the rear, mounted on a Scarff ring for mobility and with a range of 1,870 yards or 1.7 km), though permission to use machine-guns on aircraft in Ireland was withheld until 24th March 1921. In a later version they had 275 h.p. Rolls-Royce Falcon III engines, were capable of remaining airborne for two and a half hours flying at 125 mph, and could carry up to 200 kg of bombs. The wings, some of which were made in Carlow by the firm of Thomas Thompson, were covered in taut linen yarn, lacquered with nitrocellulose, from different linen mills in N.I., thus allowing Lord French to claim at the end of WW1 that “the war was won on Ulster wings”!
Baldonnel became the H.Q. of the R.A.F. in Ireland in March 1920, the resident squadrons being Nos. 2 and 100 Squadrons, each dedicated to a formal Army co-operation role, supporting the sizeable British garrison in Ireland. Each squadron had three flights of four aircraft, though by September 1920 there were only 18 serviceable aeroplanes available for active duty, for logistical reasons, primarily poor maintenance. Gormanstown airfield, built in 1917 as one of the early training depots, began to be decommissioned as WW1 ended, and by January 1920 any aeroplanes that remained there had been transferred to Baldonnel.
During the Mountjoy hunger strike of April 1920, clashes occurred between the British Army and supporters of the strike, leading to RAF aeroplanes flying as low as the eaves of the houses in Phibsboro to intimidate the protestors (9c). Later that year, on November 21st 1920, an RAF aeroplane was seen circling twice over Croke Park prior to the arrival of the R.I.C./Auxiliaries, according to a report in the next day’s ‘Irish Independent’ (22 Nov. 1920). The RAF subsequently denied any connection with that day’s events (Bloody Sunday), stating that the aeroplane was on another mission at the time and that its guns were not in fact operational!
Over the next six months, and after high-level intervention by Churchill and Sir Hugh Trenchard (later to be the first Chief of the Air Staff), No. 2 Squadron was brought up to strength and deployed to Fermoy in the South, where it was needed most (10). No. 100 Squadron there also had its DH9 aeroplanes replaced by the more reliable Bristol Fighters. Fermoy airfield, when developed for the R.F.C. in 1918, was a long field, 300 by 400 yards, south of the old No. 1 Cavalry H.Q. on the old Dublin road, on what had originally been a race-course and exercise grounds for the British Army. An entanglement of barbed wire surrounded the camp, interspersed with sandbag gun emplacements and gates to allow access for the aeroplanes to the airfield. These were housed in three canvas hangars, while R.F.C./R.A.F. personnel lived in tents or primitive quarters made from aeroplane packing cases for two years until huts were provided. In February 1922 the base was formally relinquished to Irish National Army forces (known as the Brigade Headquarters Guards) four days prior to the taking over of the military barracks in Fermoy.
The main duties of the R.F.C./R.A.F. involved carrying mail and VIP passengers to military bases throughout Ireland, as travelling by road was both slow and dangerous, as well as providing aerial reconnaissance of roads and railways for sabotage, escorts to road convoys, the prevention of illegal drilling, the dropping of British propaganda leaflets advising young men not to be ‘misled’, searching for flying columns after ambushes (‘drives’), and aerial photography to identify static Irish Volunteer locations. Communication with the ground included the use of carrier pigeons (as after the Kilmeena/Skerdagh engagements) and dropping message bags with coloured streamers to RIC outposts (as sometimes occurred at the RIC barracks in Balla, Co. Mayo, where a ring of whitewashed stones, 15 feet in diameter, was laid in the garden to guide aeroplanes as they dropped messages and parcels intended for the ‘enemy’ (11)). Other methods involved the use of Very lights and Klaxons as well as “T” Popham signalling panels (developed 1917), a basic ground-to-air signalling system, prior to the advent of wireless telegraphy. An air-to-ground radio transmitter/receiver system was developed by the Marconi Company in early 1916 and was used by the R.F.C. in France and much later in Ireland as a CW (Morse) system, the Marconi crystal receiver on the ground receiving signals from aeroplanes, but not vice versa.
A designated landing strip existed in Castlebar as early as September 1912, when James Valentine, one of the participants in the Dublin-Belfast air race sponsored by the Irish Aero Club (founded 1909), went on to make exhibition flights in towns around Ireland, including Castlebar. Although an airship station at Castlebar was initially proposed by the R.A.F., this plan was later altered to an Advanced Aerodrome or forward landing ground for anti-Republican/IRA operations (12). This airfield was developed by the R.A.F. in Castlebar in May 1918. By the 5th January 1920 it was still listed ‘as unavailable for civilian use’. Prior to this, the R.F.C.’s role along the east coast was primarily an anti-submarine one, following the laying of mines off the Donegal coast in 1914 and the arrival of German submarines in the Irish Sea in January 1915. But after the entire coast of Great Britain and Ireland was designated a ‘War Zone’ by Germany on 4th February 1915, various naval aviation sites throughout Ireland were chosen to counter this threat, as at Bantry Bay, Ferrycarrig outside Wexford, Lough Foyle and Queenstown (for the R.A.F. and U.S. Naval Air Service). In July 1918 a proposal was made by the RAF to develop permanent seaplane stations at the Shannon mouth, Gorteen Bay, Co. Galway, and in Co. Mayo to oversee the west coast of Ireland, but this did not evolve as WW1 was coming to an end.
‘RAF Castlebar’, known locally as ‘The Aerodrome Field’, consisted of about 90 acres and was on grass. It was also known as the Drumconlan airfield, a 6th Brigade Station as recorded on Secret Ordnance Map 120, 16th Nov. 1918. It lay at 53°51′09″N, 9°16′41″W, north of the Castlebar to Breaffy road (on the site of the present Baxter factory) and 200 kilometres west-northwest of Dublin. In early January 1919 it was reported (Connaught Telegraph, Jan. 04) that Castlebar had been made a permanent centre for the Air Force (sic) and that the temporary aerodrome would shortly be replaced by a permanent structure, though no buildings or hard surface were ever built. Between 1918 and 1921, as part of a thrice-weekly mail service from Castlebar to Dublin, emergency landing strips were designated, including a field 3 miles east of Athlone and a private park near Carrick-on-Shannon, courtesy of the local gentry. A second airfield developed later nearby, the Castlebar Airfield at Knockrower (53°50′54″N, 9°16′49″W), was south of the main Castlebar to Breaffy road and close to the Castlebar to Dublin Midland and Great Western railway line. This airfield later evolved into the Mayo Flying Club, with a 600-yard tarmac strip, until its closure in the 1960s.
C Flight 105 Squadron RAF, Castlebar.
The Drumconlan airfield (RAF Castlebar) had wooden huts and canvas Bessonneau hangars, sited parallel to and 20 yards from the main road and sheltered by a wood of fir trees for protection from the prevailing winds. An apron of granite stones and loose chippings was laid down in front of the hangars by a local contractor. Later a runway of sorts was built, but the surface was always soft. Fuel tanks there had a capacity of 4,000 gallons of aviation fuel. Mains water was piped to the camp from the nearby town, but there was no electricity; a workshop lorry engine at the airfield generated 110 volts DC for lighting after 10 pm at night (7 ibid). Importantly for operational reasons, it also had a wireless station for air-to-ground communication (as CW telegraphy). Cadres from 105 Squadron continued in a close air support role (to the Army) at RAF Castlebar until February 1920 with Bristol F2B Fighters, while the detached flight of 2 Squadron from Oranmore remained until July 1920 and D Flight of 100 Squadron from Baldonnel until January 1921, each with Bristol F2B Fighter aeroplanes.
The R.A.F. shared this airfield with a machine-gun platoon of the local 2nd Battalion, The Border Regiment, to whose commander (the C.M.A.) they were legally responsible. In September 1920 this platoon came second in an Army musketry competition that embraced all British Army regiments in Ireland, an indication of the battalion’s overall military capability in Mayo at this time. Other infantry units of the Border Regiment were stationed in Westport and Ballinrobe.
Officers from each Squadron were billeted in Maryland House (13). This was less than a mile from the airfield and conveniently close to the town’s railway station. The house had 7 bedrooms and was situated on 28 acres of land. In 1917 Maryland was rented by the R.F.C. from the estate of Sir Malachy Kelly (1850-1916), a former Crown Solicitor for Mayo and later Chief Crown Solicitor for Ireland. He had offices in Ellison St., Castlebar, where Ernie O Malley’s father, Luke, was managing clerk. Malachy Kelly was a staunch Unionist in outlook and was knighted in 1912. Countess Markievicz once wrote that Malachy Kelly boasted he could ‘swing’ any jury in Ireland. He died on March 25th 1916 and his funeral was one of the largest seen in the then resolutely Unionist town of Castlebar (14).
Ground support personnel to the RFC/RAF squadrons lived in wooden huts and tents beside the airfield. Both personnel and machines were constantly subject to the inclement West of Ireland weather, on one occasion experiencing severe damage to aircraft in a winter storm.
Activities of the RFC/RAF.
Initially the Squadron’s role in Castlebar was one of training and transport, as well as aerial anti-submarine reconnaissance along the west coast. During WW1 British naval intelligence feared that German submarines were getting supplies of fresh food, water and fuel from local Sinn Fein sympathisers in bays along the coast, as apparently occurred on a beach near Rosport in North Mayo (15a). Michael Henry in his BMH witness statement also mentions fuel being conveyed by local Volunteers to the townland of Carrateigue in north-west Mayo, and arms being landed. He himself possessed a German Parabellum that came from a German submarine.
Such aerial duties were not without their share of accidents locally at Castlebar, with the loss of five aeroplanes and three fatalities: a 2nd Lieutenant Fred Clarke, age 24, who died from injuries sustained in an aeroplane accident on 6th June 1919, and a Major Henry Francis Chads M.C. of the 2nd Battalion, The Border Regiment, who is buried in the Church of Ireland cemetery in Mountain View (formerly Church St.), Castlebar. Major Chads had been flown to Dublin from Castlebar to give evidence against The Freeman’s Journal over an incident at Turlough village involving a military lorry. On their return on the 28th August 1920 the plane, a Bristol F2B (F4528) piloted by Pilot Officer Norman Herford Dimmock, hit a trestle at the end of the airfield, killing Major Chads and seriously injuring the pilot. The funeral cortege to the Church of Ireland cemetery in Mountain View, Castlebar, was impressive. Soldiers lined the route in double file while two Army bands took part. Business premises in Castlebar town were compelled to close for the event, as reported by the Connaught Telegraph of 4 September 1920, thereby inadvertently allowing the townspeople to attend a cycling and sports meeting in the grounds of the local Mayo Asylum, which the Mayo Brigade used as cover for a covert meeting at which the Brigade was reorganised into four brigade areas: North, South, East and West. The third fatality involved a 17-year-old soldier, Pvte. Heap of The Border Regiment, who was shot dead by his friend Pvte. Partington on the 29th December 1920 as part of a prank while the latter was on sentry duty.
In general, the relationship between the R.F.C. and the local people was initially good. Shortly after their arrival they held an air display at the local airfield on 17 August 1918, watched by a large crowd. They acquired dairy products from local farmers for the camp. There were air displays throughout the county and officers were welcomed into the social life of the town. Two R.F.C. members, William Munnelly from Castlebar and James Mee from Mullingar (later a resident of Castlebar), frequently landed at Castlebar during WW1 (1914-1918).
Bristol Fighter F4351 at Castlebar.
‘Showing the flag‘.
After the R.A.F. was informed by the R.I.C. that Resident Magistrate John Charles Milling, a native of Westport and a former policeman, had been assassinated in Westport by the IRA on the night of the 29th March 1919, the Commanding Officer of the 2nd Battalion, The Border Regiment, who was the de facto C.M.A., requested the R.A.F. to make a flyover of Castlebar and Westport ‘to show the flag’ (7 ibid). Lieut. Dykes led a flight of five RE8s from Castlebar over Westport on the day after the killing, where they let off bursts of ammunition from the Vickers machine-gun mounted on the port wing as well as from the Lewis gun in the rear observer’s compartment. The Lewis light machine-gun had a circular magazine that held 47 rounds of 0.303 ammunition. In addition, each RE8 biplane had a pair of bomb racks at the wing roots that carried two pairs of 20 lb bombs, which were duly dropped into the sea off Westport, thereby ‘showing the flag’ as ordered by the C.M.A. Westport was then put under martial law, nobody getting in or out of the town for a time. Later, after the Truce was declared, the British Chief Justice and Master of the Rolls gave his opinion that the imposition of martial law everywhere in Ireland was itself illegal.
Lieut. Dykes, on advice from London, patrolled along the west coast to welcome the first west-to-east transatlantic flight, by Alcock and Brown on board their modified Vickers Vimy bomber. On 15 June 1919 he finally spotted their crash site 4 km south of Clifden at Derrigimlagh, behind the Marconi station, whereupon he flew back to Castlebar and advised London by telegraph of Alcock and Brown’s arrival. Castlebar Post Office, like that of virtually every other town in Connaught at the time, including Galway, did not then have a telephone system of its own, though it did possess a telegraph system. The R.I.C., however, did have their own telephone system throughout the country. A British military wireless network was not developed until the end of 1921, by which time the War of Independence had been over for six months (15b), while a civilian telephone system west of the Shannon did not exist until 1927 (and in Castlebar until 1928).
For the next year Lieut. Dykes continued to patrol the local countryside with his C Flight of six RE8s from 105 Squadron at Castlebar. On one occasion, at 9.10 pm on the 17th July 1919, he welcomed the airship R34 as it returned from the US, dipping his wings in salute as the airship crossed the Mayo coast, echoing the appearance of a mysterious airship that had passed over the town of Newport on Wednesday, January 8th 1913, at 6.40 pm, at a height of 500 to 1,000 feet, as mentioned in the Irish Times of 11 January 1913. That earlier event was witnessed by two members of the RIC, one of whom, a Sgt. Padian, telegraphed Westport RIC station to look out for it. Poor serviceability of existing aeroplanes was a frequent problem, however; on one occasion recorded in the No. 100 Squadron War Diary for 14 March 1921, three contact patrols with infantry in the Ballinrobe area “rounding up IRA” had to be abandoned due to engine trouble. Despite such hiccups, a flight from 100 Squadron at Castlebar continued to patrol as far as the Athlone area as required under ‘The Defence of Ireland Scheme, October 1918’ (15c). Likewise, three flights from the Curragh patrolled its own locality as well as Dublin and ‘Ulster’; the Oranmore flight patrolled the Limerick area, while Fermoy was patrolled by its two resident flights.
Because of their existing terms of engagement, C Flight out of Castlebar was not able to engage directly with active IRA units until 24 March 1921, but instead reported any observed activity to the Competent Military Authority (O.C. 2nd Battalion, The Border Regiment) on landing. Two weeks earlier a Sunday supplement to the Milan newspaper La Domenica del Corriere (7 March 1921) had shown British aeroplanes in a fictitious attack on the IRA, in a full-colour illustration. The caption in Italian read ‘The airmen foil an ambush by rebels on trucks loaded with troops, killing 5 of the assailants’. In the propaganda war the Italian press mostly deferred to British ‘spin’. Up to then aeroplanes could challenge at will those on the ground, as described by Michael Kilroy as happening to a very young and small girl, Maggie Mc Donnell, who was pursued repeatedly by a low-flying aeroplane as she made her way over a hill to a neighbour’s (Frank Chambers’) house in Upper Skerdagh after a party of Black and Tans had earlier raided the Mc Donnell home close by, on 23 May 1921, the day of the Battle of Skerdagh (16).
After the formation of the four Mayo Brigades (North, South, East and West) in July 1920, greater activity in the West and elsewhere was demanded by G.H.Q. and Michael Collins in an attempt to divert British Army men and resources from the south of Ireland, where the Volunteers were under severe pressure. In response, in the spring of 1921, a decision was made by Michael Kilroy, O.C. West Mayo Brigade, to form an active service unit or Flying Column to engage the enemy, despite a paucity of weapons (especially ammunition) and with a battle-hardened British Army with aerial support in opposition.
Military activity by the West Mayo Flying Column, however, invariably resulted in an aerial pursuit by the R.A.F., as happened after the Kilmeena and Carrowkennedy ambushes, where communication with R.A.F. Castlebar was important. For example, John Feehan, a native of Rossow, Newport, and Q.M. to the West Connemara Brigade, had attended the wedding of his O.C., Petie Joe Mc Donnell, to Michael Kilroy’s sister Matilda in the parish church in Kilmeena just four days before the Kilmeena Ambush (19th May 1921). Of the journey back to Connemara after the ambush he stated, “A plane passed overhead as we stood on the hill of Corveigh (south of Aughagower) and we had to run for shelter” (17).
After the Skerdagh affray on 23rd May 1921, R.A.F. search planes were summoned early because an RIC Constable Mc Menamin had made his way on horseback to Newport, whence contact was made with Castlebar for aerial support; by contrast, after the Carrowkennedy Ambush (2 June 1921) the RIC had no way of sending for help until well after the ambush was over. Michael Kilroy in his Witness Statement wrote that, in support of the search for the ASU after the battle of Skerdagh, “a huge operation was mounted by the police and military using aeroplanes with carrier pigeons flying around the whole Nephin Range in North Mayo to create awe and consternation among the people” (18). This was a wide, road-less and tree-less area with inadequate cover for a retreating Column, so described by Michael Henry (WS 1732, p.8), and with only a scattered population for support. After a night march Kilroy was able to penetrate the encircling British cordon and lead the Column to safety.
Two weeks earlier, men from the West Mayo Brigade had marched cross-country to support Tom Maguire’s South Mayo Brigade at the Battle of Tourmakeady, but returned after scouts advised that the engagement was over and that the Column was in retreat over the Partry Mountains, pursued by Crown forces guided by aeroplanes with Morse telegraphy facilities on board, as described by Ernie O Malley in ‘Raids and Rallies’. Many years later, in ‘Survivors’, Tom Maguire wrote, “An aeroplane came in so low it would deafen you, but it passed on” (19). Later, on p.70, Kilroy wrote, “After the Carrowkennedy Ambush the surrounding district was scouted by enemy planes in the morning before any help was permitted to come along to assist the wounded and dying after the battle”. This was later corroborated by Ernie O Malley, again in ‘Raids and Rallies’ (p.202), who stated that, after Carrowkennedy, aircraft from Galway and Castlebar assisted the ground forces of Auxiliaries and R.I.C. in a search of the entire mountainous area close by, based on information relayed to the pilots via Morse code by British Army troops on the ground, all to no avail, the whole Column having escaped. Thomas Heavey in his Witness Statement described the follow-up as follows: “After Carrowkennedy the Column retreated towards Leenane and the Killary. A gun-boat from Clew Bay fired rounds at Croagh Patrick, the countryside was cordoned off and searched while aircraft circled overhead” (20).
Another member of the Flying Column, Tom Kitterick, their erstwhile Q.M., wrote about Carrowkennedy in his Witness Statement: “We decided to make tracks for a mountain over Towneyard Lake and as we moved along the side of the mountain we had to keep dodging the plane which kept swooping down in search of us. At nightfall we attempted to reach the peak of the mountain but when we arrived there we saw the searchlights from the destroyers in Killary Bay sweeping the mountains and returned to our Eagle’s Nest of the day before. By now we had been a week without rest” (21). Commandant Sean Gibbons described how, after Carrowkennedy, the British had an aeroplane out (3rd June) “which gave the sentries a lot of trouble”. A few days later, as the divided Column retreated towards Leenane, he noticed a plane directly overhead: “I was sure the pilot or observer had spotted something unusual because one plane kept flying from there (a wood where they had taken cover) to where we had left Kilroy, Kitterick and Rushe for practically the whole of the day. We tried several times to get across the road (and break cover) but we were unable to do so on account of plane activity” (22). Later, John Feehan watched the ‘drive’ from the safety of the south side of the Killary as he awaited the arrival of Kilroy and others: “At 6.30 a plane came along and searched all the islands in the Killary and delayed long enough at each to see there were no men there. Then it passed along the slopes of the mountain scanning the area for men and from our lookout on the mountain the O.C. and myself had the rare experience of looking down on a plane in flight and seeing the pilot and observer clearly. The plane left the Bay after 3/4 of an hour and headed over the Mayo Mountains (and back to Castlebar)” (17 ibid). As Richardson stated, “The IRA did not fear destruction from the air so much as detection” (6 ibid).
General Macready, the British military commander in Ireland, remarked, just as the Truce was called on the 11th July 1921, that he had practically cornered the ‘biggest murder gang’ in the West of Ireland! In reality, despite 3,000 to 5,000 soldiers and R.I.C. being involved in this massive ‘drive’ organised from Leenane, with searchlight-bearing destroyers in the Killary and with aeroplanes flying overhead continuously, only one Volunteer, Andy Harney, home to see his father, was captured, and he was later released.
These examples of the deleterious effect of air reconnaissance following military action by the West Mayo Brigade Flying Column illustrate the close cooperation that had evolved between British Army ground forces and the R.A.F. in West Mayo as the conflict progressed. One practical example of their mutual support, at the request of the Col. Commandant, Galway Brigade (British Army), involved an aeroplane from 100 Squadron at Castlebar being despatched on 26 April 1921 in a successful search for a delayed armoured-car convoy carrying the British military commander in the Midlands and Connaught, Major General Jeudwine, who had been on a tour of inspection of garrisons in the Castlebar-Claremorris area.
While at the start of the War of Independence the R.F.C./R.A.F. was hampered by poor aircraft supply and maintenance, communication difficulties and inadequate cooperation between the Army and R.I.C., by the later stages of the conflict air-to-ground integration between Army and R.A.F. commanders had greatly improved, as had wireless communications and reconnaissance, whether armed, visual or photographic. Not widely acknowledged, these lessons from Ireland were further refined by the R.A.F. in subsequent ‘colonial air policing’ actions elsewhere, as in Kurdistan, Aden, Iraq and Palestine (8 ibid). Ultimately the R.A.F. would incorporate these experiences into written doctrine in its War Manuals of the inter-war years (23).
Michael Brennan, O.C. East Clare Brigade, later noted that “The addition of (more) aeroplanes and armoured lorries would have made short work of us”. Good leadership and luck on the part of Michael Kilroy, who was not unaware of the aeroplane menace, saved the West Mayo Flying Column on more than one occasion. With the likelihood of enhanced activity and surveillance by the R.A.F. in the future, and with permission by then granted to use their on-board machine-guns and bombs at will, further military action by the I.R.A. was likely to be even more seriously curtailed had the War of Independence continued.
(1) French to Lloyd George, 18 April 1918. MAI BMH CD 178/1/2.
(2) Sheehan, William (2005). ‘British Voices from the Irish War of Independence 1918-1921’. Cork (Collins Press), p.151.
(2b) Bond, Brian (1980). ‘British Military Policy between the Two World Wars’. Oxford, Clarendon Press, p.18.
(3) NAUK WO 141/45, Worthington-Evans memorandum, 29 March 1921.
(4) NAUK AIR 5/214, No. 2 Squadron War Diary for March 1921.
(5) Townsend, Charles (1975). ‘The British Campaign in Ireland 1919-1921’. Oxford (OUP), p.170.
(6) Richardson, David (2016). ‘The Royal Air Force and the Irish War of Independence 1918-1922’. Air Power Review, Autumn/Winter, Vol. 19, No. 3.
(7) Dykes, Capt. W. Urquhart (1999). ‘Reminiscences 1917-1919 of a Pilot flying with the RFC in 1918 and with the RAF in 1919 in what is now the Irish Republic’, p.67, 1978 (Urquhart-Dykes and Lord).
(8) Hofman, Bruce (Jan. 1, 1990). ‘British Air Power in Peripheral Conflict 1916-1976’. Rand Corp., R-3749-AF.
(9) Londen, Pete (2015). ‘Airwar over Ireland: Airpower in the Civil War’. Aeroplane, Oct., pp.33-37.
(9b) Macready, Neville. ‘Annals of an Active Life’, Vol. 2 (London: Hutchinson, n.d.), p.620.
(9c) Yeates, Padraig. ‘Lockout: Dublin 1913’.
(10) NAUK AIR 8/22, Churchill to Trenchard, 24 Sept. 1920.
(11) Howley, Thomas. BMH WS 1122.
(12) Cross and Cockade (1918). ‘Gazetteer of Flying Sites in the UK and Ireland, Part 4’.
(13) Marylands, Castlebar, Co. Mayo. Residence of Malachy Kelly. Illustration. ‘The Irish Builder’, Vol. XLIV, No. 1036, p.1568, Jan. 25, 1903.
(14) Castlebar Nostalgia Boards (2 Dec. 2013). Posted by Alan King.
(15a) Henry, Michael. BMH WS 1732, p.3.
(15b) General Staff, 6th Division. ‘The Irish Rebellion in the 6th Dvsn. Area: From after the 1916 Rebellion to Dec. 1921’. Annex VI. Papers of General Sir Patrick Strickland, IWM, p.363.
(15c) Philpot, Ian M. (2005). ‘The Royal Air Force’: Vol. 1, The Trenchard Years 1918-1919. Chapter 4, Air control in the Middle East, India and Ireland. Pen and Sword Books Ltd.
(16) Mayo County Library. Statement by General Michael Kilroy, Newport, Co. Mayo. Part 2, Active Service Operations.
(17) Feehan, John. BMH WS 1692, p.73 and p.81.
(18) Kilroy, Michael. BMH WS 1162, p.56.
(19) Mac Eoin, Uinseann (1980). Tom Maguire, ‘Survivors’. Argenta Publications, p.286.
(20) Heavey, Thomas. BMH WS 1668, p.59.
(21) Kitterick, Tom. BMH WS 872, p.47.
(22) Gibbons, Sean. BMH WS 927, p.48.
(23) Richie, Dr. Sebastion (2011). ‘The R.A.F., Small Wars and Insurgencies in the Middle East 1919-1939’. MOD Monograph.
Thanks to Sean Cadden, Westport for drawing my attention to the Michael Kilroy papers in Mayo County Library.
Photographs courtesy of the late Kay Mc Evilly, Cashel House Hotel where Captn. Dykes was a frequent guest. | 1 | 15 |
Autism spectrum disorders (ASD) are characterised by early-onset difficulties in social interaction, communication and restricted, repetitive patterns of interests and behaviour (Lai et al. 2014). At least 1% of the population have ASD (Brugha et al. 2016; Idring et al. 2015; Fombonne 2009). Despite a growing number of clinical and epidemiological studies, there is still limited understanding of the adult outcomes of people with ASD. Psychiatric comorbidity in ASD is associated with worse outcomes for individuals (Gillberg et al. 2016) and a greater burden for their families and society (Baxter et al. 2015; Buescher et al. 2014). Understanding the psychiatric comorbidities which affect adults with ASD is therefore essential to the planning of community services and optimising quality of life (Bakken et al. 2010; Charlot et al. 2008; Gillott and Standen 2007; La Malfa et al. 2007; Bradley and Bolton 2006).
While anxiety disorders are known to be common in children and adolescents with ASD (van Steensel et al. 2011; White et al. 2009; Vasa and Mazurek 2015) less is known about the prevalence of anxiety disorders in adult populations. Studies to date of adults provide an inconsistent account with prevalence estimates ranging between 28% and 77% and are limited by differences in methodological design, small sample size (Mazefsky et al. 2008; Tani et al. 2012), recruitment of selected samples from secondary services, or lack of a valid comparison group (Tani et al. 2012; Kanai et al. 2011; Bakken et al. 2010; Hutton et al. 2008; Buck et al. 2014; Hofvander et al. 2009; Russell et al. 2016; Lever and Geurts 2016). The existing evidence is therefore difficult to generalise and may be subject to confounding and selection bias.
The largest study of this topic to date used data from a medical insurance registration database in California and found that 29% of 1507 adults with a diagnosis of ASD also had an anxiety disorder diagnosed compared with 9% of a non-autistic reference population (adjusted odds ratio 3.69 [95% CI 3.11 to 4.36]) (Croen et al. 2015). However, other than obsessive–compulsive disorders (OCD), the study did not identify subtypes of anxiety and did not adjust for potentially important confounders such as parental mental illness or socioeconomic characteristics. A large Danish study reported over a two-fold incidence of OCD in individuals who had been diagnosed with ASD compared with those who had not (incidence rate ratio 2.18 [95% CI 1.91 to 2.48]) (Meier et al. 2015). Apart from OCD there have been no known large studies of specific anxiety disorders in adults with ASD.
It has been suggested that common mental disorders could be more prevalent in individuals with ASD without intellectual disability (ID), because they may have more insight into their problems. This pattern has been observed for depression (Rai et al. 2018) and for OCD (Meier et al. 2015) in higher functioning individuals with ASD. To date, studies of the prevalence of other anxiety disorders in adults with autism and ID have predominantly focussed on comparing individuals with intellectual disability with and without autism, with variable results regarding whether autism increases the rates of anxiety disorders, although these studies have been relatively small (Bakken et al. 2010; Bradley and Bolton 2006; Charlot et al. 2008; Gillott and Standen 2007). There is relatively more evidence on this issue in the literature on children and adolescents. For example, in their meta-analysis, van Steensel et al. found that studies of children and adolescents with ASD with a lower mean IQ had higher rates of social anxiety and generalised anxiety disorder, whilst studies with a higher mean IQ had higher prevalence of separation anxiety and OCD (lower and higher defined by the cross study mean of 87) (van Steensel et al. 2011). Only one study reported a mean IQ < 70 in which higher anxiety scores were associated with higher IQ and the presence of functional language use, however not for simple phobia, panic disorder and social phobia (Sukhodolsky et al. 2008). Further information on whether the risk of being diagnosed with anxiety disorders in the autistic population differs by the presence or absence of intellectual disability is therefore warranted.
It is possible that ASD and anxiety are associated through shared genetic causes. Genetic variants with pleiotropic effects on the ASD and anxiety phenotypes have been identified (Geschwind 2011) and family-based studies suggest aggregation of ASD in offspring to parents with anxiety disorders (Meier et al. 2015; Duvekot et al. 2016) as well as aggregation of anxiety in family members of individuals with ASD or autistic traits (Bolton et al. 1998; Jokiranta-Olkoniemi et al. 2016). However, the associations found in family-based studies may also reflect the psychological burden of having a close family member with ASD or difficulties in social interaction or communication.
Insofar as ASD and anxiety are genetically linked, one could expect the risk of anxiety to vary with genetic distance from the ASD proband, as has been observed for family history (Xie et al. 2019). That is, risk of anxiety disorders would be highest in ASD cases, lower in their non-autistic full siblings (with whom they share on average 50% of their genome), lower still in their non-autistic half-siblings (with whom they share only 25% of their genome) and lowest in a non-autistic reference population (to whom they are likely unrelated). Comparing risks in this way can therefore help to explore a potential genetic correlation between autism and anxiety.
An additional advantage to the sibling design is that it allows for the direct comparison of ASD probands with non-autistic sibling controls, with whom they are likely to share many potentially confounding characteristics. Since associations within sibling pairs will not be confounded by characteristics that are shared between them (e.g. early life environment), sibling comparisons can account for unobserved confounders and may thereby provide a better estimate of causal effects when using observational data.
Using data from a large population-based cohort, this study aimed to: (1) describe the lifetime prevalence of diagnosed anxiety disorders in addition to prevalence in those aged 18 and over in people with ASD; (2) compare the risk of diagnosis of anxiety disorders in young adulthood among those with ASD (any; with ID; without ID) with that in a non-autistic reference population; (3) compare risk for diagnosis of anxiety disorders across ASD cases; their full siblings; their half-siblings; and a reference population; and (4) compare risk of anxiety disorders in adults with ASD directly with the risk of anxiety disorders among their non-autistic full siblings.
Study Setting and Design
The Stockholm Youth Cohort (SYC) is a register based cohort of all individuals who lived in Stockholm County, Sweden, for at least 1 year between January 1st 2001 and December 31st 2011 and were aged between 0 and 17 years at any time during that period (n = 736,180) (Idring et al. 2012, 2015). All residents in Sweden receive a unique personal identification number at birth or on immigration to Sweden, which enables the collection of prospectively recorded data via a number of national and regional health care, social and administrative registers (Idring et al. 2012). Statistics Sweden carried out the record linkages and, before the research group were given access, a serial number was used to replace the personal identity number to ensure the anonymity of cohort members. For this study, we excluded adoptees, those whose biological parents could not be identified, and those with missing covariate data. Over 96% of those with missing data were attributable to individuals or their parents having been born abroad, indicating their absence from Sweden and thus having missing information in the registers. Considering < 5% were excluded due to missing data, and there was no evidence of a difference between anxiety disorder diagnoses in those with or without missing data (p = 0.14), bias due to missing data is unlikely in this study. We included those who had been in the cohort over 4 years at the end of follow-up on December 31st 2011 for the analysis of lifetime prevalence (Supplement Figure I) and in the main analysis excluded anyone who was under the age of 18 (Fig. 1). After exclusions, our study population consisted of 221,694 aged ≥ 18 years, of whom 4049 had been diagnosed with ASD. We also identified the full and half-siblings of individuals with ASD within our study population. The study was approved by the ethical review board of Karolinska Institutet, Stockholm.
Identification of ASD
ASD diagnoses were established using national and regional registers that encompass all known pathways for ASD diagnosis and follow-up in Stockholm County. We obtained diagnoses of ASD (ASD: ICD-9 299, ICD-10 F84, DSM-IV 299) and intellectual disability (ID: ICD-9 317-319, ICD-10 F70-79, DSM-IV 317-319), identifying individuals with ASD without ID and with ID within our study population and additionally used service use data to identify ASD and intellectual disability (Idring et al. 2012, 2015). This method has been shown to have good validity in identifying individuals with a diagnosis of ASD (Idring et al. 2012) and provides comprehensive coverage of in-patient, out-patient and primary care contacts in this population.
Identification of Anxiety Disorders
For the purpose of this study, we included neurotic, stress related and somatoform disorders as defined in the World Health Organisation’s ICD-10 as our outcomes. We used three registers to identify members of the cohort who received a diagnosis of anxiety disorder by a clinician during the time period studied. These include the (1) National patient register, which contains the dates and discharge diagnoses of all inpatients in Sweden since 1973, and outpatient care since 2001; (2) the Stockholm adult psychiatric outpatient register which records the dates and diagnoses for any contact with specialist outpatient psychiatric services in Stockholm County since 1997; and (3) the Stockholm child and adolescent mental health register (PASTILL) which records dates and diagnoses of all child and adolescent mental health care in Stockholm County since 1999. The majority of the mental health services in Sweden are publicly funded with diagnoses contemporaneously recorded using ICD-10 codes in the relevant registers. Clinicians follow international and national guidelines when making such diagnoses, although individual variation in the diagnostic process is likely and the details of measures or tools used are not recorded in the registers. Diagnoses recorded in Swedish registers have been extensively used in psychiatric epidemiology and several validation studies on a range of psychiatric diagnoses have been carried out including obsessive compulsive disorders (Ludvigsson et al. 2011; Ruck et al. 2015; Allebeck 2009). We used the adult and child diagnostic registers to enable us to assess lifetime prevalence in addition to prevalence in those over 18 years of age. We then stratified our primary outcome variable (any anxiety disorder: ICD-10 F40-48) by diagnostic subtype to identify specific disorders as detailed in Supplement Table 1.
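The register extraction described above amounts to flagging any diagnosis in the ICD-10 F40–F48 range and then assigning it to a diagnostic subtype. The sketch below illustrates that logic in Python; the subtype groupings shown are simplified placeholders rather than the exact groupings of Supplement Table 1, and the function and labels are ours, not the authors' code.

```python
# Illustrative sketch of bucketing register diagnoses into the outcome categories
# described above. The subtype groupings are simplified stand-ins, not the
# study's actual specification (see Supplement Table 1).
SUBTYPE_BY_PREFIX = {
    "F40": "phobic anxiety disorder",
    "F41": "other anxiety disorder (incl. panic, GAD)",
    "F42": "obsessive-compulsive disorder",
    "F43": "reaction to severe stress / adjustment disorders",
    "F44": "dissociative disorder",
    "F45": "somatoform disorder",
    "F48": "other neurotic disorder",
}

def classify_diagnosis(icd10_code: str):
    """Return (any_anxiety, subtype) for a single register diagnosis code."""
    code = icd10_code.strip().upper()
    any_anxiety = code.startswith(tuple(f"F4{i}" for i in range(0, 9)))  # F40-F48
    subtype = next(
        (label for prefix, label in SUBTYPE_BY_PREFIX.items() if code.startswith(prefix)),
        "other neurotic or anxiety disorder" if any_anxiety else None,
    )
    return any_anxiety, subtype

print(classify_diagnosis("F42.1"))  # (True, 'obsessive-compulsive disorder')
print(classify_diagnosis("F32.0"))  # (False, None)
```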
We identified variables that, in the literature, have been associated with ASD as well as anxiety. These include the individual’s age and sex, parental age at birth, highest education of either parent (≤ 9 years, 10–12, ≥ 13 years), quintile of household income, individual or parental birth country outside of Sweden, and having a record of parental mental illness (Lofors et al. 2006; de Graaf et al. 2002; Rai et al. 2012; Jokiranta et al. 2013).
Analyses were performed in Stata/MP version 14.2. We examined the characteristics of our study cohort by exposure status (no ASD; any ASD, further dichotomised into ASD with and without ID) in relation to the outcomes. We then used modified Poisson regression to estimate the relative risk (RR) of being diagnosed with an anxiety disorder after the age of 18 among those with a diagnosis of ASD (any; with ID; without ID) compared with a non-autistic reference population, using robust standard errors to account for clustering of outcomes within families. We compared the relative risk estimates for anxiety disorder diagnosis for adults with ASD with and without comorbid ID in supplementary analysis. In supplementary analysis we also calculated the prevalence of anxiety disorders for those with ASD (any; with ID; without ID) for the full population of those aged 4–27 years of age. To explore a potential genetic gradient, we also estimated the relative risk of anxiety disorders among the non-autistic full- and half-siblings of ASD cases compared with a non-autistic reference population using the same methods (but clustering on maternal rather than family identification number for half-siblings), and compared these relative risk estimates in supplementary analysis. We adjusted our estimates for (1) sex and age at the end of follow-up, and additionally for (2) parental age, parental educational attainment, family disposable income quintile, individual or parental foreign birth, and maternal and paternal psychiatric history. To assess the extent to which associations between ASD and anxiety were due to unobserved shared familial confounders, we directly compared adult individuals with ASD with their discordant full sibling controls using conditional logistic regression models with adjustment for non-shared characteristics between sibling pairs (age, birth order, sex, maternal and paternal age). Lastly, as anxiety disorders are known to be a common problem in children with ASD (van Steensel et al. 2011), we wanted to assess whether anxiety disorders in adults were a continuation of these, or a separate clinical issue which requires its own consideration. For this reason, we identified the relative risk of a new diagnosis of an anxiety disorder in adulthood in people with no previous history of an anxiety disorder as this could provide clinically relevant information for clinicians working with adults. As further clinically useful information, in a supplementary analysis, we calculated the average age of onset of these anxiety disorders and the proportion of the sample diagnosed with anxiety before and/or after the age of 18.
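The analyses above were run in Stata/MP 14.2. Purely as an illustration, the following Python sketch shows how the two core models (a modified Poisson regression with cluster-robust standard errors, and a conditional logistic regression within sibling strata) could be approximated with statsmodels. All file and column names (anxiety, asd, family_id, etc.) are hypothetical placeholders; this is not the authors' analysis code.

```python
# Illustrative sketch only; the published analyses used Stata/MP 14.2.
# File and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

# Modified Poisson regression: a Poisson GLM on the binary outcome with
# cluster-robust standard errors (clustered on family) yields relative risks.
rr_model = smf.glm(
    "anxiety ~ asd + sex + age",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="cluster", cov_kwds={"groups": df["family_id"]})
print(np.exp(rr_model.params))  # exponentiated coefficients = relative risks

# Conditional logistic regression for the within-family sibling comparison,
# conditioning on family strata and adjusting for non-shared characteristics.
sib = df[df["sibling_pair"] == 1]  # hypothetical flag for discordant sibling pairs
clogit = ConditionalLogit(
    sib["anxiety"],
    sib[["asd", "sex", "age", "birth_order"]],
    groups=sib["family_id"],
).fit()
print(np.exp(clogit.params))  # within-pair odds ratios
```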
Our cohort contained 221,694 individuals aged 18 to 27 years of whom 4049 had been diagnosed with ASD. We provide study characteristics and descriptive statistics by ASD case status, including the lifetime prevalence of a diagnosis of specific types of anxiety disorders, in Table 1. Just over a fifth (20.13%) of adults with ASD had also been diagnosed with an anxiety disorder compared with 8.72% of an adult non-autistic reference population (adjusted RR 2.62 [95% CI 2.47–2.79]), and prevalence of anxiety disorder was highest in the absence of comorbid ID (23.11%) (Tables 1 and 2).
The most common diagnostic labels were non-specific, with 5.91% of the reference population and 10.16% of adults with ASD having a diagnosis of “other stress related disorders” or a non-specific anxiety disorder diagnosis, which fell under our category label of “other neurotic or anxiety disorder” (Table 1). Except for “other stress-related disorders”, people with ASD had higher relative risks for receiving a diagnosis of all types of anxiety disorder compared to a non-autistic reference population (Table 2). Amongst the common anxiety disorder diagnoses, prevalence of OCD diagnoses was notably raised in people with ASD (3.43%) compared with the general population (0.47%) (adjusted RR 8.07 [95% CI 6.74–9.65]) and prevalence of phobic anxiety disorders was also markedly higher. This increase can be partly accounted for by an increased prevalence of social phobia for adults with ASD; however, there were also higher rates of all other phobic anxiety disorders. The relative risk of developing a dissociative disorder for an adult with ASD was also high; however, the numbers of individuals with this disorder were low, leading to very wide confidence intervals.
There was a clear difference between the rates of anxiety disorders diagnosed in adults with ASD associated with the presence or absence of intellectual disability (Table 2). The relative risk for an anxiety disorder diagnosis for adults with ASD without ID was almost three times higher than the general population (adjusted RR 2.96 [95% CI 2.77–3.16]), higher than the same estimate for adults with ASD with ID (adjusted RR 1.71 [95% CI 1.47–1.99]) (Table 2), with evidence of a statistical difference in these estimates (p < 0.001, Supplement Table 3). Adults with ASD without ID had higher adjusted risks of panic disorder, generalised anxiety disorder, PTSD, somatoform disorders and mixed anxiety and depression, trends which were not present for those with ASD and ID. Rates of diagnoses of specific phobias, such as social phobia and agoraphobia were also higher for adults with ASD without ID, whereas adults with ASD who had ID were more likely to receive the non-specific diagnosis of ‘other phobia’. Mixed anxiety and depression, adjustment disorders and acute stress reactions were diagnoses which were more common in adults with ASD both with and without ID compared to the non-autistic reference population.
Comparing the ASD cases with their non-autistic full- and half-siblings, risk of anxiety disorder was higher amongst the ASD cases than amongst their full- and half-siblings, and risks were raised in siblings compared with the general population. However, we did not observe a consistent gradient in risk with increased genetic distance (Table 3), although there was some evidence of such a difference in ASD without ID (p = 0.021, Supplement Table 4). Risk for anxiety among cases was highest in the absence of comorbid ID, although risk of anxiety among siblings did not appear to vary with being the family member of a person with ASD with or without ID.
We then directly compared risk of anxiety in adults with ASD with risk in their discordant non-autistic full siblings using a conditional logistic model. Cases with ASD had a higher risk of anxiety disorders compared with their discordant non-autistic full siblings (adjusted OR 2.48 [95% CI 2.06–3.01]) suggesting that associations were robust against confounding from factors shared between siblings. Associations were stronger among full siblings discordant for ASD without comorbid ID (adjusted OR 3.10 [95% CI 2.50–3.85]). Amongst siblings discordant for ASD with comorbid ID there did not appear to be an association between ASD diagnosis and anxiety disorder (adjusted OR 1.04 [95% CI 0.71–1.54]).
The mean age of diagnosis for all anxiety disorders was just under the age of 15 for people with ASD and just under the age of 17 for people without ASD (Supplement Table 5). However, our research found that even when there had been no history of a childhood anxiety disorder (66% of all ASD cases), adults with ASD continued to be at higher risk of being diagnosed with an anxiety disorder compared to adults without ASD (adjusted RR 2.71 [95% CI 2.52–2.91]) and this risk was highest for people with ASD without ID (adjusted RR 3.13 [95% CI 2.90–3.38]) (Supplement Tables 6 and 7).
In this large population based study in Sweden, we found that people with ASD were over two and a half times more likely to have a diagnosis of an anxiety disorder than a reference population without ASD, and risk appeared highest for people with ASD without intellectual disability.
There are only a few epidemiological studies with which we can directly compare our results. Our findings of higher rates of anxiety disorders in general, and OCD specifically, are consistent with evidence in adults found in previous studies (Croen et al. 2015; Meier et al. 2015). This is the first large, population-based study to have assessed the rates of PTSD in adults with ASD. Our finding of an elevated risk of PTSD is consistent with a large study of autistic traits amongst nursing staff, which found that PTSD rates were highest amongst those in the highest quintile for autistic traits (10.7%) versus those in the lowest quintile (4.5%) (Roberts et al. 2015). Previous research has additionally suggested that traumatic childhood events are associated with developmental disorders, including ASD, but that low reported rates of PTSD in youth with ASD may be due to diagnosis being complicated by individuals with ASD manifesting symptoms of traumatic stress in a distinct manner from individuals without ASD (Kerns et al. 2015).
It is of note that the most commonly used diagnostic labels for all individuals were those identifying a non-specified anxiety or neurotic disorder, which we termed "other neurotic or anxiety disorder", and that significantly more individuals with ASD were given the diagnosis of "other phobias" than in the general population. This may be because specific disorders or phenomenology are harder to distinguish in individuals with ASD, or because multiple phobic anxiety disorders are present, which may complicate the diagnostic picture. To our knowledge this is the first study to measure somatoform disorder and dissociative disorder in individuals with ASD, and our study found significantly higher rates of these conditions.
Higher rates of anxiety disorders in people with ASD may occur for a number of reasons (Kerns and Kendall 2012). For example, people with ASD may be more likely to experience peer rejection and prevention or punishment of their desired behaviours (for example restricted, repetitive interests). Additionally, social difficulties such as repeated experiences of misinterpreting social situations or communication leading to misunderstandings may produce anxiety, particularly in social situations. In typically developing adolescents and those with ASD, social difficulties have been associated with increased anxiety (Bellini 2006) and in particular an individual’s perception of their social skills difficulties is predictive of social anxiety (Bellini 2004). The impact of social stressors may be increased by a biological vulnerability to anxiety. For example, limbic system dysfunction and behavioural inhibition are associated with both ASD and anxiety disorders. Lower arousal thresholds in the amygdala associated with behavioural inhibition may in turn result in people with ASD avoiding and being conditioned by negative experiences (Bellini 2006). Sensory over-responsivity has also been suggested as a possible cause of anxiety disorder in ASD (Mazurek et al. 2013), causing problematic fears to develop as a result of increased sensitivity to certain stimuli. In OCD, whilst obsessional thoughts are common in the general population, it has been suggested that the cognitive deficits associated with ASD may influence the manner in which these thoughts are appraised, resulting in more anxiety and the development of OCD (Russell et al. 2005).
To our knowledge this is the first study of the difference in prevalence rates of anxiety disorders for adults with ASD with and without ID. The finding of higher rates of anxiety disorders amongst adults with ASD without ID compared with adults with ASD and ID is consistent with some of the results from studies of children with ASD and anxiety disorder (Weisbrot et al. 2005; Sukhodolsky et al. 2008; Mayes et al. 2011) and with a recent study of depression in ASD (Rai et al. 2018), and may be due to a number of factors. Increased rates of anxiety disorders may be present in individuals without ID due to increased cognitive awareness of their impairments (Bauminger et al. 2003). It may also be that the rates of anxiety disorder in people with ASD and ID are underestimated due to carers or clinicians attributing features of anxiety as a symptom of an individual’s ID, also known as diagnostic overshadowing. We found higher rates of “other phobias” in individuals with ASD and ID, consistent with studies in youth, where ID was associated with more “atypical” versus “traditional” anxiety features (Kerns et al. 2014). Additionally, given that many anxiety symptoms are associated with increased verbal ability, the absence of an ID and associated increase in verbal skills may play a key role in the ability to communicate anxiety symptoms. This is consistent with studies in children and adolescents where the presence of functional language use has been associated with an increase in anxiety symptoms in children with ASD (Sukhodolsky et al. 2008), and has been shown to have an opposite pattern to children without ASD, where language deficits were associated with increased anxiety symptoms, suggesting a unique relationship between ASD, language and anxiety (Davis et al. 2011).
Assessing the risk of anxiety disorders in the full- and half-siblings of affected individuals can be a useful tool to help find gradients in risk that may be influenced by genetic loading. Our results did not identify any evidence of risk gradients between full- and half-siblings of individuals with ASD; however, there was an increased risk of an anxiety disorder diagnosis for siblings and half-siblings compared with the general population. Studies of full-siblings of children and young adults with ASD have found a comparable increased risk of anxiety disorders (Jokiranta-Olkoniemi et al. 2016). However, we were not able to control for situations such as parental separation, which may contribute to higher rates of anxiety disorders in all half-siblings, possibly masking a gradient. While the increased risk of anxiety in siblings may still suggest the role of common genetic vulnerability, siblings of children with ASD may also be prone to psychiatric disorders through other mechanisms. For example, they may receive less parental attention because of their autistic sibling having more needs, or may experience other stressors in the household. The sibling control analysis allowed us to directly compare the risk of anxiety disorder in individuals with ASD versus their discordant siblings. This enabled additional control of shared-environmental and shared-genetic confounding factors. The notably higher rates in adults with ASD compared with their discordant siblings give additional evidence for the prospect of genetic factors specifically related to ASD, or other ASD-specific environmental factors that increase vulnerability to anxiety disorders, which could be possible targets for future interventions.
This study highlights that anxiety disorders are a significant problem for adults with autism, and there is therefore a need for effective evidence-based treatments. Cognitive-behavioural therapy (CBT) is the most researched therapy for anxiety disorders in individuals with ASD; however, whilst there is a growing volume of research supporting the use of CBT for anxiety in children and adolescents with ASD, less is known about the use of CBT in adults with ASD (Rosen et al. 2018). Concerning pharmacological treatments, current evidence suggests that selective serotonin re-uptake inhibitors (SSRIs) produce benefit in anxiety disorders in ASD, including for obsessional thoughts (Buchsbaum et al. 2001; McDougle et al. 1996) and compulsions (Hollander et al. 2012; McDougle et al. 1996). However, studies have been small and the overall evidence for pharmacological treatments is also limited (Howes et al. 2018).
The results of this study should be interpreted in the light of the following limitations. Firstly, as this is a register based study, the possibility of exposure and outcome misclassification cannot be ruled out. There may be undiagnosed people with ASD in the population, which could lead to some exposure misclassification. If this were differential in relation to the outcome, for example, if those with greater anxiety symptoms were more easily diagnosed with ASD, then our observed associations may be over-estimates of the true association. If the reverse were true or the misclassification was not differential, then the results would likely be biased towards the null. Likewise, it is likely there are individuals with anxiety disorders in our cohort who have not been recorded as such. Due to the use of registers we were also not able to verify the anxiety disorder diagnoses. This may be an important limitation as there are phenomenological differences in presentation of anxiety disorders in individuals with ASD and/or ID and there may be potential variability between clinicians and mental health care centres in their recognition. The possibility of such measurement error may be compounded by the lack of standardised tools available for the measurement of anxiety disorders within the ASD population. Such misclassification may have been somewhat minimised by our use of anxiety disorder diagnoses from secondary care and inpatient records, which are made by specialists, although it should be noted that there are still no clear consensus guidelines for measuring anxiety in ASD. Although the majority of diagnoses of specific anxiety disorders such as OCD would be made in secondary care, primary care doctors are involved in diagnosing anxiety disorders, particularly generalised anxiety disorder and mixed anxiety and depression; this means that the rates of these conditions within our study may be underestimated. Therefore, the possibility of outcome misclassification cannot be ruled out. Such limitations would be common to any record-linkage study, and cohort studies with detailed clinical information would need to be carried out to assess this further. We were unable to measure early life trauma, which is a limitation as it may be associated with ASD (Kerns et al. 2015) and with anxiety disorders, particularly the higher rates of PTSD. Despite these limitations, this study has several strengths. The Swedish registers are of high quality, enabling a large sample size with the ability to adjust for a range of confounders and examine siblings of individuals with ASD. The prospective data collection minimises the possibility of recall bias.
Anxiety disorders are a notable problem for people with ASD. It is important for adult mental health services to be skilled at working with this population, in which co-occurring anxiety is so prevalent. More research is needed to determine the causes of increased anxiety for people with ASD. Our findings suggest a potential role for environmental pathways; delineating these further may help the development of preventative interventions or targeted treatment. Our study has identified increased rates of specific anxiety disorders for adults with ASD, an area that has been explored in children and adolescents but not studied previously in adults, and these associations need to be replicated in future studies. Future research is also needed to better understand the phenomenology of anxiety disorders in this population and to improve the means of measuring and treating anxiety. This may be an aim worth pursuing because effective management of anxiety in this population could lead to major gains in quality of life and functioning (Kerns and Kendall 2012).
Allebeck, P. (2009). The use of population based registers in psychiatric research. Acta Psychiatrica Scandinavica,120(5), 386–391. https://doi.org/10.1111/j.1600-0447.2009.01474.x.
Bakken, T. L., Helverschou, S. B., Eilertsen, D. E., Heggelund, T., Myrbakk, E., & Martinsen, H. (2010). Psychiatric disorders in adolescents and adults with autism and intellectual disability: A representative study in one county in Norway. Research in Developmental Disabilities,31(6), 1669–1677. https://doi.org/10.1016/j.ridd.2010.04.009.
Bauminger, N., Shulman, C., & Agam, G. (2003). Peer interaction and loneliness in high-functioning children with autism. Journal of Autism and Developmental Disorders. https://doi.org/10.1023/a:1025827427901.
Baxter, A. J., Brugha, T. S., Erskine, H. E., Scheurer, R. W., Vos, T., & Scott, J. G. (2015). The epidemiology and global burden of autism spectrum disorders. Psychological Medicine,45(3), 601–613. https://doi.org/10.1017/S003329171400172X.
Bellini, S. (2004). Social skill deficits and anxiety in high-functioning adolescents with autism spectrum disorders. Focus on Autism and Other Developmental Disabilities. https://doi.org/10.1177/10883576040190020201.
Bellini, S. (2006). The development of social anxiety in adolescents with autism spectrum disorders. Focus on Autism and Other Developmental Disabilities. https://doi.org/10.1177/10883576060210030201.
Bolton, P. F., Pickles, A., Murphy, M., & Rutter, M. (1998). Autism, affective and other psychiatric disorders: patterns of familial aggregation. Psychological Medicine,28(2), 385–395.
Bradley, E., & Bolton, P. (2006). Episodic psychiatric disorders in teenagers with learning disabilities with and without autism. British Journal of Psychiatry,189, 361–366. https://doi.org/10.1192/bjp.bp.105.018127.
Brugha, T. S., Spiers, N., Bankart, J., Cooper, S. A., McManus, S., Scott, F. J., et al. (2016). Epidemiology of autism in adults across age groups and ability levels. British Journal of Psychiatry. https://doi.org/10.1192/bjp.bp.115.174649.
Buchsbaum, M. S., Hollander, E., Haznedar, M. M., Tang, C., Spiegel-Cohen, J., Wei, T. C., et al. (2001). Effect of fluoxetine on regional cerebral metabolism in autistic spectrum disorders: A pilot study. International Journal of Neuropsychopharmacology,4(2), 119–125. https://doi.org/10.1017/S1461145701002280.
Buck, T. R., Viskochil, J., Farley, M., Coon, H., McMahon, W. M., Morgan, J., et al. (2014). Psychiatric comorbidity and medication use in adults with autism spectrum disorder. Journal of Autism and Developmental Disorders,44(12), 3063–3071. https://doi.org/10.1007/s10803-014-2170-2.
Buescher, A. V., Cidav, Z., Knapp, M., & Mandell, D. S. (2014). Costs of autism spectrum disorders in the United Kingdom and the United States. JAMA Pediatrics,168(8), 721–728. https://doi.org/10.1001/jamapediatrics.2014.210.
Charlot, L., Deutsch, C. K., Albert, A., Hunt, A., Connor, D. F., & McIlvane, W. J., Jr. (2008). Mood and anxiety symptoms in psychiatric inpatients with autism spectrum disorder and depression. Journal of Mental Health Research in Intellectual Disabilities. https://doi.org/10.1080/19315860802313947.
Croen, L. A., Zerbo, O., Qian, Y., Massolo, M. L., Rich, S., Sidney, S., et al. (2015). The health status of adults on the autism spectrum. Autism,19(7), 814–823. https://doi.org/10.1177/1362361315577517.
Davis, T. E., Moree, B. N., Dempsey, T., Reuther, E. T., Fodstad, J. C., Hess, J. A., et al. (2011). The relationship between autism spectrum disorders and anxiety: The moderating effect of communication. Research in Autism Spectrum Disorders,5(1), 324–329.
de Graaf, R., Bijl, R. V., Smit, F., Vollebergh, W. A., & Spijker, J. (2002). Risk factors for 12-month comorbidity of mood, anxiety, and substance use disorders: findings from the Netherlands Mental Health Survey and Incidence Study. American Journal of Psychiatry,159(4), 620–629. https://doi.org/10.1176/appi.ajp.159.4.620.
Duvekot, J., van der Ende, J., Constantino, J. N., Verhulst, F. C., & Greaves-Lord, K. (2016). Symptoms of autism spectrum disorder and anxiety: Shared familial transmission and cross-assortative mating. Journal of Child Psychology and Psychiatry,57(6), 759–769. https://doi.org/10.1111/jcpp.12508.
Fombonne, E. (2009). Epidemiology of pervasive developmental disorders. Pediatric Research,65(6), 591–598. https://doi.org/10.1203/PDR.0b013e31819e7203.
Geschwind, D. (2011). Genetics of autism spectrum disorders. Trends in Cognitive Sciences,15(9), 409–416.
Gillberg, I. C., Helles, A., Billstedt, E., & Gillberg, C. (2016). Boys with Asperger syndrome grow up: Psychiatric and neurodevelopmental disorders 20 years after initial diagnosis. Journal of Autism and Developmental Disorders,46(1), 74–82. https://doi.org/10.1007/s10803-015-2544-0.
Gillott, A., & Standen, P. J. (2007). Levels of anxiety and sources of stress in adults with autism. Journal of Intellectual Disabilities. https://doi.org/10.1177/1744629507083585.
Hofvander, B., Delorme, R., Chaste, P., Nyden, A., Wentz, E., Stahlberg, O., et al. (2009). Psychiatric and psychosocial problems in adults with normal-intelligence autism spectrum disorders. BMC Psychiatry,9, 35. https://doi.org/10.1186/1471-244X-9-35.
Hollander, E., Soorya, L., Chaplin, W., Anagnostou, E., Taylor, B. P., Ferretti, C. J., et al. (2012). A double-blind placebo-controlled trial of fluoxetine for repetitive behaviors and global severity in adult autism spectrum disorders. American Journal of Psychiatry,169(3), 292–299. https://doi.org/10.1176/appi.ajp.2011.10050764.
Howes, O. D., Rogdaki, M., Findon, J. L., Wichers, R. H., Charman, T., King, B. H., et al. (2018). Autism spectrum disorder: Consensus guidelines on assessment, treatment and research from the British Association for Psychopharmacology. Journal of Psychopharmacology,32(1), 3–29. https://doi.org/10.1177/0269881117741766.
Hutton, J., Goode, S., Murphy, M., Le Couteur, A., & Rutter, M. (2008). New-onset psychiatric disorders in individuals with autism. Autism,12(4), 373–390. https://doi.org/10.1177/1362361308091650.
Idring, S., Lundberg, M., Sturm, H., Dalman, C., Gumpert, C., Rai, D., et al. (2015). Changes in prevalence of autism spectrum disorders in 2001-2011: Findings from the Stockholm youth cohort. Journal of Autism and Developmental Disorders,45(6), 1766–1773. https://doi.org/10.1007/s10803-014-2336-y.
Idring, S., Rai, D., Dal, H., Dalman, C., Sturm, H., Zander, E., et al. (2012). Autism spectrum disorders in the Stockholm Youth Cohort: Design, prevalence and validity. PLoS ONE,7(7), e41280. https://doi.org/10.1371/journal.pone.0041280.
Jokiranta, E., Brown, A. S., Heinimaa, M., Cheslack-Postava, K., Suominen, A., & Sourander, A. (2013). Parental psychiatric disorders and autism spectrum disorders. Psychiatry Research,207(3), 203–211. https://doi.org/10.1016/j.psychres.2013.01.005.
Jokiranta-Olkoniemi, E., Cheslack-Postava, K., Sucksdorff, D., Suominen, A., Gyllenberg, D., Chudal, R., et al. (2016). Risk of psychiatric and neurodevelopmental disorders among siblings of probands with autism spectrum disorders. JAMA Psychiatry,73(6), 622–629. https://doi.org/10.1001/jamapsychiatry.2016.0495.
Kanai, C., Iwanami, A., Hashimoto, R., Ota, H., Tani, M., Yamada, T., et al. (2011). Clinical characterization of adults with Asperger’s syndrome assessed by self-report questionnaires based on depression, anxiety, and personality. Research in Autism Spectrum Disorders,5(4), 1451–1458.
Kerns, C. M., & Kendall, P. C. (2012). The presentation and classification of anxiety in autism spectrum disorder. Clinical Psychology-Science and Practice,19(4), 323–347. https://doi.org/10.1111/cpsp.12009.
Kerns, C. M., Kendall, P. C., Berry, L., Souders, M. C., Franklin, M. E., Schultz, R. T., et al. (2014). Traditional and atypical presentations of anxiety in youth with autism spectrum disorder. Journal of Autism and Developmental Disorders,44(11), 2851–2861. https://doi.org/10.1007/s10803-014-2141-7.
Kerns, C. M., Newschaffer, C. J., & Berkowitz, S. J. (2015). Traumatic childhood events and autism spectrum disorder. Journal of Autism and Developmental Disorders,45(11), 3475–3486. https://doi.org/10.1007/s10803-015-2392-y.
La Malfa, G., Lassi, S., Salvini, R., Giganti, C., Bertelli, M., & Albertini, G. (2007). The relationship between autism and psychiatric disorders in intellectually disabled adults. Research in Autism Spectrum Disorders,1(3), 218–228. https://doi.org/10.1016/j.rasd.2006.10.004.
Lai, M. C., Lombardo, M. V., & Baron-Cohen, S. (2014). Autism. Lancet,383(9920), 896–910. https://doi.org/10.1016/S0140-6736(13)61539-1.
Lever, A. G., & Geurts, H. M. (2016). Psychiatric co-occurring symptoms and disorders in young, middle-aged, and older adults with autism spectrum disorder. Journal of Autism and Developmental Disorders,46(6), 1916–1930. https://doi.org/10.1007/s10803-016-2722-8.
Lofors, J., Ramirez-Leon, V., & Sundquist, K. (2006). Neighbourhood income and anxiety: A study based on random samples of the Swedish population. European Journal of Public Health,16(6), 633–639. https://doi.org/10.1093/eurpub/ckl026.
Ludvigsson, J. F., Andersson, E., Ekbom, A., Feychting, M., Kim, J. L., Reuterwall, C., et al. (2011). External review and validation of the Swedish national inpatient register. BMC Public Health,11(1), 450. https://doi.org/10.1186/1471-2458-11-450.
Mayes, S. D., Calhoun, S. L., Murray, M. J., Ahuja, M., & Smith, L. A. (2011). Anxiety, depression, and irritability in children with autism relative to other neuropsychiatric disorders and typical development. Research in Autism Spectrum Disorders,5(1), 474–485. https://doi.org/10.1016/j.rasd.2010.06.012.
Mazefsky, C. A., Folstein, S. E., & Lainhart, J. E. (2008). Overrepresentation of mood and anxiety disorders in adults with autism and their first-degree relatives: What does it mean? Autism Research,1(3), 193–197. https://doi.org/10.1002/aur.23.
Mazurek, M. O., Vasa, R. A., Kalb, L. G., Kanne, S. M., Rosenberg, D., Keefer, A., et al. (2013). Anxiety, sensory over-responsivity, and gastrointestinal problems in children with autism spectrum disorders. Journal of Abnormal Child Psychology,41(1), 165–176. https://doi.org/10.1007/s10802-012-9668-x.
McDougle, C. J., Naylor, S. T., Cohen, D. J., Volkmar, F. R., Heninger, G. R., & Price, L. H. (1996). A double-blind, placebo-controlled study of fluvoxamine in adults with autistic disorder. Archives of General Psychiatry,53(11), 1001–1008.
Meier, S. M., Petersen, L., Schendel, D. E., Mattheisen, M., Mortensen, P. B., & Mors, O. (2015). Obsessive-compulsive disorder and autism spectrum disorders: Longitudinal and offspring risk. PLoS ONE,10(11), e0141703. https://doi.org/10.1371/journal.pone.0141703.
Rai, D., Heuvelman, H., Dalman, C., Culpin, I., Lundberg, M., Carpenter, P., et al. (2018). Association between autism spectrum disorders with or without intellectual disability and depression in young adulthood. JAMA Network Open,1(4), e181465. https://doi.org/10.1001/jamanetworkopen.2018.1465.
Rai, D., Lewis, G., Lundberg, M., Araya, R., Svensson, A., Dalman, C., et al. (2012). Parental socioeconomic status and risk of offspring autism spectrum disorders in a Swedish population-based study. Journal of the American Academy of Child and Adolescent Psychiatry, 51(5), 467–476 e466. https://doi.org/10.1016/j.jaac.2012.02.012.
Roberts, A. L., Koenen, K. C., Lyall, K., Robinson, E. B., & Weisskopf, M. G. (2015). Association of autistic traits in adulthood with childhood abuse, interpersonal victimization, and posttraumatic stress. Child Abuse and Neglect,45, 135–142. https://doi.org/10.1016/j.chiabu.2015.04.010.
Rosen, T. E., Mazefsky, C. A., Vasa, R. A., & Lerner, M. D. (2018). Co-occurring psychiatric conditions in autism spectrum disorder. International Review of Psychiatry,30(1), 40–61. https://doi.org/10.1080/09540261.2018.1450229.
Ruck, C., Larsson, K. J., Lind, K., Perez-Vigil, A., Isomura, K., Sariaslan, A., et al. (2015). Validity and reliability of chronic tic disorder and obsessive-compulsive disorder diagnoses in the Swedish National Patient Register. British Medical Journal Open,5(6), e007520. https://doi.org/10.1136/bmjopen-2014-007520.
Russell, A. J., Mataix-Cols, D., Anson, M., & Murphy, D. G. M. (2005). Obsessions and compulsions in Asperger syndrome and high-functioning autism. British Journal of Psychiatry,186, 525–528. https://doi.org/10.1192/bjp.186.6.525.
Russell, A. J., Murphy, C. M., Wilson, E., Gillan, N., Brown, C., Robertson, D. M., et al. (2016). The mental health of individuals referred for assessment of autism spectrum disorder in adulthood: A clinic report. Autism,20(5), 623–627. https://doi.org/10.1177/1362361315604271.
Sukhodolsky, D. G., Scahill, L., Gadow, K. D., Arnold, L. E., Aman, M. G., McDougle, C. J., et al. (2008). Parent-rated anxiety symptoms in children with pervasive developmental disorders: Frequency and association with core autism symptoms and cognitive functioning. Journal of Abnormal Child Psychology,36(1), 117–128. https://doi.org/10.1007/s10802-007-9165-9.
Tani, M., Kanai, C., Ota, H., Yamada, T., Watanabe, H., Yokoi, H., et al. (2012). Mental and behavioral symptoms of person’s with Asperger’s syndrome: Relationships with social isolation and handicaps. Research in Autism Spectrum Disorders,6, 907–912.
van Steensel, F. J., Bogels, S. M., & Perrin, S. (2011). Anxiety disorders in children and adolescents with autistic spectrum disorders: A meta-analysis. Clinical Child and Family Psychology Review,14(3), 302–317. https://doi.org/10.1007/s10567-011-0097-0.
Vasa, R. A., & Mazurek, M. O. (2015). An update on anxiety in youth with autism spectrum disorders. Current Opinion in Psychiatry,28(2), 83–90. https://doi.org/10.1097/YCO.0000000000000133.
Weisbrot, D. M., Gadow, K. D., DeVincent, C. J., & Pomeroy, J. (2005). The presentation of anxiety in children with pervasive developmental disorders. Journal of Child and Adolescent Psychopharmacology,15(3), 477–496. https://doi.org/10.1089/cap.2005.15.477.
White, S. W., Oswald, D., Ollendick, T., & Scahill, L. (2009). Anxiety in children and adolescents with autism spectrum disorders. Clinical Psychology Review,29(3), 216–229. https://doi.org/10.1016/j.cpr.2009.01.003.
Xie, S., Karlsson, H., Dalman, C., Widman, L., Rai, D., Gardner, R. M., et al. (2019). Family history of mental and neurological disorders and risk of autism. JAMA Network Open,2(3), e190154. https://doi.org/10.1001/jamanetworkopen.2019.0154.
This research was funded by Grant 3747-6849 from The Baily Thomas Charitable Foundation and by Grant 2017-010006 from the Swedish Research Council for Health, Working Life, and Welfare. This study was also supported by the National Institute for Health Research Biomedical Research Centre at the University Hospitals Bristol National Health Service Foundation Trust and the University of Bristol (Grant No. BRC-1215-2011) and Victoria Nimmo-Smith was funded by a National Institute for Health Research Academic Clinical Fellowship award (reference number ACF-2016-25-503).
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cecilia Magnusson and Dheeraj Rai are joint senior authors.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Nimmo-Smith, V., Heuvelman, H., Dalman, C. et al. Anxiety Disorders in Adults with Autism Spectrum Disorder: A Population-Based Study. J Autism Dev Disord 50, 308–318 (2020). https://doi.org/10.1007/s10803-019-04234-3
- Autism spectrum disorder
- Intellectual disability
- Mental health
Open Access
DroID: the Drosophila Interactions Database, a comprehensive resource for annotated gene and protein interactions
BMC Genomics volume 9, Article number: 461 (2008)
Charting the interactions among genes and among their protein products is essential for understanding biological systems. A flood of interaction data is emerging from high throughput technologies, computational approaches, and literature mining methods. Quick and efficient access to this data has become a critical issue for biologists. Several excellent multi-organism databases for gene and protein interactions are available, yet most of these have understandable difficulty maintaining comprehensive information for any one organism. No single database, for example, includes all available interactions, integrated gene expression data, and comprehensive and searchable gene information for the important model organism, Drosophila melanogaster.
DroID, the Drosophila Interactions Database, is a comprehensive interactions database designed specifically for Drosophila. DroID houses published physical protein interactions, genetic interactions, and computationally predicted interactions, including interologs based on data for other model organisms and humans. All interactions are annotated with original experimental data and source information. DroID can be searched and filtered based on interaction information or a comprehensive set of gene attributes from Flybase. DroID also contains gene expression and expression correlation data that can be searched and used to filter datasets, for example, to focus a study on sub-networks of co-expressed genes. To address the inherent noise in interaction data, DroID employs an updatable confidence scoring system that assigns a score to each physical interaction based on the likelihood that it represents a biologically significant link.
DroID is the most comprehensive interactions database available for Drosophila. To facilitate downstream analyses, interactions are annotated with original experimental information, gene expression data, and confidence scores. All data in DroID are freely available and can be searched, explored, and downloaded through three different interfaces, including a text based web site, a Java applet with dynamic graphing capabilities (IM Browser), and a Cytoscape plug-in. DroID is available at http://www.droidb.org.
Many of the important properties of biological systems emerge as a result of the interactions among genes and among their protein products. Genes and the proteins they encode participate in gene-gene, gene-protein, and protein-protein interactions to mediate a wide variety of biological processes. An increasing appreciation for the importance of charting these interactions has lead to many large-scale efforts to identify gene and protein interactions for a number of systems . As this data continues to accumulate from a variety of sources there is an increasing need for comprehensive databases and analysis tools that allow biologists to make use of it. Genes and proteins that function in the same pathway, for example, interact directly or indirectly, and their functions can only be fully understood in the context of the interaction networks to which they belong.
Gene and protein interaction data have come from a variety of sources. To detect protein-protein interactions, for example, high throughput yeast two-hybrid [2–5] and co-affinity purification [6, 7] screens have been developed and applied to proteins from humans and several model organisms. To generate large networks of gene-protein interactions, high throughput techniques are being developed for detecting transcription factors and other proteins bound to DNA [8–11]. Finally, gene-gene interactions that suggest functional relationships between pairs of genes are being revealed by large-scale assays for genetic interactions [12, 13]. While each type of interaction data has proven useful for understanding how genes and their products work together in biological systems, the large amount of disparate data can be difficult to access and interpret. Combining data from different sources has become important because no single screen or technique is free from false positives and false negatives. Many studies have shown, for example, that interactions detected in multiple screens or by multiple techniques are less likely to be false positives, so that combining datasets can provide a simple way to gain confidence in any particular set of interactions. Likewise, the inability of any one technique or particular screen to detect all biologically relevant interactions suggests that combining datasets increases coverage.
A number of centralized databases have been implemented to store gene and protein interaction data and to make it publicly available [15–21]. While most of the data are from large-scale screens, several of these databases have also begun to include data from small-scale 'low throughput' experiments collected by manual curation of the literature. Despite the ideal of central databases to be comprehensive, a surprising number of interactions can be found in one database but not another [22, 23]. Thus, biologists have been well advised to consult multiple databases to get a complete picture of the available data. Most of the large interaction databases include data for many different species. Such multi-species databases, however, are rarely fully comprehensive for any one organism; for example, organism-specific gene information, such as gene expression and phenotype data is not available for searching and filtering the interaction data. Multi-organism databases also have difficulty representing potentially conserved interactions for any given species. Finding conserved interactions requires looking up the orthologous proteins and conducting searches for interaction data in each of several different organisms. Recently, a few public databases have addressed these issues in efforts to generate comprehensive resources for a particular species; e.g., HomoMINT and UniHI for humans . DroID is designed to be a comprehensive interactions database dedicated to the important model organism, Drosophila melanogaster.
Construction and content
We developed DroID with several guiding principles in mind. First, we set out to combine all available gene and protein interaction data for Drosophila into one place where it could be frequently updated. DroID also contains searchable gene information from Flybase, the central repository for Drosophila gene information , enabling users to find or filter interactions based on Drosophila-specific gene attributes. Second, DroID strives to include all original data when available. For example, the database tries to obtain and store even technique-specific or experiment-specific details. These details, which are often missing from centralized databases, can facilitate a wider range of downstream analyses. Third, DroID tracks primary sources and secondary sources, providing links to references where available, so that users can trace the provenance of each interaction. Fourth, DroID strives to eliminate redundancy. If an interaction derived from a single primary reference is found in slightly different forms in multiple databases, a single instance with the appropriate reference appears in DroID. Fifth, DroID includes interactions predicted from experimental data for other major model organisms and humans. Because interactions are often conserved, data from other organisms can be used to infer likely interactions between orthologous proteins in Drosophila. Such predicted interactions, which have been called interologs , essentially enable researchers to use humans and other organisms as 'model organisms' for Drosophila studies. Sixth, every interaction in DroID is annotated with a confidence score providing a measure of the likelihood that it is a biologically relevant interaction, and a separate score indicating the level of co-expression of the two genes involved. Finally, we set out to provide complete access to DroID with three user-friendly interfaces that include some features especially geared toward Drosophila researchers (Figure 1).
DroID is an extensive update of an earlier database . New features that are described in more detail below include a web interface, gene expression data, calculated gene correlation values, confidence scores, and substantially more interaction data. In addition, DroID is updated quarterly and each version is available for download. The current version of DroID (v4.0) is described here.
Interaction data and generation of interologs
DroID is stored in a relational database with each major interaction dataset corresponding to one database table (see Table 1). As new datasets become available, new tables are added. The different datasets can be seamlessly integrated or searched separately. Frequently, the overlap among different datasets contains more reliable interactions, and this overlap will be obvious to users. While much of the data in DroID represents protein-protein interactions, all interactions are keyed to gene or locus identifiers because protein interaction data rarely includes knowledge of specific alternative splice forms or protein isoforms. DroID uses the Flybase gene number (FBgn) to specify a gene or a protein encoded by a gene. Other common gene identifiers, such as the gene symbol or CG number, are also stored.
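As a rough illustration of this layout (one table per interaction dataset, keyed by the Flybase gene number), the snippet below creates a miniature schema with sqlite3. The table and column names are invented for the example and do not reproduce DroID's actual schema.

```python
# Miniature illustration of the one-table-per-dataset layout; names are invented
# and do not reflect the real DroID schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gene (
    fbgn      TEXT PRIMARY KEY,  -- Flybase gene number used as the uniform key
    symbol    TEXT,
    cg_number TEXT
);
-- One table per major interaction dataset, e.g. a yeast two-hybrid screen.
CREATE TABLE yth_screen_1 (
    fbgn_a     TEXT REFERENCES gene(fbgn),
    fbgn_b     TEXT REFERENCES gene(fbgn),
    pmid       TEXT,   -- provenance: original publication
    confidence REAL    -- cross-dataset confidence score in [0, 1]
);
""")
con.execute("INSERT INTO gene VALUES ('FBgn0000001', 'geneA', 'CG0001')")  # placeholder row
con.commit()
```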
DroID contains the yeast two-hybrid interactions published in three major studies [29–31] in addition to unpublished interactions from an ongoing large-scale two-hybrid screening project [29, 32]. Full experimental details as reported by the original publications are included. For Drosophila physical interactions not covered by the three large-scale yeast two-hybrid screens and for interactions of human, worm (C. elegans), and yeast proteins, raw data are downloaded from respective online databases. These databases include BioGRID, IntAct, and MINT, in addition to MIPS for yeast and HPRD, PDZbase, and Reactome for human. To enable periodic updates we established a pipeline for entering data into DroID as follows. First, raw interaction data is parsed to ensure that it includes only physical protein-protein interactions. DroID obtains interactions annotated with at least one detection method that detects physical interactions (e.g., yeast two-hybrid, mass spectrometry, pull down, etc.). Second, we map genes to uniform identifiers for the four organisms utilized by DroID; that is, Flybase gene number (FBgn) for fly, Ensembl gene identifier (ENSG) for human, Wormbase gene identifier (WBGene) for worm, and ORF identifiers for yeast. For each interaction, DroID stores the original PubMed identifier (PMID), methods used in detecting it, and the databases reporting it. Finally, we map interactions collected from human, worm, and yeast to Drosophila interologs by orthology mapping using Inparanoid (currently at version 6). DroID also stores genetic interactions obtained from Flybase, each annotated by reference numbers that trace to original data sources. Aside from interologs, DroID currently does not include interactions based solely on computational predictions, which may be found in other databases [36, 37]. For example, the Fly-DPI database has Drosophila protein interactions predicted on the basis of domain pairs found in experimental PPI.
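The final orthology-mapping step can be pictured as projecting each non-Drosophila interaction onto all fly ortholog pairs of its two partners. The following is a minimal sketch of that idea, assuming InParanoid ortholog pairs have already been reduced to (source gene, fly FBgn) tuples; the function names and identifiers are illustrative, not the DroID production pipeline.

```python
# Minimal sketch of interolog mapping: project source-organism interactions onto
# fly gene pairs via an ortholog table. All names here are illustrative only.
from collections import defaultdict

def build_ortholog_map(inparanoid_pairs):
    """inparanoid_pairs: iterable of (source_gene_id, fly_fbgn)."""
    orthologs = defaultdict(set)
    for src, fbgn in inparanoid_pairs:
        orthologs[src].add(fbgn)
    return orthologs

def map_interologs(interactions, orthologs):
    """Project source-organism interactions (geneA, geneB, pmid) onto fly pairs."""
    interologs = set()
    for a, b, pmid in interactions:
        for fa in orthologs.get(a, ()):
            for fb in orthologs.get(b, ()):
                pair = tuple(sorted((fa, fb)))   # undirected, de-duplicated
                interologs.add((pair[0], pair[1], pmid))
    return interologs

# Toy example with made-up identifiers:
ortho = build_ortholog_map([("ENSG0000001", "FBgn0000008"), ("ENSG0000002", "FBgn0000017")])
print(map_interologs([("ENSG0000001", "ENSG0000002", "12345678")], ortho))
```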
Gene attributes and gene expression data
DroID includes a searchable gene attributes table populated from periodically updated gene annotations available in Flybase . Users can search for interactions involving specific genes by searching for gene names, symbols, synonyms, or gene identifiers. The gene attributes table also allows searches based on gene class, gene function annotations based on gene ontology (GO) , and protein domains. The IM Browser interface further extends this search ability by enabling a live search of Flybase for genes based on additional attributes, including reference and phenotype.
DroID also stores searchable gene expression data, which allows interaction data to be viewed and filtered in the context of gene expression patterns. DroID currently has two microarray-based gene expression datasets that can be used to constrain a search for interactions. One dataset includes genome-wide expression profiles over the course of embryogenesis in half-hour increments , and the other includes expression profiles for a developmental time course from early embryos through adults . DroID can accommodate additional gene and protein expression data as they become available.
Gene expression correlation
Genes that are frequently co-expressed often function together in common processes (e.g., [41, 42]). Thus, there is substantial value in knowing the level of co-expression for pairs of genes that interact. To facilitate co-expression analyses for Drosophila, we computed correlation values between pair-wise expression profiles derived from the Gene Expression Omnibus (GEO) database. We downloaded all D. melanogaster gene expression datasets from GEO and computed linear Pearson correlations between pair-wise expression profiles within each dataset. We first removed datasets that have less than 5 samples (e.g., tissues, conditions, or time points) to avoid possible spurious strong correlations. This resulted in 49 genome-wide expression datasets with 844 combined samples. Multiple correlations for a pair of genes from different datasets were then combined to produce a final correlation value for a specific gene pair. The combination is done based on how many samples each dataset has. Intuitively, a correlation based on a dataset having many samples may be more significant than the same value derived from another dataset with only a few samples. If there are n datasets, and each reports a correlation of x_i for a gene pair, the final correlation value is computed by

corr = Σ_i (x_i * s_i) / Σ_i s_i

where i ∈ [1, 2, ..., n], and s_i represents the number of samples in dataset i. Every interaction in DroID is annotated with the current gene expression correlation value for that pair. Correlation values are updateable as new gene expression datasets are added to GEO.
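As a worked illustration of this sample-size weighting (with made-up numbers), a gene pair observed in three datasets with correlations 0.6, 0.1 and 0.3 based on 40, 5 and 20 samples would receive a combined value of about 0.47, dominated by the larger datasets:

```python
# Worked illustration of the sample-size-weighted combination defined above.
# The per-dataset correlations and sample counts below are made-up numbers.
def combine_correlations(per_dataset):
    """per_dataset: list of (correlation x_i, n_samples s_i) for one gene pair."""
    num = sum(x * s for x, s in per_dataset)
    den = sum(s for _, s in per_dataset)
    return num / den

print(combine_correlations([(0.6, 40), (0.1, 5), (0.3, 20)]))
# -> (0.6*40 + 0.1*5 + 0.3*20) / (40 + 5 + 20) = 30.5 / 65 ≈ 0.469
```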
Cross data set confidence scores
Protein-protein interaction data tend to be noisy, with variable rates of false positives from one dataset to another. A novel feature of DroID is the annotation of each physical protein-protein interaction with an updateable confidence score that reflects the probability that it is a biologically relevant true positive. Most methods for generating confidence scores work within a single type of data, such as yeast two-hybrid or protein complex data, by searching for features of the data that correlate with biological significance [30, 44–47]. As a consequence, the scores derived for one data set bear little relation to those for another data set. In contrast, DroID assigns confidence scores to all physical interactions, including data from different techniques and interologs derived from worm, yeast, and human. The method used to assign confidence scores is based on the logistic regression approach described by Giot et al. [30, 44]. In this approach we first identify training data, including a set of interactions that are likely to be true positives and another set that are likely to be false positives. We then search for specific attributes of the interactions that correlate with the two training sets. The attributes include gene expression correlation, number of associated literature citations (PubMed identifiers or PMIDs), local and global network topology, and domain-domain interactions. For example, the number of PMIDs for an interaction correlates with its likelihood of being in the true positive training set. Conversely, the number of interactions for a protein is inversely correlated with presence in the true positive training set. The correlations are then used to train a logistic regression model that can assign scores to all interactions based on their attributes. For the interactions in DroID, we used a variation of this scoring system in which we combine multiple training datasets to reduce the potential bias of any single training set (Yu, submitted).
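In outline, the scoring procedure amounts to fitting a logistic regression on interaction attributes using positive and negative training sets, then applying the fitted model to every physical interaction. The sketch below shows that outline with scikit-learn; the feature names follow those mentioned in the text, but the files, columns, and exact feature definitions are placeholders, and this is not the model actually used to score DroID (which follows Giot et al. and combines multiple training sets).

```python
# Outline only: a logistic regression trained on labelled interactions and used
# to score all physical interactions. Files, columns and features are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("training_interactions.csv")      # hypothetical labelled set
features = ["expr_correlation", "n_pmids",
            "degree_a", "degree_b", "shared_domain_pairs"]

model = LogisticRegression(max_iter=1000)
model.fit(train[features], train["is_true_positive"])  # 1 = positive set, 0 = negative set

ppi = pd.read_csv("all_physical_interactions.csv")     # hypothetical interaction table
ppi["confidence"] = model.predict_proba(ppi[features])[:, 1]  # probability of true positive
high_confidence = ppi[ppi["confidence"] > 0.5]          # the high-confidence set
print(len(high_confidence))
```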
Every physical interaction in DroID has a confidence score between 0 and 1 to represent the probability that it is a biological true positive. Validation of the scoring system shows that interactions with higher scores are more likely to be biologically relevant than interactions with lower scores (Yu et al., submitted). The set of interactions with scores greater than 0.5, for example, contains significantly more pairs of genes that share GO biological process or cellular component annotations compared to interactions scoring < 0.5, or to random pairs. Interactions scoring less than 0.5 also share significantly more GO annotations than random pairs of genes, which indicates that overall the interactions collected in DroID are enriched for biologically relevant true positives. As additional interaction data and other new information become available, the scoring models can be periodically retrained to improve the overall accuracy. Thus, the confidence scores are updateable and receive a version number at each revision.
Utility and discussion
Summary of the database
The current version of DroID (v4.0) contains 131,659 links among 9,511 D. melanogaster genes, or roughly 64.4% of the predicted genes. The small amount of overlap between different interaction sets (Table 2) shows that no single dataset adequately covers the available data, and serves to illustrate the value of making all data available in one location. A major limitation to the value of most interaction datasets is the presence of false positive interactions that have no biological significance. To overcome this limitation and to help biologists focus on the most reliable interactions, DroID assigns confidence scores to individual interactions to denote potential biological significance. The current version of the confidence scoring system (v2.0) assigned scores to the 126,896 physical interactions in DroID (excluding genetic interactions). Of these, 28,259 (22.3%) interactions received a score above 0.5, distinguishing them as the high confidence set. These scores should help biologists focus on the most reliable subset of the data for future studies. For example, networks and subnetworks can be filtered based on user-defined confidence limits to accommodate analyses that tolerate different levels of uncertainty.
Gene expression correlation
In addition to physical protein-protein interactions and genetic interactions, gene expression data can be a valuable tool for linking together genes that may function together. It has been shown, for example, that genes with correlated expression patterns are more likely to function together in common biological processes (e.g., [41, 42]), and at least in yeast, proteins encoded by co-expressed genes are more likely to participate in direct physical interactions than random pairs . To help reveal relevant functional linkages, every gene pair in DroID is annotated with gene expression correlation values. Consistent with findings in yeast, we found that the physical protein-protein interactions in DroID are encoded by gene pairs with significantly higher expression correlations than random gene pairs (p-value < 2.2*10-16, Figure 2A and Figure 3). In addition, higher expression correlation values were seen for gene pairs that genetically interact, and therefore are likely to function in common biological processes (Figure 2A). Interestingly, the Drosophila physical interactions that overlap with interologs detected in other species have a significantly higher expression correlation than the remainder of the physical interactions (p-value < 2.2*10-16, Figure 2B), suggesting that conserved interactions involve proteins that are more likely to be co-expressed than non-conserved interactions. It is noteworthy that the average correlation values are not very high (e.g., 0.13 for the DroID physical interactions) and that many gene pairs have a negative correlation. This result is not surprising for a multi-cellular organism in which functionally relevant interactions can occur between pairs of proteins even if they are only co-expressed during a fraction of developmental time or in just one or a few tissues.
Viewing interaction data in the context of gene expression data
Gene expression data can also be used to view interaction data in a dynamic context. Most gene and protein interaction data that are currently available come from studies that are independent of gene expression. Examples include yeast two-hybrid data in which pairs of proteins are expressed together in yeast, whether or not they are co-expressed in vivo, and co-AP experiments in which often at least one of the proteins is artificially expressed with an affinity tag in tissue culture cells. Thus, the protein interactions in DroID and most other databases represent pairs of proteins that may interact in vivo, but only if they are expressed together. A powerful way to view this interaction data, therefore, is in the context of gene expression patterns for a particular tissue or developmental time point. DroID includes gene expression data from genome-wide developmental studies. This data can be used to constrain a set of interactions to include only genes expressed at a user-defined level and time point or developmental stage.
DroID access interfaces
All data in DroID can be accessed and downloaded in part or whole via three different interfaces (Figure 1). A user-friendly web interface is provided for simple searching, browsing, and downloading of DroID data. Going to the DroID web page opens a search box, which asks users for a term describing a gene or protein. The term can be a gene symbol, name, synonym, or a term describing a gene or protein (Figure 1A). Clicking 'Search Genes' produces a page listing genes that fit the search criteria. On this page, users select one or any number of the genes, and then have the option to select specific interaction datasets or to search all of them simultaneously. The search produces a results page listing the found interactions and their current confidence scores (Figure 1B). Each interaction is represented by the symbols of the two genes and a list of the datasets in which they were found. Additional information about each gene, including GO annotations and links to Flybase can be obtained by clicking on the gene symbol. Similarly, clicking on the dataset name for each interaction reveals its details, including original experimental data when available, references, and relevant links. The results page also includes several additional options for further analysis. These include an option to show the gene expression correlation values for each interaction and an option to filter the results by gene expression patterns or confidence scores. Utilizing these filters helps researchers to focus on interactions that are more likely to be true positives or that involve co-expressed genes. The results page also includes a link for downloading the interactions in formats that can then be uploaded into network analysis programs. Finally, a link is included that will generate a summary table showing the number of interactions for the selected genes in each of the interaction datasets, including those not originally searched. The summary table also includes a button that automatically opens the IM Browser applet to generate a graphical map of the interactions (see below).
DroID can be accessed via two different dynamic interfaces that allow an interaction network to be explored as a graph where nodes represent genes or proteins and edges connecting the nodes represent interactions. Viewing an interaction map in this way places each gene and interaction into the context of other interactions and facilitates biological insights that are not possible from simple lists of interactions. The first interface is a plug-in (Figure 1C) that allows DroID to be accessed through the powerful network visualization and analysis program, Cytoscape . The second interface is IM Browser (Figure 1D), a program originally designed to access an earlier version of DroID and other interaction databases . IM Browser runs as a java applet and allows advanced queries and dynamic graphing of search results. While a complete description of IM Browser capabilities is beyond the scope of this paper, a few features are worth noting here. First, the program easily accommodates new types of interaction data and dynamically enables all node and edge information to be used in searches and filtering. This feature is important as new techniques for detecting interactions are needed and continue to emerge, and each new technique has its own type of data. Second, interaction maps can be edited and saved to the user's local computer, and local datasets can be loaded into the program to allow the user to view and analyze their own interactions in the context of DroID data. Finally, a new feature of IM Browser allows maps to be filtered based on gene expression data or confidence scores. The constraint is implemented as a dynamic filter that can be applied to an existing interaction map. As new gene expression data becomes available, and eventually protein expression data is collected from proteomics studies, an increasingly fruitful way to view interaction maps will be in the context of specific temporal and spatial expression patterns.
DroID is a comprehensive interactions database designed specifically for Drosophila melanogaster. The database currently covers more Drosophila genes and interactions than any other single database and is periodically updated. Because it is an organism-specific database, it readily includes potentially conserved interactions found in other organisms by mapping them to Drosophila genes. The database also includes comprehensive gene information, including Drosophila-specific information, which can be used to search for and filter interactions and to analyze gene networks. DroID includes gene expression data, both as expression profiles and as correlation values, to help researchers link together genes that may function together in specific biological processes. Finally, DroID assigns updateable confidence scores to every physical interaction to help focus studies on biologically relevant links. Combined with three user interfaces, DroID should provide a valuable resource for studying Drosophila systems.
Availability and requirements
DroID is freely available for non-commercial use. Any modern web browser can access the DroID home page at http://www.droidb.org/. A web browser with an installed Java Virtual Machine can access IM Browser from the DroID home page or from a list of found interactions. Cytoscape enables installation and usage of the DroID plugin for Cytoscape.
Shoemaker BA, Panchenko AR: Deciphering protein-protein interactions. Part I. Experimental techniques and databases. PLoS Comput Biol. 2007, 3 (3): e42-10.1371/journal.pcbi.0030042.
Fields S, Song O: A Novel Genetic System to Detect Protein-Protein Interactions. Nature. 1989, 340 (6230): 245-246. 10.1038/340245a0.
Parrish JR, Gulyas KD, Finley RL: Yeast two-hybrid contributions to interactome mapping. Curr Opin Biotechnol. 2006, 17 (4): 387-393. 10.1016/j.copbio.2006.06.006.
Fields S: High-throughput two-hybrid analysis. The promise and the peril. Febs J. 2005, 272 (21): 5391-5399. 10.1111/j.1742-4658.2005.04973.x.
Vidal M: Interactome modeling. FEBS Lett. 2005, 579 (8): 1834-1838. 10.1016/j.febslet.2005.02.030.
Gavin AC, Superti-Furga G: Protein complexes and proteome organization from yeast to man. Curr Opin Chem Biol. 2003, 7 (1): 21-27. 10.1016/S1367-5931(02)00007-8.
Gingras AC, Aebersold R, Raught B: Advances in protein complex analysis using mass spectrometry. J Physiol. 2005, 563 (Pt 1): 11-21.
Deplancke B, Mukhopadhyay A, Ao W, Elewa AM, Grove CA, Martinez NJ, Sequerra R, Doucette-Stamm L, Reece-Hoyes JS, Hope IA: A gene-centered C. elegans protein-DNA interaction network. Cell. 2006, 125 (6): 1193-1205. 10.1016/j.cell.2006.04.038.
Lee TI, Rinaldi NJ, Robert F, Odom DT, Bar-Joseph Z, Gerber GK, Hannett NM, Harbison CT, Thompson CM, Simon I: Transcriptional regulatory networks in Saccharomyces cerevisiae. Science. 2002, 298 (5594): 799-804. 10.1126/science.1075090.
Mukherjee S, Berger MF, Jona G, Wang XS, Muzzey D, Snyder M, Young RA, Bulyk ML: Rapid analysis of the DNA-binding specificities of transcription factors with DNA microarrays. Nat Genet. 2004, 36 (12): 1331-1339. 10.1038/ng1473.
Sandmann T, Jakobsen JS, Furlong EE: ChIP-on-chip protocol for genome-wide analysis of transcription factor binding in Drosophila melanogaster embryos. Nat Protoc. 2006, 1 (6): 2839-2855. 10.1038/nprot.2006.383.
Boone C, Bussey H, Andrews BJ: Exploring genetic interactions and networks with yeast. Nat Rev Genet. 2007, 8 (6): 437-449. 10.1038/nrg2085.
Lehner B, Crombie C, Tischler J, Fortunato A, Fraser AG: Systematic mapping of genetic interactions in Caenorhabditis elegans identifies common modifiers of diverse signaling pathways. Nat Genet. 2006, 38 (8): 896-903. 10.1038/ng1844.
von Mering C, Krause R, Snel B, Cornell M, Oliver SG, Fields S, Bork P: Comparative Assessment of Large-Scale Data Sets of Protein-Protein Interactions. Nature. 2002, 417 (6887): 399-403. 10.1038/nature750.
Salwinski L, Miller CS, Smith AJ, Pettit FK, Bowie JU, Eisenberg D: The Database of Interacting Proteins: 2004 update. Nucleic Acids Res. 2004, D449-451. 10.1093/nar/gkh086. 32 Database
Kerrien S, Alam-Faruque Y, Aranda B, Bancarz I, Bridge A, Derow C, Dimmer E, Feuermann M, Friedrichsen A, Huntley R: IntAct – open source resource for molecular interaction data. Nucleic Acids Res. 2007, D561-565. 10.1093/nar/gkl958. 35 Database
Stark C, Breitkreutz BJ, Reguly T, Boucher L, Breitkreutz A, Tyers M: BioGRID: a general repository for interaction datasets. Nucleic Acids Res. 2006, D535-539. 10.1093/nar/gkj109. 34 Database
Chatr-aryamontri A, Ceol A, Palazzi LM, Nardelli G, Schneider MV, Castagnoli L, Cesareni G: MINT: the Molecular INTeraction database. Nucleic Acids Res. 2007, D572-574. 10.1093/nar/gkl950. 35 Database
Guldener U, Munsterkotter M, Oesterheld M, Pagel P, Ruepp A, Mewes HW, Stumpflen V: MPact: the MIPS protein interaction resource on yeast. Nucleic Acids Res. 2006, D436-441. 10.1093/nar/gkj003. 34 Database
Mishra GR, Suresh M, Kumaran K, Kannabiran N, Suresh S, Bala P, Shivakumar K, Anuradha N, Reddy R, Raghavan TM: Human protein reference database – 2006 update. Nucleic Acids Res. 2006, D411-414. 10.1093/nar/gkj141. 34 Database
Vastrik I, D'Eustachio P, Schmidt E, Joshi-Tope G, Gopinath G, Croft D, de Bono B, Gillespie M, Jassal B, Lewis S: Reactome: a knowledge base of biologic pathways and processes. Genome Biol. 2007, 8 (3): R39-10.1186/gb-2007-8-3-r39.
Reguly T, Breitkreutz A, Boucher L, Breitkreutz BJ, Hon GC, Myers CL, Parsons A, Friesen H, Oughtred R, Tong A: Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae. J Biol. 2006, 5 (4): 11-10.1186/jbiol36.
Mathivanan S, Periaswamy B, Gandhi TK, Kandasamy K, Suresh S, Mohmood R, Ramachandra YL, Pandey A: An evaluation of human protein-protein interaction data in the public domain. BMC Bioinformatics. 2006, 7 (Suppl 5): S19-10.1186/1471-2105-7-S5-S19.
Persico M, Ceol A, Gavrila C, Hoffmann R, Florio A, Cesareni G: HomoMINT: an inferred human network based on orthology mapping of protein interactions discovered in model organisms. BMC Bioinformatics. 2005, 6 (Suppl 4): S21-10.1186/1471-2105-6-S4-S21.
Chaurasia G, Iqbal Y, Hanig C, Herzel H, Wanker EE, Futschik ME: UniHI: an entry gate to the human protein interactome. Nucleic Acids Res. 2007, D590-594. 10.1093/nar/gkl817. 35 Database
Crosby MA, Goodman JL, Strelets VB, Zhang P, Gelbart WM: FlyBase: genomes by the dozen. Nucleic Acids Res. 2007, D486-491. 10.1093/nar/gkl827. 35 Database
Yu H, Luscombe NM, Lu HX, Zhu X, Xia Y, Han JD, Bertin N, Chung S, Vidal M, Gerstein M: Annotation transfer between genomes: protein-protein interologs and protein-DNA regulogs. Genome Res. 2004, 14 (6): 1107-1118. 10.1101/gr.1774904.
Pacifico S, Liu G, Guest S, Parrish JR, Fotouhi F, Finley RL: A database and tool, IM Browser, for exploring and integrating emerging gene and protein interaction data for Drosophila. BMC Bioinformatics. 2006, 7: 195-10.1186/1471-2105-7-195.
Stanyon CA, Liu G, Mangiola BA, Patel N, Giot L, Kuang B, Zhang H, Zhong J, Finley RL: A Drosophila Protein-Interaction Map Centered on Cell-Cycle Regulators. Genome Biology. 2004, 5 (12): R96-10.1186/gb-2004-5-12-r96.
Giot L, Bader JS, Brouwer C, Chaudhuri A, Kuang B, Li Y, Hao YL, Ooi CE, Godwin B, Vitols E: A protein interaction map of Drosophila melanogaster. Science. 2003, 302 (5651): 1727-1736. 10.1126/science.1090289.
Formstecher E, Aresta S, Collura V, Hamburger A, Meil A, Trehin A, Reverdy C, Betin V, Maire S, Brun C: Protein interaction mapping: a Drosophila case study. Genome Res. 2005, 15 (3): 376-384. 10.1101/gr.2659105.
Stanyon CA, Finley RL: Progress and potential of Drosophila protein interaction maps. Pharmacogenomics. 2000, 1 (4): 417-431. 10.1517/146224184.108.40.2067.
Mewes HW, Amid C, Arnold R, Frishman D, Guldener U, Mannhaupt G, Munsterkotter M, Pagel P, Strack N, Stumpflen V: MIPS: analysis and annotation of proteins from whole genomes. Nucleic Acids Res. 2004, D41-44. 10.1093/nar/gkh092. 32 Database
Beuming T, Skrabanek L, Niv MY, Mukherjee P, Weinstein H: PDZBase: a protein-protein interaction database for PDZ-domains. Bioinformatics. 2005, 21 (6): 827-828. 10.1093/bioinformatics/bti098.
O'Brien KP, Remm M, Sonnhammer EL: Inparanoid: a comprehensive database of eukaryotic orthologs. Nucleic Acids Res. 2005, D476-480. 33 Database
von Mering C, Jensen LJ, Snel B, Hooper SD, Krupp M, Foglierini M, Jouffre N, Huynen MA, Bork P: STRING: known and predicted protein-protein associations, integrated and transferred across organisms. Nucleic acids research. 2005, D433-437. 33 Database
Lin CY, Chen SH, Cho CS, Chen CL, Lin FK, Lin CH, Chen PY, Lo CZ, Hsiung CA: Fly-DPI: database of protein interactomes for D. melanogaster in the approach of systems biology. BMC bioinformatics. 2006, 7 (Suppl 5): S18-10.1186/1471-2105-7-S5-S18.
The Gene Ontology Consortium: Gene Ontology: Tool for the Unification of Biology. Nature Genetics. 2000, 25 (25–29):
Tomancak P, Beaton A, Weiszmann R, Kwan E, Shu S, Lewis SE, Richards S, Ashburner M, Hartenstein V, Celniker SE: Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biol. 2002, 3 (12): RESEARCH0088-10.1186/gb-2002-3-12-research0088.
Arbeitman MN, Furlong EEM, Imam F, Johnson E, Null BH, Baker BS, Krasnow MA, Scott MP, Davis RW, White KP: Gene Expression During the Life Cycle of Drosophila melanogaster. Science. 2002, 297: 2270-2275. 10.1126/science.1072152.
Hooper SD, Boue S, Krause R, Jensen LJ, Mason CE, Ghanim M, White KP, Furlong EE, Bork P: Identification of tightly regulated groups of genes during Drosophila melanogaster embryogenesis. Mol Syst Biol. 2007, 3: 72-10.1038/msb4100112.
Lee I, Date SV, Adai AT, Marcotte EM: A probabilistic functional network of yeast genes. Science. 2004, 306 (5701): 1555-1558. 10.1126/science.1099511.
Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, Kim IF, Soboleva A, Tomashevsky M, Edgar R: NCBI GEO: mining tens of millions of expression profiles – database and tools update. Nucleic Acids Res. 2007, D760-765. 10.1093/nar/gkl887. 35 Database
Parrish JR, Yu J, Liu G, Hines JA, Chan JE, Mangiola BA, Zhang H, Pacifico S, Fotouhi F, Dirita VJ: A proteome-wide protein interaction map for Campylobacter jejuni. Genome Biol. 2007, 8 (7): R130-10.1186/gb-2007-8-7-r130.
Deng M, Sun F, Chen T: Assessment of the Reliability of Protein-Protein Interactions and Protein Function Prediction. Pacific Symposium on Biocomputing. 2003, 8: 140-151.
Bader JS, Chaudhuri A, Rothberg JM, Chant J: Gaining Confidence in High-Throughput Protein Interaction Networks. Nature Biotechnology. 2004, 22 (1): 78-85. 10.1038/nbt924.
Suthram S, Shlomi T, Ruppin E, Sharan R, Ideker T: A direct comparison of protein interaction confidence assignment schemes. BMC Bioinformatics. 2006, 7: 360-10.1186/1471-2105-7-360.
Ge H, Liu Z, Church GM, Vidal M: Correlation between Transcriptome and Interactome Mapping Data from Saccharomyces cerevisiae. Nature Genetics. 2001, 29 (482–486):
Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T: Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003, 13 (11): 2498-2504. 10.1101/gr.1239303.
We thank Jodi R. Parrish, George G. Roberts III, and Stephen Guest for helpful comments on the manuscript. This work was supported in part through grants from the National Institutes of Health (HG001536) and the National Center for Research Resources (RR18327).
RLF conceptualized the project. JY and RLF designed and built the database. JY performed data analysis. SP and RLF designed and built the interfaces. GL compiled a previous version of the database. RLF and JY wrote the draft. All authors tested the database and interfaces.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article is distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Yu, J., Pacifico, S., Liu, G. et al. DroID: the Drosophila Interactions Database, a comprehensive resource for annotated gene and protein interactions. BMC Genomics 9, 461 (2008). https://doi.org/10.1186/1471-2164-9-461
- Gene Expression Data
- Gene Pair
- Interaction Data
- Confidence Score
- Gene Expression Dataset | 1 | 27 |
The use of e-book readers (e-readers or electronic-readers) has become increasingly widespread. An e-reader should meet two important requirements: adequate legibility and good usability. In our study, we investigated these two requirements of e-reader design. Within the framework of a multifunctional approach, we combined eye tracking with other usability testing methods. We tested five electronic reading devices and one classic paper book. The results suggested that e-readers with e-ink technology provided legibility that was comparable to classic paper books. However, our study also showed that the current e-reader generation has large deficits with respect to usability. Users were unable to use e-readers intuitively and without problems. We found significant differences between the different brands of e-book readers. Interestingly, we found dissociations between objective eye-tracking data and subjective user data, stressing the importance of multi-method approaches.
Practitioner’s Take Away
The following list summarizes take-aways that practitioners can get from this article:
- E-readers are not yet accepted as a replacement for a classic paper book.
- One of the primary reasons for this is poor usability.
- Another reason is that users expect more functions from an electronic reading device. This problem illustrates a challenge for future e-reader development; more functions should be integrated in such a way that they are usable intuitively.
- The legibility of the current e-reader generation is good; e-ink technology enables a reading process that is very similar to the reading process for classic paper books.
- For people with visual impairment, e-readers have the advantage of providing an opportunity to adjust the font size.
- Differences between subjective interview data and objective eye-movement data underline the importance of combining different methods in usability testing.
An e-book (electronic book) is an electronic version of a printed book. E-books can be read using an e-book reader (or e-reader) which is a digital device that displays electronic text. E-books can also be read on a personal computer (PC), mobile phone, or personal digital assistant (PDA); however, a specialized software application (e.g., Stanza) is necessary to read an e-book on a PC, mobile phone, or PDA. Our study focused on the e-book reader device, not the e-reader software for PCs, mobile phones, or PDAs. E-book reader devices are becoming more and more widely used and are subject to rapid development. Figure 1 shows the mutual relationships between the devices and the content.
Figure 1. E-books can be read either on a specialized reading device (e-book reader/e-reader) or on any general purpose device such as PC, PDA, or mobile phone that allow e-book reading through a software application.
The idea of using a digital device to read books is not new; it has existed for almost as long as interactive (end-user) computing (Golovchinsky, 2008). The first devices for e-reading were prototyped in the late 1960s by Alan Kay and later embodied in several generations of devices (Apple Newton, the Rocket eBook, and the Amazon Kindle). These generations of devices have been driven by innovations in device technology (e.g., displays, batteries, CPUs) rather than through evolving user needs.
Around the year 2000, there was large interest in reading content on specialized e-reading devices. Several companies (e.g., Franklin, Hanlin, Hiebook, Rocket eBook) released specialized e-reading devices (e-readers). At the same time, Microsoft and Adobe released software-only reading tools for PCs. During this period, online stores for purchasing titles (e-books) were created. The latest generation of e-readers includes, among others, the Sony Reader, the Amazon Kindle, and the iRex iLiad. This generation of e-readers is equipped with a different display technology. The active LCD displays have been replaced by “e-ink” technology that reduces the power consumption, thereby increasing the battery life and reducing the weight of the device. Last year’s market shows an expanding interest in e-readers. It seems that the e-book (and the e-reader) is the most important development in the world of literature since the Gutenberg press (Rao, 2003). Consequently, there is a great need for empirical research on the usability of e-readers, examining the extent to which the devices are capable of replacing the classic paper book and how their additional capabilities should be implemented so that users can use them intuitively.
Usability of E-Readers
Whether e-readers will, at least partly, supersede the classic book largely depends on good usability. To define this variable we have applied ISO 9241-11, which provides the most widely accepted definition. This standard describes how these qualities can be defined as part of a quality system and specifies that usability is good when a tool can be used effectively and efficiently and when the user is satisfied.
Previous studies found that users have problems in the handling of the current e-reader generation (Lam, P., Lam, S.L., Lam, J., & McNaught, 2009; McDowell & Twal, 2009; Thompson, 2009). Lam et al. found that students judged the enjoyment of the e-book reading process as low. Thompson compared six e-readers and concluded that the Sony Digital Reader was the most user friendly solution for consumer level use, but she added that studying on an electronic reader is less attractive than studying text using a classic paper book. In McDowell and Twal’s study, students used the Kindle Reader during one semester. They found that the majority of the students were not satisfied with the navigation.
In a self-experiment, Jakob Nielsen (2009) tested the new Amazon e-reader Kindle 2. He read one half of a book on the Kindle 2 and read the classic paper book for the other half. He concluded that the Kindle 2 was well suited for linear texts. He found no difference in the reading speed between the Kindle 2 and the classic paper book. He noted problems in navigation and criticized the navigation of Kindle 2 as non-intuitive; therefore, reading non-linear texts (like newspapers) is not comfortable. Nielsen saw advantages in the use of e-readers, like less weight to carry around or big fonts, and saw benefits in equal-to-print legibility and multidevice integration (2009).
Apart from the studies mentioned above, there is little experimental work on the usability of e-readers. One reason could be that e-reader companies do not publish the results of their own laboratory studies; another reason could be that the e-reader as a commercial device is a relatively new phenomenon. However, despite the few experimental studies on the usability of e-readers, there is a great potential for improving the usability of the next e-reader generation (Siegenthaler, Wurtz, & Groner, 2010).
The use of e-books has potential in different areas like the fields of education or publication of newspapers and literature. In comparison to classic paper books, e-books have some essential advantages. E-books can be easily updated for correcting errors and adding information. Additionally, full-text search functions help users quickly find a passage or keyword in a book. E-books can be annotated without compromising the original work. They can be hyper-linked for easier access to additional information, and they may allow the option for the addition of multimedia like still images, moving images, and sound. And last, but not least, they make reading accessible to persons with disabilities, because text can be re-sized for the visually impaired and be read aloud using speech synthesis. However, for the satisfactory use of all these functions, good usability is necessary.
Studies about e-reading based on reading processes are rare. There are several studies that compare reading between classic paper books and computer screens (e.g., Creed, Dennis, & Newstead, 1987; Gould & Grischkowsky, 1984). Early studies found many qualitative and quantitative differences between paper and screen reading (for a review, see Dillon, 1992). In the meantime, digital displays have become more sophisticated and are increasingly present in everyday life. A recent study (Van de Velde & von Grünau, 2003) suggested that the differences between media (e.g., print vs. screen) have decreased but emphasized that reading behavior also depends on moderating variables like computer experience or the task to be performed. It is not a given that reading on an e-reader is the same as reading a classic paper book or reading on a computer screen. The size of the device, the scalable font size, and the new e-ink technology make it different in comparison to reading a classic paper book or to reading from a screen. However, the robustness against bright ambient light provided by e-paper technology and the improvement in screen quality make reading from these devices increasingly acceptable, and the advantages in terms of low weight, small dimensions, and freedom of movement are evident.
Colors can serve as a factor for information transmission. Traditional print books are most frequently monochromatic, while e-book technology has virtually no limits in that regard, although currently available e-ink technology is mainly monochromatic. Future e-readers could offer different color combinations of text and background. Some studies showed that the new e-ink technology can have a positive effect on the reading process (Siegenthaler, Wurtz, & Groner, 2010). However, other results showed that usability problems can negatively affect reading with e-reading devices such as e-readers and palm handhelds (Marshall & Ruotolo, 2002; Siegenthaler, Wurtz, & Groner, 2010). Additional research investigating the reading process with e-reading devices is necessary.
In this study, we compared the usability and legibility of different e-reading devices to the classic paper book.
The following sections discuss the participants, apparatus, devices, and procedures used in this study.
Ten participants, 5 male and 5 female, were tested. Their ages ranged from 16 to 71 years (mean = 42 years). They were selected to represent the full range of possible e-reader users (with respect to subjective media experience, education, and subjective reading time per week). The average subjective media experience was 3.7 (scale from 1 to 6), and the average subjective reading time per week was 6.25 hours. Four participants had a high-school degree, two participants had a university degree, and four participants had completed an apprenticeship. All participants reported normal or corrected to normal vision and had no previous experience with e-readers. Participants gave written informed consent prior to participation. The study was performed in accordance with the latest declaration of Helsinki (WMA: Ethical Principles for Medical Research Involving Human Subjects, 2008).
Eye movements were recorded with an infrared video eye-tracking device (Tobii X120 Eye Tracker, Tobii Technology, Danderyd, Sweden). The system has a sampling rate of 120 Hz and a spatial accuracy of 0.5 degrees of visual angle. For each participant and trial, the system was recalibrated to assure best possible accuracy. Participants were allowed to move their head within a range of approximately 30 × 20 × 30 cm.
Five e-readers and one classic paper book were chosen for this study. The selection criterion was their availability in the Swiss market in June 2009. The five e-readers are listed below:
- IRex Iliad
- Sony PRS-505
- BeBook
- Ectaco jetBook®
- Bookeen Cybook Gen
Figure 2 shows the e-readers used in the study.
Figure 2. The e-readers used in this study: (a) IRex Iliad, (b) Sony PRS-505, (c) BeBook, (d) Ectaco jetBook, and (e) Bookeen Cybook Gen.
The text material used in this study was the first chapter of a German-language novel (Querschläger, Silvia Roth, 2008). Every participant read the first 12 pages of the novel for legibility assessment.
When the participants arrived at the laboratory, they were instructed about the aim and course of the experiment. The experiment started with a first legibility test. Participants had to read a segment of the text on all reading devices (five e-readers and one classic paper book) while their eye movements were recorded. The reading devices were presented in randomized order, and the experimenter adjusted the font to the size most convenient for the participant (this was, of course, not possible for the classic paper book). Prior to each reading trial, a 9-point calibration procedure was performed to ensure the best possible measurement quality. Immediately after the first legibility test, participants were interviewed, and they had to rate and give subjective preferences to each reading device.
Following the first legibility test, a two-hour usability-based task session took place. Participants had to perform small tasks on each of the six devices and rate the devices on different scales. Each participant had to perform the tasks and fill out separate questionnaires consecutively for each reading device. In this phase, participants worked individually because we wanted to create a general situation that was close to reality. The sequence of e-readers was randomized to control for order effects. Participants were allowed to use the user manuals and other documentation supplied by the manufacturers in the original device package. Participants had to complete the following five usability tasks:
- Open a book.
- Increase font size.
- Open a text in horizontal format.
- Open an audio-file.
- Open a picture.
For each task, participants had to answer whether they managed the task successfully or not. The time it took to complete the task was not noted because participants worked individually. After the small usability tasks were completed, participants were asked to rate (on a 6-point Likert scale) the devices on the following criteria: design, navigation, functionality, and handiness.
After completing the usability tasks and the ratings, participants filled out a questionnaire about usability and acceptance of the device based on Huang, Wei, Yu, and Kuo’s (2006) questionnaire.
After the usability test, participants had a short break where refreshments were supplied. After the break, a second legibility test was administered. This second test was identical to the first legibility test with the exception that different text segments and different orders of e-reading devices were employed. Finally, participants were interviewed and asked to give subjective judgments in the form of ratings on a Likert scale with numbers matching the grading system used by Swiss schools, ranging from 1 (very bad), 2 (bad), 3 (fail), 4 (pass), 5 (good), to 6 (very good).
The following sections discuss the reading performance, results of usability ratings, and comparison between subjective user data and objective eye-movement data.
We analyzed reading performance and eye-movement data (for a detailed eye-movement analysis, see Siegenthaler, Wurtz & Groner, 2010). Analysis of reading speed was based on the time codes of the video recordings. The start and stop times of reading and page-turns were coded for later statistical analysis.
Statistical analysis was performed using F-statistics based on a repeated measures ANOVA with the within factor “reading device” (iRex, Bookeen, BeBook, Sony, Ectaco, classic paper book). In cases of unequal variances within the groups, Friedman-tests (using χ2-statistics) were employed.
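To illustrate the kind of test used here, the following minimal Python sketch (not the authors' actual analysis script) runs a Friedman test with SciPy. The per-participant values are invented for illustration, and three hypothetical devices stand in for the six used in the study.

from scipy.stats import friedmanchisquare

# Hypothetical mean fixation durations (ms) for five participants on three devices;
# each list holds one value per participant, in the same participant order.
irex  = [190, 200, 185, 210, 195]
sony  = [205, 215, 200, 220, 210]
paper = [230, 240, 225, 245, 235]

# friedmanchisquare returns the chi-square statistic and the p-value.
statistic, p_value = friedmanchisquare(irex, sony, paper)
print(f"Friedman chi-square = {statistic:.3f}, p = {p_value:.4f}")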
No significant effects were found for total reading duration, F(5,40) = 0.857, p = .518. The time needed to read the text did not differ between the different devices.
We measured reading speed by counting the number of words read per minute. No significant effect was found for reading speed between the different devices, F(5,40) = 1.113, p = .369.
Total page-turn duration
The time needed for each page-turn (where no reading takes place) was assessed, i.e., the period of time between the last reading fixation on the bottom of a page and the first reading fixation on the top of the next page. By summing up those durations, we measured the total time needed for page-turns. The means of total page-turn durations differed significantly between the reading devices, χ2 (5) = 22.016, p < .01.
Proportion of time spent for page-turns
The proportion of time spent for turning the pages, i.e., the ratio of the time needed to turn the pages to the total reading time, is shown in Table 1. The Friedman test revealed significant differences between the e-readers, χ2(5) = 19.857, p < .01.
Table 1. Mean Proportion (in %) and Standard Deviations (SD) of Time Spent for Page-Turns
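As a rough illustration of how measures like reading speed and the page-turn proportion reported above can be derived from coded video time codes, the following Python sketch uses made-up event timings; the event format and word count are assumptions, not the study's actual coding scheme.

# Hypothetical coded events for one participant on one device:
# (event type, start time in seconds, end time in seconds).
events = [
    ("read", 0.0, 55.2),    # reading page 1
    ("turn", 55.2, 58.9),   # page-turn, no reading fixations
    ("read", 58.9, 112.4),  # reading page 2
    ("turn", 112.4, 116.0),
    ("read", 116.0, 170.5),
]
words_read = 640  # hypothetical length of the text segment

reading_time = sum(end - start for kind, start, end in events if kind == "read")
turn_time = sum(end - start for kind, start, end in events if kind == "turn")
total_time = reading_time + turn_time

words_per_minute = words_read / (total_time / 60.0)
turn_proportion = 100.0 * turn_time / total_time  # % of total time spent on page-turns

print(f"Reading speed: {words_per_minute:.1f} words per minute")
print(f"Page-turn proportion: {turn_proportion:.1f} %")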
Mean fixation duration
Visual fixation duration is a well established indicator of the difficulty of perceptual and/or cognitive processing (Just & Carpenter, 1980; Menz & Groner, 1982). The mean duration of visual fixations differed significantly between the reading devices, χ2(5) = 25.063, p < .01. Figure 3 shows differences in means and standard deviations of the fixation durations.
Figure 3. Mean fixation durations (ms) for reading on the different devices, with bars indicating one standard deviation. A single asterisk indicates a significant difference of p < .05 to the device with the shortest mean fixation duration (iRex); double asterisks indicate a significant difference of p < .01.
Number of letters per fixation
The mean number of letters read per fixation was significantly affected by the reading device, χ2(5) = 14.460, p < .05. In the classic paper book, one fixation covered the largest number of letters. Table 2 shows the number of letters per fixation depending on the reading device. Note that the paper book had the smallest font.
Table 2. Mean Number of Letters per Fixation and Standard Deviations (SD)
Results of the Usability Ratings
The following sections describe the usability analysis based on the usability tasks and the ratings of the participants after they had tried to solve the tasks. Statistical analysis was performed using a Friedman-test with the reading devices as a within factor (iRex, Bookeen, BeBook, Sony, Ectaco, classic paper book).
Success rate of the usability tasks
After participants had solved the usability tasks, they had to report whether they solved the task successfully or not. Table 3 shows the success rates of the five usability tasks.
Table 3. Percentage of Success Rates for the Five Usability Tasks
Subjective Usability ratings
The question was, “How do you like the design?” The ratings (on a 1–6 Likert scale) of the design of the reading devices were significantly different between devices, χ2(5) = 20.388, p < .01. Table 4 shows the results.
Table 4. Mean Rating and Standard Deviations (SD) for Design on a Likert Scale from 1 (Very Bad) to 6 (Very Good)
Participants judged the navigation on a Likert scale from 1 (very bad) to 6 (very good). The question was, “How do you judge the navigation?” We found significant differences between the devices, χ2(5) = 25.064, p < .01. As a control question, we asked, “How are you getting along with the reading device?” and replicated the above result, χ2(5) = 29.411, p < .01. Table 5 shows the results and ranks.
Table 5. Mean Ratings and Standard Deviations (SD) for Navigation, Likert Scale from 1 (Very Bad) to 6 (Very Good)
The question, “How do you judge the different functions/applications (like reading, listening to music, storage of pictures…) of the reading device?” was judged on a Likert scale from 1 (very bad) to 6 (very good). We found significant differences in the functionality of the reading devices, χ2(5) = 19.265, p < .01. Table 6 shows results and ranks for functionality.
Table 6. Mean Rating and Standard Deviation (SD) of Functionality Ratings on a Likert Scale Ranging from 1 (Very Bad) to 6 (Very Good)
The question was, “How handy do you rate the reading device?” Participants judged handiness on a Likert-scale ranging from 1 (very bad) to 6 (very good). We found significant differences in handiness of the reading devices, χ2(5) = 15.111, p < .05. Table 7 shows the results and ranks.
Table 7. Mean Rating and Standard Deviations (SD) for Handiness on a Likert Scale Ranging from 1 (Very Bad) to 6 (Very Good)
Usability ratings based on a questionnaire
After participants performed the requested tasks in the usability test, they filled out the usability questionnaire by Huang et al. (2006). The questionnaire originally used a 10-point Likert scale that, for comparison with the other rating scales, was transformed into a 6-point Likert scale. The usability ratings differed significantly between the different reading devices, χ2(5) = 26.667, p < .01. Table 8 shows the mean usability ratings.
Table 8. Mean Usability Ratings (Based on the Questionnaire by Huang et al. 2006) and Standard Deviations (SD) Transformed to a Likert Scale Ranging from 1 (Very Bad) to 6 (Very Good)
Comparison Between Subjective User Data and Objective Eye Movement Data
In the multifunctional approach employed in our analysis, different usability methods were combined. We found a dissociation, which is a discrepant result between perception (eye tracking) and evaluation (interviews) in the second legibility test. The results in the first legibility test showed a significant correlation between preference and legibility (r = .356, p < .01); the second legibility test showed a dissociation (r = –.002) between preference and legibility. In the first session, 60% of the participants preferred the iRex device to read with; however, after having used all devices, only 30% still preferred it. Figure 4 shows the comparison between subjective and objective measures for the two tests, a similar distribution between interview data and eye-movement data in Test 1 (left side of Figure 4) and a dissociation in Test 2 (right side).
Figure 4. The two diagrams in the upper half show the percentage of “favorite to read with” as judged by participants. The lower diagrams show the eye-movement data as the percentage of participants who had the shortest visual fixations when reading on the device compared to the other devices.
This result suggests that subjective appraisal by the subject is prone to bias: When asking users to rate the legibility, their judgment is biased by their overall impression of the device, including usability. Test setup and procedure (like order of tasks and questions) influence the participant’s appraisal.
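As a minimal sketch of how a preference–legibility correlation of the kind reported above could be computed, the following Python fragment correlates hypothetical preference ratings with hypothetical mean fixation durations using SciPy; the pairing and the numbers are assumptions for illustration only, not the study's data.

from scipy.stats import pearsonr

# Hypothetical per-device-per-participant values: a 1-6 preference rating and
# the corresponding mean fixation duration in milliseconds.
preference_rating = [6, 5, 4, 5, 3, 2, 4, 6, 3, 5]
mean_fixation_ms = [185, 195, 210, 200, 225, 240, 215, 180, 230, 190]

r, p = pearsonr(preference_rating, mean_fixation_ms)
# A negative r would indicate that shorter fixations (better legibility)
# go together with higher preference ratings.
print(f"r = {r:.3f}, p = {p:.4f}")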
For consumers who want to buy an e-reader, we recommend the following:
- Think about the purpose for which you want to use an e-reader.
- Choose the e-reader model depending on the required functions.
- Before you buy an e-reader, test it and check whether you can handle the device.
For practitioners doing usability tests, we recommend the following:
- Do not rely exclusively on subjective interview data.
- Use a combination of objective and subjective data.
- Carefully analyze the differences between subjective and objective data.
- If you do a usability test in an area where reading is involved, apply eye tracking.
Our study shows that the legibility of the current e-reader generation is comparable to that of classic paper books. E-ink technology enables a reading process that is very similar to the reading process of classic paper books. In our experiment, participants had the possibility of choosing the font size they liked. We have identified that older people in particular prefer a big font size, and the eye-tracking data show that participants had significantly shorter fixations on the e-readers compared to the classic paper book, which is an indication of better legibility. As a conclusion, we can say that in some situations, e-readers, with the option of changing font size, can have better legibility than classic paper books. The option of adjusting the font size could also enlarge the user group. For instance, people with low vision may benefit from the possibility of increasing font size on e-readers.
It should also be mentioned that the fact that participants had the possibility to adjust font size probably influenced eye-movement data. Because reading distance was constant and font size in the classic paper book was very small, some participants (especially older participants) reported problems with reading. This may have contributed to longer fixations in the classic book. Smaller font size may also be related to the number of letters per fixation. A single fixation covers more letters when the font size is smaller. One of the disadvantages when reading on an e-reader is the time spent with page-turns. Page-turns take a longer time with current e-ink technology because refreshing the display takes a few seconds. Another reason for the time loss due to page turning is the larger font size. Increasing font size inevitably results in a higher number of pages and, as a consequence, more page-turns. Although larger font size can negatively affect some eye-movement parameters, the possibility of adjusting font size will lead to better usability and also to better legibility.
The results of this study showed a remarkable deficit in the usability of the current e-reader generation. Participants had great problems using the e-readers. This is a crucial problem because perceived usability can influence subjective legibility ratings. In other words, if a person is not able to use a reading device efficiently, then he or she does not like reading with it. Future e-reader generations should have a more intuitive design. Users expect more from such a device than only displaying text. Further, e-reader generations may incorporate additional functionality like wireless local area network (WLAN) compatibility, the possibility to highlight text sections, a comment function, or a fast search function. To integrate such functions and make them intuitively usable will be a challenge for e-reader developers.
The finding that the usability of a device has an effect on legibility judgments points to a methodological problem. In the first legibility test, where participants had no experience with the e-reading devices, the participants’ acceptance ratings were in close agreement with the efficiency of their eye-movement data. Thus, participants were able to judge which device was best for reading. However, after a period of self-regulated handling of the device, during the second legibility test, eye-movement data and the interview data showed a discrepancy. Between the first legibility test and the second legibility test, the participants had the opportunity to use the devices and had to give their usability ratings. In this session, participants encountered several problems during the interaction with the respective reading devices. They became disappointed or frustrated when they were not able to perform the exercises successfully. This outcome influenced their judgments in the second test session. After the second test session, the participants gave the classic paper book the highest legibility rating. Objective eye-movement data remained constant, because eye-movement behavior is predominantly controlled by automatic and unconscious processes, which are not likely to be changed or manipulated deliberately and are only partially accessible to awareness (Albert & Tedesco, 2010). This discrepancy between subjective rating data and objective performance measures is an important cue for the interpretation of the results; it shows that the usability of a reading device is at least as important as its legibility.
- Albert, W., & Tedesco, D. (2010). Reliability of self-reported awareness measures based on eye tracking. Journal of Usability Studies, 5(2), 50-64.
- Creed, A., Dennis, I., & Newstead, S. (1987). Proof-reading on VDUs. Behaviour & Information Technology, 6(1), 3-13.
- Dillon, A. (1992). Reading from the paper versus screens: a critical review of the empirical literature. Ergonomics, 35(10), 1297–1326.
- Golovchinsky, G. (2008). Reading in the office. In Proceeding of the 2008 ACM Workshop on Research Advances in Large Digital Book Repositories (pp. 21-24). New York, NY. ACM Press.
- Gould, J. D., & Grischkowsky, N. (1984). Doing the same work with hard copy and with cathode-ray tube (CRT) computer terminals. Human Factors, 26(3), 323-337.
- Huang, S-M., Wei, C-W., Yu, P-T., & Kuo, T-Y. (2006). An empirical investigation on learners’ acceptance of e-learning for public unemployment vocational training. International Journal of Innovation and Learning, 3 (2), 174-185.
- Just, M.A., & Carpenter, P.A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87, 329-354.
- Lam P., Lam S.L., Lam J., & McNaught C. (2009). Usability and usefulness of eBooks on PPCs: How students’ opinions vary over time. Australasian Journal of Educational Technology, 25(1), 30-44.
- Marshall, C. C., & Ruotolo, C. (2002). Reading-in-the-small: a study of reading on small form factor devices, Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries (pp. 58-64). San Jose, CA. ACM Press.
- McDowell, M., & Twal, R. (2009, September 17). Integrating Amazon Kindle: A Seton Hall University pilot program. Retrieved on September 17, 2009, from Educause Resources http://www.educause.edu/Resources/IntegratingAmazonKindleASetonH/163808 .
- Menz, Ch., & Groner R. (1982).The analysis of some componential skills of reading acquisition. In R. Groner & P. Fraisse (Eds.) Cognition and Eye Movements (pp. 169-178). Amsterdam: Elsevier North Holland.
- Nielsen, J. (2009, March 9). Kindle 2 usability review. Retrieved on November 16, 2009, from http://www.useit.com/alertbox/kindle-usability-review.html
- Rao, S. S. (2003). Electronic books: A review and evaluation. Library High Tech, 27(1), 85-93.
- Siegenthaler, E., Wurtz, P., & Groner, R. (2010). Comparing reading processes on e-ink displays and printed paper (submitted for publication).
- Thompson, C. (2009). Digital readers: Fact and fiction. Proceedings of the IATUL Conference in Leuven. Available at: http://www.iatul.org/doclibrary/public/Conf_Proceedings/2009/Thompson-text.pdf
- Van de Velde, C., & von Grünau, M. (2003). Tracking eye movements while reading: Printing press versus the cathode ray tube. Perception 32 ECVP Abstract Supplement. | 1 | 2 |
Many Australians are fearful of catastrophic human-caused climate change because this is what the state-sponsored propaganda on the Australian Broadcasting Corporation (the ABC) tells us.
In Australia, we mostly live near the sea. All along our coastline there is evidence of sea level fall, yes fall.*
Where is the evidence for rising sea levels?
Will you see how much sea levels have risen when you watch the fireworks over the Opera House in Sydney Harbour this New Year’s Eve — or will you see evidence of sea level fall?
It is that time of year when family and friends visit me at the beach. My niece told me just before Christmas that she had read so many of the comments at the YouTube thread following my first short film ‘Beige Reef’. She was surprised at how many comments there were — an awful lot she commented.
When I asked her what she thought of the film, she told me that she had not actually watched the film.
At that morning tea, under a shelter at Coolum Beach, none of my nieces or nephews or older brother could admit to having watched the film.
It is all of 12 minutes long.
This first film involved me wading into, and diving below, waters that my sister-in-law some weeks earlier had indicated put me at risk of a shark attack. But still she has not actually watched the film.
I know that there is fear within the various communities within which I exist, of at least three things: sharks, catastrophic human-caused global warming — and that I could lead some of them down the path of global warming scepticism and from this they could end up pariahs.
The best evidence is that global sea level has fallen by at least 2 metres since the Holocene high stand about 4,000 BC; that is, about 6,000 years ago, a time known as the Minoan warm period.
The evidence in rocks and cliff faces all along the Australian east coast is that sea level was about 1m higher in the Roman warm period (year 0), and about 0.5m higher in the Medieval warm period (1,000 AD).
Conversely, it is believed the sea level was lower in the cold periods of 500 AD (Dark Ages) and the Little Ice Age (1,650 AD), maybe both 0.2 to 0.5 metres below today’s level. This last low sea level is particularly important, because it is from this base that sea levels are perhaps still rising back to average Holocene levels. But are they really?
When I go kayaking, walk along the sea shore, and send my drone Skido up into the sky to look down and take pictures of things like the marine potholes that feature at the top of this blog post, I see evidence for a sea shore that is receding.
The sea begins at the land’s edge. Where the sea begins is the ‘sea level’.
When I stand beside the circular pothole that you can see in the centre of the picture accompanying this blog post (… scroll to the very top), I am standing on a wave-cut platform of sandstone bedrock with rectangular fractures and red iron oxide colouring.
Potholes are formed by the relentless grinding of harder rocks — perhaps granite — caught in a depression in this softer sandstone. Pounding surf causes the harder rocks to swirl — round and round — grinding down.
The grinding that created these potholes could only have happened when sea levels were higher, when this platform was between the high and low tide marks.
I took the picture on one of the highest tides this last year, in 2019, a year that is nearly over.
Sea levels must have been higher in the past, because even on the highest tides this last year the waves never reached this far.
The ABC may be concerned about rising sea levels, but where is your evidence for it? Are you brave enough, do you care enough, can you find the time enough, to think through some of these issues this next year: in 2020?
We are all entitled to our own opinions, but not our own facts.
* This is intended as the first of a series of blog posts on sea level change.
Wonderful drone photo at the top of the post. Scale, north point ?
Pothole grinding is equally likely caused by some sandstone shards that were more heavily infused with iron oxides in the pore spaces of the bedrock. Commonly called “ironstone”, these patches are scattered throughout the bedrock from overlying palaeoleaching of extrusive volcanics – mostly. Basalt shards may have also been the grinders.
The fracture sets are NE-NW jointing with superimposed E-W fractures, perhaps from later tectonics. To determine that, location is helpful. There is a suggestion in the photos of the tree lines following the NE joint line as that is likely where the soil accumulated most heavily. I’ve seen growing eucalypts root-anchored in such joints on bare rock exposures – one such tree had actually shunted a large sandstone block aside as its trunk grew in girth, with the slickensides from the moved block quite evident.
All hard evidence of sea level drop from the Holocene highstand. The mean rate of drop could perhaps be estimated by examining the remnant tree line. Soil (even poor sandy soil derived from sandstone weathering) cannot accumulate under constant tidal washout of bare rock, nor can that plant species survive if the soil is constantly washed away.
Anne Carter says
Fear of becoming a sceptic pariah…never a truer word said
Allan Cox says
The youth of today, as indicated by the lack of interest by your young relatives in your work, seem to have little or no idea of the real world.
It’s so sad, and it bothers me that their future is being directed by their elders upon whom they trust to do the right thing; but, sadly not. The Pope’s call to action in May 2020 has all the hallmarks of a giant propaganda exercise to further entrench the ignorance of the youngsters of today. What hope have they got to learn the truth?
Mike Thurn says
Thanks Jennifer. I did watch your lovely film on Beige Reef, and as far as I’m concerned you most certainly proved that the reef is looking exceedingly healthy.
I’m a bit of a reef freak. In fact I’ve been to many different reefs, including Fiji (hundreds), Cook Islands, Solomon Islands, Bali (various), and a multitude of others, including the Red Sea, Hawaii, etc.
I’ve dived and snorkeled various locations along the GBR, and rarely have I witnessed extensive coral bleaching. I’ve seen bleaching, but more often than not I have seen extraordinarily healthy corals alongside, or in close proximity.
I struggle to understand what all the fuss is about, as I’ve been snorkeling and diving reefs for close on 43 years, and although bleaching definitely occurs, the corals eventually recover. Most of the damage I’ve witnessed has had far more to do with cyclones!
AGW is such a con job, and in reality has nothing whatsoever to do with Climate. The UN and its IPCC, and the various NGOs pushing this nonsense are getting desperate, and I’m getting that feeling that it’s going to end in tears soon. I believe the demise of the climate Hoax was cast in stone when Trump made his decision to exit Paris. China will be forced to follow, and then India. This represents close to half the world’s population.
Keep up the good work Jennifer.
Frances Lilian Wellington says
Ta Jennifer. I reckon it’s just as well nature/climate/weather has taken its own course, otherwise we would be contending with dinosaurs!
I look forward to each point you make. I appreciate your leadership and education.
People are scared of being unpopular and the aggressive spiteful attempts at personal attack when the (most popular) belief is challenged. My own mum was a biologist and librarian… she taught me to question everything.
David blackall says
Thanks for your excellent work. I have shown your work to university students for 10 years or so. They only watch or read if I made them. It is frustrating that folks don’t watch films we have made, papers we have written. Just a thought about a typo: “I know that there is fear within the varies communities within which I exist, of at least three” should be various.
spangled drongo says
Thanks Jen. Yes, sea levels are falling. The BoM’s actual records show current Mean Sea Level at Fort Denison, Sydney Harbour, to be over 4 inches [113mm] LOWER than their first recording 105 years ago:
This is where the world’s oceans are at their widest and this recorded MSL fall is supported by the increase in area of atolls throughout the Pacific.
Brian Johnston says
When AGW is proved to be a scam, and I am sure it will be, I would expect the proponents of the scam to be sued for the billions/trillions wasted.
Scientists will have to be struck off.
Fraud is surely involved.
John Tillman says
The Minoan Warm Period was about 3300 years ago. The high stand 6000 years ago was during the Holocene Climatic Optimum, which ended around 5200 years ago.
Pamela Matlack-Klein says
Keep up the good work, Jen! Your Beige Reef video was excellent and told a compelling story. It is too bad the close-minded dolts in the media and politics refuse to accept the truth of things.
When I was a grad student on the east coast of the USA back in the early eighties, all the talk was about rising sea level causing “barrier island rollover.” Those pushing this theory based their careers on this erroneous idea. And I admit it was a seductive theory until I really looked into it deeply. Now that these guys are at the ends of their careers and lives they continue to hold onto the same lame theory. While I feel sad for their plight, I also recognize that they brought this on themselves by refusing to look deeper. Had they really made an effort to study the coast they would have soon realized that subsidence along the east coast of North America mimics RSL. In Florida, where I studied, the so-called barrier islands are nothing of the sort: they are mainland beaches cut off by the building of the Intracoastal Waterway in the 1920s. I discovered this while working with aerial photos of the entire coastline. If one is willing to look and observe, the truth of what is going on often reveals itself.
Today I work as an Independent Researcher based in northern Portugal and owe nothing to anyone so can freely speak my mind without fear of losing funding or being fired. The CAGW clowns have built themselves a huge sand castle and I think the tide is now coming in, hopefully a King tide! It will undermine the foundations of their foolish hypothesis about CO2 driving the climate and many careers will end up stranded on the berm when the water recedes….
The good thing about the Australian continent is that in the last 10,000 years it has been relatively stable compared to Europe and North America, which were both extensively covered by very thick ice sheets that depressed the land surface, which is now slowly springing back.
It makes these more recent sea level movements (since the end of the last ice age ca. 12,000 years ago) much easier to see in the coastal morphology.
ed smith says
Thank you Jennifer, another well presented article !
I do fear for the younger generation though who now actually know nothing except for the lies being spread by almost every media outlet, TV and pop personalities who are their gods unfortunately, and even the religious icons now getting involved in things they know nothing about.
I also fear for the parents of the younger generation, who should know better. But then I reflect on their education: when my children were in high school some 10 years ago, I can’t remember a time when they were taught to challenge and do research on topics for their science projects, as we were taught to do when I went to school in the 60’s and studied chemistry and biology in the 70’s. We did our own measurements for all our own projects and experiments, reported the errors and stats on them, and made our inferences and conclusions around that data – always challenging the norm and making sure we reported honestly and openly all of our raw data along with the reports.
Now they just seem to listen to the extremists and follow like sheep, no questions asked. Teachers (or should I say facilitators?) alike. When I talk to young people about AGW they all sound like robots programmed from the same algorithms. SCARY!
I believe it all started with dumbing down the education system and taking away the ability to question and think: “This doesn’t sound right – is there something missing or incorrect in what I am hearing and seeing?” That is the question! And ‘Why Is It So?’ (Professor Julius Sumner Miller, one of my childhood icons whom I have never forgotten), and also the Summer Science School series with Professor Harry Messell.
These guys knew how to think, how to teach thinking and how to present facts, physics and THINKING! Sadly this is not taught in universities or schools anymore. That really saddens me.
Except for a few of us left, and especially you Jennifer. Keep up the great work.
PS: I loved the vid. One day it will be part of the great hoax exposure, but I think it will take a long time yet to get to critical mass proportions. However, I do see the tide turning, so let’s keep up the good fight and I will keep plugging away whenever I can in my circles.
It’s sad and depressing that your own relatives won’t watch (or at least haven’t watched) your film due to fear of, well, no longer being afraid.
I’ve always found it confusing and unnatural. When people think something bad is going to happen, aren’t they normally happy and relieved when there’s evidence that the bad thing won’t happen? Yet in the global warming/climate change arena, believers viciously attack anyone who brings good news that the thing they allegedly fear so much is not going to occur.
Is there any chance you could arrange a family get-together for the purpose of watching your film? Perhaps you could allay some of their fears.
Jennifer, even their clueless ABC get it right occasionally, and in this Catalyst program they tell us that just 4,000 years ago sea levels on our east coast were about 1.5 metres higher than today. Here’s their quote from the Catalyst program. This is north Sydney, where the Aboriginal remains were found. Here’s the link.
Dr Macdonald: “The date came back at about 4000 years ago, which was quite spectacular; we were very surprised.
Narration: 4000 years ago, when Narrabeen Man was wandering around this area, the sea levels were up to 1.5 metres higher than they are today.
Paul: So that spit would have been much narrower. The water levels in the Narrabeen lagoon would also have been higher and it would have acted like a saline estuary”.
Jennifer here is more on Spangled Drongo’s link to BOM’s MSL data at Fort Denison.
Here Andrew Bolt talks to Daniel Fitzhenry about the 1914 to 2019 BOM MSL data and this is the covering story below the video.
“Hydrographic Surveyor of NSW Australia Daniel Fitzhenry says data recorded by the Bureau of Meteorology at Fort Denison in the Sydney Harbour is “more accurate than satellite” on sea levels”
Here’s the video link from the Bolt Report. I hope Jennifer has the time to watch it. All the best for 2020 Jennifer and let’s hope somebody in govt starts to wake up to their silly mitigation fra-d and con trick. IMO Craig Kelly is the best chance to make a difference, if Jennifer has the time to even carry out a phone interview and try to swap ideas with Craig?
Craig seems to be well up on the science and perhaps an interview for Jennifer with Alan Jones or Bolt or Credlin or Murray on Sky news could be arranged? Here’s the Fort Denison link.
Sea level was how I first discovered the sceptics’ arguments, via the late, great John Daly’s site.
I am forever grateful for my scepticism of this scam. It truly is like walking around and seeing so many people wearing the Emperor’s New Clothes. Scary, yes and sometimes a little depressing, but I wouldn’t swap my clear sight on the issue for anything.
I grew up in WA where up and down the coast one sees stranded wave cut platforms, evidence of higher sea levels in the very recent (historical times, less than 10,000 years) past – very similar to your example from SE Qld. Concur with your interpretation re the pothole features – can only occur in the high-energy, littoral zone in normal tide range.
Jennifer, Willis Eschenbach has another interesting post at WUWT and looks again at some of the recent projections about the future.
Some of these have been complete failures and have been found to be wrong in a very short time.
But he does look at the Greenland temp again over the last 12,000 years and finds no correlation with increased CO2 emissions.
In fact CO2 has been rising for the past 7,000 years while temp has been falling over that time.
See Vinther et al study linked at the post and their last Holocene graph showing CO2 and temperature.
So where is the melting ice to come from to give us our dangerous SLR by 2100 and beyond? According to satellite data UAH V 6 there has been no warming on Antarctica over the last 41 years, so that doesn’t help either.
Anyone have any ideas? Here’s the link.
Bill In Oz says
Thanks Jennifer for this great post.
Real science by a real scientist !
Ken Stewart says
Good photos. There’s plenty of evidence for sea level fall over the past 800 to 1500 years: NSW, NT, and Qld coasts have dried coastal lagoons, receding beaches, old wave platforms, old mud flats and flood plains now a metre above high tide (Townsville, Bowen, Port Alma). Also the Abrolhos off the WA coast. We are experiencing long term global cooling.
So much thanks to Charles at WUWT for reposting: https://wattsupwiththat.com/2019/12/27/what-can-you-see-indicating-sea-levels-are-rising/
Of course, lives and livelihoods are threatened by all of this nonsense and careers destroyed.
I made the film ‘Beige Reef’ because back in 2016 Peter Ridd was censured by James Cook University for questioning what quality assurance was in place regarding claims about the supposed death of the Stone Island corals. Peter’s further statements about quality assurance in 2017 resulted in his sacking.
John F. Hultquist says
At that morning tea, under a shelter at Coolum Beach, none of my nieces or nephews or older brother could admit to having watched the film.
I watched it twice.
Here Andrew Bolt has a very interesting editorial from the Bolt report Oct 2019.
This is one of his best covering clueless Labor’s climate emergency stunt in parliament on that day.
And a few minutes into the video he shows 2 photos of Balmoral beach Sydney in 1905 and today in 2019, with emphasis on a circled flat rock in both photos.
The photos, taken 114 years apart, show the rock virtually unchanged, with little difference that can be noticed. Check it out for yourselves and see if our so-called dangerous SLR makes any sense to you at all.
But Bolt is right, Labor is deaf and barking mad.
Here’s that Bolt video again, direct from YouTube. Perhaps it may allow more browsers to select full screen to look at the two Balmoral beach photos in more detail?
Your ‘research’ is mindblowingly simplistic. You cannot measure global sea level trends by visual observations at your local beach. There are fluctuations in sea levels around the globe caused by a multitude of factors (mostly land influenced). In some places seas will be lower. The evidence of sea level rise has been established through thorough, replicated, peer reviewed, internationally recognized research. Please people, don’t fall for this rubbish.
Here is the latest Duvat 2019 study of Coral atoll islands and this supports all the other recent studies that show about 87% of islands are either stable or growing in size.
The recent Kench studies etc also support these findings. Charles Darwin reported on this over 160 years ago and yet today we still find their ABC and other MSM extremists trying to con people into thinking these Island states are in extreme danger from SLR.
Here’s Dr Ole Humlum’s 2019 report to the GWPF and he finds an average Global SLR at the tide gauges of about 1 mm to 1.5 mm a year or 3.2 inches to 4.8 inches by 2100. If this was the case this would be less than the SLR over the 20th century. But satellites after adjustment show about 3.1 mm a year, or about 12 inches SLR by 2100.
Here’s his quote from the report and link to the report below. See page 38.
“Data from tide gauges all over the world suggest an average global sea-level rise of 1– 1.5 mm/year, while the satellite-derived record (Figure 30) suggests a rise of 3 mm/year, or more. The noticeable difference (at least 1:2) between the two data sets has no broadly accepted explanation”.
James Campbell says
I came across this site and had a look at the BoM Fort Denison data.
Average for 1914-1923 is 0.913m
Average for 2010-2019 is 1.008m
Increase is therefore 9.5cm
Happy to show people how it’s done, nothing fancy, just a 10-year average.
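For anyone who wants to reproduce that kind of figure, here is a minimal sketch of the calculation. It assumes the Fort Denison monthly mean sea level data has been saved to a CSV with columns named Year and MSL_metres – the file name and column names are illustrative assumptions, not the BoM's actual download format, so adjust them to whatever the file you obtain uses.

# Minimal sketch: decadal averages of monthly mean sea level (MSL) readings.
# Assumes a CSV "fort_denison_msl.csv" with columns "Year" and "MSL_metres";
# both names are illustrative placeholders, not the BoM's published format.
import csv

def decadal_average(path, start_year, end_year):
    """Average every monthly MSL value whose year falls in [start_year, end_year]."""
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year = int(row["Year"])
            if start_year <= year <= end_year:
                values.append(float(row["MSL_metres"]))
    return sum(values) / len(values)

early = decadal_average("fort_denison_msl.csv", 1914, 1923)
late = decadal_average("fort_denison_msl.csv", 2010, 2019)
print(f"1914-1923 average: {early:.3f} m")
print(f"2010-2019 average: {late:.3f} m")
print(f"Difference: {(late - early) * 100:.1f} cm")

Nothing more sophisticated is involved – the comparison stands or falls on which two decades are chosen and on how any gaps in the monthly record are treated.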
Stuart Harmon says
Great post this and the post on the coral reef.
Harlech Castle in North Wales has a sea gate for the delivery of supplies. The castle was constructed circa 1285.
The sea gate is now approximately one kilometre from the sea. The area between is a golf course. I’m not sure this is relevant, but given the topography I think it unlikely to be reclaimed land.
Happy New Year
Here’s an interesting article about the time after 2010–2011 when Australia was able to change the satellite GLOBAL MSL graph.
So much water was contained within the continent because of very heavy rainfall from La Niña + negative IOD events that the global MSL graph had a pronounced drop (gully) for more than a year.
It just proves that we can have major floods and droughts in Australia over a short period of time. Don’t forget that 2016 was also a MDB flood year because of another negative IOD.
There is a link to the Uni Colorado MSL graph below showing the big drop after 2011.
spangled drongo says
“You cannot measure global sea level trends by visual observations at your local beach.”
Sally, over a long period of time this is the most factual way of observing what sea levels are really doing.
For example, when I observed fine weather king tides [at around normal barometric pressure, not cyclonic] from 1946 to 1953 coming just over the top of our sea wall and trickling into our well [if we didn’t keep a levee bank around it], and when for the last 10 years those normal BP king tides have been up to 250 mm LOWER than 70 years ago, there is no better data than that to advise me that sea levels are not rising.
When I see our ocean beaches wider than they have been in my lifetime and ocean-front houses that we struggled to save from washing out to sea now changing hands for tens of millions when you couldn’t give them away back then, that is very conclusive evidence that sea levels are not rising.
Why is it that many of the main promoters of the “climate crisis” are very wealthy people but are still perfectly relaxed in spending a large part of their fortune on sea-front mansions?
Do you think they really believe alarmist sea level predictions?
Peter Cook says
Dr Marohasy, you ask where to get facts about sea level rise. May I suggest you start with peer reviewed science based on actual collection and analysis of data, such as the following:
White, Neil J., Ivan D. Haigh, John A. Church, Terry Koen, Christopher S. Watson, Tim R. Pritchard, Phil J. Watson et al. “Australian sea levels—Trends, regional variability and influencing factors.” Earth-Science Reviews 136 (2014): 155-174.
To quote just one paragraph from this paper:
“Gehrels et al. (2012) used sea levels recorded in salt marsh sediments to infer sea level was stable in the Tasmanian and New Zealand region, at about 0.3 m lower than at present, through the middle and late Holocene up to the late 19th century. The rate of sea-level rise then increased in the late 19th century, resulting in a 20th century average rate of relative sea-level rise in eastern Tasmania of 1.5 ± 0.4 mm yr− 1. This, and other analyses (e.g. Lambeck, 2002, Gehrels and Woodworth, 2013) suggest an increase in the rate of global and regional sea-level rise in the late 19th and/or the early 20th Centuries. The earliest known direct measurements of sea level in Australia are from a two-year record (1841–1842) at Port Arthur, Tasmania relative to an 1841 benchmark (Hunter et al., 2003). Hunter et al. (2003) estimated a sea-level rise over the 159 years to 1999–2002 of 0.135 m (at an average rate of 0.8 mm yr− 1). If, following Gehrels et al. (2012), most of this rise occurred after 1890, the 20th century rate would be 1.3 mm yr− 1, or 1.5 mm yr− 1 after correction for land uplift (Hunter et al., 2003 and Hunter pers comm).”
I assume you have robust arguments (perhaps even a published paper) which demonstrates the above points are wrong, and in fact sea levels are now falling?
Happy New Year.
When the reactionary trolls who have not read your work do drive-by comments that only show their ignorance, it is clear you are doing a good job.
Best wishes for a very successful and happy 2020!
Jennifer Marohasy says
You are quoting rates of at most 1.5 mm per year increase over the last 100 years? So, this is equivalent to 150 mm in total over the last hundred years or 0.15 metres?
There are cycles within cycles, generating oscillations within bands, when it comes to everything to do with climate. So, there may be some cycling-up over the last 100 or so years of around 15 centimetres in total? I thought the IPCC was quoting a value of more than double this: about 36 cms?
This is not inconsistent with the longer term trend of sea level fall since the Holocene High Stand, which some claim to be 4,500 years ago, others closer to 7,000 years ago. This longer term trend — that we can see with our eyes and that is also supported by the technical literature — suggests an overall fall in sea levels of about 1.5 metres.
I was recently sent this note with a list of technical papers. I actually have some more, and more recent reviews of Holocene sea level fall. If you would like I can email them to you. Let me know.
Note from Howard Brady from some months ago:
There is evidence of a gradual fall (not rise) from a high sea level stand between 8000 and 2000 BP. Such evidence comes from an increasing number of peer-reviewed articles describing evidence of this high sea level stand and its decline along the coasts of Australia, South Africa, South America, South Korea, and Vietnam. There is increasing evidence that such a wide occurrence of a high sea level stand, especially in the Southern Hemisphere, cannot be interpreted as due to crustal movements (Glacial Isostatic Adjustments – GIAs) in different continents at the same time, as these areas did not experience any significant glacial or ice crustal loading during the last ice age advances. Basically, there is now so much data on this fall in sea level from a high-level stand that the GIAs quoted by Dutton and Lambeck 2012 should be abandoned. A few references to peer reviewed articles describing a high sea level stand in the HTM and the fall in sea level from 8000–2000 BP are listed below. There is no justification for any glacio-eustatic uplift since 8000 BP that stopped (for some unknown reason) about 2000 BP in regions that did not experience any ice loading during the last glaciation.
Accordi, A. & Carbone, F. 2016. Evolution of the siliciclastic-carbonate shelf system of the northern Kenyan coastal belt in response to Late Pleistocene-Holocene relative sea level changes. Journal of African Earth Sciences, Volume 123, November 2016, Pages 234-257.
Baker, R.G.V. & Haworth, R.J. 2000. Smooth or oscillating late Holocene sea-level curve? Evidence from the palaeo-zoology of fixed biological indicators in east Australia and beyond. Marine Geology 163, 367-386.
Baker, R.G.V., Haworth, R.J. & Flood, P.G. 2001. Warmer or cooler late Holocene palaeoenvironments? Interpreting south-east Australian and Brazilian sea level changes using fixed biological indicators and their d18 Oxygen composition. Palaeogeography, Palaeoclimatology, Palaeoecology 168, 249-272.
Baker, R.G.V., Haworth, R.J. & Flood, P.G. 2001. Inter-tidal fixed indicators of former Holocene sea levels in Australia: a summary of sites and a review of methods and models. Quaternary International 83-85, 257-273.
Baker, R.G.V., Haworth, R.J. & Flood, P.G. 2005. An oscillating Holocene sea-level? Revisiting Rottnest Island, Western Australia, and the Fairbridge Eustatic Hypothesis. Journal of Coastal Research, Special Issue No. 42.
Bracco, B. et al. 2014. A reply to “Relative sea level during the Holocene in Uruguay”. Palaeogeography, Palaeoclimatology, Palaeoecology, Volume 401.
Bradley, S., Milne, G., Horton, B. & Zong, Y. 2016. Modelling sea level data from China and Malay-Thailand to estimate Holocene ice-volume equivalent sea level change. Quaternary Science Reviews 137, 54-68.
Chiba, T. et al. 2016. Reconstruction of Holocene relative sea-level change and residual uplift in the Lake Inba area, Japan. Palaeogeography, Palaeoclimatology, Palaeoecology, Volume 441, Part 4, Pages 982-996.
Clement, A., Whitehouse, P. & Sloss, S. 2015. An examination of spatial variability in the timing and magnitude of Holocene relative sea-level changes in the New Zealand archipelago. Quaternary Science Reviews, Volume 131, Part A, January 2016, Pages 73-101.
Haworth, R.J., Baker, R.G.V. & Flood, P.G. 2001. Predicted and observed Holocene sea-levels on the Australian coast: what do they indicate about hydrostatic models in far field sites? Journal of Quaternary Research 17, 5-6.
Lee, S., Currell, M. & Cendon, D. 2015. Marine water from mid-Holocene sea level highstand trapped in a coastal aquifer: Evidence from groundwater isotopes, and environmental significance. Science of The Total Environment, Volume 544, February 2016, Pages 995-1007.
Lunning, S. & Vahrenholt, F. 2018. Im südlichen Afrika lag der Meeresspiegel vor 5000 Jahren um 3 m höher als heute [In southern Africa, sea level 5000 years ago was about 3 m higher than today]. Kategorien: Allgemein, News/Termine, 25 June 2018.
Oliver & Terry 2019. Relative sea-level highstands in Thailand since the Mid-Holocene based on 14C rock oyster chronology. Palaeogeography, Palaeoclimatology, Palaeoecology, Volume 517, Pages 30-38.
Prieto, A. & Peltier, W. 2016. Relative sea-level changes in the Rio de la Plata, Argentina and Uruguay: A review. Quaternary International.
Sloss, Craig R. 2005. Holocene sea-level change and the amino-stratigraphy of wave-dominated barrier estuaries on the southeast coast of Australia. PhD thesis, School of Earth and Environmental Sciences, University of Wollongong. http://ro.uow.edu.au/theses/447
Sloss, C.R., Murray-Wallace, C.V. & Jones, B.G. 2007. Holocene sea-level change on the southeast coast of Australia: a review. The Holocene 17, 7, 999-1014.
Strachan, K. et al. 2014. A late Holocene sea-level curve for the east coast of South Africa. S. Afr. J. Sci., vol. 110, n. 1-2.
A happy and prosperous new year to everyone. But in reality we’ve just lived through the most prosperous decade in human history.
I know this won’t be what the extremists or con merchants want to hear but it’s the truth whether they like it or not.
Matt Ridley lists some of the facts for everyone to understand, but I’m sure the usual eco-loons and Greta luvvies will twist and turn until they convince themselves that he must be wrong.
See his list of sources at the link, or check Our World in Data, the UN, or Dr Rosling’s Gapminder, etc. Lomborg is another source.
BTW check out our rapid prosperity from 1810 to 2009 after we had the sense to start using fossil fuels. Thanks to Dr Rosling and his team allowing anyone who has a few minutes to spare to understand how far we’ve come in just 200 years.
Here is Dr Rosling’s 200 year video.
Peter Cook says
Thank you Dr Marohasy for that information. I am still not sure whether your position is that you accept (or reject) that there has been sea level rise since the start of the industrial revolution.
The 2016 Donchyts et al study found that since the 1980s coastal land was increasing all around the world. This was due to exact measurements by satellites over that period of 30 years.
This was surprising after we’d been told for years that the seas were rising and would eventually impact and flood more coastal areas across the globe.
Here’s the BBC report on the study and the data and measurements directly from the satellites.
The BBC link is worth a read.
The study is from Nature climate change and does not have free access. Here is the link.
This is from the study link page.
Earth’s surface water change over the past 30 years
Gennadii Donchyts, Fedor Baart, Hessel Winsemius, Noel Gorelick, Jaap Kwadijk & Nick van de Giesen
Nature Climate Change, volume 6, pages 810–813 (2016).
“Earth’s surface gained 115,000 km2 of water and 173,000 km2 of land over the past 30 years, including 20,135 km2 of water and 33,700 km2 of land in coastal areas. Here, we analyse the gains and losses through the Deltares Aqua Monitor — an open tool that detects land and water changes around the globe”.
Jennifer Marohasy says
My position is that there are many different phenomena that affect sea level, most of them extraterrestrial in origin.
The longer-term most significant trend obvious in the cliff faces and wave cut platforms along the shore line here at Noosa indicate sea level fall of more than one metre since the Holocene High Stand. That there has been a rise of perhaps 36 cms since the early 1800s, coincident with the Industrial revolution, is not particularly remarkable and does not change the fact that the longer term trend is one of fall.
It is also the case that there will be a measurable rise in the sea level at Noosa of about 80 cms tomorrow between 6am and lunch time, followed by a fall of a similar amount. This is part of the daily cycle that sits within the monthly cycle that sits within the 18.6 year lunar declination cycle, and so it goes on.
Can you tell me what caused the 120 metre rise in sea levels globally that began about 20,000 years ago and continued until about 7,000 years ago?
Many sceptics and alarmists like to suggest that sea levels have continued to rise through the Holocene (over the last 10,000 years), but this claim is not supported by the technical literature or by what I can see along the east coast of Australia.
There are cycles within cycles, I think the fall since the Holocene high stand the most significant for those interested in climate change, and the most denied … but if you are going fishing it is the daily cycle that will be of most concern to you, perhaps. | 2 | 4 |
“Torturers must never be allowed to get away with their crimes, and systems that enable torture should be dismantled or transformed.”
UN Secretary-General António Guterres
The Bahraini authorities’ use of systematic torture has been verified through dozens of local and international human rights reports, notably the report of the Bahrain Independent Commission of Inquiry (BICI), which was created by the King himself, who also adopted its findings and recommendations. The BICI attributed the systematic torture to “the lack of accountability of officials within the security system in Bahrain,” which has led to a “culture of impunity.” As widespread impunity is conducive to more torture and other forms of cruel, inhuman, and degrading treatment, the absolute priority of the Bahraini government should have been to punish perpetrators if it was sincere in combating torture. However, the government’s efforts in the last decade have fallen short of meaningful progress. Creating various oversight bodies and enacting new laws have not brought those involved in torture and ill-treatment to justice. While addressing torture requires strict adherence to the principle of “superior responsibility,” the Bahraini government has completely failed to uphold this principle. Hundreds of torture victims and their families are still waiting for redress and accountability in Bahrain.
Torture has always been one of the Bahraini authorities’ means of dominance and control over the population, even before independence. Torture reports intensified during periods of political turmoil, specifically against political detainees and prisoners. Most cases of torture occur during interrogations by law enforcement officials, for the purpose of extracting confessions or as punishment. The last peak of torture cases occurred during and in the aftermath of the 2011 Uprising. The BICI received 559 complaints concerning the mistreatment of persons in custody between 1 July 2011 and 23 November 2011 alone. Since then, dozens of torture and ill-treatment cases have been documented by local and international human rights organizations. The Bahraini criminal system enables torture: the Public Prosecution Office (PPO) disregards credible allegations of torture, the courts accept coerced confessions, the oversight bodies that have been established lack independence and effectiveness, and perpetrators are ultimately not held to account. Even the media plays a role in the lack of accountability by being a propaganda tool for the government.
Bahraini law gives the PPO jurisdiction over torture crimes. However, over the years, the PPO has demonstrated a lack of independence and a reluctance to investigate torture and ill-treatment cases and hold perpetrators accountable. Hundreds of people were tortured and ill-treated in detention centers that the PPO was empowered to inspect, and dozens of law enforcement officials who were involved in torture are free and unpunished because of the PPO’s unwillingness to bring them to justice. There have been many reports of the PPO’s blatant disregard of torture allegations during interrogations of detainees. In 2015, Human Rights Watch reported six cases in which detainees reported their torture at the Criminal Investigation Directorate (CID) to the PPO; not only did the PPO fail to take any action, it also ordered the return of two of them to the CID after they refused to make confessions. Only one of those cases resulted in an investigation. The BICI report also indicated that, in some cases, “judicial and prosecutorial personnel may have implicitly condoned” the lack of accountability within the security system in Bahrain.
On 31 July 2016, Hassan Jassim Hasan al-Hayky, a 35-year-old Bahraini citizen, died in custody from injuries he sustained during torture at the CID, according to his family. Al-Hayky reported his torture to the PPO, but instead of investigating his allegations, it ordered his return to the CID, where he was “subjected to further acts of torture.” At the time, Bahrain Center for Human Rights (BCHR) received information that al-Hayky was allegedly subjected to sexual abuse on 10 July 2016 by officials at the PPO, who forced him to sign a confession. He was denied access to a lawyer during the investigation despite his multiple requests for one, and when his lawyer went there, he was falsely told that al-Hayky had not yet been brought in. When al-Hayky’s lawyer stated that his deceased client had wounds and bruises on his body, confirming “beyond any doubt the existence of a criminal suspicion behind the death,” the PPO accused him of “spreading false news.” After a quick investigation, the Special Investigation Unit (SIU), which operates under the PPO, concluded that the death was due to natural causes. Similarly, the PPO condoned the torture allegation of Mohammad Ramadan, who was later convicted based on a coerced confession and sentenced to death on 29 December 2014, even though he had shown the public prosecutor signs of torture on his body.
The PPO has been complicit in torture by accepting coerced confessions during interrogations, condoning allegations of torture, practicing leniency with torture perpetrators, and being part of a system that criminalizes dissent and justifies violence to crush it. In the majority of cases of unlawful killing of civilians by security forces documented by the BICI, some of whom were killed under torture, the PPO unacceptably charged the perpetrators with only assault without intent to kill, which resulted in lenient sentences for the perpetrators.
In parallel, the Bahraini courts play a role in the systematic torture and widespread impunity in the country. They have accepted coerced confessions even in high-profile cases, like the case of what is called “Bahrain 13.” Moreover, the judicial system has failed to provide redress to dozens of families and victims of security forces’ human rights violations by acquitting many perpetrators of torture and extrajudicial killings, and by issuing light sentences, inconsistent with the gravity of their offenses, against the few who were convicted.
In 2012, the BICI chief commented, “you can’t say that justice has been done when calling for Bahrain to be a republic gets you a life sentence and the officer who repeatedly fired on an unarmed man at close range only gets seven years,” referring to the case of Hani Abd al-Aziz Jumaa.
In the case of the February 14 Coalition, fifty individuals were convicted in a mass trial on charges of establishing and joining a “terrorist group.” The evidence presented in court to establish guilt regarding these charges consisted of mainly coerced confessions and a number of recordings and photographs showing some defendants organizing and participating in demonstrations and calling on social media for participation. “Despite the striking lack of evidence of any legitimately criminal activity, the court sentenced 16 defendants to 15-year terms, 4 defendants to 10-year terms, and the remaining 30 defendants to 5 years in prison.”
In the cases of the 21 leading activists and opposition figures, 14 of whom were tried before the “National Safety Courts” in person, the civilian courts upheld their convictions and sentences although they were denied basic due process rights. The High Criminal Court of Appeal upheld the convictions and sentences of 13 of them on 4 September 2012, and on 6 January 2013, the Cassation Court confirmed the verdict. The BICI documented in detail the torture and ill-treatment of many of them, and Amnesty International considered them “prisoners of conscience who should be immediately and unconditionally released.” The Military Prosecution “failed to provide any evidence that the accused used or advocated violence” during the 2011 protests. The main incriminating evidence in the case before the High Criminal Court of Appeal (civilian court) was the “confession of two defendants, allegedly obtained under torture and testimonies from officers allegedly involved in the defendants’ torture.”
In the case of Mohammad Ramadan and Hussein Ali Moosa, who are at imminent risk of execution, the SIU, despite its inconclusive investigations into their torture complaints, recommended that “the courts reconsider the verdicts against Moosa and Ramadan in light of a newly uncovered medical report by an Interior Ministry doctor that had not been available during the initial trial,” in its 18 March 2018 report. The SIU concluded that there is a “suspicion of the crime of torture (…) which was carried out with the intent of forcing them to confess to committing the crime they were charged with.” Yet, the Court of Cassation upheld their death sentences on 31 July 2020.
In February 2014, a group of 97 lawyers submitted a memorandum to the vice-president of the Supreme Judiciary Council, highlighting the judiciary’s inaction with respect to preventing and combating torture, and its complicity in it:
- The PPO and some judges ignore evidence and allegations of torture and admit coerced confessions in criminal proceedings.
- In several cases, the judge refuses the lawyers’ request to include the defendants’ identification of the person who tortured or ill-treated them into the session minutes and refuses to assign the PPO to investigate.
- In many cases, the judge requests that defendants be expelled from the courtroom after identifying the person who tortured or abused them.[i]
The Bahraini judicial system, both the PPO and courts, failed to address the systematic torture of detainees in places of detention and even contributed, directly and indirectly, to widespread impunity.
The judicial system’s reluctance to seriously combat torture could be balanced by effective and independent human rights bodies that are willing and capable of exposing torture cases and pushing for accountability; however, this is not the case in Bahrain. The governmental oversight bodies created in response to the BICI recommendations post-2011 have proved to be defective and deficient.
The Special Investigation Unit (SIU), created in 2012, is responsible for “the determination of criminal accountability of those in government who have committed crimes of killing or torture or mistreatment of civilians, including those in the chain of command under the principle of superior responsibility.” Although creating the SIU within the PPO was a step forward in addressing impunity in Bahrain, the SIU’s independence and effectiveness have been called into question since its inception: it is a part of the PPO hierarchy. Moreover, the rate of case referrals to criminal courts is very low compared to the total number of complaints received by the SIU during the last decade; most referred cases have ended in acquittals or light sentences, and most prosecutions have been of low-ranking officers. The SIU’s association with the PPO adversely affects its integrity and public trust in it, given the latter’s disregard of torture allegations and prosecution of prisoners of conscience. The SIU also does not comply with many of the provisions of the Istanbul Protocol, with which it is supposed to be in line.
The Office of the Ombudsman at the Ministry of the Interior (MOI Ombudsman) is another governmental body that is supposed to help put an end to systematic torture. It became operational in July 2013. The MOI Ombudsman was created to ensure that Ministry of Interior personnel abide by legal procedures and to hold violators accountable. It is also mandated to receive, review, and examine complaints against members of the Public Security Forces, including torture complaints. The MOI Ombudsman’s independence is also questionable, since it works under the supervision of the MOI and its employees are appointed upon the approval of the Minister of Interior. As for its effectiveness, the MOI Ombudsman has shown reluctance and disregard for the well-documented violations committed by MOI personnel, as demonstrated by the small number of cases it has referred for possible criminal prosecution and the fact that few investigations have been launched on its own initiative in the last few years.
As for the Prisoners and Detainees Rights Commission (PDRC), created in 2013, it functions as a National Preventive Mechanism (NPM) and is empowered to verify the conditions of inmates and the treatment they receive. The PDRC’s independence and effectiveness have been called into question as well with a lack of transparency in appointing its members, its financial dependency on the MOI Ombudsman, and a lack of clear judgment in its reports. It failed to demonstrate rigor, seriousness, and persistence in addressing pressing issues in detention facilities, especially the torture and ill-treatment of political prisoners.[ii]
Creating these oversight bodies and others seemed promising in combating systematic torture in Bahrain. However, after a decade, their work has not achieved tangible results, raising serious questions about the government’s willingness to genuinely address torture in the country.
Combating and preventing torture requires the concerted efforts of the country’s various stakeholders. “The media and civil society organizations can contribute to an effective system of checks and balances to prevent and prohibit torture.” While the government has almost completely closed the civic space and stifled its institutions, it has also controlled the media and prevented them from playing their role in exposing torture, supporting victims, and raising awareness about combating it. Responsible media reporting, public education campaigns, and targeted awareness-raising initiatives can build greater knowledge and understanding of the issues, influence public opinion, and help change societal attitudes. However, the existing Bahraini media only disseminate news and information favorable to the government, praise the government’s accomplishments, and adopt the government’s narrative, making them a propaganda tool. Depicting dissidents and opposition figures as “traitors,” “foreign agents,” and “threats to national unity” in Bahraini mainstream media is not helping either. It vilifies the government’s opponents and justifies violence against them.
Overall, Bahrain’s criminal system lacks the independence needed to eradicate torture, and there is a clear political unwillingness to genuinely address this issue. As discussed earlier, the judiciary, in its current state, is unable or unwilling to hold perpetrators to account or to bring justice for hundreds of victims. Civil society is stifled, media are strictly controlled, and oversight bodies are not independent, putting redress and accountability out of reach in the current circumstances.
On the International Day in Support of Victims of Torture, BCHR calls on the Bahrain government to end the “culture of impunity” and bring those involved in torture to justice, including those in the chain of command. Ensuring remedies for torture victims contributes to long-term societal stability and opens the door for overdue political and social reconciliation. Torture victims have the right to redress and accountability, and the Bahraini government has an obligation to protect and fulfill this right under national and international laws.
[i] For more information about torture in Bahrain, read BCHR report “Cosmetic Reforms: Assessing Bahrain’s Implementation of the BICI Recommendations Ten Years Later”, Recommendations 1716, 1719, 1720, 1722 f, available at https://bahrainrights.net/?p=136371
See also BCHR report “Bahrain: Torture is the Policy, Impunity is the Norm”, available at https://bahrainrights.net/?p=13407
[ii] For more on Bahrain’s human rights bodies, read BCHR report “Defective and Deficient: A Review of Bahrain’s Human Right Bodies”, available at https://bahrainrights.net/?p=13624 | 1 | 2 |
Introduction: Access to infrastructure
2008 was a year in which there was much focus on the issue of universal access to information and communications technologies (ICTs) and the internet. Many global institutions focused on access, resulting in initiatives such as the International Telecommunication Union (ITU) Global Symposium for Regulators on Open Access; an Organisation for Economic Co-operation and Development (OECD) publication called Global Opportunities for Internet Access Developments; the GSM Association’s report Universal Access: How mobile can bring communications for all; the Global Alliance for ICT and Development (GAID) Global Forum on Access and Connectivity; and infoDev’s publication on broadband in Africa, as well as the European Commission’s call for universal broadband in Europe by 2010, and the Internet Governance Forum’s (IGF) adoption of “Internet for All” as its overall theme for its third meeting in Hyderabad.
Within these institutions there is a broad recognition that while the digital divide, driven by the spread of mobile, has closed dramatically with regard to voice telephony, a new access gap is emerging with respect to broadband internet infrastructure and services. In this decade, the rapid increase in user-generated content and interactivity on the internet, sometimes known as Web 2.0, has transformed the digital environment. This process was facilitated by the expansion of broadband internet access, and the eclipse of narrowband internet access through dial-up connectivity. In 2004, the number of broadband subscribers in the OECD surpassed the number of dial-up subscribers. At the end of 2003, there were 83 million broadband subscribers in the OECD. By June 2007, there were 221 million – an increase of 165% (OECD, 2008a, p. 23). In 2006, about 70% of broadband subscribers worldwide were located in OECD countries, which accounted for only 16% of the world’s population. In contrast, 30% of broadband subscribers were found in developing countries, with 84% of the population. The situation in least developed countries (LDCs) is much worse – there were only 46,000 broadband subscribers in 22 out of 50 LDCs with broadband services in 2006 (ITU, 2007, p. 5).
Why is access to broadband so important? The ITU says this:
Ensuring the information society requires not only access and availability of ICT, but a high quality ICT experience. Broadband-enabled services have the potential to create economic and empowerment opportunities, and improve lives (ITU, 2007, p. 7-8).
European Union (EU) Telecoms Commissioner Viviane Reding:
High speed internet is the passport to the Information Society and an essential condition for economic growth. That is why it is the Commission’s policies to make broadband internet for all Europeans happen by 2010 (BBC, 2008).
And the OECD Council on Broadband Development:
Broadband not only plays a critical role in the workings of the economy, it connects consumers, businesses, governments and facilitates social interaction (OECD, 2008a, p. 7).
When opinion-makers, policy think tanks and industry players in developed countries look at the issue of broadband in developing countries, they tend to say that broadband will be delivered by wireless networks. For example, the OECD says “all indications are that the majority of the next several billion users, mainly from developing countries, will connect to the internet principally via wireless networks. In some developing countries the number of wireless subscribers already outnumbers those for fixed networks by more than 20 to one” (OECD, 2008b, p. 4). While this kind of statement may be generally true, it tends to elide with the notion that these wireless networks will be those of the mobile phone operators and that the solution to the broadband divide will be simply left to the private sector in the form of the mobile operators to resolve. Global institutions representing the interests of mobile subscribers take this point up with alacrity and make claims that “mobile communication will deliver affordable voice, data and Internet services to more than five billion people by 2015” (GSMA, 2008, p. 1). Free market activist financial journals like The Economist champion the mobile web when they argue: “The developing world missed out on much of the excitement of the initial web revolution, the dotcom boom and Web 2.0, largely because it did not have an internet infrastructure. But developing countries may now be poised to leapfrog the industrialised world in the era of the mobile web” (Economist, 2008).
Among the hoary rhetorical notions that have exceeded their sell-by date, the idea that developing countries will somehow leapfrog over developed countries with regard to access to broadband infrastructure should really be abandoned. This is akin to the myth that there are more telephone subscribers in Manhattan than the whole of Africa, which was popular in the 1990s and continued to be trotted out even when it was demonstrably no longer true. With 70% of the population of OECD countries already connected to broadband internet infrastructure and universal broadband service on the horizon, there is nothing to leapfrog over.
Southern-based policy research centres have a more sober view on the matter. In its review of policy outcomes in Africa, Research ICT Africa! says the following:
The excitement about the extension of telecommunications networks and services in countries across the continent over the last few years, particularly in the area of mobile telephony, should be tempered by the fact that these have not been optimal. While gains have clearly been made this review of the telecommunications sector performance in 16 African countries suggests that national policy objectives of pervasive and affordable ICT services are often undermined by many countries’ own policies and practices, market structures and institutional arrangements. While Africa may have the highest growth rate in mobile telephony this is off a very low base. Large numbers of people do not have permanent access to basic telephony. The enhanced ICT services required for effective participation in the economy and society continue to elude the vast majority of the continent’s people (Esselaar et al., 2007, p. 9).
It is likely that wireless networks, and not simply those of mobile operators, will play an important role in developing country access to broadband, particularly with regard to local access. But it is necessary to recognise the considerable complexity involved in building access to broadband in developing countries, which goes beyond the notion that mobile operators will simply supply it. At the IGF in Rio de Janeiro in 2007, African internet expert Mike Jensen argued that reaching the goal of affordable universal access to broadband in developing countries requires the following combination of factors:
- More competition and innovation in the internet and telecom sector, with effective regulation.
- Much more national and international backbone fibre, with effective regulation of non-discriminatory access to bandwidth by operators and service providers.
- More effort to build demand, especially efforts by national governments to build useful local applications.
- Improved availability of electric power.
- Better indicators for measuring progress (quoted in Jagun, 2008a).
Speaking at an equitable access workshop before the Rio IGF, African telecommunications expert Lishan Adam identified the existing access gaps that are most stark in Africa, Latin America and Asia (Adam, 2008). Then, based on an analysis of the data and studies that have been made into why the policy programmes to stimulate access in developing countries have had such poor results, Adam posits a number of reasons for the failure by policy-makers and regulators to address these access gaps:
- Market-based approaches were not entirely effective in promoting equitable access – in particular, they failed to break fixed-line telecom monopolies and introduce effective competition in ICT networks and services.
- Regulatory institutions and frameworks remained rather weak. Roles and responsibilities between policy-makers and regulators were often confused, and regulators lacked the capacity to regulate effectively.
- Global regimes were not responsive to the need for equitable access. Developing countries lack the capacity to influence the shape of global ICT policy that cascades across regional and national domains.
After analysing three IGF workshops and the plenary session on access, APC identified a convergence of views on access as follows:
- First, there appeared to be agreement that the competitive (market) model has been effective in increasing access in developing countries. There were therefore calls for policy coherence in the telecom sectors of developing nations and specifically for the principles of competition to be consistently and evenly applied to all areas of the telecom sector.
- Second, there was recognition of the applicability of collaborative models for providing access in areas where traditional market models seem to have failed. Such areas include rural and other underserved areas where the participation of diverse network operators and providers – including municipal government authorities, cooperatives, and community operators – has contributed to increasing access. There were therefore calls for the review of policy and regulation and the establishment of incentives to facilitate increased participation by this cadre of operators.
- Third, there continues to be conviction and consensus on the potential of ICTs as tools for development – particularly at the level of rural and local access. ICTs can be used in increasing accessibility to healthcare and education; they can help in decreasing vulnerabilities and improving citizen engagement with governments and their institutions. There was therefore a call for the promotion and adoption of a multi-sectoral approach in achieving universal, affordable and equitable access. Specifically, there was a recognised need for the integration of ICT regulation and policy with local development strategies, as well as the exploitation of complementarities between different types of development infrastructure (for example, transport networks, water pipes/canals, power/electrification, communication, etc.) (Jagun, 2008a).
However, there are apparent contradictions between some of these points. For example, there is (at least at face value) an inherent contradiction between acceptance of the “efficacy” of competitive models and their promotion in the telecom sector, and the call for increased participation of a more diverse range of network operators and providers, most of whom adopt non-market models to achieve wider access in rural areas. Will all stakeholders truly agree that in order to make universal access a reality, competitive models need to coexist with collaborative ones? One can see fault lines around the roll-out of municipal wireless networks running into opposition from private network operators in the United States (US).
This may not be a problem in developing countries where there is still considerable involvement of the public sector in ICT network provision and an increasing role in ICT services like e-government. In many developing countries the attempts to privatise public telecom operators had negative consequences for the introduction of competition and for reducing access gaps (Horwitz & Currie, 2007). It is unlikely that there will be a pure market approach in countries where the notion of the developmental state is prevalent. It is more likely that the primary modification of the telecom reform model will be that there is a role for public sector and community network provision within a predominantly competitive environment as long as it is transparent and non-discriminatory. Anyone can play, as the open access principle goes.
What is also needed is a modification of the mandates for universal access funds in developing countries to support the roll-out of community wireless networks in rural areas, as well as for capacity-building programmes and local content development to enable citizens to use ICTs effectively in local languages. Policy-makers and regulators need to support this roll-out with enabling regulations liberalising voice over internet protocol (VoIP), allowing community access to spectrum, and creating simple licensing and interconnection regimes for community-based networks.
Access to fibre remains a problem for many developing countries. On the west coast of Africa, the problem has been compounded by the continued dominance of moribund monopolies propped up by rent-seeking patron-client networks in government. Research into the operations of the SAT-3/WASC cable has identified what needs to be done to break these monopolies (Jagun, 2008b).
ICT for development analyst and researcher Abiodun Jagun illustrates what she calls the “reinforced monopolies” that inhibit the economic and developmental potential of the SAT-3/WASC cable from being realised in Figure 1. The diagram represents the varying kinds of monopolies of the cable that exist in many of its beneficiary countries in sub-Saharan African. It shows the monopolies operating at different levels, such as international gateway licences, landing stations, national backhaul network, etc. Those who want to “access” the bandwidth need to navigate these monopolies.
The solid lines represent pure monopolies. For instance, when the research was conducted, the SAT-3/WASC cable was the only fibre-optic cable offering connectivity to many countries in sub-Saharan Africa. In many cases the SAT-3/WASC landing station is also only restricted to one signatory.
Sorting out a policy and regulatory problem of this magnitude illustrates the complexity of what is at stake in building broadband in developing countries. And without resolving the problem of affordable access to international bandwidth, the promise of mobile operators to provide broadband internet access will be inhibited.
Nevertheless, sometimes a simple manoeuvre by a regulator can make a dramatic change in a seemingly hopeless state of affairs. One example is the case of Mauritius, where the regulator invited the monopoly operator into a price determination proceeding which enabled the issue of the high cost of international bandwidth to be discussed in public with full transparency. The outcome was that the regulator was able to get the operator to lower its prices for international bandwidth (Southwood, 2008).

A problem, however, is the state of governance in developing countries. Developing country governments are often the worst enemies of their citizens. They lack the capability to get things done, lack responsiveness to their citizens’ needs and rights, and are unaccountable for their actions. There may be all the consensus in the world as to what can be done to improve equitable access to ICTs, but it will be of little use if the state is dysfunctional. This is not to say that poor governance is limited to developing countries, but its impact is so much greater in countries that lack institutional capacity generally, and have to cope with poverty, conflict and lack of resources. This is a major challenge when it comes to equitable access.
Fortunately there is a growing awareness in most developing country governments of their shortcomings with regard to governance. The issue is on the agenda globally and nationally with international agencies developing indicators to measure good governance, such as the World Bank Institute’s Governance and Anti-Corruption programme, which produces a set of governance indicators for each country reflecting:
- Voice and accountability
- Political stability
- Government effectiveness
- Rule of law
- Regulatory quality
- Control of corruption.
The indicators are a form of incentive to some developing countries to improve their standing, but they are also useful for civil society organisations to understand where the governance problems in a particular state lie and what space there is for effective advocacy on equitable access. The indicators on regulatory quality and government effectiveness are particularly important here.
However, what is missing in the good governance methodology is sufficient recognition of the role of patron-client networks in developing country governance. The ITU-D (the telecommunications development sector of the ITU) never addresses this in its engagement with developing country governments and regulators. The various ITU policy documents are disseminated in what amounts to an apolitical state, suggesting there is a straight line between following their policy advice on communications policy reform and positive outcomes on the ground. This lacuna in communication policy reform – that it tries to address policy and regulatory shortcomings as a function of institutional failure and the incorrect application of incentives in the language of institutional economics – does not reach into the realities of client-patron relations and rent-seeking in the politics of developing countries (Khan, 2004). There is unlikely to be much improvement in communications policy reform until these political dynamics are addressed.
The critical success factor of working towards good governance is the extent to which developing countries take it seriously themselves, without the prompting of developed countries and international development institutions. Within Africa, the New Partnership for Africa’s Development (NEPAD) has initiated a peer review process which examines:
- Democracy and good political governance
- Economic governance and management
- Corporate governance
- Socio-economic development.
Such steps are important and help create a climate for good governance, which in turn may enable effective ICT regulators to emerge as greater awareness of the value of good governance grows. More effective government may lead to a situation such as in Kenya. There the government is driving the expansion of broadband access in the country and across the region by taking the initiative to lay a fibre-optic submarine cable, TEAMS, and then applying the lessons for broadband delivery systematically and coherently with the enthusiastic support of all stakeholders. If the Kenyan government can pull this off, it will provide a powerful example for other countries in Africa to follow.
Adam, L. (2008) Policies for equitable access. London: APC.
Available at: www.apc.org/en/pubs/research/openaccess/world/policies-equitable-access
African Peer Review Mechanism: www.nepad.org/aprm
BBC (2008) EC call for ‘universal’ broadband. BBC News, 26 September.
Available at: news.bbc.co.uk/2/hi/technology/7637215.stm
Economist (2008) The meek shall inherit the web. The Economist, 4 September.
Available at: www.economist.com/science/tq/displaystory.cfm?STORY_ID=11999307#top
Esselaar, S., Gillwald, A. and Stork, C. (2007) Towards an African e-Index 2007: Telecommunications Sector Performance in 16 African Countries. Research ICT Africa!
Available at: www.researchictafrica.net/images/upload/Africa_comparativeCORRECTED.pdf
GSMA (GSM Association) (2008) Universal Access: How mobile can bring communications for all.
Available at: www.gsmworld.com/universalaccess/index.shtml
ITU (International Telecommunication Union) (2007) Trends in Telecommunication Reform 2007: The Road to Next-Generation Networks (NGN). Geneva: ITU.
Horwitz, R. and Currie, W. (2007) Another instance where privatization trumped liberalization: The politics of telecommunications reform in South Africa – A ten-year retrospective. Telecommunications Policy, 31.
Jagun, A. (2008a) Building consensus on internet access at the IGF. Montevideo: APC.
Available at: www.apc.org/en/pubs/issue/openaccess/all/building-consensus-internet-access-igf
Jagun, A. (2008b) The Case for “Open Access” Communications Infrastructure in Africa: The SAT-3/WASC cable (Briefing). Glasgow: APC.
Available at: www.apc.org/en/node/6142
Khan, M. (2004) State Failure in Developing Countries and Strategies of Institutional Reform. In Tungodden, B., Stern, N. and Kolstad, I. (eds.), Toward Pro-Poor Policies: Aid Institutions and Globalization. Washington and New York: Oxford University Press and World Bank.
OECD (Organisation for Economic Co-operation and Development) (2008a) Broadband Growth and Policies in OECD Countries. Paris: OECD.
OECD (2008b) Global Opportunities for Internet Access Developments. Paris: OECD.
Southwood, R. (2008) The Case for “Open Access” in Africa: Mauritius case study. London: APC.
World Bank Institute Governance and Anti-Corruption programme:
One in which consumers are able to select, from a range of providers, the product that best matches their needs at a price they feel is acceptable.
South Atlantic 3/West Africa Submarine Cable.
The Kenyan case is interesting in that the country scores quite well on accountability and regulatory quality indices, while doing poorly on other governance indicators. One dimension in Kenya is an awareness that political stability is fragile, which policy-makers such as the permanent secretary in its ICT Ministry incorporate into planning as far as they can.
Antisocial personality disorder
For patient information click here
Editor-In-Chief: C. Michael Gibson, M.S., M.D.; Associate Editor(s)-in-Chief: Jesus Rosario Hernandez, M.D., Haleigh Williams, B.S., Irfan Dotani
Synonyms and keywords: APD; introverted personality disorder; sociopath; ASPD; antisocial behavior; sociopathic; sociopathic behavior; antisocial; antisocial tendency; antisocial tendencies
Antisocial personality disorder (ASPD) is a psychiatric condition characterized by a disregard for social rules, norms, and cultural codes, as well as impulsive behavior and indifference to the rights and feelings of others. People with ASPD may lie, endanger the wellbeing of others for their own benefit, and/or show a prominent lack of remorse for wrongdoing. Such behavior is often associated with criminal activity. Sufferers of ASPD may nonetheless be capable of behaving in a flattering, charming, or otherwise likeable and socially acceptable way in the interest of manipulating others and achieving their own ends. Incarcerated people are roughly ten times more likely to have antisocial personality disorder than members of the general population. During childhood, people who will go on to be diagnosed with ASPD may demonstrate pyromania, a prolonged period of bedwetting, and/or cruelty to animals; this set of symptoms is known as the Macdonald triad. Truancy, delinquency, hyperactivity, and conduct disorder are also common in young people with ASPD. "Antisocial personality disorder" is the terminology used by the American Psychiatric Association's Diagnostic and Statistical Manual, while the World Health Organization's ICD-10 uses the term Dissocial personality disorder. People with ASPD are sometimes referred to as "sociopaths."
- Prior to being defined cohesively, what we have come to know as ASPD was encapsulated under the categories of "psychopathy" and "sociopathy." The distinctions among these disorders remain somewhat ill-defined.
- The term antisocial personality disorder first appeared in the third edition of the DSM in 1980.
There is no established classification system for ASPD.
- Individuals with cavum septum pellucidum (CSP), a marker of limbic neural maldevelopment, are significantly more likely to have ASPD than control populations.
- This relationship is observed even when researchers control for trauma and head injury.
- The early maldevelopment of limbic and septal structures appears to predispose individuals to antisocial behaviors.
- The presence of CSP is more closely related to the aggressive aspect of ASPD symptomology than the deceptive/irresponsible facet.
Conditions that are commonly comorbid with ASPD include:
- Psychopathy (sometimes considered a particularly intense form of ASPD)
- Substance abuse disorder
- People who meet the DSM criteria for ASPD are 21 times more likely to develop alcohol abuse and dependence at some point throughout their lives than people who do not suffer from ASPD.
The cause of ASPD is unknown, though the disorder is commonly associated with both genetics and childhood tumult.
ASPD must be differentiated from disorders with similar symptomology, including:
- Substance abuse disorders
- Bipolar disorder
- Narcissistic personality disorder
- Histrionic personality disorder
- Borderline personality disorder
- Criminal behavior
Epidemiology and Demographics
- Overall, 3% of men and 1% of women in the general population meet the criteria for ASPD.
- The 12-month prevalence of ASPD is 1.0% of the United States adult population.
- ASPD is more common in men than in women.
- The male to female ratio is 2 to 1.
- Boys with ASPD tend to develop symptoms earlier than girls, who may not show signs of ASPD until they reach puberty.
- Basal testosterone levels are positively related to incidence of antisocial behaviors.
- Symptoms tend to be most severe in early adulthood and may diminish over time, as a person ages.
- This finding is consistent with arrest records, which show that arrests are most common among individuals in their late teens and early 20s, and then decline in subsequent age groups. This is a relevant finding because criminality is a common complication of ASPD.
- No racial predilection is associated with ASPD.
- Approximately 1 in 2 male prisoners and 1 in 5 female prisoners suffer from ASPD.
- Prisoners are roughly ten times more likely to have antisocial personality disorder than members of the general population.
Risk factors for the development of ASPD include:
- Male gender
- Family history of ASPD
- Antisocial/alcoholic parent
- Alcohol abuse
- Child abuse
- Drug abuse
- Childhood hyperactivity or conduct disorder
- Though people with ASPD commonly exhibited conduct disorder (CD) as children, most children with CD do not go on to develop ASPD. Accounting for the variety and severity of childhood behavioral issues is a more precise means of predicting whether a child with CD will later be diagnosed with ASPD.
Screening for ASPD, performed by a mental health specialist, is recommended for individuals who demonstrate antisocial behaviors, particularly as children, when complications may be easier to forestall. Screening for ASPD is also recommended for incarcerated people, who experience much higher rates of ASPD than the general population.
Natural History, Complications and Prognosis
- People who suffer from ASPD are generally less likely to seek treatment for any medical problems or to adhere to treatment regimens set forth by their physicians.
- This can lead to myriad physical and psychological complications, including suicide attempts.
- Antisocial tendencies in childhood are also strongly predictive of future economic issues, diminished educational achievement, long-term unemployment, and unsatisfying familial relationships in later years.
- Patients who experience an early onset of symptoms progress more quickly to a severe form of ASPD than do people with later ages of onset.
- In a comparison between patients with ASPD and patients with schizophrenia, members of the former group were more likely to be married and to have secured their own housing, but they were equally likely to struggle at work and to be debilitated by psychiatric symptoms (though not the same symptoms as the schizophrenic patients).
- Married people with ASPD were found to be more likely to improve over time than their unmarried counterparts.
Complications of ASPD can include:
- Law-breaking or imprisonment
- Drug abuse
- ASPD patients who are alcoholics are more likely to experience alcohol-related consequences than non-ASPD alcoholics.
- Inability to sustain employment
- Tumultuous or unhappy family life
- Self-harm or suicide
- Traumatic injury
- Hepatitis C infection
- Without treatment, the prognosis of ASPD is varied.
- Symptoms commonly peak in the late teenage years and the early 20s.
- Sometimes, symptoms improve on their own as the patient nears middle age.
- The median age for improvement seems to be 35 years.
- Among ASPD patients, heavy alcohol usage and low socioeconomic status are predictive of poor neuropsychological outcomes.
- With treatment, the prognosis is improved, though concrete data are unavailable.
- Though behavior may improve with treatment, the elements of personality that are central to ASPD, such as an inability to empathize with others, often remain.
- Successfully treated individuals may no longer pose a threat to society or themselves, but they often continue to exhibit irritability and hostility, and may still have difficulty sustaining interpersonal relationships.
ICD-10 Diagnostic Criteria
Chapter V of the tenth revision of the International Classification of Diseases offers a set of criteria for diagnosing the related construct of dissocial personality disorder.
Dissocial Personality Disorder (F60.2) usually comes to attention because of a gross disparity between behavior and the prevailing social norms, and is characterized by:
- Callous unconcern for the feelings of others
- Persistent attitude of irresponsibility and disregard for social norms, rules, and obligations
- Incapacity to maintain enduring relationships, though having no difficulty in establishing them
- Very low tolerance to frustration and a low threshold for discharge of aggression, including violence
- Incapacity to experience guilt or to profit from experience, particularly punishment
- Marked proneness to blame others, or to offer plausible rationalizations, for the behavior that has brought the patient into conflict with society
There may also be persistent irritability as an associated feature. Conduct disorder during childhood and adolescence, though not invariably present, may further support the diagnosis.
History and Symptoms
Common symptoms of ASPD include:
- Persistent lying or stealing
- Recurring difficulties with the law
- Tendency to violate the rights of others (property, physical, sexual, emotional, legal)
- Substance abuse
- Aggressive, often violent behavior; prone to getting involved in fights
- A persistent agitated or depressed feeling (dysphoria)
- Inability to tolerate boredom
- Disregard for the safety of self or others
- A childhood diagnosis of conduct disorders
- Lack of remorse for hurting others
- Superficial charm
- Impulsivity or recklessness
- A sense of extreme entitlement
- Inability to make or keep friends or maintain long-term, healthy relationships
- Lack of remorse for wrongdoing
- Persistent difficulty interacting with authority figures
- Macdonald triad in childhood (pyromania, a prolonged period of bedwetting, and cruelty to animals)
- ASPD is diagnosed based on the results of a psychiatric evaluation, during which the clinician will consider the nature of the patient’s symptoms, how long they have been present, and how severe they are.
- For a diagnosis of ASPD to be made, the patient must have exhibited behavioral and emotional problems during childhood.
- A diagnosis of ASPD is supported by clinical evaluation rather than laboratory tests.
Brain regions in which structural abnormalities have been reported in antisocial individuals include:
- Prefrontal cortex (particularly orbitofrontal and dorsolateral prefrontal cortex)
- Superior temporal gyrus
- Amygdala-hippocampal complex
- Anterior cingulate cortex
- ASPD is often regarded among mental health professionals as one of the most difficult personality disorders to treat effectively, largely because individuals with this disorder are unlikely to seek treatment on their own and might do so only on the orders of a court or at the urging of a loved one.
- Behavioral treatments or talk therapy (CBT or MBT), as well as patient support groups, may be useful for some patients.
- It is also important to treat patients for any comorbid conditions, including substance abuse disorder or mood disorder.
- Relatively little evidence exists to support the use of medication to treat ASPD. Carbamazepine or lithium may be useful for minimizing aggressive behavior, while SSRIs can help improve disposition issues.
- Surgery is not recommended for the treatment of ASPD.
- Early intervention in children with conduct disorder who display antisocial tendencies may prevent the development of ASPD throughout adolescence and adulthood and, consequently, can result in improved academic performance.
- No effective strategies for the secondary prevention of ASPD have been established.
- U.S. National Library of Medicine. (2016). MedlinePlus: “Antisocial personality disorder.” Retrieved 4 October 2016.
- Houser, Mallory C. (2015). “A History of Antisocial Personality Disorder in the Diagnostic and Statistical Manual of Mental Illness and Treatment from a Rehabilitation Perspective.”
- Raine A, Lee L, Yang Y, Colletti P (2010). "Neurodevelopmental marker for limbic maldevelopment in antisocial personality disorder and psychopathy". Br J Psychiatry. 197 (3): 186–92. doi:10.1192/bjp.bp.110.078485. PMC 2930915. PMID 20807962.
- NHS Choices. (2015). “Antisocial personality disorder.” Retrieved 4 October 2016.
- Moeller FG, Dougherty DM (2001). "Antisocial personality disorder, alcohol, and aggression". Alcohol Res Health. 25 (1): 5–11. PMID 11496966.
- Diagnostic and statistical manual of mental disorders: DSM-5. Washington, D.C: American Psychiatric Association. 2013. ISBN 0890425558.
- Lenzenweger MF, Lane MC, Loranger AW, Kessler RC (2007). "DSM-IV personality disorders in the National Comorbidity Survey Replication". Biol Psychiatry. 62 (6): 553–64. doi:10.1016/j.biopsych.2006.09.019. PMC 2044500. PMID 17217923.
- Simonoff E, Elander J, Holmshaw J, Pickles A, Murray R, Rutter M (2004). "Predictors of antisocial personality. Continuities from childhood to adult life". Br J Psychiatry. 184: 118–27. PMID 14754823.
- Black DW (2015). "The Natural History of Antisocial Personality Disorder". Can J Psychiatry. 60 (7): 309–14. PMC 4500180. PMID 26175389.
- Menelaos L, et al. (2012). Testosterone and Aggressive Behavior in Man. Int J Endocrinol Metab.
- McGilloway A, Hall RE, Lee T, Bhui KS (2010). "A systematic review of personality disorder, race and ethnicity: prevalence, aetiology and treatment". BMC Psychiatry. 10: 33. doi:10.1186/1471-244X-10-33. PMC 2882360. PMID 20459788.
- Fazel S, Danesh J (2002). "Serious mental disorder in 23000 prisoners: a systematic review of 62 surveys". Lancet. 359 (9306): 545–50. doi:10.1016/S0140-6736(02)07740-1. PMID 11867106.
- Oscar-Berman M, Valmas MM, Sawyer KS, Kirkley SM, Gansler DA, Merritt D; et al. (2009). "Frontal brain dysfunction in alcoholism with and without antisocial personality disorder". Neuropsychiatr Dis Treat. 5: 309–26. PMC 2699656. PMID 19557141.
- "Antisocial Personality Disorder". Psychology Today. 2005. Retrieved 2007-02-20.
- "Antisocial Personality Disorder Treatment". Psych Central. 2006. Retrieved 2007-02-20.
- Yang Y, Glenn AL, Raine A (2008). "Brain abnormalities in antisocial individuals: implications for the law". Behav Sci Law. 26 (1): 65–83. doi:10.1002/bsl.788. PMID 18327831.
- Scott S, Briskman J, O'Connor TG (2014). "Early prevention of antisocial personality: long-term follow-up of two randomized controlled trials comparing indicated and selective approaches". Am J Psychiatry. 171 (6): 649–57. doi:10.1176/appi.ajp.2014.13050697. PMID 24626738.
How To Search:
- Type a search with the words that you are looking for in the entry box, then click [Search].
- Any Word: Just type one or more words to find any of the words. [ Search ANY ] is the default.
- All Words: Type more than one word and select [ Search ALL ] to find all of the words. Or you can use Booleans (see below).
- Exact Phrase: "…" You can search for exact phrases by surrounding them in double quotes. Or you can just type the words and select [ Search EXACT ].
- Boolean Operators [ + - ]: Use + in front of each word or a quoted phrase that you require. Use - in front of each word that you want to exclude.
- Boolean Expressions: AND OR NOT ( ) Use AND, OR, NOT, (, and ) to form a Boolean expression. AND requires, OR allows, NOT excludes. Use double quotes to protect the words "and", "or", or "not" in a phrase.
| Query | Gets pages with |
| --- | --- |
| Moses Ramsdale | 'Moses' or 'Ramsdale' or both |
| "Moses Ramsdale" | the phrase 'Moses Ramsdale' |
| +Moses +Ramsdale | 'Moses' and 'Ramsdale' |
| +Moses -Ramsdale | 'Moses' but not 'Ramsdale' |
| +Ramsdale -"Ramsdal" | 'Ramsdale' but not 'Ramsdal' |
| (Moses OR Ramsdale) AND NOT Ramsdal | 'Moses' or 'Ramsdale', and without 'Ramsdal' |
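The behavior of these query forms can be sketched in a few lines of code. The sketch below is illustrative only — it is not code from any real search engine, and the function name, sample page text, and results are invented for the example.

```python
# Minimal sketch of how +terms, -terms, and "quoted phrases" filter pages.

def matches(page_text, required=(), excluded=(), phrases=()):
    """Return True if the page satisfies all +terms, -terms, and quoted phrases."""
    text = page_text.lower()
    words = set(text.split())
    if any(term.lower() not in words for term in required):    # +term: must appear
        return False
    if any(term.lower() in words for term in excluded):        # -term: must not appear
        return False
    if any(phrase.lower() not in text for phrase in phrases):  # "phrase": exact order
        return False
    return True

page = "jebediah smith was born in virginia"
print(matches(page, required=["smith", "jebediah"]))             # True: +jebediah +smith
print(matches(page, required=["smith"], excluded=["ramsdale"]))  # True: +smith -ramsdale
print(matches(page, phrases=["smith jebediah"]))                 # False: words not in that order
```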
Capitalization doesn't matter - the search engine is not case sensitive. The ranked results will come from a total match on the words and phrases which you supply, so try to think of several specific terms for your topic and spell them correctly. It may help to include important plurals and derived words too, like [address addresses contact contacting information].
Using A Search Engine
Search engines are very powerful tools for searching the Internet for information using keywords and phrases. Electronic scouts, known as "robots" or "spiders" explore Web sites and "index" each word within their pages. This information is then compiled into a searchable database. When you enter a query into a search engine, it matches your query words against the records it has in its database to present a listing of possible documents meeting your request.
The key advantage to using a search engine to find your surnames is the size of their index, which typically have information on millions and millions of Web pages. This size does have its drawbacks, however. Due to the sheer number of results possible with a search engine, there is often no way (outside of visiting each one yourself) to determine the quality of the links or their relevancy to your search topic. This can often leave you more frustrated than when you began.
Search Engine Mathematics
One of the best ways to focus your search is to use what many call Search Engine Mathematics. Two simple operators, add (+) and subtract (-), can go a long way toward narrowing your search results. These operators are supported by the majority of the major search engines and directories and are a lot easier to learn and remember for most users than the Boolean operators AND, OR, and NOT.
Using the (+) Symbol
Beginning each keyword with a plus (+) symbol helps you to tell the search engine to find pages that include all of the words that you enter, not just some of them. For example, consider a search where you are looking for the surname SMITH. Typing that name into AltaVista brings up 3,002,420 pages which match your search request. But assume that what you are really looking for is information on a man named Jebediah Smith. You would want to look for pages which contain both names. Typing in your search request this way:
+jebediah +smith
would bring the number of results down to a much more manageable 635 pages. Now the majority of these pages probably have nothing to do with genealogy, but we will get to that in a moment.
Using the (-) Symbol
The minus (-) symbol allows you to search for pages that have one word on them but not another word. This is especially useful in the case of surnames which have a dual meaning. For example, imagine you want information about the surname RICE, but don't want to be overwhelmed by pages relating to cooking and food. You could search this way:
rice -food -cook
When you enter the term rice into AltaVista, 983,420 pages are found which match your request. If you exclude the words food and cook, as in the above example, the number of returned pages drops to 787,800. That is still a lot of pages, but it's almost 200,000 less than it was when you started.
This minus (-) technique also comes in handy when you want to exclude information about celebrities (sports heroes, movie stars, etc) who share your surname from your results.
It is all well and good to use the plus (+) symbol to include both the given name and the surname of an ancestor in your search, but it does you little good when searching the vast majority of family sites that have their pedigree information in databases. There are so many names in such databases that it is likely that your names will both be found, just not in relation to one another. Consider your search for an ancestor by the name of Jebediah Smith:
+jebediah +smith
This would turn up any page which contained both of those names - not taking into consideration the fact that the page has a Jebediah BRAZELTON and a Bob SMITH, but no trace of a Jebediah SMITH. This is where phrase searching comes in handy. Phrase searching is a technique used to ask search engines to find documents that contain words in the exact order that you specify. You do this by enclosing your terms with quotation marks. Using the previous example, this would look like:
"jebediah smith"
The search engine basically treats this as one search term and will only return pages that contain the term jebediah smith, as a phrase where the two words are next to each other. This search technique too has its problems. In the Jebediah SMITH example, the search would not have returned any pages which list last name first in their database, i.e. SMITH, Jebediah.
Pretty much all of the major search engines support phrase searching. AltaVista and Google actually attempt to perform automatic phrase searching as their default setting (meaning that you don't have to enclose your search terms in quotation marks when using those search engines).
Wild card searches are an extremely useful tool for genealogists. A wild card search allows you to enter a character (*) to search for plurals of a word or variations in spelling. Since names can be spelled in so many different ways, it is a tremendous help to be able to automate your search with the use of a wild card character. This saves you from the tedious job of searching individually on all possible spellings of the name.
For example, I know that when I research my OWENS surname, I must also search on the singular OWEN. The wild card search:
owen*
would return all matches to OWEN and OWENS.
The major search engines which currently support wild card searches are AOL Search, AltaVista, HotBot, MSN Search, Northern Light, Snap, and Yahoo. Excite, Google, Go, GoTo, LookSmart, Lycos, and WebCrawler do not currently support the wild card character.
Narrowing the Search
Now that you have learned how to use the search engines effectively, it is time to learn how to fine-tune those searches to turn up pages on your specific family and not anyone who happens to share a surname. Several techniques can be used to turn those 3,002,420 SMITH pages in AltaVista into a reasonable amount of Web sites with a decent chance of containing the information that you are looking for.
Include Given Names
Searching for given names in conjunction with surnames can really help to focus your search.
In cases such as this, where the name is such a common one, this technique may not help you to narrow the search results enough. Try searching for a family member with a more unusual name (such as Jebediah). Try as many different combinations as you can think of, because not everyone will know everything that you do about the family. Remember to try phrase searching as well.
Include Place Names
Why look at SMITH family Web pages from Zimbabwe when your SMITH ancestors were from Virginia? Most family historians tend to mention locations in their online information, so use this to your advantage. Use the math search technique and try searching for:
+smith +virginia
That takes the 3,000,000+ SMITH sites on AltaVista down to about 36,000. Not bad for a simple little search term.
Search for Less Common Surnames
One of the most often overlooked ways of narrowing your search is to search first on the more uncommon surnames in your family tree. If your SMITHS married into the RAMSDALE family, then start by searching for RAMSDALE. They will be a lot easier to find and will, hopefully, lead you right back to your SMITH family. The downside to this type of search, however, is that you are limiting information sources to people who knew that the SMITH and RAMSDALE families were connected in the first place.
How to Use Soundex Codes
Soundex is a phonetic coding system used to group together surnames that sound alike (SMITH/SMYTH). This helps to find a surname in census documents and surname databases, even though it may have been recorded under various spellings.
- Every Soundex code consists of a letter(always the first letter of the surname) and three numbers.
- Write out your surname and keep the first letter, but cross out any remaining vowels (A, E, I, O, U, Y), and the letters H and W.
- If there are any double letters, then cross out the second of the two letters.
- Cross out any letters remaining after the first four letters.
- Assign the following codes to the remaining letters, remembering to leave the first letter as it is.
- Replace these letters in your code with the number 1: B, P, F, V
- Replace these letters in your code with the number 2: C, S, K, G, J, Q, X, Z
- Replace these letters in your code with the number 3: D, T
- Replace these letters in your code with the number 4: L
- Replace these letters in your code with the number 5: M, N
- Replace these letters in your code with the number 6: R
- If the surname has less than three letters left, assign zeros to those places.
- Your final code should be the first letter of the surname followed by three numbers i.e. S530 (SMITH)
- If Soundex sounds difficult, an online Soundex converter can compute the code for your surname for you; a short code sketch of these rules also follows this list.
- Names are sometimes spelled in different ways or misread when being indexed. You may have better results if you search under several possible Soundex codes.
- If the surname has a prefix, such as D', De, du, le, Van, or Von, code it both with and without the prefix because it might be listed under either code. Mc and Mac are not considered prefixes in the Soundex.
- Two or more letters with the same code number that appear in sequence in a surname are assigned one number. Thus the CK in Jackson would be coded as 2, not 22.
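The rules above translate directly into a short program. The sketch below is illustrative only (it is not from the original guide); it follows the common convention in which H and W do not separate letters that share a code, a point on which, as the next section notes, census indexes vary.

```python
# Minimal Soundex sketch following the coding rules listed above.

CODES = {**dict.fromkeys("BPFV", "1"), **dict.fromkeys("CSKGJQXZ", "2"),
         **dict.fromkeys("DT", "3"), "L": "4", **dict.fromkeys("MN", "5"), "R": "6"}

def soundex(name):
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    first, digits = name[0], []
    prev = CODES.get(first, "")            # code of the previously coded letter
    for ch in name[1:]:
        code = CODES.get(ch, "")           # vowels, Y, H, and W get no code
        if code and code != prev:          # adjacent letters with the same code count once
            digits.append(code)
        if ch not in "HW":                 # H and W are skipped without resetting the
            prev = code                    # previous code; vowels do reset it
    return first + "".join(digits + ["0", "0", "0"])[:3]

print(soundex("Smith"))     # S530
print(soundex("Ramsdale"))  # R523
print(soundex("Jackson"))   # J250 - the C, K, S run is coded only once
```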
Soundex Coding Rules - Common Mistakes, Exceptions, and Examples
The Soundex Coding scheme works very well for most surnames. As with anything in life, however, there are exceptions to the rule. Some of these exceptions are so ambiguous that their Soundex codes in the U.S. Federal census indexes will vary, depending upon the interpretation of the people who coded the index. Presented here are examples of exceptions to the Soundex coding rules, along with alternative coding techniques to try when searching for surnames that fall under these exceptions.
- Exception 1: Names with adjacent letters having the same equivalent number are coded as one letter with a single number. Thus the letters CK in the surname JACKSON would be coded as 2, not 22. This rule also applies to instances of double letters, such as the LL in Phillips. This is the most ambiguous rule in the Soundex system, however, as technically you are supposed to consider the letters H and W as separators, despite the fact that you don't code those letters. Therefore, in surnames where two letters with the same equivalent number are separated by an H or a W, they would each be treated as a separate occurrence and given their own code number. The SC in Ashcroft would be coded as 22, because the S and C are separated by an H, despite the fact that the H is not itself coded.
- Exception 2: Sometimes prefixes to names such as D', De, du, le, Van, or Von are omitted when coding the a surname. This will vary from census to census, and between the people who coded the names. Code your surname both with and without the prefix for better results.
- Exception 3: People with a surname of more than one word, or whose surname is commonly presented before their given name, such as Native Americans and Chinese, may be difficult to locate in a Soundex index. Names may have been coded under the name which appears last, even though it might not be the actual surname. In the case of multi-word surnames, only one of the words may have been coded. Check for your surname under several different variations.
- Exception 4: Some members of religious orders may be indexed with the name "Sister" or "Brother" considered as their surname for indexing purposes. This would respond to Soundex codes S236 and B636, respectively. Check the entire list for these codes, as they are not always listed alphabetically.
The Soundex code for Ramsdale, Ramsdall, Ramsdell, Ramsdaille, Ramsdill and Romsdale is: R523 | 1 | 4 |
Cardiovascular disease (CVD) is the leading cause of morbidity and mortality in China and worldwide [1, 2]. People with type 2 diabetes are at approximately two- to fourfold higher risk of developing CVD than people without diabetes [3, 4]. Because almost a quarter of the estimated global population of people with diabetes is in China, it is important to be able to support effective management of the large number of people with diabetes, especially in urban areas where diabetes prevalence is higher than in rural areas.
Conventionally, CVD risk prediction models are derived from cohort studies and were designed to assess the management options and develop personalized treatment strategies [6, 7]. Most CVD risk prediction models [8, 9, 10, 11, 12, 13] for people with type 2 diabetes were derived in cohorts from Western countries . Existing CVD risk prediction models derived in Chinese populations with type 2 diabetes were derived from cohorts collected more than 20 years ago [15, 16], from small data sets (less than 10,000 individuals), or from data sets that were missing important clinical characteristics at baseline [17, 18]. Therefore, new CVD models for patients with type 2 diabetes in China are needed.
Application of the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) platform offers a new way to create CVD risk prediction models for patients with diabetes in China using routinely collected data. Data concerning the basic characteristics, diagnoses, and medications of patients, extracted from an aggregation of patient-centric health data collected in hospitals in real time, can be cleaned, pseudonymized, and used for research.
Nanjing city is the capital of Jiangsu province in central-eastern China. The First Affiliated Hospital of Nanjing Medical University (NMU) is one of the best medical centers in Nanjing with a catchment area including approximately eight million people. The CDM platform of the hospital contains observational health data of more than 140,000 patients with type 2 diabetes since 2005. The NMU-diabetes database offers an opportunity to evaluate existing diabetes-specific CVD risk prediction models and to generate a new model.
These developments offer an alternative efficient approach to conducting observational research on routine care data, enabling follow-up using routine data on larger numbers of people. Risk prediction models created from such data also have the potential to be more relevant to ‘real-world’ settings.
External validation in other ethnic groups highlights differences in diagnosis, treatment, and CVD risk factors among people with type 2 diabetes in different regions, so it is valuable to explore whether models derived from Chinese populations might apply to other ethnic groups in other settings. Because most existing CVD risk prediction models were derived primarily from populations of European ancestry, it is particularly valuable to assess whether models derived in Chinese populations might also perform well in populations of European ancestry. Scotland maintains a national population-based register, Scottish Care Information–Diabetes (SCI-Diabetes), of more than 180,000 patients with a diagnosis of type 2 diabetes that can be linked to population-based hospitalization and mortality records. It has previously been used to externally validate other CVD prediction models.
The aim of this study is to validate previous CVD risk prediction models for people with type 2 diabetes, then develop a five-year CVD risk prediction model for Chinese adults with type 2 diabetes using routinely collected hospital data from a large medical center in Nanjing, and validate its performance in a population of largely European ancestry identified from the population-based register of people with a diagnosis of diabetes in Scotland.
The derivation cohort was identified from the OMOP CDM platform in the First Affiliated Hospital of Nanjing Medical University (NMU), where we undertook a cohort study in a large number of inpatients and outpatients. Data concerning basic characteristics, diagnoses, and medications of patients were extracted from the Clinical Data Repository (CDR), an aggregation of patient-centric health data collected in hospital in real time, and underwent de-identification and data cleaning before being mapped to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM, Ver. 5.0). In this research, data concerning demographic characteristics, diagnoses, and medications for more than 6.3 million patients seen as inpatients or outpatients between January 1, 2005, and December 31, 2017, were extracted. Details of the data set have been reported previously. This study was approved by the Ethics Committee of the First Affiliated Hospital of Nanjing Medical University (No. 2019-SR-153), and informed consent from study participants was waived.
The external validation cohort was extracted from the Scottish Care Information (SCI) diabetes data set, which was introduced in 2000 and is populated by patient data from primary care and hospital diabetes clinics. Outcome data is obtained from linkage to the Scottish Morbidity Record (SMR01), a national hospital admission data set, and death registrations. Approval for generation and analysis of the linked data set was obtained from the Caldicott Guardians of all health boards in Scotland, the Privacy Advisory Committee of the Information Services Division of NHS National Services Scotland, and the Scottish multicenter research ethics committee.
The derivation cohort, NMU-Diabetes, and the external validation cohort, SCI-Diabetes, both consisted of people diagnosed with type 2 diabetes between January 1, 2008 and December 31, 2017. This time frame was chosen to reflect the earliest availability of high-quality data and the most recently available data for both data sources. We defined the baseline date as the date of diagnosis of type 2 diabetes, using the earliest record of hemoglobin A1c (HbA1c) ≥48 mmol/mol (6.5%), use of insulin or oral hypoglycemic drugs, or the first recorded diagnosis of type 2 diabetes.
We included participants who were aged between 30 and 89 years at the date of diagnosis of diabetes because there were few events in younger people and because of the complex confounding factors in older patients. Members of the cohort were followed up from baseline (date of diabetes diagnosis) until the earliest date of death, date of first CVD event (defined later), or study end date (December 31, 2017).
The cohort was restricted to people who had no previous history of CVD (as defined later) in order to enable the identification of high-risk groups that could inform approaches to primary prevention. We included individuals who were prescribed statins prior to and after type 2 diabetes diagnosis in the main analyses but conducted sensitivity analyses in subpopulations restricted to (1) people who had not been prescribed statins prior to type 2 diabetes diagnosis and (2) people who had not been prescribed statins prior to type 2 diabetes diagnosis or during follow-up. See the supplementary Figure 1 for the study criteria in the derivation cohort.
Our outcome was cardiovascular disease, which was defined as any hospital admission or death from nonfatal myocardial infarction (International Classification of Disease [ICD-10] codes I21–I22), stroke (ICD-10 codes I60–I69), heart failure (ICD-10 code I50), cerebrovascular diseases, or transient cerebral ischemic attacks and related syndromes (ICD-10 codes G45) between baseline and December 31, 2017. CVD definitions used in derivation and external validation data sets were identical except that ICD-9 codes were used in the derivation data set for identifying the previous history of CVD due to the existence of legacy data (see Supplementary Table 1 for details).
The candidate predictors included in the prediction model were chosen from those used in the existing risk scores [8, 9, 10, 11, 12, 13] and that were available in both NMU-Diabetes and SCI-Diabetes data sets.
In the NMU-Diabetes data set, we extracted data for demographic factors, clinical diagnoses, clinical values, and drug use. Candidate predictors of demographic factors included age (years) and sex (men/women). Candidate predictors of clinical diagnoses included rheumatoid arthritis (yes/no, ICD-10 codes M06.8, M06.9), hypertension (yes/no, ICD-10 codes I10.x), and chronic kidney disease (yes/no, ICD-10 codes N03, N11, N18). All predictor variables of clinical diagnoses were based on the latest diagnosis text in Chinese recorded in the CDM platform before entry to the cohort. Candidate predictors of clinical values included body mass index (weight/height2, kg/m2), smoking status (yes/no), HbA1c (%), systolic blood pressure (mmHg), total cholesterol (mmol/L), HDL cholesterol (mmol/L), LDL cholesterol (mmol/L), albumin-to-creatinine ratio (mg/g), albuminuria (normal, micro, macro), urine creatinine (umol/L), and estimated glomerular filtration rate (mL/min/1.73 m2). Smoking status and body mass index were extracted from medical records by natural language processing. Other clinical values were extracted from the standard clinical data in the CDM platform. Predictor values of clinical values were defined as measurements recorded closest to the date of diagnosis within 12 months before or after the baseline date of diagnosis of diabetes. Candidate predictors of drug use included antihypertensive medications (yes/no) and lipid-lowering medications (yes/no). Values of drug-use predictors were decided by the drug name in Chinese recorded in the drug exposure table in the CDM platform and the tertiary list of the China national essential drugs list (2018 edition). The predictors are described in detail in Supplementary File 1.
In the SCI-Diabetes cohort, prescriptions of antihypertensive and lipid-lowering medications were defined using British National Formulary codes 2.5 and 2.12, respectively. The presence of rheumatoid arthritis was defined as patients with any prescription for disease-modifying antirheumatic drugs, defined with a British National Formulary code of 10.1.3 prior to diagnosis of diabetes. For comparability with the derivation cohort, diagnosis of hypertension was based on the presence of ICD-10 codes I10, I11, I13, and I15 in hospital records. Other predictors are defined as the same as those in the NMU-Diabetes cohort.
Multiple imputation was implemented using the mice algorithm in R (mice package, version 3.7.0) to replace missing values in exposure and risk factor variables. Imputation models were estimated and included all the baseline covariates used in the main analysis (age, sex, smoking status, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, total cholesterol, systolic blood pressure, total cholesterol to high-density lipoprotein cholesterol ratio, HbA1c, albuminuria [normal, micro, macro], albumin-to-creatinine ratio, creatinine, eGFR), baseline medications (prescribed antihypertensive medications, prescribed statins prior to diabetes diagnosis), coexisting medical conditions (history of rheumatoid arthritis, chronic kidney disease, and atrial fibrillation), and survival days and the outcome event status for each endpoint. Prior (between 10 and 1 years before study entry) and post (between 0 and 1 year after study entry) averages of continuous covariates were used in the imputation. Five multiply imputed data sets were generated, and Cox models were fitted to each data set. Estimates were pooled using Marshall's adaptation of Rubin's rules. The Kolmogorov-Smirnov test was used to compare the distribution of observed versus imputed log-transformed covariates.
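As a concrete illustration of the pooling step, the sketch below applies Rubin's rules to a single coefficient. It is an illustrative Python sketch only (the analyses themselves were run in R), and all numbers are invented.

```python
# Pooling one Cox coefficient across m multiply imputed data sets with Rubin's rules.
from statistics import mean, variance
from math import sqrt, exp

def pool_rubin(estimates, variances):
    """estimates/variances: per-imputation coefficient and its squared standard error."""
    m = len(estimates)
    q_bar = mean(estimates)              # pooled point estimate
    w = mean(variances)                  # within-imputation variance
    b = variance(estimates)              # between-imputation variance
    t = w + (1 + 1 / m) * b              # total variance
    return q_bar, sqrt(t)

# e.g. log hazard ratios for one predictor from 5 imputed data sets (made-up numbers)
log_hrs = [1.05, 1.02, 1.07, 1.01, 1.04]
ses_sq  = [0.0004, 0.0004, 0.0005, 0.0004, 0.0004]
beta, se = pool_rubin(log_hrs, ses_sq)
print(f"pooled HR = {exp(beta):.3f}, 95% CI {exp(beta - 1.96*se):.3f}-{exp(beta + 1.96*se):.3f}")
```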
Grouped Kaplan-Meier analysis, a nonparametric approach, using the month as the unit of time was used to illustrate event rates. The proportional hazards (PH) assumption was checked using statistical tests and graphical diagnostics based on scaled Schoenfeld residuals. Statistical significance was defined as a p-value < 0.05, and 95% confidence intervals (CI) were calculated.
We developed and evaluated the prediction models with reference to existing work and performed an initial analysis based on all patients identified in the cohort. We used Cox proportional hazards regression to derive the risk prediction model. We first developed a basic model restricted to risk factors (age, sex, clinical diagnoses, and drug-use factors) that are easily measured in Chinese routine clinical settings and have lower rates of missing data. The basic model excluded smoking status and biomarkers (eGFR, albuminuria, systolic blood pressure, LDL cholesterol, HbA1c) in order to limit the influence of missing data while still using all relevant patients in the database. Then, to maximize the power and generalizability of the results while keeping the cohort size the same, we developed an extended model that used all the relevant patients in the database. We fitted full models initially, selected variables for inclusion in the model following a stepwise approach, and undertook standard model checking. At each step, the variable associated with the smallest Akaike information criterion (AIC) was included in the extended model. The AIC statistic estimates the relative amount of information lost by a given model and deals with the trade-off between the goodness of fit of the model and the simplicity of the model. Sex-specific models were also generated, and interactions between the predictors and sex were tested to investigate potential effect modification. AIC statistics were used to analyze the performance of the interaction models. We also considered several approaches to transforming continuous predictors, including linear, squared, log, and restricted cubic spline with four and five knots. The values of these knots can be found in Supplementary Table 2. The effect of different transformations was evaluated using the Wald χ2 statistic. To avoid overfitting, the transformation of each continuous predictor with the highest χ2, if it was more than 5% higher than the χ2 value of the linear model, was chosen for inclusion in the multivariable model. We also evaluated performance in each age group (≤45, 45–60, 61–75, >76 years), in persons without a previous statin prescription, in persons with complete data for all predictors, and in persons with complete data for selected important biomarkers (albuminuria, estimated glomerular filtration rate, LDL cholesterol). Performance was also evaluated by calculating Harrell's C-statistics.
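For readers unfamiliar with Harrell's C-statistic, the sketch below shows how it can be computed for right-censored survival data. It is a simplified, illustrative Python sketch (an assumption about implementation; the study's analyses were performed in R), and the example data are invented.

```python
# Harrell's C-statistic: the proportion of usable patient pairs in which the patient
# with the higher predicted risk is the one who has the CVD event earlier.

def harrell_c(times, events, risks):
    """times: follow-up time; events: 1 = CVD event, 0 = censored; risks: predicted risk."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if subject i had the event and a shorter observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1      # higher risk, earlier event: concordant
                elif risks[i] == risks[j]:
                    concordant += 0.5    # tied predictions count half
    return concordant / comparable

# tiny worked example (made-up data)
print(harrell_c(times=[2, 5, 7, 9], events=[1, 0, 1, 0], risks=[0.8, 0.3, 0.5, 0.2]))
```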
A bootstrap approach was used in internal validation (200 replications). Bootstrap is a statistical resampling procedure that draws bootstrap samples with replacement from the original sample to introduce a random element.
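A minimal sketch of this bootstrap procedure is shown below, assuming placeholder functions `fit_model` and `cstat` (for example, the concordance function sketched above). It is illustrative Python rather than the authors' R code.

```python
# Bootstrap internal validation: estimate the optimism of the apparent C-statistic
# and subtract it to obtain an optimism-corrected estimate.
import random

def bootstrap_validate(data, fit_model, cstat, reps=200, seed=1):
    random.seed(seed)
    apparent_model = fit_model(data)
    apparent_c = cstat(apparent_model, data)
    optimism = 0.0
    for _ in range(reps):
        boot = [random.choice(data) for _ in data]   # resample with replacement
        m = fit_model(boot)
        # optimism = performance on the bootstrap sample minus performance of the
        # same model when applied back to the original data
        optimism += (cstat(m, boot) - cstat(m, data)) / reps
    return apparent_c - optimism                      # optimism-corrected C-statistic
```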
We classified patients as being at high risk of CVD if their estimated five-year risk was equal to or greater than a threshold. The threshold was set at 20% according to the overall event rate of CVD in the derivation cohort. The integrated discrimination improvement (IDI) was applied to compare the performance of the basic and the extended models we present.
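The IDI compares how far apart each model places the mean predicted risks of people who did and did not have an event. The following is an illustrative Python sketch under that standard definition (not the authors' code), with invented numbers.

```python
# Integrated discrimination improvement (IDI) comparing two sets of predicted risks.
from statistics import mean

def idi(risk_new, risk_old, events):
    """risk_new/risk_old: predicted risks per person; events: 1 = CVD event, 0 = none."""
    def separation(risks):
        cases = [r for r, e in zip(risks, events) if e == 1]
        controls = [r for r, e in zip(risks, events) if e == 0]
        return mean(cases) - mean(controls)   # discrimination slope of the model
    return separation(risk_new) - separation(risk_old)

# made-up example: the new model separates events from non-events better
print(idi(risk_new=[0.30, 0.25, 0.10, 0.05],
          risk_old=[0.22, 0.20, 0.15, 0.12],
          events=[1, 1, 0, 0]))               # positive IDI favours the new model
```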
We compared predictions from our model with those from existing equations, such as the American College of Cardiology/American Heart Association (ACC/AHA) Pooled Cohort Equations (PCEs) for White and for African American populations, ADVANCE, the Swedish NDR, and QRISK2 (Chinese ethnicity). Harrell's C-statistic and the IDI were also applied to assess how well each set of equations distinguished high-risk from low-risk patients (the threshold was set at 0.2 for cardiovascular disease).
The predictive performance of each derived risk score for the NMU-Diabetes cohort was assessed by internal and external validation. Discrimination of the final model was assessed using Harrell’s C-statistic during external validation. Discrimination describes the model’s ability to differentiate people who developed CVD from those who did not. Calibration was assessed in the external validation cohort using calibration slopes, calibration-in-the-large statistics, and calibration plots. Calibration slope statistics are the unitless slope of our calibration plot used to evaluate the agreement between the risk prediction model and observed five-year risk using Kaplan-Meier estimates. Calibration-in-the-large statistics compare the mean predicted risk and mean observed risks.
The models were recalibrated during external validation by adjusting baseline hazard and regression coefficients of the predictors by linear regression between the predicted risks and the observed risks in the SCI-Diabetes cohort. We also evaluated performance in each age group (≤45, 45–60, 61–75, ≥76 years).
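To make the prediction and recalibration steps concrete, the sketch below shows the usual way a Cox model's linear predictor is converted into a five-year risk, and the form the recalibrated risk takes once the baseline survival and calibration slope are re-estimated in the validation cohort. It is an illustrative Python sketch based on standard formulas, not the authors' code, and all numbers are hypothetical.

```python
# Five-year risk from a Cox model, and its recalibrated form for external validation.
from math import exp

def five_year_risk(lp, mean_lp, s0):
    """lp: sum of coefficient * predictor value terms for one patient;
    mean_lp: mean linear predictor in the derivation cohort;
    s0: baseline five-year survival at the mean linear predictor."""
    return 1.0 - s0 ** exp(lp - mean_lp)

def recalibrated_risk(lp, mean_lp, s0_new, slope):
    """Same formula after recalibration: s0_new is the baseline survival re-estimated
    in the validation cohort and slope is the calibration slope applied to the
    centred linear predictor."""
    return 1.0 - s0_new ** exp(slope * (lp - mean_lp))

# hypothetical values for illustration only
print(five_year_risk(lp=2.1, mean_lp=1.8, s0=0.85))                      # about 0.20
print(recalibrated_risk(lp=2.1, mean_lp=1.8, s0_new=0.95, slope=0.9))
```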
All statistical analyses were conducted in R, version 3.6.2. Details about packages and codes were shown in Supplementary file 3. The reporting of this study is in accordance with the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) guidelines.
For the derivation cohort, we identified 148,624 patients with a diagnosis of diabetes from the CDM platform. We excluded 6,613 (4.45%) with a diagnosis of other types of diabetes, 4,297 (2.89%) with a diagnosis of type 1 diabetes, 3,622 (2.44%) aged less than 30 or missing age information at baseline, 17,244 (11.60%) who were observable for drug prescriptions in the one-year period after diagnosis and had insulin prescribed in that period, 23,295 (15.67%) with a diagnosis of cardiovascular disease at baseline, and 8,923 (6.00%) with a diagnosis of type 2 diabetes before January 1, 2008. Overall, 84,630 patients were included in the derivation cohort.
For the external validation cohort, there were 248,281 individuals diagnosed with type 2 diabetes in Scotland before December 31, 2017. Of these, 128,390 had a previous history of CVD or were diagnosed with type 2 diabetes before January 2008 and so were excluded from the analyses, leaving 119,891 (28.29%) individuals to form the external validation cohort.
Among the predictors included in the risk models, five had missing values, and the proportions of missingness were higher in NMU-Diabetes than SCI-Diabetes. In the derivation and validation cohorts, respectively, there were a total of 79,975 and 91,481 individuals with incomplete predictor data, including 2,907 and 55,638 individuals with a single incomplete predictor and a further 9,541 and 27,264 individuals with two incomplete variables (Supplementary Table 3).
Table 1 compares the characteristics of eligible patients in both cohorts. Major differences between the derivation and validation cohort included a higher prevalence of macroalbuminuria in the derivation cohort (6.8% for women and 5.1% for men) than in the validation cohort (2.9% for women and 3.6% for men), whereas the prevalence of prescribing of lipid-lowering medications (11.3% for women and 13.6% for men) and antihypertensive medications (23% for women and 24.5% for men) in the NMU-Diabetes cohort was lower than in the Scottish cohort.
|CHARACTERISTICS||DERIVATION COHORT: WOMEN||DERIVATION COHORT: MEN||VALIDATION COHORT: WOMEN||VALIDATION COHORT: MEN|
|Age at diagnosis, years, median (IQR)||59 (8.6)||57 (9.4)||61 (9.3)||58 (8.9)|
|Systolic blood pressure, mmHg, mean (SD)||131 (18.7)||130 (18.2)||137.6 (17.7)||138.1 (17.1)|
|Smoking status, n (%)|
|no||38,840 (98.6)||28,904 (63.9)||26,061 (50.5)||28,900 (42.4)|
|ex||80 (0.2)||2,599 (5.7)||14,596 (28.3)||24,539 (36)|
|cur||472 (1.2)||13,735 (30.4)||10,994 (21.3)||14,801 (21.7)|
|Total cholesterol, mmol/L, mean (SD)||5.9 (1.6)||5.9 (1.6)||5.4 (1.2)||5.1 (1.3)|
|LDL cholesterol, mmol/L, mean (SD)||3.4 (0.9)||3.4 (0.9)||3 (1.1)||2.8 (1)|
|HDL cholesterol, mmol/L, mean (SD)||2.4 (0.8)||2.3 (0.8)||1.3 (0.4)||1.1 (0.3)|
|Glycated hemoglobin, %, mean (SD)||6.9 (1.5)||7.2 (1.7)||7.9 (2)||8.3 (2.2)|
|Urine creatinine (umol/L)||100.9 (66.2)||134.2 (78.6)||71.8 (18.8)||85.3 (21.8)|
|Albumin-to-creatinine ratio, mean (SD)||72.7 (141.5)||60.5 (122.2)||3.5 (13.5)||4.1 (14.9)|
|Estimated glomerular filtration rate mls/min/1.73 m2, mean (SD)||68.9 (34.9)||66 (36.5)||81.2 (19.4)||86.9 (18)|
|Albuminuria, n (%)|
|normal||25,460 (64.6)||30,996 (68.5)||42,414 (82.1)||51,240 (75.1)|
|micro||11,261 (28.6)||11,919 (26.3)||7,741 (15)||14,553 (21.3)|
|macro||2,671 (6.8)||2,323 (5.1)||1,496 (2.9)||2,447 (3.6)|
|Retinopathy, n (%)|
|0||37,487 (95.2)||43,416 (96)||51,643 (100)||68,225 (100)|
|1||1,905 (4.8)||1,822 (4)||8 (0)||15 (0)|
|Rheumatoid arthritis, n (%)|
|0||39,087 (99.2)||45,058 (99.6)||50,962 (98.7)||67,822 (99.4)|
|1||305 (0.8)||180 (0.4)||689 (1.3)||418 (0.6)|
|Atrial Fibrillation, n (%)|
|0||39,210 (99.5)||44,938 (99.3)||49,806 (96.4)||65,582 (96.1)|
|1||182 (0.5)||300 (0.7)||1,845 (3.6)||2,658 (3.9)|
|Prescribed statins prior to diabetes diagnosis, n (%)|
|0||34,925 (88.7)||39,102 (86.4)||36,088 (69.9)||47,204 (69.2)|
|1||4,467 (11.3)||6,136 (13.6)||15,563 (30.1)||21,036 (30.8)|
|Prescribed antihypertensive medications, n (%)|
|0||30,349 (77)||34,151 (75.5)||35,987 (69.7)||48,051 (70.4)|
|1||9,043 (23)||11,087 (24.5)||15,664 (30.3)||20,189 (29.6)|
|5-year CVD event rate, n (%)||7,340 (18.63)||9,954 (22.0)||2,579 (5)||3,660 (5.4)|
|10-year CVD event rate, n (%)||8,048 (20.43)||10,779 (23.83)||3,647 (7.1)||5,116 (7.5)|
Table 1 also shows the incidence rates of cardiovascular disease by gender both in the derivation cohort and in the external validation cohort. In the derivation cohort, during a median follow-up (interquartile range, IQR) of 4.75 [2.67, 7.42] years, 18,827 (22.25%) individuals developed cardiovascular disease during the study period of up to 10 years of observation. The 5- and 10-year CVD event rates were 22.0% and 23.83% for men and 18.63% and 20.43% for women, respectively. Kaplan-Meier survival curves for the derivation cohort are shown in Figure 1. In the survival analysis, the risk of incident CVD events was significantly higher among patients older than 60 (hazard ratio 1.97, 95% CI 1.92–2.03) compared with younger patients, was significantly higher among patients using statins before baseline (hazard ratio 8.2, 95% CI 7.94–8.45) compared with those who did not, and was slightly lower among female patients (hazard ratio 0.83, 95% CI 0.81–0.86) compared with male patients in the entire cohort.
Overall, in the NMU cohort, there were 10,120 (53.75%) coronary heart disease events (I21, I22, I50), 4,043 (21.47%) cerebrovascular disease events (I60–I66), and 4,664 (24.78%) other CVD events (I67, G45).
In the SCI cohort, there were 4,689 (57.87%) heart disease events, 1,718 (21.47%) cerebrovascular disease events, and 2,356 (20.93%) other CVD events.
Table 2 shows the coefficients for each predictor in the basic and extended models. The basic model performed modestly overall in all patients with C-statistics of 0.716 [0.714–0.718]. The extended model identified 11 predictors associated with the risk of CVD (shown in Table 2) and had a similar C-statistic, 0.727 [0.725–0.729], to the basic model. The internal validation using a bootstrap approach showed these models were stable with C-statistics of 0.712 [0.703–0.72] for the basic model and 0.723 [0.715–0.732] for the extended model (Table 2). Results of analyses exploring improvement in model fit following the inclusion of nonlinear terms are presented in Supplementary Table 4.
| Predictor | Basic model, HR [95% CI] | Extended model, HR [95% CI] |
| --- | --- | --- |
| Age at diagnosis | 1.027 [1.026–1.028]*** | 1.025 [1.024–1.027]*** |
| Female | 0.821 [0.797–0.846]*** | 0.86 [0.831–0.889]*** |
| Rheumatoid arthritis | 1.63 [1.399–1.899]*** | 1.65 [1.416–1.922]*** |
| Hypertension | 2.873 [2.785–2.964]*** | 2.758 [2.672–2.847]*** |
| Prescribed statins prior to diabetes diagnosis | 2.264 [2.178–2.353]*** | 2.37 [2.279–2.464]*** |
| Prescribed antihypertensive medications | 0.635 [0.611–0.659]*** | 0.628 [0.604–0.653]*** |
| Current smoker | | 1.273 [1.18–1.373]*** |
| Estimated glomerular filtration rate, mL/min/1.73 m2 | | |
| - (0, 15) | | 0.748 [0.631–0.888]*** |
| - (15, 30) | | 1 (reference) |
| - (30, 60) | | 1.146 [1.087–1.208]*** |
| - (60, 90) | | 1.308 [1.24–1.381]*** |
| Glycated hemoglobin, % | | 0.904 [0.894–0.914]*** |
| LDL cholesterol, mmol/L | | 0.828 [0.814–0.842]*** |
| Harrell's C-statistic | 0.718 [0.716–0.72] | 0.727 [0.725–0.729] |
| Internal validation C-statistic (bootstrap) | 0.718 [0.716–0.72] | 0.727 [0.725–0.729] |
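Harrell's C-statistic and its bootstrap confidence interval, as reported above, can be computed along the following lines. This is a simplified sketch assuming numpy arrays and the `lifelines` helper function; the full optimism-corrected bootstrap used for internal validation would also refit the model in each resample, which is omitted here for brevity.

```python
import numpy as np
from lifelines.utils import concordance_index

def harrell_c(time, event, risk_score):
    # concordance_index treats higher scores as predicting longer survival,
    # so a risk score (higher = greater hazard) is negated.
    return concordance_index(time, -risk_score, event)

def bootstrap_c(time, event, risk_score, n_boot=200, seed=0):
    # Percentile bootstrap of the C-statistic over patient resamples.
    rng = np.random.default_rng(seed)
    n = len(time)
    stats = [
        harrell_c(time[idx], event[idx], risk_score[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    return np.percentile(stats, [2.5, 50, 97.5])
```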
Calibration and discrimination for age-stratified subsets are shown in Supplementary Table 5. Judged by the mean C-statistics, the age-stratified extended models using all factors except age performed slightly better (C-statistics of 0.727 [0.72, 0.734] for ages under 45, 0.713 [0.708, 0.717] for ages 45–60, 0.699 [0.696, 0.703] for ages 61–75, and 0.673 [0.668, 0.677] for ages 76 and older) than the age-stratified basic models using only sex, clinical diagnoses, and drug-use factors. Overall, the age-stratified models performed better in the younger subcohorts.
The sex-specific models using all factors except sex performed slightly worse (C-statistics of 0.726 [0.723–0.729] and 0.725 [0.723–0.728] in the female and male subcohorts, respectively) than the models fitted to the whole cohort. Details are presented in Supplementary Table 6. Interactions between the predictors and sex were consistent in the basic and extended models (Supplementary Table 7). For each of these interactions, hazard ratios for the predictors were lower for women than for men and rose gradually with increasing age.
In sensitivity analyses conducted in the subcohort of persons without a previous statin prescription (N = 74,027), model coefficients were consistent with those in the whole NMU-Diabetes cohort (Supplementary Table 8). In the subcohort of persons with complete data for all predictors (N = 2,877; see Supplementary Table 8), hazard ratios for nonusers of statins were consistent with the whole NMU cohort. Five-year CVD incidence in the complete-data subcohort was 36.32%, nearly twice that of the whole cohort. Hazard ratios for rheumatoid arthritis and statin use were in the opposite direction from those of the whole cohort. The adjusted hazard ratios for three biomarker-related predictors with high rates of missingness in the derivation cohort (albuminuria, estimated glomerular filtration rate, and LDL cholesterol), which were included in the extended model, were explored in the subcohort with no missing values.
The internal validation using a bootstrap approach showed these models were stable with C-statistics of 0.712 [0.703–0.72] for the basic model and 0.723 [0.715–0.732] for the extended model (Table 2). The calibration slopes were 1.04 [1.04, 1.04] and 1.038 [1.038, 1.038], and estimates of calibration-in-large were 0.019 and 0.018 for the basic and the extended models, respectively, which means that the two models performed well in cohorts similar to the derivation cohort.
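A common way to obtain the calibration slope for a Cox model, as reported here, is to refit the model with the linear predictor as the only covariate; the fitted coefficient is the slope, and values near 1.0 indicate well-calibrated relative risks. The sketch below assumes a data frame already containing that linear predictor; the column names are illustrative, not the study's.

```python
from lifelines import CoxPHFitter

def calibration_slope(df_val, time_col="followup_years", event_col="cvd_event", lp_col="lp"):
    """Refit a Cox model with the linear predictor (lp) as the sole covariate.

    The coefficient on lp is the calibration slope; ~1.0 means well-calibrated
    relative risks. Column names are illustrative defaults.
    """
    model = CoxPHFitter()
    model.fit(df_val[[time_col, event_col, lp_col]],
              duration_col=time_col, event_col=event_col)
    return model.params_[lp_col]
```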
With a 20% threshold for high risk of developing CVD, the extended model classified 15.07% of the cohort as high risk, capturing 12,751 (73.73%) of the subsequent CVD events. In comparison, 14.61% of the cohort were classified as high risk by the basic model, capturing 12,362 (71.48%) of the CVD events. The IDI of the extended model compared to the basic model was 1.75% [1.64%–1.85%].
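The classification and IDI figures quoted above follow directly from each patient's predicted risk. A minimal sketch of how such numbers are computed is shown below; for simplicity it treats the event indicator as known within the horizon (censoring ignored), whereas the published estimates would account for censoring.

```python
import numpy as np

def high_risk_summary(pred_risk, event, threshold=0.20):
    """Share of the cohort flagged high risk and share of events captured."""
    pred_risk, event = np.asarray(pred_risk), np.asarray(event)
    high = pred_risk >= threshold
    return 100 * high.mean(), 100 * event[high].sum() / event.sum()

def integrated_discrimination_improvement(pred_new, pred_old, event):
    """IDI: change in the gap between mean predicted risk in events vs non-events."""
    pred_new, pred_old, event = map(np.asarray, (pred_new, pred_old, event))
    gap_new = pred_new[event == 1].mean() - pred_new[event == 0].mean()
    gap_old = pred_old[event == 1].mean() - pred_old[event == 0].mean()
    return gap_new - gap_old
```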
Reclassification statistics comparing the NMU extended model with other models (PCE for African Americans, ADVANCE, Swedish NDR, and QRISK2 for Chinese populations) are shown in Supplementary Table 10.
Of the 38,743 patients classified as high risk (risk of at least 20% over five years) by the PCE score (using the equation for white people), 22,261 (57.5%) would be reclassified as low risk by the NMU extended model. The five-year observed risk among these reclassified patients was 13.08% [12.92%–13.24%], that is, below the 20% threshold for high risk.
Among the 37,095 patients classified as high risk by the NMU extended model, the QRISK2 for Chinese model reclassified the fewest (11,747) as low risk, and ADVANCE reclassified the most (36,204) as low risk. The five-year observed risk among the patients reclassified as low risk by QRISK2 for Chinese was 48.72% [46.58%–50.88%], that is, above the 20% threshold for high risk and the highest in this comparison.
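Reclassification comparisons of this kind amount to cross-tabulating the high/low labels from two scores and examining the observed outcome among the patients on which the scores disagree. A small sketch (illustrative variable names; crude event proportions rather than censoring-adjusted estimates):

```python
import numpy as np
import pandas as pd

def reclassification_table(risk_a, risk_b, event, threshold=0.20):
    """Cross-tabulate classifications from scores A and B at a risk threshold.

    Returns the 2x2 table and the crude event proportion among patients rated
    high by A but low by B; a Kaplan-Meier estimate would be used in practice
    to account for censoring.
    """
    df = pd.DataFrame({
        "a_high": np.asarray(risk_a) >= threshold,
        "b_high": np.asarray(risk_b) >= threshold,
        "event": np.asarray(event),
    })
    table = pd.crosstab(df["a_high"], df["b_high"])
    reclassified_down = df[df["a_high"] & ~df["b_high"]]
    return table, reclassified_down["event"].mean()
```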
The annual incidence rate of cardiovascular events among those with an NMU extended model score of at least 20% was 87.06 per 1,000 person-years (95% confidence interval 86.61–87.50) for men and 62.24 per 1,000 person-years (61.90–62.57) for women. Both of these figures are higher than the annual incidence rates for patients identified as high risk by the PCE score (using the equation for white people), which were 68.43 per 1,000 person-years (68.08–68.78) for men and 56.04 (55.74–56.35) for women. In other words, at the 20% threshold, the population identified by the NMU extended model was at higher risk of a CVD event than the population identified by the PCE score.
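Incidence rates of this kind are simple ratios of event counts to accumulated follow-up time. A worked sketch with made-up numbers (not the study's counts):

```python
def rate_per_1000_person_years(n_events, person_years):
    """Annual incidence rate per 1,000 person-years."""
    return 1000 * n_events / person_years

# Illustrative numbers only: 871 events over 10,005 person-years
print(rate_per_1000_person_years(871, 10_005))  # ~87.1 per 1,000 person-years
```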
In the validation cohort, over a median follow-up (IQR) of 4.75 [2.67, 7.42] years, 8,763 (7.31%) individuals in the SCI-Diabetes cohort developed cardiovascular disease within the 10-year observation period. The 5- and 10-year CVD event rates were 5.4% and 7.5% for men and 5.0% and 7.1% for women, respectively. Supplementary Table 11 shows the associations between the variables selected in the NMU-Diabetes basic and extended models and the risk of incident CVD in the SCI-Diabetes cohort. For the basic model, hazard ratios were similar in the two cohorts, with the exception of antihypertensive medication use. For the extended model, hazard ratios for eGFR, hemoglobin A1c, LDL cholesterol, and lipid-lowering medication use were in different directions.
In the external validation, C-statistics were 0.691 [0.688–0.694] for the basic model and 0.714 [0.71–0.717] for the extended model. Measures of calibration and discrimination for subsets within the external validation cohort yielded results similar to those of the main analyses (Supplementary Table 12), and calibration plots are presented in Figure 2. After recalibration, applying the basic and extended models to the external validation cohort with an adjusted linear predictor gave C-statistics of 0.65 [0.646, 0.654] and 0.634 [0.63, 0.637] for CVD, respectively, with good calibration (calibration slopes of 1.009 [1.008, 1.01] and 1.116 [1.115, 1.117]). Overall, the basic model performed better than the extended model in the external validation cohort, but both models tended to overestimate risk. For both models, C-statistic values decreased after stratification by age, particularly in the older age groups.
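Calibration plots such as those in Figure 2 are usually built by grouping patients into deciles of predicted risk and comparing the mean predicted risk with the observed (Kaplan-Meier) risk in each group. A sketch of that ingredient, with illustrative column names and the `lifelines` package assumed:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def calibration_by_decile(df, pred_col="pred_risk_5y", time_col="followup_years",
                          event_col="cvd_event", horizon=5.0):
    """Mean predicted vs observed (Kaplan-Meier) risk within deciles of predicted risk."""
    deciles = pd.qcut(df[pred_col], 10, labels=False, duplicates="drop")
    rows = []
    for _, grp in df.groupby(deciles):
        kmf = KaplanMeierFitter()
        kmf.fit(grp[time_col], event_observed=grp[event_col])
        observed = 1.0 - float(kmf.predict(horizon))  # observed risk at the horizon
        rows.append({"mean_predicted": grp[pred_col].mean(), "observed_km": observed})
    return pd.DataFrame(rows)
```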
We developed a five-year CVD risk score for people with new-onset diabetes identified from the database of a tertiary hospital in China, a population found to be at high risk of developing CVD. The risk score was compared with previous CVD scores and performed moderately well in external validation in a Scottish population-based cohort.
There are three main contributions of this paper. First, we developed risk models for high-risk patients using routinely collected clinical data from a large medical center. This permits better identification of patients at high risk among the large number of patients who visit large medical centers. The average incidence of CVD in these patients was higher than in population-based survey data. For example, the 10-year Kaplan-Meier ASCVD rate was reported as 4.6% for men and 2.7% for women in the China-PAR Project, much lower than the 10-year CVD event rate in the NMU-Diabetes cohort (23.8% for men and 20.4% for women). Although the median follow-up of the NMU-Diabetes cohort in this study, 4.75 [2.67, 7.42] years, is not the longest among retrospective cohorts in China, the median follow-up will lengthen as data continue to be routinely collected in the hospital. It is therefore valuable to develop a CVD risk prediction model based on long-term clinical diagnosis and treatment data extracted from top-level hospitals in China. Through this approach, high-risk patients can be identified more effectively, and targeted early interventions can be carried out in large medical centers.
Second, this study is a new attempt to leverage clinical big data in China to help reduce health inequalities in Western countries. In China, large medical centers have accumulated considerable clinical diagnosis and treatment data, which offer a valuable resource for risk prediction to support individual patient management, observational research, and recruitment to clinical trials. The NMU-Diabetes cohort provides a good reference for analyzing the risk of CVD among diabetic patients in China's developed eastern regions, which, according to China's 2021 census data, account for 40% of China's total population. In studies of cardiovascular disease risk prediction in people with type 2 diabetes mellitus, many existing works have examined how models established in Western countries perform in China, but few studies have attempted the reverse. Our work therefore provides a new attempt to leverage large-scale clinical data in China to help reduce health inequalities in other countries.
Third, we provide two models to better predict patients' risk of CVD according to their visit history. The basic model of CVD risk prediction attempts to eliminate bias caused by differences in test results between hospitals. Under China's current medical system, patients are free to choose hospitals, so their medical test results may have a high missing rate. There is also limited assurance that medical test results are comparable across hospitals, given uneven levels of laboratory development. For example, the 2020 Nanjing Medical Laboratory Intermural Quality Assessment found that some hospitals' cholesterol-related test results did not meet the relevant quality standards. Meanwhile, we provide an extended model that incorporates test results to give more accurate risk prediction for patients who usually visit one hospital and have their medical tests performed there. Compared with existing cardiovascular risk equations, the extended model allows more accurate quantification of risk for individual patients by incorporating important additional clinical conditions, including clinical diagnoses, clinical values, and drug use (e.g., rheumatoid arthritis, chronic kidney disease, hypertension, antihypertensive medications, and lipid-lowering medications).
According to the information available in the routinely collected data set, we retained as many predictors used by existing algorithms (QRISK2, ADVANCE, PCE, Swedish NDR, etc.) as possible.
We have produced two main final models. The basic model includes age, sex, clinical diagnoses, and drug-use factors, and may be more suitable for patients admitted for the first time. The extended model adds smoking status and biomarkers (eGFR, albuminuria, systolic blood pressure, LDL cholesterol, HbA1c), handled with multiple imputation for missing data, and may be more suitable for patients who visit the hospital often and for whom longitudinal repeated biomarker values are likely to be available. A sketch of the imputation step follows below.
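The paper does not state which imputation software was used; the sketch below shows one common chained-equations approach in Python (scikit-learn's IterativeImputer with posterior sampling), with illustrative column names. The extended model would then be fitted on each imputed data set and the coefficients pooled, for example with Rubin's rules.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_biomarkers(df, biomarker_cols, n_imputations=5):
    """Create several chained-equation imputations of missing biomarker values.

    One common approach only; not necessarily the software used in the study.
    """
    imputed_sets = []
    for m in range(n_imputations):
        imputer = IterativeImputer(sample_posterior=True, random_state=m)
        filled = pd.DataFrame(imputer.fit_transform(df[biomarker_cols]),
                              columns=biomarker_cols, index=df.index)
        imputed_sets.append(df.drop(columns=biomarker_cols).join(filled))
    return imputed_sets
```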
Although adopted in many risk score studies, predictors such as BMI, family history of CVD, and socioeconomic status were not included among the candidate predictors in this study. BMI was excluded because of the limited number of records in the CDR (only 405 records) and their high values (mean ± SD 36.89 ± 8.59 kg/m2), suggesting biased recording; few other hospitals in China record BMI, which limits its value as a predictor. Although family histories of CVD were recorded in Chinese free text in more than 50,000 patients' admission notes, structuring these histories requires specialized natural language processing and was not available for this study. Socioeconomic status was excluded because it was not recorded in the EHR system at all. Although some existing work determined patients' economic status through self-report or zip-code matching, this is not yet part of the EHR standard in China.
Hemoglobin A1c had a statistically significant negative association with incident CVD in the NMU-Diabetes cohort, similar to the association reported for women in a five-year CVD risk prediction study in Hong Kong. For example, in Model 1 proposed by the Hong Kong cohort study, the hazard ratio for hemoglobin A1c was 0.73 [0.63, 0.85] in women and 1.25 [1.09, 1.42] in men. In contrast, hemoglobin A1c is positively associated with CVD risk in people with type 2 diabetes in Scotland, the United States, and the United Kingdom [28, 29]. Further research is required to investigate potential explanations for this discrepancy, which may be related to the duration of diabetes, a quantity that can be difficult to establish in the absence of regular screening.
In terms of the effects of drug use on CVD risk, the effects of lipid-lowering drugs are consistent with existing studies [26, 30]. Statin prescription at diagnosis of diabetes appears to be low in the NMU cohort (12.53%) compared with the Scottish cohort (30.53%). Mean LDL in the whole NMU cohort and in the subcohort with a diagnosis of hypertension was 3.39 mmol/L in both, considerably higher than the 2.84 and 2.66 mmol/L in the comparable Scottish groups. In contrast, statins prescribed prior to diabetes diagnosis became a protective factor (OR 0.934 [0.798–1.093] in the basic model and 0.969 [0.827–1.135] in the extended model) in the subcohort of patients with complete data. Patients with complete data visited the medical center more often than other patients, and the incidence rate in this subcohort (36.32%) was much higher than in the whole cohort (18.63% for women and 22% for men). According to the 2017 China Diabetes Prevention and Control Guidelines, high-risk adults are those with one or more of 12 risk factors, such as age over 40 years, hypertension or ongoing antihypertensive treatment, and dyslipidemia or ongoing lipid-lowering therapy. The target LDL-C for people with diabetes is less than 2.6 mmol/L, much lower than the values observed in the NMU cohort. The risk score proposed in this paper could potentially be used to target statin prescribing to people at particularly high risk of CVD.
In existing research, hypertension-related predictors are defined in different ways. For example, treated hypertension (a diagnosis of hypertension plus treatment with at least one antihypertensive drug) was combined into one predictor in QRISK3 and yielded an adjusted OR of 1.66 [1.60–1.73]. Systolic blood pressure of ≥150 mm Hg and diastolic blood pressure of ≥90 mm Hg were used as two predictors in research using cohorts from the First Affiliated Hospital of Zhengzhou University, Henan, China. In the NMU-Diabetes cohort, 7,225 patients with a hypertension diagnosis were not prescribed antihypertensive drugs within one year. Accordingly, on the advice of clinical experts, the diagnosis of hypertension and antihypertensive drug prescription were used as two separate predictors. In both models, the diagnosis of hypertension was a risk factor, and antihypertensive medication prescription was a protective factor. This suggests that in patients with diabetes mellitus complicated by hypertension, antihypertensive drug therapy has a significant protective effect on reducing the risk of CVD.
Chronic kidney disease–related risk factors such as albumin-to-creatinine ratio, albuminuria (normal, micro, macro), urine creatinine, and estimated glomerular filtration rate (eGFR) acted as positive or negative risk factors in different subcohorts. Among the three derivation subcohorts with no missing albuminuria, no missing eGFR, or no missing LDL cholesterol values, five-year CVD incidence rates were 29.5%, 30.5%, and 41.4%, respectively, all higher than the incidence in the whole NMU-Diabetes cohort, and all three factors showed positive risk when adjusted for age and sex. In the subcohort without statins prescribed prior to diabetes diagnosis and in the subcohort with complete data for all predictors, LDL cholesterol showed a negative risk effect, while eGFR remained a positive risk predictor. The adjusted hazard ratios for chronic kidney disease stage (defined by eGFR range) in the sex-specific extended models were in line with other published studies. We therefore suggest using chronic kidney disease stage (defined by eGFR range) in risk prediction rather than the values of LDL cholesterol.
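Categorizing eGFR into the stage bands used in the extended model is a simple mapping; a small helper might look like the sketch below. The band edges follow Table 2; how values of 90 and above were handled is not stated there, so the final category is an assumption.

```python
def egfr_band(egfr):
    """Map an eGFR value (mL/min/1.73 m2) to the bands used in the extended model."""
    if egfr < 15:
        return "(0, 15)"
    if egfr < 30:
        return "(15, 30)"   # reference category in Table 2
    if egfr < 60:
        return "(30, 60)"
    if egfr < 90:
        return "(60, 90)"
    return ">=90"           # handling of eGFR >= 90 is an assumption
```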
Differences in follow-up time, cohort size, and CVD definition make direct comparisons with previous similar studies difficult, but the performance of our model was similar to that of previous models. For example, the C-statistic for internal validation of the China-PAR Project (Prediction for ASCVD Risk in China) 10-year ASCVD risk prediction model was 0.794 [0.775–0.814]. In a study from Hong Kong, a five-year ASCVD risk prediction model had a C-statistic of 0.705 [0.693, 0.716] in internal validation. The definition of CVD in the Hong Kong cohort was similar to ours, but sudden death from unknown causes was also included in the outcome. Previous validations of risk scores developed in populations of European ancestry have shown only moderate performance in Chinese populations, even after recalibration, leading to recommendations that ethnicity-specific risk scores be developed [32, 33, 34].
When we evaluated the performance of some existing models using a similar outcome and the similar predictors available in the NMU cohort, the ACC/AHA Pooled Cohort Equations (for both white and African American populations), ADVANCE, Swedish NDR, and QRISK2 all performed poorly in the derivation cohort, and all of them underestimated the CVD risk of patients in the NMU cohort.
As expected, we needed to recalibrate our CVD risk model as part of the external validation. The difference in the rate of CVD between the two cohorts is more than 14 percentage points, which may be explained by differences in the cohort data sources. The NMU-Diabetes cohort was drawn from a tertiary hospital in Jiangsu province, whose patients are more seriously ill than those in other hospitals in Nanjing; patients in this cohort showed moderate adherence, and some could not maintain regular visits to the hospital. The SCI-Diabetes cohort is populated by patient data from primary care and hospital diabetes clinics, and its lower CVD incidence rate (7.3%) reflects years of effort to control CVD risk in diabetes. Furthermore, we noted that the components of the composite outcome differed between the NMU and Scottish cohorts: in the NMU-Diabetes cohort, the proportion of the outcome made up by coronary disease not further specified is 29.92%, much higher than in the Scottish cohort (10.2%).
Key limitations of the NMU-Diabetes cohort were limited data on BMI and socioeconomic status, which meant that these could not be included in the risk models. The proportions of missing data for biochemical predictors, such as systolic blood pressure and LDL cholesterol, were higher in the NMU-Diabetes cohort than in the Scottish cohort. However, both the basic and extended models derived from the imputed data performed better than models derived from the complete-case subcohort, and risk prediction based on patient medication records and related blood test results still yielded adequate performance. The data were obtained from a single hospital and therefore may not be representative of the wider population; we will attempt validation in other hospitals in China in future work.
The results of this study can easily be embedded into existing electronic clinical systems to support clinician decision-making. The study also provides an example of how routinely collected clinical data can be used to create useful prediction models for chronic disease complications. There are some patients for whom the proposed models should not be applied, including patients who received diagnoses of type 2 diabetes and type 1 diabetes at different times and those with preexisting cardiovascular disease.
The study results further suggest the importance of multicenter clinical electronic medical record integration, and we recommend that patients' smoking status and BMI be systematically collected and recorded in a more useful format than free text in the EHR in China. Further research is needed to establish whether the wider use of risk scores improves the quality of care and outcomes for people with diabetes. If risk models based on a large population in China perform well in other ethnic groups, the external validation will provide a reference for the treatment of patients with type 2 diabetes, and possibly for those with rare subtypes of diabetes.
In conclusion, we developed a five-year CVD risk model in Chinese people diagnosed with type 2 diabetes based on a large, retrospective cohort study in a population treated in tertiary care. The risk score performed moderately well in both internal and external validation. We conclude that although there is scope to further improve risk scores for incident CVD among people with type 2 diabetes, Chinese databases have the potential to provide a valuable source of data for the development of future risk scores for populations from diverse ethnic groups.
The additional files for this article can be found as follows:
Supplementary File 1: Supplementary Files 1 to 3. DOI: https://doi.org/10.5334/gh.1131.s1
Supplementary File 2: Supplementary Figure 1 and Tables 1 to 12. DOI: https://doi.org/10.5334/gh.1131.s2
We acknowledge with gratitude the contributions of people with diabetes, National Health Service (NHS) staff, and organizations (the Scottish Care Information–Diabetes Steering Group, the Scottish Diabetes Group, the Scottish Diabetes Survey Group, diabetes managed clinical networks) involved in providing, setting up, maintaining, and overseeing collation of data for people with diabetes in Scotland. Data linkage was performed by colleagues at the Information Services Division of NHS National Services Scotland. The Scottish Diabetes Research Network is supported by National Health Service (NHS) Research Scotland, a partnership involving Scottish NHS boards and the Chief Scientist Office of the Scottish Government.
This work was supported by the industry prospecting and common key technology key projects of Jiangsu Province Science and Technology Department (Grant no. BE2020721), the National Key Research and Development plan of the Ministry of Science and Technology of China (Grant nos. 2018YFC1314900, 2018YFC1314901), the big data industry development pilot demonstration project of the Ministry of Industry and Information Technology of China (Grant nos. (2019) 243 and (2020) 84), the Industrial and Information Industry Transformation and Upgrading Guiding Fund of Jiangsu Economy and Information Technology Commission (Grant no. (2018) 0419), and the Jiangsu Province Engineering Research Center of Big Data Application in Chronic Disease and Intelligent Health Service (Grant no. (2020) 1460). Yun Liu is the guarantor of this paper.
The authors have no competing interests to declare.
C. W. wrote the manuscript and analyzed data. S. R. advised on study design, data analysis, and interpretation; reviewed and edited the manuscript; and contributed to the discussion. H. W. advised on data analysis, reviewed and edited the manuscript, and contributed to the discussion. S. L. contributed to the discussion. X. Z. researched data. S. W. supported access to data, advised on data interpretation, reviewed and edited the manuscript, and contributed to the discussion. Y. L. reviewed the manuscript and contributed to the discussion.
Cheng Wan and Stephanie Read contributed equally.
Cheng Wan and Stephanie Read are joint first authors.
The NMU data underlying this article will be shared at reasonable request to the corresponding author Y. L. The SCI data underlying this article will be shared at reasonable request to the corresponding author S. W.
Zhou M, Wang H, Zeng X, et al. Mortality, morbidity, and risk factors in China and its provinces, 1990–2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2019; 394(10204): 1145–1158. DOI: https://doi.org/10.1016/S0140-6736(19)30427-1
Kassebaum NJ, Barber RM, Bhutta ZA, et al. Global, regional, and national levels of maternal mortality, 1990–2015: A systematic analysis for the Global Burden of Disease Study 2015. Lancet. 2016; 388(10053): 1775–1812. DOI: https://doi.org/10.1016/S0140-6736(16)31470-2
Members ATF, Rydén L, Grant PJ, et al. ESC Guidelines on diabetes, pre-diabetes, and cardiovascular diseases developed in collaboration with the EASD: The task force on diabetes, pre-diabetes, and cardiovascular diseases of the European Society of Cardiology (ESC) and developed in collaboration with the European Association for the Study of Diabetes (EASD). European Heart Journal. 2013; 34(39): 3035–3087. DOI: https://doi.org/10.1093/eurheartj/eht108
Read SH, Fischbacher CM, Colhoun HM, et al. Trends in incidence and case fatality of acute myocardial infarction, angina and coronary revascularisation in people with and without type 2 diabetes in Scotland between 2006 and 2015. Diabetologia. 2019; 62(3): 418–425. DOI: https://doi.org/10.1007/s00125-018-4796-7
Zhou B, Lu Y, Hajifathalian K, et al. Worldwide trends in diabetes since 1980: A pooled analysis of 751 population-based studies with 4·4 million participants. Lancet. 2016; 387(10027): 1513–1530. DOI: https://doi.org/10.1016/S0140-6736(16)00618-8
Branch CMAD. Guidelines for the prevention and treatment of type 2 diabetes in China (2017 edition). Chinese Journal of Practical Internal Medicine. 2017; 38(4): 34–86.
McGuire H, Longson D, Adler A, Farmer A, Lewin I. Management of type 2 diabetes in adults: summary of updated NICE guidance. BMJ. 2016; 353: i1575. DOI: https://doi.org/10.1136/bmj.i1575
Kengne AP, Patel A, Marre M, et al. Contemporary model for cardiovascular risk prediction in people with type 2 diabetes. European Journal of Cardiovascular Prevention and Rehabilitation. 2011; 18(3): 393–398. DOI: https://doi.org/10.1177/1741826710394270
Mukamal K, Kizer J, Djoussé L, et al. Prediction and classification of cardiovascular disease risk in older adults with diabetes. Diabetologia. 2013; 56(2): 275–283. DOI: https://doi.org/10.1007/s00125-012-2772-1
Elley CR, Robinson E, Kenealy T, Bramley D, Drury PL. Derivation and validation of a new cardiovascular risk score for people with type 2 diabetes: The New Zealand Diabetes Cohort Study. Diabetes Care. 2010; 33(6): 1347–1352. DOI: https://doi.org/10.2337/dc09-1444
Davis W, Knuiman M, Davis T. An Australian cardiovascular risk equation for type 2 diabetes: The Fremantle Diabetes Study. Internal Medicine Journal. 2010; 40(4): 286–292. DOI: https://doi.org/10.1111/j.1445-5994.2009.01958.x
Zethelius B, Eliasson B, Eeg-Olofsson K, Svensson A-M, Gudbjörnsdottir S, Cederholm J. A new model for 5-year risk of cardiovascular disease in type 2 diabetes, from the Swedish National Diabetes Register (NDR). Diabetes Research and Clinical Practice. 2011; 93(2): 276–284. DOI: https://doi.org/10.1016/j.diabres.2011.05.037
Basu S, Sussman JB, Berkowitz SA, et al. Validation of risk equations for complications of type 2 diabetes (RECODe) using individual participant data from diverse longitudinal cohorts in the US. 2018; 41(3): 586–595. DOI: https://doi.org/10.2337/dc17-2002
Peters SA, Huxley RR, Woodward M. Diabetes as risk factor for incident coronary heart disease in women compared with men: A systematic review and meta-analysis of 64 cohorts including 858,507 individuals and 28,203 coronary events. In Springer; 2014. DOI: https://doi.org/10.1007/s00125-014-3260-6
Shah AD, Langenberg C, Rapsomaniki E, et al. Type 2 diabetes and incidence of cardiovascular diseases: A cohort study in 1.9 million people. 2015; 3(2): 105–113. DOI: https://doi.org/10.1016/S2213-8587(14)70219-0
Van der Leeuw J, Van Dieren S, Beulens J, et al. The validation of cardiovascular risk scores for patients with type 2 diabetes mellitus. 2015; 101(3): 222–229. DOI: https://doi.org/10.1136/heartjnl-2014-306068
Yu D, Shang J, Cai Y, et al. Derivation and external validation of a risk prediction algorithm to estimate future risk of cardiovascular death among patients with type 2 diabetes and incident diabetic nephropathy: Prospective cohort study. BMJ Open Diabetes Research and Care. 2019; 7(1). DOI: https://doi.org/10.1136/bmjdrc-2019-000735
Quan J, Pang D, Li TK, et al. Risk prediction scores for mortality, cerebrovascular, and heart disease among Chinese people with type 2 diabetes. 2019; 104(12): 5823–5830. DOI: https://doi.org/10.1210/jc.2019-00731
Zhang X, Miao SM, Dai ZLJCDM. Research and application of conversion from clinical data to OMOP common data model. China Digital Medicine. 2018; 13(10): 69–72.
Read SH, van Diepen M, Colhoun HM, et al. Performance of cardiovascular disease risk scores in people diagnosed with type 2 diabetes: External validation using data from the national Scottish diabetes register. Diabetes Care. 2018; 41(9): 2010–2018. DOI: https://doi.org/10.2337/dc18-0578
Zhang X, Wang L, Miao S, et al. Analysis of treatment pathways for three chronic diseases using OMOP CDM. Journal of Medical Systems. 2018; 42(12): 260. DOI: https://doi.org/10.1007/s10916-018-1076-5
Marshall A, Altman DG, Holder RL. Combining estimates of interest in prognostic modelling. 2009.
Leening MJ, Steyerberg EW, Van Calster B, DAgostino RB, Sr, Pencina M. Net reclassification improvement and integrated discrimination improvement require calibrated models: Relevance from a marker and model perspective. Statistics in Medicine. 2014; 33(19): 3415–3418. DOI: https://doi.org/10.1002/sim.6133
Hippisley-Cox J, Coupland C, Vinogradova Y, et al. Predicting cardiovascular risk in England and Wales: Prospective derivation and validation of QRISK2. 2008; 336(7659): 1475–1482. DOI: https://doi.org/10.1136/bmj.39609.449676.25
Stevens RJ, Poppe KK. Validation of clinical prediction models: What does the “calibration slope” really measure? Journal of Clinical Epidemiology. 2020; 118: 93–99. DOI: https://doi.org/10.1016/j.jclinepi.2019.09.016
Wan EYF, Fong DYT, Fung CSC, Lam CLKJJoD, Complications I. Incidence and predictors for cardiovascular disease in Chinese patients with type 2 diabetes mellitus—a population-based retrospective cohort study. 2016; 30(3): 444–450. DOI: https://doi.org/10.1016/j.jdiacomp.2015.12.010
Zhao W, Katzmarzyk PT, Horswell R, Wang Y, Johnson J, Hu G. Sex differences in the risk of stroke and HbA1c among diabetic patients. Diabetologia. 2014; 57(5): 918–926. DOI: https://doi.org/10.1007/s00125-014-3190-3
Currie CJ, Peters JR, Tynan A, et al. Survival as a function of HbA1c in people with type 2 diabetes: A retrospective cohort study. 2010; 375(9713): 481–489. DOI: https://doi.org/10.1016/S0140-6736(09)61969-3
Kontopantelis E, Springate DA, Reeves D, et al. Glucose, blood pressure and cholesterol levels and their relationships to clinical outcomes in type 2 diabetes: A retrospective cohort study. 2015; 58(3): 505–518. DOI: https://doi.org/10.1007/s00125-014-3473-8
Wan EYF, Fong DYT, Fung CSC, et al. Development of a cardiovascular diseases risk prediction model and tools for Chinese patients with type 2 diabetes mellitus: A population-based retrospective cohort study. Diabetes, Obesity and Metabolism. 2018; 20(2): 309–318. DOI: https://doi.org/10.1111/dom.13066
Yang X, Li J, Hu D, Chen J, et al. Predicting the 10-year risks of atherosclerotic cardiovascular disease in Chinese population. Circulation. 2016; 134(19): 1430–1440. DOI: https://doi.org/10.1161/CIRCULATIONAHA.116.022367
Chia YC, Gray SYW, Ching SM, Lim HM, Chinna K. Validation of the Framingham general cardiovascular risk score in a multiethnic Asian population: A retrospective cohort study. 2015; 5(5): e007324. DOI: https://doi.org/10.1136/bmjopen-2014-007324
Sun C, Xu F, Liu X, et al. Comparison of validation and application on various cardiovascular disease mortality risk prediction models in Chinese rural population. Scientific Reports. 2017; 7(1): 43227. DOI: https://doi.org/10.1038/srep43227
Asia Pacific Cohort Studies C, Barzi F, Patel A, et al. Cardiovascular risk prediction tools for populations in Asia. Journal of Epidemiol Community Health. 2007; 61(2): 115–121. DOI: https://doi.org/10.1136/jech.2005.044842 | 1 | 36 |
Since there was no clear winner, the House voted amongst the top three candidates. After the supposed "corrupt bargain," John Quincy Adams became president with Clay as his secretary of state
Policy of rewarding political supporters with public office, first widely employed at the federal level by Andrew Jackson. The practice was widely abused by unscrupulous office seekers, but it also helped cement party loyalty in the emerging two-party system.
Showdown between President Andrew Jackson and the South Carolina legislature, which declared the 1832 tariff null and void in the state and threatened secession if the federal government tried to collect duties. It was resolved by a compromise negotiated by Henry Clay in 1833
Indian Removal Act
Ordered the removal of Indian Tribes still residing east of the Mississippi to newly established Indian Territory west of Arkansas and Missouri. Tribes resisting eviction were forcibly removed by American forces, often after prolonged legal or military battles.
Jackson consoled his conflicting emotions over this act with the belief that NA could preserve their cultures out west; Jackson believed he had an obligation to "rescue" native races
The Bank Battle/Pet Banks
Battle between President Andrew Jackson and Congressional supporters of the Bank of the United States over the bank's renewal in 1832. Henry Clay had tried to renew the bank's charter 4 years early to undermine Jackson's presidency; the Bank was a heated issue.
Many disliked the institution: it had considerable sway over the national economy, it was not accountable to the people, and it foreclosed on many western properties in favor of the east. Jackson ultimately vetoed the Bank Bill, arguing that the bank favored moneyed interests at the expense of western farmers.
Popular term for pro-Jackson state banks that received the bulk of federal deposits when Andrew Jackson moved to dismantle the Bank of the United States in 1833.
Issues w/ Mexico:
Texas pioneers refused to assimilate to Mexican culture and did not practice Catholicism. Settlers resented the Mexican government and military presence in the region and interference with local rights, immigration, and slavery (Mexico had emancipated its slaves, Texans had not).
Mexicans believed that the US had an obligation to stay neutral in Texas' independence movement, but Americans favored Texas nonetheless. However, northerners opposed its annexation because of slavery, believing the campaign to make Texas a state was in fact a conspiracy to expand the institution.
President Tyler later annexed Texas as the 28th state; Mexicans were angered because they believed the US had "stolen" it from them.
Led an aborted slave rebellion in Charleston in 1822, fueling Southern fears about government intervention with slavery. He was betrayed by informants and hung from the gallows.
Irish immigrants came to the US primarily after the 1840s potato famine and settled in larger seaport cities like NY and Boston. They were poor, illiterate, and extremely Catholic. Irish immigrants opposed abolition and had race riots with African Americans; the Irish themselves were treated as inferior and "a social menace" by established Americans and nativists.
Nativist political party, also known as the American party, which emerged in response to an influx of immigrants, particularly Irish Catholics.
Following crop failure and the collapse of the 1848 democratic revolution, German immigrants came to the American Middle West (notably Wisconsin.) These people were educated, promoted art and music, and lived in "compact colonies" away from their American neighbors. Although they were regarded with suspicion because of their isolationist tendencies, German Immigrants influenced American culture through things like the Christmas tree and the conestoga wagon.
Commonwealth v. Hunt
Massachusetts Supreme Court decision that strengthened the labor movement by upholding the legality of unions.
The ruling held that the common-law doctrine of criminal conspiracy did not apply to labor unions.
Factory Girls/ Lowell Mill Girls
Young women employed in the growing factories of the early nineteenth century, they labored long hours in difficult conditions, living in socially new conditions away from farms and families.
Cult of Domesticity
Pervasive nineteenth century cultural creed that venerated the domestic role of women. It gave married women greater authority to shape home life but limited opportunities outside the domestic sphere.
New York state canal that linked Lake Erie to the Hudson River. It dramatically lowered shipping costs, fueling an economic boom in upstate New York and increasing the profitability of farming in the Old Northwest.
Term referring to a series of nineteenth century transportation innovations—turnpikes, steamboats, canals, and railroads—that linked local and regional markets, creating a national economy.
Eighteenth- and nineteenth-century transformation from a disaggregated, subsistence economy to a national commercial and industrial network.
Identical components that can be used in place of one another in manufacturing
Second Great Awakening
Religious revival characterized by emotional mass "camp meetings" and widespread conversion. Brought about a democratization of religion as a multiplicity of denominations vied for members. It gave way to various reform movements, most of which were spearheaded by women: temperance, prison and institutional reform, women's suffrage, abolition, etc. The movement also further separated classes, regions, and denominations; these religious divisions foreshadowed the sectional split of the Civil War.
Reformers addressed overcrowding, poorly-trained and poorly-paid teachers, limited curriculum, and irregular schedules in American tax-supported public schools. In higher education, reformers wanted to address the lack of schooling for women.
Horace Mann (secretary on the MA Board of Education; promoted more schools, longer school days, expanded curriculum)
Noah Webster (Published a dictionary that standardized the American language and textbooks that were used throughout American schools)
William Holmes McGuffey (Published McGuffey Readers, which were textbooks for young students that emphasized morality, patriotism, and idealism.)
Emma Willard and Mary Lyon (both established colleges for women.)
Better school houses, longer school terms, higher pay for teachers, expanded curriculum. In higher education, Oberlin College began to accept women students as well as black students, and colleges for women were established elsewhere in the country.
Black slaves in the South were legally barred from education, free black people were still generally excluded from schooling, even public school was an expensive luxury for the poor.
Have prisons reform inmates as well as punish them. Reformers also wanted proper treatment for those "suffering insanity" (many mentally ill people at the time were kept in prisons and recieved extremely cruel treatment)
Dorthea Dix (assembled reports on the treatment of the mentally ill and advocated for better institutional practices)
Dix's 1843 petition to the Massachusetts legisilature and her consistent persistance led to improved treatment of the mentally ill and a better understanding of their issues.
persistent discrimination, not observed, wretched conditions, understanding of psychology limited
Women's Rights Movement
End the separation between the private life of women and the public life of men; grant women more power in public life
Elizabeth Cady Stanton (advocated suffrage for women)
Susan B. Anthony (lecturer of women's rights)
The Grimke sisters
Convention at Seneca Falls, the Declaration of Sentiments, women were gradually admitted to colleges, wives in some states could own property after marriage
The movement was eclipsed by the campaign against slavery in the years leading up to the civil war. Advocates for women's suffrage were often ridiculed, and women still not vote in the civil war era.
The temperance movement encouraged less consumption of alcohol; people believed heavy drinking decreased the safety and efficiency of labor and also wreaked havoc on family life.
Neal S. Dow (Known as the "Father of Prohibition." He sponsored the Maine Law of 1851)
Maine Law of 1851 and similar laws passed in Northern states. Overall, there was less consumption of liquor amongst women and of hard liquor in general.
Laws prohibitng alchohol were repealed, deemed unconstitutional, and openly flouted.
Seeking human betterment by setting up societies of a cooperative, communistic nature
Mother Ann Lee (shakers)
flourished for a period of time; increase of new ideas
these communities often failed for various reasons (e.g., the Shakers practiced celibacy and so had no children)
Abolition of Slavery
the movement in opposition to slavery, often demanding immediate, uncompensated emancipation of all slaves. This was generally considered radical, and there were only a few adamant abolitionists prior to the Civil War. Almost all abolitionists advocated legal, but not social equality for blacks.
As a result of the War of 1812 and European romanticism, literature in the US reflected patriotism and the individualistic mood of the era. The Knickerbocker group found great success as writers of the time (Byrant, Irving, Cooper).
Planter elite felt a strong obligation to serve the public and had time for studying; thus they turned out many statesmen
Planters sent their children to Northern or foreign private schooling institutions, harming tax-supported public education. They fancied themselves part of a romanticized feudal system, as perpetuated by Sir Walter Scott. Southern women with slaves rarely supported abolition, as they had control over the household female slaves.
-The South was governed by a select few wealthy planters who stood at the head of Southern society. They determined the political, economic, and even social life of their region. The wealthiest had homes in towns or cities as well as summer homes, traveled widely (especially to Europe), and gave their children good educations. They were the cotton, sugar, rice, and tobacco magnates: whites who owned at least 40 or 50 slaves and 800 or more acres.
- women dependent on slaves
Nat Turner/His Rebellion
Leader of the Virginia slave revolt that resulted in the deaths of sixty whites and raised fears among white Southerners of further uprisings.
Virginia slave revolt that resulted in the deaths of sixty whites and raised fears among white Southerners of further uprisings.
American Colonization Society
Reflecting the focus of early abolitionists on transporting freed blacks back to Africa, the organization established Liberia, a West-African settlement intended as a haven for emancipated slaves.
William Lloyd Garrison
Publisher of the antislavery newspaper, "The Liberator." He was radical and often the target of mob outrage; he started the American Anti-Slavery society.
Black abolitionist who wrote the "Appeal to the Colored Citizens of the World," which advocated a bloody end to slavery
The most prominent black abolitionist of the era and a former slave; he published his autobiography and gave speeches and writings in support of the cause.
Reverend who opposed slavery and doubted the chastity of Catholic women; he was killed by a mob and became known as the "martyr abolitionist"
Belief that the United States was destined by God to spread its "empire of liberty" across North America. Served as a justification for mid-nineteenth century expansionism. In the election of 1844, expansionist Democrats campaigned on promises of annexing Texas and occupying Oregon.
Amendment that sought to prohibit slavery from territories acquired from Mexico. Introduced by a Pennsylvania congressman David Wilmot, the failed amendment ratcheted up tensions between North and South over the issue of slavery. It never became a federal law, but was endorsed by most free states and later adopted by the Free-Soil party.
Notion that the sovereign people of a given territory should decide whether to allow slavery. Though abolitionists opposed it as a compromise that could extend slavery, it appealed to many Americans because it accorded with democratic principle of self-determination.
Politicians liked it because it was a comfortable compromise between free soilers' and southern demands, allowing them to dissolve a national issue into smaller local ones.
Compromise of 1850
Admitted California as a free state, opened New Mexico and Utah to popular sovereignty, ended the slave trade (but not slavery itself) in Washington D.C., and introduced a more stringent fugitive slave law. Widely opposed in both the North and South, it did little to settle the escalating dispute over slavery. The Compromise favored the north overall, but the north was also deeply angered over the accompanying Fugitive Slave Laws.
Acquired additional land from Mexico for $10 million to facilitate the construction of a southern transcontinental railroad.
Rationale: Transportation into new land from the Mexican Cession was difficult, and so land transportation was imperative to keep the new land connected to the rest of the country.
Controversy: The North criticized the purchase as paying an excessive sum for a small amount of desert land; they also disliked it because it gave the South greater claims over the coveted railroad. As a result, northern railroad boosters demanded Nebraska be organized (into a free soil state).
Proposed that Nebraska be split into two and that the issue of slavery be decided by popular sovereignty in the Kansas and Nebraska territories, thus revoking the 1820 Missouri Compromise. Introduced by Stephen Douglass in an effort to bring Nebraska into the Union and pave the way for a northern transcontinental railroad. It was criticized for contradicting the Missouri Compromise; in response, the North largely disregarded the Compromise of 1850.
The Hudson River School of Art
The Hudson River School was a mid-19th century American art movement embodied by a group of landscape painters whose aesthetic vision was influenced by Romanticism. The paintings typically depict the Hudson River Valley and the surrounding area, including the Catskill, Adirondack, and White Mountains
Panic of 1819
Cause of the Panic of 1819
1 - A dramatic decline in cotton prices
2 - A contraction of credit by the Bank of the US designed to curb inflation
3 - An 1817 congressional order requiring hard-currency payments for land purchases
4 - The closing of factories due to foreign competition
What was the Panic of 1819
- The economic disaster (financial collapse) was largely the fault of the Second Bank of the US.
- It had tightened credit in a belated effort to control inflation (calling in its loans).
Effects of the Panic of 1819
- Thousands of Americans lost their savings and property, and unemployment estimates suggest that half a million people lost their lands.
- The depression caused business/personal bankruptcies skyrocketed (ended the "era of good feelings").
- Temporarily ended economic expansion.
- Caused Americans to have a deep distrust of banking and also allowed for a backlash on the idea of westward expansion.
Severe financial crisis brought on primarily by the efforts of the Bank of the United States to curb over-speculation on western lands. It disproportionately affected the poorer classes, especially in the West, sowing the seeds of Jacksonian Democracy
The idea of spreading political power to the people and ensuring majority rule as well as supporting the "common man"
the policy of promoting industry in the U.S. by adoption of a high protective tariff and of developing internal improvements by the federal government (as advocated by Henry Clay from 1816 to 1828)
Henry Clay's three-pronged system to promote American industry. Clay advocated a strong banking system, a protective tariff and a federally funded transportation network. It focused on uniting the country both economically and politically (exchange of raw materials from the South/West for Northern manufactured goods.)
Monroe's Foreign Policy
A statement of foreign policy which proclaimed that Europe should not interfere in affairs within the United States or in the development of other countries in the Western Hemisphere. The United States largely lacked the power to back up the pronouncement, which was actually enforced by the British, who sought unfettered access to Latin American markets.
Adams Onis Treaty/Purchase of Florida:
Under the agreement, Spain ceded Florida to the United States, which, in exchange, abandoned its claims to Texas.
He first reiterated the traditional U.S. policy of neutrality with regard to European wars and conflicts. He then declared that the United States would not accept the recolonization of any country by its former European master, though he also avowed non-interference with existing European colonies in the Americas; enunciates a policy of neutrality towards the Latin American colonies seeking independence
The Missouri Compromise consisted of three large parts: Missouri entered the Union as a slave state, Maine entered as a free state, and the 36’30” line was established as the dividing line regarding slavery for the remainder of the Louisiana Territory.
occurred after the Louisiana Purchase because the vast new territory raised contentious questions about the expansion of slavery
Uncle Tom's Cabin
Novel written by Harriet Beecher Stowe. Showed northerners and the world the horrors of slavery while southerners attack it as an exaggeration, contributed to the start of the Civil War.
These mills combined the textile processes of spinning and weaving under one roof, essentially eliminating the "putting-out system" in favor of mass production of high-quality cloth.
Mills that employed women to work. They offered supervision for the women at all times and lodging so they could stay near their work. Located in Lowell, MA.
allowed for easier transportation to the West, movement of goods, etc.
Historians' term for the spoliation of Western natural resources through excessive hunting, logging, mining, and grazing.
Era of Good Feelings
Popular name for the period of one-party, Republican, rule during James Monroe's presidency. The term obscures bitter conflicts over internal improvements, slavery, and the national bank
Lowell Mill Girls
the Lowell mill women organized, went on strike and mobilized in politics when women couldn't even vote—and created the first union of working women in American history.
protested cut wages, long hours, and unfair working conditions
what it was:
creation and integration of steam-powered machines in factories to mass-produce goods (textiles)
impact on dev of the US:
improvements in agriculture, transportation, and communication
manufacturing became the basis of the economy in the North
slavery expanded in the South to keep up with Northern textile mills' demand for cotton
sewing machines (ready-made clothes)
-young nation was excited about the prospects of moving westward
-had little interest in European wars
-patriotic themes in paintings, schoolbooks
-Stuart, Peale, Trumball (paintings)
-Noah Webster's patriotic speller: blue-backed speller
-movement to support the growth of the nation's economy
-internal improvements was a big aspect (building of roads and canals)
TARIFF OF 1816
-Congress raised tariff rates on certain goods for express purpose of protecting US manufacturers
-Americans feared British would dump their goods
-first protective tariff in US history
-protective tariff: a tariff imposed to protect domestic firms from import competition
They favored weaker state governments, a strong centralized government, the indirect election of government officials, longer term limits for officeholders, and representative, rather than direct, democracy.
Democratic Republicans (jeffersonian)
believed in individual freedoms and the rights of states. They feared that the concentration of federal power under George Washington and John Adams represented a dangerous threat to liberty.
a 19th century single issue third party that opposed freemasonry as well as Andrew Jackson. They drew support from Evangelists and aspired to remove the politically influential secret Masonic Order. (hated them bc they were secretive and exclusive (un-democratic))
The idea of spreading political power to the people and ensuring majority rule as well as supporting the "common man"
First, it declared itself to be the party of ordinary farmers and workers. Second, it opposed the special privileges of economic elites. Third, to offer affordable western land to ordinary white Americans, Indians needed to be forced further westward.
The Whig Party believed in a strong federal government, similar to the Federalist Party that preceded it. The federal government must provide its citizenry with a transportation infrastructure to assist economic development. Many Whigs also called for government support of business through tariffs.
The Whig party avoided taking any position on slavery, seeking northern compromise on the issue in return for southern support for northern economic interests.
foreign policy of the war of 1812
Following the conclusion of the War of 1812, Manifest Destiny attained its strongest ideological pull. Even though the United States did not expand geographically nor was there an actual winner of the War of 1812, relations between the United States and Britain were impacted greatly. The British would now need to recognize the United States as a world power. Also, tensions between Americans and Native Americans began to rise. Nationalism began to spread throughout the United States since the nation was able to fend off the British thus influencing manifest destiny. Expanding the nation to the Pacific Ocean was now a possibility for Americans.
Throughout the early history of the United States, manifest destiny was shaping the country in significant ways. The citizens of the United States believed that it was their destiny and right to expand and grow. The War of 1812 helped prove that this right was legitimate. The United States fended off the British for a second straight time, and this bestowed confidence in the public. These events helped make the term of manifest destiny relevant.
in short, the US came out of the war more confident and assertive, and wanted more land
A significant push toward the west coast of North America began in the 1810s. It was intensified by the belief in manifest destiny, federally issued Indian removal acts, and economic promise.
The North:
-focused on industrialization because rocky soil limited farming
-had more abolitionists, though the push to end slavery was not yet strong
-had more colleges and educational opportunities than the South
-did not want westward expansion because it did not want more slave states (which would throw off the sectional balance)
-somewhat more rights for women (they could own property)
The South:
-reliant on slavery and agriculture
-often pushed for wars and conflict (the Mexican-American War, the push for war with Britain, etc.)
-had an aristocratic society (planters with more land at the top, slaves and poor farmers at the bottom)
About this course:
The purpose of this module is to provide an overview of colorectal cancer, its risk factors, clinical features, treatment options, and screening guidelines to enrich nursing knowledge of the condition, patient education, and nursing practice.
By the completion of this learning activity, the nurse should be able to:
- discuss the epidemiology of colorectal cancer in the US and recognize screening guidelines for both average-risk and high-risk individuals
- explain the pathophysiology, primary classifications, and subtypes of colorectal cancer, and identify the risk factors and signs and symptoms of the disease
- review the various treatment modalities for colorectal cancer, including surgery, radiation, chemotherapy, and the evolving role of targeted treatments and immunotherapy
- describe the possible side effects, monitoring parameters, and precautions of the different treatments and important components of patient education
The latest data released by the National Cancer Institute’s (NCI) Surveillance, Epidemiology, and End Results (SEER) program ranks colorectal cancer (CRC) as the third leading cause of cancer death in both men and women in the US. When genders are combined, it moves up in rank to second (Siegel et al., 2020). The recent unexpected and tragic death of celebrity icon Chadwick Boseman has directed attention toward the hidden perils of CRC, prompting increased awareness of the disease and screenings’ importance. CRC originates in the colon (large intestine) or the rectum (terminal portion of the colon) and is collectively referred to as colon cancer. Since colon cancer and rectal cancer share many similar features, they are often grouped as one condition; however, it is essential to recognize these as distinct disease entities in terms of their cancer staging, treatment, and survival patterns. The National Comprehensive Cancer Network (NCCN) offers separate evidence-based guidelines for each disease. Figure 1 summarizes the parallels and differences between the two conditions (NCCN, 2020a, 2020b).
Note. Original figure based on information found in NCCN, 2020a and 2020b.
The American Cancer Society (ACS, 2020a) estimates there will be 147,950 new cases of CRC (104,610 colon and 43,340 rectal) diagnosed in the US this year. The lifetime risk of CRC is approximately 1 in 23 (4.4%) among males and 1 in 25 (4.1%) for females. Recognizing that rectal cancer is commonly misclassified as colon cancer, SEER statistics describe combined mortality analysis, with an estimated 53,200 deaths from CRC in 2020. The majority of cases are diagnosed in adults 50 years of age and older, as incidence steadily climbs with age, as demonstrated in Figure 2. Over the last two decades, the age at diagnosis has steadily declined, shifting from a median onset of 72 years in 2000 to the current median age of 66 years (66 years for males and 69 years for females; Siegel et al., 2020).
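To make the "1 in N" lifetime-risk figures above concrete, the brief sketch below simply converts them to percentages. It is an illustrative calculation only, using the ACS numbers quoted in this paragraph; the small differences from the cited 4.4% and 4.1% reflect rounding in the published figures.

```python
# Illustrative arithmetic only: converting a "1 in N" lifetime risk to a percentage.
# The N values (23 for males, 25 for females) are the ACS figures quoted above.

def one_in_n_to_percent(n: float) -> float:
    """Convert a '1 in N' risk statement to a percentage."""
    return 100.0 / n

for group, n in [("males", 23), ("females", 25)]:
    print(f"Lifetime CRC risk for {group}: 1 in {n} = {one_in_n_to_percent(n):.1f}%")
# Output: roughly 4.3% and 4.0%, in line with the ~4.4% and ~4.1% cited above
# (the small differences come from rounding in the published figures).
```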
The ACS (2020a) broadly defines racial and ethnic groups using the following terminology: non-Hispanic blacks (NHB), American Indians/Alaska Natives (AI/AN), non-Hispanic whites (NHW), Hispanic/Latino (Hispanic), and Asian/Pacific Islanders (API). CRC incidence and mortality are highest among NHB, followed closely by AI/AN, and lowest in APIs. Between 2012 and 2016, CRC incidence was 20% higher in NHB than NHW and 50% higher than APIs (ACS, 2020a). Between 2013 and 2017, CRC death rates in NHB were nearly 40% higher than NHWs and twice as much as APIs. Proposed rationales for these racial disparities include socioeconomic status, education, and lifestyle factors (i.e., smoking, diet, and obesity). NHB individuals are less likely to receive timely follow-up of a positive screening test and high-quality colonoscopy, contributing to higher mortality (Siegel et al., 2020).
In the US, ANs are at disproportionately high-risk for advanced cancers at the time of diagnosis. The reasons are mostly unknown, but theories include a higher prevalence of risk factors such as diets high in animal fat, vitamin D deficiency, and an increased overall incidence of diabetes, obesity, and smoking. ANs have a higher prevalence of Helicobacter pylori (H. pylori), a bacterium associated with inflammation and cancer of the stomach, which may also be associated with CRC. Finally, there is a notable inadequacy of available endoscopic services throughout Alaska (ACS, 2020a; Siegel et al., 2020). According to a recent study by Berkowitz et al. (2018), Alaska has the nation’s lowest CRC screening rates. The most important predictor of CRC survival is the stage at the time of diagnosis. The five-year survival rates are as follows:
- 90% for patients with localized-stage disease (no sign of cancer spread outside the colon or rectum)
- 71% for patients with regional-stage disease (cancer has spread outside the colon or rectum to nearby structures or lymph nodes)
- 14% for patients diagnosed with distant-stage disease (cancer has spread to distant organs or lymph nodes; ACS, 2020a; Siegel et al., 2020).
Risk and Protective Factors
Roughly 55% of CRC cases are attributable to modifiable risk factors (see Table 1) and, thus, are potentially preventable. Advancing age, lifestyle habits, inflammatory bowel disease (ulcerative colitis and Crohn’s disease), type 2 diabetes mellitus (T2DM), environmental factors, and inherited and acquired genetic susceptibilities are associated with increased CRC risk. People who engage in healthy and active lifestyle behaviors have up to a 52% lower risk of CRC compared to those who do not engage in these behaviors. Those who are physically active have a 25% lower risk of developing proximal and distal colon tumors than sedentary people. Men who are obese have about a 50% higher risk of colon cancer and a 25% higher risk of rectal cancer, whereas women who are obese have about a 10% increased risk of colon cancer and no apparent increased risk of rectal cancer (ACS, 2020b). In the US, an estimated 13% of CRCs are attributed to excessive alcohol consumption and 12% to current or former tobacco use; the risk in current smokers is about 50% higher than in never smokers (Islami et al., 2018).
While Table 1 catalogs modifiable risk factors supported by clinical evidence, several other proposed factors have been correlated with the disease but currently lack definitive evidence. Among the most commonly discussed are diet, vitamin D, and acetylsalicylic acid (Aspirin) use.
Diet is believed to play a dual role in preventing and promoting CRC through its influence on the immune response and inflammation; however, the precise impact of specific foods on cancer occurrence is not definitively established beyond those listed in Table 1. Nevertheless, the ACS (2020a) describes diets high in refined carbohydrates and processed sugar as posing an increased risk for adenomas and CRC. While diets rich in whole grains and fiber are associated with a reduced risk of CRC due to potentially less carcinogen exposure in higher stool volume and faster transit time, evidence from randomized controlled trials remains inconclusive (ACS, 2020a). Two 2017 meta-analyses determined that whole grains served a more substantial role in risk reduction than fiber intake, revealing a 5% risk reduction for CRC for every 30 grams/day of whole grains consumed (Schwingshackl et al., 2017; Vieira et al., 2017).
Vitamin D has received a lot of attention over recent years regarding its potentially protective role against CRC; however, research findings remain inconclusive. Pooled data from 17 cohort studies determined that higher circulating blood levels of vitamin D (up to 100 nmol/L) were associated with a statistically significant, substantially lower CRC risk in women only. Correspondingly, the findings also revealed a 37% increased risk of CRC in those with vitamin D deficiency (McCullough et al., 2019).
Acetylsalicylic acid (Aspirin)
While there is a broad research base supporting the long-term regular use of acetylsalicylic acid (Aspirin) and other nonsteroidal anti-inflammatory drugs (NSAIDs) as protective factors against CRC, the ACS (2020a) notes that this risk reduction appears stronger only among those younger than 70 and without excess body weight. Acetylsalicylic acid (Aspirin) users who develop CRC appear to have less aggressive tumors and improved survival than non-aspirin users. The ACS (2020a) currently does not recommend using NSAIDs for cancer prevention in the general population due to the potentially serious side effect of gastrointestinal (GI) bleeding. The US Preventive Services Task Force (USPSTF, 2016a) reports that the regular use of acetylsalicylic acid (Aspirin) is more likely to have a beneficial effect when begun between the ages of 50 and 59, and benefits only become apparent after 10 years of consistent use.
Non-Modifiable Risk Factors
While 60% of CRC-related deaths can be mitigated through proper screening and surveillance practices, research demonstrates that at least one in three people is not up to date with CRC screenings. Non-modifiable risk factors also serve critical roles in the evolution of this disease, such as a personal history of chronic inflammatory bowel disease, a family history of CRC or adenomas (precancerous polyps), and inherited gene mutations. While most CRC cases are sporadic (60 to 70%), meaning healthy cells and genes mutate from genetic and environmental exposures over one’s lifetime, up to 30% of people diagnosed with CRC have a family history of the disease. Individuals who have a first-degree relative (i.e., parent or sibling) with CRC have two to four times the risk of developing the disease than people without this family history. This risk increases further for those with first-degree relatives diagnosed before age 50 or those with multiple-affected relatives. According to ACS (2020a), most “CRC clustered in families is believed to reflect interactions between lifestyle factors and the cumulative effect of relatively common genetic variations that increase disease risk, referred to as high prevalence/low penetrance mutations” (p. 13). Nearly 10% of CRCs are hereditary, meaning they are caused by an inherited gene mutation or syndrome (ACS, 2020a; Yurgelun et al., 2017). The most common inherited causes of CRC include Lynch syndrome, familial adenomatous polyposis (FAP), and inheritance of BRCA1/BRCA2 gene mutations (ACS, 2020a, 2020b).
Lynch Syndrome
Lynch syndrome is also called hereditary non-polyposis colorectal cancer (HNPCC) and is the most common hereditary risk factor for CRC. It also poses an increased risk for several other types of cancer, including ovarian, endometrial (uterine), brain, stomach, and breast. Approximately 5 in 100 people with CRC have Lynch syndrome. Around 8% of CRCs in people younger than 50 years are due to Lynch syndrome, and the median age at CRC diagnosis is 61 (ACS, 2020a). Individuals with Lynch syndrome tend to develop colon polyps (benign growth in the colon) at younger ages and typically develop cancer in their 40s or 50s. Affected women have an overall higher risk of developing cancer than their male counterparts. In the US, it is estimated that 1 in 279 individuals (1.2 million people) have a gene mutation associated with Lynch syndrome; however, most are undiagnosed since the identification of this mutation depends on a cancer diagnosis (ACS, 2020a). Individuals inherit Lynch syndrome in an autosomal dominant pattern, which means one inherited copy of the altered gene in each cell is sufficient to increase cancer risk. Changes in the MLH1, MSH2, MSH6, or PMS2 genes are most commonly found in Lynch syndrome. Under physiologic conditions, these genes are responsible for repairing any potential errors during DNA replication (the process during which DNA is copied in preparation for cell division); collectively, they are known as mismatch repair (MMR) genes. Since mutations in any of these genes impede the cell's ability to repair the DNA replication errors, abnormal cells continue to divide. Over time, the accumulated DNA replication errors can lead to uncontrolled cell growth and an increased propensity for cancer development (US National Library of Medicine [NLM], 2020b).
Familial Adenomatous Polyposis (FAP)
There are a few types of polyposis syndromes associated with increased risk for CRC, but FAP is the most common and accounts for approximately 1% of all CRCs. People with FAP tend to develop multiple benign polyps in their colon as early as their 20s and 30s. The number of polyps increases to thousands with advancing age, and unless the colon is removed, these polyps will develop into cancer. The median age for CRC in patients with FAP is 39 years. There are two genes associated with FAP that carry different inheritance patterns. Mutations in the APC gene can cause FAP and attenuated FAP, in which polyp growth is delayed. The average age of CRC diagnosis for patients with attenuated FAP is 55 years. Mutations in the APC gene affect the cell's ability to maintain healthy growth and function, leading to cellular overgrowth. Most people with mutations in the APC gene will develop CRC, but the number of polyps and the time frame in which they become malignant depends on the location of the mutation in the gene. FAP resulting from mutations in the APC gene is inherited in an autosomal dominant pattern. Nurses should counsel patients with FAP on the importance of vigilant surveillance with frequent colonoscopy screenings as recommended by their physician. Surgery is often the standard of care for cancer prevention in patients with FAP syndromes once polyp growth accelerates beyond control with colonoscopy screenings (ACS, 2020a; NLM, 2020a).
BRCA1/BRCA2 gene mutations
All individuals have BRCA1/2 genes, which act as tumor suppressor genes to prevent cancer by regulating cellular growth and division. Mutations in these genes prevent them from working correctly, thereby increasing the propensity toward cancer development. Most commonly associated with breast, ovarian, and prostate cancers, BRCA1/2 mutations are being explored regarding their role in increasing the risk for several other types of cancers (The Centers for Disease Control and Prevention [CDC], 2020b). Regarding CRC, the absolute risk associated with BRCA1 and BRCA2 gene mutations is not definitively established (ACS, 2020a). Yurgelun and colleagues (2017) have determined that approximately 1% of CRC patients have heritable mutations in BRCA1/2. Further, a 2018 systematic review and meta-analysis investigating the association between CRC and BRCA1 and BRCA2 mutations found an association limited to BRCA1 mutations only, not BRCA2. They found that individuals with BRCA1 mutations have about a 50% increased risk of the disease than individuals without the mutation (Oh et al., 2018).
Early Detection and Screening
Since a precancerous polyp can take 10 to 15 years to progress into cancer, there is a unique opportunity for cancer prevention and early detection with screenings at defined intervals. Screening offers substantial opportunities to lessen CRC incidence and mortality. CRC screening aims to identify disease in the precancerous or early stage of cancer when it is small, localized, and curable. Various cancer screening guidelines are put forth by credible organizations and grounded in clinical research, evidence, and expert consensus. While there are some variations between the sets of guidelines, they are relatively consistent in their recommendations on CRC screening. The ACS is one of the most widely utilized, comprehensive, evidence-based resources for cancer care; they publish an annual report that summarizes CRC screening recommendations (ACS, 2020a).
Nurses’ Role in Cancer Prevention and Screening
One of the most powerful tools that nurses have is their ability to develop deep professional relationships with patients and their family members. Nurses are in a unique position to convey the importance of cancer prevention across healthcare settings. To effectively counsel patients on CRC risk reduction practices, it is important that nurses understand the basis for CRC screening and possess adequate knowledge of the various types of screening tests available. Nurses can help patients understand their options and select a test based on their individual preferences, such as ease of access and cost factors. Nurses must also recognize and understand the various features that place patients at increased risk for the disease to ensure they are counseled appropriately (Kahl, 2018).
Screening Modalities. There are multiple options for CRC screening, all of which are associated with a significant reduction in CRC incidence, although colonoscopy is the most accurate and commonly used screening test. CRC screening modalities are divided into two major categories: high-sensitivity stool-based tests (collected at home) and visual examinations (performed at health care facilities), as described in Table 2. The ACS (2020a) and the US Preventive Services Task Force (USPSTF, 2016b) do not endorse one test over another; they instead stress that all recommended tests can help save lives.
Limitations. Nurses should educate patients on the limitations and potential harms of CRC screening tests. All screening tests carry a risk for false-positive and false-negative results. While stool-based tests are noninvasive and less costly, they are more likely to miss polyps or cancers. Patients should also be advised that any positive findings on non-colonoscopy screening tests will need to be followed up with a colonoscopy. Delays in the follow-up of abnormal results are associated with an increased risk of advanced disease and mortality. Colonoscopy is also subject to error and may miss adenomas. Although rare, colonoscopy poses a higher risk of complications such as bowel tears, infection, and sepsis (ACS, 2020a).
Summary of Colorectal Cancer Screening Recommendations
The ACS (2020a) screening recommendations are based on risk category: average risk and increased or high risk. The ACS explicitly defines individuals at high risk as those with one or more of the characteristics listed in Box 1-1. By default, individuals are considered to be at average risk if they do not have any of the characteristics listed in Box 1-1.
Average Risk Screening Recommendations
The ACS recently lowered the age to initiate CRC screening from 50 to 45 years due to increasing incidence in younger populations. Currently, the ACS (2020a) recommends that adults aged 45 years and older undergo regular CRC screening with either a stool test or a visual exam. Screening intervals are based on individual patient factors and findings at the time of the initial screening test. Individuals who are in good health and have a life expectancy of greater than ten years should continue regular CRC screening through age 75. The USPSTF (2016b) recommends screening for CRC starting at age 50 years and continuing until age 75 years; however, the guideline is currently undergoing revision. Collectively, the guidelines recommend that for individuals aged 76 through 85, the decision to be screened should be based on a person's preferences, life expectancy, overall health, and prior screening history. The ACS (2020a) recommends that those over 85 should no longer undergo CRC screening.
High-Risk for CRC
People at increased or high risk for CRC should start screening before age 45. For patients with one first-degree relative diagnosed with CRC before age 60, or two first-degree relatives diagnosed at any age, screening should begin at age 40 or 10 years younger than the earliest age of diagnosis in the family, whichever comes first. Specific screening intervals for those at increased or high risk for CRC vary based on the individual patient and screening test results. The USPSTF and ACS do not put forth screening guidelines specifically for those in higher-risk categories due to the high variability between these individuals (ACS, 2020a; USPSTF, 2016b).
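The family-history rule above is essentially a small decision algorithm, so a simplified sketch may help clarify it. The example below is illustrative only and is not a clinical tool: it encodes just the average-risk starting age of 45 and the first-degree-relative rule described above, and omits the many other high-risk criteria (e.g., Lynch syndrome, FAP, inflammatory bowel disease) that require individualized plans.

```python
# Simplified illustration of the screening start-age logic described above.
# Not a clinical decision tool; actual guidelines include many additional criteria.

def screening_start_age(relative_dx_ages: list[int]) -> int:
    """Suggest an age to begin CRC screening.

    relative_dx_ages: ages at which first-degree relatives were diagnosed with CRC
    (an empty list means no affected first-degree relatives, i.e., average risk).
    """
    AVERAGE_RISK_START = 45  # ACS (2020a) starting age for average-risk adults

    if not relative_dx_ages:
        return AVERAGE_RISK_START

    # One relative diagnosed before 60, or two or more relatives at any age.
    high_risk = len(relative_dx_ages) >= 2 or min(relative_dx_ages) < 60
    if high_risk:
        # Age 40, or 10 years before the earliest family diagnosis, whichever comes first.
        return min(40, min(relative_dx_ages) - 10)

    return AVERAGE_RISK_START

print(screening_start_age([]))        # 45: average risk
print(screening_start_age([55]))      # 40: one relative diagnosed before age 60
print(screening_start_age([45, 70]))  # 35: ten years before the earliest diagnosis
```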
The GI tract extends from the mouth to the anus and consists of several organs, as demonstrated in Figure 4. It carries out several digestive processes to perform two primary functions: breaking down food to assimilate nutrients and eliminating waste. Food is processed and mixed with an enzyme called salivary amylase in the mouth before the esophagus propels the food into the stomach. The stomach churns the food and combines it with enzymes, mucus, gastric acids, and other secretions. The food is stored in the stomach until phasic contractions help propel it into the small intestine, where most nutrient absorption occurs. Biochemicals and enzymes secreted by the pancreas and liver break down the food particles into smaller, absorbable nutrients. The nutrients pass through the small intestine walls into blood vessels and lymphatics before reaching the large intestine, where fluid absorption continues. Liquid wastes are transported to the kidneys, where they are excreted from the body in urine. Solid waste travels into the rectum and is eliminated through the anus (McCance & Heuther, 2019).
The Large Intestine
The large intestine is the final passageway of undigested food particles and, as shown in Figure 5, is subdivided into four regions: the cecum, colon, rectum, and anus. It also includes the appendix. Its primary function is to complete the absorption of any residual nutrients and water, synthesize specific vitamins, form feces (waste), and eliminate it. The cecum is the cul-de-sac, or small pouch at the beginning of the large intestine. It receives partially digested food (known as chyme) from the small intestine and mixes it with bacteria to continue digestion and form feces before transporting it into the colon. The colon is about five feet in length and comprises four connecting regions: the ascending colon, transverse colon, descending colon, and sigmoid colon. The ascending colon receives the feces from the cecum, bacteria digest the waste, and the intestinal wall absorbs water, nutrients, and vitamins into the bloodstream. The hepatic flexure is located on the right side of the body near the liver and creates a sharp, right-angle bend in the colon, marking the beginning of the transverse colon. The transverse colon is the longest region of the colon, and most nutrient absorption and feces formation occur in this region. The splenic flexure is the connection point between the transverse colon and the descending colon, forming a sharp, right-angle bend on the abdomen’s left side. The descending colon walls absorb water and any remaining nutrients from the feces, dehydrating the stool in preparation for elimination. The sigmoid colon is the curved, S-shaped section of the colon that transports and stores the residual fecal matter until it is transported to the rectum. The rectum is the final straight portion of the large intestine and stores fecal matter until the body is ready to eliminate the waste through the process of defecation. The rectum connects the colon to the anus, the GI tract’s final segment. The anus is a short tube at the end of the rectum terminating to the body’s exterior that feces pass through during defecation (Innerbody Research, 2017; McCance & Heuther, 2019).
Layers of the Intestinal Wall
The six layers of tissue that comprise the colon wall are displayed in Figures 6 and 7 and then described below in Table 3.
Cancer growth is a multistep process involving several genetic and environmentally induced alterations in cellular growth and differentiation that occur over time. While most CRCs arise from polyps, most polyps are not cancerous. There are two types of polyps strongly associated with cancer: hyperplastic and adenomatous (adenoma) polyps. Hyperplastic polyps associated with cancer are typically large (> 1 cm), present in multiples (>20), and located on the right side of the colon (ascending colon). Adenomas are more closely associated with CRC, and there are two chief growth patterns: tubular and villous. Most adenomas have a combination of both and are therefore called tubulovillous adenomas. Adenomas that are smaller (< 1.25 cm) typically have a tubular growth pattern, whereas larger adenomas generally have a villous growth pattern. Larger adenomas and those with a villous growth pattern are more likely to have cancerous cells within them (ACS, 2017a). Adenoma subtypes are also characterized by shape: pedunculated and sessile. As displayed in Figure 8, pedunculated polyps have a mushroom-like appearance as they have a round top attached to the surface by a narrow stalk. Sessile polyps are more common than pedunculated, are flatter, and do not have a stalk. These do not protrude as much from the colon wall, making them harder to identify. The three major types of sessile polyps are papillary, villous, and serrated. Serrated is a term used to describe any polyp with a saw-tooth pattern (McCance & Heuther, 2019; NCCN, 2020a, 2020b).
CRC is not a solitary tumor type, as its pathogenesis can vary widely from slow-growing tumors to more aggressive disease. Histologically, the majority of CRCs are adenocarcinomas. The prefix ‘adeno’ means glands, and ‘carcinoma’ denotes a type of cancer that starts in the cells that form glands producing mucus to line the inner surfaces of intestines. Adenocarcinomas produce abnormal glands that can infiltrate the submucosa, muscle layer, and invade the surrounding tissues. In the submucosa, the cancerous cells can infiltrate lymphatic vessels and spread to the nearby lymph nodes or penetrate veins and metastasize (spread) to the liver or other organs. There are two less common subtypes of adenocarcinoma, which include mucinous and signet ring cell. Although rare, other types of CRC include primary colorectal lymphomas, GI stromal tumors, leiomyosarcomas, carcinoid tumors, and melanomas. Combined, these account for less than 5% of all cases. CRC subtypes are outlined further in Table 4 (ACS, 2017b; Macrae & Bendell, 2020).
Signs and Symptoms of CRC
Most early-stage CRCs do not have any obvious warning signs and are most often identified during screenings. Symptoms of CRC are typically due to the expansion of the tumor into the lumen or adjacent structures (Macrae & Bendell, 2020). Nurses can serve a central role in detecting early signs and symptoms of CRC. Nurses routinely triage patient symptoms by telephone, in primary care offices, or in the emergency department. Regardless of the setting, a well-informed nurse can recognize and respond to warning signs to facilitate a timely diagnosis (Kahl, 2018).
Some of the most common symptoms of CRC include the following:
- ongoing abdominal pain, cramping, gas pains, bloating, or fullness,
- rectal bleeding or blood in the stools (bright red, black, or tarry stools),
- new-onset anemia,
- unintentional weight loss, and
- excessive fatigue or unusual tiredness.
Rectal tumors can affect the stools’ caliber, causing narrower stools than usual, rectal pain, and bleeding. Tenesmus is also common, which is a continual inclination to evacuate the bowels and a sensation that the bowel does not empty completely. Some patients may present with a palpable mass on the digital rectal exam (DRE; CDC, 2020a; Macrae & Bendell, 2020).
To definitively diagnose CRC, a tissue biopsy is required. The biopsy may be performed during a colonoscopy or as part of a surgical procedure. Nurses are critical in helping patients navigate the healthcare system as they undergo diagnostic testing. Nurses must possess adequate knowledge to explain the clinical significance of the testing, simplify medical terminology, effectively respond to questions, and clarify any misconceptions (Macrae & Bendell, 2020; Yarbro et al., 2018).
Carcinoembryonic antigen (CEA)
CEA is the most widely used and diagnostically sensitive tumor marker for patients with CRC. Tumor markers are substances that may be produced by cancer or by the body's response to cancer's presence but are also made in smaller quantities by healthy cells. Serum CEA is not sensitive enough to be used as a screening test for CRC. Research has demonstrated that the sensitivity of CEA for the diagnosis of CRC is only around 50%. CEA's specificity is low as it is a nonspecific marker and can also be elevated in other cancerous processes such as pancreatic cancer or breast cancer. CEA is often elevated in noncancerous conditions such as gastritis, liver disease, peptic ulcer disease, diabetes, and several other acute or chronic inflammatory conditions. CEA levels are significantly higher in smokers than in nonsmokers. A normal CEA level is below 2.5 ng/mL in nonsmokers and below 5.0 ng/mL in smokers (Macrae & Bendell, 2020).
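Sensitivity and specificity are the two test properties that make CEA unsuitable for screening, so a short worked example of how they are calculated may be useful. The counts below are hypothetical numbers chosen only to illustrate the formulas; they are not data from the studies cited above.

```python
# How sensitivity and specificity are calculated.
# The counts below are hypothetical and for illustration only.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of people WITH the disease whom the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of people WITHOUT the disease whom the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# A hypothetical marker that detects only half of true cancers (comparable to the
# ~50% sensitivity of CEA noted above) and is also elevated in benign conditions.
print(f"Sensitivity: {sensitivity(true_pos=50, false_neg=50):.0%}")    # 50%
print(f"Specificity: {specificity(true_neg=700, false_pos=300):.0%}")  # 70%
```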
Imaging tests are performed to evaluate the extent of the tumor, infiltration into surrounding structures, or distant metastasis to other sites in the body. In the US, about 20% of patients have distant metastatic disease at the time of presentation. The most common metastatic sites for CRCs are the regional (neighboring) lymph nodes, liver, lungs, and peritoneum. Many patients need to undergo a computed tomography [CT] of the chest, abdomen, and pelvis, or magnetic resonance imaging [MRI] of the abdomen and pelvis as part of their diagnostic workup (NCCN, 2020a). Nurses educate patients about the purposes, indications, and required preparation for diagnostic tests; thus, an accurate understanding of these tests is essential (Olson et al., 2019).
CT scans use a series of x-rays and computer technology to create cross-sections of the internal body structures. The CT scanner resembles a large doughnut, and the patient is positioned on a table that slides into the machine. Patients are advised to lie flat and remain still during the test for enhanced quality of images. The X-ray beam moves in a circle around the body to generate several different views. CT scans use ionizing radiation as their imaging method and may be performed with or without intravenous contrast administration (International Atomic Energy Agency [IAEA], n.d.). Iodine-based contrast may be administered intravenously (IV), orally (PO), or both. If contrast is used, patients are typically required to fast or remain nothing by mouth (NPO) for several hours before the scan. Oral contrast is administered about two hours before the examination and is most useful in visualizing the abdomen and pelvis structures; this is particularly useful when evaluating colon cancers. Patients should be informed that they may feel warm or flushed almost immediately following contrast injection and may develop a metallic taste in the mouth. These symptoms are transient, harmless, and only last seconds to minutes before they resolve (RadiologyInfo.org, 2018).
An MRI is distinct from a CT scan as it does not use x-rays or pose radiation exposure to patients and is considered a very safe imaging test. MRIs utilize strong electromagnetic (EM) fields and radio waves to measure the water content of tissues as a means to generate detailed images of internal organs and structures. An MRI image comes mainly from the protons in fat and water molecules within the body. The MRI scanner is essentially a large magnet, measured in a unit called Tesla (T). Most modern MRI scanners are 1.5 to 3T. To put this into context, an MRI of 3T strength is about 60,000 times stronger than the earth's magnetic field. During an MRI, an electric current creates a temporary magnetic field within the patient's body, and radio waves are sent from and received by a transmitter and receiving device within the machine. These signals are used to generate images of the body's scanned area (US Food & Drug Administration [FDA], 2018). While CT scans are superior at visualizing anatomical structures, internal organs, and the skeleton, MRI scans are superior to CT scans at identifying the difference between normal and abnormal soft tissue. MRIs can be performed on nearly any body part, and each scan follows a specific protocol depending upon the clinical concern; they may be performed with or without contrast administration. Gadolinium-based contrast agents (GBCAs) contain the rare earth metal gadolinium and are administered intravenously to enhance the contrast of the MRI images. Patients who are undergoing MRI scans of the abdomen or soft tissue pelvic structures are usually advised to remain NPO for six hours before the scan (Ibrahim et al., 2020).
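The "60,000 times stronger" comparison above can be checked with simple arithmetic. The sketch below assumes an Earth surface field of roughly 50 microtesla, a commonly quoted approximate value, so the result is an order-of-magnitude check rather than an exact figure.

```python
# Quick arithmetic check of the field-strength comparison above.
# Assumes Earth's surface magnetic field is roughly 50 microtesla (approximate).

scanner_field_tesla = 3.0    # a 3T clinical MRI scanner
earth_field_tesla = 50e-6    # ~50 microtesla

ratio = scanner_field_tesla / earth_field_tesla
print(f"A 3T scanner is roughly {ratio:,.0f} times Earth's magnetic field")
# Output: roughly 60,000 times, consistent with the figure quoted above.
```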
Principles of Pathologic Review
A biopsy sample undergoes a series of tests to determine cancer’s pathologic features, evaluate its behavior, and select the best treatment options. It is important for nurses to understand the fundamental elements of the pathology report, including tumor grade and common CRC gene and biomarker analyses as a means to enhance understanding of CRC treatment options. Knowledge of the basic principles of pathology can also provide a foundational background necessary when educating patients and responding to questions regarding their pathology results (Yarbro et al., 2018).
Tumor grade is a measurement of how different the cancer cells look compared to healthy cells under the microscope. The grade helps predict how likely the cancer is to grow and metastasize.
- Grade 1 is well-differentiated (appears similar to healthy cells) and the least aggressive.
- Grade 2 is moderately-differentiated, appears less like healthy cells, and is an intermediate grade (more aggressive than grade 1).
- Grade 3 is poorly differentiated or undifferentiated (does not resemble healthy cells at all), is high-grade, most aggressive, and tends to grow and spread more quickly (ACS, 2017b).
Gene and Molecular Biomarker Analyses
KRAS and NRAS
RAS is a family of genes that include KRAS and NRAS genes, which serve an important role in the treatment of metastatic CRC (mCRC). Patients with mutations in the KRAS and NRAS genes do not respond to certain treatments recommended for mCRC (NCCN, 2020a, 2020b).
BRAF
Less than 10% of CRCs have a mutation called BRAF V600E. This mutation is considered a poor prognostic sign, as it causes cancer cells to grow and spread more quickly. BRAF mutations can be targeted with specific therapeutics termed BRAF inhibitors (NCCN, 2020a, 2020b).
Mismatch Repair (MMR)/Microsatellite Instability (MSI)
Most CRCs do not have high MSI levels or changes in MMR genes; changes in these genes are most common in those with Lynch syndrome. MMR testing helps guide if the patient should be tested for Lynch syndrome and determines if specific treatments (such as immunotherapy drugs) are likely to be effective. While MMR deficiency can be reported as microsatellite instability-high (MSI-H) or mismatch repair deficient (dMMR), it is important for nurses to understand that these are interpreted to have the same meaning (NCCN, 2020a, 2020b).
CRCs are staged based on the location of origin; the depth of invasion of cancer into the intestinal wall is also an important staging parameter. There are four stages of CRC, ranging from stage I (the cancer is localized) to stage IV (cancer has spread to distant sites in the body), as demonstrated in Figure 9.
Treatment for CRC is often multimodal in which more than one therapy is combined and administered simultaneously (concurrently) or sequentially. CRC treatments are grouped into two major categories of localized and systemic therapies, as demonstrated in Figure 10. The most optimal treatment modality depends on several factors, including the pathologic features and cancer stage. A synopsis of each treatment subcategory will be described within this section (ACS, 2020c; NCCN, 2020b).
Treatments explicitly directed at the tumor without affecting the rest of the body are referred to as localized therapies. There are two major types of localized treatment options for CRC: surgery and radiation therapy. These treatments are most useful for early-stage cancers that have not metastasized beyond the area of origin. Surgery is the mainstay treatment for early-stage colon and rectal tumors. Since there are fundamental distinctions between surgical interventions for colon tumors and rectal tumors, they will be reviewed separately (ACS, 2020c; NCCN, 2020a).
Surgical Intervention for Colon Cancer
Polypectomy. Precancerous lesions and some early-stage colon cancers can be effectively removed with a polypectomy, a relatively noninvasive procedure to remove a polyp that is most often performed during a colonoscopy. As demonstrated in Figure 11, the polyp is removed with a snare, a wire loop device designed to slip over the polyp. Upon closure of the device, the polyp is removed at the stalk. Snares can burn through the base of the polyp (electrocautery). The goal of polypectomy is to remove the entire lesion in one piece and avoid leaving residual cancerous cells behind, which could replicate, grow, and spread over time (ACS, 2020c).
Colectomy. A colectomy is a surgical procedure in which all or part of the colon is removed. If only a portion of the colon is removed, the procedure is called a hemicolectomy. The extent of the surgery depends on the degree of cancer presence. Lymphadenectomy is usually performed during a colectomy, which involves removing nearby lymph nodes (ACS, 2020c). At the end of the procedure, the remaining two ends of the colon are adjoined. Figure 12 demonstrates a transverse colectomy in which part of the bowel was removed.
Colostomy. If the cancer is more advanced or the colon becomes obstructed by the tumor, more extensive surgery is required. If a large portion of the colon is removed, the surgeon may not be able to reattach the colon. In these situations, a stoma is created in which the end of the intestine is brought to the surface of the abdominal wall to provide an outlet for waste products (stool) to leave the bowel. A colostomy may be temporary or permanent, and the type of colostomy may vary based on the stoma’s location, as shown in Figure 13. An ileostomy (not pictured) is formed when the end of the small intestine (the ileum) is connected to a stoma. A collection bag is attached to the skin to hold the stool, as displayed in Figure 14 (ACS, 2020c).
Surgical Intervention for Rectal Cancer
The type of surgery depends on the anatomical location and extent of rectal cancer. Early rectal cancers are usually removed by polypectomy, whereas more advanced rectal tumors may require local excision in which healthy surrounding tissue is removed alongside the cancerous tissue (NCCN, 2020b).
Transanal excision (TAE). TAE is an option for early-stage rectal tumors located near the anal opening that have not spread to the anus or sphincter. During this procedure, the surgeon cuts through the layers of the rectal wall to remove the tumor as well as some surrounding healthy tissue. TAE typically allows patients to retain bowel function and eliminates the need for a permanent colostomy (ACS, 2020c).
Low anterior resection (LAR). LAR is a more extensive surgical approach reserved for advanced rectal tumors located in the upper portion of the rectum. The cancerous portion of the rectum is removed, and the lower portion of the colon is then reattached to the remaining part of the rectum. Patients may need a temporary colostomy following a LAR, but most will regain the normal functioning of their bowels (ACS, 2020c).
Proctectomy with colo-anal anastomosis. Surgical removal of the entire rectum is called a proctectomy. Most stage II and III tumors located in the middle or lower third portion of the rectum are treated with proctectomy, followed by a total mesorectal excision (TME). A TME involves the removal of all surrounding lymph nodes. The colon is then reattached to the anus in a procedure called a colo-anal anastomosis. Most patients will regain normal function of their bowels over time (ACS, 2020c).
Abdominoperineal resection (APR). APR is similar to a LAR, but it is a more extensive surgery. APR is typically indicated for Stage II and Stage III rectal tumors that have invaded the sphincter muscles or the levator muscles (the muscles that control urinary flow). In an APR, the entire anus is removed, necessitating a permanent colostomy (ACS, 2020c).
Pelvic Exenteration. Pelvic exenteration is a much more radical and complex surgical approach in which the rectum and any affected nearby organs are removed, such as the prostate or bladder. Patients require a permanent colostomy, and if the bladder is removed, a urostomy (opening through the skin to allow the elimination of urinary waste). This surgical technique is not commonly performed as it is associated with significant morbidity (ACS, 2020c).
Surgical Risks and Side Effects
The risks and side effects of surgery depend on the size and degree of cancer invasion, the extent of surgery, and the structures removed. All surgeries and invasive procedures are accompanied by risks, such as adverse reactions to anesthesia, bleeding, infection, perforation of the bowel (a hole in the intestines), and life-threatening sepsis. Patients may have complications with newly formed colostomies, such as malfunctioning of the stoma, skin breakdown, and leaking at the site. Adjusting to a colostomy can be challenging, and patients may struggle with caring for and managing the device. Nurses serve a vital role in helping patients acclimate to the physical care of their new stoma and colostomy device, which can pose extensive challenges for CRC survivors. Patients are often affected by the physical and psychological aspects of body image distortion and lifestyle adjustments. Studies have demonstrated that nurses are key in identifying and meeting patients’ psychosocial needs and managing negative emotions toward colostomy care. Compassionate and knowledgeable nursing care focused on managing psychosocial needs, reducing anxiety, and addressing depression is helpful in alleviating patients’ stress responses and promotes healthy coping and adjustment (Jin et al., 2019). Sexual dysfunction is common among males who undergo LAR or APR as the nerves that supply blood to the penis may be injured during the surgery. Males may be unable to achieve an erection or orgasm, which can negatively impact interpersonal relationships, quality of life, psychological health, self-esteem, and wellbeing (ACS, 2020c). Nurses are tasked with addressing these issues with sensitivity, empathy, and compassion, fostering a safe and nonjudgmental environment for patients to openly express their concerns. Nurses can offer psychosocial support and connect patients with resources, support groups, and medical specialists (Yarbro et al., 2018).
Radiation therapy is a type of localized treatment that delivers a precisely measured amount of high-energy, highly focused rays of ionizing radiation to the tumor while causing as little injury as possible to surrounding tissue. Radiation causes cellular damage to cancer cells, leading to biological changes in the DNA, rendering cells incapable of reproducing or spreading. All healthy cells and cancer cells are vulnerable to the effects of radiation and may be injured or destroyed; however, healthy cells can repair themselves and remain functional. The total dose of radiation is typically fractionated, which means it is delivered to the tumor in small divided doses, or fractions, rather than all at once. Fractionation allows healthy cells a chance to recover between treatments. The total number of fractions (doses) administered depends on the tumor size, location, reason for treatment, patient's overall health, performance status, and goals of therapy, as well as consideration of any other concurrent therapies the patient is receiving (Nettina, 2019). Radiation therapy plays a central role in treating many types of CRCs and can be delivered externally or internally; some patients may receive both (ACS, 2020c).
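At its simplest, fractionation is the division of a prescribed total dose across multiple sessions. The sketch below uses a hypothetical 50.4 Gy, 28-fraction prescription purely to illustrate the arithmetic; actual doses, fraction sizes, and schedules are individualized by the radiation oncology team.

```python
# Illustrative arithmetic only: dividing a total radiation dose into fractions.
# The prescription values below are hypothetical examples, not recommendations.

def dose_per_fraction(total_dose_gy: float, num_fractions: int) -> float:
    """Dose delivered at each treatment session, in gray (Gy)."""
    return total_dose_gy / num_fractions

total_gy = 50.4   # hypothetical prescribed total dose
fractions = 28    # hypothetical number of once-daily treatment sessions

print(f"{total_gy} Gy in {fractions} fractions = "
      f"{dose_per_fraction(total_gy, fractions):.1f} Gy per fraction")
# Output: 1.8 Gy per fraction; at five weekday sessions per week, 28 fractions
# span roughly five and a half weeks.
```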
External beam radiation therapy (EBRT) delivers radiation from a source outside the body and is the most common type of radiation therapy used for CRC. Traditionally, radiation beams were only able to match the tumor's height and width, exposing more healthy tissue to the consequences of radiation. Over recent decades, 3-D conformal radiation therapy (3D-CRT) became the mainstay EBRT for CRCs. 3D-CRT is credited with the ability to reshape the radiation beam to match the shape of the tumor. Further advancements in imaging technology have led to more precise treatment mechanisms that allow even more of the radiation beam to reach the tumor. Intensity-modulated radiation therapy (IMRT) is a newer, highly conformal form of radiation that further reduces unintended exposure to healthy tissues. While 3D-CRT and IMRT are very similar in that they both target the tumor while sparing healthy tissue, IMRT allows for modulation of the radiation beam's intensity, delivering a higher radiation dose to a precise location. The enhanced targeting technology of IMRT allows for the delivery of higher radiation doses to the site of disease, thereby enhancing clinical outcomes and limiting side effects (Hathout et al., 2017). Stereotactic body radiation therapy (SBRT) is a technique in which extremely high biological doses of radiation are administered over a few short treatments. The target area is affected to a higher degree over a shorter period with minimal impact on healthy tissue (Jingu et al., 2018).
Brachytherapy and intraoperative radiation therapy (IORT) are among the most well-cited internal modalities. Brachytherapy involves implanting a wire, seed, pellet, or catheter into the body within or near the tumor. IORT is an intensive form of radiation administered directly into the target area during surgery while sparing the surrounding healthy tissues. The role of IORT is limited but may be considered as an additional radiation boost for patients with large tumors or those with recurrent tumors. IORT is also a viable treatment option for patients with CRC that has spread to the liver, causing predominant hepatic disease (Hathout et al., 2017; NCCN, 2020b).
Radiation Side Effects
Nurses caring for patients undergoing radiation therapy serve multifaceted roles in monitoring for and managing side effects. Radiation side effects depend on the specific area(s) of the body exposed and the dose received. Superficial skin irritation at treatment site is common and can include redness, blistering, and inflammation, giving skin the appearance of a sunburn. GI symptoms are common due to the tumor’s anatomical location and the impact of the radiation on surrounding tissues and structures. Common symptoms of GI toxicity include nausea, vomiting, diarrhea, anorexia, bowel incontinence, rectal irritation, rectal bleeding, blood in stools, and pain with defecation. Radiation to these regions can also lead to fluid volume deficit, electrolyte disturbances, weight loss, and malnutrition. Some patients may require placement of a feeding tube to ensure adequate nutrition and prevent cachexia. Nutrition is a core component of all forms of cancer treatment and largely influences the patient’s tolerance to therapy and toxicities. The placement of a percutaneous endoscopic gastrostomy (PEG) tube may be necessary for some patients. Radiation for CRC can sometimes indirectly affect underlying structures located within the radiation field, such as the bladder or prostate, causing cystitis (inflammation of the bladder), dysuria (painful urination), hematuria (blood in the urine), urinary incontinence, and loss of pelvic floor muscular strength. Systemic effects may include fatigue, weakness, dehydration, scarring, fibrosis, and adhesion formation (the tissues impacted by radiation stick together; ACS, 2020c; Yarbro et al., 2018).
For more information regarding nursing implications in radiation therapy, refer to the Oncology Nursing Part 1: Surgical and Radiation Oncology Nursing CE Course and earn five ANCC credits.
Chemotherapy, also called cytotoxic or antineoplastic therapy, refers to a group of high-risk, hazardous medications that destroy cancer cells throughout the body. Chemotherapy works by interfering with the normal cell cycle, impairing DNA synthesis and cell replication, thereby preventing cancer cells from dividing and multiplying (Yarbro et al., 2018). Chemotherapy serves a prominent role in treating CRC and can be used at various time points during the disease trajectory. Neoadjuvant chemotherapy intends to shrink a tumor so that the surgical intervention is less extensive. Neoadjuvant chemoradiation (concurrent chemotherapy and radiation therapy) is the gold standard for locally advanced rectal cancer, followed by surgical resection and then adjuvant chemotherapy (additional chemotherapy following surgery). Chemotherapy acts as a radiosensitizer, thereby rendering cancer cells more vulnerable to the toxic effects of radiation. After surgery, adjuvant chemotherapy aims to prevent cancer recurrence, reduce micro-metastases, and eradicate any remaining cancer cells. Palliative chemotherapy aims to relieve or delay cancer symptoms, enhance comfort, reduce symptom burden, and improve quality of life. CRC chemotherapy can include oral and/or intravenous formulations and typically consist of at least two or three medications. Some of the most common chemotherapy agents used for CRC are listed below (Itano, 2016; Olsen et al., 2019).
- 5-fluorouracil (5-FU)
- capecitabine (Xeloda)
- irinotecan (Camptosar)
- oxaliplatin (Eloxatin)
- trifluridine and tipiracil (Lonsurf) (ACS, 2020c; NCCN, 2020a, 2020b).
Chemotherapy Side Effects
Side effects of chemotherapy are inevitable due to the nonspecific nature of cytotoxic therapy; it simultaneously impacts healthy cells along with cancerous cells. However, side effects vary based on the drug type, dosage, duration of treatment, and specific patient factors. Not all patients respond in the same way, and not all chemotherapy agents pose the same risks. Assessment and education are the most critical components to ensuring timely recognition, intervention, and management of side effects as experienced by each patient. Many side effects, such as nausea, can be primarily thwarted by implementing appropriate prevention strategies and medications. As a group, the most common side effects include reduced blood counts (anemia, thrombocytopenia, neutropenia), fatigue, nausea, anorexia, alopecia (hair loss), mucositis (mouth sores), diarrhea, skin changes, and peripheral neuropathy (damage to the sensory nerves). Table 6 highlights a few of the unique side effects and important patient teaching points for each medication (Nettina, 2019; Olsen et al., 2019).
Targeted agents are a highly specialized treatment modality devised to attack specific parts of cancer cells. They work to prevent tumor development or to shrink existing tumors. Proteins (called growth factor receptors) connect the external and internal cellular environments and are essential for healthy cell growth and development. Alterations in genes lead to changes in these proteins, disrupting normal cellular processes and igniting an environment for cancer growth. While each type of targeted therapy has a distinct mechanism of action, they all interfere with the cancer cell’s ability to grow, divide, repair, and/or communicate with other cells. By directing their effects at tumor cell growth through specific targets, these therapies are considered less toxic to normal cells and tissues than traditional chemotherapy agents. However, cancer cells have the potential to become resistant to them, as they only block specific pathways of cancer growth. The most common targeted therapies used for the treatment of CRCs will be described in this section (Sengupta, 2017).
Epidermal growth factor receptor (EGFR) Inhibitors. EGFR is a protein found on some normal cells’ surface, which causes cells to divide when the epidermal growth factor binds to it. EGFR is found at abnormally high levels in certain types of CRCs, and activation of the EGFR accelerates tumor growth. EGFR inhibitors are a class of medications that impede the activation of EGFR, thereby blocking cancer growth. Cetuximab (Erbitux) and panitumumab (Vectibix) are monoclonal antibodies widely utilized in CRC treatment. They selectively bind to the extracellular component of the EGFR, thereby preventing cellular growth. Scientists analyze specific antigens on cancer cells (target) to determine a protein to match the antigen and then create a specialized antibody to precisely attach to the target antigen like a key fits a lock. The antibodies bind to the antigen and mark it for destruction by the immune system. Monoclonal antibodies work on cancer cells in the same way natural antibodies work by identifying and binding to the target cells and then alerting other cells in the immune system to the presence of the cancer cells (Sengupta, 2017). Cetuximab (Erbitux) is a chimeric monoclonal antibody, primarily made of a mouse (murine) protein with a (lesser) human protein component. It binds to extracellular EGFR resulting in inhibition of cell growth and induction of apoptosis. Panitumumab (Vectibix) is a fully human monoclonal antibody that inhibits ligand binding to the EGFR receptor resulting in inhibition of cell growth. There is a higher risk of infusion-related reactions with cetuximab (Erbitux) due to its higher murine content, which can induce rigors, chills, and fevers. Rarely, life-threatening hypersensitivity reactions can occur. The most common side effects of EGFR inhibitors include skin manifestations, particularly an acne-like rash on the face and chest. Other side effects can include fatigue, diarrhea, and headache (ACS, 2020c).
Vascular endothelial growth factor (VEGF) inhibitors. VEGF is a signaling protein that stimulates angiogenesis (the formation of new blood vessels) in both healthy and cancerous cells. Blood vessels carry oxygen and nutrients to the tissue for growth and survival. Tumors need blood vessels to grow and spread. Anti-angiogenesis is the process of inhibiting the formation of new blood vessels by blocking the VEGF receptors. Angiogenesis inhibitors (VEGF-inhibitors) target the blood vessels that supply oxygen to the tumor cells, ultimately causing them to starve by cutting off their nutrient supply. VEGF-inhibitors such as bevacizumab (Avastin) work by cutting off blood supply to cancer cells by interfering with the VEGF receptor, so tumors stay small and eventually die due to starvation. Bevacizumab (Avastin) is a humanized monoclonal antibody that binds to and inhibits the activity of human VEGF to its receptors, thereby blocking proliferation and formation of new blood vessels that supply tumor cells. Bevacizumab (Avastin) is commonly used as part of combination therapy for the treatment of CRC. VEGF inhibitors’ potential side effects include bleeding events, headache, hypertension, and proteinuria (protein spilling out in the urine due to increased pressure in the kidneys; Olsen et al., 2019).
Nurses should educate patients on the risk of hypertension while on treatment with VEGF-inhibitors. Patients may require blood pressure management with antihypertensives or may need an adjustment to their current antihypertensive medication regimen. Patients should also be counseled on ways to reduce their blood pressure through compliance with medications, dietary adjustments such as following a heart-healthy diet high in fiber, fruits and vegetables, and low in sodium, and engaging in regular cardiovascular exercise. VEGF-inhibitors are contraindicated within six weeks of surgery (preoperatively or postoperatively) due to an increased risk for major bleeding events, delayed wound healing, and fistula (an abnormal connection between two hollow spaces within the body). VEGF-inhibitors carry a boxed warning for bowel perforation (a hole in the intestines). Nurses must ensure patients are aware of this rare but serious side effect. Patients must report any sudden onset of severe and diffuse abdominal pain, an abdomen that is unusually firm or hard to touch, sudden onset bloating, nausea, vomiting, or rectal bleeding (Olsen et al., 2019).
BRAF V600E inhibitors. BRAF inhibitors are indicated only for patients whose tumors carry the BRAF V600E mutation. As mentioned earlier, BRAF-mutated disease is an aggressive subtype of CRC that carries a poorer prognosis, although the mechanism remains poorly understood. BRAF inhibitors have been extensively studied in the management of malignant melanoma; however, their activity in CRC is limited (Caputo et al., 2019; Korphaisarn & Kopetz, 2016). The BRAF inhibitors most commonly used for CRC are encorafenib (Braftovi), vemurafenib (Zelboraf), and dabrafenib (Tafinlar). These oral medications attack the abnormal BRAF protein directly. When given with cetuximab (Erbitux) or panitumumab (Vectibix), the combination has been shown to shrink metastatic CRC or slow its progression and to increase survival (Ducreux et al., 2019; NCCN, 2020a, 2020b). Common side effects include skin rash, abdominal pain, diarrhea, anorexia, nausea, joint pain, and fatigue. BRAF inhibitors increase the risk of developing new squamous cell skin cancers (SCCs), so nurses should educate patients on monitoring for and reporting any new skin lesions (ACS, 2020c).
Immunotherapy is a novel group of cancer treatments that stimulate the immune system to recognize and destroy cancer cells. Immunotherapy aims to produce antitumor effects by modifying the actions of the body’s natural host defense mechanisms to become more sensitive to cancer cells. Immune-based treatments work differently than chemotherapy, as they are highly specialized in their activity (Miliotou & Papadopoulou, 2018). Immunotherapy has emerged as an effective treatment strategy for patients with metastatic MSI-H or MMR-deficient (dMMR) disease whose cancer has progressed (grown) despite the use of standard chemotherapy. Immune checkpoint inhibitors work by blocking the receptors that cancer cells use to inactivate immune cells (specifically, T-cells). When this signal is blocked, T-cells can better differentiate between healthy cells and cancer cells, thereby augmenting the immune response against the cancer cells. Checkpoint inhibitors fall into two categories: (1) programmed cell death-1 (PD-1)/PD-ligand 1 (PD-L1) inhibitors, and (2) cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) inhibitors (Sasikumar & Ramachandra, 2018). Pembrolizumab (Keytruda), nivolumab (Opdivo), and ipilimumab (Yervoy) are the three immunotherapy drugs approved for metastatic CRC (Olsen et al., 2019).
Immunotherapy Side Effects
The most common side effects of immunotherapy include fatigue, nausea, anorexia, cough, diarrhea, arthralgias, myalgias, skin rash, pruritus, and flu-like symptoms. All immunotherapy agents carry boxed warnings for immune-related adverse events (irAEs), which can be fatal if left untreated. An immune response can impact any organ system, inducing nonspecific inflammation throughout the body that ranges from mild inflammatory effects to life-threatening reactions that develop without warning. The incidence of irAEs across clinical trials has ranged from as low as 15% to as high as 90% (Winer et al., 2018). Some of the most commonly reported irAEs include hepatitis (inflammation of the liver), endocrinopathies (inflammation of the thyroid and adrenal glands), nephritis (inflammation of the kidneys), and uveitis (inflammation of the eye). Less common but more severe and potentially fatal irAEs include pneumonitis (inflammation of the lung tissue), enterocolitis (inflammation of the bowel), and Stevens-Johnson syndrome (SJS), a rare, severe disorder of the skin and mucous membranes (Sasikumar & Ramachandra, 2018). Among the immunotherapy agents, PD-1/PD-L1 inhibitors are generally the best tolerated and pose a lower risk for irAEs. Compared to drugs that target PD-1 or PD-L1, serious side effects are much more likely with CTLA-4 agents such as ipilimumab (Yervoy; Stenger, 2018). There is an evolving role for combination therapy, in which two immunotherapy agents are given in tandem to provide a synergistic effect; however, this combination poses an even greater risk for severe irAEs. Immunotherapy is also given in combination with cytotoxic chemotherapy agents at times, further complicating an already complex side-effect profile (Winer et al., 2018).
Nursing care of the patient receiving immunotherapy requires cautious triage and continuous meticulous assessment to identify signs of potential irAEs, as a timely diagnosis is critical to ensure prompt response and reduce morbidity. Most irAEs are reversible with immunosuppressive steroid treatment but must be graded according to the Common Terminology Criteria for Adverse Events (CTCAE) Version 5 (v5.0) and managed per specific medication guidelines (NCI, 2020). Patient education is vital. Nurses must teach patients and caregivers about the importance of self-assessment and the immediate reporting of any symptoms. With pneumonitis, symptoms can range from mild cough and dyspnea to severe shortness of breath and life-threatening hypoxia. GI toxicity can range from mild diarrhea and abdominal cramping to severe colitis, which can be fatal if not managed. Skin toxicity may present initially as mild pruritus or dermatitis and can progress to SJS. SJS is characterized by a painful systemic red rash that leads to blistering and sloughing of the skin’s top layer. Life-threatening endocrinopathies can cause an abundance of symptoms that may vary widely, such as extreme weakness, excessive fatigue or lethargy, electrolyte disturbances, thyroid inflammation, and pituitary dysfunction (Olsen et al., 2019; Winer et al., 2018).
For a more detailed review of the principles of chemotherapy, immunotherapy, and specific nursing implications, refer to the following NursingCE courses:
- Oncology Nursing Part 2: Chemotherapy and Oncologic Emergencies (5 contact hours)
- Oncology Medication Administration for LPNs and RNs (7 ANCC contact hours)
American Cancer Society. (2017a). Understanding your pathology report: Colon polyps (sessile or traditional serrated adenomas). https://www.cancer.org/treatment/understanding-your-diagnosis/tests/understanding-your-pathology-report/colon-pathology/colon-polyps-sessile-or-traditional-serrated-adenomas.html
American Cancer Society. (2017b). Understanding your pathology report: Invasive adenocarcinoma of the colon. https://www.cancer.org/treatment/understanding-your-diagnosis/tests/understanding-your-pathology-report/colon-pathology/invasive-adenocarcinoma-of-the-colon.html
American Cancer Society. (2020a). Colorectal cancer facts & figures 2020-2022. https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/colorectal-cancer-facts-and-figures/colorectal-cancer-facts-and-figures-2020-2022.pdf
American Cancer Society. (2020b). Colorectal cancer risk factors. https://www.cancer.org/cancer/colon-rectal-cancer/causes-risks-prevention/risk-factors.html
American Cancer Society. (2020c). Treating colon cancer. https://www.cancer.org/content/dam/CRC/PDF/Public/8607.00.pdf
Berkowitz, Z., Zhang, X., Richards, T. B., Nadel, M., Peipins, L. A., & Holt, J. (2018). Multilevel small-area estimation of colorectal cancer screening in the United States. Cancer Epidemiology Biomarkers & Prevention, 27(3), 245-253. https://doi.org/10.1158/1055-9965.EPI-17-0488
BruceBlaus. (2013). The large intestine [image]. https://commons.wikimedia.org/wiki/File:Blausen_0604_LargeIntestine2.png
BruceBlaus. (2014). Colostomy types [image]. https://upload.wikimedia.org/wikipedia/commons/8/88/Blausen_0247_Colostomy.png
Cancer Research UK. (2014a). Part of the bowel removed with a transverse colectomy [image]. https://commons.wikimedia.org/wiki/File:Diagram_showing_the_part_of_the_bowel_removed_with_a_transverse_colectomy_CRUK_319.svg
Cancer Research UK. (2014b). Stoma with colostomy bag [image]. https://commons.wikimedia.org/wiki/File:Diagram_showing_a_colostomy_with_a_bag_CRUK_061.svg
Caputo, F., Santini, C., Bardasi, C., Cerma, K., Casadei-Gardini, A., Spallanzani, A., Andrikou, K., Cascinu, S., & Gelsomino, F. (2019). BRAF-mutated colorectal cancer: Clinical and molecular insights. International Journal of Molecular Sciences, 20(21), 5369. https://doi.org/10.3390/ijms20215369
The Centers for Disease Control and Prevention. (2020a). Basic information about colorectal cancer. https://www.cdc.gov/cancer/colorectal/basic_info/
The Centers for Disease Control and Prevention. (2020b). The BRCA1 and BRCA2 genes. https://www.cdc.gov/genomics/disease/breast_ovarian_cancer/genes_hboc.htm
Ducreux, M., Chamseddine, A., Laurent-Puig, P., Smolenschi, C., Hollebecque, A., Dartigues, P., Samallin, E., Boige, V., Malka, D., & Gelli, M. (2019). Molecular targeted therapy of BRAF-mutant colorectal cancer. Therapeutic Advances in Medical Oncology, 11, 1-15. https://doi.org/10.1177/1758835919856494
Goran, T. (2014). Layers of the intestinal wall [image]. https://commons.wikimedia.org/wiki/File:Layers_of_the_GI_Tract_english.svg
Hathout, L., Williams, T. M., & Jabbour, S. K. (2017). The impact of novel radiation treatment techniques on toxicity and clinical outcomes in rectal cancer. Curr Colorectal Cancer Rep, 13(1), 61-72. https://doi.org/10.1007/s11888-017-0351-z
Ibrahim, M. A., Hazhirkarzar, B., & Dublin, A. B. (2020). Magnetic resonance imaging (MRI) gadolinium. StatPearls. https://www.ncbi.nlm.nih.gov/books/NBK482487/
Innerbody Research. (2017). Cecum. [image]. https://www.innerbody.com/image_dige08/dige44.html
International Atomic Energy Agency. (n.d.). Computed tomography – what patients need to know. Retrieved October 16, 2020, from https://www.iaea.org/resources/rpop/patients-and-public/computed-tomography
Islami, F., Sauer, A. G., Miller, K. D., Siegel, R. L., Fedewa, S. A., Jacobs, E. J., McCullough, M. L., Patel, A. V., Ma, J., Soerjomatarm, I., Flanders, W. D., Brawley, O. W., Gapstur, S. M., & Jemal, A. (2018). Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States. CA: Cancer J Clin, 68(1), 31-54. https://doi.org/10.3322/caac.21440
Itano, J. K. (2016). Core curriculum for oncology nursing. (5th ed.). Elsevier
Jin, Y., Zhang, J., Zheng, M., Bu, X., & Zhang, J. (2019). Psychosocial behavior reactions, psychosocial needs, anxiety and depression among patients with rectal cancer before and after colostomy surgery: A longitudinal study. Journal of Clinical Nursing, 28(19-20), 3547-3555. https://doi.org/10.1111/jocn.14946
Jingu, K., Matsushita, H., Yamamoto, T., Umezawa, R., Ishikawa, Y., Takahashi, N., Katagiri, Y., Takenda, K., & Kadoya, N. (2018). Stereotactic radiotherapy for pulmonary oligometastases from colorectal cancer: A systematic review and meta-analysis. Technology in Cancer Research & Treatment, 17, 1-7. https://doi.org/10.1177/1533033818794936
Kahl, K. L. (2018). Nurses play integral role in newly issued colorectal screening guidelines. https://www.oncnursingnews.com/web-exclusives/nurses-play-integral-role-in-newly-issued-colorectal-screening-guidelines
Klinikum, O. (2013). Method of removing a polyp with a snare [image]. https://commons.wikimedia.org/wiki/File:Ortenau_Klinikum_Darmspiegelung.jpg
Korphaisarn, K., & Kopetz, S. (2016). BRAF-directed therapy in metastatic colorectal cancer. Cancer J, 22(3), 175-178. https://doi.org/10.1097/PPO.0000000000000189
Macrae, F. A., & Bendell, J. (2020). Clinical presentation, diagnosis, and staging of colorectal cancer. UpToDate. https://www.uptodate.com/contents/clinical-presentation-diagnosis-and-staging-of-colorectal-cancer
McCance, K. L., & Heuther, S. E. (2019). Pathophysiology: The biologic basis for disease in adults and children. (8th ed.). Elsevier.
McCullough, M. L., Zoltick, E. S., Weinstein, S. J., Fedirko, V., Wang, M., Cook, N. R., Eliassen, A. H., Zeleniuch-Jacquotte, A., Agnoli, C., Albanes, D., Barnett, M. J., Buring, J. E., Campbell, P. T., Clendenen, T. V., Freedman, N. D., Gapstur, S. N., Giovannucci, E. L., Goodman, G. G., Haiman, C. A., ... Smith-Warner, S. A. (2019). Circulating vitamin D and colorectal cancer risk: An international pooling project of 17 cohorts. Journal of National Cancer Institute, 111(2), 158-169. https://doi.org/10.1093/jnci/djy087
Miliotou, A. N., & Papadopoulou, L. C. (2018). CAR T-cell therapy: A new era in cancer immunotherapy. Current Pharm Biotechnology, 19(1), 5-18. https://doi.org/10.2174/1389201019666180418095526.
National Cancer Institute. (2020). Common terminology criteria for adverse events (CTCAE). https://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm
National Comprehensive Cancer Network. (2020a). NCCN clinical practice guidelines in oncology (NCCN guidelines®) colon cancer: Version 4.2020-June 15, 2020. https://www.nccn.org/professionals/physician_gls/pdf/colon.pdf
National Comprehensive Cancer Network. (2020b). NCCN clinical practice guidelines in oncology (NCCN guidelines®) rectal cancer: Version 6.2020- June 25, 2020. https://www.nccn.org/professionals/physician_gls/pdf/rectal.pdf
Nettina, S. M. (Ed.). (2019). Lippincott manual of nursing practice. (11th ed.). Wolters Kluwer.
Oh, M., McBride, A., Yun, S., Bhattacharjee, S., Slack, M., Martin, J. R., Jeter, J., & Abraham, I. (2018). BRCA1 and BRCA2 gene mutations and colorectal cancer risk: Systematic review and meta-analysis. J Natl Cancer Institute, 110(11), 1178-1189. https://doi.org/10.1093/jnci/djy148
Olsen, M., LeFebvre, K., & Brassil, K. (2019). Chemotherapy and immunotherapy guidelines and recommendations for practice. (1st Ed.). Oncology Nursing Society
Olson, A., Rencic, J., Cosby, K., Rusz, D., Papa, F., Croskerry, P., Zierler, B., Harkless, G., Giuliano, M. A., Schoenbaum, S., Colford, C., Cahill, M., Gerstner, L., Grice, G. R., & Graber, M. L. (2019). Competencies for improving diagnosis: An interprofessional framework for education and training in health care. Diagnosis, 6(4), 335-341. https://doi.org/10.1515/dx-2018-0107
OpenStax College. (2013). Components of the digestive system [image]. https://commons.wikimedia.org/wiki/File:2401_Components_of_the_Digestive_System.jpg
RadiologyInfo.org. (2018). Contrast materials. https://www.radiologyinfo.org/en/pdf/safety-contrast.pdf
Sasikumar, P. G. & Ramachandra, M. (2018). Small-molecule immune checkpoint inhibitors targeting PD-1/PDL1 and other emerging checkpoint pathways. BioDrugs, 35(5), 481-497. https://doi.org/10.1007/s40259-018-0303-4.
Selchick, F. (2020a). Age-specific CRC incidence rates in US per 100,000 individuals [image].
Selchick, F. (2020b). Comparison of colon and rectal cancers [image].
Selchick, F. (2020c). Pedunculated and sessile polyp [image].
Selchick, F. (2020d). Types of colorectal cancer treatment [image].
Sengupta, S. (2017). Cancer nanomedicine: Lessons for immune-oncology. Trends Cancer, 3(8), 551-560. https://doi.org/10.1016/j.trecan.2017.06.006.
Schwingshackl, L., Schwedhelm, C., Hoffman, G., Knuppel, S., Preterre, A. L., Iqbal, K., Bechthold, A., De Henauw, S., Michels, N., Devleesschauwer, B., Boeing, H., & Schlesinger, S. (2017). Food groups and risk of colorectal cancer. International Journal of Cancer, 142(9), 1748-1758. https://doi.org/10.1002/ijc.31198
Siegel, R. L., Miller, K. D., Sauer, A. G., Fedewa, S. A., Butterly, L. F., Anderson, J. C., Cercek, A., Smith, R. A., & Jemel, A. (2020). Colorectal cancer statistics, 2020. CA Cancer J Clin, 70(3), 145-164. https://doi.org/10.3322/caac.21601
Stanford Health Care. (n.d.). Colorectal cancer. Retrieved September 23, 2020, from https://stanfordhealthcare.org/medical-conditions/cancer/colorectal-cancer.html
Stenger, M. (2018). Ipilimumab in combination with nivolumab for MSI-H or dMMR metastatic colorectal cancer. https://ascopost.com/issues/september-10-2018/ipilimumab-in-combination-with-nivolumab-for-colorectal-cancer/
US Food & Drug Administration. (2018). MRI (magnetic resonance imaging). https://www.fda.gov/radiation-emitting-products/medical-imaging/mri-magnetic-resonance-imaging
US National Library of Medicine. (2020a). Familial adenomatous polyposis. https://ghr.nlm.nih.gov/condition/familial-adenomatous-polyposis
US National Library of Medicine. (2020b). Lynch syndrome. https://ghr.nlm.nih.gov/condition/lynch-syndrome#:~:text=Inheritance%20Pattern,sufficient%20to%20increase%20cancer%20risk
US Preventative Services Task Force. (2016a). Final recommendation statement: Aspirin use to prevent cardiovascular disease and colorectal cancer: Preventative medication. https://www.uspreventiveservicestaskforce.org/uspstf/recommendation/aspirin-to-prevent-cardiovascular-disease-and-cancer
US Preventative Services Task Force. (2016b). Final recommendation statement: Colorectal cancer: Screening. https://www.uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening
Vieira, A. R., Abar, L., Chan, D. S. M., Vingeliene, S., Polemiti, E., Stevens, C., Greenwood, D., & Norat, T. (2017). Foods and beverages and colorectal cancer risk: A systematic review and meta-analysis of cohort studies, an update of the evidence of the WCRF-AICR continuous update project. Annals of Oncology, 28(8), 1788-1802. https://doi.org/10.1093/annonc/mdx171
Winer, A., Bodor, J. N., & Borghaei, H. (2018). Identifying and managing the adverse effects of immune checkpoint blockade. Journal of Thoracic Disease, 10(Supple 3); S480-S489. https://doi.org/10.21037/jtd.2018.01.111
Yarbro, C. H., Wujcik, D., & Gobel, B. H. (Eds.). (2018). Cancer nursing: Principles and practice. (8th ed.). Jones & Bartlett Learning.
Yurgelun, M. B., Kulke, M. H., Fuchs, C. S., Allen, B. A., Uno, H., Hornick, J. L., Ukaegbu, C. I., Brais, L. K., McNamara, P. G., Mayer, R. J., Schrag, D., Meyerhardt, J. A., Ng, K., Kidd, J., Singh, N., Hartman, A., Wenstrup, R. J., & Syngal, S. (2017). Cancer susceptibility gene mutations in individuals with colorectal cancer. Journal of Clinical Oncology, 35(10), 1086-1095. https://doi.org/10.1200/JCO.2016.71.0012
IN DECEMBER 1991 PAUL KUNZ, a particle physicist at the Stanford Linear Accelerator Center in Menlo Park, CA, programmed the very first Web server in North America. I learned about it the following January, upon returning to SLAC from a year away in Washington, DC. "Have you seen the latest?" an enthusiastic colleague asked. "It's called the World Wide Web!"
At the time, 30 years ago, I didn’t grasp the tremendous historical significance of this event. Hardly anybody did.
This was the pivot point at which the Web truly became a worldwide system. Until that December, it was a European phenomenon focused at the European Center for Particle Physics (CERN) near Geneva, developed there by physicist-turned-programmer Tim Berners-Lee. In 1991 physicists who came to and worked at CERN from throughout the continent had begun to use the Web to share documents and data. An early “killer application” (or killer app) was the CERN phonebook, which could now be accessed via the Web from different computer platforms at the lab or externally via the Internet.
The SLAC server, set up on its mainframe computer, allowed physicists everywhere to access its valuable SPIRES database of preprints — as-yet-unpublished papers that had been submitted to scientific journals. Before that, it had been difficult to access this collection from beyond SLAC. Afterwards, accessing it became easy. Web traffic tripled in the two months after Kunz established the SLAC server, making SPIRES its new killer app. "A world of information is now available online from any computer platform," boasted Berners-Lee in CERN's computer newsletter that December. "Information sources at CERN and across the world span subjects from poetry to biochemistry and supercomputing."
Physicists then were largely unaware of its likely impacts, but a major rupture in world history occurred late that month. Meeting in Moscow on 26 December 1991, a day after Mikhail Gorbachev had resigned as president, the Supreme Soviet voted to dissolve the Union of Soviet Socialist Republics. The bipolar world order of the Cold War, which had been disintegrating since the fall of the Berlin Wall in November 1989, officially ended, to be replaced — at least during the next decade, before 11 September 2001 — by one in which democratic Westernized nations predominated, knit together increasingly via the Internet and Web.
“There can be no serious doubt that, in the late 1980s and early 1990s, an era in world history ended and a new one began.”— Eric Hobsbawm
As British historian Eric Hobsbawm observed in his epochal history of the twentieth century, The Age of Extremes, “There can be no serious doubt that, in the late 1980s and early 1990s, an era in world history ended and a new one began.” Physicists and programmers — especially at CERN and SLAC — were to play important roles in shaping the technological foundations of the emerging globalized world order.
Part One: Origins of the Web
The origins of the Web can be traced to the ARPANET, which emerged in the 1960s during the depths of the Cold War. This was a project of the US Defense Department's Advanced Research Projects Agency to build a decentralized, nationwide computer-communication system that could survive major disruptions (e.g., in a nuclear war) and continue functioning. During the 1980s, early computer networks such as BITNET, DECnet, ESNET and NSFNET emerged in the United States; by adopting ARPANET's Transmission Control Protocol and Internet Protocol (TCP/IP), they permitted computers on different networks to communicate with one another and led to the "network of networks," or Internet.
CERN had used TCP/IP for its internal networks, but it initiated external connections to the Internet only in January 1989. It proved the ideal site for the Web’s germination. During the 1980s, hundreds of physicists were arriving there from all over Europe — and increasingly from the United States — to do research on its Large Electron-Positron (LEP) collider and detectors. They brought with them a variety of computers, which communicated with one another over diverse systems and networks set up by the various experimental collaborations. Most would return to their home institutions, but they still needed to communicate via their computers with colleagues at CERN and elsewhere.
Enter Tim Berners-Lee, a British physicist turned computer programmer, who had first worked at CERN in 1980 as a consultant on a particle-accelerator control system. He returned in mid-decade as a CERN Fellow, which allowed him the freedom to follow his personal instincts in creating software. Thus, two months after the Internet reached the lab, he began to advocate marrying it with the point-and-click techniques of hypertext in a March 1989 memorandum titled "Information Management: A Proposal."
“CERN is a model in miniature of the rest of the world in a few years’ time,” he wrote. The lab “meets now some of the problems the rest of the world will have to face soon.” By marrying hypertext and the Internet, he envisioned, users could point to the text of references on a computer screen and “skip to them with the click of a mouse.”
The following year he teamed with Belgian software engineer Robert Cailliau, who had been working at CERN since the 1970s and then headed its Office of Computing Systems. Also interested in hypertext, he had much more experience obtaining CERN resources. Together they conceived an audacious name for their initiative and in October 1990 issued a document titled “World Wide Web: Proposal for a HyperText Project.”
Berners-Lee and Cailliau had been able to procure expensive NeXT computers — an innovative workstation marketed by the firm Steve Jobs founded after leaving Apple in 1985. It had a graphical user interface (GUI) and powerful programming software that helped Berners-Lee to develop the Web software in a few creative months, including core features we now take for granted, such as the HyperText Transfer Protocol (HTTP) and HyperText Markup Language (HTML). One computer would act as the "server," providing useful digital information, while other computers acted as "clients" or "browsers" seeking this information. By Christmas 1990, they had the system up and running, with the world's first Web server (nxoc01.cern.ch) residing on Berners-Lee's NeXT computer and Cailliau accessing it from his machine.
But given their $6,500 cost (equivalent to about $15,000 today), NeXT computers were extremely rare in high-energy physics, well beyond the financial resources of nearly all in the field. So the two men encouraged a young CERN intern, Nicola Pellow, to program what they called a “line-mode browser” — a simpler Web interface than the NeXT computer’s elaborate point-and-click GUI browser. It could be employed on many other computers used at CERN, so that “no matter what machine someone was on, he would have access to the Web,” recalled Berners-Lee. And he succeeded in getting a Web gateway established to the CERN phonebook database, providing easy access to an information resource that nearly everyone at the lab used almost daily. “Mundane as it was,” wrote Berners-Lee a decade later, “this first presentation of the Web was, in a curious way, a killer application.”
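To make the client–server idea concrete, here is a minimal sketch — in modern Python, purely for illustration, not Pellow's or Berners-Lee's actual code — of what a line-mode-style client does under the hood: open a connection, send a plain-text HTTP request for a document, and print whatever HTML comes back. The host example.com is just a stand-in.

```python
import socket

HOST, PATH = "example.com", "/"

# Open a TCP connection to the server -- the Internet layer the Web rides on.
with socket.create_connection((HOST, 80)) as conn:
    # An HTTP request is plain text: a method, a path, and a few headers.
    request = (
        f"GET {PATH} HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    conn.sendall(request.encode("ascii"))

    # The response is text too: a status line, headers, then the HTML document.
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks).decode("utf-8", errors="replace")[:500])
```

Everything a graphical browser adds — rendering the HTML, displaying images, letting you click a link — is layered on top of this same simple request-and-response exchange.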
“By 1990 CERN had become the largest Internet site in Europe,” recalled Ben Segal, who had served as the lab’s network czar during the 1980s. The laboratory “positively influenced the acceptance and spread of Internet techniques both in Europe and elsewhere.” And the Web soon was the major reason for institutions to gain access to the Internet. In March 1991, for example, the Internet became accessible in Czechoslovakia, Hungary and Poland after the US National Science Foundation extended the NSFNET backbone into these Eastern Bloc countries. And the previous year, Russian physics labs had established a link to the Internet via direct phone lines to Finland (this link surreptitiously kept the outside world appraised of events in Moscow during the August 1991 Soviet coup). European physicists involved in CERN research were thus among the earliest to take advantage of Internet access, many of them via the Web. In networking and software, CERN led Europe onto the emerging “information superhighway.”
That was a popular name for the Internet championed by Vice Presidential candidate Al Gore during the 1992 US national elections. As a Tennessee Congressman and Senator, he had foreseen the vast economic, social, and cultural impacts of the new network technology. And he helped enable its further evolution by sponsoring the High-Performance Computing Act of 1991, which the US Congress passed that autumn and President George H. W. Bush signed into law on 6 December 1991. It was undoubtedly a pivotal month in world history.
Three months earlier, Paul Kunz had passed through CERN on his way home to SLAC from Sweden and met there with Berners-Lee, who showed him the Web. Kunz quickly grasped its potential after seeing how it could be used to communicate over the Internet with his NeXT workstation back at SLAC. “I was really excited,” he recalled. “I told Tim I was going to put SLAC’s SPIRES database on the Web as soon as I got home.”
Just after returning, Kunz visited SLAC librarian Louise Addis. “I’ve just been at CERN and found this wonderful thing a guy named Tim Berners-Lee is developing,” he told her. “It’s called the World Wide Web, and it’s just the ticket for what you guys need for your database.” But it took more than two months to set it up, in part because Kunz delegated the work to SLAC programmer Terry Hung and because the server was to be established on the SLAC IBM-VM mainframe computer. So Addis and Berners-Lee pressed Kunz back into service to finish the job.
On 12 December 1991, the first Web server outside of Europe (slacvm.slac.stanford.edu) went online, making SPIRES available throughout the world via the Internet. Less than 24 hours later, Berners-Lee announced its existence on the WWW interest-group bulletin board: "There is an experimental W3 server for the SPIRES High Energy Physics preprint database, thanks to Terry Hung, Paul Kunz and Louise Addis at SLAC."
Exactly two weeks later, the Soviet Union collapsed, and the Cold War officially ended. Few physicists recognized the global significance of this epochal event, however, continuing their research as if nothing important had happened. The Superconducting Super Collider (SSC) Project in Texas, for example, was then preparing to ask Congress to approve a $2.35 billion increase in its construction cost, to $8.25 billion.
By the summer of 1992, three more high-energy physics labs had set up Web sites: the Fermi National Accelerator Laboratory (Fermilab) west of Chicago; the Deutsches Elektronen-Synchrotron (DESY) lab in Hamburg; and the Dutch National Institute for Subatomic Physics in Amsterdam. And the MIT Laboratory for Computer Science, where Berners-Lee visited that summer, established a server in June. Others soon followed as he and Cailliau hit the lecture circuit to promote their networking software.
A signal event was the 1992 Computing in High-Energy Physics Conference held in Annecy, France, that September. There they gave Web demonstrations and a prominent lecture titled "World Wide Web," in which they showed a map of the rapidly growing number of Web servers in Europe and North America, plus others planned for Asia and Australia — then about 20 in all. The presentation was the principal highlight of the gathering for Terry Schalk, a particle physicist at the University of California, Santa Cruz, who, in summarizing the meeting, remarked that "if there is one thing everyone should carry away with them from the conference, it is the World Wide Web."
By November 1992 there were at least 42 Web servers worldwide, most of them at high-energy and nuclear physics laboratories in Europe and the United States. What was needed then for the Web to gain much wider acceptance were GUI browsers that functioned on the personal computers and workstations then commonly in use. Cailliau and Pellow had been programming such an Apple Macintosh browser on and off for almost a year, and a group of students working with them was developing an IBM-PC browser. But these fledgling efforts suffered from the lack of high-level management support.
Thus it was CERN outsiders who ended up developing the first useful GUI browsers, at Cornell, UC Berkeley and SLAC. In the summer of 1992, physicist Tony Johnson — a member of an informal group calling themselves the “SLAC WWW Wizards” — was developing such a browser for computers with Unix operating systems, which he dubbed Midas (or MidasWWW). Like the GUI browser that Berners-Lee had programmed on his NeXT workstation, it could display images in separate windows. Midas was becoming popular at SLAC, but Johnson was initially reluctant to make it widely available, given the additional work that would be required to maintain it. He finally released the browser on the FREEHEP server in mid-November, announcing it to the growing Web community on the WWW-talk newsgroup.
Among the first to download this software and try it out was Marc Andreessen, a young computer-science undergraduate at the University of Illinois. He had become enamored of the Web that month while working at the National Center for Supercomputer Applications (NCSA) on campus, developing computer-visualization software. He emailed Johnson about it the next day. “Midas WWW is superb!” he exclaimed. “Fantastic! Stunning! Impressive as hell!”
Andreessen suggested they collaborate on an improved version incorporating graphic files, animations, and other advanced features. Intrigued at first, Johnson turned down the offer after considering how such an intense programming activity would conflict with his physicist day job. So Andreessen teamed instead with NCSA staff member Eric Bina to develop an X-Windows browser for Unix-based computers. After a feverish two months of programming activity that included many pizza-fueled all-nighters, they released what they dubbed NCSA X Mosaic 0.5 on 23 January 1993 (three days after the Clinton-Gore inauguration). “Brilliant!” Berners-Lee exclaimed on the WWW-talk newsgroup. “Having the thing self-contained in one file makes life a lot easier.”
Over the next two months, Andreessen and Bina released several more versions of X Mosaic in rapid succession. By early March, their browser could include embedded graphics on computer screens for the first time — rather than having to open separate windows to display them. This was a pivotal salient into commercial territory. Whereas early Web pages had been dominated by text (with underlined text for hypertext links), they could now resemble glossy magazine pages. The advertising industry began to take notice.
That April NCSA officially released X Mosaic version 1.0. But Unix-based computers were largely the domain of the academic community or government and industrial laboratories. To commercialize the Web, companies and ordinary users needed browsers that functioned on Macintosh and IBM-PC computers. Other NCSA programmers were developing such browsers — which could also support embedded graphics. In the autumn of 1993, NCSA released Mosaic browsers for Mac and IBM-PC computers to great fanfare. These easy-to-install browsers, with superior graphics capabilities, were soon being downloaded by the thousands every month. (Now I could finally access the Web from my PowerMac computer, not just on the SLAC mainframe.)
The New York Times was among the first mainstream media outlets to pick up on the news. "A new software program available free to companies and individuals is helping even novice computer users find their way around the Internet," announced technology reporter John Markoff on the front page of the 9 December 1993 Business Day section. "Since its introduction earlier this year, the program, called Mosaic, has grown so popular that it is causing data traffic jams on the Internet."
Together the Mosaic browsers and Web provided ready on-ramps to the information superhighway for the rapidly expanding legions of personal-computer users. With a burgeoning audience now available, the number of Web sites literally exploded, to more than 500. By the end of December 1993, even the White House had one.
“By 1994 the centre of gravity of the World Wide Web had crossed the Atlantic to a place where the entrepreneurial heart beats stronger,” observed Cailliau and James Gillies in their 2000 book, How the Web Was Born. To be more exact, it had shifted to Silicon Valley — into which SLAC physicists had introduced the Web two years earlier.
Although few high-energy physicists anticipated the Web's commercial potential, Marc Andreessen surely did. In early 1994, he headed west to Silicon Valley, where he joined venture capitalist Jim Clark in establishing Netscape Communications Corporation and recruiting fellow NCSA programmers, including Eric Bina. By year's end, this team had developed the Netscape Navigator browser, which eclipsed Mosaic within a few months, becoming the dominant Web browser. After the firm went public in August 1995, its stock value quickly tripled to $58, giving it a market value of nearly $3 billion and making Andreessen a millionaire many times over.
CERN workers had pioneered an early keyword-search function and the WWW Virtual Library of interesting Web sites, but these were soon swamped by commercial operations established by Stanford University graduate students who became early adopters of the new networking software that had touched down at SLAC a few years earlier. In 1994 electrical-engineering graduate students David Filo and Jerry Yang began compiling an online list of their favorite servers, which they dubbed Yahoo! after hundreds were accessing it regularly. Soon it was experiencing millions of hits a day, so they found a local venture-capital firm to provide money and management know-how and to take the company public — which happened in April 1996, to the tune of $34 million.
Several early attempts to establish Web search engines, each with its own strengths and weaknesses, had proved frustrating. But in 1995 to 1996, two Stanford computer-science graduate students, Larry Page and Sergey Brin, hit upon a novel idea: to rank a Web site according to the other sites linked to it. Their “PageRank” algorithm formed the basis of the new Google search engine, originally released on Stanford University’s Web site in August 1996. Incorporated in 1998, the firm Google, Inc. (now Alphabet, Inc.) has since come to dominate searching and other aspects of the Web. The corporation finally went public in August 2004, at a market value of $23 billion.
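The insight is compact enough to sketch in a few lines. The toy Python example below — a basic power iteration with a damping factor over a made-up three-site graph, an illustration of the idea rather than Google's production code — lets each page repeatedly pass a share of its score to the pages it links to, so pages that collect links from other well-linked pages rise to the top. The site names are just labels for the sketch.

```python
# Toy link graph: each page maps to the pages it links to.
links = {
    "slac.stanford.edu": ["cern.ch"],
    "cern.ch": ["slac.stanford.edu", "info.cern.ch"],
    "info.cern.ch": ["cern.ch"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

# Power iteration: repeatedly share each page's rank among the pages it links to.
for _ in range(50):
    new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

In this toy graph, cern.ch ends up ranked highest because both other sites link to it; applied to billions of pages, that same signal is what made the new search engine's results stand out.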
A common element in this explosion of commercial Web activity was the proximity of the Valley’s venture-capital industry headquartered just across Sand Hill Road from SLAC in the Sharon Heights district of Menlo Park — for example, Kleiner-Perkins, Menlo Ventures and Sequoia Capital. Technology financiers at these firms began viewing computer software as a high-growth, high-tech industry after the 1980s successes of Apple and Microsoft. With innovative Web-based software firms bursting out literally at their feet, they were quick to fund them with much-needed capital and the management expertise required to promote rapid growth and to take them public.
Nothing like this venture-capital industry existed near CERN, despite several major international banks headquartered in Geneva and other Swiss cities. And to my knowledge, no SLAC WWW Wizard ever crossed Sand Hill Road to seek venture financing. Just steps away, this Sharon Heights financial community was a distinctly different culture from the atomic and high-energy physics research efforts consuming SLAC scientists, engineers and programmers daily. Thus high-energy physicists largely missed the vast commercial promise of the Web (and in certain prominent cases resisted it).
By 1994 Berners-Lee could see the handwriting on the wall, especially after the first International Conference on the World Wide Web held at CERN that May, which attracted nearly 400 attendees and was dubbed “the Woodstock of the Web.” With commercialization came the possibility of the Web’s fragmentation, as powerful business interests tried to pull it in specialized directions more amenable to individual profit motives. He too crossed the Atlantic that year, in September, to assume a position at MIT’s Laboratory for Computer Science as director of the World Wide Web Consortium, which involved both academic Web users and commercial purveyors of Web software and services. CERN had initially agreed to serve as the European headquarters of the consortium, but it stepped aside in favor of the French computer-science organization INRIA after deciding that December to proceed with the costly Large Hadron Collider project. After that, the principal role of high-energy physicists in fostering the Web’s further evolution became that of pioneering its use in the coordination and management of global-scale activities and projects. (see “Going Global 2“).
Top Photo: Paul Kunz (left) in his SLAC Office with Tim Berners-Lee in 2000, with Paul’s NeXT computer between them. (Courtesy of SLAC)
A condensed version of this article, titled “Going Global: What the Web Has Wrought,” has been published in the January 2022 issue of Physics World.
1. Tim Berners-Lee, “Information Management: A Proposal,” March 1989.
2. Tim Berners-Lee with Mark Fischetti, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. (New York: Harper Collins, 1999).
3. James Gillies and Robert Cailliau, How the Web Was Born: The Story of the World Wide Web. (Oxford, UK: Oxford University Press, 2000).
4. Ben Segal, “A Short History of Internet Protocols at CERN.”
5. Bebo White, “The World Wide Web and High-Energy Physics,” Physics Today (November 1998).
Orcas Island physicist and writer Michael Riordan is author of the award-winning 1987 book The Hunting of the Quark and coauthor of The Solar Home Book (1977), The Shadows of Creation (1991), Crystal Fire (1997) and Tunnel Visions (2015). His articles and essays have appeared in the New York Times, Seattle Times, Scientific American, and many other publications. He serves as Editor of Orcas Currents.
3 thoughts on “Going Global: The World Wide Web and Globalization”
I truly enjoyed reading your excellent history of the World Wide Web, Michael! The World Wide Web transformed the biological sciences during my career at the University of California at Berkeley.
Thanks, Janet. It turns out that UC Berkeley also had a role to play in the early history of the Web, but I did not mention it here. A student named Pei Wei programmed the ViolaWWW (or just “Viola”) browser in 1992 for Unix-based computers. Like Tony Johnson’s Midas, it influenced Marc Andreessen in the development of the NCSA Mosaic browser during the winter of 1992-1993.
And the rest, as they say, is history!
The WWW—Tim-Berners-Lee—CERN story reminds me of
the arXiv—Paul-Ginsparg—Los Alamos story involving another physicist who, by establishing the online preprint depository arXiv for physics (and mathematics) at Los Alamos National Laboratory, made physics research more democratic. Ginsparg was recently given the first Einstein Award from the Einstein Foundation in Berlin (see https://www.einsteinfoundation.de/en/award/) for his contribution. The physics/mathematics arXiv now lives at Cornell University.
School For an Ultrasound Technician Degree in Miami
One of the highlights at the 2018 Sundance Film Festival was an amazing documentary centered on the lives of four young residents of Pahokee, a Palm Beach County town that is about an hour away from Mar-a-Lago seaside resort, often called the “Winter White House.” Located on the shores of Lake Okeechobee, Pahokee holds the dubious distinction of being one of the poorest towns in the Sunshine State, not the most promising setting for teenagers with hopes and dreams; notwithstanding this socioeconomic disadvantage, one of the documentary’s subjects, Na’Kerria Nelson, stands out because of her sheer drive towards becoming a sonographer.
Diagnostic Medical Sonography (DMS) is an imaging technique practiced by sonographers, also known as ultrasound technicians or ultrasound image technicians. Medical sonography, a branch of diagnostic medical imaging, uses non-ionizing ultrasound to produce two and three-dimensional images of human anatomy. Sonographers use special imaging equipment that directs sound waves into a patient’s body in procedures commonly known as an ultrasound, sonogram, and in the case of diagnostic heart imaging, echocardiogram. The goal of this imaging technique is to assess and diagnose various medical conditions.
Diagnostic sonography, or ultrasonography as it is often known, is an ultrasound-based imaging technique that allows medical professionals to visualize subcutaneous body structures including tendons, muscles, joints, blood vessels, and internal organs for possible pathology or lesions. One particularly common use of ultrasound technology is in the field of prenatal care, obstetric sonography is commonly used during pregnancy.
What Does A Diagnostic Medical Sonographer Do?
While sonographers might be most commonly thought of as monitoring the health and development of a fetus or determining the sex of a developing baby, they are highly specialized and trained allied health professionals. Ultrasound technologists work closely with physicians.
They are trained to help identify and diagnose a disease or use ultrasound waves in the treatment of injuries such as in physical therapy. Beyond the education and training required to be a sonographer, so-called “soft skills” are required as they work closely with patients who might be in an emotional or frightened state. Our graduates report that the work is very rewarding.
How is Diagnostic Medical Sonography Used in the Health Care Fields?
Sonography is widely used to perform both diagnostic and therapeutic procedures. Ultrasound can guide procedures such as biopsies or drainage of fluid collections. Sonographers are medical professionals who perform scans to be interpreted by radiologists, specialists who specialize in the application and interpretation of a wide range of medical imaging modalities, or by cardiologists in the case of cardiac ultrasonography, also known as echocardiography.
Sonographers typically use hand-held probes, called transducers, which are placed directly on patients. In modern clinical practice, healthcare professionals are relying more frequently on ultrasound technology in medical offices and hospitals because of its efficient, low-cost and dynamic imaging, which facilitates the planning of treatments while avoiding exposure to ionizing radiation.
Diagnostic medical sonography is used in medical fields such as anesthesiology, cardiology, emergency medicine, gastroenterology, colorectal surgery, gynecology, head and neck surgery, otolaryngology, neonatology, neurology, obstetrics, ophthalmology, pulmonology, urology, and others.
Specialties in Sonography
Since people often only consider Sonographers as medical professionals they need when expecting, we wanted to highlight some of the fascinating and varied ways this career can develop if you choose to pursue a specialty in sonography.
Obstetric and Gynecologic Sonography, OB/GYN
These technicians perform both transabdominal and transvaginal ultrasounds to detect and sometimes treat, abnormalities, such as ovarian cysts or uterine fibroids. But most commonly, they determine the presence of an embryo/fetus inside the uterus of a woman and continue to monitor the development of the fetus.
Cardiovascular and Vascular Technologists, RDCS
These sonographers perform non-invasive ultrasounds and assist in making diagnoses based on the images they collect, but they may also perform invasive procedures, such as inserting cardiac catheters and conducting stress tests.
Breast Sonography, BR
Breast ultrasounds can see all layers and angles of the breast, the images that breast sonographers can obtain play an important role in the early detection of breast cancer among female and male patients.
Abdominal Sonography, AB
Abdomen sonography involves taking ultrasounds of the organs and soft tissues in the abdominal region, including the liver, spleen, kidneys, pancreas, and gallbladder. Abdominal ultrasounds help detect and diagnose a variety of conditions, from kidney stones and gallstones to pancreatic cancer and cirrhosis of the liver.
Echocardiography and Pediatric/Fetal Echocardiography, AE/PS/FE
Echocardiography is the process of performing ultrasounds on the heart and its surrounding structures. In producing these echocardiograms or electrocardiograms, cardiac sonographers help diagnose cardiovascular diseases and assess the overall health and function of the heart.
Musculoskeletal Sonography, MSKS
This is an emerging form of ultrasound increasingly popular in the fields of sports medicine and rheumatology. MSK sonographers help diagnose musculoskeletal injuries and diseases and also monitor the progression or treatment of those conditions.
Professional Outlook for Diagnostic Medical Sonographers
According to the United States Bureau of Labor Statistics, demand for diagnostic medical sonographers is expected to be solid in the coming decade, and a major reason for this occupational trend is that ultrasound technology continues to evolve, and it is bound to become a more common method used to assist in diagnosing conditions. Whenever possible, sonography will always be favored over more invasive procedures.
The median salary for diagnostic medical sonographers in 2017 was $65,620 per year, and the median hourly pay was $31.55 per hour. While there were already 67,300 professionals working in sonography jobs in 2016, employment in this sector is expected to rise by a respectable 23 percent by the year 2026. This amounts to an occupational growth that is considerably higher than the average for all occupations. The anticipated demand as of 2019 is about 22,400 trained diagnostic medical sonographers over the next seven years.
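The annual and hourly figures above are two views of the same number. A quick back-of-the-envelope check in Python — assuming a standard full-time year of 40 hours × 52 weeks = 2,080 hours, which is an assumption rather than a BLS definition quoted here — ties them together:

```python
# BLS 2017 median pay for diagnostic medical sonographers, quoted above.
median_annual_pay = 65_620            # dollars per year
hours_per_year = 40 * 52              # 2,080 hours in a standard full-time year

implied_hourly = median_annual_pay / hours_per_year
print(f"Implied hourly pay: ${implied_hourly:.2f}")  # about $31.55, matching the quoted figure
```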
Work Environment for Diagnostic Medical Sonographers
Most sonographers work in hospitals; some also work in physicians’ offices or diagnostic imaging clinics. Sonographers may be required to stand for long periods and may also need to lift or turn disabled patients. Since this is a profession that requires direct contact with patients, it is also ideal for professionals to keep a pleasant and easygoing manner and be able to positively and easily interact with people of all ages and backgrounds.
Most of the work is done at diagnostic imaging machines in dimly lit rooms, but sonographers may also perform procedures at patients’ bedsides. Full-time sonographers work about 40 hours a week, while part-time sonographers may work weekends, nights, or holidays. Due to high demand, however, most are hired on a full-time basis.
For more information about the sonographer profession, contact an admissions counselor at Florida National University. This is a school properly accredited by the American Registry of Radiologic Technologists to grant associate of science degrees in this field, which means that FNU complies with national standards. Working students can take advantage of online courses, and our financial aid counselors can help you apply for grants, loans and scholarships.
Diagnostic Medical Sonographer Technology – Associate of Science Degree at FNU
The Associate of Science program in Diagnostic Medical Sonographer Technology is designed to prepare students for performing ultrasound procedures in clinical settings. This program trains students to effectively create and interpret sonographic images. The curriculum includes the following courses for a minimum of 87 credits:
* General Education Requirements (23 Credits)
* Communications (6 Credits)
- ENC 1101 – English Composition I – 3 credits
- SPC 1017 – Fundamentals of Oral Communication – 3 credits
* Humanities (3 Credits)
* Mathematics (3 Credits)
- MAC 1105 – College Algebra I – 3 credits
* Natural Science (7 Credits)
- PHY 1100C General Physics – 3 credits
- BSC 1020C Human Biology – 3 credits
* Computers (4 Credits)
- CGS 1030 Introduction to Information Technology – 4 credits
- SLS 1501 College Study Skills – 0 credits
* Core Requirements (64 Credits)
- HSC1000C Introduction to Health Care – 3 credits
- HSC1531C Medical Terminology – 3 credits
- BSC 1085C Anatomy & Physiology I – 4 credits
- BSC 1086C Anatomy & Physiology II – 4 credits
- HSC 1230L Patient Care Procedures – 2 credits
- SON 2140C Axial Anatomy I – 3 credits
- SON 2146C Axial Anatomy II – 3 credits
- SON 2614C Physics in Ultrasound – 2 credits
- SON 2807L Pre-Clinical Sonographic Practice – 2 credits
- SON 2616C Sonography Equipment Operation and Image – 3 credits
- SON 2170C Introduction to Cardiovascular System – 3 credits
- SON 2111C Abdominal Ultrasound – 3 credits
- SON 2116C Abdominal Pathology – 3 credits
- SON 2117C Artifacts in Ultrasound – 1 credit
- SON 2121C Obstetric/Gynecology Ultrasound I – 3 credits
- SON 2122C Obstetric/Gynecology Ultrasound II – 3 credits
- SON 2125C Gynecology Pathology – 2 credits
- SON 2126C Obstetric Pathology – 3 credits
- SON 2141C Small Parts Ultrasound – 3 credits
- SON 2804C Clinical Practicum in Ultrasound I – 3 credits
- SON 2814C Clinical Practicum in Ultrasound II – 3 credits
- SON 2955L Journal in Ultrasound Practice – 5 credits
- SON 2935 Special Topics in Sonography – 0 credits
To become a professional diagnostic medical sonographer, students must complete an exam to obtain certification from an accredited school like FNU. After completion of the graduation requirements, students may appear for the American Registry for Diagnostic Medical Sonography (ARDMS) registry exam.
The Medical Sonography certificate program takes 24 months to complete. It involves an exciting externship where you get hands-on training that helps sharpen your skills. This certification helps fast-track your path to finding a job while also allowing you to balance your professional or family commitments.
Throughout this program, students will be required to conduct 800 hours of clinical practice. At the end of the program, graduates will be able to conduct ultrasound procedures in the abdomen, pelvis, and appendages.
Particular emphasis in the training program will be given to the abdominal and pelvic anatomy as well as obstetrical and fetal evaluations. The program prepares students to pass the national certification exam for ultrasound technicians and diagnostic medical sonographers.
Become an Ultrasound Technician
Start off your career in a growing field with enormous employment demand and excellent earning potential. Contact FNU today to find out how.
So we just completed another Stream of this Course about 2 weeks ago. As always, watching the project defence was the highlight of the Course. It is always amazing seeing different designers developing fashion collections from just an idea – same theme, different interpretations! And this particular one was just as exciting as the others!
Running this Course has always proven something over and over again. Creativity may not be in-built but it can be nurtured and learnt. Some people have been sketching right from the minute they were born, others have to learn it from scratch! But one thing is certain. Everyone has an innate creative ability. We just have to know how to tap into it! And that is what this Course is about!
Our Design, Fashion Illustration & Computer-Aided Design Course is literally 3 courses in 1. It teaches how to develop design ideas from scratch, how to draw these ideas out and how to edit the illustrations using design software.
So if you:
– have no idea how fashion designers operate and develop fashion collections, OR
– you want to be creative, OR
– you find yourself always having to copy people’s designs without any knowledge of how to create concept boards, mood boards, story boards OR
– you do not have a portfolio for use during client consultations or for fashion competitions or admissions, OR WORSE
– you dream these designs but you do not know how to sketch them out on paper much less bring them to life…
Then you definitely need this course.
What the Course entails:
The 3 parts to the Course are:
Part 1: Design & Concept Development
Part 2: Manual Illustration; and
Part 3: Digital Illustration
In Part 1, students are taught how to create a fashion collection from an idea and how to build upon those ideas using the tools of the design process. They are given a theme for their project, which they are expected to build on and defend at the end of the Course. They learn how to create storyboards, mood boards, concept boards, design briefs, technical drawings, etc. through taught modules and a series of assignments and research exercises that enable them to discover who they truly are as designers.
In Part 2 they are taught how to sketch manually – LITERALLY from scratch using circles and straight lines, how to draw fashion figures, and how to draw different design and clothing details in order to properly communicate their ideas to third parties.
In Part 3, students learn how to edit their manually drawn illustrations using software that enables them to convert their illustrations from okay to fab! Take this picture for example.
It’s the exact same face in 3 different skin tones, without having to draw out the same face 3 times! And you can make slight edits to your foundation to make it look truly unique! I am still torn on which skin tone is my favourite! 😀
Fees & Duration:
The full Course lasts 8 weeks and there is a weekday and a weekend option available. The weekday option is on Tuesdays & Thursdays from 10am – 2pm while the weekend option is on Saturdays only from 10am – 6pm.
The fees for the full course are N90,000 and this includes all your training materials and drawing kit. You, however, need an iOS or Android-enabled tablet and/or smartphone and a capacitive stylus (don't worry, these are really cheap).
There are options to the full Course though… for example if you are a fab illustrator who just wants to learn digital illustration, you can sign up for just that. Here are the variations.
Part 1 only: N35,000
Parts 2 & 3 only : N75,000
Part 2 OR Part 3 only: N60,000
Parts 1 & 2 OR Parts 1 & 3 only: N70,000
To register for the option of your choice, simply pay the fees by cash or online transfer into:
Martwayne Limited, 023 710 3843, GTBank
All classes take place at our Training Centre, off Alhaji Masha Road, in Surulere Lagos – about 2 minutes from the National Stadium.
For more details, you can contact us via telephone on 0818 395 8856 or simply click on any of the links to:
Excellent! We look forward to you joining us! Enjoy more pictures from our students in the last class! And remember…
This could also be YOU! It’s time to #liveyourfashiondreams and become the fashion designer you were destined to be!
And we were live on Facebook :-D Enjoy the live feed of Martwayne on 98.9 Kiss FM Lagos on 7 July 2017
Yes we were! Actually to be honest, this is the 2nd week we have been live on both Facebook and Instagram. I haven’t posted the one that happened 2 weeks ago. I’ll do that sometime this week.
Why did we decide on live feeds?! Well, we realized people may not be able to tune into the show on 98.9 Kiss FM, perhaps because of work or the fact that they may be outside Lagos, so we brought the show to them through other means! And I kid you not, the response has been great! So soon enough, we will also be live on YouTube once I get the settings right! 😀
SO what did we discuss last week?!
Work from @ms_ellysha, a student in the last stream…
So we were talking about the upcoming course this month at Martwayne – the Design, Fashion Illustration & Computer-Aided Design Course. It is no news that we have some great courses coming up, and this one has been one of our most popular Courses.
Last week, we went live again on Facebook showcasing a student’s work as she was defending her project based on the theme “Back to the Future”, and what she presented was incredible! The picture above was part of her capsule collection and I was blown away! The story behind the designs, the concept, the actual designs – all from the theme “Back to the Future” – was just amazing. And the best part is, she learnt everything FROM SCRATCH! Here is the actual storyboard:
As I have said several times in the past, fashion goes beyond sewing. There is nothing that gives a designer more joy than seeing designs they developed from scratch worn by a customer. That really is what makes you a fashion designer and sets you apart from the rest! And that is what this course teaches.
About the Course:
It was developed to build your inner creativity. If you love fashion – which we know you do… and you want to build on that talent, then this Course is for you! It is literally 3 courses in 1 and it is packed full of what will definitely turn you into a pro! We teach FROM SCRATCH so don’t worry if you can’t draw or anything like that. That is what the Course is for!
The 3 parts to the Course are as follows:
Part 1: Concept Development (Developing a Fashion Collection) & Portfolio Presentation
Part 2: Manual Illustration
Part 3: Digital Illustration using CAD
There is a weekend option for busy professionals which starts on the 15th of July (Saturdays only from 10am – 6pm) and a weekday option which starts on the 18th of July, 2017 (Tuesdays & Thursdays from 10am – 2pm).
There are also different variants to the course which you can opt for based on your time, finances and level of skill:
Option 1: Full Course – N80,000
Option 2: Parts 2 & 3 only – N65,000
Option 3: a) Part 2 OR b) Part 3 only – N50,000
Option 4: a) Parts 1 & 2 only OR b) Parts 1 & 3 only – N60,000
Option 5: Part 1 only – N35,000
So if, for example, you draw very well but have difficulty coming up with your own designs and are fed up of copying people’s work, then you can choose Option 4b i.e. Parts 1 & 3 only.
All fees include your training materials and drawing kits where applicable but for the CAD options, you need your tablet.
Duration is 8 weeks for the full course. The other options have their own timetables as well but all fall within the 8 weeks depending on if you are running the weekday or the weekend option.
How to Register:
Registration is easy! Simply pay the fees of your choice by bank or online transfer into Martwayne Limited, 023 710 3843 at any GTBank branch.
If you want to visit us ahead of time or if you have questions, please contact us on:
Tel: 0818 395 8856, WhatsApp: 0809 787 6075; Blackberry: D8E9802C or via email: [email protected]
Trust me when I say you will ENJOY this Course to the hilt!
Remember the Course starts this week so you need to register now! Don’t procrastinate! We need to prepare for you!
Okies…. now for the video. Here is the one of the radio show. If you can’t see the embedded video, simply click on this link. It will take you straight to the page. I’ll embed the one of @ms_ellysha defending her project in another post!
Have a great week everyone! We can’t wait to have you on the Course! 😀 For more in-depth details about it, please click on this link.
This is not a Sewing Course. If you are interested in a Sewing Course, please click on this link.
Want to cut your production time in half? Then ditch the paper and pencils and go digital!
This picture made me laugh when I saw it! But yes! We tailors now use laptops! 😀 I spoke about it on our radio show on 98.9 Kiss FM on Friday and you should watch it! I tell you this is the way to go!
Ok… so I realized that whilst promoting this Course (and I must confess, I don’t think I really promoted it that much), I never really spoke about the benefits of the Course. So you are probably thinking… “why go digital when I can use my pencil and paper” right?
Ok… check out this scenario…
Example of a Digital Pattern and the Sewn Outfit
You make clothes for people and draft your patterns manually. Imagine if you could do this on the system just once! And next time someone else comes, you input their measurements and voila, the pattern comes out in their own size!
OR you run an in-house clothing production unit creating clothes in standard sizes. Oh… imagine the torture of having to grade each pattern one by one when with just a few strokes, your work is done!
I can safely speak for myself when I say I have to be physically and psychologically prepared for pattern-making! Having to search for paper, pencils, erasers and all what not or WORSE discover that I am out of paper is like the worst thing! But now… you can simply work on your dining table in front of the TV. Life does not have to be so tough anymore! 😀
And this is just a SNAP SHOT of what this course has to offer! You can work across international boundaries as well. I have 2 people who have reached out to me asking me to create patterns for them. So this is you starting a new career as a pattern maker!
Okies so ditch the pencils and paper and switch to digital patterns on your laptop! And save yourself the headache and backache as well! Register for our Pattern-Making & Grading with Computer-Aided Design.
Please click on this link to watch the video of me talking about this Course. It starts from 1:20.
SO details about the Course?! I’ll give it to you just now!
The course begins on Tuesday 4th of July 2017 and it will hold on Tuesdays and Thursdays from 10am – 2pm. It is in 2 parts:
Part 1 is the Pattern-Making bit. It lasts for 5 weeks and costs N80,000 including your training materials except, of course, your laptop, which you need to bring yourself.
Part 2 is the Pattern Grading bit which is where you increase or reduce in size. This will last an extra 3 weeks and costs N40,000 if you registered for Part 1.
If you want just this part of the Course, it will cost N50,000 and it includes your training materials. But to be honest, this limits you to just the manual grading portion because we can’t start teaching you about the digital workspace in Part 1. Please bring your laptop. Oh… if you use a Mac please contact us before you register.
And that’s it really! Easy peasy! Trust me you will love this Course!
Registration is easy! Simply pay the fees by bank or online transfer into the Martwayne Limited 023 710 3843 account in any GTBank OR
you can simply click on any of these links to register. It takes you to our Paystack payment portal 😀
Excellent! This tech age is making life so much easier! 😀
If you have questions, please reach out to us on 0818 395 8856, WhatsApp: 08097876075, E-Mail: [email protected] or [email protected] or via Blackberry: D8E9802C
Oh. PS. You need to be familiar with Pattern-making to register for this Course. If you are a beginner, you can register for any of our sewing courses. They all include pattern making.
Fabulous! Please tell everyone to tell everyone!
I look forward to you joining us!
Have a great day ahead!
Watch out for details of our other courses:
– Design & Fashion Illustration with Computer-Aided Design
– Basic Fashion Course for Juniors (ages 9-16)
– Short Course in Childrenswear.
Counting Down to Pattern Making with Computer Aided Design Course
Just think about the appeal of saving yourself half the time spent on manual patterns by doing them digitally. And better still, work across international boundaries. All you need is your laptop and you are good to go! Save the paper, minimize your errors, correct your mistakes without getting overwhelmed! Learn all this and more in our new Course Pattern-Making & Grading with Computer Aided Design. Trust me, it will be well worth your while!
Corsetry Course, Weekend Option comes up 10 June 2017. Register now!
So we had a great class when we launched our Corsetry Course in March. You have seen the pictures already but just in case… here they are:
Yes! You know a well-built corset when it stays on its own without any support! 😀
Well… weekend students, who are of course our busy professionals and others who work during the week, were not happy they could not attend the class, so this time, of course, we had to make a weekend option available! SO! Weekend students, this one is just for you! 😀
It’s the same great course and the same content! Learn how to build your own corsets from scratch! And this course is perfect for bridal wear designers and those who want to start their own waist cinching businesses! Yes! Why buy one in the market when you can easily create yours and sell to your customers! In fact you can even create corsets for sale in a ready to wear line as an accessory in the form of a waist corset! The possibilities are endless really!! And it costs next to nothing!
Fees are only N50,000 and this includes your toolkit! Classes are on Saturdays only from 10am to 4pm for 6 weeks. To be honest, you can be done in a shorter period of time but we are giving enough time for those who may not be so familiar with pattern making.
The Course includes learning how to make 2 types of corsets: the paneled corset and the corset with a cup – AND also includes lace moulding!
To register is easy! Simply pay the fees in cash or online transfer into our GTBank account, “Martwayne Limited”, 023 710 3843 and you are good to go! Just show up on the day! Easy peasy!
For more details, please contact us on 0818 395 8856 / 0903 498 5877; via WhatsApp on 0809 787 6075, via Blackberry on D8E9802C or via e-mail: [email protected]
Please note that you have to be able to sew to attend this class. Complete beginners have to attend our Beginners Course first to learn how to sew before they can attend this class! Current Martwayne students need not apply as this is already part of your Bodices 3 course – unless you want to register for this now and get the fees deducted from Part 3 when it’s time. Let us know so we can make a plan.
Great! That’s it! We look forward to having you! Tell others as well and let’s make this Stream a really fun one!
Have a great week!
BY FAR my favourite picture of the day! Stop the excuses! Make the move now! Learn Fashion at Martwayne!
I was going to write about something else today until I saw this picture and I just FELL IN LOVE with it! So apt! So to the point! So ruthless!
Of course it leads me to this:
Why keep putting it off till tomorrow?! The tomorrow that never comes?! The clock is ticking….. Make that move now! Let us help you LIVE YOUR DREAM of becoming a fashion designer and earn that second income!
Look Good . Live Life . Love Fashion . Learn Fashion @ Martwayne | 1 | 14 |
Effective management of attention-deficit/hyperactivity disorder (ADHD) through structured re-assessment: the Dundee ADHD Clinical Care Pathway
Child and Adolescent Psychiatry and Mental Health volume 9, Article number: 52 (2015)
Attention-deficit/hyperactivity disorder (ADHD) has become a major aspect of the work of child and adolescent psychiatrists and paediatricians in the UK. In Scotland, Child and Adolescent Mental Health Services were required to address an increase in referral rates and changes in evidence-based medicine and guidelines without additional funding. In response to this, clinicians in Dundee have, over the past 15 years, pioneered the use of integrated psychiatric, paediatric, nursing, occupational therapy, dietetic and psychological care with the development of a clearly structured, evidence-based assessment and treatment pathway to provide effective therapy for children and adolescents with ADHD. The Dundee ADHD Clinical Care Pathway (DACCP) uses standard protocols for assessment, titration and routine monitoring of clinical care and treatment outcomes, with much of the clinical work being nurse led. The DACCP has received international attention and has been used as a template for service development in many countries. This review describes the four key stages of the clinical care pathway (referral and pre-assessment; assessment, diagnosis and treatment planning; initiating treatment; and continuing care) and discusses translation of the DACCP into other healthcare systems. Tools for healthcare professionals to use or adapt according to their own clinical settings are also provided.
Attention-deficit/hyperactivity disorder (ADHD) is a heterogeneous neurodevelopmental disorder with a worldwide prevalence of 5–7 % in children and adolescents [1, 2]; UK prevalence is estimated at 2.2 % . The disorder is characterized by core symptoms of inattention, hyperactivity and impulsivity [4, 5], and is associated with functional impairment [6–8]. In the UK, ADHD management is primarily the responsibility of specialists based within either paediatric departments or Child and Adolescent Mental Health Services (CAMHS). As a consequence of an increase in awareness and acceptance of ADHD in the UK in recent years, management of this disorder has become a major aspect of the work of these services [9, 10]. This has required adaptations, usually within existing budgets and staffing levels, to accommodate this increased workload.
In a 5-year study, most adolescents with ADHD managed in a UK community setting had continuing difficulties despite contact with CAMHS and pharmacotherapy; the authors of this report concluded that “the treatment and monitoring of ADHD need to be intensified”. This concurs with the findings of the Multimodal Treatment Study of Children with ADHD (MTA) [12, 13], which showed that a carefully implemented approach to medication is superior to routine clinical care. However, the use of symptom thresholds or specific impairment criteria during ADHD assessment, or of standardized or systematic criteria to assess treatment outcomes, is still limited within UK clinical settings [14, 15].
ADHD treatment guidelines and algorithms, including those for England and Wales , Scotland [17, 18], Europe [19–25], and North America [26–28], have proposed evidence-based approaches for ADHD management. However, tools to translate this guidance into everyday clinical practice are lacking. While Hill and Taylor published an auditable protocol for treating ADHD in 2001 and CADDRA published several toolkits to support ADHD practitioners, we are unaware of any other detailed descriptions of effective, evidence-based pathways that have been developed and implemented within a real-world setting. Therefore, we developed an implementable evidence-based clinical pathway for the assessment and management of ADHD. Here, we describe the pathway and provide the protocols and supporting tools necessary for wider use. We hope that the information provided will be adapted by others to suit their local healthcare service structure and resources.
The Dundee ADHD Clinical Care Pathway
Dundee and Angus are Scottish regions with a broad sociodemographic composition, including urban and rural areas of both considerable social deprivation and relative affluence. Specific clinical services for ADHD in the region are managed by the National Health Service (NHS) generic CAMHS service and delivered by non-academic NHS clinicians. Over the last 15 years, Dundee CAMHS has developed a clearly structured, evidence-based clinical pathway for the assessment and management of children and adolescents with ADHD in Dundee and Angus based on key clinical practice guidelines and other publications (Table 1).
The Dundee ADHD Clinical Care Pathway (DACCP) was developed to facilitate the dynamic integration of new knowledge in order to provide effective, evidence-based therapy; speed up the transfer of research findings into clinical practice; use staff skills and time effectively; and provide a consistent approach to the management of waiting lists and treatment. The DACCP integrates psychiatric, paediatric, nursing, occupational therapy, dietetic and psychological care. A key focus of the pathway is the routine use of standardized protocols for the assessment, titration and monitoring of clinical care. These protocols incorporate accessible, free or low-cost, clinically relevant, well-validated instruments at all stages of the pathway. The use of clinical outcome assessments to inform day-to-day clinical decision-making is particularly important, and is in keeping with key findings from the MTA study [12, 13].
The pathway is dynamic and in continuous development; up-to-date, evidence-based approaches to assessment and treatment are implemented into the DACCP as quickly as possible. While clinical care is delivered within a non-academic, clinical setting, there are close ties with the University of Dundee, where staff are heavily involved in the generation and evaluation of new evidence to advance the management of ADHD and in the development of clinical guidelines. These associations have undoubtedly played an important part in the development and implementation of the pathway. However, we believe that having now developed and refined the pathway over several years it is now ready to be implemented in broader settings.
Approximately 800 patients (~1.2 % of the local school-age population) currently receive care via the DACCP. The pathway was formally evaluated in the 2012 Scotland-wide audit of ADHD by Health Improvement Scotland. This audit found the DACCP to be compliant with all of the major recommendations of the Scottish Intercollegiate Guidelines Network (SIGN) and the National Institute for Clinical Excellence (NICE) [16, 30, 31] for the assessment and management of ADHD. The pathway was highly praised because it demonstrated the provision of robust, quality-based, protocol-driven and non-profession-specific clinical care. It was also the only ADHD pathway in Scotland that routinely measured clinical outcomes. The pathway has received international attention and has been used as a template for service development in many countries (personal communication, D Coghill).
Stages of the DACCP
The pathway has four key stages, described in detail below, and summarized in Fig. 1.
1. Referral and pre-assessment screening
In approximately 80 % of cases, the information in the referral letter is adequate to decide whether a full clinical assessment is warranted. Where insufficient information is provided (e.g. clinical problems are unclear or do not indicate whether impairment is likely), a ‘direct but distant’ approach is used to obtain additional insight whenever possible, as it combines accuracy with efficient resource use. Telephone interviews are conducted with a parent/carer, followed by a teacher if necessary. These are typically conducted by a specialist nurse using either the ADHD rating scale IV (ADHD-RS-IV) or the ADHD questions from the Swanson, Nolan and Pelham (SNAP)-IV questionnaire, delivered as a clinician-rated semi-structured interview (Table 2). A mean item score (total or sub-scale) of >2 is highly suggestive of ADHD; intermediate scores (1–2) require clinical judgement. This approach combines good sensitivity (83 %) and better specificity (97 %; i.e. fewer false positives) compared with the indirect questionnaire-based approach outlined below (unpublished observations, D Coghill).
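To make the decision rule above concrete, the minimal Python sketch below scores a set of 18 telephone-interview item ratings against the thresholds described in the text (a mean item score of >2 being highly suggestive; 1–2 requiring clinical judgement). It is purely illustrative: the function name, the assumption that the nine inattention items are entered first, and the wording of the suggestions are ours, not part of the published DACCP documentation.

```python
# Illustrative scoring of an 18-item ADHD-RS-IV/SNAP-IV telephone screen.
# Items are rated 0-3; the first nine entries are taken here to be the
# inattention items and the last nine the hyperactivity/impulsivity items.

from statistics import mean

def screen_decision(item_scores):
    """Return mean item scores and a pre-assessment screening suggestion."""
    if len(item_scores) != 18 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("Expected 18 item ratings between 0 and 3")

    inattention = mean(item_scores[:9])
    hyperactive_impulsive = mean(item_scores[9:])
    total = mean(item_scores)

    # Rule described in the text: a mean item score (total or subscale) > 2
    # is highly suggestive of ADHD; scores of 1-2 need clinical judgement.
    highest = max(total, inattention, hyperactive_impulsive)
    if highest > 2:
        suggestion = "highly suggestive of ADHD - offer full assessment"
    elif highest >= 1:
        suggestion = "intermediate - apply clinical judgement"
    else:
        suggestion = "ADHD unlikely on screening"

    return {
        "inattention": round(inattention, 2),
        "hyperactive_impulsive": round(hyperactive_impulsive, 2),
        "total": round(total, 2),
        "suggestion": suggestion,
    }

# Example: marked inattentive symptoms with little hyperactivity/impulsivity.
print(screen_decision([3, 3, 2, 3, 3, 2, 3, 3, 3, 1, 0, 1, 1, 0, 1, 0, 1, 0]))
```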
Within the DACCP, we focus on this ‘direct but distant’ approach; however, where this is not feasible there are alternative approaches available for pre-screening of referrals, including: indirect contact (e.g. parent-completed questionnaires, such as the generic Strengths and Difficulties Questionnaire, or the ADHD-specific Conners, ADHD-RS-IV or SNAP-IV questionnaires); and personal assessment using a triage approach or the Choice appointments associated with the Choice and Partnership Approach (CAPA) model.
Once a decision has been made to conduct a full assessment, we do not usually request any further pre-assessment parent- or self-completed ADHD questionnaires.
Of note, population-based screening in the DACCP is not utilized. In areas where ADHD is under-diagnosed, such as Scotland, the main purpose of screening is to ensure that patients do not go unrecognized. However, population-based approaches using parent- and/or teacher-rated questionnaires are associated with high false positive rates.
Waiting list prioritization
Complex neurodevelopmental disorders (such as ADHD, autism spectrum disorders, tic disorders and Tourette’s syndrome, as well as learning disorders and intellectual impairment) can have a dramatic impact on home and family life and it is not uncommon to receive requests for prioritization of care. These cases, however, typically require different criteria for prioritization to other psychiatric disorders. Without appropriate prioritization, those with developmental disorders are at risk of remaining at the end of the queue. Our service therefore runs two parallel prioritization systems (one for ‘emotional disorders’ and one for ‘developmental disorders’), each with its own prioritization criteria. Examples of prioritization criteria for patients with a developmental disorder are shown in Table 3. Within the DACCP, decisions about prioritization are typically conducted by specialist nurses, with backup from senior medical staff as required.
2. Assessment, diagnosis and treatment planning
The DACCP has developed a standardized protocol for assessment, diagnosis and treatment planning, whereby initial information gathering is conducted by specialist nursing staff, restricting the role of the doctor to diagnosis and treatment planning. This facilitates effective use of limited clinical resources, improving clinical flow.
2a. Information gathering
The focus at this stage is to collect the information required to make a diagnosis and to plan treatment. Clinical information is primarily gathered from parents/carers using a standardized procedure that, in addition to ADHD, also considers potential differential diagnoses and comorbid mental and physical health problems. An interview with the child, focusing on impairment and functioning, is also conducted. Structured narrative school reports and teacher-rating scales, most frequently the Swanson, Kotkin, Agler, M-Flynn and Pelham (SKAMP) scale (Additional file 1), are requested prior to the first assessment visit.
Initial information gathering is completed during one or more face-to-face clinical assessment visits using a structured assessment document (Additional file 2). Presenting problems, health and developmental history, and global functioning are documented, in addition to comorbid psychiatric conditions and any issues in the patient’s family life, social functioning (including peer relationships, criminal behaviour, etc.) and school functioning. Within the DACCP, this assessment is conducted by a core CAMHS worker (a nurse, primary mental health worker [see footnote 1] or clinical psychologist); all staff are trained in all aspects of the assessment.
A structured assessment of ADHD is performed using the ADHD section of the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime version (K-SADS-PL) [39, 40]. Additional routine screening questions cover the full range of mental health problems, including autism spectrum disorders, developmental communication disorders and social communication disorder. Standardized screening questionnaires (summarized in Additional file 3) are used to support the identification of common co-existing disorders.
A general physical examination, including observation of the standard of general care, assessment for stigmata of congenital disorders and neurodevelopmental immaturity, a vision and hearing check, a screen of gross and fine motor functioning and a screen for motor and vocal tics, is suggested during the initial assessment. Physical health measures (head circumference, height, weight, blood pressure and pulse rate) and an assessment of cardiac risk factors are recorded at assessment (and routinely thereafter). In line with guideline recommendations, blood tests, electroencephalography and electrocardiography are not routinely conducted, unless there is a specific indication [20, 23, 24].
Following the interview, additional information (e.g. from the patient’s school or other agencies) is requested as required. Patients may be referred for additional specific assessments (e.g. the Autism Diagnostic Observation Schedule for autism, occupational therapy for developmental coordination disorder and/or sensory sensitivity, cognitive testing or paediatric assessment for physical problems). While cognitive and neuropsychological testing are not part of the routine assessment, the British Picture Vocabulary Scale is utilized routinely as an estimate of verbal intelligence.
2b. Diagnosis and treatment planning
Once the required information has been gathered, a standardized assessment report (Additional file 4) is compiled and forwarded to a senior clinician (usually a consultant or associate specialist/higher specialist trainee), who will review the information and arrange an “end of assessment” appointment with the patient and their family to discuss diagnosis and treatment planning. Whilst it is often possible to conclude the ADHD assessment while awaiting the outcome of additional information, it is sometimes necessary to delay this meeting until all data are available. The core CAMHS worker who conducted the initial assessment does not usually need to attend, but this may be helpful in complex cases. At this meeting, the consultant does not spend valuable time revisiting issues that have been adequately covered during the assessment; rather he/she aims to address any outstanding uncertainties, provide a diagnosis and formulation and agree a management/treatment plan.
Both the International Classification of Diseases (ICD-10) definition of hyperkinetic disorder (HKD) and the Diagnostic and Statistical Manual of Mental Disorders 5 (DSM-5) definition of ADHD are considered during diagnosis. The ICD definition of HKD is more restrictive than DSM-defined ADHD and requires that inattentive, hyperactive and impulsive symptoms are all present and are both pervasive and impairing. While symptoms must also be pervasive and impairing in DSM-defined ADHD, the requirements are less strict and DSM-defined ADHD includes less severe cases than HKD [4, 5]. If ADHD or HKD is diagnosed, the focus for the remainder of the meeting is to provide psychoeducation about ADHD and any co-existing problems, and to discuss the various treatment options available. Written information and suggestions for web-based support materials are provided to support these discussions.
Initial treatment decisions generally follow the recommendations of the SIGN, NICE [16, 30, 31] and European guidelines [20, 21, 23–25]. Initial therapy depends on symptom severity, circumstances, comorbidities, patient preference and parent/carer preference, and usually includes recommendations for school interventions. Treatment options include non-pharmacological interventions and pharmacotherapies.
If ADHD is not diagnosed, any other mental health problems that have been identified will be discussed and appropriate arrangements made for follow-up or discharge.
3. Initiating treatment
The initial focus of treatment is to reduce the core symptoms of ADHD. Medication is usually offered as first-line treatment for patients aged 6 years and over who meet ICD-10 criteria for HKD (Fig. 2). Non-pharmacological treatment, consisting primarily of parenting interventions that focus on behavioural management, is generally recommended for children under 6 years of age, those who meet DSM criteria for ADHD but not ICD criteria for HKD and those whose parents are resistant to medication options. Parenting programmes readily available in Dundee include the New Forest Parenting Programme (NFPP), Triple P and Incredible Years. If the treatment response to a parenting intervention is considered adequate, the need for additional interventions to address any remaining difficulties is assessed and follow-up in the continuing care clinic is arranged (see below for further details). If the treatment response is inadequate, further treatment options are discussed, typically involving medication.
Initiating and titrating medication for ADHD
Initial medication options
The choice of first-line medication is informed by clinical guidelines [16–18, 20, 21]. In most cases, a stimulant medication is the first choice and methylphenidate is most commonly prescribed. Primary school-age children (up to 11 years) usually begin treatment with immediate-release methylphenidate (which is less expensive – a priority for publicly funded services – more flexible and has a short duration of adverse events), whereas older children usually start with a long-acting formulation (which is less stigmatizing and has a lower risk of diversion as medication is not taken during school hours). For patients with tic disorders, issues with substance misuse or a strong family preference to avoid stimulants, atomoxetine may be considered as a first-line treatment.
As informed by the MTA study and in line with clinical guidelines, the DACCP places considerable importance on accurate dose titration, with the aim of achieving maximum benefit with minimal adverse effects. Maximum benefit is prioritized over minimum dose. A 4-week, structured dose-optimization schedule is used for all patients prescribed immediate-release stimulants or extended-release methylphenidate. The dose is increased from 5 to 20 mg three times per day for immediate-release formulations or equivalent dose for long-acting formulations. Medication is usually initiated with 12-h cover, 7 days a week, without routine drug holidays.
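For orientation only, the sketch below lays out the kind of default escalation implied by this schedule, assuming equal 5 mg weekly steps for immediate-release methylphenidate given three times daily; the exact increments and review points in any individual titration are decided clinically, so this is a simplified illustration rather than the protocol itself.

```python
# Illustrative default 4-week escalation for immediate-release methylphenidate,
# assuming equal 5 mg steps from 5 mg to 20 mg per administration, three times
# daily. Real titration stops early on optimal control, may reduce the dose on
# adverse effects, and may use equivalent long-acting formulations instead.

def default_ir_mph_schedule(start_mg=5, max_mg=20, step_mg=5, doses_per_day=3):
    """Return (week, dose_per_administration_mg, total_daily_mg) tuples."""
    schedule = []
    week, dose = 1, start_mg
    while dose <= max_mg:
        schedule.append((week, dose, dose * doses_per_day))
        week, dose = week + 1, dose + step_mg
    return schedule

for week, dose, daily in default_ir_mph_schedule():
    print(f"Week {week}: {dose} mg three times daily ({daily} mg/day)")
```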
Baseline and titration appointments are nurse led (although a senior clinician is always available for advice and to write prescriptions, if required) and last approximately 30 min. During the baseline appointment, patients are informed of the purpose of titration, the schedule is agreed and baseline assessments performed (see below). Three or four titration appointments are typically required, depending on the medication and clinical response. Titration appointments are conducted face-to-face or by telephone (in which case local health services may need to perform weight, pulse and blood pressure assessments). The patient is reviewed jointly by a nurse and a physician at the end of the 4-week period and they, in discussion with the family, agree on the ongoing medication and dose.
In addition to clinical feedback from the patient and parent/carer, the following information is gathered using standardized documentation at baseline and each subsequent titration appointment (Additional files 1, 5):
ADHD-RS-IV or SNAP-IV, administered as a semi-structured interview and rated by the clinician.
SKAMP report, completed by the patient’s teacher.
Clinical Global Impression-Severity and -Improvement rating scales.
Children’s Global Assessment Scale.
Structured assessment of ‘other symptoms’. Although the purpose is to identify treatment-related adverse effects, we ask patients ‘Do you have these symptoms?’ and ‘Are they impairing?’ rather than ‘Did medication cause these problems?’. The clinician then decides whether any identified symptoms are likely related to medication or the underlying ADHD.
Weight, blood pressure and pulse rate.
Assessment of symptom control and tolerability
Medication doses are increased at each visit, unless symptoms are already under optimal control (indicated by a mean post-treatment score of ≤1 for ADHD-RS-IV or the ADHD questions from SNAP-IV; see section on defining adequate/inadequate response below and Table 2) or there are significant adverse effects. When symptom control is considered optimal, the end-of-titration appointment is usually brought forward and the dose maintained. Patients exiting titration are booked into a continuing care clinic approximately 3 months later, and prescribing, but not monitoring, is transferred relatively quickly to primary care under a shared-care agreement.
If a patient experiences adverse effects, the dose is usually decreased, but may be either continued for another week or increased as originally scheduled to assess treatment benefit versus adverse effects.
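Taken together, the last two paragraphs amount to a simple per-visit decision, sketched below for illustration. The ≤1 mean item score threshold comes from the text; the handling of adverse effects is deliberately left as a clinician-facing choice because the text allows several options, and the function itself is an assumption of ours rather than DACCP software.

```python
# Rough per-visit titration logic distilled from the text, for illustration only.
# mean_item_score: clinician-rated ADHD-RS-IV/SNAP-IV mean item score at this visit.
# significant_adverse_effects: clinician judgement after the 'other symptoms' review.

def titration_action(mean_item_score, significant_adverse_effects, at_max_dose):
    if significant_adverse_effects:
        # The text permits reducing the dose, holding it for another week,
        # or continuing the planned increase to weigh benefit against harm.
        return "review options: reduce, hold or continue dose"
    if mean_item_score <= 1:
        return "optimal control: maintain dose and bring end-of-titration forward"
    if at_max_dose:
        return "inadequate response at maximum dose: consider switching medication"
    return "increase dose as scheduled"

print(titration_action(mean_item_score=1.6,
                       significant_adverse_effects=False,
                       at_max_dose=False))
```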
If there has been no clinical response to a maximum dose (usually 20 mg methylphenidate tds or equivalent) or the patient has experienced significant adverse effects, switching to an alternative medication or a different approach is considered (described further below). A full discussion of the management of adverse effects is beyond the scope of this article; interested readers are directed to Cortese et al. for further information.
How do we define optimal/adequate/inadequate response?
Individual response to ADHD therapy is influenced by a number of factors, including severity of the disorder, sensitivity to a specific treatment, vulnerability to treatment-related adverse effects, and personal values and preferences regarding treatment outcome. Indeed, the perception of treatment response is subjective and thus may differ depending on the reporter. In the DACCP, information on treatment response is always gathered from both the patient and the parent/carer, using a semi-structured interview. During titration, good symptom control is considered the key outcome by the DACCP.
Using a combination of clinical data, published norms, the results of clinical trials and established statistical methods, we calculated a clinically meaningful cut-off score for the ADHD-RS-IV when used as a semi-structured interview. This, combined with clinical experience and published data, has suggested scores associated with different clinical states. The mean (standard deviation [SD]) ADHD-RS-IV total score for untreated individuals with ADHD was reported as 41.8 (8.3) in the UK. In general, a decrease in total score of >11 from baseline suggests a clinically meaningful response. As the ADHD-RS-IV and the ADHD section of SNAP-IV are very similar, it seems likely that the same scoring rules can be applied to SNAP-IV.
The clinical significance of post-treatment reductions in ADHD-RS-IV and SNAP-IV scores is described thoroughly in Table 2. Although these definitions are used to guide clinical decision-making, they must be applied flexibly, and the final judgement of the adequacy of treatment response requires clinical judgement and consideration of all available information.
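The cut-offs just described can be expressed as a small helper, shown below purely as an illustration: a fall of more than 11 total points from baseline is treated as clinically meaningful, and a post-treatment mean item score of 1 or less as optimal control. The labels and function are ours, they assume the 18-item clinician-rated interview format, and they do not replace the clinical judgement emphasized above.

```python
# Illustration of the response definitions described in the text for the
# 18-item ADHD-RS-IV/SNAP-IV administered as a clinician-rated interview.
# Total score = sum of 18 items each rated 0-3 (possible range 0-54).

def categorise_response(baseline_total, post_treatment_total, n_items=18):
    change = baseline_total - post_treatment_total
    post_mean_item = post_treatment_total / n_items

    clinically_meaningful = change > 11    # decrease of >11 points from baseline
    optimal_control = post_mean_item <= 1  # mean item score of <=1 after treatment

    if optimal_control:
        label = "optimal symptom control"
    elif clinically_meaningful:
        label = "clinically meaningful improvement, not yet optimal"
    else:
        label = "inadequate response"
    return {"change": change,
            "post_mean_item": round(post_mean_item, 2),
            "label": label}

# Example with a baseline near the reported UK untreated mean of 41.8.
print(categorise_response(baseline_total=42, post_treatment_total=15))
```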
Of those children with ADHD, 70–80 % respond well to either methylphenidate or d-amphetamines and 90–95 % respond to at least one class of stimulant [49–53]. Where a patient is judged to have an inadequate clinical response to methylphenidate at the end of titration, switching to lisdexamfetamine or atomoxetine is usually recommended and the titration process repeated. Titration of lisdexamfetamine is similar to that of methylphenidate, but with three rather than four dose steps (30, 50 and 70 mg). Titration of atomoxetine begins with a dose of 0.5 mg/kg for 1 week, then increased to 1.2 mg/kg for at least 12 weeks (unless there are intolerable adverse effects) to fully assess the benefits. The dose is increased to 1.8 mg/kg if there is only a partial response.
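Because atomoxetine dosing is weight-based, a short worked example may help; the sketch below simply applies the 0.5, 1.2 and 1.8 mg/kg stages described above to a given body weight. In practice, prescribed doses are rounded to available capsule strengths and checked against licensing limits, which this illustration deliberately omits.

```python
# Weight-based atomoxetine stages as described in the text (illustrative only;
# real prescribing rounds to available capsule strengths and respects licensed
# maximum doses).

ATOMOXETINE_STAGES_MG_PER_KG = {
    "week 1 starting dose": 0.5,
    "maintenance dose (at least 12 weeks)": 1.2,
    "escalated dose if partial response": 1.8,
}

def atomoxetine_doses(weight_kg):
    """Return the calculated daily dose in mg for each titration stage."""
    return {stage: round(weight_kg * mg_per_kg, 1)
            for stage, mg_per_kg in ATOMOXETINE_STAGES_MG_PER_KG.items()}

# Example for a 30 kg child: 15.0, 36.0 and 54.0 mg/day respectively.
for stage, dose in atomoxetine_doses(30).items():
    print(f"{stage}: {dose} mg/day")
```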
4. Continuing care/monitoring treatment
Although titration and optimization of the initial response to medication are important, data from the MTA suggest that close attention to continuing care is also essential. Accordingly, all patients on the DACCP, regardless of medication status, are followed up. The purpose of continuing care clinics is to monitor and adjust ADHD treatments and to identify any ‘other problems’ that will require additional sessions for further assessment or treatment. Continuing care clinics are nurse led but a senior clinician (consultant or associate specialist/higher specialist trainee) is always available to discuss proposed changes to treatment, review patients with particularly complex issues and/or discuss stable patients who do not require changes to care after the clinic has finished. Clinics are conducted by the patient’s core worker if possible for continuity of care. Each appointment is scheduled for 45 min. Up to six clinics are held simultaneously to make the best use of senior clinicians’ time.
For patients receiving medication, the typical interval between review appointments is 6 months; however, more frequent appointments are available as necessary. Annual reviews are conducted for patients receiving non-pharmacological interventions. Patients who are not being actively treated are also followed up at least annually as it is not uncommon for these patients to experience renewed difficulties, especially at times of transition (e.g. moving from primary to secondary school) or stress (e.g. periods of family discord).
Continuing care clinics use the same structured data collection instruments and standardized assessment tools used during medication titration (Additional file 5). However, there is a change of emphasis to collect information on medication issues (such as breakthrough symptoms), adherence and stigmatization, in addition to the standard clinical outcomes collected during titration. During this treatment phase, we also place increased emphasis on the broader picture, such as comorbid mental health issues, physical problems, learning difficulties, ongoing functional impairment and quality of life, including peer and family relationships, school and academic progress and social life. Identified issues are assessed using standardized instruments and assessments as appropriate (Additional file 3).
The identification of these ‘other problems’ is the key to providing good quality holistic care for patients with ADHD. Typical issues include:
assessment of sleeping or eating difficulties
assessment of mood or anxiety problems
liaison with schools or other agencies
assessment of the need for parent training or other psychological interventions
discussion of complex medication issues
occupational therapy assessment.
Some of the simple problems, such as sleep and eating difficulties, can be managed within the continuing care clinic appointment. However, time constraints mean additional appointments are often required to focus on identified issues. These appointments are arranged either with the core worker or as a specific ‘asked-to-see’ assessment with an appropriate team member (e.g. a clinical psychologist, dietician or physician).
Outcomes of the DACCP
Clinical pathways need to demonstrate positive outcomes. As noted previously, the DACCP received favourable reviews from the Healthcare Improvement Scotland 2008 and 2012 audits of ADHD services across Scotland [15, 54]. These reflect the DACCP’s implementation of and adherence to the SIGN clinical practice guidelines. In addition, clinical outcomes are routinely reviewed by the DACCP team. For example, from a random sample of 150 patients currently in continuing care, 96 % (144/150) are receiving pharmacological treatment, most commonly methylphenidate (83 %; 119/144), followed by lisdexamfetamine (9 %; 13/144) and atomoxetine (8 %; 12/144). The remaining 4 % (6/150) of patients are unmedicated.

Overall, our clinical outcome data support the use of the DACCP and provide evidence that we can replicate improvements in ADHD symptoms observed in clinical trials within a real-world setting. For example, among the 119 patients currently in continuing care and receiving methylphenidate (Table 4), the mean (SD) total ADHD-RS-IV item score at baseline was 2.5 (0.4), and none had a mean item score of ≤1, indicating a severely impaired population (see Table 2 for clinical interpretation of scores). Mean (SD) item score decreased to 0.7 (0.4) at the end of titration (best dose), indicating a strong clinical response, and 80 % of patients had a mean item score of ≤1. At the most recent clinic visit, the mean (SD) total ADHD-RS-IV item score remained low at 0.8 (0.8), although the average score across all post-titration continuing care visits was slightly higher (1.0 [0.6]). The mean total ADHD-RS-IV score decreased by 29.4 points from baseline to the most recent visit. This is in line with changes in total ADHD-RS-IV scores observed in a rigorously conducted randomized clinical trial of European children and adolescents treated with stimulant ADHD medication for 7 weeks. In that study, the mean (SD) total ADHD-RS-IV scores at baseline for patients treated with lisdexamfetamine or methylphenidate were 41.0 (7.3) and 40.4 (6.8), respectively, and least squares mean reductions (standard error) from baseline to endpoint were 24.3 (1.2) and 18.7 (1.1), respectively.
Furthermore, we found no significant associations between ADHD-RS-IV subscale and total scores with duration of treatment, which ranged from 1 to 119 months, suggesting that with careful management, methylphenidate may be effective for long-term treatment of ADHD symptoms.
Staff and training
The DACCP is funded by the NHS from the core CAMHS budget and staffed by employees from within the general CAMHS service. Limited resources in the Dundee CAMHS require us to make best use of available staff. Therefore, much of the clinical work is nurse led, which allows multiple clinics to be held simultaneously and streamlines demand on senior clinicians’ time.
At present, there are no dedicated ADHD staff members. Each full-time nurse in the service is involved with assessments and dose titrations and provides ongoing continuing care for about 50–70 patients. This accounts for approximately 60 % of their working week. Most nurses leading the DACCP clinics are not qualified to prescribe ADHD medications. Senior medical cover is provided by doctors with specialist training and experience in either child psychiatry or paediatrics, each contributing 1–1.5 days per week, comprising approximately one full-time equivalent. All clinicians working within the DACCP have had prior experience in general child and adolescent mental health or paediatrics. Junior doctors (doctors in training) are involved when available, and contributions from clinical psychology, occupational therapy and a dietician are made as required.
A multidisciplinary team of experienced clinicians provide supervision and training to new and junior staff on the assessment and management of ADHD, recognition and assessment of common coexisting difficulties, and measurement of clinical outcomes. All new staff members receive formal classroom training on how to conduct assessments, dose titration and continuing care appointments, and the use of standardized instruments to evaluate clinical outcomes. However, most training is conducted within the clinic by observation of consultations with senior nursing medical staff; new staff shadow an experienced clinician until considered competent to work independently. The training period lasts up to 3 months for nurses and typically around 4 weeks for junior doctors. All staff are updated when new information on ADHD becomes available.
Translation of DACCP into other healthcare systems
The DACCP has proved to be robust in the face of substantial changes to the CAMHS service. Each successive organizational framework has presented challenges. For example, the workflow-based CAPA model was not designed to incorporate the volume of patients seen by ADHD services and, in direct contrast to our pathway, tends to emphasize quantity over quality. We are currently reviewing the implementation of CAPA and it is likely that ADHD care will move out of the CAPA workflow and run in parallel; a move that would be strongly supported by the authors.
The pathway has continued to develop in light of new evidence, experience and ideas from staff. The ethos within the pathway is to be change-orientated and problem-solving in its approach. Changes are often implemented as a result of new findings in the literature or the licensing of a new treatment, but are frequently also suggested by a team member and then problem solved by the team, implemented, reviewed and audited, with further changes made as required. Examples of changes include: the adoption of a slimmed-down approach to titration appointments to ensure that time is used efficiently during this stage of treatment; the development of an electronic version of the clinic documentation that interfaces with the electronic patient record and facilitates comparison of treatment outcomes and vital signs over time; the implementation of titration protocols for new medications (e.g. the non-stimulants and lisdexamfetamine) that were not available when the pathway was originally designed; and the introduction of locally developed blood pressure centile charts and implementation of the algorithm for managing increased blood pressure as proposed by the European ADHD Guidelines Group. However, notwithstanding these changes, the core of the DACCP has remained essentially intact since its inception, demonstrating the generalizability of the pathway and the capacity for translation into other healthcare systems.
The DACCP is protocol-driven but flexible. Importantly, the protocols are not profession-specific, allowing best use of the staff available. Nurse-led clinics are clinically- and cost-effective within our setting. In healthcare systems where only doctors are able to manage ADHD, these protocols facilitate rapid training and establish consistent standards of care.
Some elements of the DACCP may not translate into other healthcare systems so easily. For example, the DACCP is strongly multidisciplinary and this brings many benefits. For services where such multidisciplinary working within a clinical team is more difficult, we would suggest discussing opportunities for virtual teams with agreed cross-referral protocols.
Another commonly discussed problem concerns the assessment of psychiatric comorbidities within a non-psychiatric setting. Clinical guidelines are in agreement that integration of assessment of comorbidities into ADHD work-up is essential. To facilitate this, we have successfully trained paediatricians and paediatric nurses to conduct a full mental health assessment, typically using structured and semi-structured interviews such as the Development and Well Being Assessment and K-SADS-PL. Once comfortable and confident with this structured approach they will switch to our systematic (but less structured) assessment protocol described above (Additional file 2). An alternative approach would be to use a screening questionnaire such as the Strengths and Difficulties Questionnaire or Child Behaviour Checklist to identify patients with possible comorbidities and make any necessary arrangements for patients to be further assessed by an appropriately trained specialist.
A further issue concerns the prescription of medications. Unlike in the UK, this may not be delegated to nurses in some countries (although the use of experienced doctors as described above may assist here). Many tasks are already performed by case managers other than the physician, and private practices are encouraged to establish multidisciplinary teams. At the same time, enormous differences in terms of acceptance and treatment approaches continue to exist, not only between European countries, but also between regions within those countries. The sharing of best practice and the creation of treatment pathways based on clinical and scientific evidence could help institutions to improve their standards.
Our clinic documentation and the SKAMP teacher rating scale are available as online Additional files. Alternative documentation is available from the Canadian ADHD Resource Alliance. Their assessment toolkit has many similarities to our own and may be preferred by some clinicians.
Administrative aspects to consider when implementing a pathway based on the DACCP principles are the need for good organization to ensure that the necessary forms and instruments are available for distribution, and that systems are in place to follow up with schools regarding the return of questionnaires and reports.
The DACCP uses staff skills and time effectively via a structured core pathway to provide a consistent, up-to-date, evidence-based approach to the treatment and management of children and adolescents with ADHD. The DACCP uses standard protocols for the assessment, titration and routine monitoring of clinical care and treatment outcomes. The pathway provides effective care in a real-world setting and has demonstrated success in the long-term management of ADHD. As with any clinical pathway, there are limitations; it is time-intensive and requires well-trained staff. However, we believe that the need for this standard of care is evident and that patients with ADHD should be managed within a pathway that strives for optimal care. While the pathway is continually developing, it has remained essentially intact, demonstrating its flexibility and capacity for translation into other healthcare systems. However, we continually strive to improve the efficiency of our service without compromising clinical standards.
Footnote 1: A mental health practitioner who focuses on the interface between primary and secondary care. Primary mental health workers may have a variety of professional backgrounds, including nursing, psychology, social work and education.
Abbreviations
ADHD-RS-IV: attention-deficit/hyperactivity disorder rating scale IV
CAMHS: Child and Adolescent Mental Health Service
CAPA: Choice and Partnership Approach
DACCP: Dundee ADHD Clinical Care Pathway
ICD: International Classification of Diseases
K-SADS-PL: Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime version
MTA: Multimodal Treatment Study of Children with ADHD
NICE: National Institute for Clinical Excellence
NFPP: New Forest Parenting Programme
NHS: National Health Service
SIGN: Scottish Intercollegiate Guidelines Network
SKAMP: Swanson, Kotkin, Agler, M-Flynn and Pelham scale
SNAP: Swanson, Nolan and Pelham questionnaire
References
Polanczyk G, de Lima MS, Horta BL, Biederman J, Rohde LA. The worldwide prevalence of ADHD: a systematic review and metaregression analysis. Am J Psychiatry. 2007;164:942–8.
Willcutt EG. The prevalence of DSM-IV attention-deficit/hyperactivity disorder: a meta-analytic review. Neurotherapeutics. 2012;9:490–9.
Ford T, Goodman R, Meltzer H. The British Child and Adolescent Mental Health Survey 1999: the prevalence of DSM-IV disorders. J Am Acad Child Adolesc Psychiatry. 2003;42:1203–11.
American Psychiatric Association. Diagnostic and statistical manual of mental disorders, Fifth Edition (DSM-5). Arlington: American Psychiatric Publishing; 2013.
World Health Organization. International Classification of Diseases (ICD-10). Geneva: WHO; 1992.
DuPaul GJ, McGoey KE, Eckert TL, VanBrakle J. Preschool children with attention-deficit/hyperactivity disorder: impairments in behavioral, social, and school functioning. J Am Acad Child Adolesc Psychiatry. 2001;40:508–15.
Wehmeier PM, Schacht A, Barkley RA. Social and emotional impairment in children and adolescents with ADHD and the impact on quality of life. J Adolesc Health. 2010;46:209–17.
Wilson JM, Marcotte AC. Psychosocial adjustment and educational outcome in adolescents with a childhood diagnosis of attention deficit disorder. J Am Acad Child Adolesc Psychiatry. 1996;35:579–87.
Salmon G, Kemp A. ADHD: a survey of psychiatric and paediatric practice. CAMH. 2002;7:73–8.
Holden SE, Jenkins-Jones S, Poole CD, Morgan CL, Coghill D, Currie CJ. The prevalence and incidence, resource use and financial costs of treating people with attention deficit/hyperactivity disorder (ADHD) in the United Kingdom (1998 to 2010). Child Adolesc Psychiatry Ment Health. 2013;7:34.
Langley K, Fowler T, Ford T, Thapar AK, van den Bree M, Harold G, et al. Adolescent clinical outcomes for young people with attention-deficit hyperactivity disorder. Br J Psychiatry. 2010;196:235–40.
Murray DW, Arnold LE, Swanson J, Wells K, Burns K, Jensen P, et al. A clinical review of outcomes of the multimodal treatment study of children with attention-deficit/hyperactivity disorder (MTA). Curr Psychiatry Rep. 2008;10:424–31.
Both authors provided information regarding the Dundee ADHD Clinical Care Pathway, made substantial contributions to the conception and design of the review, were involved in drafting the manuscript and revising it critically for important intellectual content. Both authors read and approved the final manuscript.
Under the direction of the authors, Alyson Bexfield, PhD, an employee of Caudex, provided writing assistance for this review. Editorial assistance in formatting, proofreading and copy editing was also provided by Caudex. The authors wish to thank Martin Markarian, an employee of Shire, Switzerland at the time of manuscript development, for his valuable contribution regarding evidence-based treatment approaches. Shire International GmbH, Switzerland provided funding to Caudex, Oxford, UK, for support in writing, editing, coordination and collating comments for this manuscript. Although Shire was involved in the topic concept, the content of this manuscript, the ultimate interpretation, and the decision to submit it for publication in Child and Adolescent Psychiatry and Mental Health was made by the authors independently. Shire supports the responsible use of medications for the treatment of ADHD. Shire does not endorse the off-label use of ADHD medications.
DC has served in an advisory or consultancy role for Flynn Pharma, Otsuka, Lilly, Janssen, Medice, Pfizer, Schering-Plough, Shire and Vifor. He has received conference attendance support, conference support or speaker’s fees from Flynn Pharma, Lilly, Janssen, Medice, Novartis and Shire. He is or has been involved in clinical trials conducted by Lilly and Shire and has received research funding from Lilly, Janssen, Shire and Vifor. The present work is unrelated to the above grants and relationships. SS has attended advisory meetings, received conference attendance support and received speaker’s fees from Lilly, Janssen and Shire. She is or has been involved in clinical trials conducted by Lilly and Shire and has received research funding from Lilly and Shire. The present work is unrelated to the above grants and relationships.
Additional file 1. SKAMP (Swanson, Kotkin, Agler, M-Flynn and Pelham) rating scale form for completion by teachers.
Additional file 3. Instruments and scales commonly used by staff within the Dundee ADHD Clinical Care Pathway.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Coghill, D., Seth, S. Effective management of attention-deficit/hyperactivity disorder (ADHD) through structured re-assessment: the Dundee ADHD Clinical Care Pathway. Child Adolesc Psychiatry Ment Health 9, 52 (2015). https://doi.org/10.1186/s13034-015-0083-2
Keywords
- Attention-deficit/hyperactivity disorder
- Treatment response
- Inadequate response
<urn:uuid:225a6990-d614-47f4-8604-57e11ed23c93> | What Does Vmware Fusion Mean?
VMware Fusion is a VMware product developed for Macintosh computers with Intel processors. It enables users and system administrators to run x86 and x86-64 operating systems, including Microsoft Windows, Linux, Solaris and NetWare, simultaneously as guest virtual machines, while the Mac operating system acts as the host OS on the physical machine.
VMware Fusion uses a combination of para-virtualization, dynamic recompilation, and emulation to make this happen.
Fusion is the first product launched by VMware for Macintosh virtualization.
Techopedia Explains VMware Fusion
In 2006, Apple shifted the Macintosh architecture to Intel processors, which allowed Mac computers to run other operating systems, including 64-bit ones. Administrators could now run Microsoft Windows, Linux and Solaris on Mac computers running Mac OS by using virtualization, and it was for this purpose that VMware introduced Fusion in 2007.
Virtualization allows switching between different operating systems on the same machine. As a result, older programs, operating systems and applications can still be run, for example to explore or reuse older data.
The following are the key features available with VMware Fusion.
- Unity View: Blends the guest and host desktops, so that applications running in a virtual machine appear seamlessly alongside native Mac applications.
- DirectX 9.0: Users can run 3D programs and even 3D video games within virtual machines.
- Snapshot: Allows users to save a known-good state of the guest OS to disk and quickly return the virtual machine to that state later without rebooting (a scripted example follows below).
VMware Fusion is highly compatible with the rest of the VMware range: virtual machines created with Fusion can also be used with other VMware products, and vice versa.
<urn:uuid:1401ee55-e8dd-472e-9446-2cb19169e6a8> | Many kinds of plant fibers have been used to make shoes. Shoes have been a part of the human experience for eons. Shoes were and are made from various materials.
In China, archaeological finds show that as early as 7,000 years ago, ancient Chinese had learned to make shoes from plant fibers.
One article found in traditions.cultural-china.com reported, “In ancient times almost all people across China wore straw shoes, excepting only nomadic tribes. The main difference in mode of this footwear was that people in the frigid north wore thick straw boots, while those in the hot, humid south wore straw sandals. Straw footwear was worn by all, whether they were nobles, men of letters or farmers.”
Assuming that is an accurate account, shoes made of straw in one form or another have been made for thousands of years.
Broad as the subject of footwear made from grasses might be, The Straw Shop found the area of straw decorated shoes of particular interest to discuss. The Straw Shop has, thus far, been unable to locate any written accounts describing them. How were they produced? Were straw decorated shoes produced in any real number? Were certain locations credited with their manufacture? Were they made and decorated by individual shoemakers? The truth is, we just don't know. Fortunately, straw decorated shoes have been preserved and may be found in museums and historical collections around the world. The Straw Shop is especially appreciative of being able to share the following images from many of these museums. Most of the items pictured are housed in museum storage and are not on public display.
According to our research, we have located examples of straw decorated shoes spanning from the 1600s to the 1900s.
We found a very early pair of straw decorated shoes, thought to be Italian and dating from 1670, of a type known as a slap shoe. A slap shoe made a clicking or clacking sound when worn, owing to its construction, and the style became quite a symbol of wealth in fashionable footwear. The appliqué of straw on these shoes is spectacular.
Thought to be made between 1720 and 1740, the mule, or clog, style shoes shown below are described by the Museum of Fine Arts, Boston as, “Paper upper solidly embroidered with mica held in cloisons formed by plaited and netted straw in conventional floral motif. Pointed toe. Green silk covered Louis heel. Leather sole. Pale green silk binding at edges. White silk insole, red silk lining. Condition: heavily soiled, some splitting silks.” Is the straw ring plait or possibly cat’s foot plait?
Thought to be from 1750, the pair shown below is decorated with ornate straw strips or splints. The embroidery is integrated into the straw splints with the stitches formed between the splints not splitting them. This is a style of embroidery, originating in gold work embroidery. It is called Or Nué Embroidery. The flowers and leaves are embroidered in silk and the scalloped edges in silver-gilt thread. The shoes have a pointed toe and a comfortable, low, broad heel covered in silk satin to match that of the lining.
Another view of the same pair:
According to the Victoria and Albert Museum, “As early as 1750 the wealthy had been wearing garments with straw decoration that evoked the natural and pastoral world.” Along with straw adorned garments came straw decorated shoes. How the straw decorated shoes were made may have been left up to each individual shoemaker for all we know. The photos shown below are quite rare to see. They are components of similarly woven straw embroidered pieces intended to be joined together to make a shoe. The shoe components shown below are in the collection of Luton Culture.
These shoe parts are described as silk and straw-work floral pattern in colored silks and tinsel on a background of split straw sewn to linen, consisting of the components of shoe uppers and sides.
Described as an 18th-century pair, the next shoes are similar in construction to those previously described from the Victoria and Albert Museum and Luton Culture. The Museum of Fine Arts, Boston, which holds them, describes the shoes as: "Upper of straw couched to plain cotton and embroidered with polychrome silk yarns in floral motif; silver embroidered swag and white silk binding along top. Pointed toe. White leather insole and lining. Condition: straw broken and missing in places; binding, leather on heel disintegrating; insole worn."
Straw couched shoes, origin unknown, courtesy Museum of Fine Arts, Boston
The Napoleonic Wars, 1799-1815, disrupted trade everywhere in Europe and America. Its effects lasted well into the 1800s. Due to the wars and embargoed goods, fashions changed too. Structured fashions of boned corsets and hoop skirts relaxed significantly. During what would be called, in England, the Regency period (1811-1820) the Empire dress style was embraced. Shoe styles changed as well. According to headoverheelshistory.com, “To go with all the breezy less structured dresses during The Regency were breezy, less structured shoes. No more heel. No more discomfort. No more squashing toes into rigid up turned points. Let’s talk, slippers! Sure they still had pointed toes but they were made of soft material. There were only three styles of shoes to choose from in the later part of the century: the boot, the clog and the dress slipper.”
The nine-inch-long shoes are described by the Fashion Institute of Design & Merchandising (FIDM) as follows:
“These slippers, made of straw and horsehair, were intended for use at home. They were probably made in the Tuscany region of Italy, whose port city of Livorno (traditionally called Leghorn in English) had long been the preferred source of straw for fashionable ladies’ hats. A horsehair mesh supports a variety of intricate straw plaits, braids, and ornaments that were likely surplus trimmings from the bonnet industry. Crimson silk linings highlight these minuscule twists, spirals, loops, and weaves.
The fragile shoes required a light, graceful step. The feminine ideal of the 1830s and 40s owed a great deal to the popularity of ballerinas, who embodied demure poise and refinement onstage. Their costumes were a major influence on fashion, particularly their delicate, flat-soled slippers, still worn by dancers today and sold as fashionable street wear under the name "ballet flats".
Some readers may notice these shoes, and the next few pairs included, are actually made from products more commonly used to make hats in the 1800s. Look carefully and you will see loom-woven bands of straw and horsehair and decorative plaits and edgings made from straw.
From the Kyoto Costume Museum in Japan, we present another pair of house shoes from the same period. These shoes are thought to be Italian in origin. This photo is an excellent example for discussing the hat industry and the techniques it lent to other items such as purses and shoes. The toes and heels of these shoes are created from plait used in hat-making. The horsehair has been worked into the loom-woven bands, or bordures, alongside ring plait, or cat's foot plait. In hat making these bordures would have been cut to length and sewn onto a wire and silk base to form a bonnet; for the shoes they have been cut to length and stitched to a base.
Crinoline and straw are described as the materials of the 1835-1840 pair of shoes below, which are held by the Metropolitan Museum of Art, New York. These are interesting because, although made from bordures, the patterns are not in the common European styles. Could these bordures have been made in America rather than Europe, as the museum supposes?
From the mid-1800s there are various examples of plain straw shoes made from the straw plait sewn together edge to edge.
The shoes below may have been made by cutting the shoe parts from straw hoods (or straw sheets) more commonly made into hats. While the technique for sewing together the plait edge to edge is Italian, the trimmings used to edge the shoes, and decorations, are typically Swiss.
The shoes below, thought to be from 1846, are housed at the Danvers Historical Society in Danvers, Massachusetts:
The red-bowed shoes below were cited incompletely on Pinterest as mid-1800s and Italian in origin. The straw bordures are particularly nice around the top edging of the shoe. The same narrow loom-woven edging, made of silk, straw threads and split straw has been used on the front of the shoe.
The Victoria and Albert Museum describe the next image as a plaited pair of shoes, 1860. They write: “the uppers of these lady’s shoes are worked in the Leghorn remaille technique, where fine straw plaits are joined edge to edge with invisible ladder stitching. This work was produced in Tuscany and was mainly used for high quality hats. However, at the height of its popularity straw plait was used to decorate almost all kinds of clothing – from bodices to parasols. The Italian-made plaits used for the shoes were probably sold as flat sheets, maybe even cut into the pattern pieces, and then assembled with various trimmings. This means that the shoe could have been made up anywhere in Europe, or even in America.”
According to a Harvard John A. Paulson School of Engineering and Applied Sciences article entitled, “Shoes of the 19th Century”, the author states, “As late as 1850 most shoes were made on absolutely straight lasts, there being no difference between the right and the left shoe.” Other scholars have differing views.
The historical note about the shoes shown below is they were gifted to the Victoria and Albert Museum in London by Her Majesty (HM) Queen Mary.
Described as, “Pair of shoes for a child, made of plaited straw and catgut, lined with red silk. The shoes are heel-less with rounded square toes, and are in a slip-on style with no fastenings or adjustment. They are made in a striped openwork pattern, each with a rosette on the vamp.” The catgut described is more likely to be horsehair. Incidentally, catgut does not come from cats!
The 6-inch long children shoes are further described as being made in England between the dates of 1850 and 1879. As with the previous shoes, these shoes are also are made from hat products, straw plait sewn together edge to edge and bordures of horsehair and straw threads. The rosette is typical of those made in Switzerland in the mid-1800s to decorate hats.
Described by Tennant's auction as "Two Similar 19th Century Children's Straw Work Shoes with a leather sole, faded pink silk lining with a straw rosette to the front, 13.5cms, and another similar 12 cms (2)".
As with the previous shoes, the above shoes are made from straw plait stitched together edge to edge and with bordures. The edging is a typical Swiss trimming known as Bögli, made from straw threads and split straw. The rosettes are difficult to identify but appear to be a plait called Glanz Zaggli that has been made from straw threads.
The origin of the next pair of shoes to be shown is unknown to the owner or The Straw Shop. However, we do know the shoes are made from straw plait and edged with a typical Swiss trimming made on a loom from straw threads and split straw. The rosettes are decorated with wooden beads covered with split straw and loops of straw threads.
Raffia and man-made straw were introduced in the late 1890s. These changes ended the exclusive use of natural materials previously found in shoes.
Into the 20th century, the shoes below appeared in a magazine dated April 13, 1941. The shoes are an example of the work of the Freiamt, Switzerland. The photograph appears in two books about the Swiss straw product industry written by G. Rodel.
Now a photograph of the shoes and Rodel’s caption.
The shoes above are decorated with straw threads and other possible remnants from the once thriving straw hat industry in Switzerland.
The straw shoes in the 1950s advertisement below are very practical, comfortable, undecorated, everyday shoes.
Straw decorated shoes did appear in the 20th century, but only occasionally. The pink pair of evening shoes below was produced in France by the House of Dior in 1956.
With shoes decorated with straw having existed for so many hundreds of years, it will be interesting to see whether the fashion of straw decorated shoes reappears this century. To date we have yet to locate any other intricately straw decorated shoes made after 1956.
The Straw Shop is grateful to the collections of many museums and private individuals who made this article possible. The Straw Shop encourages museums to carry the book, Swiss Straw Work Techniques of a Fashion Industry, as an invaluable reference to further identify their collections of straw shoes and other straw fashions.
The Straw Shop welcomes more information about the history of straw decorated shoes.
Copyright, Jan Huss, The Straw Shop 2016
<urn:uuid:e2df8c29-e497-4102-a7a1-6cc73ffdf652> | In our current world, where the use of technology has become so prominent, it is not rare to walk into a classroom at any level and notice at least one technological device. The use of many traditional teaching tools, such as chalkboards and slates have been replaced by modern technological devices, and curriculums often revolve around online learning. Since their emergence in 2010, iPads are a relatively new form of technology that have increasingly entered into the sphere of education as a means to enhance student learning. Many schools mandate that students possess their own iPad devices in order access online learning tools, applications, and digital textbooks used both inside and outside of school. I can recall my senior year of high school when many of my teachers asked their students to raise their hand if they favored using iPads as a means to access digitalized books. As a result of the large amount of support that my school community had for the use of the device, my high school implemented iPads in the classroom for the first time in 2014. Every incoming freshman student was expected to own their own iPad that would be used to access digital books along with academic websites and applications. However, only shortly after this change was made, teachers and parents, including my own, began to question whether my high school had made the right decision. How have iPads changed ― or not changed ― student learning since they were first introduced in 2010, and has the public’s view towards the use of iPads in the classroom changed between 2011 and today?
Many studies show that the implementation of iPads has transformed classrooms and helped to enhance student learning and engagement both within and outside of schools in ways that traditional forms of teaching, along with the technology used in the classroom prior to 2010, had not allowed. I argue that even though the implementation of iPads has changed student learning, sometimes for the better and sometimes for the worse, the results that researchers have uncovered regarding how the use of iPads in the classroom enhances or hinders student learning have stayed constant between 2011 and 2016 but have begun to change in 2017. Researchers between 2011 and 2016 have all discovered that the use of iPads in schools can lead to decreased academic performance and attentional awareness because these devices can act as a major distraction to students and their nearby peers. Researchers between 2011 and 2016 have also found that iPads can benefit student learning by increasing students' academic engagement, motivation, and achievement both within and outside of schools. However, 2017 has seen an even greater increase in negative attitudes towards iPads and their effect on schooling, as schools have begun to recognize the device's limitations and turn instead to Google Chromebooks in order to enhance student learning.
Most of the research conducted on iPads in the classroom has occurred between 2012 and 2015. A large amount of this research recognizes how iPads, like the laptops used in the classroom prior to 2010, act as portable devices that allow students to transport their learning outside of the classroom. As early as 2011, researchers have recognized how the “touch sensitive, lightweight and compact” features of the iPad make it a device that has the potential to change the classroom and student learning (“Educators Find the iPads”). Throughout the years, many researchers have continued to note that iPads have enhanced student learning because they are more beneficial to students than other portable devices are like laptops and smartphones. In 2014, researchers Picard, Martin, and Tsao claim that
iPads have all the functionality and connectivity of laptop computers, but are far more light weight, and all the mobility of smartphones, but with a larger, multi-touch screen. The iPad's finger-based interface is intuitive to use, convenient, and can be used to perform a variety of activities, including writing and drawing with the finger-tip. (Picard et al. 203-204)
iPads therefore act as a medium that allows students access to a portable device with touch-screen abilities that is still large enough, and easy enough to use, for students to complete their school-related activities.
Another way in which iPads have their own unique aspects that have changed student learning is through their ability to engage students with their learning in a variety of creative ways that were not previously available to them before 2010. The touch screen aspect of iPad devices, along with the devices multiple apps, which have a variety of video and audio functions, allow and encourage students to apply their creative thinking and skills to their school work more than they had been able to do prior to 2010. In 2011, Westlake High School conducted an iPad pilot initiative program in order to test the effects of iPads on student learning and compare it to student learning without iPads. During the experiment, the school discovered that iPads have “spurred creativity as well because of the camera, video camera, and the apps that can be used for creative storytelling, video production, etc.”(Foote 16). In addition, “because the camera is embedded in the device, projects that might have taken weeks in the past can be completed in a matter of days” (Foote 16). Having recordings of class projects enables students to be able to replay their work so that they can access and critique it and ultimately enhance their learning experience.
Studies conducted on the implementation of iPads between 2011 and 2016 have also shown that iPads can affect the creativity aspect of student learning by discouraging students from using the creative skills that traditional methods of drawing with a pen and paper encouraged them to use. Picard et al. conducted a study in which they compared the drawing results of students who drew with pen-on-paper and those who drew on their iPads by touching the screen. Although they had originally thought that children's drawings would be more detailed on the iPads because many have claimed that "finger drawing on an iPad screen enhances the quality of the resulting production because it bypasses the difficulties involved in handling a pen" (Picard et al. 205), they discovered that their hypothesis was incorrect. According to the results, the researchers "found a slight but significant decrease in graphic scores in the iPad (finger drawing) condition, compared with the standard paper/pen drawing" (Picard et al. 210). Such findings suggest that students are more likely to produce detailed and artistic drawings with their traditional pen and paper than with their iPads, and that iPads may have changed student learning by leading students to apply less creativity to their drawings.
In addition, iPads allow students to communicate in ways that computers prior to 2010 did not offer. In their own study conducted in 2011, researchers Orrin T. Murray and Nicole R. Olcese from the Pennsylvania State University considered “whether the iPad and its attended software constitutes a set of resources for which there is no analog equivalent, thus allowing teachers and students to do things in learning environments that could not otherwise be possible.” (Murray and Olcese 43). One of their major findings included how “as advertised, the iPad offers users a way to connect to others via Bluetooth and collaborate via the Internet through popular social networking services like Facebook and Twitter” (Murray and Olcese 46). With iPads, students can communicate globally and gain access to a vast amount of information that extends beyond their classroom and home lives due to the variety of unique applications. Murray and Olcese go on to explain how iPad applications also allow for students to participate in peer group work, as these applications enable
multiple users to create and share material simultaneously using either a WiFi peer-to-peer function. It also provides an opportunity for multiple users to work on the same document at the same time, a function that has given cachet in the classroom to online collaborative offerings like Google Documents. (Murray and Olcese 47)
Such innovative applications encourage students to engage not only with those in their classroom but also with people around the world. These findings suggest that the introduction of iPads into the classroom has allowed students to interact with a globalized world and to extend their learning beyond the classroom even more than prior means of learning allowed them to do.
In addition, studies between 2011 and 2016 have shown that the increase of iPads in the classroom has helped to increase student motivation and engagement in school. In the Westlake High School iPad pilot initiative program, “90% [of the 854 students surveyed] reported that the iPad had a somewhat positive or positive effect on their motivation to learn. Eighty-nine percent reported that the iPad had a positive or somewhat positive impact on their ‘desire to dig deeper into a subject.’”(Foote 17). Although such findings are not generalizable to the entire population since the group studied is only a small sample of all students, the findings are characteristic of many of the positive results uncovered between 2011 and 2016 regarding how students have responded positively to the implementation of iPads and its effects on their learning. In many cases, the implementation of iPads has been associated with “increased motivation, enthusiasm, interest, engagement, independency, and self- regulation, creativity and improved productivity” (Clark and Lukin 4).
iPads have also affected the role of teachers in the classroom and their pedagogical practices. Many teachers expressed how iPads have allowed students to receive more individualized lesson plans that meet their needs. Research in 2013 found that "teachers felt that the devices [iPads] enabled them as teachers, to promote independent learning, to differentiate learning more easily for different student needs and to easily share resources both with students and with each other" (Clark and Lukin 4). Through the variety of applications that iPads offer, students can independently complete activities at different levels. Research conducted in 2016 also showed that iPads allow teachers "to personalize instruction for every child. If the student is struggling, [the teacher] can let the iPad offer repetition (through games, targeted reading, or apps) and if another needs to move faster, [the teacher] directs him to a faster-paced game or app" (Tynan-Wood). Also, iPads can be used alongside teacher instruction, as students can learn from the applications on their iPad rather than just listening to their teacher for the entire class.
One of the downsides to the implementation of iPads in the classroom that researchers have discovered between 2011 and 2016 is that student’s learning can suffer when teachers do not receive the training needed to properly implement the devices in the classroom (Tynan-Wood). Also, research conducted in 2016 found that iPads can distract students from their learning at school, and even in class, “some students bypassed security measures and surfed prohibited websites” (Tynan-Wood). Such research about how iPads can be distracting devices existed prior to 2016, as in 2013, a study was conducted on 777 students at Arkansas State University and discovered that “students reported using their smartphone or other internet capable device an average of 11 times per day. Eighty-six percent of students reported texting during class, 68 percent reported checking emails and 66 percent said they use social networking sites during class” (Hennington). Studies throughout 2011 and 2016 have found that iPads have changed student learning by causing students and their nearby peers to be more distracted by the multiple applications that are easily available to use.
While studies between 2011 and 2016, have all looked at how iPads have changed student learning in the classroom either positively or negatively, there have been not been a large number of studies conducted in 2017 that have looked at the use of iPads in the classroom yet. Rather, the studies conducted in 2017 on this topic have mostly looked at how schools have begun to shift away from the use of iPads to Chromebooks. The New York Times claimed in 2017 that “Apple’s iPads and Mac notebooks ― which accounted for about half of the mobile devices shipped to schools in the United States in 2013 ― have steadily lost ground to Chromebooks…” (Singer). A major reason for this shift away from iPads and towards Chromebooks is because “while school administrators generally like the iPad’s touch screens for younger elementary school students, some said older students often needed laptops with built-in physical keyboards for writing and taking state assessment tests” (Singer). Another article in 2017 related how Lancaster City School District has begun to use Chromebooks more than iPads because “along with being cheaper, Chromebooks are easier for IT staff to manage. A lot of education software utilizes Flash, which is not supported on iPads” (Thurston). Such pushback against the use of iPads illustrates how today, the public’s focus has shifted away from how iPads can change student learning and towards how new methods of technology can complete this task even better.
Between 2011 and 2016, researchers have focused on finding evidence of how iPads have changed student learning for the better or worse; however, beginning in 2017, researchers are starting to no longer look at the effects of iPads on student learning but rather at the new devices that are emerging in the classroom. While the pushback against the use of iPads in the classroom because of the device’s limitations has also shown to influence the way in which students learn in the classroom, looking at the effects of the rise of Chromebooks on student learning is another topic of study in and of itself. It will be interesting to see if iPads will continue to remain in the classroom at all by the end of 2017 or if they will be completely replaced by Chromebooks. If schools continue to follow the trajectory where they increase the number of technological innovations used in the classroom in order to keep up with the modernized world, then it becomes extremely important for society to study the effects of these technological innovations on student learning in order to ensure that students are receiving the education that they need.
Clark, Wilma, and Rosemary Luckin. "iPads in the Classroom." What The Research Says, 2013, <http://ss-web stag.westminster.ac.uk/teachingandlearning/wp-content/uploads/sites/7/2015/08/2013-iPads-in-the-Classroom-Lit-Review-1.pdf>. Accessed 4 May 2017. Web.
“Educators Find the iPad a Useful Aid in the Classroom.” Targeted News Service. Washington, D.C. 3 Nov 2011, < https://search.proquest.com/docview/902124765?accountid=14405>. Accessed 4 May 2017. Web.
Foote, Carolyn. “The Evolution of a 1: 1 iPad Program.” Internet Schools vol. 1, no. 15-18, 2012, <http://ajhs.auburn.cnyric.org/ajhs_library/01E90843-006ACFDF.0/The%20Evolution%20of%20a%201-1%20Ipad%20Program.pdf>. Accessed 4 May 2017. Web.
Hennington, C. "Tech Offers Opportunities, Distractions in Classroom." University Wire, 13 Feb 2014, <https://search.proquest.com/docview/1497948419?accountid=14405>. Accessed 4 May 2017. Web.
Murray, Orrin T., and Nicole R. Olcese. “Teaching and Learning with IPads, Ready or Not?.” TechTrends vol. 55, no. 6, 2011, pp. 42-48, <https://link.springer.com/article/10.1007%2Fs11528-011-0540-6?LI=true>. Accessed 4 May 2017. Web.
Picard, Delphine, Perrine Martin, and Raphaele Tsao. “IPads at School? A Quantitative Comparison of Elementary Schoolchildren’s Pen-on-Paper Versus Finger-on-Screen Drawing Skills.” Journal of Educational Computing Research vol. 50. no. 2, 2014, p. 203-212, <http://journals.sagepub.com/doi/abs/10.2190/EC.50.2.c>. Accessed 4 May 2017. Web.
Thurston, Trista. "Chromebooks for Everyone: Lancaster Schools are Rolling Out New Technology." Lancaster, Ohio: Lancaster Eagle-Gazette, 20 Feb 2017, <http://search.proquest.com/docview/1869869607/B3E7F63D8A924298PQ/6?accountid=14405>. Accessed 4 May 2017. Web.
Tynan-Wood, Christina. “IPads in the Classroom: The Promise and the Problems.” Great Schools, 7 Mar. 2016, <http://www.greatschools.org/gk/articles/ipad-technology-in-the-classroom/>. Accessed 4 May 2017. Web.
Singer, Natasha. "Apple Devices Lose Luster in American Classrooms". New York Times. New York: New York Times Company, 2 March 2017, <http://search.proquest.com/docview/1873380267/B3E7F63D8A924298PQ/3?accountid=14405>. Accessed 4 May 2017. Web.
<urn:uuid:f29726ac-8601-4ba9-af24-13064b639760> | Evidence informing policy
The dose-response relationship between alcohol and cancer risk (i.e. risk increases in line with consumption) means that a significant reduction in the dose, that is, in the amount and frequency of alcohol used, reduces the risk of developing an alcohol-related cancer.
There is a large body of evidence on the effectiveness of alcohol policy interventions in reducing alcohol consumption and improving long-term health outcomes. A key example is the World Health Organization's ‘STEPwise approach’ to alcohol policy options for the prevention and control of non-communicable diseases. This approach ranks mechanisms that impact on alcohol affordability, availability and promotion as the best interventions to reduce alcohol-related harm.
The evidence supports the following interventions for reducing alcohol-related harms:
- reforming alcohol pricing;
- phasing out alcohol promotion to young people;
- improving the safety of drinkers and those around them through liquor control regulations, controls on outlet density and better policing;
- promoting a safer drinking culture through social marketing and school, community and workplace programs;
- providing targeted programs for Aboriginal and Torres Strait Islander people;
- strengthening/supporting primary healthcare to provide interventions; and
- strengthening data collection and the evidence base for interventions.
This section focuses on the recommendations with the most potential to reduce the risk of alcohol-related cancer on a population basis.
Alcohol pricing reform
Current taxation and pricing arrangements
Under the current alcohol taxation system in Australia, different alcohol products are taxed differently (see Table 1), but not according to the risk and harm associated with their alcohol content. Alcohol is taxed on either a volume or a value basis, with a range of tax rates depending on the type of beverage and packaging, alcohol strength, place of manufacture and the method or scale of production.
There are currently four categories of taxes applied to alcohol:
- Goods and Services Tax (GST) - a 10% ad valorem (i.e. according to the value of the goods) tax on all retail sales of alcohol;
- Excise duties - a volumetric tax based on the alcohol content of the product, with rates indexed to the Consumer Price Index twice a year. It is levied on a per litre of alcohol (Lal) basis, at rates that vary according to the type of beverage, size of container and alcoholic strength;
- Wine Equalisation Tax (WET) - an ad valorem tax that applies to wine based on the value of the goods at the last wholesale sale;
- Customs duties – a combination of both volumetric and ad valorem tax imposed on imported products only.
Table 1. Alcohol tax measure by product type
| Tax measure | Beer | Spirits and RTDs | Wine | Cider |
| --- | --- | --- | --- | --- |
| WET | No | No | Yes | Yes (excluding flavoured ciders) |
| Customs duty (ad valorem) | No | Yes (imported) | Yes (imported) | No |
| Customs duty (per unit of alcohol) | Yes (imported) | Yes (imported) | No | No |
*RTDs - Ready-to-drink
Australia's alcohol taxation system is not based on public health or alcohol harm minimisation principles. The ad hoc development of the system has resulted in inconsistencies that do not encourage lower-risk alcohol use.
Currently, the tax payable by consumers per standard drink (10 grams of pure alcohol) for different types of alcoholic beverages varies considerably. These variations in tax payable do not reflect the relative contribution of different beverage types to alcohol-related harms.
Some of these inconsistencies are highly problematic in a public health context. For example, the tax payable per standard drink for low-price cask wine with an alcohol content of 12.5% is only $0.05, whereas the tax payable per standard drink of mid-strength beer in a can/stubbie with an alcohol content of just 3% is $0.26.
Figure 1: Tax payable per standard drink* of alcohol, various products, Australia, June 2008
*A standard drink is equal to 0.01267 litres (12.67 mL) or 10 grams of pure alcohol
Note: WET (Wine Equalisation Tax) payable per standard drink of wine is based on a 4 litre cask of wine selling for $13.00 (incl. GST), a 750ml bottle of wine selling for $15.00 (incl GST)[Bottled Wine 1], a 750ml bottle of wine selling for $30 (incl GST)[Bottled Wine 2] and a 750ml bottle of port selling for $13.00 (incl GST)
Source: Vandenberg 2008
Evidence shows that increasing alcohol prices through taxation decreases both alcohol use and alcohol-related harms. It has been estimated that a price increase of 10% reduces alcohol use by an average of 5%.
Evidence also suggests that this reduction in use applies to all groups of drinkers, including high-risk drinkers. Price increases have also been shown to reduce the amount of alcohol drunk at a population level over time. Therefore, increasing the price of alcohol has potential to reduce alcohol-related cancer risk on a population basis, particularly in view of the dose/response relationship between alcohol use and cancer risk.
Alcohol use among younger people also decreases when price is increased. This is significant in a cancer prevention context, because young people who drink at high-risk levels are more likely to continue using alcohol at harmful levels over the long term, thus their risk of alcohol-related cancer increases significantly.
One Australian study concluded taxation measures could reduce the social costs of alcohol in Australia by between 14 and 39% (or between $2.19b and $5.94 billion in 2004-05 dollars). The returns from taxation measures can be used to fund prevention and treatment programs, as recommended by WHO, thereby further reducing alcohol-related harms.
Studies have shown that a single "volumetric" alcohol taxation system - where tax is levied on the alcohol content of a product by volume – has the potential to reduce alcohol use and related harm, provided it translates to an overall increase in the price of alcohol as well as consistency in how alcohol tax is levied. Taxing all alcohol products on a volumetric basis would make stronger (more carcinogenic) alcohol products more expensive, therefore driving sustained shifts in use toward products with lower average alcohol content by volume.
It is also the most cost-effective intervention; it has been estimated that a volumetric tax on all alcohol products set at the existing rate for spirits could reduce overall alcohol use by 24%, resulting in a net health gain of 170,000 Disability Adjusted Life Years (DALYs) and an increase in revenue of over $3 billion. Abolishing the WET and replacing it with a volumetric tax on wine would increase taxation revenue by $1.3 billion per year, reduce alcohol use by 1.3%, save $820 million in health care costs and avert 59,000 DALYs. The ACE-Prevention report showed a volumetric tax at 10% above the current excise on spirits would not only provide significant public health benefits, it would be cost-saving. ("Cost-effective" public health measures are those that provide a significant return on investment when measured in DALYs; cost-saving interventions result in direct economic returns to government that are higher than the investment.)
The landmark "Henry Review" of Australia’s taxation arrangements calls for alcohol taxation to be "levied on a common volumetric basis across all forms of alcohol, regardless of place, method or scale of production".
A minimum floor price ensures alcohol cannot be sold below a set price. The absence of a minimum price based on alcohol volume can provide relatively easy access to products for people who drink at harmful levels, encouraging high-risk consumption. Alcohol products that are inexpensive to produce and distribute can be sold cheaply, irrespective of alcohol content. Increasingly, retail outlets such as supermarkets heavily discount alcohol products, including to below-cost prices, to attract customers into their stores. Reduced-price promotions at licensed venues also encourage binge drinking.
A minimum floor price for alcohol per standard drink may assist in reducing the supply of cheaper, more harmful drinking options. The Northern Territory Government has introduced a minimum floor price for alcohol to minimise the harms associated with high-alcohol, low-cost alcoholic beverages. It may encourage heavy and high-risk drinkers to switch from high alcohol products to lower-strength, less harmful options. Establishing a minimum price for alcohol, which raises the cost of products at the cheapest end of the spectrum, is likely to have a substantial impact both on overall drinking levels and on drinkers at most risk of short and long-term harm.
Restrictions to marketing and promotion of alcohol
The WHO defines alcohol marketing and promotion as “any form of commercial communication or message that is designed to increase, or has the effect of increasing, the recognition, appeal and/or consumption of particular products and services".
Alcohol beverages are marketed and promoted through a mix of television, online (including social media), radio and print advertisements, point-of-sale marketing and sponsorship of sporting and cultural events. Embedded and incidental advertising through product placement in social media, films and television programs is also significant.
Marketing strategies are becoming increasingly based on internet and mobile phone technology to target young people. While this is a priority for alcohol companies keen to recruit new customers, it is particularly problematic in a public health context because the harms of high-risk alcohol use impact disproportionately on younger age groups. Evidence from systematic reviews has shown direct associations between exposure to advertising and age of drinking onset, prevalence of drinking and the amount of alcohol use by young people. Younger people who consume alcohol at risky or high-risk levels may continue to do so over the longer term, thus significantly increasing their risk of alcohol-related cancer.
Current regulatory environment
There are very few legislative restrictions upon the content or placement of alcohol advertising and promotion in commercial and subscription media and social media; advertising is largely self-regulated, predominantly through voluntary industry codes of practice. These include:
- the Alcohol Beverages Advertising Code (ABAC), an alcohol-specific advertising code of practice and complaints mechanism, which covers the content and placement of alcohol advertising in most media. The code does not apply to alcohol sponsorships and advertising related to sponsorships.
- television broadcast codes of practice such as the Commercial Television Industry Code of Practice, which regulates the placement of alcohol advertising on commercial free-to-air television and the Subscription Broadcast Code of Practice;
- the Outdoor Media Association Code of Ethics; and
- the Commercial Radio Code of Practice.
Why current regulatory arrangements are inadequate
Young people are exposed to the same number of alcohol advertisements as adults and the alcohol advertisements that appeal to young people increases their intention to use and purchase the advertised products. Australia's well-documented use of alcohol at harmful levels suggests improvements in the regulation of alcohol marketing and promotion are urgently required. Key limitations in the current framework include:
- The codes are voluntary; they cannot be enforced and there are no penalties for breaches. Voluntary codes have been shown internationally and in Australia to be ineffective;
- Platform scope is limited; some media channels are excluded including cinema advertising;
- Inadequate protection for children and adolescents from exposure to alcohol advertising. Although the Commercial Television Industry code restricts alcohol advertising to mature and adult viewing classification periods (i.e. after 8.30pm), alcohol advertising is permitted during sport programs on weekend and public holiday sporting events and events broadcast simultaneously across a number of time zones. This exposes children to intensive alcohol advertising; significant numbers of children also watch TV after 8.30pm. Recent research has shown that children are being exposed to as much alcohol advertising when viewing televised sport as adults.
- ABAC relies solely on receiving complaints from the public. Community awareness of the complaints process is low.
Cancer Council recommends a staged phase out of alcohol promotions from times and placements which have high exposure to young people aged up to 25 years, including:
- Advertising during all sport programs
- Advertising during high adolescent/child viewing
- Advertising on outdoor signage and public transport
- Sponsorship of sport and cultural events.
For detailed information on reforms to alcohol marketing and promotion for reducing the health harms of alcohol use, see Position statement - Marketing and promotion of alcohol.
Restrictions to alcohol availability
There is a wealth of evidence demonstrating that restricting the availability of alcohol can reduce the use of alcohol and alcohol-related harm, and that it is a highly cost-effective measure.
A 2019 umbrella review of systematic reviews found that limiting on- and off-premise outlet density could be beneficial in reducing alcohol use and subsequently reducing alcohol-related harm. A separate systematic review with meta-analysis found that limiting the physical availability of take-away alcohol (days and hours of sale and outlet density) would reduce alcohol use. For each additional day of sale there was a 3.4% increase in per capita total alcohol use. These findings support the implementation of policies designed to restrict hours of sale and the number of outlets.
Inadequate restrictions potentially result in increased harm from alcohol. In Australia, increased market availability of alcohol has been found to be associated with a significant increase in alcohol use and consequently significant increases in mortality from head and neck, lung, colorectum and overall cancers for males and head and neck and colorectum cancers for females.
Consumer information and labelling of alcohol
Health information (e.g. ingredients, alcohol volume) and warning labels on alcohol products have the potential to increase public awareness of alcohol harms, notably because they can directly target the user at purchase and during drinking.
Health information labelling
In Australia, Standard 2.7.1 (“Labelling of Alcoholic Beverages and Food containing Alcohol”) of the Australia New Zealand Food Standards Code, administered by Food Standards Australia New Zealand (FSANZ) under the FSANZ Act 1991, stipulates that an alcohol label is to include alcohol by volume (expressed in mL/100g or % alcohol by volume) and the estimated number of standard drinks contained. The size and legibility of the information, however, vary markedly between products. In addition, a list of ingredients or nutritional information, such as the amount of sugar, kilojoules or preservatives, is not required on alcohol products as it is on non-alcoholic beverages. Mandatory energy labelling and prohibition of sugar and carbohydrate claims on alcohol products are being considered by FSANZ.
Evidence shows that warning labels have the potential to help reduce risky and high-risk alcohol use. In tobacco control, the use of graphic warning labels is effective in increasing awareness, changing attitudes and changing behaviour, suggesting that graphic warning labels could help reduce alcohol use as well. A comprehensive review of key international studies, conducted in British Columbia in 2006, concluded that warning labels raised awareness of alcohol-related harms. Cancer warning labels on alcohol products have the potential to increase awareness of the link between alcohol use and cancer risk. Individual studies have also shown that warning labels could help to encourage culture change towards less hazardous consumption levels, while others indicate that the effectiveness of labels may depend on their design, content and targeting.
Role of general practitioners
Around 86% of Australia's population visit their GP every year, making primary care an important healthcare sector for chronic disease prevention. Studies show that advice in general practice consultations can reduce harmful alcohol use in men; the effect is less clear in women. An international review of 29 controlled trials in 2007 showed GP advice could reduce harmful drinking in men by 20-30%; a US-based analysis of 12 randomised controlled trials published in 1997 showed men who drank at harmful levels were at least twice as likely to moderate their drinking if advised by their GP. Despite these findings, the primary care sector in Australia receives limited support for implementing preventive healthcare interventions such as alcohol counselling.
The Royal Australian College of General Practitioners produces guidelines on interventions targeted at smoking, nutrition, alcohol and physical activity such as the Red Book, SNAP guide and Green Book.
Social marketing/public education
Social marketing – i.e. a range of targeted communication and public education strategies – has been effective in raising awareness about important health issues and encouraging positive behaviour change, particularly when integrated with supportive public policies. While there is potential for social marketing programs to encourage drinking at lower risk levels, it is a complex area, partly because perceptions of drinking vary widely; the evidence on optimal effectiveness remains unclear and warrants further research.
It should be noted that social marketing campaigns to raise awareness of the harms of alcohol use have never been run on a sustained, large-scale basis, so there is no direct evidence of their effectiveness. However, an evaluation of Western Australian campaigns showed that public education campaigns increased parental awareness of adolescent alcohol use and can increase awareness of the links between alcohol and cancer. Furthermore, targeted public education campaigns – for example those linked to complementary alcohol control policy measures such as random breath testing of motorists – can be effective.
Given the potential benefits of appropriately targeted, integrated social marketing strategies, research into the development of effective campaigns should be a priority. Social marketing and public education to date have focused on the short-term harms of alcohol use; it is also important to raise community awareness about the link between long-term alcohol use and cancer risk. Moreover, any campaigns that have the effect of reducing overall alcohol use on a population basis have the potential to reduce the disease burden of alcohol-related cancers.
It is important to note that while testing blood biomarkers is a good start, it is not the only way of measuring health and longevity. Other tests and markers that can provide a more comprehensive view of health and longevity include the organic acids test, which measures nutritional and metabolic biomarkers and the levels of amino acids in the urine and fatty acids in the blood. Additionally, quantifying the microbiota and microbiome can provide essential information on gut health and its impact on overall health.
Furthermore, a high-quality and comprehensive genetic test (DNA) can provide insights into an individual's genetic makeup and potential genetic predispositions to certain diseases. Also, an epigenetic test can provide information on how lifestyle and environmental factors affect gene expression and potential health outcomes. Thus, it is crucial to consider combining these tests and markers to gain a complete view of an individual's health and longevity.
Physicians generally consider findings “normal” if they fall within the reference range, and they often miss the big picture by ignoring other relevant markers. In medical laboratory science, however, the word “normal” is placed in quotation marks because there is no clear-cut line between what is normal and what is not; this is why the term “reference range” is used instead of “normal range”.
A lab result may be slightly higher or lower than the reference range without indicating that the individual is ill. That interpretation is perfectly reasonable if health is viewed simply as the absence of disease, but it is problematic from the viewpoint of maintaining good health and preventing illness. If health is instead understood as vibrant well-being at both the population and the individual level, the reference range may be viewed differently.
The WHO (World Health Organization) took a stance on this topic in a statement from 2014, and its latest comprehensive report, published in the International Journal of Epidemiology in 2016, stated that “health is not just the absence of disease...”. International understanding of this has grown in recent years, and preventive health care is becoming as important an area as the medical care of illness.
What is an optimal level?
Not all laboratory markers have so-called optimal values determined in scientific studies, but such values do exist in some cases. Optimal values are likely to be based on population-level findings regarding low mortality or, for instance, the greatest likelihood of preventing cardiovascular disease associated with a particular marker. Optimal levels, as opposed to a reference range, have also been defined for some vitamins. For example, a testosterone level at the lower end of the reference range may indicate subclinical hypogonadism.
However, it is crucial always to compare the results to your previous results and track the changes over time, particularly after lifestyle changes. It is also beneficial to take several samples to get a bigger picture and to minimize the effect of slight day-to-day variation before interpreting the results; a simple example of this kind of tracking is sketched below.
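As a minimal illustration of this kind of tracking, the sketch below averages repeated measurements of a single marker and checks the average against a reference range. The marker, the sample values, and the reference range are placeholder examples only, not clinical guidance.

```python
from statistics import mean

# Hypothetical fasting glucose results (mmol/L) from repeated samples.
# Values and the reference range below are illustrative only.
samples_before = [5.9, 6.1, 5.8]
samples_after = [5.4, 5.5, 5.3]   # after lifestyle changes
reference_range = (4.0, 6.0)      # example laboratory reference range

def summarize(samples, ref):
    avg = mean(samples)            # average out day-to-day variation
    low, high = ref
    return avg, low <= avg <= high

avg_before, ok_before = summarize(samples_before, reference_range)
avg_after, ok_after = summarize(samples_after, reference_range)

print(f"Before: {avg_before:.2f} mmol/L, within reference range: {ok_before}")
print(f"After:  {avg_after:.2f} mmol/L, within reference range: {ok_after}")
print(f"Change: {avg_after - avg_before:+.2f} mmol/L")
```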
The 45 Most Important Blood Biomarkers
There are numerous blood biomarkers that are important for health and longevity, and their significance may vary depending on a person's age, gender, medical history, and overall health. However, based on current scientific knowledge, here is a list of 45 blood biomarkers, in no particular order, that are commonly used as indicators of health and longevity.
It is important to note that biomarkers should not be interpreted in isolation and should always be considered in the context of an individual’s medical history, lifestyle factors, and other relevant health metrics (all the references for the markers and more are found in the Optimize Your Lab Results Online Course).
- C-reactive protein (CRP): CRP is a protein that increases in response to inflammation in the body. High levels of CRP have been linked to an increased risk of heart disease, diabetes, and other chronic health conditions and mortality. Monitoring CRP levels can help identify inflammation and other related health issues.
- Fasting blood glucose: Fasting blood glucose is a measure of the amount of glucose in the blood after an overnight fast. Elevated blood glucose levels are a key indicator of diabetes and metabolic syndrome, which are associated with an increased risk of heart disease, stroke, and other chronic health conditions.
- Hemoglobin A1C (HbA1C): HbA1C measures average blood glucose levels over the past 2-3 months. High HbA1C levels indicate poor glucose control and insulin resistance and have been associated with an increased risk of heart disease, stroke, and other chronic health conditions. A common conversion from HbA1C to estimated average glucose is shown in the calculation sketch after this list.
- High-density lipoprotein (HDL) cholesterol: HDL cholesterol is often referred to as “good” cholesterol because it helps remove LDL cholesterol, or “bad” cholesterol, from the bloodstream. Low HDL levels are a risk factor for heart disease, while high levels are associated with a lower risk of heart disease and other chronic health conditions.
- Low-density lipoprotein (LDL) cholesterol: LDL cholesterol is often referred to as “bad” cholesterol because it can contribute to plaque formation in the arteries. High LDL levels may be a risk factor for heart disease and other chronic health conditions.
- Triglycerides: Triglycerides are a type of fat found in the blood. High triglyceride levels have been associated with an increased risk of heart disease, stroke, and other chronic health conditions.
- Total cholesterol: Total cholesterol is the sum of HDL, LDL, and other cholesterol particles in the blood. High total cholesterol levels are a risk factor for heart disease and other chronic health conditions. Conversely, very low total cholesterol has been associated with vitamin D deficiency, impaired steroid hormone production, depression, and an increased risk of premature death from various causes.
- Homocysteine: Homocysteine is an amino acid that can be toxic to the body at high levels. Elevated homocysteine levels have been linked to an increased risk of heart disease and other chronic health conditions due to increased oxidative stress.
- Vitamin D: Vitamin D is an essential nutrient that plays a crucial role in bone health, immune function, and many other physiological processes. Low vitamin D levels have been linked to an increased risk of various health conditions, including osteoporosis, cancer, and autoimmune diseases.
- Serum iron: Serum iron levels measure the amount of iron in the blood. Iron is an essential nutrient that plays a critical role in the formation of red blood cells. High serum iron levels have been linked to an increased risk of heart disease and mortality, while low levels can lead to anemia.
- Ferritin: Ferritin is a protein that stores iron in the body. Elevated ferritin levels indicate excess iron storage, which has been linked to an increased risk of various health conditions, including heart disease, cancer, and diabetes. Too low levels indicate iron deficiency.
- Transferrin saturation: Transferrin saturation measures the amount of iron bound to transferrin, a protein that transports iron in the blood. Elevated transferrin saturation levels can indicate excess iron storage and an increased risk of various health conditions. Too low levels indicate iron deficiency.
- Complete blood count (CBC): A CBC measures several components of the blood, including red blood cells, white blood cells, and platelets. It can help diagnose and monitor various conditions such as anemia, infection, and leukemia.
- White blood cell count (WBC): A WBC count measures the number of white blood cells in the blood. It can help diagnose and monitor infections, inflammation, and immune system disorders. Levels in the lower part of the reference range are linked to reduced mortality risk.
- Red blood cell count (RBC): An RBC count measures the number of red blood cells in the blood. It can help diagnose and monitor anemia, kidney disease, and bone marrow disorders.
- Hemoglobin: Hemoglobin is a protein in red blood cells that carries oxygen throughout the body. A hemoglobin test measures the amount in the blood and can help diagnose and monitor anemia and other blood disorders.
- Hematocrit: Hematocrit measures the proportion of red blood cells in the blood. A hematocrit test can help diagnose and monitor anemia and dehydration.
- Mean corpuscular volume (MCV): MCV measures the average size of red blood cells. An MCV test can help diagnose and monitor anemia and other blood disorders.
- Mean corpuscular hemoglobin (MCH): MCH measures the amount of hemoglobin in a single red blood cell. An MCH test can help diagnose and monitor anemia and other blood disorders.
- Mean corpuscular hemoglobin concentration (MCHC): MCHC measures hemoglobin concentration in a given volume of red blood cells. An MCHC test can help diagnose and monitor anemia and other blood disorders.
- Platelet count: A platelet count measures the number of platelets in the blood. It can help diagnose and monitor bleeding disorders, clotting disorders, and bone marrow disorders. Levels in the lower part of the reference range are linked to reduced mortality risk.
- Fibrinogen: Fibrinogen is a protein produced in the liver involved in blood clotting. High fibrinogen levels in the blood can increase the risk of cardiovascular disease and stroke.
- D-dimer: D-dimer is a protein fragment produced when a blood clot is broken down. Elevated D-dimer levels in the blood can indicate a blood clot or thrombotic disorder.
- Prostate-specific antigen (PSA): PSA is a protein produced by the prostate gland in men. Elevated levels of PSA in the blood can be a sign of prostate cancer or other prostate-related conditions.
- Testosterone: Testosterone is a male sex hormone produced in the testes. Low testosterone levels can cause various symptoms in men, including fatigue, decreased libido, and muscle weakness. Read the comprehensive article on naturally elevating testosterone levels here.
- Estrogen: Estrogen is a female sex hormone produced in the ovaries. Low estrogen levels can cause various symptoms in women, including hot flashes, night sweats, and vaginal dryness. Learn more about estrogen and other female hormones in the Biohacking Women Online Course.
- Follicle-stimulating hormone (FSH): FSH is a hormone produced by the pituitary gland that stimulates the growth of ovarian follicles in women and the production of sperm in men. Elevated levels of FSH can be a sign of menopause in women or testicular failure in men.
- Luteinizing hormone (LH): LH is a hormone produced by the pituitary gland that stimulates ovulation in women and testosterone production in men. Elevated levels of LH can be a sign of menopause in women or testicular failure in men.
- Thyroid-stimulating hormone (TSH): TSH is a hormone produced by the pituitary gland that stimulates the thyroid gland to produce thyroid hormones. Elevated levels of TSH can be a sign of an underactive thyroid gland or hypothyroidism.
- Free triiodothyronine (fT3): fT3 is one of the two main thyroid hormones produced by the thyroid gland. Low levels of fT3 can be a sign of an underactive thyroid gland or hypothyroidism.
- Free thyroxine (fT4): fT4 is the other primary thyroid hormone produced by the thyroid gland. Low levels of fT4 can be a sign of an underactive thyroid gland or hypothyroidism.
- Thyroid peroxidase antibody (TPO): The antibody produced by the immune system can attack the thyroid gland and cause hypothyroidism. Elevated levels of TPO antibodies can be a sign of autoimmune thyroid disease.
- Adrenocorticotropic hormone (ACTH): ACTH is a hormone produced by the pituitary gland that stimulates the adrenal glands to produce cortisol, a steroid hormone. Elevated levels of ACTH can be a sign of adrenal insufficiency or Cushing’s syndrome.
- Cortisol: Cortisol is a steroid hormone produced by the adrenal glands in response to stress. It helps regulate the body’s response to stress and plays a role in blood sugar control, immune function, and inflammation. Abnormal levels of cortisol can be a sign of adrenal dysfunction or other health issues.
- Insulin-like growth factor 1 (IGF-1): IGF-1 is a hormone primarily produced by the liver in response to growth hormone. It is essential for normal growth and development; abnormal levels can be associated with growth disorders and other health issues. Low- and high-normal IGF-I levels are both related to insulin resistance.
- Dehydroepiandrosterone (DHEA): DHEA is a hormone produced by the adrenal glands and plays a role in producing sex hormones. Abnormal levels of DHEA can be associated with adrenal dysfunction and other health issues.
- Follicular phase estradiol: Estradiol is a type of estrogen hormone produced by the ovaries. During the follicular phase of the menstrual cycle, estradiol levels increase and play a role in preparing the body for ovulation. Abnormal estradiol levels can be associated with menstrual disorders and other health issues.
- Luteal phase progesterone: Progesterone is a hormone produced by the ovaries and is essential for preparing the uterus for pregnancy. During the luteal phase of the menstrual cycle, progesterone levels increase. Abnormal progesterone levels can be associated with menstrual disorders and other health issues.
- Cystatin C: Cystatin C is a protein produced by the cells in the body and is used to measure kidney function. Elevated levels of cystatin C can be a sign of reduced kidney function.
- Fasting insulin: Fasting insulin is a blood test that measures the amount of insulin in the blood after fasting. Insulin is a hormone produced by the pancreas that helps the body regulate blood sugar levels. High levels of fasting insulin can indicate insulin resistance or diabetes.
- Creatinine: Creatinine is a waste product generated by muscles during normal metabolism. It is filtered from the blood by the kidneys and excreted in the urine. A blood test that measures the level of creatinine in the blood can be used to evaluate kidney function. Elevated creatinine levels in the blood may indicate impaired kidney function or damage.
- Uric acid: Uric acid is a waste product produced when the body breaks down purines, which are found in many foods and in the body's cells. The kidneys excrete most uric acid, but if too much uric acid is produced or the kidneys are not working correctly, uric acid levels in the blood can rise. High uric acid levels in the blood can lead to gout, which causes joint pain and swelling. Elevated uric acid may also be an important and remediable risk factor for metabolic and cardiovascular diseases.
- Alanine aminotransferase (ALT): ALT is an enzyme found primarily in the liver. It is released into the bloodstream when liver cells are damaged, which can occur due to conditions such as hepatitis, alcohol abuse, or liver cancer. Elevated levels of ALT in the blood can indicate liver damage or disease. Moderate increases in ALT levels also occur with metabolic disorders such as hyperlipidemia, obesity, and type 2 diabetes.
- Aspartate aminotransferase (AST): AST is an enzyme found in many tissues in the body, including the liver, heart, and muscles. Like ALT, it is released into the bloodstream when cells are damaged. Elevated levels of AST can indicate damage to the liver, heart, or muscles.
- Gamma-glutamyl transferase (GGT): GGT is an enzyme found in the liver, pancreas, and other organs. It is involved in the metabolism of glutathione, an antioxidant that helps protect cells from damage. Elevated levels of GGT in the blood can indicate liver or bile duct disease and excessive alcohol consumption.
Most of these markers (95%) are covered in great detail in our most popular health-optimization learning platform, the Optimize Your Lab Results Online Course!
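Several of the markers above are linked by simple, widely used formulas, and a few derived values are often reported alongside them. The sketch below shows some of these standard calculations: estimated average glucose from HbA1C (the ADAG relationship), the Friedewald estimate of LDL cholesterol, transferrin saturation from serum iron and total iron-binding capacity (TIBC), and the HOMA-IR index of insulin resistance from fasting glucose and fasting insulin. Units matter (mg/dL and µU/mL are assumed where noted), the input values are illustrative only, and modern laboratories may report these values directly or use newer estimators.

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose in mg/dL from HbA1C (%), per the ADAG relationship."""
    return 28.7 * hba1c_percent - 46.7

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Estimated LDL cholesterol in mg/dL; unreliable when triglycerides >= 400 mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate is unreliable at TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5

def transferrin_saturation(serum_iron: float, tibc: float) -> float:
    """Transferrin saturation in %, from serum iron and TIBC (same units)."""
    return serum_iron / tibc * 100

def homa_ir(fasting_glucose_mgdl: float, fasting_insulin_uu_ml: float) -> float:
    """HOMA-IR index of insulin resistance (glucose in mg/dL, insulin in µU/mL)."""
    return fasting_glucose_mgdl * fasting_insulin_uu_ml / 405

# Illustrative values only
print(f"eAG: {estimated_average_glucose(5.6):.0f} mg/dL")
print(f"LDL (Friedewald): {friedewald_ldl(190, 55, 110):.0f} mg/dL")
print(f"Transferrin saturation: {transferrin_saturation(100, 330):.0f} %")
print(f"HOMA-IR: {homa_ir(92, 6.0):.2f}")
```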
The Organic Acids Test (OAT)
The Organic Acids Test (OAT) is a diagnostic tool that measures organic acid metabolites in urine. These metabolites are produced by the body as a result of various metabolic pathways and can provide information on nutrient deficiencies, energy production, and the health of the gut microbiome.
The OAT can detect and monitor various conditions, including nutrient deficiencies, inflammation, oxidative stress, mitochondrial dysfunction, and abnormalities in neurotransmitter metabolism. It can also identify the overgrowth of harmful bacteria or yeast in the gut and imbalances in the gut microbiome that can contribute to a range of health issues.
One of the main benefits of the OAT is that it can provide a comprehensive view of an individual's metabolic profile, including information on how their body is processing various nutrients and how well their mitochondria are functioning. The OAT can help healthcare practitioners tailor nutritional and supplemental interventions to an individual's unique needs by identifying nutrient deficiencies and imbalances. Additionally, by identifying imbalances in the gut microbiome, the OAT can help guide dietary and lifestyle interventions that can improve gut health and overall health outcomes.
We recommend taking the Metabolomix+ home test.
Metabolic Areas of Metabolomix+ Analysis:
- Organic acids
- Absorption disorders and dysbiosis
- Cellular energy and mitochondria
- Vitamin tracers
- Toxin and detoxification markers
- Tyrosine metabolism
- Amino acids
- Essential amino acids
- Non-essential amino acids
- Intermediates of metabolism
- Markers for dietary peptides
- Markers of oxidative stress
See a complete Metabolomix+ sample report here.
Amino acids (urine)
Amino acids contain four essential elements: carbon (C), hydrogen (H), oxygen (O), and nitrogen (N). There are twenty amino acids that are important for humans, of which nine are essential (must be obtained from dietary sources), and the remaining eleven are synthesized in the body. Thus, amino acids are classified into essential and non-essential amino acids. Some of the dispensable amino acids are still classified as conditionally essential or conditionally indispensable, i.e., they must be received from dietary sources, as their synthesized amount cannot fully meet the body's needs.
The body needs the proteins formed from amino acids to tackle several different tasks. They are as follows:
- Tissue growth and regeneration
- Repair of damaged tissue
- Food digestion (digestive enzymes)
- Enzymes and cofactors (they catalyze chemical reactions in the body)
- Structural components (in tissues and cell membranes)
- Acceleration and regulation of chemical processes (coenzymes etc.)
- Acting as biological transfer proteins (e.g., hemoglobin)
- Maintaining immune system function (antibodies and immunoglobulins)
- Mediators and signal carriers
- Acting as a hormone
- Nutrient storage (e.g., ferritin stores iron)
- Energy production
- Cell movement
Fatty acids (blood)
Fatty acids are chemical compounds consisting of carbon, hydrogen, and a carboxyl group, which also contains oxygen. Fatty acids are monocarboxylic acids that in nature almost always have an even number of carbon atoms. They form carbon chains of various lengths, which determine the class of fatty acid (short-chain fatty acids, medium-chain fatty acids, long-chain fatty acids, and very-long-chain fatty acids).
The body can synthesize short-chain fatty acids in the intestine with the help of intestinal bacteria. Medium-chain fatty acids are also found in nature (e.g., in coconut oil). The degree of saturation of a fatty acid depends on the number of double bonds between carbon atoms in the chain. Saturated fatty acids contain only single bonds, monounsaturated fatty acids have one double bond between carbon atoms, and polyunsaturated fatty acids have several. Hence, fatty acids can be either saturated, monounsaturated, or polyunsaturated.
Fatty acids affect cell signaling in the body and alter gene expression in fat and carbohydrate metabolism. Moreover, fatty acids may act as ligands for the peroxisome proliferator-activated receptors (PPARs), which play an essential role in the regulation of inflammation (e.g., eicosanoids), fat formation (adipogenesis), insulin sensitivity, and neurological functions, among others.
Fatty acids add-on for Metabolomix+
This add-on can be added to the Metabolomix+ test, and it covers Essential and Metabolic Fatty Acids with an easy at-home bloodspot finger prick.
Analytes covered in this add-on:
- Omega 3 Fatty Acids are essential for brain function and cardiovascular health and are anti-inflammatory
- Omega 6 Fatty Acids are involved in the balance of inflammation
- Omega 9 Fatty Acids are essential for brain growth, nerve cell myelin, and reducing inflammation
- Saturated Fatty Acids are involved in lipoprotein metabolism and adipose tissue inflammation
- Monounsaturated Fats include omega-7 fats and unhealthy trans fats
- Delta-6 Desaturase Activity assesses the efficiency of this enzyme to metabolize omega 6’s and omega 3’s
- Cardiovascular Risk includes specific ratios and the Omega-3 Index
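As a simple illustration of the ratios mentioned in the last item, the Omega-3 Index is conventionally calculated as EPA plus DHA expressed as a percentage of total red-blood-cell fatty acids, and the omega-6 to omega-3 ratio is the quotient of the two totals. The sketch below assumes results are already expressed as percentages of total fatty acids; the input values are illustrative, the frequently cited target of 8% or more is a general guideline rather than part of this test, and the report itself presents these values directly.

```python
def omega3_index(epa_pct: float, dha_pct: float) -> float:
    """Omega-3 Index: EPA + DHA as a percentage of total red-blood-cell fatty acids."""
    return epa_pct + dha_pct

def omega6_to_omega3_ratio(total_omega6_pct: float, total_omega3_pct: float) -> float:
    """Ratio of total omega-6 to total omega-3 fatty acids."""
    return total_omega6_pct / total_omega3_pct

# Illustrative values (percent of total fatty acids)
print(f"Omega-3 Index: {omega3_index(1.1, 4.3):.1f} %")        # >= 8 % is often cited as desirable
print(f"Omega-6:Omega-3 ratio: {omega6_to_omega3_ratio(32.0, 6.5):.1f}")
```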
Gut microbiome & microbiota – a key test for everyone
The terms microbiome and microbiota are sometimes used interchangeably, but they differ. The microbiome is the collection of genomes from all the microorganisms in an environment; for example, the human microbiome refers to the microorganisms around the body (including the skin, eyes, gut, and so on). Microbiota usually refers to the specific microorganisms found within a particular environment. In this case, microbiota (i.e., gut microbiota) refers to all microorganisms found in the gut, such as bacteria, viruses and fungi.
It is estimated that 500–1,000 distinct bacterial species live in the intestine. The most common bacterial genera in the intestine are Bacteroides, Clostridium, Fusobacterium and Bifidobacterium. Other well-known genera include Escherichia and Lactobacillus. Bifidobacterium and Lactobacillus strains are typically present in probiotic products because these are the most widely studied.
The functions of the bacteria in the intestines include breaking down (fermenting) carbohydrates that the body cannot otherwise digest. Gut bacteria also play a role in the availability and absorption of vitamin K, B vitamins and some minerals (magnesium, calcium, and iron), in the production of bile acids, and in immune system function. In addition, they act as a protective barrier against various pathogens.
The bacterial composition of the intestine changes quickly whenever dietary adjustments are made. Studies on mice have found that the microbiota may change overnight upon changing the diet. Similar changes also occur in humans, but the exact time span is unknown. Switching to a more intestine-friendly diet has brought positive results in the treatment of chronic inflammation, obesity, and gut permeability.
GI360 – The Lamborghini of Gut Tests
A personal treatment strategy is the future of medicine. It is based on data related to individual biochemistry and genetic inheritance. This test will help you gain objective information about yourself, create a more accurate treatment strategy, and implement changes that will lead to better health.
The GI360 x3 intestinal assay uses several screening methods (multiplex PCR, MALDI-TOF, and microscopy) to detect pathogens, viruses, parasites, and bacteria, which may manifest as acute or chronic gastrointestinal symptoms and diseases, or as other intestine-related symptoms.
Image: Sample report first page analysis of the GI 360 test.
Microbiome Abundance and Diversity
The GI360™ Profile is a gut microbiota DNA analysis tool that identifies and characterizes the abundance and diversity of more than 45 targeted analytes that peer-reviewed research has shown to contribute to dysbiosis and other chronic disease states.
The Dysbiosis Index (DI) is a calculation with scores from 1 to 5 based on the overall bacterial abundance and profile within the patient's sample compared to a reference population. Values above 2 indicate a microbiota profile that differs from the defined normobiotic reference population (i.e., dysbiosis). The higher the DI is above 2, the more the sample is considered to deviate from normobiosis.
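To make these two ideas concrete, the sketch below pairs a generic diversity measure (the Shannon index, a standard way of quantifying how evenly taxa are distributed; not necessarily the proprietary metric used in the GI360 report) with a simple interpretation of the Dysbiosis Index scale described above. The taxon counts and the DI value are illustrative only.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H = -sum(p_i * ln p_i) over taxon abundances.
    A generic diversity measure; commercial reports may use their own metrics."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

def interpret_dysbiosis_index(di: float) -> str:
    """Interpret a Dysbiosis Index on the 1-5 scale described above (>2 = dysbiosis)."""
    if di <= 2:
        return "within the normobiotic reference range"
    return "deviates from normobiosis; higher values mean greater deviation"

# Illustrative taxon counts and DI score
sample_counts = [1200, 800, 450, 300, 90, 40, 5]
print(f"Shannon diversity: {shannon_diversity(sample_counts):.2f}")
print(f"DI = 3 -> {interpret_dysbiosis_index(3)}")
```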
Among other things, this information can be used to consider and build an individualized treatment program.
The test is particularly suitable for use in the following intestinal diseases and chronic problems:
- Gastrointestinal symptoms
- Autoimmune diseases
- IBD / IBS
- Food hypersensitivity
- Nutritional deficiencies
- Joint pain
- Chronic or acute diarrhea
- Bloody stools
- Mucosal dysfunction
- Stomach ache
- Fever and vomiting
The extensive GI360 x3 gut analysis is currently the most accurate and comprehensive analysis of the total balance of the gastrointestinal system. Numerous functional medicine physicians around the world also use the test.
Genetic Testing (DNA) and Its Vast Possibilities
Knowing your genetic code is made possible by new DNA tests based on the latest science and technology. They can help make better choices in everyday life and find more effective ways to change lifestyles. At the same time, DNA tests help optimize health and achieve personal goals.
Genetic testing is a powerful tool that has revolutionized the field of healthcare. It allows individuals to gain insight into their genetic makeup and better understand their risk of developing certain diseases or conditions. By analyzing an individual's DNA, genetic testing can reveal information about genetic mutations, variations, and changes that can significantly impact an individual's health. With this information, individuals can make more informed decisions about their health, including lifestyle changes and preventive measures, to reduce their risk of developing certain conditions.
Furthermore, genetic testing can diagnose and treat various diseases, providing personalized and targeted treatments that can significantly improve patient outcomes. The importance of genetic testing in healthcare cannot be overstated, and as technology advances, it can potentially transform how we approach disease prevention and treatment.
Integral DNA: Combination Of Three DNA Tests (Resilience + Health + Active)
Precision nutrition, precision medicine, and nutrigenomics are all related concepts revolutionizing how we think about health and nutrition. At their core, these terms refer to using advanced technology and data to create personalized health plans. Understanding the individual's DNA and lifestyle can tailor these plans to meet a person's unique needs.
With Integral DNA, you'll get three powerful new genetic tests to help you make better life choices and more effective lifestyle changes. By knowing your genetic code, you can unlock the secrets of your body to optimize health and reach personal goals.
The test kit consists of three different genetic tests, giving you a comprehensive picture of your health. For the price of one genetic test, you now get three.
DNA Health® tests known genetic variants that significantly impact health and various risks of diseases such as osteoporosis, cancer, cardiovascular diseases and diabetes.
DNA Active analyzes genes that have been found to significantly affect the following areas: soft tissue injury risk, recovery, power generation potential, endurance potential, caffeine metabolism, salt sensitivity, and timing of peak performance.
DNA Resilience provides information on seven key molecular regions that impact stress and resilience the most. These include neuropeptide Y, oxytocin, neurotrophic factors, cortisol, norepinephrine, dopamine and serotonin.
Image: Example summary of the DNA Resilience test.
Learn more about the Integral DNA Test here.
Epigenetic Testing - The Future of Preventive Medicine?
Epigenetics studies how gene expression changes can occur without changes in the underlying DNA sequence. Various factors, including environmental exposures, lifestyle choices, and other external influences, can influence this.
In terms of human health, epigenetics is thought to play a role in various conditions, including cancer, cardiovascular disease, and neurological disorders. By better understanding the underlying mechanisms of epigenetic changes, researchers hope to develop new therapies and interventions that can prevent or treat these conditions.
Some factors that have been shown to influence epigenetic changes include diet, exercise, stress, and exposure to toxins and pollutants. Genetic factors can also play a role in determining an individual's susceptibility to epigenetic changes.
While much is still unknown about the complex interplay between genetics, epigenetics, and environmental factors, research in this field is advancing rapidly. It has the potential to revolutionize our understanding of human health and disease.
The epigenome is a dynamic system that plays a significant role in aging. DNA methylation and histone modifications change with chronological age and chronic diseases. Aging is associated with general hypomethylation and local hypermethylation. To appropriately analyze DNA methylation, various "epigenetic clocks" have been developed (such as the Horvath clock, Weidner Clock, and Hannum clock).
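Schematically, most epigenetic clocks are penalized linear regressions: a weighted sum of methylation beta values at selected CpG sites, mapped to years through a calibration function. The sketch below illustrates that structure only; the CpG identifiers, weights, and intercept are invented placeholders, not the published coefficients of the Horvath, Hannum, or any other real clock, and the piecewise log-linear back-transform is shown merely as an example of the kind of calibration such clocks use.

```python
import math

# Schematic epigenetic-clock predictor: a weighted sum of CpG methylation
# beta values (0..1) plus an intercept, mapped back to years.
# The CpG names, weights and intercept are illustrative placeholders only.
ILLUSTRATIVE_WEIGHTS = {"cg0000001": 1.8, "cg0000002": -2.4, "cg0000003": 0.9}
ILLUSTRATIVE_INTERCEPT = 0.3
ADULT_AGE = 20  # calibration constant of the kind used in Horvath-style transforms

def inverse_age_transform(score: float, adult_age: int = ADULT_AGE) -> float:
    """Map the linear score back to years with a piecewise log-linear calibration."""
    if score < 0:
        return (adult_age + 1) * math.exp(score) - 1
    return (adult_age + 1) * score + adult_age

def predict_epigenetic_age(betas: dict) -> float:
    score = ILLUSTRATIVE_INTERCEPT + sum(
        weight * betas.get(cpg, 0.0) for cpg, weight in ILLUSTRATIVE_WEIGHTS.items()
    )
    return inverse_age_transform(score)

sample_betas = {"cg0000001": 0.62, "cg0000002": 0.18, "cg0000003": 0.45}
print(f"Illustrative epigenetic age: {predict_epigenetic_age(sample_betas):.1f} years")
```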
Types of Epigenetic Modifications
Several different epigenetic modifications can be measured, each of which can provide important insights into an individual's health and disease risk. These include:
- DNA methylation: This is adding a methyl group to a specific location on the DNA molecule, which can alter how genes are expressed. Abnormal methylation patterns have been associated with various diseases, including cancer and cardiovascular disease.
- Histone modification: Histones are proteins that help to package DNA into a compact structure. Modification of histones can change the accessibility of genes, either promoting or inhibiting their expression.
- Non-coding RNA: Non-coding RNA molecules do not code for proteins but can regulate gene expression by interacting with other RNA molecules or proteins.
- Chromatin structure: The way that DNA is packaged into chromatin can also affect gene expression, and changes in chromatin structure have been linked to various diseases.
Aging is an extraordinarily complex and highly individual process that is not yet fully understood. Therefore, many biomarkers related to aging may only scratch the surface and capture the process from a single angle. Hence, a combination of wide-ranging routine laboratory tests, epigenetic tests, molecular biomarkers, and phenotypic markers may be the best way to build a comprehensive view of an individual's aging process.
Biohacker Center will provide the most cutting-edge epigenetic tests available in the future.
For now, we recommend taking the GlycanAge test, which is an at-home blood test that analyses glycans (sugars that coat cells) in the body to determine your biological age. They look at your IgG glycome composition (which regulates low-grade chronic inflammation and drives aging). GlycanAge technology goes beyond existing biological age tests by integrating genetic, epigenetic, and environmental aspects of aging.
With increasing emphasis on preventative health and a growing desire to personalize and validate health interventions, GlycanAge is a sensible way to track lifestyle changes, as it provides an independent measure of health.
Learn more about GlycanAge test here.
For a complete assessment of your overall health, utilizing the biomarkers mentioned in this article and relying on current scientific knowledge of human physiology is highly recommended. It is advisable to take all of these tests at least once and to repeat them 6-12 months after making lifestyle changes to evaluate their impact on your physiology, biochemistry, and epigenetics.
To obtain a holistic view of your health, we suggest undergoing a comprehensive blood biomarkers panel, organic acids test, amino acids (which are included in the organic acids test), fatty acids (as an add-on to the organic acids test), comprehensive microbiota test, integral DNA test and an epigenetic test. These tests are designed to give you a more accurate and in-depth understanding of your health. With the follow-up test after making lifestyle changes, you'll be able to monitor your progress and make more informed decisions about your health. | 1 | 4 |
Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution, which includes the kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name "GNU/Linux" to emphasize the importance of GNU software, causing some controversy.
Popular Linux distributions include Debian, Fedora Linux, and Ubuntu, the latter of which itself consists of many different distributions and modifications, including Lubuntu and Xubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose.
Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Because of the dominance of the Linux-based Android on smartphones, Linux, including Android, has the largest installed base of all general-purpose operating systems, as of May 2022. Although Linux is, as of November 2022, used by only around 2.6 percent of desktop computers, the Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K-12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top 1 million web servers' operating systems are Linux), leads other big iron systems such as mainframe computers, and is used on all of the world's 500 fastest supercomputers (since November 2017, having gradually displaced all competitors).
Linux also runs on embedded systems, i.e. devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, smart home devices, video game consoles, televisions (Samsung and LG Smart TVs), automobiles (Tesla, Audi, Mercedes-Benz, Hyundai and Toyota), and spacecraft (Falcon 9 rocket, Dragon crew capsule and the Perseverance rover).
Linux is one of the most prominent examples of free and open-source software collaboration. The source code may be used, modified and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as the GNU General Public License (GPL). The Linux kernel, for example, is licensed under the GPLv2, with an exception for system calls that allows code that calls the kernel via system calls not to be licensed under the GPL.
Table of contents
- Design
- Development
- Hardware support
- Uses
- Market share and uptake
- Copyright, trademark, and naming
- See also
Main article: History of Linux
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Labs, in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in the C programming language by Dennis Ritchie (with the exception of some hardware and I/O routines). The availability of a high-level language implementation of Unix made its porting to different computer platforms easier.
Due to an earlier antitrust case forbidding it from entering the computer business, AT&T licensed the operating system's source code as a trade secret to anyone who asked. As a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of its regional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as a proprietary product, where users were not legally allowed to modify it.
Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations didn't utilize commodity PC hardware like Linux was later developed for, it represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.
With Unix increasingly "locked in" as a proprietary product, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a command-line shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel, called GNU Hurd, were stalled and incomplete.
MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux.
Linus Torvalds has stated on separate occasions that if the GNU kernel or 386BSD had been available at the time (1991), he probably would not have created Linux.
While attending the University of Helsinki in the fall of 1990, Torvalds enrolled in a Unix course. The course utilized a MicroVAX minicomputer running Ultrix, and one of the required texts was Operating Systems: Design and Implementation by Andrew S. Tanenbaum. This textbook included a copy of Tanenbaum's MINIX operating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems. Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which eventually became the Linux kernel.
Torvalds began the development of the Linux kernel on MINIX and applications written for MINIX were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications also replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system.
Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project's makefiles included the name "Freax" for about half a year. Initially, Torvalds considered the name "Linux" but dismissed it as too egotistical.
To facilitate development, the files were uploaded to the FTP server (ftp.funet.fi) of FUNET in September 1991. Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology (HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux".
According to a newsgroup post by Torvalds, the word "Linux" should be pronounced with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code. However, in this recording, he pronounces Linux as /ˈlinʊks/ with a short but close front unrounded vowel, instead of a near-close near-front unrounded vowel as in his newsgroup post.
Commercial and popular uptake
Main article: Linux adoption
Adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such as NASA started to replace their increasingly expensive machines with clusters of inexpensive commodity computers running Linux. Commercial use began when Dell and IBM, followed by Hewlett-Packard, started offering Linux support to escape Microsoft's monopoly in the desktop operating system market.
Today, Linux systems are used throughout computing, from embedded systems to virtually all supercomputers, and have secured a place in server installations such as the popular LAMP application stack. Use of Linux distributions in home and enterprise desktops has been growing. Linux distributions have also become popular in the netbook market, with many devices shipping with customized Linux distributions installed, and Google releasing their own ChromeOS designed for netbooks.
Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system on smartphones and very popular on tablets and, more recently, on wearables. Linux gaming is also on the rise with Valve showing its support for Linux and rolling out SteamOS, its own gaming-oriented Linux distribution, and later the Steam Deck platform. Linux distributions have also gained popularity with various local and national governments, such as the federal government of Brazil.
Greg Kroah-Hartman is the lead maintainer for the Linux kernel and guides its development. William John Sullivan is the executive director of the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries.
Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software in the form of Linux distributions.
See also: Linux kernel - Architecture and features
Many open source developers agree that the Linux kernel was not designed but rather evolved through natural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations - and because the mutations were less than random, they were faster and more directed than alpha-particles in DNA." Eric S. Raymond considers Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers." Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security, cannot be evolved into, "this is not a biological system at the end of the day, it's a software system."

A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, access to the peripherals, and file systems. Device drivers are either integrated directly with the kernel, or added as modules that are loaded while the system is running.
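To make the loadable-module mechanism concrete, the following is a minimal, illustrative "hello world" kernel module written in C; it is a generic sketch rather than code from any real driver. Building it requires the kernel headers and a small kbuild makefile (not shown), after which the module can be loaded with insmod and removed with rmmod.

```c
/* hello_module.c - minimal loadable kernel module (illustrative sketch).
 * Build against the running kernel's headers via a kbuild makefile, then
 * load with `insmod hello_module.ko` and unload with `rmmod hello_module`.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a loadable kernel module");

static int __init hello_init(void)
{
        pr_info("hello_module: loaded\n");
        return 0;   /* 0 signals successful initialization */
}

static void __exit hello_exit(void)
{
        pr_info("hello_module: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```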
The GNU userland is a key part of most systems based on the Linux kernel, with Android being the notable exception. The GNU C library, an implementation of the C standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, the toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself), and the coreutils implement many basic Unix tools. The GNU Project also develops Bash, a popular CLI shell. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. More recently, the Linux community seeks to advance to Wayland as the new display server protocol in place of X11. Many other open-source software projects contribute to Linux systems.
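As a small, self-contained illustration of the kernel-userspace interface described above, the C sketch below writes to standard output twice: once through the glibc wrapper write(), and once through the generic syscall() interface using the same underlying system call. It is illustrative only and can be built with the GNU toolchain (for example, gcc syscall_demo.c -o syscall_demo).

```c
#define _GNU_SOURCE          /* needed for the syscall() declaration in glibc */
#include <unistd.h>          /* write(): glibc wrapper around the write system call */
#include <sys/syscall.h>     /* SYS_write: the raw system call number */
#include <string.h>

int main(void)
{
    const char *via_wrapper = "written via the glibc wrapper\n";
    /* glibc sets up the registers and traps into the kernel on our behalf */
    write(STDOUT_FILENO, via_wrapper, strlen(via_wrapper));

    const char *via_syscall = "written via a raw system call\n";
    /* the same kernel entry point, reached through the generic syscall() interface */
    syscall(SYS_write, STDOUT_FILENO, via_syscall, strlen(via_syscall));

    return 0;
}
```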
Installed components of a Linux system include the following:
- A bootloader, for example GNU GRUB, LILO, SYSLINUX or systemd-boot. This is a program that loads the Linux kernel into the computer's main memory, by being executed by the computer when it is turned on and after the firmware initialization is performed.
- An init program, such as the traditional sysvinit and the newer systemd, OpenRC and Upstart. This is the first process launched by the Linux kernel, and is at the root of the process tree: in other terms, all processes are launched through init. It starts processes such as system services and login prompts (whether graphical or in terminal mode).
- Software libraries, which contain code that can be used by running processes. On Linux systems using ELF-format executable files, the dynamic linker that manages the use of dynamic libraries is known as ld-linux.so. If the system is set up for the user to compile software themselves, header files will also be included to describe the programming interface of installed libraries. Besides the most commonly used software library on Linux systems, the GNU C Library (glibc), there are numerous other libraries, such as SDL and Mesa.
- C standard library is the library needed to run C programs on a computer system, with the GNU C Library being the standard. For embedded systems, alternatives such as the musl, EGLIBC (a glibc fork once used by Debian) and uClibc (which was designed for uClinux) have been developed, although the last two are no longer maintained. Android uses its own C library, Bionic.
- Basic Unix commands, with GNU coreutils being the standard implementation. Alternatives exist for embedded systems, such as the copyleft BusyBox, and the BSD-licensed Toybox.
- Widget toolkits are the libraries used to build graphical user interfaces (GUIs) for software applications. Numerous widget toolkits are available, including GTK and Clutter developed by the GNOME Project, Qt developed by the Qt Project and led by The Qt Company, and Enlightenment Foundation Libraries (EFL) developed primarily by the Enlightenment team.
- A package management system, such as dpkg and RPM. Alternatively packages can be compiled from binary or source tarballs.
- User interface programs such as command shells or windowing environments.
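Several items in the list above - the dynamic linker, shared libraries, and the C library - can be seen cooperating in one small program. The illustrative C sketch below asks the dynamic loader at run time to map the math library and look up one of its symbols; the library name libm.so.6 is an assumption that holds on typical glibc-based systems, and on older glibc versions the program is linked with -ldl (for example, gcc dlopen_demo.c -o dlopen_demo -ldl).

```c
/* dlopen_demo.c - run-time use of a shared library via the dynamic loader. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Ask the dynamic loader to map the math library into this process. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the address of cos() inside the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```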
The user interface, also known as the shell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available through terminal emulator windows or on a separate virtual console.
CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is the Bourne-Again Shell (bash), originally developed for the GNU Project. Most low-level Linux components, including various parts of the userland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simple inter-process communication.
On desktop systems, the most popular user interfaces are the GUI shells, packaged together with extensive desktop environments, such as KDE Plasma, GNOME, MATE, Cinnamon, LXDE, Pantheon and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X". It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network. Several X display servers exist, with the reference implementation, X.Org Server, being the most popular.
Server distributions might provide a command-line interface for developers and administrators, but provide a custom interface towards end-users, designed for the use-case of the system. This custom interface is accessed through a client that resides on another system, not necessarily Linux based.
Several types of window managers exist for X11, including tiling, dynamic, stacking and compositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. Simpler X window managers such as dwm, ratpoison, i3wm, or herbstluftwm provide a minimalist functionality, while more elaborate window managers such as FVWM, Enlightenment or Window Maker provide more features such as a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such as Mutter (GNOME), KWin (KDE) or Xfwm (xfce), although users may choose to use a different window manager if preferred.
Wayland is a display server protocol intended as a replacement for the X11 protocol; as of 2022, it has received relatively wide adoption. Unlike X11, Wayland does not need an external window manager and compositing manager. Therefore, a Wayland compositor takes the role of the display server, window manager and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has already been successfully ported since version 19.
Video input infrastructure
Main article: Video4Linux
Linux currently has two modern kernel-userspace APIs for handling video input devices: V4L2 API for video streams and radio, and DVB API for digital TV reception.
Due to the complexity and diversity of different devices, and due to the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. Also, a good userspace device library is key to the success of userspace applications being able to work with all the formats supported by those devices.
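As a concrete sketch of the V4L2 API mentioned above, the short C program below opens a capture device node and queries its driver and capability information with the VIDIOC_QUERYCAP ioctl. The device path /dev/video0 is an assumption (a typical first camera device on a desktop system), and error handling is kept minimal for brevity.

```c
/* v4l2_querycap.c - query a Video4Linux2 capture device (illustrative sketch). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }

    struct v4l2_capability cap;
    /* VIDIOC_QUERYCAP fills in driver, card and capability information. */
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }

    printf("driver: %s\n", (const char *)cap.driver);
    printf("card:   %s\n", (const char *)cap.card);
    printf("video capture supported: %s\n",
           (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) ? "yes" : "no");

    close(fd);
    return 0;
}
```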
Main articles: Linux distribution and Free software
The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used. Some free and open-source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft, and is used for the Linux kernel and many of the components from the GNU Project.
Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards. Linux systems adhere to POSIX, SUS, LSB, ISO, and ANSI standards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT.
Free software projects, although developed through collaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution.
Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such as apt, yum, zypper, pacman or portage to install, remove, and update all of a system's software from one central location.
See also: Free software community and Linux User Group
A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis, Debian being a well-known example. Others maintain a community version of their commercial distributions, as Red Hat does with Fedora, and SUSE does with openSUSE.
In many cities and regions, local associations known as Linux User Groups (LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects have IRC chatrooms or newsgroups. Online forums are another means for support, with notable examples being LinuxQuestions.org and the various distribution specific support and community forums, such as ones for Ubuntu, Fedora, and Gentoo. Linux distributions host mailing lists; commonly there will be a specific topic such as usage or development for a given list.
There are several technology websites with a Linux focus. Print magazines on Linux often bundle cover disks that carry software or even complete Linux distributions.
Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and of free software. An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified. Some of the major corporations that provide contributions include Intel, Samsung, Google, AMD, Oracle and Facebook. A number of corporations, notably Red Hat, Canonical and SUSE, have built a significant business around Linux distributions.
The free software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen as symbiotic. One common business model of commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks.
Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such as CP/M, Apple DOS and versions of the classic Mac OS prior to 7.6 freely copyable (but not modifiable). As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture.
Programming on Linux
Most programming languages support Linux either directly or through third-party community based ports. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU Build System. Amongst others, GCC provides compilers for Ada, C, C++, Go and Fortran. Many programming languages have a cross-platform reference implementation that supports Linux, for example PHP, Perl, Ruby, Python, Java, Go, Rust and Haskell. First released in 2003, the LLVM project provides an alternative cross-platform open-source compiler for many languages. Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and IBM XL C/C++ Compiler. BASIC in the form of Visual Basic is supported in such forms as Gambas, FreeBASIC, and XBasic, and in terms of terminal programming or QuickBASIC or Turbo BASIC programming in the form of QB64.
A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted at scripting, text processing and system configuration and management in general. Linux distributions support shell scripts, awk, sed and make. Many programs also have an embedded programming language to support configuring or programming themselves. For example, regular expressions are supported in programs like grep and locate, the traditional Unix MTA Sendmail contains its own Turing complete scripting system, and the advanced text editor GNU Emacs is built around a general purpose Lisp interpreter.
Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# (via Mono), Vala, and Scheme. Guile Scheme acts as an extension language targeting the GNU system utilities, seeking to make the conventionally small, static, compiled C programs of Unix design rapidly and dynamically extensible via an elegant, functional high-level scripting system; many GNU programs can be compiled with optional Guile bindings to this end. A number of Java virtual machines and development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects like Kaffe and Jikes RVM.
GNOME and KDE are popular desktop environments and provide a framework for developing applications. These projects are based on the GTK and Qt widget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There are a number of Integrated development environments available including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, and Qt Creator, while the long-established editors Vim, nano and Emacs remain popular.
See also: List of Linux-supported computer architectures
The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including ARM-based Android smartphones and the IBM Z mainframes. Specialized distributions and kernel forks exist for less mainstream architectures; for example, the ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the µClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a manufacturer-created operating system, such as Macintosh computers (with PowerPC, Intel, and Apple silicon processors), PDAs, video game consoles, portable music players, and mobile phones.
Linux has a reputation of supporting old hardware very well by maintaining standardized drivers for a long time. There are several industry associations and hardware conferences devoted to maintaining and improving support for diverse hardware under Linux, such as FreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible.
In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations.
Main article: Linux range of use
Market share and uptake
Main article: Linux adoption
See also: Usage share of operating systems
Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux. The Linux market is growing, and the Linux operating system market size is expected to grow at an annual rate of about 19.2%, reaching $15.64 billion by 2027, compared to $3.89 billion in 2019. Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom from vendor lock-in.
- Desktops and laptops
- According to web server statistics (that is, based on the numbers recorded from visits to websites by client devices), as of May 2022, the estimated market share of Linux on desktop computers is around 2.5%. In comparison, Microsoft Windows has a market share of around 75.5%, while macOS covers around 14.9%.
- Web servers
- W3Cook publishes stats that use the top 1,000,000 Alexa domains, which as of May 2015 estimate that 96.55% of web servers run Linux, 1.73% run Windows, and 1.72% run FreeBSD.
- W3Techs publishes stats that use the top 10,000,000 Alexa domains and the top 1,000,000 Tranco domains, updated monthly and as of November 2020 estimate that Linux is used by 39% of the web servers, versus 21.9% being used by Microsoft Windows. 40.1% used other types of Unix.
- IDC's Q1 2007 report indicated that Linux held 12.7% of the overall server market at that time; this estimate was based on the number of Linux servers sold by various companies, and did not include server hardware purchased separately that had Linux installed on it later.
- Mobile devices
- Android, which is based on the Linux kernel, has become the dominant operating system for smartphones. In July 2022, 71.9% of smartphones accessing the internet worldwide used Android. Android is also a popular operating system for tablets, being responsible for more than 60% of tablet sales as of 2013. According to web server statistics, as of October 2021 Android has a market share of about 71%, with iOS holding 28%, and the remaining 1% attributed to various niche platforms.
- Film production
- For years Linux has been the platform of choice in the film industry. The first major film produced on Linux servers was 1997's Titanic. Since then major studios including DreamWorks Animation, Pixar, Weta Digital, and Industrial Light & Magic have migrated to Linux. According to the Linux Movies Group, more than 95% of the servers and desktops at large animation and visual effects companies use Linux.
- Use in government
- Linux distributions have also gained popularity with various local and national governments. News of the Russian military creating its own Linux distribution has also surfaced, and has come to fruition as the G.H.ost Project. The Indian state of Kerala has gone to the extent of mandating that all state high schools run Linux on their computers. China uses Linux exclusively as the operating system for its Loongson processor family to achieve technology independence. In Spain, some regions have developed their own Linux distributions, which are widely used in education and official institutions, like gnuLinEx in Extremadura and Guadalinex in Andalusia. France and Germany have also taken steps toward the adoption of Linux. North Korea's Red Star OS, developed since 2002, is based on a version of Fedora Linux.
Copyright, trademark, and naming
See also: GNU/Linux naming controversy and SCO-Linux disputes
The Linux kernel is licensed under the GNU General Public License (GPL), version 2. The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms. Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use the GNU Lesser General Public License (LGPL), a more permissive variant of the GPL, and the X.Org implementation of the X Window System uses the MIT License.
Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3. He specifically dislikes some provisions in the new license which prohibit the use of the software in digital rights management. It would also be impractical to obtain permission from all the copyright holders, who number in the thousands.
A 2001 study of Red Hat Linux 7.1 found that this distribution contained 30 million source lines of code. Using the Constructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost about US$1.64 billion to develop in 2021 in the United States. Most of the source code (71%) was written in the C programming language, but many other languages were used, including C++, Lisp, assembly language, Perl, Python, Fortran, and various shell scripting languages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total.
In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007). This distribution contained close to 283 million source lines of code, and the study estimated that it would have required about seventy-three thousand person-years and cost US$9.16 billion (in 2021 dollars) to develop by conventional means.
In the United States, the name Linux is a trademark registered to Linus Torvalds. Initially, nobody registered it, but on August 15, 1994, William R. Della Croce, Jr. filed for the trademark Linux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled. The licensing of the trademark has since been handled by the Linux Mark Institute (LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks, but later changed this in favor of offering a free, perpetual worldwide sublicense.
The Free Software Foundation (FSF) prefers GNU/Linux as the name when referring to the operating system as a whole, because it considers Linux distributions to be variants of the GNU operating system initiated in 1983 by Richard Stallman, president of the FSF. They explicitly take no issue over the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it.
A minority of public figures and software projects other than Stallman and the FSF, notably Debian (which had been sponsored by the FSF up to 1996), also use GNU/Linux when referring to the operating system as a whole. Most media and common usage, however, refers to this family of operating systems simply as Linux, as do many large Linux distributions (for example, SUSE Linux and Red Hat Enterprise Linux). By contrast, Linux distributions containing only free software use "GNU/Linux" or simply "GNU", such as Trisquel GNU/Linux, Parabola GNU/Linux-libre, BLAG Linux and GNU, and gNewSense.
As of May 2011, about 8% to 13% of the lines of code of the Linux distribution Ubuntu (version "Natty") is made of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, 6% is taken by the Linux kernel, increased to 9% when including its direct dependencies.
- Comparison of Linux distributions
- Comparison of open-source and closed-source software
- Comparison of operating systems
- Comparison of X Window System desktop environments
- Criticism of Linux
- Linux kernel version history
- Linux Documentation Project
- Linux From Scratch
- Linux Software Map
- List of Linux distributions
- List of games released on Linux
- List of operating systems
- Loadable kernel module
- Usage share of operating systems
- Timeline of operating systems
On Single Board Computers: If you have never seen a wave in your engineering life, then take time to observe how Single Board Computers are taking over markets across society. They have become trendy nowadays. If you don't interact with them in the latest automobile you purchase, then you will meet them embedded in a washing machine, a security system, or even in a hospital. We can't avoid them; we'll have to embrace them.
As PCB manufacturers, we can no longer afford to shy away from Single Board Computers; it is time we embrace them. This edition aims to explore this interesting, growing field of SBCs and how they could help your projects or daily life. Because we at WellPCB have come of age in PCB assembly, we are willing to extend an olive branch by offering you both advice and a reliable source from which you can request a quotation. Stick around; it won't take long; I promise.
A Single Board Computer At A Glance
1.1: What is a Single Board Computer (SBC)?
A Single Board Computer is a fully functional computer (meaning it has input and output, a memory, and a processor) built on a single PCB (Printed Circuit Board). Unlike desktops and ordinary Personal Computers (PCs), SBCs rely on simple architectures that do not give room for expanded peripheral functions using expansion slots. Most SBCs are powered by the ARM (Advanced RISC Machine) processor architecture, which offers lower processing speeds but significantly lower power consumption. Then again, some modern SBCs conform to the x86-Intel processor architecture, as we shall explore in the ensuing chapters.
1.2: What Are Their Uses?
As outlined before, SBCs can be utilized to perform any task that an ordinary computer can execute. At their inception (around May 1976), SBCs were initially developed to aid learning. But with improvements in technology, they evolved to run various applications like ordinary computers. Intelligent systems like those that run intelligent cars, innovative security systems, and automated devices, among several other such applications, rely on SBCs. Besides, they are still valuable for learning, prototyping, and experimentation among learners and hobbyists.
1.3: Why Are They Preferable Over Ordinary Computers?
It is critical to note that SBCs generally have inferior capabilities compared to their PC counterparts. Then again, they have kept increasing their market demand and their processing capability over the years. Because of their varied uses, one can predict that SBCs might overtake PCs in popularity in the future. Now, various reasons make Single Board Computers preferable to traditional PCs. Some of the notable reasons include:
They are portable. Top of the list of their advantages over PCs is their tiny size. Raspberry Pi, one of the world's most popular SBCs, is small enough to fit in the palm of a hand. Thanks to their small size, they are portable, and they can be programmed and then embedded in various automated/intelligent systems.
They are cheap: Most SBCs are very cheap to acquire. When compared to conventional computers, Single Board Computers are significantly more affordable to develop than PCs. Usually, with a reduction in the buying price, there is always a tendency for the demand for such a product to soar. Similarly, inventions on the use of such products increase. SBCs are now readily available and used by various electrical engineers and hobbyists for prototyping. At the moment, you could even order some customized SBC boards from us at an incredible bargain.
They consume less power than PCs. When evaluating SBCs against conventional computers on power consumption, you will undoubtedly find a very significant reduction. An average SBC consumes just eight watts of power, instead of the 400 watts consumed by a traditional computer even when it is running in a "power saving" mode. In this respect, SBCs are preferable to conventional computers. Possibly, this low power consumption feature makes them ideal to embed in other computerized systems.
1.4: The SBC Wave Is Already Sweeping Off PCs
If numbers do not lie, then the best time to have embraced SBCs was in 2013; and the second-best time to embrace them is now. The two graphs above show a contrast between how people purchased PCs vis-à-vis carrying out Do-It-Yourself (DIY) projects using SBCs within the same period. While the market demand for PCs is steadily declining, the need for SBCs to carry out experiments is soaring daily.
A simple analysis of the trend on people’s buying habits does depict that SBCs are not just the wave of the future; SBCs are the current wave that will become a tornado in the future. We cannot afford to be swept; it is time we embraced the wave and swam along. Unfortunately, most engineers still do not see it coming.
Perhaps the main reason SBCs are on the rise and are still going to rise higher is that SBCs allow people the freedom to program computers. People obsessed with DIY wish to invent and program their computers to suit them better in their specific environments. SBCs attain this freedom goal by being cheaper, portable, and programmable. SBCs also have lower power consumption.
In the following chapters, we will now consider in detail the different processors and architectures of Single Board Computers and a comparison of the best SBCs available at the moment.
Processor Architectures Of Single Board Computers
ARM, Intel, and Freescale Power Architecture are the three main processor designs that power Single Board Computers. As I had hinted before, most SBCs run on ARM processors. A significant share of the remainder is powered by Intel, and a few by the Power Architecture. Let us begin by examining the ARM processor architecture.
2.1: The ARM Processor Architecture For Single Board Computers
In essence, the ARM is a CPU architecture that falls under the family of RISC (Reduced Instruction Set Computers) developed by Advanced RISC Machines (ARM). Acorn Computers developed the design in the 1980s.
ARM, like Intel, produces both 32-bit and 64-bit processor designs. However, ARM processors are designed to achieve high throughput, measured in millions of instructions per second (MIPS). The processors achieve this by getting rid of additional instructions and optimizing processing paths, thus delivering better performance at lower power consumption.

ARM processors tend to have a lower throughput than the leading x86 Intel processors; however, they have more applications than x86 processors due to their efficient power consumption.
In the current market, ARM processor architecture is the most popularly used processor architecture for Single Board Computers due to their lower power consumption. These processors are famous for open-source operating systems like Linux and Android. They run on several defense, aerospace, and consumer electronics products, such as smartphones, multimedia players, and fitness wearable devices, among other devices that utilize SBCs.
Most of the leading SBCs on the market, like the Raspberry Pi, Banana Pi, Orange Pi, and BeagleBoard, among several others, use these processors.
2.2 Single Board Computer—Intel Processors
Intel leads in manufacturing processors for Personal Computers. In recent years, the tech giant has made progress in manufacturing high-speed processors for SBCs.
The fourth-generation Intel Core and X86 processors have recorded a higher throughput in processing graphics and processor instructions than ARM processors. Unlike ARM SBCs that hold many simple processors that share the workload (known as “scaling out”), Intel SBCs are designed to hold few high-capacity processors (at times called “scaling-up”) that can handle complex tasks with ease.
Because of their high-capacity processing, Intel processors are utilized in fields that demand high accuracy and faster processing speeds like automation markets, medicine, defense, transportation, and aerospace (like in crewless vehicles).
Despite their high throughput and processing capability, Intel processors are still unpopular in the SBC market owing to their high pricing and higher power consumption when compared to ARM processors. In addition, most Intel processors have a drawback of overheating. Thus, for projects that require certification for use in hot environments, it is advisable to consider designing a cooling system or even use a different processor alternative to Intel.
Some of the leading Single Board Computers powered by Intel include those produced by Intel itself, such as the Intel Galileo generations, and those by UDOO, such as the UDOO Dual Basic, UDOO Quad, and UDOO X86 Advanced.
2.3: Freescale Power Architecture
Freescale Company produces the Power Architecture processors. They follow the RISC processor architecture like ARM processors. They perform like ordinary X86 processors. Their design aims at minimizing throttling.
While Intel processors face the challenge of throttling due to overheating under larger workloads, Power Architecture processors easily sail through processing without consuming too much power or overheating. Then again, these processors are rarely used among SBCs. At the moment, Power processors run on SBCs like the MIPS Creator CI20 Single Board Computer.
2.4: Single Board Computer—Chapter Conclusion
So far, we have given an overview of Single Board Computers, their uses, and their relevance in the future of computing. We have also taken time to study the three main architectures of SBCs based on their processors. In the next chapter, we will begin comparing specific SBCs and SBC brands to determine which are likely to perform better than the others regarding speed, throughput, and cost of buying.
Classification Of The Best Single Board Computers
If you wanted to run an IoT project, which board could you choose among the thousands on sale now?
Honestly, owing to the high number of SBC manufacturers and designers, you might sometimes find it hard to choose a board from the long list of boards. Every manufacturer seems to present superior designs or features. In such cases, you might need to invest more in information before you can place an order, i.e., pay more attention rather than more money. And that's why we invested some time to analyze various SBCs and help you choose the best among the best brands of Single Board Computers on the market.

3.1: Top 3 Single Board Computers of All Time
Rank 1: Raspberry Pi 3 Model B+
Raspberry Pi is the undoubted “big boy” in the industry of Small Board Computers. The company is reputable for releasing highly reliable boards with unwavering community support of their board releases. Its latest board release: The Raspberry Pi 3 Model B+, improves the famous Model 3B. It boasts of the following features:
- Processor: 1.4GHz quad-core ARM Cortex-A53 processor that supports a 64-bit OS, with dual-band 2.4GHz/5GHz wireless networking.
- USB: 4x USB 2.0
- Network: Gigabit Ethernet (over USB with a maximum speed of 300mbps), Bluetooth 4.2/BLE
- Other ports: DSI display port and a CSI camera port.
It has a hit on:
- Excellent community support: Raspberry boasts a great online community that always responds to your challenges. Because it is more popular and uses open-source programs, there is hardly a challenge you will encounter with Raspberry that has not already been experienced by other developers. Most of the problems you will face can be solved by a simple online search.
- It has faster processing capabilities.
- It is cheap to acquire.
- It has a fast Ethernet speed.
It has a miss on:
- Poor Bluetooth connectivity: Even though it is new to the market, the board has already faced reports of unreliable Bluetooth connectivity.
What I think About Raspberry Pi 3 Model B+
Raspberry was and remains the king of Single Board Computers. Its latest model is affordable and should be helpful in various experiments. Judging from the model's features and strengths, the Model B+ is recommended for projects requiring fast internet connectivity, like projects involving GSM or remote monitoring/control.
Rank 2: Udoo X86 Ultra SBC
This little “boy” spits both class and unimaginable power. Udoo is one of the few SBCs that Intel processors power. Its buying price is a lot higher than even PCs, and its processing speed is fast enough to outdo most PCs as well. Honestly, it could be hard to summarize all its features, but here are some of the most notable features:
- Processor: Intel Pentium N3710 2.56 GHz Quad-Core
- RAM: 8GB
- ROM: embedded SD-card slot that supports up to 32GB
- USB: 3 X USB 3.0
- Graphics driver: Intel HD Graphics 405
- Power: 12V, 3A.
It has a hit on:
- It has excellent processing capabilities: with a 2.56 GHz Intel processor and an 8GB RAM, Udoo ranks high as a champion of its processing speed. The SBC can run common software applications like a standard PC without facing any processing hurdles.
- It offers excellent graphics and multimedia capabilities
- It is compatible with Arduino: Udoo is compatible with the good old Arduino micro-controller board for IoT experimentation. This ability makes it ideal, as it does not require transitioning to use it in projects or experiments.
- It has a larger storage capacity than an ordinary SBC.
- Udoo has good community support to back up its products.
It has a miss on:
- It is expensive: this SBC is more expensive than many SBCs. Thus, you might not wish to buy it for your experimentations.
- It consumes more power than an ordinary SBC. Even though its power consumption is lower than that of a PC, it is notable that it still consumes more electricity than most SBCs. Thus, it is unsuited to projects that require very little electricity to run.
What I think about Udoo X86 SBC
Udoo is still a growing community. This SBC is on record for packing 8GB of RAM and a 2.56 GHz Intel processor onto a single board. Because of its superb features and high buying price, I would recommend it to industrial users instead of hobbyists and learners.
Rank 3: Qualcomm Dragon Board
Qualcomm Dragon is not a very popular SBC. It is compact, supportive of various platforms, and more powerful than ordinary SBC alternatives. Then again, it has good features to add to your project development menu. It has the following features:
- Processor: Quad-core ARM processor with speeds of up to 1.2GHz (the 2.4GHz figure refers to the wireless network band). Supports both 32-bit and 64-bit OS alternatives
- RAM: 1GB
- USB: 2 x USB 2.0 or a Micro USB
- Network: On-Board Wi-Fi 802.11
- Other ports: HDMI display port.
Qualcomm Dragon has a hit on:
- Fast processing speed
- Compatible with more software applications and operating systems including Linux, Windows, and Macintosh
- It has fast network connectivity.
It has a Miss on:
- A high cost of buying: Even though it has got features that take after the Raspberry Pi 3 model B, it has a significantly higher cost of acquisition than Raspberry.
- It is not very popular. Because it is not popular in the market, it lacks a supportive community to back it up. Thus, it is not ideal for learners.
What I think about Qualcomm Dragon Board
Qualcomm is an excellent board for projects that require SBCs. It is ideal for industrialists as it is stable and supports other ordinary computer applications.
3.2: The Cheapest Single Board Computers
When you are running on a tight budget but would still love to have your hands on an SBC, there are a couple of cheap SBC options you should consider checking out. Here are some of them:
Pine A64 SBC: One of the rarest yet most capable SBCs I have personally seen on the market for about $15 is the Pine A64. Its design takes much after the popular Arduino Uno.

If you seek to have an IoT project run at impressive rates, but at a lower cost, you should consider the Pine A64. It supports up to 2GB RAM and runs on an ARM Cortex-A53 quad-core processor with a speed of 1.2GHz that can support 64-bit applications. It has excellent network speeds of about 1000 Mbps. The only bone of contention with the Pine A64 is its limited online support if you develop issues with the board.
Raspberry Pi Zero: The Raspberry Pi Zero costs just about five dollars. It has 512MB RAM and a processor speed of 1GHz single-core. It has 40 GPIO pins. Pi Zero is ideal for learners, hobbyists, and even engineers when carrying out simple experiments. The Pi Zero is recommended for its easy availability and massive community support.
Nano Pi NEO: Nano Pi NEO is extremely small in size and has limited features. It has 256/512 MB RAM and a 1GHz processor speed. It supports Ethernet, Micro-SD, and VGA. Nano costs about seven dollars and has approximately 40 GPIO pins. However, it has significantly slow processing capability and little community support to back it up.
3.3 Single Board Computer—The Most Powerful Single Board Computer
When you get tired of playing around with Arduino boards and Raspberry Pi microcontrollers, and you would love to go to the world of the limitless, you will need to check out more powerful, faster, and (often) more expensive options. One such Single Board Computer I know of is the Intel NUC Kit (Intel NUC NUC7i3BNH). Even in a business environment, this device is powerful enough to rival some computer servers (I assume it is more powerful than an ordinary computer).
The Intel NUC NUC7i3BNH Single Board Computer Kit
- Processor: it runs on Intel’s 7th generation processor (Intel Core i3-7100U).
- RAM: It has two DDR4 SO-DIMM Sockets (support of up to 32GB at 2.133 GHz)
- USB: it has 4 X 3.0 USB and 2 X 2.0 USB.
- GPU: It uses Intel HD Graphics 620.
- It has both SATA and HDMI ports
So far, the NUC kit is the only board that has dared to offer a Micro-SD reading speed of up to 3GB per second and a seventh-generation processor onto a single panel. However, it is limited because it is a kit, and it is open to vendor part adjustments that can either increment its performance or lower it.
It is an unpopular SBC to most engineers owing to its expensive buying cost. Nevertheless, if you are interested in real-life applications that require high speeds and more storage space, then you could offer the Intel NUC NUC7i3BNH a try.
Single Board Computers have become both accessible and valuable over time. As we have already explored, there are several instances where you could incorporate SBCs into your projects. We have also studied some of the most excellent Single Board Computers that the current market offers.

Now, perhaps after going through it, you might have a few queries that require answers; please feel free to reach out to us through our official WellPCB website. If you want us to get you a new Single Board Computer PCB printout, please feel free to reach out and request an online quotation at an incredible bargain.
Paul A. Byrne, M.D.
We are continually bombarded with information about the COVID-19 pandemic and encouraged to receive the COVID-19 vaccines. Many are making the decision to consent or refuse these with little information. Making an informed decision about receiving a vaccine requires: 1) full and complete information regarding safety and side effects, 2) explanations of efficacy and effectiveness, 3) ingredients including any adjuvants, preservatives, and genetic material, and 4) vaccine companies' lack of liability for damages. The use of human fetal cell lines should be made known to all recipients and those who administer the vaccines. This includes development, production or any testing of the vaccines, no matter how remote the cooperation.
COVID-19 PANDEMIC VIRUS
The cause of the present pandemic is a virus, first labeled as the Wuhan virus or the China virus. The official name of the virus is Severe Acute Respiratory Syndrome Coronavirus 2, abbreviated as SARS-CoV-2. The name of the disease it causes is coronavirus disease 2019, abbreviated as COVID-19. In COVID-19, 'CO' stands for 'corona,' 'VI' for 'virus,' and 'D' for disease. There are many types of human coronaviruses including some that commonly cause mild upper-respiratory tract illnesses.
www.cdc.gov/coronavirus/2019-ncov/faq (Accessed 1-23-21)
A pandemic is a contagious illness that spreads widely. In this pandemic, people in 188 countries have the virus. A virus is a microorganism that cannot reproduce outside of a cell. It can be seen only through a microscope that magnifies by 500,000 times. A virus can cause disease in human persons by spreading from one human being to another human being.
SARS-CoV-2 is a respiratory virus that can have no symptoms, or a range of symptoms from minimal to severe. These may include runny nose, sneezing, sore throat, cough, fever, headache, muscle aches, fatigue, abdominal pain, vomiting, diarrhea, or eye irritation. More severe respiratory illness can include shortness of breath. Also, losses of smell and taste have been reported.
When a virus enters the body, at first, it is not associated with any symptoms. This is known as the incubation period. For COVID-19, the median duration of this incubation period is about 7 days but can be as short as 5 days and usually not longer than 12 days. Near the end of the incubation period and just before symptoms make their appearance, the patient might be contagious. If so, the virus could spread from one person to another even before any symptoms. The patient continues to be contagious throughout the course of the illness.
COVID-19 may spread through the air as droplets and aerosols that come out of the nose and mouth. This is the rationale to wear a mask, keep distance between persons, and wash hands. Droplets are larger particles that may be visible and tend to fall to the ground or are deposited on another’s nose, mouth, and/or eyes or onto objects that the person touches, thereby inoculating themselves if they touch their nose, eyes or mouth. Aerosols are smaller particles that can float in the air and are not usually visible but may be inhaled. The warm exhaled breath that is seen in cold temperatures is an example of an aerosol.
To cause disease, the virus must get out of the infected person, then spread into the other person. The COVID-19 virus can enter only through the nose, mouth, or eyes. Remember, to cause disease, the virus must get from one person with the virus to another, then enter via the nose, mouth, or eyes of the next person. Keeping the environment clean and washing hands is important in stopping this kind of spread. Do not suck your fingers; do not pick your nose; wash your hands before you eat, and before and after touching your nose and mouth or eyes if you must touch them. Cover coughs and sneezes with a tissue or into one’s sleeve and then discard tissue and wash hands.
The virus itself does not cause the symptoms. It is the response of the body trying to get rid of the invading virus that results in the symptoms. A runny nose can be thought of as washing away the virus; a cough can be thought of as expelling it away. Fever is a body response that can be measured easily and accurately without bias from the patient or examiner.
COVID-19 is associated with an inflammatory reaction of increased congestion and swelling of the affected tissues with damage to the lining of blood vessels and tendency for blood clots to occur. A lesser amount of an inflammatory response may protect against the virus. However, sometimes the inflammatory response is exaggerated and overwhelming. This has been observed to be especially significant in the lungs of patients with COVID-19. In addition, blood clots may form and cause blockages that can be life threatening if in the lungs, heart, or brain.
This exaggerated and overwhelming response has been labeled a hyper-inflammatory response. The lungs swell; air passages get obstructed. Thus, the ventilator cannot effectively expand the lungs as a ventilator does for other patients. Ventilators have been used successfully in many patients with COVID-19, but for others, the ventilator has not been as helpful as desired, possibly because the exaggerated inflammatory response does not allow expansion of the lungs.
Most patients with COVID-19 recover without treatment. Others may have a more prolonged course of chronic illness. Some require more intensive treatments, especially if there would be the new onset of shortness of breath and difficult breathing. In these patients, early treatment is important and vitamins and medications may need to be given intravenously.
Although persons with other medical conditions, such as, diabetes, obesity, hypertension and/or heart disease, are at higher risk of death, with timely treatment survival is possible, even among older aged persons. This is a serious disease that is associated with mortality that increases with age and co-morbidities (other disease conditions).
THE NOVEL COVID mRNA VACCINES
Pfizer-BioNTech and Moderna vaccines are not FDA approved but have been released under Emergency Use Authorization (EUA). These are novel vaccines that use messenger ribonucleic acid (mRNA), which is a molecular portion of the virus’ total genetic information. The clinical trials had followed recipients for 2 months after 2 doses. Long-term side effects are unknown. Neither mRNA nor the lipid nanoparticles have been tested in humans.
Vaccines commonly use a weakened or killed virus or part of the virus toxin to inject. This triggers the person’s immune system to make antibodies that would recognize and neutralize an infecting virus. A mRNA vaccine works differently because laboratory-made genetic material coding for a part of the virus (spike protein) is injected. It first relies on the recipient’s cells to read this genetic code and make more of the foreign protein molecule for the spike protein. Then it relies on the immune system to make antibodies to this part of the virus. These antibodies are presumed to inactivate the foreign virus and not attack the person’s own cells.
DNA and RNA
DNA and RNA are molecules of genetic information that make a person biochemically unique. DNA is composed of double-stranded nucleic acids; RNA is composed of only one strand of nucleic acids. The genetic information in the RNA is read by structures in the cell called ribosomes, resulting in the production of proteins needed by the cell.
A virus has genetic information as DNA or RNA but needs host cells to “read” this information to replicate (i.e., make more of itself). Sars-CoV-2 virus has mRNA that codes for its spike protein. It enables the virus to enter the human cells. The mRNA vaccines work by getting the person’s own cells to read this foreign mRNA and then to have the person’s cells make more copies of the foreign spike protein. This will allegedly trigger an immune response in the person to make antibodies and T cells (killer white blood cells) that will target spike proteins and thus prevent viral entry into the human cells.
The Pfizer and Moderna vaccines use genetically engineered foreign mRNA sequences that code for the viral spike protein. They rely on ribosomes, which (as noted) are structures in the cell that read the mRNA code and make protein molecules. In the case of these mRNA Covid vaccines, the ribosomes make the foreign spike protein which would not naturally occur.
There are ribosomes in several places inside the cell. Ribosomes are both inside and outside the cell nucleus, and produced in the mitochondria (the energy producing structures of the cell).
Many do not understand that it is incorrect to say that the human genome is made up only of DNA in the cell nucleus. The human genome includes both the DNA in the nucleus and the DNA in the mitochondria outside the nucleus. DNA are the blueprints for making protein molecules. mRNA is akin to work orders for the ribosomes. Ribosomes are akin to machines that fabricate the final protein products.
It is in mitochondrial produced ribosomes in the cytoplasm where the vaccine-delivered foreign messenger-RNA makes the foreign spike protein. According to Biochemist/Biologist Dianne Irving, Ph.D. foreign m-RNA could also be read by ribosomes located outside and inside the cell nucleus and the foreign protein could also thus enter the cell nucleus.
Dr. Irving explains that while it may be correct to say that the foreign mRNA does not change the DNA structure inside the nucleus, or the DNA structure of the mitochondrial DNA, it does change the functioning of the mitochondrial-bound ribosomes and thus the functioning of the mitochondrial DNA. Mitochondrial DNA is part of the human genome. Therefore, foreign messenger-RNA causes a change in function of mitochondrial-bound ribosomes, and thus in mitochondrial DNA function. This change in function of the mitochondrial DNA is to produce a foreign protein that it would never make naturally. This change in function of the mitochondrial ribosomes can affect all cells in the body including those that can be passed down through the egg (oocyte) and sperm to next generations. https://en.wikipedia.org/wiki/Human_genome (Accessed 1-28-21) https://www.ncbi.nlm.nih.gov/books/NBK21134/ (Accessed 1-28-21)
RISKS and BENEFITS
The Pfizer-BioNTech COVID-19 and Moderna vaccines are made from messenger RNA, tiny lipid nanoparticles, and polyethylene glycol (PEG). PEG is a stabilizing component of the vaccine. Anaphylaxis is rare but some people are allergic to PEG and could have a life-threatening reaction needing immediate life-support treatment. Patients who are allergic to PEG should not get the vaccine. Since a life-threatening reaction, even though rare, may occur without warning, receiving the vaccine in a health care facility with life-support available is desirable, as is being observed for 15-30 minutes after the injection.
In 2016 scientists from the University of North Carolina at Chapel hill advised the potential importance of screening patients for PEG antibodies before receiving therapeutics with PEG, “The widespread prevalence of pre-existing anti-PEG Ab [antibodies], coupled with high Ab [antibody] levels in a subset of the population, underscores the potential importance of screening patients for anti-PEG Ab [antibody] levels prior to administration of therapeutics containing PEG.” https://pubs.acs.org/doi/full/10.1021/acs.analchem.6b03437 (Accessed 1-23-21)
Vaccination reactions can occur without prior history. The US CDC reported that of the six persons with “severe allergic reaction” out of >250,000 vaccinations, only one had a “history of vaccination reactions.” The American College of Allergy states, “The Pfizer-BioNTech COVID-19 vaccine should be administered in a health care setting where anaphylaxis can be treated." https://college.acaai.org/acaai-guidance-on-risk-of-allergic-reactions-to-pfizer-biontech-COVID-19-vaccine/ (Accessed 1-20-21)
Norway reported 23 deaths after vaccinations; thirteen have been investigated; these were old and frail persons. Subsequently, Norway recommends, “If you are very frail, you should probably not be vaccinated.”
https://norwaytoday.info/news/norwegian-medicines-agency-links-13-deaths-to-vaccine-side-effects-those-who-died-were-frail-and-old/ (Accessed 1-20-21)
The public may think that the vaccine will prevent spread of the virus, but public health experts have repeatedly said that is unknown. Therefore, the public is still expected to use masks, social distance, and limit contacts. Pfizer did tests on monkeys and found that vaccinated animals still got Covid although the duration of infection was shorter.
The clinical vaccine trials did not test for Sars-CoV-2 in all participants so it cannot answer the question of whether the vaccine actually reduces infection or transmission of the virus. The trials only tested for presence of the virus if the test subject became symptomatic. 170 of the total 41,135 (0.41%) subjects given two doses of the vaccine or placebo became symptomatic. Of these 170 subjects, 162 were in the unvaccinated group and 8 were in the vaccinated group. From these small numbers of symptomatic test subjects the 90-95% efficacy claims were calculated. https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-conclude-phase-3-study-covid-19-vaccine (Accessed 1-31-21)
The small number of test subjects in both groups who became symptomatic and then tested positive for Covid (170/41,135 x 100 = 0.4%) could also reflect decreased exposure, transmission, and/or virulence of the virus.
The CDC posts deaths and infection fatality ratios by age but has not released calculations for age-specific “survival rates” or “mortality rates.”
America’s Frontline Doctors White Paper On Experimental Vaccines For COVID-19, using CDC data on infection fatality ratios, total cases and total deaths, calculated and estimated risks of Covid deaths according to age. They state, “When talking about the risk/benefit ratio of any treatment we must consider the Infection Fatality Ratio or IFR. The IFR for COVID-19 varies dramatically by age, from a low of 0.003% for Americans under age 19 to as high as 5.4% for those 70 years of age and above.19 That is an 1800x risk difference based upon age! It is quite clear that young people are at a statistically insignificant risk of death from COVID-19. Nearly 80% of all coronavirus-related deaths in the US through November 28, 2020 have occurred in adults 65 years of age and older and only 6% of the deaths had COVID-19 as the only cause mentioned. On average, there were 2.6 additional conditions or causes per death.20”
The CDC Infection Fatality Ratios by age according to CDC’s best estimate considering disease severity, virus transmissibility, and transmission are:
0-19 years: 0.00003; 20-49 years: 0.0002; 50-69 years: 0.005; 70+ years: 0.054.
The Frontline doctors calculated survival rates from this data to be:
<20 years: 99.9%; 20-49 years: 99.8%; 50-69 years: 99.5%; 70+ years: 95%.
As of Jan 23, 2021, the CDC reported that most of the deaths from Covid were in inpatient facilities (229,265 out of 359,265, or 64%). Another 79,134, or 22%, of the 359,265 Covid deaths were in nursing homes or long term care facilities.
https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm#PlaceDeath (Accessed 1-13-2021)
“CDC data also show that Americans, regardless of age group, are far more likely to die of something other than COVID-19. Even among those in the most heavily impacted age group (85 and older), only 10.8 percent of all deaths since February 2020 were due to COVID-19.”
The FDA admits to a long list of possible negative side-effects. The list has 22 separate entries of “possible adverse event outcomes.” First on the list is “Guillain-Barré syndrome”, described as “a rare disorder in which your body's immune system attacks your nerves.” The syndrome has “no known cure” and its mortality rate is “4% to 7%.” “Acute disseminated encephalomyelitis,” a “rare inflammatory condition that affects the brain and spinal cord,” is second on the FDA’s list. Third is “Transverse myelitis,” a neurological disorder that inflames the spinal cord, causing “pain, muscle weakness, paralysis, sensory problems, or bladder and bowel dysfunction.” Also listed as a possible outcome of a potential vaccine is “Anaphylaxis,” the severe allergic reaction which can lead to anaphylactic shock. A “stroke,” and “convulsions/seizures” are further possible side-effects, along with “Acute myocardial infarction” or heart attacks, inflammation of the muscles around the heart, and even death.
The FDA also suggested Kawasaki disease as a possibility after the vaccine. The disease “mainly affects children under the age of 5,” and is “always treated in hospital.”
EFFICACY and EFFECTIVENESS
The public might assume that the words “efficacy” and “effectiveness” mean the same thing. In medical/scientific talk, they are different. “Efficacy” refers to the results observed in ideal or controlled conditions, like a vaccine clinical trial. “Effectiveness” is the result seen in the real world after widespread use. Hence the quoted 95% “efficacy” in the controlled vaccine experimental trials may not be achieved in the real world after widespread vaccination. Recall that the clinical trials were testing for the virus only if the subjects became “symptomatic.” Asymptomatic persons in the trial were not tested so the overall effectiveness of the vaccine to prevent infection cannot be known.
https://pediaa.com/difference-between-efficacy-and-effectiveness/ (Accessed 1-20-21) https://www.businesswire.com/news/home/20201118005595/en/ (Accessed 1-20-21)
https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-conclude-phase-3-study-COVID-19-vaccine (Accessed 1-20-21)
VACCINE INJURY AND SIDE EFFECTS REPORTING
“Furthermore, mandating vaccines is a blatant violation of medical informed consent—a basic tenet of ethical medical practice. With numerous vaccines currently mandated for work, school, college, and daycare—and soon a coronavirus vaccine likely added to the list when marketed—the potential for harm increases.” “If an adult or child is killed or injured by a vaccine, federal law—the National Childhood Vaccine Injury Act of 1986—prohibits the person from suing the drug company that made the vaccine.” https://clmagazine.org/topic/medicine-science/should-we-be-concerned-about-ethical-vaccines/
The FDA in October listed possible side effects that are to be monitored in conjunction with administering a COVID-19 vaccine. There is to be both passive and active surveillance of side effects related to the vaccine. Under the former system, the FDA is to partner with the Centers for Disease Control (CDC) to manage the Vaccine Adverse Event Reporting System (VAERS), whereby individuals report adverse side effects to their health care provider.”
“With the active surveillance, the FDA plans to use the “Biologics Effectiveness and Safety (BEST) System,” with numerous partners. MarketScan, the largest number of the partner companies, has over 250 million patients.
Along with the Center for Medicare & Medicaid Services (CMS), the FDA states that its data can cover “approximately 55 million elderly US beneficiaries >65yrs of age.” It is through using the CMS data that the FDA plans to monitor side-effects from COVID vaccines, based on rapid-cycle analyses.”
PREGNANCY and mRNA COVID VACCINES
There is usually caution about giving medications to pregnant women. In spite of the many unknown and unstudied effects of these new vaccines, the CDC guidelines allow pregnant women who are part of a group (e.g. essential workers) to choose to receive them. Given that the risk of serious sequelae of COVID-19 disease is very low in young people, why would anyone who is pregnant or sexually active decide to receive the vaccine?
Use of aborted babies’ DNA? Pfizer and Moderna used cell cultures made from at least one aborted baby to test the vaccines, albeit not manufacture them. Although there are no cells from aborted babies in these mRNA vaccines, the remains of the murdered child were still illicitly used in testing to develop these vaccines. It must be mentioned that at least 7 other companies use aborted fetal cell line cultures in the manufacture and/or testing of Covid vaccines. https://soundchoice.org/vaccines/covid-19-vaccine-chart/
YOU MAY—YOU SHOULD—YOU MUST
At this time, vaccines are being made available to certain groups, so a person “may” receive the vaccine. There are three verbs that usually follow one another. Now it is “you may,” and “you should” often follows. Will there be a “you must”?
CONCLUSIONS & ONE PAGE SUMMARY
- The new mRNA COVID vaccines are experimental. Emergency Use Authorization (EUA) is not the same as FDA approval. There are many unknowns long-term and already more side effects short term, including life-threatening anaphylaxis, than for other vaccines. COVID-19 survival rates in the young are high, and even in those 70 years of age and older the survival rate is 95%.
- To make an informed decision about receiving a vaccine requires full and complete information regarding safety, side effects, ingredients including preservatives, use of fetal cell lines, and vaccine companies’ lack of liability for damages. Pfizer-BioNTech and Moderna vaccines are not FDA approved but have been released under Emergency Use Authorization (EUA). These are novel vaccines that can affect all cells in the body.
- FDA lists 22 separate entries of possible adverse event outcomes, including Guillain-Barré syndrome (no known cure and a mortality rate of 4% to 7%), acute disseminated encephalomyelitis (a condition that affects the brain and spinal cord), transverse myelitis (which inflames the spinal cord, causing pain, muscle weakness, paralysis, sensory problems, or bladder and bowel dysfunction), anaphylaxis (a severe allergic reaction which can lead to anaphylactic shock), stroke, and convulsions/seizures, along with acute myocardial infarction (heart attack).
- Pfizer-BioNTech, Moderna and 7 other companies used cell cultures made from at least one aborted, killed baby to test the vaccines, albeit not manufacture them.
- Despite unknown and unstudied effects of these new vaccines, the CDC guidelines allow pregnant women to choose to receive them. Given that the risk of serious sequelae of COVID-19 disease is very low in young people, why would anyone who is pregnant or sexually active decide to receive the vaccine?
- If an adult or child is killed or injured by a vaccine, federal law—the National Childhood Vaccine Injury Act of 1986—prohibits the person from suing the drug company that made the vaccine.
- Is there a need for this vaccine? The calculated survival rates are: <20 years: 99.9%; 20-49 years: 99.8%; 50-69 years: 99.5%; 70+ years: 95%.
- CDC data also show that Americans, regardless of age group, are far more likely to die of something other than COVID-19. Even among those in the most heavily impacted age group (85 and older), only 10.8 percent of all deaths since February 2020 were due to COVID-19.
- These new mRNA COVID vaccines are experimental. Emergency Use Authorization (EUA) is not the same as FDA approval. There are many unknowns long-term and already more side effects short term, including life-threatening anaphylaxis.
- Now, you “may” or “should” get the vaccine. Will it be mandated?
- This information is to better inform your conscience when deciding to consent or decline the novel mRNA Covid-19 vaccines.
- I decline to get the vaccine.
The views expressed by RenewAmerica columnists are their own and do not necessarily reflect the position of RenewAmerica or its affiliates. | 1 | 2 |
<urn:uuid:d002a592-e334-44b2-ad56-c52371f1add6> | Liquid crystal display
From Wikipedia, the free encyclopedia
- "LCD" redirects here. For other uses, see LCD (disambiguation).
A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source or reflector. It is prized by engineers because it uses very small amounts of electric power, and is therefore suitable for use in battery-powered electronic devices. LCDs are commonly called "LCD displays," but this is redundant, as it translates to "liquid crystal display displays."
Each pixel of an LCD consists of a layer of liquid crystal molecules aligned between two transparent electrodes, and two polarizing filters, the axes of polarity of which are perpendicular to each other. With no liquid crystal between the polarizing filters, light passing through one filter would be blocked by the other.
The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is unidirectionally rubbed using a cloth (the direction of the liquid crystal alignment is defined by the direction of rubbing).
Before applying an electric field, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces. In a twisted nematic device (the most common liquid crystal device), the surface alignment directions at the two electrodes are perpendicular, and so the molecules arrange themselves in a helical structure, or twist. Because the liquid crystal material is birefringent (i.e. light of different polarizations travels at different speeds through the material), light passing through one polarizing filter is rotated by the liquid crystal helix as it passes through the liquid crystal layer, allowing it to pass through the second polarized filter. Half of the light is absorbed by the first polarizing filter, but otherwise the entire assembly is transparent.
When a voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel to the electric field, distorting the helical structure (this is resisted by elastic forces since the molecules are constrained at the surfaces). This reduces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules are completely untwisted and the polarization of the incident light is not rotated at all as it passes through the liquid crystal layer. This light will then be polarized perpendicular to the second filter, and thus be completely blocked and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, correspondingly illuminating the pixel.
With a twisted nematic liquid crystal device it is usual to operate the device between crossed polarizers, such that it appears bright with no applied voltage. With this setup, the dark voltage-on state is uniform. The device can be operated between parallel polarizers, in which case the bright and dark states are reversed (in this configuration, the dark state appears blotchy).
Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided by applying either an alternating current, or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical, regardless of the polarity of the applied field).
When a large number of pixels is required in a display, it is not feasible to drive each directly since then each pixel would require independent electrodes. Instead, the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink. The groups are designed so each pixel has a unique, unshared combination of source and sink. The electronics, or the software driving the electronics then turns on sinks in sequence, and drives sources for the pixels of each sink.
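To make the row/column scan concrete, here is a minimal simulation sketch. It is purely illustrative: the 4x4 frame, the normalized drive levels, and the drive_row()/drive_columns() functions are assumptions standing in for real driver electronics, and the polarity flip on each refresh reflects the AC-drive requirement described above.

```python
# Minimal sketch of multiplexed (passive-matrix) addressing.
# Assumptions: a tiny 4x4 frame of on/off pixels, normalized drive levels,
# and placeholder drive_row()/drive_columns() functions standing in for
# the real driver electronics.

FRAME = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]

def drive_row(row_index, active):
    """Placeholder: select (sink) one row line while it is being addressed."""
    pass

def drive_columns(levels):
    """Placeholder: put the per-column drive levels on the column (source) lines."""
    pass

def refresh(frame, polarity):
    # Address one row at a time: activate the row line, then drive every
    # column line with that row's pixel values. The sign of the drive flips
    # on alternate refreshes so no pixel sees a net DC field.
    for r, row in enumerate(frame):
        drive_row(r, active=True)
        drive_columns([polarity * level for level in row])
        drive_row(r, active=False)

polarity = +1
for _ in range(4):          # a few refresh cycles
    refresh(FRAME, polarity)
    polarity = -polarity    # AC drive: reverse field polarity each refresh
```

An active-matrix panel is refreshed with essentially the same row-at-a-time scan; the difference is that each pixel's transistor and storage capacitance hold the applied level between refreshes.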
Specifications of LCD
Important factors to consider when evaluating an LCD monitor include
- resolution: unlike CRT monitors, LCD monitors have a native-supported resolution for best display effect.
- dot pitch: the spacing between LCD pixels; the smaller, the better.
- viewable size: The length of diagonal of a LCD panel
- response time (sync rate)
- matrix type (passive or active)
- viewing angle
- color support: how many colors are supported.
- contrast ratio
- aspect ratio: 4 by 3, or 16 by 9, etc.
- input ports (e.g., DVI, VGA, or even S-Video).
1904: Otto Lehmann publishes his work "Liquid Crystals"
1911: Charles Mauguin describes the structure and properties of liquid crystals.
1936: The Marconi Wireless Telegraph company patents the first practical application of the technology, "The Liquid Crystal Light Valve".
1962: The first major English language publication on the subject "Molecular Structure and Properties of Liquid Crystals", by Dr. George W. Gray.
Pioneering work on liquid crystals was undertaken in the late 1960s by the UK's Royal Radar Establishment at Malvern. The team at RRE supported ongoing work by George Gray and his team at the University of Hull who ultimately discovered the cyanobiphenyl liquid crystals (which had correct stability and temperature properties for application in LCDs).
The first operational LCD was based on the Dynamic Scattering Mode (DSM) and was introduced in 1968 by a group at RCA in the USA headed by George Heilmeier. Heilmeier founded Optel, which introduced a number of LCDs based on this technology.
In December 1970, the twisted nematic field effect in liquid crystals was filed for patent by M. Schadt and W. Helfrich, then working for the Central Research Laboratories of Hoffmann-LaRoche in Switzerland (Swiss patent No. 532 261). James Fergason at Kent State University filed an identical patent in the USA in February 1971. In 1971 Fergason's company ILIXCO (now LXD Incorporated) produced the first LCDs based on the TN-effect, which soon superseded the poor-quality DSM types due to improvements such as lower operating voltages and lower power consumption.
In 1972, the first active-matrix liquid crystal display panel was produced in the United States by T. Peter Brody.
In color LCDs each individual pixel is divided into three cells, or subpixels, which are colored red, green, and blue, respectively, by additional filters (pigment filters, dye filters and metal oxide filters). Each subpixel can be controlled independently to yield thousands or millions of possible colors for each pixel. Older CRT monitors employ a similar method.
Color components may be arrayed in various pixel geometries, depending on the monitor's usage. If software knows which type of geometry is being used in a given LCD, this can be used to increase the apparent resolution of the monitor through subpixel rendering. This technique is especially useful for text anti-aliasing.
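As a toy illustration of subpixel rendering on a vertical RGB-stripe panel, the sketch below treats each pixel as three independently addressable horizontal samples. The data and function name are hypothetical, and real implementations add color filtering to suppress the fringes this naive mapping produces.

```python
# Toy subpixel-rendering sketch for a vertical RGB-stripe panel.
# Input: a coverage (luminance) row sampled at 3x the pixel resolution.
# Output: per-pixel (R, G, B) values, where each channel takes the
# coverage sample that lines up with that subpixel.

def subpixel_row(coverage_3x):
    pixels = []
    for i in range(0, len(coverage_3x) - 2, 3):
        r, g, b = coverage_3x[i], coverage_3x[i + 1], coverage_3x[i + 2]
        pixels.append((r, g, b))
    return pixels

# Example: the edge of a glyph lands between whole pixels.
row = [0.0, 0.0, 0.3, 1.0, 1.0, 1.0, 0.6, 0.0, 0.0]
print(subpixel_row(row))
# -> [(0.0, 0.0, 0.3), (1.0, 1.0, 1.0), (0.6, 0.0, 0.0)]
```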
Passive-matrix and active-matrix
LCDs with a small number of segments, such as those used in digital watches and pocket calculators, have a single electrical contact for each segment. An external dedicated circuit supplies an electric charge to control each segment. This display structure is unwieldy for more than a few display elements.
Small monochrome displays such as those found in personal organizers, or older laptop screens have a passive-matrix structure employing supertwist nematic (STN) or double-layer STN (DSTN) technology (DSTN corrects a color-shifting problem with STN). Each row or column of the display has a single electrical circuit. The pixels are addressed one at a time by row and column addresses. This type of display is called a passive matrix because the pixel must retain its state between refreshes without the benefit of a steady electrical charge. As the number of pixels (and, correspondingly, columns and rows) increases, this type of display becomes less feasible. Very slow response times and poor contrast are typical of passive-matrix LCDs.
High-resolution color displays such as modern LCD computer monitors and televisions use an active matrix structure. A matrix of thin-film transistors (TFTs) is added to the polarizing and color filters. Each pixel has its own dedicated transistor, allowing each column line to access one pixel. When a row line is activated, all of the column lines are connected to a row of pixels and the correct voltage is driven onto all of the column lines. The row line is then deactivated and the next row line is activated. All of the row lines are activated in sequence during a refresh operation. Active-matrix displays are much brighter and sharper than passive-matrix displays of the same size, and generally have quicker response times, producing much better images.
Active matrix technologies
- Main article: TFT LCD, Active-matrix liquid crystal display
Twisted nematic (TN)
Twisted nematic displays contain liquid crystal elements which twist and untwist at varying degrees to allow light to pass through. When no voltage is applied to a TN liquid crystal cell, the light is polarized to pass through the cell. In proportion to the voltage applied, the liquid crystal molecules untwist, changing the polarization and blocking the light's path. By properly adjusting the level of the voltage almost any grey level or transmission can be achieved.
3LCD Display Technology
3LCD is a video projection system that uses three LCD microdisplay panels to produce an image. It was adopted in 1995 by numerous front projector manufacturers and in 2002 by rear projection TV manufacturers for its compactness and image quality.
3LCD is an active-matrix, HTPS (high-temperature polysilicon) LCD projection technology. It inherits sharp images, brightness and excellent color reproduction from its active matrix technology. Deeper blacks are contributed by the HTPS technology.
The 3LCD website describes the technology in detail and is supported by various companies including 3LCD manufacturers and vendors.
In-plane switching (IPS)
In-plane switching is an LCD technology which aligns the liquid crystal cells in a horizontal direction. In this method, the electrical field is applied through each end of the crystal, but this requires two transistors for each pixel instead of the one needed for a standard thin-film transistor (TFT) display. This blocks more of the transmission area, requiring brighter backlights that consume more power, making this type of display less desirable for notebook computers.
Some LCD panels have defective transistors, causing permanently lit or unlit pixels which are commonly referred to as stuck pixels or dead pixels respectively. Unlike integrated circuits, LCD panels with a few defective pixels are usually still usable. It is also economically prohibitive to discard a panel with just a few defective pixels because LCD panels are much larger than ICs. Manufacturers have different standards for determining a maximum acceptable number of defective pixels. The maximum acceptable number of defective pixels varies widely from one brand to another (ranging from a zero-tolerance policy to an 11-dead-pixel policy) and is often a point of contention between manufacturers and customers. To regulate the acceptability of defects and to protect the end user, ISO released the ISO 13406-2 standard. However, not every LCD manufacturer conforms to the ISO standard and the ISO standard is quite often interpreted in different ways.
LCD panels are more likely to have defects than most ICs due to their larger size. In this example, a 12" SVGA LCD has 8 defects and a 6" wafer has only 3 defects. However, 134 of the 137 dies on the wafer will be acceptable, whereas rejection of the LCD panel would be a 0% yield. The standard is much higher now due to fierce competition between manufacturers and improved quality control. An SVGA LCD panel with 4 defective pixels is usually considered defective and customers can request an exchange for a new one. Some manufacturers, notably in South Korea where some of the largest LCD panel manufacturers, such as LG, are located, now have "zero defective pixel guarantee" and would replace a product even with one defective pixel. Even where such guarantees do not exist, the location of defective pixels is important. A display with only a few defective pixels may be unacceptable if the defective pixels are near each other. Manufacturers may also relax their replacement criteria when defective pixels are in the center of the viewing area.
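One common way to see why panel area dominates yield is a Poisson defect model, in which the chance of a defect-free device is exp(-defect density x area). The sketch below uses that model with assumed numbers for the defect density and areas; the figures quoted in the paragraph above are observed defect counts, not parameters of this model.

```python
import math

# Poisson yield model: P(zero defects) = exp(-defect_density * area).
# The defect density and areas below are illustrative assumptions only.

def yield_fraction(defects_per_cm2, area_cm2):
    """Probability that a device of the given area has no defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

defect_density = 0.02       # defects per cm^2 (assumed)
ic_die_area = 1.0           # cm^2, a typical small IC die (assumed)
panel_area = 24.4 * 18.3    # cm^2, roughly a 12" 4:3 panel (assumed)

print(f"IC die yield:    {yield_fraction(defect_density, ic_die_area):.1%}")
print(f"LCD panel yield: {yield_fraction(defect_density, panel_area):.1%}")
```

Under these assumed numbers the small die is almost always defect-free while a defect-free full-size panel is vanishingly rare, which is why manufacturers tolerate a small number of defective pixels rather than rejecting whole panels.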
The zenithal bistable device (ZBD), developed by QinetiQ (formerly DERA), can retain an image without power. The crystals may exist in one of two stable orientations (Black and "White") and power is only required to change the image. ZBD Displays is a spin-off company from QinetiQ who manufacture both grayscale and colour ZBD devices.
A French company, Nemoptic, has developed another zero-power, paper-like LCD technology which has been mass-produced in Taiwan since July 2003. This technology is intended for use in low-power mobile applications such as e-books and wearable computers. Zero-power LCDs are in competition with electronic paper.
Kent Displays has also developed a "no power" display that uses Polymer Stabilized Cholesteric Liquid Crystals (ChLCD). The major drawback to the ChLCD display is slow refresh rate, especially with low temperatures.
LCD technology still has a few drawbacks in comparison to some other display technologies:
- While CRTs are capable of displaying multiple video resolutions without introducing artifacts, LCD displays produce crisp images only in their "native resolution" and, sometimes, fractions of that native resolution. Attempting to run LCD display panels at non-native resolutions usually results in the panel scaling the image, which introduces blurriness or "blockiness".
- LCD displays have a lower contrast ratio than that on a plasma display or CRT. This is due to their "light valve" nature: some light always leaks out and turns black into gray. In brightly lit rooms the contrast of LCD monitors can, however, exceed some CRT displays due to higher maximum brightness.
- LCDs have longer response time than their plasma and CRT counterparts, older displays creating visible ghosting when images rapidly change; this drawback, however, is continually improving as the technology progresses and is hardly noticeable in current LCD displays with "overdrive" technology. Most newer LCDs have response times of around 8 ms.
- In addition to the response times, some LCD panels have significant input lag, which makes them unsuitable for fast and time-precise mouse operations (CAD design, FPS gaming) as compared to CRTs.
- Overdrive technology on some panels can produce artifacts across regions of rapidly transitioning pixels (e.g., video images) that look like increased image noise or halos. This is a side effect of the pixels being driven past their intended brightness value (or rather the intended voltage necessary to produce this brightness/colour) and then allowed to fall back to the target brightness in order to enhance response times.
- LCD display panels have a limited viewing angle, thus reducing the number of people who can conveniently view the same image. As the viewer moves closer to the limit of the viewing angle, the colors and contrast appear to deteriorate. However, this negative has actually been capitalized upon in two ways. Some vendors offer screens with intentionally reduced viewing angle, to provide additional privacy, such as when someone is using a laptop in a public place. Such a set can also show two different images to one viewer, providing a three-dimensional effect.
- Some users of older (around pre-2000) LCD monitors complain of migraines and eyestrain problems due to flicker from fluorescent backlights fed at 50 or 60 Hz. This does not happen with most modern displays which feed backlights with high-frequency current.
- LCD screens occasionally suffer from image persistence, which is similar to screen burn on CRT and plasma displays. This is becoming less of a problem as technology advances, with newer LCD panels using various methods to reduce the problem. Sometimes the panel can be restored to normal by displaying an all-white pattern for extended periods of time.
- Some light guns do not work with this type of display since LCDs do not have the display timing characteristics of CRTs that light guns rely on. However, the field emission display will be a potential replacement for LCD flat-panel displays since they emulate CRTs in some technological ways.
- Some panels are incapable of displaying low resolution screen modes (such as 320x200). However, this is due to the circuitry that drives the LCD rather than the LCD itself.
- Consumer LCD monitors are more fragile than their CRT counterparts, with the screen especially vulnerable. However, lighter weight makes falling less dangerous, and some displays may be protected with glass shields.
- List of LCD matrices
- TFT LCD
- Active-matrix liquid crystal display (AMLCD)
- Input lag
Other display technologies
- Comparison of display technology
- Cathode ray tube (CRT)
- Vacuum fluorescent display (VFD)
- Digital Light Processing (DLP)
- Plasma display panel (PDP)
- Light-emitting diode (LED)
- Organic light-emitting diode (OLED)
- Surface-conduction electron-emitter display (SED)
- Field emission display (FED)
- Liquid crystal on silicon (LCOS)
- Television and digital television
- Liquid crystal display television (LCD TV)
- LCD projector
- Computer monitor
- Sharp Corporation
- Corning Inc.
- LXD Incorporated
- International Display Works
- CoolTouch Monitors
- ^ Brody, T.P., "Birth of the Active Matrix," Information Display, Vol. 13, No. 10, 1997, pp. 28-32.
- How LCDs are made: an interactive demonstration.
- Development of Liquid Crystal Displays - George Gray, Hull University Freeview video by the Vega Science Trust.
- History of Liquid Crystals, presentation and extracts from the book Crystals that Flow: Classic papers from the history of liquid crystals by its co-author Timothy J. Sluckin
- Display Technology, by Geoff Walker in the September 2001 issue of Pen Computing
- CRT vs. LCD Monitors Concise comparison matrix by Sam C. Chan for the network integrator Bravo Technology Center
- LCD, Plasma TV's Deemed Reliable: Report The International Business Times
- Overview of 3LCD display technology
- HDTV Org Independent guide to High Definition / LCD TV
- LCD-TFT Desktop Displays, specs and information about LCD display technology and models
- Liquid Crystal Institute of the Kent State University
- LXD Incorporated, harsh-environment displays manufacturing
- Displaze Ltd., Flat Panel reseller with specifications for many brands
- G-NET Inc., LCD Displays For Vehicle Applications From 7" to 12" Sizes.
- Hannstar Display Corp. LCD panel manufacturer for computers and televisions.
- 3M Company. Manufacturer of brightness enhancement films for LCD displays.
Usage of LCDs
- LCD (Liquid Crystal Display) IO, source code and examples for driving small LCD displays by techref.massmind.org, updated: 2006/4/10
- PIC Microcontroler LCD IO routines, source code and examples for driving LCD displays with the Microchip PIC embedded controllers, by techref.massmind.org, updated: 2006/3/5
<urn:uuid:2da30e3f-0377-4a3a-bb66-52dd02e0b5fc> | Lightning is one of Nature’s most impressive displays and capturing it with a camera is a challenge, but the results can be almost as grand as the natural spectacle. There are a lot of overlaps between lightning photography and fireworks photography, but lightning’s unscheduled appearance adds an element of luck to the adventure.
Before we get started, there are two points I’d like to make:
- Lightning is incredibly awesome, fun to photograph, and surprisingly easy to photograph.
- Lightning is incredibly dangerous.
Before we get started, let's talk about point #2 for a minute...
Lightning kills around 2,000 people a year around the world. This makes it one of the world's most dangerous weather phenomena. So, before you grab your metal tripod and head outside in the face of a storm to take photos, do some homework and use liberal amounts of common sense. For lightning safety tips, go to the NOAA Lightning Safety website and study up.
Now let's talk about #1...
Some awesome and "cool" lightning facts:
- About 100 times per second, a lightning bolt strikes somewhere on the earth. That is 8,640,000 strikes per day.
- A lightning bolt can reach up to 50,000°F (27,760°C). That is more than 5 times hotter than the surface of the sun―that same sun that is so hot it warms the earth from 93 million miles away. Cool, right?
Note: There is a lot of reading to be done on lightning, and some of it is incredibly interesting. I am here to talk about photography and not the "anatomy" of a lightning strike; so, in the interest of saving time, I will not get overly scientific here, but feel free to do your own research into this incredible phenomenon.
So, now that you know lightning is even more awesome and dangerous than you thought, you want to safely go out to capture it with your camera.
There are many different methods that successful lightning photographers employ to capture lightning. And, in the case of lightning, there are lightning-specific gadgets you can add to your camera bag to help you out as well. So, not only are there different ways to get the shot, there are different mousetraps, too.
As you are about to see, the techniques employed for getting good lightning photos are similar to the way you capture fireworks photography, but the biggest difference with lightning is that the photographer has absolutely no idea when and where it will strike―lightning is completely random.
An SLR, DSLR, or mirrorless camera is likely to be the best tool for the job. A point-and-shoot camera that has a "manual" mode and minimal shutter delay can also be used. Some mobile apps even exist to help you get lightning photos with your smartphone or tablet, too.
Lightning can be photographed during the day or night, but some gear can help you get the shot you want in any lighting conditions.
- A camera support. Note that I did not say "tripod." Generally, with lightning photography, you aren't holding the camera to your eye waiting for a strike; you will set up the camera on a support and hope to capture a strike in the field of view of your lens. For lightning, you can always use your trusty tripod (especially if shooting at night), but to maximize your flexibility, a bean bag or car-window mount might be great options. The car-window mount gives you the added benefit of being inside a vehicle while the storm approaches.
- A cable release. Important for reducing camera shake―especially when doing long exposures.
- A spare battery. The Law of Murphy says you will miss the best strikes after your camera battery dies!
- A pocket full of memory cards. You might be taking a lot of photos of no lightning in-between strikes. Be prepared to delete photos, or pop in another card.
- A flashlight, if it is dark outside.
- A SAFETY PLAN. As I said before, lightning is dangerous. If it is getting too close for comfort, you need to have a plan in place to take shelter indoors or in a vehicle to protect yourself. Lightning storms are often harbingers of heavy rain and even damaging hail. Do NOT get stuck outside as the tallest thing in a wide-open space holding onto a metal tripod!
Storms come from different directions on different days. Keep an eye on the weather to "get ahead" of the storm. There are a number of weather and lightning-tracking websites available on your computer, tablet, or smartphone. There are even some dedicated lightning tracking apps like Lightning Finder and Spark that show up-to-the-minute maps of strike activity.
" The Law of Murphy says you will miss the best strikes after your camera battery dies!"
Fact: Lightning is in the sky, so a wide-angle lens is going to get you the most coverage and maximize your chance of getting a strike in your frame.
Before the storm comes, spend some time at home looking at lightning photos on the Internet. My guess is that the images you are most drawn to are those with not only amazing strikes, but those images that are composed with some sort of interesting landscape elements. A photo capturing the most spectacular lightning strike will likely not be a great photo if there is a construction site, shopping mall, half of a speeding car, or something else awkwardly crowding the frame. From what I have seen, expansive landscapes/waterscapes with big-cloud storms and cityscapes seem to work well as compositional elements for lightning photos. If you want to photograph only an expanse of sky and a lightning bolt, no worries. But, if you want to make your photograph stand out as an artistic piece capturing one of nature's most dynamic and dramatic forces, think about the entire photograph, not just the lightning.
This is where you need a bit of luck to enter the fray.
You do not control where and when the storm comes. Nor can you always reach the best vantage point. Remember to stay safe and play the cards you have been dealt. Today might not be the storm for the best photos. My guess is that some of the most successful lightning images are often the result of more luck than skill.
With fireworks photography, one of the keys is to remain flexible with your camera settings and, if you are not getting the shot you want, change things up. With lightning, the same thing applies, however, fireworks are scheduled. Lightning is not. That last strike you overexposed might be the last one of the storm. That is simply the nature of photographing random and unscheduled events. So, before you head out, have a safety plan, prepare yourself for failure, and hope for success.
- Focus. Unlike fireworks, the camera will likely not have time to focus on a lightning strike and then get an image. The trick here is to set the lens or camera to focus at the infinity position so that everything past a certain distance is in focus. See the linked article for a discussion on infinity focus.
- White Balance. "Auto" should be fine, but there is a popular opinion that the "cooler" WB settings give the scene a "blue cast" that works well with lightning. It is a matter of personal preference, of course. Also, something else to consider: there are some striking black-and-white lightning photographs that can be captured.
- Noise Reduction. Leave it off for your night shots and keep the shutter speeds short enough to not worry about noise buildup.
- Flash. Off. Nature is going to pop its awesome flash.
- ISO. Set it low. Feel free to leave it at your camera's native ISO setting. For nighttime shots you will be working from a camera support and not trying to squeeze a handheld shot off. For daytime, you don't need higher ISO. Use 100 or 200.
- Mode. This is going to be dependent on the technique you are using. I will discuss this in the next section.
- Aperture. Variable. If you are being smart and safe, the lightning bolt will not be hitting the ground anywhere near your camera, and depth of field will not be an overriding issue. Your aperture setting will likely be near the middle of the range, but you may change it up depending on the conditions or the technique you are using.
- Shutter Speed. This also depends on your technique. Stay tuned...
Technique #1: The Bulb / Long-Exposure Method
This technique is similar to that used for fireworks and, because you may be leaving your shutter open for an extended time, it works best for low-light/nighttime lightning photography. Basically, you place your camera on a support and use one hand to activate the cable release to open the shutter and use the other hand to cross your fingers while you hope for a great lightning strike in the distance.
"Lightning happens fast, but often a return strike lingers in the sky for much more than an instant."
After the lightning strike, close the shutter. Exposure done.
Depending on the pace of the action, you might want to check your LCD to see how the image looked. Do you need to recompose? Was it overexposed as you were waiting too long for the bolt from the clouds? You may need to adjust your aperture if you felt that too much light came into the frame over a short period, or open your aperture if you failed to capture some of the landscape. This is where it pays to be flexible. Also, try to note how long your shutter was open, either by mentally counting seconds, using a watch, or by looking at the last image's metadata on your LCD.
For instance, if you waited 30 seconds for a lightning strike and found the entire image was overexposed, you might want to close the aperture a bit to make sure the next 30-second exposure is better. Or, leave the aperture alone and shorten the exposure. No lightning? Release the shutter and immediately open it back up for the next shot. Remember, lightning is not scheduled, so do not be rigid with your exposures.
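If it helps to think in stops, the short sketch below shows the usual reciprocity bookkeeping: closing the aperture by about one stop, or halving the exposure time, each roughly halves the ambient light collected. The function and values are illustrative assumptions, not settings from this article; note that the bolt itself is so brief that shutter time mainly affects the ambient exposure, while aperture (and ISO) affect both the bolt and the ambient scene.

```python
import math

# Relative ambient exposure in stops for a long exposure of the night scene.
# Higher value = more ambient light collected. Values are illustrative only.

def ambient_stops(aperture_f, shutter_s, iso=100):
    """Relative ambient exposure in stops (log base 2)."""
    return math.log2((shutter_s * iso) / (aperture_f ** 2 * 100))

base = ambient_stops(aperture_f=8, shutter_s=30)          # f/8, 30 s
stopped_down = ambient_stops(aperture_f=11, shutter_s=30) # close ~1 stop
shorter = ambient_stops(aperture_f=8, shutter_s=15)       # halve the time

print(f"f/11 vs f/8 at 30 s: {stopped_down - base:+.1f} stops")
print(f"15 s vs 30 s at f/8: {shorter - base:+.1f} stops")
```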
One of my favorite lightning photos, "Road," by renowned storm landscape photographer Mitch Dobrowner, was taken using this method. Mitch says, "I do nothing special [for lightning] except compose for it, based on how/where a storm may be 'electrified', and then shoot in sequence via time exposures (between 2 seconds and 10 seconds). Then, I just cross my fingers and (sometimes) pray..... "
Before I move on with more lightning stuff, let’s analyze Dobrowner's image for a moment. I am sure you have probably seen dramatic images of a more prolific lightning strike, but what Dobrowner has done here has successfully combined a striking composition (the road) and incredible lighting (the linear sun break before the horizon) into an image that would likely be successful without a lightning bolt. The lightning's cameo appearance to the left of the road does an incredible job of balancing the dissymmetry of the trees to the right of the road and, therefore, keeps the otherwise symmetrical image in balance. The image is a wonderful landscape photograph and works particularly well because it is a whole image; not just a photo of a bolt of lightning.
I mentioned Murphy's Law earlier. It comes into play here, as well. I can almost guarantee that if you decided to reduce your aperture after that first exposure, the next bolt of lightning will happen within moments of you opening the shutter and you will be left with an underexposed landscape!
I can also tell you, from experience, that the absolute best lightning strikes happen when you are reviewing images on your LCD or making adjustments to the camera! Thanks, Murph!
Technique #2: The Wild West Method
How quick is your shutter finger? Lightning happens fast, but often a return strike lingers in the sky for much more than an instant. A lot of great lightning shots have been made by photographers letting the initial strike serve as the catalyst for opening the shutter. To do this you will need 1) fast reflexes and, 2) a camera with very little shutter lag. Today's point-and-shoot cameras have minimal shutter lag, but "fast shutter lag" used to be the sole realm of SLR and rangefinder cameras.
So, set up your camera on a support and select "bulb" as your shutter speed. Have the cable release under your itchy shutter finger, take a deep breath, and as soon as you see a flash, press the button! When the strike ends, release the shutter.
Check out the photo or keep your eyes on the sky and get ready to fire another shot.
Technique #3: The Gadget Method
If you think that using technology is cheating, you might want to stick to the first two techniques. If you believe in better living through tech, a lightning trigger might just be the thing for you. B&H sells some great lightning triggers from several brands. Some of these devices mount on your camera's hot shoe or tripod and connect to the camera via a cable (make sure you get the correct version for your camera), and feature sophisticated electronic triggers that tell your camera to take a photo when they detect lightning.
Some of the triggers detect an emission of infrared light that precedes a lightning strike, and others are multi-use―triggered not only by lightning, but also by motion, laser light, sound, and other external inputs. Once triggered, they can automatically activate your camera's shutter in a fraction of a millisecond. These triggers often detect lightning during both day and nighttime.
Lightning photography expert and storm chaser Roger Hill is a fan of using lightning triggers during the day, but prefers to use the bulb method for his nighttime shots. For the lightning triggers to work, he says, "You have to have a return stroke from a lightning bolt to capture it, as a lightning strike is VERY fast, so fast even the trigger cannot detect a single stroke."
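For the curious, the logic inside such a trigger can be sketched in a few lines. Everything below is a hypothetical illustration, not any vendor's actual firmware or API: the sensor-reading and shutter-firing functions are stand-ins, and the threshold and timing values are arbitrary. The idea is simply to fire the shutter on a sudden jump in brightness while tracking slow ambient changes.

```python
import time

# Hypothetical lightning-trigger loop. read_light_level() and fire_shutter()
# are placeholders for real sensor and camera-release hardware; the threshold
# and timing values are arbitrary illustrations.

def read_light_level():
    """Placeholder for sampling a photodiode / IR sensor (0.0 - 1.0)."""
    return 0.0

def fire_shutter():
    """Placeholder for closing the camera's remote-release circuit."""
    print("click")

JUMP_THRESHOLD = 0.15      # how large a sudden brightness increase must be
SAMPLE_INTERVAL = 0.0005   # seconds between samples
LOCKOUT = 2.0              # seconds to wait after firing before re-arming

def run_trigger():
    baseline = read_light_level()
    while True:
        level = read_light_level()
        if level - baseline > JUMP_THRESHOLD:
            fire_shutter()
            time.sleep(LOCKOUT)
            baseline = read_light_level()   # re-arm against the new ambient level
        else:
            # Track slow ambient changes (clouds, dusk) without firing.
            baseline = 0.99 * baseline + 0.01 * level
            time.sleep(SAMPLE_INTERVAL)

# run_trigger()  # would loop indefinitely on real hardware
```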
In summary, there are a few things to remember before you go out and try to catch lightning in a camera:
- Lightning is random. It is not scheduled. It comes and goes as it pleases. Be prepared to come away empty-handed when trying to photograph it but do not be discouraged. With more than 8 million lightning strikes every day on the planet, it is likely you will have more opportunities.
- LIGHTNING IS DANGEROUS. Yes, I already said that―several times. There is a reason for my redundancy. The bottom line is that we are talking about taking photos of something that can kill or severely injure you. Be safe. Be smart. Use common sense. Have quick access to shelter. If you are pointing your camera straight up to capture lightning, you should find yourself photographing a ceiling or the overhead of your car because you were smart enough to go inside several minutes before the storm got close.
So, have fun, good luck, get some great shots, but, most importantly, BE SAFE!
Or you can simply get one of the newer Olympus (now OMSystems) cameras that offer 'Live Composite', set your exposure time to correctly expose for the ambient light, trip the shutter, sit down in your lawn chair and watch the LCD. When you have captured several lightning strikes and are happy with the image on the LCD, end the exposures. You can use this feature for lightning, fireworks, star trails, light painting, etc. for up to 3 hours without over exposing the ambient light. The camera will produce a composite image in-camera -- it's an awesome feature, and one that every major camera manufacturer will offer at some future point.
Yet one more example of some great tech that lies in Oly/OM's Micro Four Thirds system that is rarely found in more popular cameras.
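For readers wondering what Live Composite is doing under the hood, it is essentially a running lighten-blend: each new frame only contributes pixels that are brighter than anything already recorded, so the ambient exposure never builds up. Below is a rough sketch of that idea; the frame data is faked and this is not Olympus's actual firmware logic.

```python
import numpy as np

def live_composite(frames):
    """Combine frames so each pixel keeps the brightest value seen so far."""
    composite = None
    for frame in frames:
        composite = frame.copy() if composite is None else np.maximum(composite, frame)
    return composite

# Tiny example with fake 4x4 grayscale "frames": the flash in the middle frame survives,
# while the dark frames before and after add nothing to the ambient exposure.
dark = np.zeros((4, 4), dtype=np.uint8)
flash = dark.copy()
flash[1, 2] = 255  # one bright pixel standing in for a lightning bolt
print(live_composite([dark, flash, dark]))
```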
I have seen that mode in action at night photo workshops and I can safely say that almost everyone who witnesses it in action is a bit jealous that their cameras don't have that functionality!
Thanks for making us jealous and thanks for reading!
Is it okay to use a neutral density filter to extend the exposure time of daytime shots and increase your chance for a lightning strike?
I don't see why there would be an issue with that. Actually, it's a great idea. With some fast-moving storm clouds, you might get a milky, streaky gray sky punctuated with a bright lightning bolt! I assume others have done this, but I haven't seen many examples!
The only potential downside is that the dimmer strikes might not register as well due to the darkened filter.
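To put rough numbers on that trade-off: each stop of neutral density doubles the shutter time the scene needs, so a 5-stop filter stretches the exposure by a factor of 2^5 = 32. A quick sketch of the arithmetic, using made-up starting exposures rather than anything metered:

```python
def nd_adjusted_shutter(base_shutter_s: float, nd_stops: int) -> float:
    """Shutter time needed for the same exposure after adding an ND filter of nd_stops."""
    return base_shutter_s * (2 ** nd_stops)

# A daytime scene metering at 1/125 s needs about 1/4 s behind a 5-stop ND,
# and a 1-second dusk exposure stretches to 32 seconds.
print(nd_adjusted_shutter(1 / 125, 5))  # 0.256
print(nd_adjusted_shutter(1.0, 5))      # 32.0
```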
Thanks for reading!
It can be very helpful, depending on the distance of the storm and the bolts. There is a point where the exposure gets too dark to register the bolt's roughly 100 ms of light; with, say, a 1-minute exposure, a distant bolt won't show. That can actually work in your favor, because only the brightest, closest strikes will register... though it can be dangerous to be hunting bolts that close. 3-6 stops is as far as I would go.
For years I used a 5 stop ND filter for photographing lightning. I would adjust the camera to give me a 30-second exposure, and it worked great. The problem is that you will come home with 200-300 images that you have to go through to find the 8 images that contain lightning. A lightning trigger will make your lightning photography easier. Check out these articles on my site: [jeffcolburn.com]
Thanks for the awesome tip and the links. Unfortunately, I had to remove the specific hyperlinks to publish, but your comment is out of B&H jail!
Thanks for reading!
I use a Sony a6300 with a 10-18 wide angle lens to record videos in my favorite spot under an alcove on Queens Blvd. in Forest Hills. I take 4K videos at 24fps, manual mode, shutter speed of 1/30, f/11 or f/16, and ISO between 100 and 800 depending on how light or dark the sky is. I then run the videos in the Photos app on my iMac desktop (macOS Sierra) and save the individual frames (it's under the gear icon) as 8MP TIFFs. I can also save a frame in QuickTime by stopping at the frame, clicking Edit, Copy, going into Photoshop, creating a new file from the clipboard and clicking Edit, Paste. The video will have all the stages of the lightning on individual frames. I then crop and tweak in Photoshop, flatten the layers, and save as a JPEG. Although they are only 8MP files, I wouldn't miss the lightning, and all the phases of the lightning are recorded. If the sky is light, I underexpose and adjust in Photoshop.
Having lived in the Florida panhandle, I find NYC pretty devoid of good lightning. I was told that the county I lived in has more lightning strikes per year than any other place in the US...fireworks almost every night!
Thanks for stopping by!
Thank you for the excellent article and tips, amazing photos!
I can only take credit for the shots I took, but, I agree, the other ones are awesome!
Thanks for reading and the kind words!
Incredible article and photos! Would you be able to share what kind of camera, lenses, and settings you used for these photos above? I want to start lightning photography as a hobby and I am still contemplating what kind of camera to purchase, although I am leaning toward the Nikon brand. For lenses I'm still not sure what to get. Thanks!
Thanks for the compliments. Only one of the photographs was actually mine and it was taken with a Nikon D300 and likely the NIKKOR 17-55mm f/2.8 lens.
Roger Hill took the other watermarked photos and he might talk about his gear somewhere on the interweb. When it comes to lightning, gear is probably secondary to luck! Any modern camera with a good lens can be used to get great images of lightning these days! Regardless of the brand, you'll want to make sure you can control the aperture and shutter speed manually to improve your chances of catching a strike.
Thanks for reading!
You're welcome, Todd! I ended up investing in the Nikon D750 with the Nikkor 24-70mm f/2.8 lens. I'm enjoying every minute of it so far, and I already get to try techniques 1 & 2 this weekend since severe weather is expected where I'm at. I will eventually purchase a lightning trigger device sometime in the next few months, as well as another lens to start some engagement/wedding photography. I really hope I can get lucky this weekend! I will definitely share if I get any good shots.
Good luck this weekend, Robert! Great body and lens combination there...you should be well equipped for a lot of photographic challenges.
Make sure to stay safe! What good is a photo if you aren't around to enjoy it? (A play on Han Solo's wisdom.)
Thanks Todd! I'm planning to do it while I sit in my garage. I am going to try technique 1 first but need a little clarification on where I should set my settings (basically explaining to a beginner). I appreciate it!
The garage is a good place to stay safe!
For method one, the key is to get the camera to take a long enough exposure that you can maximize your chance of catching a strike. During the day, or even at dusk, your "normal exposures" will be very short - a fraction of a second. To lengthen the exposure, you want to set the D750's ISO to 200 and start closing your aperture. Once the lens closes past f/11, you will likely start seeing some softening of the image due to diffraction, but you want to stretch the exposure as long as you can. At night, this shouldn't be an issue as you can get long exposures at f/8 or wider.
So, go to Aperture Priority Mode, set your ISO to 200 and close the aperture smaller and smaller and start shooting. Find a good exposure and note the amount of time the shutter was open. Now you can use that as a baseline and either keep shooting on Aperture Priority, or go to bulb mode and hold the shutter open manually while keeping an eye on your sweep second hand. If you go longer than the exposure that the camera "suggested" you will start overexposing. If your shutter is open a shorter time, you will go underexposed.
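If it helps to see that reciprocity written out: every stop you close the aperture doubles the shutter time needed for the same exposure, since exposure scales with the square of the f-number. A small sketch with made-up starting values (not a metered reading):

```python
def equivalent_shutter(base_shutter_s: float, base_fstop: float, new_fstop: float) -> float:
    """Shutter time that keeps the same exposure after changing only the f-stop."""
    return base_shutter_s * (new_fstop / base_fstop) ** 2

# If a dusk scene meters at 2 seconds at f/4, stopping down stretches the exposure:
for f in (5.6, 8, 11, 16):
    print(f"f/{f}: {equivalent_shutter(2.0, 4.0, f):.1f} s")
# Roughly: f/5.6 -> 3.9 s, f/8 -> 8.0 s, f/11 -> 15.1 s, f/16 -> 32.0 s
```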
The best way to shoot in bulb is with a wired or radio remote trigger so that you don't have to shake the camera while you hold the shutter open.
I hope this helps! If it doesn't, keep the questions coming and let me know how it turns out!
Hey Todd, unfortunately I'm a little disappointed with Mother Nature. We did not see any lightning at all, but we did get a ton of rain. I was going to try to use bulb mode since I do have a remote trigger device. I planned to set my D750 to bulb mode with my aperture set at f/11, my ISO at 200, and then keep the shutter open as long as I could until I get a lightning strike. Then I could close the aperture a bit if it turned out overexposed or open it up more if it turned out underexposed. I think I am beginning to understand this concept better now. When I'm in bulb mode, is there a way to tell the camera to automatically close after 30 seconds if I don't press the trigger, or do I just have to keep pressing my luck and hope I get a lightning strike the next round and so forth?
Your help is greatly appreciated by the way!
Besides staying safe, always know that, when you do lightning photography, the Law of Murphy applies along with the laws of physics!
The best lightning strikes ALWAYS happen when you are between shots, changing batteries, out of memory, or on a bathroom break.
As far as bulb and 30 seconds go, the D750, in manual or shutter priority mode, will allow you to select a 30-second shutter speed, so if 30 seconds at a particular aperture is getting you good exposures, you can set it to 30 seconds and forget it. Also, you can do continuous shooting at 30 seconds so that you only have a fraction of a second between shots. The advantage of this is that you won't miss much action. The disadvantage is you will probably get an overexposure if you catch several strikes in one 30-second frame. The solution is to turn the camera off after a strike inside that 30-second exposure.
Good luck! I hope the weather turns ugly for you! Please let me know if you have any other questions. Thanks!
You have been such a great help Todd! I'm hoping we see some ugly weather soon because I'm pumped up about this and can't wait to start experimenting. I'm trying the bulb mode first and then the 30 second technique to see which works better for me. I would love to continue seeing your photos and keeping in touch.
No worries at all! That is what we are here for!
I hope you get some lightning soon and some good shots!
I'll be around! Thanks for hanging out with us at the B&H blog!
Hey Todd, tonight I had the perfect opportunity to give it my first shot! I think the photos I have turned out really good! A couple of them are either under- or overexposed, but I would love to send these to you, get your opinions of them, and get some advice on how I could make them better. These photos were taken in bulb mode with ISO set to 200 and the aperture set to f/11, along with a remote trigger device. I can also learn to fix the over- and underexposed photos with the new Adobe Photoshop program I got for Christmas.
Glad it worked out! Yep, under- and overexposed shots are going to be the norm with lightning. You really can't avoid it, even if you are using triggers and other gadgets. The thing to do is keep your "base" proper exposure in mind and never go above it. Sometimes, if your base exposure is 30 seconds, you will get a strike at the 28-second mark and blow out the frame, or you'll get one at 2 seconds, close the shutter, and find an underexposed shot.
Feel free to send them to toddv at bhphoto.com and I will check them out.
Looking forward to it! Nice Xmas gift there (Photoshop)!
Sent my photos. Hope you got them!
I got them this morning. Good shots! Thanks for sharing. I'll address your questions here so that others can learn from your trials...
"All these images were take anywhere from 2 seconds to 45 seconds after I started my exposures using bulb mode. One or 2 of these I was also impatient and pressed my wireless trigger device too soon. I think for next time I will aim to keep each exposure for no longer than 30 seconds and if nothing happens I'll just start a new exposure. There is one thing I may be misunderstanding using the bulb technique. To my knowledge you can have an exposure time for as long as you want in bulb mode until you end your exposure right? Or is there a way to set up the exposure to end at 30 seconds automatically if I don't manually end it myself? Also, is it ideal to get lightning strikes towards the middle of your exposure (10-20 seconds) to avoid under/overexposing the image? Lightning can strike at anytime and seems really hard to capture them in the exact timeframe we are looking for."
While you are shooting lightning, keep evaluating your base exposure. If you start to go overexposed at 15 or 30 seconds (or more or less), then stop your exposure at that time and start anew, even if you didn't get a strike. There is no need, in the digital world, to conserve frames, so just start another shot. If you are shooting after dark, your base exposure should not change much. If you are shooting at dawn or dusk, you will have to keep adjusting for the changing light.
Bulb keeps the shutter open as long as you want it to. It can be very fast, or it can be open for a long time. Most cameras have selectable shutter speeds up to 30 seconds. So, if your camera does 30 seconds, there is no need to do bulb. In fact, you can set it to 30 seconds (or less) and put it on continuous mode so that the camera just keeps shooting automatically while you get a beverage or surf the B&H blog. If you are shooting in Bulb, you will need to stop the exposure manually and then start the next one.
If you get a strike early in a long exposure, you might close the shutter and get a shot with an underexposed foreground. If the strike is late, you might get an overexposure. That is part of the (fun?) experience of shooting a phenomenon of nature that is completely random in location and schedule! You just have to roll with it and hope for a great shot at the end of the storm!
Keep experimenting and having fun! Stay safe!
I will definitely be taking your advice and next time we get a lightning storm here I will hope to get even better shots! I will keep in touch in the future. Thanks again!
No worries, Robert! Glad to help!
I like the safety of shooting semi-indoors, but the next challenge is to get a (more) interesting foreground!
This was a very good article.
Thank you, Danny!
Nice article. Personally I've never been able to capture a strike but with these pointers I might try it again.
Not sure if this would help. If you count the seconds between the flash and the crack of thunder (each second being a mile), you can tell if the storm is coming closer or moving away. The thunder usually follows right after the lightning strike, so you can gauge about when the next strike will occur. Without using a watch, just count off one thousand one, one thousand two, etc. And if you only get to one one-thousand, make sure you're well sheltered, because that puppy is right overhead.
Yep, the speed of sound is a handy way to calculate the distance from the strikes, but you never know if the next one is going to be where the last one was, or right overhead!
Thanks for reading and thanks for the tips! Stay safe!
A slight correction on the time-to-distance... it's actually 5 seconds per mile.
Lightning can strike up to 10 miles from the storm... about the limit at which you can hear it... so the usual advice is: if you can hear thunder, take cover. Strikes tend to be separated by 2-3 miles, so a 15-second delay could mean the next strike lands right near, or on, you!
Thanks for clarifying! It all stems from the fact that the speed of sound is about 343 meters/second at sea level...in dry air...at 20 degrees C.
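Expressed as a quick calculation (using roughly 343 m/s; the exact figure shifts a little with temperature and humidity):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def strike_distance(delay_seconds: float) -> tuple[float, float]:
    """Distance to the strike in (kilometers, miles) for a flash-to-thunder delay."""
    meters = delay_seconds * SPEED_OF_SOUND_M_S
    return meters / 1000.0, meters / 1609.34

# A 15-second delay puts the strike only about 5 km / 3 miles away --
# close enough that the next one could be on top of you.
km, miles = strike_distance(15)
print(f"{km:.1f} km, {miles:.1f} miles")  # about 5.1 km, 3.2 miles
```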
Great article! Covers the most essential techniques for photographing lightning, and stresses the most important, safety, very well. I've had a few close calls through the years, and would like to add, keep an eye on what's going on behind you. Numerous times I have been so concentrated on what is going on in front of me, I didn't notice the storm building behind me until it let me know it was there.
Also, when setting my aperture, I tend to base it on my distance from the storm. If I'm really close to the storm, I'll use a small aperture like f/11 or f/16. The farther away I get, the more I'll open it up, to f/8 or even wider. I've found that a wide aperture close to the storm renders the strike very thick and unnatural-looking. Of course this is a matter of personal preference; however, I would suggest experimenting and seeing the difference for yourself.
Again, great article! Can't wait for monsoon season here in the southwest!
Thanks for your comments and tips! Awesome stuff!
Also, thanks for adding to the safety conversation. Very important!
Be sure to share your photos with us here at B&H! Good luck and stay safe!
Under Technique #1, why would you have to have an underexposed landscape if the lightning struck soon after reducing the aperture? How about leaving the shutter open after the lightning strike until you think the landscape will be properly exposed?
Thanks for your question!
What I was trying to convey is the following...
If you take a 30-second exposure, and get an overexposed landscape, regardless of if you caught a bolt or not, you know that 30 seconds is too long for that particular landscape. Your options are reducing your aperture and keeping the shutter time the same, or keeping the aperture constant and reducing the shutter time.
Once you figure out the proper exposure for the landscape, you can adjust your shutter speed and/or aperture to help maximize your efficiency for capturing strikes. If you closed your aperture to reduce the exposure on the landscape, and then capture a bolt in the time the shutter is open, there is nothing to say that you cannot keep the shutter open until you get to the calculated time for the properly exposed landscape.
Having said that, you will run the risk of capturing another strike... not a bad thing unless it blows out your exposure. Also, with digital photography, it is much easier to recover detail from the underexposed parts of the image than from the overexposed ones. So, to hedge your bets, you might want to end the shot with an underexposed landscape in the hope of pulling data from it in post-processing, thereby not risking capturing a second bolt that ruins the frame or overexposing the landscape.
It is all up to you how to roll. There are no hard-and-fast rules for photographing lightning except... STAY SAFE!
I hope that helped clear things up for you! Thanks for reading!
Another point worth emphasizing, especially in connection with the "Wild West" method, is to make sure the camera doesn't have to meter anything. No auto exposure/Av/Tv/P, no auto white balance, etc., beyond, of course, the no-autofocus mentioned above. Even in M, I suspect it might be advantageous to select any metering mode other than evaluative (e.g. partial, spot, center-weighted), since I think it still meters to report the exposure bias for the EXIF data. The point of this is that as soon as you hit the shutter release (or your gadget does), the camera can open the shutter as soon as possible. While the computing is mighty fast these days, that evaluating or metering is still going to slow things down, even if by just a fraction of a second, and that could make all the difference in catching a return stroke.
Hi J. Randall,
Great point! For the "Wild West" method, the more manual the better, to give you the quickest draw possible.
Thanks for reading and commenting!
I really appreciate what you have written here. This gives me something to think about if we get any more storms worth photographing; seems like they have been few and far between lately in our area: look like they are coming on great across the plains, but then they hit the Big Muddy and fizzle out.
You did not mention my method, which is using a tripod and setting the camera on manual with a shutter speed of 1/50 for night and 1/800 or so for day (depending on the available light), the aperture set at f/4, ISO 3200, and auto white balance (yes, grainy, but effective). I use a remote trigger and have my camera set on continuous high, which gives me just over 5 frames per second. The challenge is - as with any lightning shot - to hit the button in time to get the best of the strike. Obviously this has the limitation of relying on my own reflexes, but that, to me, is the challenge. I am shooting at home from my porch, so there are trees in the distance to provide scenery, but I focus more on the sky and have only the merest outline of the trees showing (except for when I get a "daylight" flash and it illuminates everything in the field as though it were midday). I think my focus is more on cloud-to-cloud lightning, which can "skitter" across the sky and through two or three pictures. These are just for my own pleasure and would probably be too "grainy" because of the high ISO for commercial use. But we don't get many chances at lightning here in Tennessee like you can get in the plains; storms are brief for the most part and I don't have a lot of time to experiment with settings. But I may try the bulb method next time; I've done it in the past and tend to get too much light before I can get it closed.
I like your method too! There are lots of different ways to catch the lightning in your camera! What kind of camera and lens are you using?
When doing the Bulb method, keep taking "test" shots to see how long you can keep the shutter open before you start to get overexposed. Once you know that, I would dial it back a few seconds or stops so that, if you catch a bolt, you don't end up very overexposed. Keep experimenting and making exposures. The light of night changes, so a 30 second exposure on a clear moonless night might be way too long for a cloudy or moonlit night.
Or...get a lightning trigger!
Good luck and stay safe!
Thank you for the excellent article and tips. Shooting lightning is an energetic challenge that can obviously yield some beautiful photographs. I have graduated from being a "wild west" photographer to one leaning toward bulb mode. I know I need to save up for a lens with a wider aperture setting, but... that's another story.......
Hi Jim! Thanks for your comments!
If you are looking for a larger-aperture lens for lightning photography, do not be afraid to find an older manual focus used lens for the task. As you are likely shooting lightning at great distances (hopefully!), you might find that an older MF lens with a hard-stop at infinity is a great benefit and that grabbing a used example fits better into your budget!
Stay safe out there!
Excellent information, especially the safety points!
Thank you, Richard! Stay safe out there!
Great Advice. Thank you for the article.
Thanks for reading, James. And thank you for the compliment!
This is good stuff
Thank you, Chris! I am glad you enjoyed the article!
Revista Brasileira de Ornitologia, 23(1), 36-63 March 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions 1,3,4,5 2,3 Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Programa de Pós-Graduação em Ecologia – Instituto Nacional de Pesquisas da Amazônia – INPA. Coordenação de Pesquisas em Biodiversidade - Instituto Nacional de Pesquisas da Amazônia – INPA Programa de Conservação do Gavião-real (PCGR) – INPA Current address: Laboratório de Zoologia, Programa de Pós-Graduação em Biodiversidade e Conservação, Universidade Federal do Pará – UFPA – Campus Universitário de Altamira. Rua José Porfírio, 2115, São Sebastião, CEP 68372-040, Altamira, PA, Brazil. Corresponding author: [email protected] Received on 1 December 2014. Accepted on 22 January 2015. ABSTRACT: Here we review the distribution of the Crested Eagle (Morphnus guianensis) in the Americas, and based on the Brazilian Harpy Eagle Conservation Program (PCGR) database, literature, online databases, zoos, wild and museum records, we provide an updated distribution map with 37 points outside the IUCN map; 16 were recorded close to the border of the map (up to 40 km), and do not expand or contribute to the distribution map. Far from the border (>40 km) we found 21 records, contributing to an expansion of the known range and habitat. At the northernmost extreme of distribution, the range was extended to southern Mexico; in Nicaragua, the range extension was farther south in the north, and two records extend the range to the southern border with Costa Rica. In Colombia, an old specimen is located between Darien Peninsula and the Perija Mountains. In Brazil a record from the ecotone between Cerrado and Gallery Forest, and another in an upland remnant of Atlantic Rainforest, expands the range towards central and southeastern Brazil, and to the Northeast, old records could expand the Atlantic Rainforest distribution towards the interior. KEY-WORDS: Conservation, Falconiformes, Neotropics, Raptor. Included in the order Accipitriformes, the Crested Eagle, between soybean fields and forest fragments (Lees et al. Morphnus guianensis, and Harpy Eagle, Harpia harpyja, 2013), and also has been found in forest mosaics within are the Neotropical representatives of the subfamily the Gran Sabana, Venezuela (Crease & Tepedino, 2013). Harpiinae (CBRO 2014). The members of Harpiinae In Brazil, the Crested Eagle is known as “Uiraçu- can be distinguished from other Accipitridae by large falso” or “Gavião-real-falso” [=False Harpy Eagle] (CBRO sizes and weight, length and wingspan, being traditional 2014). According to the literature, adults reach up to 89 inhabitants of humid tropical forests (Ferguson-Lees & cm in total length, wingspan up to 154 cm and weight up Christie 2001), preying on mid-sized mammals such as to 3 kg; females are larger and more robust (Bierregaard sloths, monkeys, and rodents (Ferguson-Lees & Christie 1994; Ferguson-Lees & Christie 2001). The head is 2001; Aguiar-Silva et al. 2014. grayish with a crest tipped with a single larger medial The species occurs in low density, and is deemed rare black feather. In general the color pattern resembles that to very rare in all areas of distribution, mainly inhabiting of Harpy Eagle, however the latter always has a black chest Neotropical dense humid forest, mountain slopes, band (Sick 1997; Ferguson-Lees & Christie 2001). 
Adults coastal forest, from sea level to 2200 m; it is considered most commonly are pale-morph, but may occur in two resident (Brown & Amadon 1968; Hilty & Brown 1986; melanistic forms, dark-morph and extreme-dark-morph. Bierregaard 1994; Howell & Webb 1995; Ridgely & During its 4-year sequence to attain adult plumage, birds Greenfield 2001; Ferguson-Lees & Christie 2001; Hilty become darker over time (Bierregaard 1994; Ferguson- 2003; Hennessey et al. 2003; Jones & Komar 2007). It Lees & Christie 2001). can also occur in forest patches and has been recorded Mauduyt (1782), described the Aigle (petit) de nesting in a Brazilian forest fragment, located in a mosaic La Guiane in a systematic and comparative way, with Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti specimens coming from Cayenne [=French Guiana]. Estadual do Amazonas”. Records were subdivided into Based on this work, in 1800 Daudin described Falco New (after 2000), Old (prior to 2000), Nests, and Rescued guianensis using Linnean nomenclature. Later, Falco individuals by IBAMA overlaid with distribution limits was made a synonym of Morphnus by Dumont (1816), provided in the IUCN map (IUCN 2014). For published giving rise to the monotypic species Morphnus guianensis. records, whenever possible, we used the exact date, and In 1879, Gurney described Morphnus taeniatus as a full when the article did not provide this information, we species, later synonymized because it was just a dark- used the publication date. Not all records had accurate morph (Lehmann 1943). locations. When this information was available, the exact The Crested Eagle has a wide distribution over locality was included on the map, following the exact Central and South America (Ferguson-Lees & Christie geographical coordinates. For those records with no exact 2001), however records are generally casual or by chance, geographic coordinates, we used coordinates associated being considered rarer than the Harpy Eagle in some with the geographical center of the municipality where regions where they coexist (Jones & Komar 2006). they were obtained. Seven museum specimens without More than 250 years after its description, few collecting dates were assumed as Old records on the surveys include the species in their lists, and studies of map (prior to 2000). All records are presented in the its biology and ecology are rare, therefore understanding Appendices, but some were not included on the map its distribution is the goal of this review. The Crested because they overlapped, or had little accuracy. Eagle is a top predator, occurring in low densities, and is considered a Vulnerable (IBAMA 2014) and Near- Collections and Museums threatened (IUCN 2014) species, due to habitat loss and hunting. The knowledge of its current distribution and Since 2005, the “Programa de Conservação do Gavião- ecological requirements could contribute as a basis for real” PCGR-INPA [=Brazilian Harpy Eagle Conservation further conservation policies. Program; http://gaviaoreal.inpa.gov.br] visited collections Currently, the most widely used distribution maps as researching specimens of Harpy Eagle, Crested Eagle, and a basis for conservation plans and determining the threat Hawk-Eagles, to build a distributional database. 
Eight status of the vast majority of organisms are provided by the Brazilian collections housed specimens of Crested Eagle: International Union for Conservation of Nature (IUCN Museu Paraense Emilio Goeldi (Belém, Pará – MPEG), 2014). However, recent records, very old ones, and those Instituto Nacional de Pesquisas da Amazôonia – Coleção from gray literature or from birdwatchers, photographers Ornitólogica (Manaus, Amazonas – INPA), Museu de or videographers are lacking consideration. Our goal is Zoologia da Universidade de São Paulo (São Paulo, São to review the distribution of the Crested Eagle, including Paulo – MZUSP), Museu de História Natural de Taubaté new records, particularly for Brazil, which holds the (Taubaté, São Paulo – MHNT), Universidade Federal de largest continuous forests in the continent, and produce Santa Catarina – Coleção Ornitológica (Florianópolis, an updated map of its occurrence. Santa Catarina – UFSC), Museu Frei Miguel (Luzerna, Santa Catarina), Museu de Biologia Prof. Mello Leitão (Santa Teresa, Espírito Santo – MBML), as well as small MATERIAL AND METHODS private collections as tourist exhibits, such as Museu do Índio (Florianópolis, Santa Catarina). Data from two The review follows the format of the database of the collections, Museu Sete Quedas (Pato Bragado, Paraná) Global Raptor Information Network - GRIN (2013) for and Museu da Fauna (Rio de Janeiro, Rio de Janeiro, closed all countries. It is augmented with more details for the in 1983 and its collection tranferred to Museu Nacional states of Brazil, old published records, information from in 1993), were taken only from literature describing their the Brazilian Harpy Eagle Conservation Program (PCGR) holdings. Twelve collections outside Brazil had Crested database and online databases such as ORNIS, IBC and Eagle specimens, and data were accessed directly from AVECOL, gray literature (such as Instituto Brasileiro the institution`s website or from websites that replicate de Meio Ambiente de Rescursos Naturais Renovaveis information from different collections, such as ORNIS, (IBAMA) [=Brazilian Environmental Agency] reports), where we accessed the collections of the Academy birdwatcher reports, lodge lists, unpublished reports, of Natural Sciences of Philadelphia (Philadelphia, photographs, and recently published studies.Those sources Pennsylvania – ANSP); the United States National where the indication of the distribution of the species was Musem (Washington, D. C. – USNM); Field Museum very broad and poorly defined (for example, no specific of Natural History (University of Chicago, Chicago, localities mentioned) were not used in our final map. Illinois – FMNH); Louisiana State University Museum of The final map was created using the ArcGIS software Zoology (Baton Rouge, Louisiana – LSUMZ); Museum of at the “Laboratório de Agrimensura da Universidade Comparative Zoology (Harvard University, Cambridge, Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Massachusetts – MCZ); American Museum of Natural obtained for Brazil and 140 records for Mexico, Central History (New York, New York – AMNH); Royal Ontario America and other South American countries. 
From the Museum (Toronto, Canada – ROM); Carnegie Museum total, 45 records did not meet criteria and were rejected of Natural History (Pittsburgh, Pennsylvania – CM); from our map when they did not have known origin (zoo Los Angeles County Museum of Natural History (Los or museum specimens), were repetitive (different years Angeles, California – LACM); Western Foundation of at the same point), or the literature listed only “general Vertebrate Zoology (Camarillo, California – WFVZ), occurrence” (Appendices 1 and 2). and the Oklahoma Natural History Museum (Oklahoma Listed by source, 156 records are from published City, Oklahoma – OMNH). Specimens with imprecise literature in articles and books, 45 are records from information about the collection site were not included museums and collections (ORNIS database), and 17 on the map (Appendices 1 and 2). are records from our PCGR Database. The remaining 51 records were obtained from online photo and sound Rescued and Captive Birds in Brazil websites and personal communications (Appendices 1 and 2). For the location of individuals rescued by wildlife Listed by date, the records spanned 18982014. authorities in Brazil (IBAMA), one point for the location One hundred and thirty records are Old (before 2000), of each bird’s origin was plotted on the map, with the 113 are New (after 2000), and 26 records have no precise date of rescue included only in the text. For individuals date (Appendices 1 and 2). A total of 37 records were at conservation centers and zoos, we inserted a point placed outside the IUCN map, and are highlighted in on the map only if they had information on the origin/ bold in the appendices (Appendices 1 and 2). capture. Current or past individuals of Crested Eagle were at: Zoológico de São Paulo – São Paulo; Zoológico Review of the distribution of the Crested Eagle do Centro de Instrução e Guerra na Selva – CIGS, outside Brazil Manaus, Amazonas; Zooparque de Itatiba – Itatiba, São Paulo; Zoológico Municipal Dois Irmãos – Recife, Of the 140 records obtained for Crested Eagle in Mexico, Pernambuco; and Criadouro Conservacionista – CRAX Central and South America, excluding Brazil, 96 were in Belo Horizonte, Minas Gerais (Appendix 1). sourced from published literature in books or articles, 27 came from records in museums and collections, and Online Databases 17 were obtained from online databases, recordings of vocalizations and photos from personal archives. From Open access online databases were also consulted. Photo those records, 30 are located outside the distribution and sound fi les and videos were obtained from the map provided by IUCN (Figure 1), enlarging the area of following websites providing both records and accession occurrence to southern Mexico, to the north of Nicaragua numbers: www.wikiaves.com.br (WA), www.xeno-canto. and to its southern border, and in Colombia, to include org (XC), and Macaulay Library (MAC) Cornell Lab. of the region between Darien Peninsula and the Perija Ornithology, Ithaca, New York (http://macaulaylibrary. Mountains, and all are highlighted in bold in Appendix 2. org); or less scientific sites, such as the Internet Bir d Collection (IBC), and stock photos at the Visual Resources North America for Ornithology (VIREO) at the Academy of Natural Mexico – The first visual recor d in the country (a Sciences, Philadelphia (http://vireo.acnatsci.org). Some soaring adult), occurred in 1992 in Campeche (J. Sutter records came from private photo collections (Flickr); and J. M. 
Diaz cited in Whitacre et al. 2012). However the authors were asked for permission to use the records the first documented recor d was a photo from 2004, and in some cases provided additional data. Despite at the Biosphere Reserve of Montes Azules, in Chiapas the chance of mistaken indetifications between Crested (Grosselet & Gutierrez-Carbonel, 2007). Whitacre et al. and Harpy eagles, some unpublished sight records and (2012) mentioned the probable occurrence in Chiapas sound recordings of Brazilian professional and amateaur and Quintana Roo. ornithologists alike were included in the distribution map (Appendices 1 and 2). Central America Crested Eagle can be considered rarer than the Harpy Eagle in regions where they coexist, according to RESULTS Jones & Komar (2006). Belize – The first recor d occurred in 1995, at Orange A total of 269 Crested Eagle records were found from Walk (Hall 1995), and is rarely seen in Toledo and Orange Mexico to Argentina. Listed by locality, 129 records were Walk, Cayo (Jones et al. 2000). A probable record was Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti FIGURE 1. Localities where the Crested Eagle (Morphnus guianensis) has been recorded until 2014 (see Appendices 1 and 2). White circles represent recent records, after the year 2000; black circles represent records prior to 2000; red circles represent nests; yellow circles represent individuals rescued by IBAMA. The white water mark denotes the range contained in the IUCN (2014) map. made in December 2006 in the southeast at Hickatee in Quebrada Kahkatingni, Patuca River, in June 1999 Lodge, Punta Gorda, Toledo (Jones & Komar 2007). (GRIN 2013). Bonta & Anderson (2002) consider it a Guatemala – The first recor d occurred in 1978, rare and resident species. The MCZ listed a skin from La reported by Ellis & Whaley (1981), in Flores (Petén). Ceiba collected in 1902. Between 1994 and 1995 an active nest was found in Nicaragua – An individual was seen in March Tikal National Park, also Petén (Whitacre et al. 2012) 2001, near the community of Hormigero, Cerro Sasiaya, and observations of a young bird were made in the same in the Bosawas Biosphere Reserve (GRIN 2013). Two place, with a juvenile reported (Grijalva & Eisermann other individuals were seen in May 1994 and May 1999 2006). Eisermann & Avendaño (2007) considered the (Múnera-Roldán et al., 2007) in the south, at Bartola species resident and restricted to low-lying areas in the Reserve; another individual was seen in 2001 in the “North Atlantic region. The AMNH has a specimen collected in Atlantic region” of the country, in the Alamikangban 1978 in Flores, Petén, and ROM has a specimen collected community, Prinzapolka, by Kjeldsen (2005). in 1966, in the same location. Costa Rica – Birds were seen in the regions of Honduras – Bangs (1903) reported collection Sarapiquí and Osa Peninsula (Stiles & Skutch 1989). of a young male at La Ceiba, and Monroe (1968) of There is a recor d of Carriker (1910) in Cuabre, near the another individual in San Pedro Sula. A juvenile was Sacsola River (probably Sixaola). Slud (1964) recorded photographed by Russell Thorstrom in flooded forest an individual at Cañas Gordas, near Panama. 
There were Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti also records at Finca La Selva and Rara Avis Jungle Lodge, least four specimens collected in Leticia, Caquetá and around Braulio Carrillo National Park (Jones 2004). An Chocó (Salaqui River and Juradó River). ANSP has a adult dark-morph was photographed in 29 April 2011, specimen listed from Morelia, Caquetá, undated. USNM in Crucitas, Curtis de San Carlos, Alajuela (photo by R. maintains three complete specimens preserved in alcohol: Vargas; P. Camacho, pers. comm.) There is a photograph one from Truandó, one from the Sinu River, in Rescues, of a subadult pale-morph in 2005 from Tortuguero near Córdoba, 1949, and one from Uraba Gulf in Acandi, the Nicaraguan border (Jones & Komar 2006), including Chocó, 1949. FMNH maintains two specimens collected a pale and a dark-morph pair photographed in 2005 (G. in Chocó in 1940, from Jampavado River and from Ocklind, pers. file); besides there are two recent records, Jurado River, plus one from Cuturu, Antioquia, in 1947. one from March 2013 (in the park, photo and record by Ecuador – Very rare, but has been recorded in the R. Osborne and E. Miranda) and January 2014 (Caño Pichincha region and at the base of the Andes (Ridgely Harold, photo by C. C. Obando), both from Tortuguero & Greenfield 2001). Muñiz-Lópes et al. (2007) cite National Park (P. Camacho pers. comm., Fundacíon its occurrence in Esmeraldas Province. In 2007, a Rapaces de Costa Rica Database). The CM has a specimen pale-morph individual was seen preying on a snake collected in Cuabre, Límon, in 1904. (Spilotes pullatus) in Cuyabeno, Sucumbios Province (L. Panama – It is considered very rare, occuring in Vaincenbacher, IBC, 2007). In the Wildlife Center of continuous forest and on the Caribbean slope, from Napo, the species has been seen by birdwatchers and is southeast to east (Wetmore 1965; Ridgely & Gwynne listed for the region (http://www.napowildlifecenter.com), 1989). There are occasional recor ds from the southwestern plus a photographic record of an individual dark-morph Azuero Peninsula (Cerro Hoya region), in the provinces in 2008 (T. Cloudman http://www.hargrove.org/2008/ of Panama (eastern region) and Darién; those from images/2008crestedEagle-edited-jpg). In 2014 a nest was Chiriquí and Coiba Island are unsubstantiated; there is a found in the Cuyabeno Reserve (R. Muniz-Lopes pers. photographic record near Achiote Road in January 1975 comm.). In the Quito Zoo there is a female dark-morph (reported by W. Cornwell) (Ridgely & Gwynne 1989). of unknown origin (Montalvo & Montalvo 2012). Kiff et al. (1989) notes an egg obtained in the wild from Bolivia – Pearman (1994 In GRIN 2013) records a nest located on the Chiquita River, central Panama, the species for the first time in Beni. Then it was seen and passed to the CEPEPE [=Center for Propagation of in Noel Kempff National Park (Bates et al. 1998) and Endangered Panamanian Species]. Vargas et al. (2009) subsequently in La Paz and Santa Cruz de la Sierra reported an adult Crested Eagle feeding a nestling Harpy (Hennessey et al. 2003). In 2005 it was seen in Caparú Eagle in Quintin Darién. A young Crested Eagle was Biological Station (Vidoz et al. 2010). reported from San Lorenzo National Park, Colón, in Peru – Considered rare, it occurs in the eastern March 2007 by Jones & Komar (2007). 
Two records region of the Andes (Clements & Shany 2001). Kiff et were made in Darién National Park: an adult dark- al. (1989) cite the capture of a female from a nest in morph was filmed at the Cana Camp in May 2010 (E. Amazonas Department in 1978 (this individual became Groenewoud, IBC), and a female was sound-recorded part of the breeding stock at the Oklahoma City Zoo). by A. Spencer near her nest in March 2013, in Rancho It is listed from the Tambopata-Candamo Reserve Frio. The MAC has a sound recor ding made in 1981, of (Parker et al. 1994), where they have recorded breeding a dark-morph female perched in a tree, next to Pipeline activity since 2002 (Raine 2007). In 1977 and 2006, Rd., northwest of Gamboa, in the Canal Zone (van den two different nests with nestlings were photographed Berg 1981). The MCZ holds four old specimens from in Madre de Dios, near Manu Lodge (R. Fabbri, pers. Panama: Changuinola (1928), Perme, in Darién (1929), comm.). In 2001 it was seen preying on small primates, Banana River (1928) and Puerto Obaldia (1930). The at the Quebrada Blanco Biological Station (Vasquez & FMNH has a pair collected in San Blas, Puerto Obaldia, Heymann 2001), and in June 2012, a young on the in 1935. The AMNH has one specimen from Barro ground was seen and photographed (Flickr) in National Colorado Island from 1936, and two from Tapalisa, Park Pacaya-Samiria, Loreto (A. Morales, personal fi le). eastern Panama, from 1915. The species is listed by Foss & Huanaquiri on a birdlist at a forest reserve in Loreto (Tahuayo River – http:// South America thinkjungle.com/amazon-jungle-tours/tahuayo-lodge). Colombia – Considered rare by Hilty & Brown The Centro de Reproducion Huayco, in Lima, owned (1986), these authors cite localities of Chocó, Baudó by Jose Antonio Otero, housed six individuals with no mountains, Achicayá and Sinu valleys, Córdoba and Perijá record of origin. LSUMZ maintains two specimens mountains, Guajíra (Carraipia), the eastern region of the collected from Amazonas Department, two specimens Andes, and west of Meta (Villavivencio) and Caquetá. from Loreto Department, and a feather from the same Márquez et al. (2005) provides records in museums, at location, all without exact dates. Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Venezuela – Rarely sighted, occurs widely over Park, of an individual in flight in December 2003 (N. lowland forests and mountains, with records from Lopes, In Del Castillo & Clay 2004). Northern Orinoco, Caura River, Maracaibo Basin, Perijá Mts., Zulia, Mérida, Lara, Amazonas, Bolívar, and Review of the distribution of the Crested Eagle Margarita Island (Hilty 2003). In 2006 it was recorded in in Brazil Obispos, Barinas (Vargas et al. 2009), and in 2011, a nest was found and monitored in the Gran Sabana, Bolivar, Of the 130 records obtained for Crested Eagle from with documenting photos by Crease & Tepedino (2013). Brazil, 60 were sourced from published literature in Guyana – Considered resident and scarce, occupying books or articles, 18 came from records in museums and lowland forest environments (Braun et al. 2000). Pickles et collections, and 34 were obtained from online databases, al. (2011) recorded the species in the Chief Rewa Reserve, recordings of vocalizations and photos from personal in Rupununí, in the south. 
Two specimens housed at archives, and 18 came from the PCGR Database. From ROM are from this same region, upper Takutu and upper these, seven records are located outside the distribution Essequibos, Kwitara River, Rupununi, both from 1964. map provided by IUCN (Figure 1), enlarging the area of AMNH keeps a specimen from Kalacoon, undated. occurrence to the northeast, southeast and south, and are French Guiana – It is widely distributed in highlighted in bold in Appendix 1. forest areas, and is more common than Harpy Eagle in The majority of the records (70%) were outside disturbed forests, however, it is not significantly more conservation units, 27 records (21%) were from National common in primary forest (Thiollay 2007). Julliot (1994) Conservation Reserves and 12 (8%) were from County, reports predation by the Crested Eagle on a young spider Municipal or Private Reserves. monkey (Atteles paniscus), at Nouragues Station, which For each state we include the Brazilian region (South, took place in 1992. In August 2011, an extreme-dark- Southeastern, Northeastern, Central and Northern morph individual was photographed on the banks of Brazil) and abbreviations for the biome occupied (ARF- the Approuague River (J. Tascon, pers. comm.). The Atlantic Rainforest, AMZ-Amazonian Rainforest, and Macouria Zoo in Guyana maintains a live specimen, ECO – Ecotone Biome Cerrado/Gallery Forest). dark-morph, possibly a male, with unknown origin Rio Grande do Sul (South-ARF) – There are only (Maxcobigo, In IBC). three historical records. The oldest comes from I hering Suriname – Apparently a rare bird in primary (1899), near the municipality of Taquara; its occurrence forest, sometimes seen wandering into areas of the coast is also suggested at the Turvo River Reserve (Belton (Haverschmidt & Mees 1994). Possibly a resident, but 1984). Bencke (1997) provides the last record from Santa no reproductive activity has been noted in the country. Cruz do Sul in 1920. Considered very rare in the state by On the list of birds from Raleigh Falls-Voltzberg Sick (1997), it is currently classified as ‘Pro bably Extinct’ Nature Reserve, Sipaliwini District, where it was seen (Marques et al. 2002). in a predation attempt on Guianan Cock-of-the-rock Santa Catarina (South-ARF) – There are five (Rupicola rupicola) (Trail 1987). records of the species: in 1977 it was seen in Jordão Baixo Argentina – Species considered to be resident (Siderópolis) by Albuquerque (1983); Rosário (1996) (Mazar-Barnett & Pearman 2001). Pearman (2001) provides a record in Siderópolis; and another record in considers the species a casual visitor to Missiones, 2005, in Aiúre, in the municipality of Grão-Pará, in and reports the observation of an adult in El Piñalito the foothills of Serra Geral (Albuquerque et al. 2006). Provincial Park. There are earlier recor ds in Santa Ana Records from museums in Santa Catarina are unreliable, (Bertoni 1913) and at Iguazú National Park, a pair since they come from private collections, without displaying, recorded in September 1980 by Rumboll & scientific identification of locality. There is a specimen Straneck (In Olrog 1985). in Frei Miguel Museum, in Luzerna, from the locality of Paraguay – Del Castillo & Clay (2004) consider Joinville, prepared in 1926 (Favretto 2008). Recently a the species rare, but reproductively active in Alto Paraná. 
specimen at the Universidade de Santa Catarina (UFSC) There are two visual recor ds during a survey conducted was analyzed, which had been previously classified as a in the San Rafael del Parana National Park, in Itapúa Harpy Eagle. We confirmed t hat it was a Crested Eagle. (Madroño-Nieto et al. 1997). The first confirmed recor d This specimen was collected between 1965-1970 in occured in 2002, at the same place, where a specimen the municipality of Lontras, by G. Knolle, also for his was captured and donated to the Itaipu Binacional Zoo, private collection, and was subsequently donated to the which survived until 2002 (Del Castillo & Clay 2004, Ornithological Collection of UFSC. There is an adult Museum of Natural History of Itaipu Binational). mounted specimen, pale-morph, sex undetermined, Another record also occurred near the Aurora Colony, in unknown origin, in the private collection of Museu do the same region of the San Rafael del Parana National Índio, in Florianópolis. Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Paraná (South-ARF) – The occurrence of the The remaining Brazilian Northeastern states species was registered only at Marechal Cândido Rondon generally do not have records of the species, whether in 1964. This is a mounted specimen, housed in the Sete historical or recent ones; however, Dois Irmãos Zoo in Quedas Museum, and similar to the previous cases, it was Recife, Pernambuco, exhibits an adult pale-morph, of from a private collection (Straube & Urben-Filho 2010). unknown origin. The species possibly occurs at Iguaçu National Park, Mato Grosso do Sul (Central Brazil-ECO) - P. where well-preserved forest still remains (Forrester 1993). Scherer and C. Ribas report the species in April 2001, São Paulo (Southeastern-ARF) – The first citation flying over a road near the Private Natural Reserve found was Ihering (1898), which dealt with the collecting (RPPN) Buraco das Araras, in the municipality of Jardim a Harpy Eagle and likely (but not confirmed) occurrence (In Pivatto et al. 2006). This region is savanna (Cerrado) of the Crested Eagle in the state. More recently, it with Gallery Forest. was recorded in large reserves protected by the State Mato Grosso (Central Brazil-AMZ) – There are government, known as State Parks (P. E.): twice in nine records for the species. The oldest recor d comes from Jacupiranga P.E. (1990 and 1992), once in the Morro do Chapada dos Parecis, in Juruena (Sick 1997). In 2005, an Diabo P.E. (1992) and, once in Intervales P.E. (1995). In individual was photographed along the Cristalino River the first two recor ds the birds were soaring, and in the (A. Lees, pers. file), and in 2006, two individuals were last, landing (Galetti et al. 1997). In both cases only visual seen and photographed on the CEPLAC Farm (Executive records were made. At the MZUSP Collection, there is Board of the Cocoa Crop Plan), both in Alta Floresta a specimen from Apiaí, collected in 1900. MHNT has (A. Lees, pers. comm.). In 2011 one individual was a mounted specimen displayed, however of unknown photographed at the Jardins da Amazonia Inn, in São José procedence. In the Itatiba Zooparque, there is a pale and do Rio Claro (E. Endrigo, pers. comm.), and in 2012, dark-morph pair on display, both with unknown origin. 
a pair was recorded responding aggressively to playback, Rio de Janeiro (Southeastern-ARF) – There is only in the same locality, where possibly there was a nest (M. one historical record, from Pinto (1964) for the locality of Pádua, pers. comm.); in October 2012 in the Cristalino Cantagalo, and from this same location, there is a skin in RPPN, an adult was drying itself in the canopy, after a the collection of Johann Natterer (Hellmayr & Conover heavy rain (J. Silveira, pers. comm.). In addition to these, 1949). At the National Museum of Rio de Janeiro there are photographic records were also made in 2012 in the five specimens listed, however all are of unknown origin. municipality of Comodoro, (D. Mota and V. Castro, pers. Espírito Santo (Southeastern-ARF) - There are two comm.). In September 2012, a nest with a nestling (45 records: an observation at Sooretama Biological Reserve, months old) was located in the municipality of Paranaíta, Sooretama, by Parker & Goerck (1997) and another in at the Ouro Reunido Farm (P. Bernardo, pers. comm. and Itaúnas State Park (Petroff 2001). At the Museu Biológico D. Oliveira, pers. comm.). At MZUSP, one specimen is Mello Leitão, Santa Teresa (MBML) there is one mounted listed from the Ipê Farm, in Vila Rica. CETAS-IBAMA specimen of unknown origin and without registration pre-release facility in Guarantã do Norte is housing a live number. young female pale-morph from the municipality of Novo Minas Gerais (Southeastern-ARF) – There are two Progresso, PA, currently still being held. records. The first recor d is listed on the state list (Mattos et Rondonia (North-AMZ) – There are nine recor ds. al. 1993), as having been seen in Mata Escura Biological Between 1987 and 1988, the Crested Eagle has been Reserve, in Jequitinhonha (T. Mattos, pers. comm). The registered at the Cachoeira Nazaré, close to the Ji- second record occurred in the Caparaó National Park, in Paraná River by Stotz et al. (1997). In 2003, Olmos Alto Caparaó, in 1997, in which two individuals were et al. (2011) recorded three individuals in Serra Cutia, seen flying, which was probably a pair (Zorzin et al. 2006). between the municipalities of Guajará-Mirim and Costa CRAX maintains a live female dark-morph, previously Marques, besides having verified t he existence of native paired with a male pale-morph borrowed from the São craftsmanship using feathers of the bird. In January 2010 Paulo Zoo, but which died later (individual donated to an adult individual was seen in Chupinguaia (K. Okada, Universidade Federal de Minas Gerais-UFMG, however pers. com.); at the same locality, in September 2010, a it has not yet been taxidermized), both of unknown nest was found close to the previous record, with an active origin. Recently CRAX received another dark-morph pair and a nestling (M. Canuto, pers. file). The nest was female, from Pará (ICMBio). visited in October by the PCGR Team, who found it on Bahia (Northeastern-ARF) – There are two recor ds: the ground, because the tree had fallen. In January 2012, Willis & Oniki (2003) comment on an aural record in at the same locality, an adult was seen during an avifauna Porto Seguro, in 1974. There is also a visual recor d in the inventory (R. Hippolito, pers. comm.), possibly one of the municipality of Barrolândia, municipality of Belmonte, members of the resident pair. In 2011, next to the Ramal 1995 (Galetti et al. 1997). 
In 2011, next to the Ramal do Rio das Garças, in Porto Velho, an individual was photographed (F. Pereira). In March 2012, in the Guaporé Biological Reserve, between São Francisco do Guaporé and Alta Floresta D'Oeste, a dark-morph individual was recorded calling next to a group of small primates (S. Alves, ICMBio/ReBio Guaporé, pers. comm. and file).

Acre (North-AMZ) – There are seven records. The first citation comes from the Catuaba Experimental Farm – Universidade Federal do Acre-UFAC, near Rio Branco, between 1994 and 2004 (Rasmussen et al. 2005). The second record comes from the Alto Juruá Extractive Reserve, in Marechal Thaumaturgo (Whittaker et al. 2002). In 2008, DeLuca (2012) reported, through interviews with local people, the presence of Harpy Eagle and Crested Eagle at the Chico Mendes Extractive Reserve and environs, comprising the municipalities of Assis Brasil, Brasiléia and Xapuri. In addition to this, and confirmed with a photograph, an adult dark-morph was seen in the Rio Croa Community, municipality of Cruzeiro do Sul, in August 2012 (J. Filho), and Guilherme (2012) cites the species as resident throughout the state, using bamboo forests and rainforests with palm trees. In April 2013, a young pale-morph was seen flying in the municipality of Porto Acre (L. Rondini and T. Nascimento, pers. comm.). In 2009 a young specimen was rescued near Rio Branco and forwarded to CETAS-IBAMA of Rio Branco, and later transferred to permanent captivity, but we were not able to determine if it is still alive.

Amazonas (North-AMZ) – There are 16 recognized records/locations. The first record cites the occurrence of the species in "Barra do Rio Negro" [=Manaus] and Manaqueri [=Manaquiri] Lake (Manacapuru municipality) (Von Pelzeln 1871), localities also repeated by Pinto (1964). In this same state, one of the best-known papers about the species, Bierregaard (1984), described the nesting of a pair, the male pale-morph and the female dark-morph, at the ZF-3 Reserve, 60 km north of Manaus (Gavião Camp, PDBFF Biological Dynamics of Forest Fragments Project), but the pair has not been registered subsequently. In 2004 an adult was observed at the Mamirauá Sustainable Development Reserve (RDS), in Tefé, Amazonas state, in várzea forest (Cintra et al. 2007 and R. Cintra, pers. comm.). Olmos et al. (2006) also cite seeing two individuals resting and feeding in the municipality of Alvarães. Cohn-Haft et al. (1997) cite its occurrence in large forest fragments of the PDBFF Project (Esteio, Dimona and Porto Alegre Farms) near Manaus. In the Adolpho Ducke Forest Reserve, Manaus, in 2005, an individual was spotted (J. Valsko, pers. comm.) and recorded vocalizing (W. Magnusson, pers. comm. and file at the PCGR Database). There is also a record of the species in the Juami-Jupará Ecological Station, in 2006 (T.M.S., in Soares et al. 2008: 76). In July 2009 an adult pale-morph was photographed in the Anavilhanas National Park, perched on treetops in flooded forest (S. V. Wilson, pers. comm. and photo), and in June 2012, at the same location, a pair was seen carrying prey, probably to a nest (A. Whittaker, pers. comm.); however, on a visit in 2013 the nest was not found (FBRG). On the banks of the Roosevelt River in Novo Aripuanã, on 2 July 2007, a pale-morph adult was seen in the canopy of a tree (B. Whitney, video IBC & pers. comm.), and on 12 September 2007 a pair, and on 17 September a subadult, were seen. Whittaker (2009) recorded an individual on the banks of the same river, and another around the lodge. In 2008, it was seen on the Urucu River, where there was also a nest (not studied by PCGR) (Whittaker et al. 2008). In 2011, an individual was photographed in Tapauá, in Nascentes do Lago Jari National Park (L. Condrati, pers. comm., ICMBio). In April 2013, a sub-adult pale-morph was seen on the ZF-2 Road, 10 km from the Cuieiras Reserve nest known in the locality, interacting aggressively with a flock of Red-throated Caracaras (Ibycter americanus); possibly this is a nestling from that nest, dispersed for two or three years (whitish general plumage, with wing coverts still grayish) (FBRG). In Amazonas, PCGR monitors four nests of this species, one in Manaus and three in the nearby town of Manacapuru. In Manaus, the nest is located in the Cuieiras Reserve (INPA); it was found by members of the TEAM Project – INPA in 2006 and has been monitored by PCGR since then (in Soares et al. 2008). The nests in Manacapuru, located on Cururu Lake, in a rural area of the municipality, were found and reported by local residents and have been monitored since 2007. Recently another nest was discovered, in August 2013, which is being monitored in the Amanã RDS (Sustainable Development Reserve), under the auspices of the Mamirauá Sustainable Development Institute (IDSM). In addition to these in vivo records, the Instituto Nacional de Pesquisas da Amazônia (INPA) Ornithological Collection holds a 1982 pale-morph female specimen from the ZF-3 Reserve. Listed in MZUSP are a female, from October 1902, and a male, from January 1937, from the Juruá River (for our map assumed as near Carauari), and another female from Manacapuru, dated October 1936. FMNH holds two skins of females: one from Lábrea, on the Purus River, 1935, and another from Lago do Baptista, in Itacoatiara, 1937. CM maintains a mounted specimen, collected in Tonantins, on the left bank of the Solimões River, in 1923. The CIGS Zoo (Manaus) kept a pale-morph male, however it died on 17 July 2012 and was not preserved.

Roraima (North-AMZ) – The species was recorded at the Maracá Ecological Station by Moskovits et al. (1985). In 2004, an adult was photographed in the Viruá National Park, in Caracaraí (R. Czaban, IBAMA). In 2011, during a MZUSP expedition, an adult pale-morph was recorded near the community of Caicubí, on the Jufari River, near Caracaraí (L. F. Silveira, pers. comm.).
Pará (North-AMZ) – There are 28 recognized localities/records for the species. The first reference is prior to the date of its description, Daudin (1800): in the "Memórias de Dom Lourenço Álvares Roxo de Potflis", from 1752, translated and analyzed by Teixeira et al. (2010), the author makes a detailed description of the Harpy Eagle and then describes the "ouyrà ouassù merì ou ouassù peua", a bird very similar to the Harpy Eagle except for its more slender appearance (Teixeira et al. 2010). In the description he is obviously describing the Crested Eagle; however, the "Memórias" is not a scientific paper. Among the most recent records, the species was seen in 2000 on Taboca Island, Xingu River, and in 2008, during an aquatic bird survey on the Xingu River, both in forest (Henriques et al. 2008 and L. M. P. Henriques, pers. comm.); in September 2013, an adult pale-morph was photographed in Vitória do Xingu (V. Castro, pers. comm.). In February 2014 an adult pale-morph was observed on the left bank of the Xingu River, in Brasil Novo, near Altamira (TMS, PCGR Database). The species has also been reported in the Tapajós National Forest by Henriques et al. (2003), in the Tapajós-Arapiuns Extractive Reserve by Peres et al. (2003) and in the municipality of Tailândia (Soares et al. 2008). Between 1998 and 2005, individuals were recorded in Tailândia, at the Agropalma Forest Reserve; between 2004 and 2006, in the municipality of Tomé-Açu, on the Cauaxi Farm; and between 2005 and 2007, also in Tailândia, on the Capim Farm (Portes et al. 2011). Throughout 2008, individuals were recorded in several forest reserves (Trombetas, Grão-Pará and Maicuru, where a pale-morph female was collected) and in the Faro State Forest (Aleixo et al. 2011). In Santana do Araguaia, at the Fartura Farm, the species was recorded between 2009 and 2010 (Somenzari et al. 2011). Between 2010 and 2011, the species was also recorded in Paragominas by Lees et al. (2012). In mid-2012 an adult was seen on the banks of the Tapajós River, in the municipality of Itaituba (G. Leite, pers. comm.). Also in mid-2012, a dark-morph nestling female was rescued by IBAMA-Marabá in the municipality of Tucuruí and delivered to the Parque Zoo Botânico VALE in Parauapebas-Carajás, and from there transferred to recovery at the CRAX Conservation Center (F. Martins, pers. comm., ICMBio, PCGR). In the same year, a second nestling, also a female, from an unknown nest in the municipality of Novo Progresso, is still being held at the IBAMA-CETAS pre-release facility of Guarantã do Norte. In 2010 a nest was recorded in Belterra municipality; it was found during an inventory of birds in the region and reported to PCGR, and we monitored it for a few months, since the nestling was approximately four months old and out of the nest (C. Andretti, pers. comm.); the record was subsequently published with details by Lees et al. (2013). In 2011 there was a record in Jacareacanga, near the Teles Pires River (C. Borges, pers. comm.). In 2012, another nest of the species was located in the municipality of Oriximiná during mining activities of Mineração Rio Norte; this nest is in an area of bauxite ore extraction within the Porto Trombetas National Forest and, following the recommendations of the PCGR and IBAMA-Oriximiná, the area will be maintained and protected. The Museu Paraense Emílio Goeldi holds a primary feather collected in 2000, from Altamira, Xingu River, and the skin of an adult pale-morph female from 2008, collected in Almeirim, in the Maicuru Biological Reserve, besides two skins of young males without provenance, and two specimens from the collection of the Museu Goeldi Zoo Botanical Park, one from 1916 and another from 1975. The FMNH keeps two skins from Piquiatuba, in Belterra, collected in 1937: a female and a male with enlarged gonads, possibly a pair in reproductive condition.

Amapá (North-AMZ) – There are four different records. In the first two, in 1994 and 2000, individuals were alone, resting and feeding, in the municipality of Serra do Navio (Olmos et al. 2006). The other two records were provided by Schunck et al. (2011), one in 2008 and another in 2010; these are from the Carajaí Extractive Reserve, in the municipality of Laranjal do Jari, western Amapá. CM maintains a skin of a specimen collected in 1918 on the upper Arucaua River, a tributary of the Uaca River, in Oiapoque.

Maranhão (North-AMZ) – The oldest record is that of a male from the mouth of the Flores River, Mearim River, municipality of Pedreiras, listed in the bird collection of the Museu da Fauna (No. 1576), from 1956 (Aguirre & Aldrighi 1983; the same specimen is listed under Rio de Janeiro). A slightly more recent record is of a nest found and monitored for a few days in November 1997, in the municipality of Buriticupu (Martínez 2008). More recently, in 2009, during mobbing by a mixed bird species flock after tape playback of Amazonian Pygmy-Owl (Glaucidium hardyi), a young Crested Eagle was attracted and registered in the municipality of Açailândia (F. Olmos and B. Lima, pers. comm.).

DISCUSSION

Up to 2006, M. guianensis had been recorded at nine Brazilian national and state reserves (ESEC, RESEX, FLONA, REBIO, PARNA, State Park) (Soares et al. 2008). Based on the compilation presented herein, the number of conservation areas of the same categories harboring stocks of M. guianensis increased to 27, three times that of the previous study. Besides this increase, we also added 11 private reserve localities.

Of 37 New records outside the IUCN map, 16 were recorded close to the border of the known range (up to 40 km); of those, 13 are Old records and three are New, and they do not expand or contribute to the distribution map, could be an artefact of the border between our datapoints and the IUCN map, or were provided only as general occurrences in the literature, and therefore will not be discussed in detail. Far from the border of the IUCN map (> 40 km), we found 21 records, contributing to an expansion of the known range; of these, 10 are New records and 11 are Old, and they are noted in bold face in the Appendices.
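The 40-km criterion above amounts to a simple point-to-border distance test. As an illustrative sketch only (not the procedure used in this study), the Python fragment below classifies a record against a range border digitised as a list of vertices, using great-circle distances; the coordinates, vertices and threshold in the example are hypothetical, and a real analysis would measure distance to the full IUCN range polygon in a GIS.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_record(record, border_vertices, threshold_km=40.0):
    # Label a (lat, lon) record by its distance to the nearest sampled
    # vertex of the range border: "near border" if within the threshold,
    # otherwise "range extension".
    d = min(haversine_km(record[0], record[1], v[0], v[1]) for v in border_vertices)
    return "near border" if d <= threshold_km else "range extension"

# Hypothetical example: three border vertices and one record roughly
# 120 km from the closest of them, hence well beyond the 40-km band.
border = [(-3.0, -60.0), (-4.0, -61.0), (-5.0, -62.0)]
print(classify_record((-3.5, -59.0), border))  # prints: range extension

Sampling the border as vertices is only an approximation; with a true polygon the distance would be taken to the nearest point on its boundary, and records falling inside the polygon would simply lie within the known range.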
In Mexico, at the northernmost extreme of the distribution, the range was extended north to southern Mexico, at Montes Azules, in 2004, with a photograph confirming the record (Grosselet & Gutierrez-Carbonell 2007) (No. 132, Appendix 2).

In Nicaragua, a visual record in Prinzapolka in 2001 (Kjeldsen 2005: 39) extended the range farther south than the IUCN map (No. 155, Appendix 2). In addition, there are currently two records at the southern border with Costa Rica, in 1994 and 1999 (Old records), published only in 2007 (Múnera-Roldán et al. 2007: 155), which extend the distribution of the Crested Eagle to the Bartola Reserve (Nos. 153 and 154, Appendix 2).

In Costa Rica, an Old record not considered on the IUCN map is a visual record by Slud (1964) at Cañas Gordas (No. 159, Appendix 2), located between the Sirena Biological Station (Corcovado National Park) on the Osa Peninsula (No. 157, Appendix 2) and the Caribbean lowland distribution.

In Colombia, an Old record not considered on the IUCN map is a skin from Cuturu (FMNH 190728; No. 212, Appendix 2), from 1947, located between the Darién Peninsula and the Perijá Mountains.

In Brazil, there is a New overhead visual record at Buraco das Araras Private Natural Reserve, Mato Grosso do Sul (No. 63, Appendix 1), in Pivatto et al. (2006), an ecotone between the Cerrado Biome and gallery forests; it could have been a dispersing or transient individual from the Bodoquena Forest, where Harpy Eagles have been recorded and which could possibly also support populations of Crested Eagle (PCGR Database). The Bodoquena Forest is an upland remnant of Atlantic Rainforest on the ecotone with semideciduous forest, previously connected with Morro do Diabo, São Paulo (Galetti et al. 1997) (No. 125, Appendix 1), an Old record not included on the IUCN map. In Minas Gerais, there are two records far from the IUCN border but near the Atlantic Rainforest, and the IUCN distribution could incorporate them: two visual records, in the Mata Escura Reserve before 1993 (Mattos et al. 1993) (No. 65, Appendix 1) and in Caparaó Park in 1997 (Zorzin et al. 2006) (No. 64, Appendix 1). In southern Brazil, in Rio Grande do Sul, there is a confirmed Old record, a specimen collected in 1920 in Santa Cruz do Sul and housed in a private collection (Bencke 1997) (No. 101, Appendix 1), and another collected in 1899 by Ihering (cited by Belton 1984 for Taquara, without details of the record).

Final Remarks

The publications most commonly cited for the study of birds of prey, Ferguson-Lees & Christie (2001) and Amadon & Bull (1988), and more recently Whitacre et al. (2012), indicate the distribution of the Crested Eagle in parts of North and Central America and throughout South America; however, the tool most commonly used for conservation strategies, the IUCN map, presents this distribution in a far more conservative form. Here it is possible to add the southernmost part of North America (Chiapas and Campeche), in southern Mexico, to the range of the Crested Eagle, as well as three records between the Amazon and Atlantic Rainforests and one record in a forest patch within the Gran Sabana, Venezuela.

Our review provides new data to be added to the compilation of the IUCN map and, from now on, a database for the production of an updated species distribution map, extending the known area of occurrence of the Crested Eagle. In southeastern Brazil, there are records previously not considered in the states of Minas Gerais and Bahia and in the central state of Mato Grosso do Sul; to the north, there are new habitat records in Roraima, Maranhão and Mato Grosso. Understanding the distribution of the Crested Eagle is indispensable for the efficient development of conservation strategies for the species and for a sound assessment of its risk of extinction.

The Amazon is one of the current strongholds where large vertebrate populations find conditions to persist for ongoing generations (Reed et al. 2003). The prospect there is of a future ideal for the conservation of large forest eagles (Crested Eagle, Harpy Eagle and Hawk-Eagle spp.). Predictive modelling scenarios for the coming decades, under current development policies, indicate an expansion of the "arc of deforestation" and a reduction of forest cover, together with climate change (Laurance et al. 2001; Salazar et al. 2007), which could seriously reduce its distribution.

Brazilian law requires the preservation of forest on private land (the Legal Forest Reserve) in the Amazon rainforest to be 80% of the property ("Lei N° 12.651, de 25 de maio de 2012"), and landowners who cut beyond the allowed percentage are supposed to replant with trees. The majority of Brazilian records come from private land; therefore, a large effort should be made to maintain Legal Forest Reserves, areas which are not protected in conservation reserves.

The Crested Eagle is listed as Near-Threatened (NT) by the IUCN (2014) and has already been proposed for the status of "Endangered" on the Brazilian List of Species Threatened with Extinction, currently under review (PCGR and ICMBio), since the habitat that holds the largest populations is also at great risk, particularly in the future. Owing to the current scarcity of knowledge on the distribution and ecology of the Crested Eagle, it is possible that populations living in poorly sampled or little-known areas could go extinct even before conservation programs or policies for the species's preservation are devised.

We believe that the greatest impacts on populations of the Crested Eagle are habitat loss and destruction, hunting pressure and, consequently, pressure on their prey and nesting areas, mainly along the southern edges of the Amazon Rainforest and in the Southeast region of Brazil, in the remnants of Atlantic Rainforest.

ACKNOWLEDGMENTS

We are grateful to all researchers, students, photographers and birdwatchers who shared and allowed us to use their records of the Crested Eagle in this work, or who helped us with bibliography and discussion of the literature: José F. Pacheco, Vitor Piacentini, Ricardo Gagliardi, Carlos E. Carvalho, Marcos Canuto, Alexander Zaidan, Tomaz M. Nascimento, Rodolfo C. Souza, Pedro Scherer-Neto, Cassiano F. Ribas, Marcelo Pádua, Luis F. Silveira, Alexander Lees, Juliano Mafra, Paulo V. Bernardo, Alexandre P. T. Moreira, Raphael Hipólito, Juliano Neves, Sandro Alves, Andrew Whittaker, Bret Whitney, Bianca P. Vieira, Theodoro Mattos, Edson Endrigo, Danilo Mota, Vitor Castro, Christopher Borges, Dalci Oliveira, Douglas Fernandes, Kurazo Okada, Frederico Pereira, Julio Silveira, João D. Filho, Luis Rondini, Jefferson Valsko, William E. Magnusson, Luiz H. Condrati, Robson Czaban, Gabriel Leite, Marcelo Barreiros, Frederico Martins, Christian Andretti, Aquila Oliveira, Fábio Olmos, Bruno Lima, Aureo Banhos, Luiza Magalli P. Henriques, Sergio Borges, Renato Cintra, Carlos Calvo, Ron Osborn, Erick Miranda, Rodolfo V. Leiton, Pablo Camacho (Fundación Rapaces de Costa Rica), and the TEAM, PDBFF and LBA Projects developed at INPA; special thanks to Fernando Straube (literature and discussion), S. V. Wilson (English revision and suggestions) and A. Aleixo for editing a final version of this manuscript. We are grateful to all institutions and museum curators consulted on the ORNIS site. INPA-CBIO and the Research Group in Amazonian Phytogeography and Biota Conservation provided support for monitoring nests. The Brazilian Harpy Eagle Conservation Program-PCGR was funded by VALE S.A./INPA/FDB. FBRG thanks Renata de Lima-Gomes for help during this study, CNPq for his Ph.D. fellowship (#145304/2009-4), and Atend Ltda. for financial research support. This is Contribution Number 4 of the PCGR.

REFERENCES

Aguiar-Silva, F. H.; Sanaiotti, T. M. & Luz, B. B. 2014. Food habits of the Harpy Eagle, a top predator from the Amazonian rainforest canopy. Journal of Raptor Research 48(1): 24-35.
Aguirre, A. C. & Aldrighi, A. D. 1983. Catálogo das Aves do Museu da Fauna, primeira parte. Delegacia Estadual do Estado do Rio de Janeiro. Instituto Brasileiro de Desenvolvimento Florestal. p. 49.
Albuquerque, J. L. B. 1983. On the presence of Harpyhaliaetus coronatus and Morphnus guianensis in southeast Brazil and recommendations for the conservation of the species by means of maintaining their natural habitat. Hornero Número Especial: 70-73.
Albuquerque, J. L. B.; Ghizoni-Jr., I. R.; Silva, E. S.; Trannini, G.; Franz, I.; Barcellos, A.; Hassdenteufel, C. B.; Rend, F. L. & Martins-Ferreira, C. 2006. Crowned Solitary-Eagle (Harpyhaliaetus coronatus) and Crested Eagle (Morphnus guianensis) in Santa Catarina and Rio Grande do Sul: priorities and challenges for their conservation. Revista Brasileira de Ornitologia 14: 411-415.
Aleixo, A.; Poletto, F.; Lima, M. F. C.; Castro, M.; Portes, E. & Miranda, L. S. 2011. Note on the vertebrates of northern Pará, Brazil: a forgotten part of the Guiana Region, II. Avifauna. Boletim do Museu Paraense Emílio Goeldi, Ciências Naturais, Belém, 6(1): 11-65.
Amadon, D. & Bull, J. 1988. Hawks and Owls of the World. Proceedings of the Western Foundation of Vertebrate Zoology 3: 295-357.
Bangs, O. 1903. Birds and mammals from Honduras. Bulletin of the Museum of Comparative Zoölogy, Vol. 39. Cambridge, Mass., U.S.A.
Bates, J. M.; Stotz, D. F. & Schulenberg, T. S. 1998. Avifauna of Parque Nacional Noel Kempff Mercado. Pp. 112-128 In: Killeen, T. J. and Schulenberg, T. S. (Eds.). A Biological Assessment of Parque Nacional Noel Kempff Mercado, Bolivia. Conservation International, Washington, DC.
Belton, W. 1984. Birds of Rio Grande do Sul, Brazil. Pt. 1 – Rheidae through Furnariidae. Bulletin of the American Museum of Natural History, New York, 178(4): 369-636.
Bencke, G. A. 1997. Sobre a coleção de aves do Museu do Colégio Mauá, Santa Cruz do Sul (RS). Biociências 5(1): 143-164.
Bertoni, A. D. 1913. Contribución para un catálogo de aves argentinas. Anales de la Sociedad Científica Argentina 75: 64-102.
Bierregaard, R. O. 1994. Family Accipitridae – Neotropical species accounts. Pp. 108-205 In: del Hoyo, J., Elliott, A. and Sargatal, J. (Eds.). Handbook of the Birds of the World. Vol. 2. Lynx Edicions, Barcelona.
Bierregaard, R. O., Jr. 1984. Observations on the nesting biology of the Guiana Crested Eagle (Morphnus guianensis). Wilson Bulletin 96: 1-5.
Bonta, M. & Anderson, D. L. 2002. Birding Honduras: a checklist and guide. EcoArte S. de R.L., Tegucigalpa, Honduras.
Braun, M. J.; Finch, D. W.; Robbins, M. B. & Schmidt, B. K. 2000. A Field Checklist to the Birds of Guyana. Biological Diversity of the Guianas Program, National Museum of Natural History, Smithsonian Institution, Washington, DC.
Brown, L. & Amadon, D. 1968. Eagles, Hawks, and Falcons of the World. McGraw-Hill, New York.
Carriker, M. A., Jr. 1910. An annotated list of the birds of Costa Rica, including Cocos Island. Annals of the Carnegie Museum 6: 314-915.
CBRO – Comitê Brasileiro de Registros Ornitológicos. 2011. Lista das aves do Brasil. 10ª ed. http://www.cbro.org.br/CBRO/pdf/AvesBrasil2011.pdf (Accessed on 19 February 2012).
Cintra, R.; Dos Santos, P. M. R. S. & Banks-Leite, C. 2007. Composition and structure of the lacustrine bird communities of seasonally flooded wetlands of western Brazilian Amazonia at high water. Waterbirds 30(4): 521-540.
Gomes and Tânia M. Sanaiotti Clements, J. F. & Shany, N. 2001. A Field Guide to the Birds of Henriques, L. M. P.; Dantas, S. M.; Sardelli, C. H.; Carneiro, L. Peru. Ibis Publishing Company, Temecula, CA, USA. S.; Batista, R. S. S.; Almeida, C. C. A.; Torres, M. F.; Silva, M. Cloudman, T. 2008. List of species and photo of dark morph C.; Laranjeiras, T. O. & J. C. Mello Filho. 2008. Diagnóstico Morphnus guianensis. In_ http://www.hargrove.org/2008/ Avifaunístico da Área de Influência do AHE Be lo Monte Como DisplayPhotos.php?PresentationID=133 Subsídio ao Estudo de Impacto Ambiental (EIA/RIMA) – Cohn-Haft, M.; Whittaker, A. & Stouffer , P. C. 1997. A new look at RELATÓRIO FINAL. Avaiable In: http://philip.inpa.gov.br/ the “species-poor” Central Amazon: the avifauna north of Manaus, publ_livres/Dossie/BM/DocsOf/EIA-09/Vol%2018/TEXTO/ Brazil. Pp. 205-235. In: J.V. Remsen, Jr. (Ed.), Studies in Neotropical AVIFAUNA/Relat%C3%B3rio_Final_Avifauna.pdf Ornithology honoring Ted Parker. American Ornithologists’ Union Henriques, L. M. P .; Wunderle, J. M. Jr. & Willig, M. R. 2003. Birds Ornithological Monographs, No. 48, Washington, DC. of the Tapajos National Forest, Brazilian Amazon: a preliminary Crease, A. & Tepedino, I. 2013. Observations at a nest of Crested assessment. Ornitologia Neotropical, 14: 307-338. Eagle Morphnus guianensis in the southern Gran Sabana, Hilty, S. L. & Brown, W. L. 1986. A Guide to the Birds of Colombia. Venezuela. Cotinga 35: 90-93. Princeton University Press, Princeton, NJ. Daudin, F. M. 1800. Traité élémentaire et complet d’Ornithologie, Ou, Hilty, S. L. 2003. Birds of Venezuela. 2nd ed. Princeton University Histoire Naturelle des oiseaux. V. 2, Bertrandet: Paris, 473 pp. Press, Princeton, NJ. Del Castillo, H. & Clay, R. P. 2004. Annotated Checklist of the Birds Howell, S. N. G. & Webb, S. 1995. A Guide to the Birds of Mexico of Paraguay. Asociación Guyra Paraguay, Asunción, Paraguay. and northern Central America. Oxford University Press, New DeLuca, J. J. 2012. Birds of conservation concern in eastern Acre, York. Brazil: distributional records, occupancy estimates, human-caused IBAMA. 2014. Lista Nacional Oficial de Espécies da F auna mortality, and opportunities for ecotourism. Tropical Conservation Ameaçadas de Extinção. www.icmbio.gov.br/portal/ Science5 (3): 301-319. biodiversidade/fauna-brasileira/2741-lista-de-especies-ameacadas- Dumont, C. H. F. 1816. Dictionnaire des Sciences Naturelles, dans lequel saiba-mais.html (Accessed 22 April 2015). on traite methodiquement des differens etres de la nature, consideres IBC - The I nternet Bird Collection. 2013. Accessed 16 October soit en eux-memes, d’apres l’etat actuel de nos connoissances, soit 2013. www.ibc.lynxeds.com relativement a l’utilite qu’en peuvent retirer la medecine, l’agriculture, Ihering, H. V. 1898. As aves do Estado de São Paulo. Revista Museu le commerce et les arts. F.G. Levrault, Paris. Tome I, 369-370 Paulista 3: 113-500. Eisermann, K. & Avendaño, C. 2007. Annotated Checklist of the Birds Ihering, H. V. 1899. As Aves do Rio Grande do Sul. In Anuário do of Guatemala. Lynx Ediciones, Barcelona, Spain. Estado do Rio Grande do Sul para o ano de 1900. Editora Krahe, Ellis, D. H. & Whaley, W. H. 1981. Three Crested Eagle records for Porto Alegre. Guatemala. Wilson Bulletin 93: 284-285. IUCN 2014. Red List of Threatened Species. Version 2014.3. www. Favretto, M. A. 2008. As Aves do Museu Frei Miguel, Luzerna, Santa iucnredlist.org. (Accessed 1 December 2014). Catarina. 
Atualidades Ornitologicas Online www.ao.com.br 146, Jones, H. L. & Komar, O. 2006. Central America. North American 43:44. Birds 60: 152-156. Ferguson-Lees, J. & Christie, D.A. 2001. Raptors of the World. Jones, H. L. & Komar, O. 2007. Central America. North American Christopher Helm, London. Birds 61: 340-344. Forrester, B. C. 1993. Birding Brazil, A checklist and site guide. John Jones, H. L. 2004. Central America. North American Birds 58: Gedds (Printers) Irvine, Scotland. 254 p. 155-157. Foss, C. R. & Huanaquiri, J. T. 2014. List of species of rio Tahuayo, Jones, H. L.; E. McRae; M. Meadows & N. G. Howell 2000. Status in Loreto, Pero. In_ www.perujungle.com/BirdList.pdf. Accessed updates for selected bird species in Belize, including several species 14 January 2014. previously undocumented from the country. Cotinga 13 (2000): Galetti, M.; Martuscelli P.; Pizo M. A. & Simao I. 1997. Records of 17-31. Harpy and Crested Eagles in the Brazilian Atlantic forest. Bulletin Julliot, C. 1994. Predation of a young Spider Monkey (Ateles paniscus) of the British Ornithologists’ Club 117: 27-31. by a Crested Eagle (Morphnus guianensis). Folia Primatologica 63: Grijalva, B. & Eisermann, K. 2006. Observacíon de um Inmaturo 75-77. de Morphnus guianensis em Tikal, Petén, Guatemala. Boletín de La Kiff, L. F .; Wallace, M. P. & Gale, N. B. 1989. Eggs of captive Sociedad Guatemalteca de Ornitologia 3: 26-29. Crested Eagles (Morphnus guianensis). Journal of Raptor Research GRIN - Global Raptor Information Network. 2013. Species 23: 107-108. account: Crested Eagle Morphnus guianensis. http://www. Kjeldsen, J. P. 2005. Aves del municipio Rio Prinzapolka, un globalraptors.org (Accessed 24 Jun 2013). inventarío de base. Wani Revista Caribe Nicaragüense 41: 31-64. Grosselet, M. & Gutierres-Carbonell, D. 2007. Primera Laurance, W. F.; Cochrane, M. A.; Bergen, S.; Fearnside, P. M.; observación confirm ada del Águila crestada Morphnus guianensis Delamônica, P.; Barber, C.; D’Angelo, S. & Fernandes, T. para México. Cotinga 28: 74-75. 2001. The Future of the Brazilian Amazon. Science 291: 438. Guilherme, E. 2012. Birds of the Brazilian state of Acre: diversity, Lees, A. C.; Moura, N. G.; Silva, A. S.; Aleixo, A. L. P.; Barlow, J.; zoogeography, and conservation. Revista Brasileira de Ornitologia Berenguer, E.; Ferreira, J. & Gardner, T. A. 2012. Paragominas: 20(4): 393-442. a quantitative baseline inventory of an Eastern Amazonian Gurney, J. H. 1879. Notes on a “Catalogue of the Accipitres in the avifauna. Revista Brasileira de Ornitologia 20: 93-118. British Museum”. Ibis 21(4): 464-470. Lees, A. C.; Moura, N. M.; Andretti, C. B.; Davis, B. J. W.; Lopes, Hall, J. 1995. Lucky Shot. Living Bird 14:8-9. E. V.; Henriques, M. P.; Aleixo, A.; Barlow, J.; Ferreira, J. Haverschmidt, F. & Mees, G. F. 1994. Birds of Suriname. VACO, & Gardner, T. A. 2013. One hundred and thirty-five years of Paramaribo, Suriname. avifaunal surveys around Santarém, central Brazilian Amazon. Hellmayr, C. E. & Conover, B. 1949. Catalogue of birds of the Revista Brasileira de Ornitologia 21(1): 16-57. Americas. Part I, number 4. Field Museum of Natural History Lehmann, V. F. C. 1943. El genero Morphnus. Caldasia: 179: 165- Zoological Series, vol. 13. 179. Hennessey, B.; Herzog, S. K. & Sagot, F. 2003. Annotated List of Macaulay Library. 2013. Morphnus guianensis, www.macaulaylibrary. the Birds of Bolivia. Asociación Armonia, Santa Cruz de la Sierra, org. Cornell Laboratory of Ornithology (Accessed 25 October Bolivia. 2013). 
Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Madroño-Nieto, A.; Clay, R. P.; Robbins, M. B.; Rice, N.H.; Pickles, R. S. A.; McCann, N. P. & Holland, A. P. 2011. Mammalian Faucett, R. C. & Lowen, J. C. 1997. An avifaunal survey of and avian diversity of the Rewa Head, Rupununi, Southern the vanishing interior Atlantic forest of San Rafael National Park, Guyana. Biota Neotropica 11(3): 237-251. Departments Itapúa/Caazapá, Paraguay. Cotinga 7: 45-53. Pinto, O. M. O. 1964. Ornitologia Brasiliense. Catálogo descritivo e Marques, A. A. B.; Fontana, C. S.; Vélez, E.; Bencke, G.A.; ilustrado das aves do Brasil. Primeiro volume – Parte Introdutória Schneider, M. & Reis, R. E. 2002. Lista de referência da fauna e famílias Rheidae a Cuculidae. Departamento de Zool. da ameaçada de extinção no Rio Grande do Sul. Porto Alegre, FZB/ Secretaria de Agricultura do Estado de São Paulo – SP. Composto MCT, PUCRS/PANGEA, Publicações Avulsas FZB 11 52 p. e impresso na Imprensa Oficial do Estado. 183 p., São Paulo. Márquez, C.; Gast, F.; Vanegas, V. H. & Bechard, M. 2005. Diurnal Pivatto, M. A. C.; Manço, D. G.; Straube, F. C.; Urben-Filho, A. raptors of Colombia, 1st ed. Instituto Investigación de Recursos & Milano, M. 2006. Aves do Planalto da Bodoquena, Estado do Biológicos Alexander von Humboldt, Bogotá, D.C., Colombia. Mato Grosso do Sul (Brasil). Atualidades Ornitológicas 129. Martínez, C. 2008. Um Ninho de Morphnus guianensis no Maranhão. Portes, C. E. B.; Carneiro, L. S.; Schunck, F.; Silva, M. S. S.; Livro de Resumos da Conferência Amazônia em Perspectiva LBA/ Zimmer, K. J.; Whittaker, A.; Poletto, F.; Silveira, L. F. & GEOMA/PPBio 2008, Conferência Amazônia em Perspectiva Aleixo. A. 2011. Annotated checklist of birds recorded between LBA/GEOMA/PPBio. Manaus, Amazonas, Brazil. 1998 and 2009 at nine areas in the Belém area of endemism, with Mattos, G. T.; Andrade, M. A. & Freitas, M. V. 1993. Nova lista de notes on some range extensions and the conservation status of Aves do Estado de Minas Gerais. Fundação Acangú. Belo Horizonte, endangered species. Revista Brasileira de Ornitologia 19: 167-184. Brazil. 20 p. Raine, A. F. 2007. Breeding bird records from the Tambopata- Mauduyt, M. 1782. Encyclopédie Méthodique. Histoire Naturelle. Candamo Reserve Zone, Madre de Dios, south-east Peru. Cotinga Oiseaux. Tome Second. p. 475. A Paris: Chez Panckoucke, libraire, 28: 53-58. hôtel de Thou, rue des Poitevins ; A Liège : Chez P lomteux, Rasmussen, D. T.; Rehg, J. A. & Guilherme, E. 2005. Avifauna da imprimeur des Etats, MDCCLXXXII-MDCCCXXV. Fazenda Experimental Catuaba: Uma Pequena Reserva Florestal Mazar Barnett, J. and Pearman, M. 2001. Lista comentada de las no Leste do Estado do Acre, Brasil. In Drumond, P. M. (Ed.) aves argentinas / Annotated checklist of Argentina. Barcelona: Fauna do Acre. Editora da Universidade Federal do Acre, Rio Lynx Edicions. Branco, AC. Monroe, B. L., Jr. 1968. A distributional survey of the birds of Reed, D. H.; Lowe, E.; Briscoe, D. A. & Frankham, R. 2003. Honduras. Ornithological Monographs 7: 1-458. Estimates of minimum viable population sizes for vertebrates Montalvo, L. D. & Montalvo, E. 2012. Aspectos comportamentales and factors influencing those estimates. Conservation Genetic en cautiverio de Morphnus guianensis en el zoológico de Quito, 4: 405-410. Guayllabamba, Ecuador. 
Revista Politécnica 30(3): 33-41. Ridgely, R. S. & Greenfield, P. J. 2001. The bir ds of Ecuador: status, Moskovits, D.; Fitzpatrick, J. W. & Willard, D. E. 1985. Lista distribution, and taxonomy. Comstock Publishing Associates, preliminar das aves da Estação Ecológica de Maracá, Território Ithaca, NY. de Roraima, Brasil, e áreas adjacentes. Papéis Avulsos de Zoologia Ridgely, R. S. & Gwynne, J. A., Jr. 1989. A Guide to the Birds of 36: 51-68. Panama with Costa Rica, Nicaragua, and Honduras, 2nd ed. Múnera-Roldán, C.; Cody, M. L.; Schiele-Zavala, R. H.; Sigel, B. Princeton University Press, Princeton, NJ. J.; Woltmann, S. & Kjeldsen, J. P. 2007. New and noteworthy Rosário, L. A. 1996. As aves em Santa Catarina: distribuição geográfica records of birds from south-eastern Nicaragua. Bulletin of the e meio ambiente. FATMA, Florianópolis. British Ornithologists’ Club 127: 152-161. Salazar, L. F.; Nobre, C. A. & Oyama, M. D. 2007. Climate Muñiz-López, R.; Criollo, O. & Mendúa, A. 2007. Results of five change consequences on the biome distribution in tropical South years of the “Harpy Eagle (Harpia harpyja) Research Program” in America. Geophysical Research Letters 34: L09708. the Ecuadorian tropical forest. Pages 23-32. In Bildstein, K.L.;, Schunck, F.; De Luca, A. C.; Piacentini, V. Q.; Rego, M. A.; Rennó, Barber, D. R. and Zimmerman, A. [Eds.], Neotropical Raptors. B. & Corrêa, A. H. 2011. Avifauna of two localities in the Hawk Mountain Sanctuary, Orwigsburg, PA, U.S.A. south of Amapá, Brazil, with comments on the distribution and Olmos, F.; Pacheco, J. F. & Silveira, L. F. 2006. Notes on Brazilian taxonomy of some species. Revista Brasileira de Ornitologia 19 (2): birds of prey (Cathartidae, Accipitridae, and Falconidae). Revista 93-107 Brasileira de Ornitologia 14: 401-404. Sick, H. 1997. Ornitologia Brasileira. Editora Nova Fronteira, Rio de Olmos, F.; Silveira, L. F. & Benedicto, G. A. 2011. A Contribution Janeiro, 912 pp. to the Ornithology of Rondônia, Southwest of the Brazilian Slud, P. 1964. The bir ds of Costa Rica: distribution and ecology. Amazon. Revista Brasileira de Ornitologia, 19(2): 200-229. Bulletin of the American Museum of Natural History 128: 1-430. Olrog, C. C. 1985. Status of wet forest raptors in northern Argentina. Soares, E. S.; Amaral, F. S. R.; Carvalho-Filho, E. P. M.; Granzinolli, ICBP Technical Publication, No. 5: 191-198. M. A.; Albuquerque, J. L. B.; Lisboa, J. S.; Azevedo, M. A. G.; Parker, T. A., III; Donahue, P. K. & Schulenberg, T. S. 1994. Moraes, W.; Sanaiotti, T. & Guimarães, I. G. 2008. Plano de Birds of the Tambopata-Reserve (Explorer’s Inn Reserve). In: Ação Nacional para a Conservação de Aves de Rapina. Instituto Rapid Assessment Program – The T ambopata-Candamo Reserve Chico Mendes de Conservação da Biodiversidade, Brasília. 136 pp. Zone of Southeastern Peru: A Biological Assessment. Conservation Somenzari, M.; Silveira, L. F.; Piacentini, V. Q.; Rego, M. A.; International RAP Working Papers 6. Schunck, F. & Cavarzere, V. 2011. Birds of an Amazonia- Parker,T. A., III & Goerck, J. M. 1997. The i mportance of National Cerrado ecotone in southern Pará, Brazil, and the efficiency of Parks and Biological Reserves to bird conservation in the Atlantic associating multiple methods in avifaunal inventories Revista Forest region of Brazil. Ornithological Monographs 48: 527-541. Brasileira de Ornitologia 19(2): 260-275. Pearman, M. 2001. Notes and range extensions of some poorly Stiles, F. G. & Skutch, A. F. 1989. A Guide to the Birds of Costa Rica. 
known birds of northern Argentina. Cotinga 16: 76-80. Comstock/Cornell University Press, Ithaca, NY. Peres, C. A.; Barlow, J. & Haugaasen, T. 2003. Vertebrate responses Stotz, D. F.; Lanon, S. M.; Schulenberg, T. S.; Willard, D. E.; to surface wildfires in a central A mazonian forest. Oryx 37: Peterson, A. T. Y. & Fitzpatrick, J. W. 1997. An avifaunal 97-109. survey of two tropical forest localities on the middle Rio Jiparaná, Petroff, M. A. D. S. 2001. Rapinantes ameaçados de extinção atuando Rondônia, Brazil, p 763-781. In: Remsen Jr., J. V (Ed.). Studies no Parque Estadual de Itaúnas. Boletim ABFPAR 4(2):12-14. in Neotropical Ornithology honoring Ted Parker. The American Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Ornithologists’ Union (Ornithological Monographs, No. 48). Wetmore, A. 1965. The bir ds of the Republic of Panama. Part I. Washington, DC. Smithsonian Miscellaneous Collections 150: 1-483. Straube, F. C. and Urbem-Filho, A. 2010. Comentários e retificações Whitacre, D. F .; J. López & G. López. 2012. Crested Eagle. Pp. 164-184 In sobre o único registro de Morphnus guianensis (Accipitriformes: D. F . Whitacre (Ed.), Neotropical Birds of Prey: Biology and ecolog y of a Accipitridae) para o Paraná”. Atualidade Ornitológicas 157: 4-6. forest raptor community. Cornell University Press, Ithaca, NY. Teixeira, D. M.; Papavero, N. & Kury, L. B. 2010. As Aves do Pará Whittaker, A. 2009. Pousada Rio Roosevelt: a provisional avifaunal segundo “Memórias” de Dom Lourenço Álvares Roxo de Potflis inventory in south-western Amazonian Brazil, with information (1752). Arquivos de Zoologia do Museu de Zoologia da Universidade on life history, new distributional data and comments on de São Paulo 41(2): 97-131. taxonomy. Cotinga 31: 23-46. Thiollay, J. M. 2007. Raptor communities in French Guiana: Whittaker, A.; Aleixo, A. & Poletto, F. 2008. Corrections and distribution, habitat selection, and conservation. Journal of Raptor additions to an annotated checklist of birds of the upper Rio Research 41: 90-105. Urucu, Amazonas, Brazil. Bulletin of the British Ornithologists’ Trail, P. W. 1987. Predation and antipredator behavior at Guianan Club 128(2): 114-125. Cock-of-the-rock leks. Auk 104: 496-507. Whittaker, A.; Oren, D. C.; Pacheco, J. F.; Parrini, R. & Minns, Vargas, G. J.; Ríos-Uzcátegui, G.; Vargas-González, J. J.; Canelón- J. C. 2002. Aves registradas na Reserva Extrativista do Alto Arias, M. J.; Serrano-Marín, J. J.; Briceño-David, E. J. 2009. Juruá. P. 81-103. In: Carneiro da Cunha, M. M.; Almeida, M. Primer registro de Águila Crestada (Morphnus guianensis) em lós W. B. (Orgs). A Enciclopédia da floresta. O A lto Juruá: prática e Llanos Occidentales de Venezuela. Caderno de Resumos I Congreso conhecimentos das populações.Cia. das Letras. São Paulo. Venezolano De Ornitología, poster C-01. Wikiaves - A Enciclopédia das Aves Brasileiras. Morphnus guianensis. Vargas-González, J. J.; Mosquera, R. & Watson, M. 2006. Acessed on 26 October 2013 In: www.wikiaves.com.br. Crested Eagle Morphnus guianensis feeding a postfledged young Willis, E. O. & Oniki, Y. 2003. Aves do Estado de São Paulo. Ed. Harpy Eagle Harpia harpyja in Panama. Ornitologia Neotropical Divisa, Rio Claro. São Paulo. 398 p. 17: 581-584. Xeno-canto - Sharing bird sounds from around the world. Accessed Vasquez, M. R. O. & Heymann, E. W. 2001. 
Crested Eagle (Morphnus on 26 October 2013. In www.xeno-canto.org guianensis) predation on infant tamarins (Saguinus mystax Zorzin, G.; Carvalho, E. A.; Carvalho Filho, E. P. M. & Canuto, and Saguinus fuscicollis, Callitrichinae). Folia Primatologica M. 2006. New records of rare and threatened Falconiformes for 72: 301-303. the State of Minas Gerais. Revista Brasileira de Ornitologia 14: Vidoz, J. Q.; Jahn, A. E. & Mamani, A. M. 2010. The avifauna of 416-421. Estacion Biológica Caparú, Bolivia. Cotinga 32: 51-68. Von Pelzeln, A. 1871. Zur Ornithologie Brasiliens. Resultate von Johann Natterers Reisen in der Jahren 1817 bis 1835. Vienna, A. Pichler’s Witwe und Sohn, 462 p. Associate Editor: Caio Graco Machado Revista Brasileira de Ornitologia, 23(1), 2015 A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 APPENDIX 1 Databases, museums and personal communications of Crested Eagle records from Brazil were cited using the following nomenclatures. State: (M. G. do Sul) – Mato Grosso do Sul; (R. G. do Sul) – Rio Grande do Sul. Locality: RESEX – Extractive Reserve; ESEC – Ecological Station; ARIE – Relevant Interest Ecological Area; PARNA – National Park; RDS – Sustainable Development Reserve; RPPN – Private Natural Reserve; REBIO – Biological Reserve; FLONA – National Forest; CEPLAC – Executive Board of the Cocoa Crop Plan; CIGS – Jungle Instruction Army Center; CETAS – Wildlife Center of Ibama – IBAMA – Brazilian Environment Agency; CRAX – Criadouro Conservacionista Center; PCGR – Harpy Eagle Conservation Program - Brazil. Museums and Collections: MZUSP – Zoology Museum of the University of São Paulo; MNRJ –National Museum of Rio de Janeiro; UFSC – Federal University of Santa Catarina; MHNT – Taubaté Natural History Museum; ORNIS – Online database of Ornithological Collections; CM – Carnegie Museum of Natural history; FMNH – Field Museum of Natural History; MPEG – Museu Paraense Emilio Goeldi; INPA – Intituto Nacional de Pesquisas da Amazônia Collections; WA – www.wikiaves.com data; IBC – The Internet Bir d Collection. Record Type: Occu – general region of occurrence cited in literature, without specific recor ds, number of sights, number or data on individuals, Ind. – one specimen record, 2ind – two specimen records. Sex/ Age/Plumage: n.a. – not available, Ppale – pale-morph plumage, Pdark – dark- morph plumage. No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest 1 Acre Rio Branco Catuaba Farm 1999-2004 Rasmussem et al. 2013 Literature occurence n.a. 2 Acre Marechal Thaumaturgo Alto Juruá RESEX 2002 Whittaker et al. 2002 Literature occurence n.a. Assis Brasil, Brasileia 3 Acre Chico Mendes RESEX 2008 De Luca 2012 Literature occurence n.a. and Xapuri IBAMA Report - Rio 4 Acre Rio Branco 2009 PCGR Database report young Branco, AC L. Rondini (photo); T. 5 Acre Porto Acre 2013 WA 959961 photo young Nascimento (visual) Bamboo and Palm forest - 6 Acre n.a. Guilherme 2012 Literature occurence n.a. occurrence in the state not on map 7 Acre Cruzeiro do Sul Rio Croa Community 2012 João D. Filho pers. comm. WA 722798 photo adult/Pdark 8 Amapá Oiapoque Uaca River 1918 CM CM P68846 skin n.a. 9 Amapá Serra do Navio 1994 Olmos et. al. 2006 Literature visual n.a. 10 Amapá Serra do Navio 2000 Olmos et. al. 2007 Literature visual n.a. 
11 Amapá Laranjal do Jari Rio Carají RESEX 2008 Schunck et al. 2011 Literature occurence n.a. 12 Amapá Laranjal do Jari Rio Carají RESEX 2010 Schunck et al. 2012 Literature occurence n.a. Soares et al. 2008, Sanaiotti visual/ 13 Amazonas Japurá Juami-Jupará ESEC 2005 PCGR Database adult per.obs. predation Cuieiras and Manacapuru - not Soares et al. 2008, Sanaiotti 14 Amazonas 2006, 2007 PCGR Database nest nest on map per.obs. 15 Amazonas Novo Aripuanã Rio Roosevelt Lodge 1988 Whittaker 2009 Literature occurence pair A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest 16 Amazonas Manacapuru 1936 MZUSP MZUSP 16442 skin 17 Amazonas Lábrea Purús River 1935 FMNH FMNH 100819 skin female 18 Amazonas Itacoatiara Baptista Lake 1937 FMNH FMNH 101835 skin female mounted 19 Amazonas Tonatins Solimões River 1923 CM CM P97629 n.a. specimen 20 Amazonas Rio Preto da Eva Reserve ZF3 - ARIE 1982 INPA INPA 590 skin n.a. 21 Amazonas assumed near Carauari Juruá River 1902 MZUSP MZUSP 2593 skin female 22 Amazonas assumed near Carauari Juruá River 1937 MZUSP MZUSP 18113 skin male 23 Amazonas Novo Aripuanã Roosevelt River 2007 Bret Whitney pers. comm. IBC Video adult/Ppale pair/young/ 24 Amazonas Coari Urucu River 2008 Whittaker et. al. 2008 Literature visual nest 25 Amazonas Alvarães 1993 Olmos et al. 2006 Literature visual 2 individuals 26 Amazonas Citation von Pelzeln - not on map n.a. Pinto, 1964 Literature occurence n.a. Barra do rio Negro 27 Amazonas not on map n.a. Von Pelzeln 1871 Literature occurence n.a. [= presently Manaus] Manaqueri [=Manaquiri] Lake - 28 Amazonas Manacapuru n.a. Von Pelzeln 1872 Literature occurence n.a. not on map ZF3 Reserve, Gavião camp - literature/ 29 Amazonas Rio Preto da Eva 1980 Bierregaard 1984 Literature pair/nest PDBFF Project - ARIE photo PDBFF Project Forest Fragments 30 Amazonas Rio Preto da Eva n.a. Cohn-Haft et al. 1997 Literature occurence n.a. - ARIE Reserva Florestal Adolpho Ducke J. Valsko; W. Magnusson, Database/pers. visual/call 31 Amazonas Manaus 2005 young - ARIE PCGR comm. record 32 Amazonas Novo Airão Anavilhanas PARNA 11,22/6/2012 Whittaker 2012 WA 735004 photo pair 33 Amazonas Novo Airão Anavilhanas PARNA 2009 S.Wilson, PCGR Database photo 1 adult nest - nest - 34 Amazonas Manacapuru Cururu Lake, Solimões River 2007 PCGR Database Teixeirinha Teixeirinha 35 Amazonas Manacapuru Cururu Lake, Solimões River 2008 PCGR Database nest - Bracelo nest - Bracelo 36 Amazonas Manacapuru Cururu Lake, Solimões River 2008 PCGR Database nest - Erivan nest - Erivan nest - 37 Amazonas Manaus Cuieiras Reserve - ARIE 2006 PCGR Database nest - Cuieiras Cuieiras Luiz Henrique Condrati 38 Amazonas Tapaua Nascentes do Lago Jari PARNA 2011 WA 414010 photo adult/Ppale pers. comm. 39 Amazonas Manaus Cuieiras Reserve, Km 9 - ARIE 2013 F. B. R. Gomes WA 1025169 photo subadult A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest WA 1067156; pair/young/ 40 Amazonas Amanã Amanã RDS 2013 A. 
Jaskulski pers. comm. 1067159; photo nest CIGS Army Zoo, unknown 41 Amazonas died 2012 F. B. R. Gomes Pers. Archive photo male/Ppale procedence Alvarães, Uarini, Fonte Ressaca do Panelão - Mamirauá Lit. and pers. 42 Amazonas Boa, Tonatins, Maraã 2004 Cintra et al. 2007 visual n.a. RDS comm. and Japurá 43 Bahia Belmonte Barrolândia 1995 Galetti et al. 1997 Literature visual adult occurrence/ 44 Bahia Porto Seguro 1974 Willis & Oniki 2003 Literature n.a. call Espírito Sooretama, Linhares, occurence/ 45 Sooretama REBIO 1997 Parker III & Goerck 1997 Literature n.a. Santo Jaguaré and Vila Valério visual Espírito occurence/ 46 Conceição da Barra Itaúnas State Park n.a. Petroff 2001 Literature n.a. Santo visual Espírito Museu de Biologia Mello Leitão - 47 n.a. PCGR visited 2006 Database skin Ppale Santo unknown procedence Museu da Fauna 48 Maranhão Pedreiras Flores River 1956 Aguirre & Aldrighi 1983 skin male N.1576 adult/young/ 49 Maranhão Buriticupu Southeastern Buriticupu 1997 Martinéz 1997 Literature nest nest Fábio Olmos and Bruno Visual record and 50 Maranhão Açailandia 2009 visual young Lima pers. comm 51 Mato Grosso Alta Floresta Cristalino River 2005 Alexander Lees pers. arch.Photo pers photo young 52 Mato Grosso Juruena Chapada dos Parecis n.a. Sick, 1997 Literature occurence n.a. 53 Mato Grosso Alta Floresta CEPLAC 2006 Alexander Lees Visual record visual adult 54 Mato Grosso Alta Floresta CEPLAC 2006 Alexander Lees WA 349411 photo adult 55 Mato Grosso São José do Rio Claro Jardim da Amazônia Lodge 2011 Edson Endrigo WA 368198 adultadult 56 Mato Grosso São José do Rio Claro Jardim da Amazônia Lodge 2012 Marcelo Pádua Call recorded call recorded pair/adult 57 Mato Grosso Mundo Novo Cristalino RPPN 2012 Júlio Silveira WA 804572 photo adult 58 Mato Grosso Comodoro 2012 Vitor Castro WA 581114 photo 1 adult 59 Mato Grosso Comodoro 2012 Danilo Mota WA 669576 photo 1 adult Dalci Oliveira and P. WA 879011 and 60 Mato Grosso Paranaíta 2012 photo nest Bernardo pers. comm. 61 Mato Grosso Vila Rica Ipê Farm n.a. MZUSP MZUSP 78122 skin n.a. A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest 62 Mato Grosso adult/Ppale/ 63 M. G. do Sul Jardim Buraco das Araras RPPN 2001 Pivatto et al. 2006 Literature visual flying 64 Minas Gerais Caparaó Caparaó PARNA 1997 Zorzin et al. 2006 Literature visual pair flying 65 Minas Gerais Jequitinhonha Mata Escura REBIO n.a. Mattos et al. 1993 Literature occurence n.a. CRAX - Conservationist Center - 66 Minas Gerais n.a. CRAX PCGR Database female/Pdark unknown procedence Dois Irmãos Zoo - unknown 67 Parnambuco n.a. PCGR Database PCGR Database adult/Ppale procedence Ilha da Taboca “Area 2”, Xingu Lit. and pers. 68 Pará Vitória do Xingu 2000 Henriques et al. 2008 visual adult River comm. Transect 6, Area 2, Aquatic Survey, Lit. and pers. 69 Pará Vitória do Xingu 2008 Henriques et al. 2008 visual adult Xingu River comm. Christian Andretti and WA 522322 and 70 Pará Belterra Paraíso das Abelhas sem Ferrão 2010 photo nest/young Lees et al. 2013 Literature 71 Pará Vitória do Xingu 2013 Vitor Castro WA 1091697 photo n.a. 72 Pará Itaituba Right bank of Tapajos River 2012 Gabriel Leite WA 1036295 photo n.a. 
73 Pará Jacareacanga Near Teles Pires River 2011 Christopher Borges WA 844120 photo adult 74 Pará Faro Faro State Park 15_28/1/2008 Aleixo et al. 2011 Literature occurence n.a. 75 Pará Oriximiná Trombetas State Park 16_28/4/2008 Aleixo et al. 2011 Literature occurence n.a. Óbidos, Alenquer, 76 Pará Oriximiná and Monte Grão-Pará ESEC 28/8_10/9/2008 Aleixo et al. 2011 Literature occurence n.a. Alegre IBAMA Report - 77 Pará Oriximiná Trombetas State Park 2012 PCGR Database nest Oriximiná, PA 78 Pará Altamira Ilha da Taboca - Xingu River 2000 MPEGMPEG 55570 remige n.a. Aleixo et al. 2011 and 79 Pará Almeirim Maicuru REBIO 2/10_5/11/2008 MPEG 66390 skin female/Ppale MPEG Museu Paraense Emilio Goeldi - 80 Pará 1916 MPEGMPEG 1287 skin young/male unknown procedence Museu Paraense Emilio Goeldi - 81 Pará 1975 MPEGMPEG 30888 skin young/male unknown procedence male enlarged 82 Pará Belterra Piquiatuba 1937FMNH FMNH 101507 skin gonads A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest female 83 Pará Belterra Piquiatuba 1937FMNH FMNH 101506 skin enlarged gonads 84 Pará Belterra Tapajós FLONA 2003 Henriques et al. 2003 Literature occurence n.a. 85 Pará Santarém Tapajós Arapiuns RESEX 2003 Peres et al. 2003 Literature occurence n.a. 86 Pará Tailândia 2008 Soares et al. 2008 Literature occurence n.a. CETAS IBAMA Guarantã 87 Pará Novo Progresso 2012 do Norte, MT, without PCGR Database rescued young/Pdark destination 88 Pará Para State - not on map historical Teixeira et al. 2010 Literature occurence n.a. 89 Pará Tomé Açu Cauaxi Farm 2004, 2006 Portes et al. 2011 Literature occurence n.a. 90 Pará Tailândia Group Agropalma Reserve 1998, 2005 Portes et al. 2011 Literature occurence n.a. 91 Pará Tailândia Rio Capim Farm 2005, 2007 Portes et al. 2011 Literature occurence n.a. 92 Pará Santana do Araguaia Fartura Farm 2009-2010 Somenzari et al. 2007 Literature occurence n.a. Frederico Martins - ICMBio - pers. 93 Pará Tucuruí 2012 rescued young ICMBio; housed at CRAX comm. 94 Pará Paragominas 2010-2011 Lees et al. 2012 Literature occurence n.a. 95 Pará Brasil Novo Xingu River left bank 2014 T. M. Sanaiotti, PCGR Database visual adult/Ppale Marechal Cândido Straube & Urben-Filho mounted 96 Paraná Sete Quedas Museum 1964 Literature n.a. Rondon 2010 specimen Rio de 97 Cantagalo n.a. Pinto,1964: 82-83 Literature n.a. Janeiro skin, Johann Rio de 98 Cantagalo n.a. Hellmayr & Conover 1949 Literature Natterer n.a. Janeiro collection MNRJ 889, 8552, Rio de Rio de Janeiro Nacional Museum 99 n.a. MNRJ 21645, 44312, 5 specimens n.a. Janeiro - without procedence 100 R. G. do Sul Taquara 1899 Von Ihering 1899 Literature occurence n.a. mounted 101 R. G. do Sul Santa Cruz do Sul 1920 Bencke 1996 Literature Ppale specimen Derrubadas - 102 R. G. do Sul Rio Turvo State Park 1984 Belton 1984 Literature occurence n.a. occurrence suggested 103 R. G. do Sul Foz do Iguaçu Iguaçu PARNA 1993 Forrester 1993 Literature occurence n.a. 104 R. G. do Sul Very rare in the State n.a. Sick 1997 Literature n.a. A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. 
Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest Probably Extinct in the 105 R. G. do Sul n.a. Marques et al. 2002 Literature n.a. State Guajará Mirim and 106 Rondônia Serra Cutia 2003 Olmos et al. 2011 Literature visual 1 adult/Ppale Costa Marques Guajará Mirim and 107 Rondônia Serra Cutia 2003 Olmos et al. 2011 Literature visual 2 adult/Ppale Costa Marques 108 Rondônia Ji-Paraná Cachoeira Nazaré, Ji-Paraná River 1987-1988 Stotz et al. 1997 Literature occurence n.a. Pers. comm. and 109 Rondônia Chupinguáia Boa Esperança Village 25/08/2011 S.O.S. Falconiformes photo young photos São Francisco do Video pers. 110 Rondônia Guaporé and Alta Guaporé REBIO mar/12 Sandro Alves video adult/Pdark archive Floresta D’Oeste 111 Rondônia Chupinguáia Boa Esperança Village 2010 Kurazo Okada Aguiar WA 10942 photo 1 adult Pers. comm. and 112 Rondônia Chupinguáia Boa Esperança Village set/12 S.O.S. Falconiformes photo pair/nest photos Boa Esperança Village- not on 113 Rondônia Chupinguáia 2012 Raphael Hipólito Pers. comm.photo 1 adult/Ppale map 114 Rondônia Porto VelhoRamal Rio das Garças 12/09/2011 F. Pereira WA 674776 photo 1 adult/Ppale 115 Roraima Caracaraí Viruá PARNA 2004 Robson Czaban WA 88548 photo 1 adult/Ppale 116 Roraima Caracaraí Jufari River, Caicubí Village 2011 L. F. Silveira Pers. comm. visual adult 117 Roraima Boas Vista Maracá ESEC 1985 Moskovits et al. 1985 Literature occurence n.a. Santa 118 Siderópolis Jordão Baixo 1977 Albuquerque 1983 Literature visual adult Catarina Santa 119 Grão Pará Aiúre 2005 Albuquerque et al. 2006 Literature visual 2 visual Catarina Santa mounted 120 Joinville Frei Miguel Museum 1926 Favretto, 2008 Literature adult/Ppale Catarina specimen Santa mounted 121 Lontras 1965/70 PCGR Database UFSC 362 adult/Ppale Catarina specimen Santa 122 Siderópolis 1997 Rosário 1997 Literature occurence n.a. Catarina Jacupiranga, Barra do Turvo, 123 São Paulo Jacupiranga State Park 25/05/1990 Galetti et al. 1997 Literature visual 1 adult Cananéia,Iporanga, Eldorada and Cajatí A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 No. Collection / Sex / Age / No. State Municipality Locality Date Record Source Record type Museum/Source Plumage/Nest Jacupiranga, Barra do Turvo, 124 São Paulo Jacupiranga State Park 14/12/1992 Galetti et al. 1997 Literature visual 1 adult Cananéia,Iporanga, Eldorada and Cajatí 125 São PauloTeodoro Sampaio Morro do Diabo State Park 14/12/1992 Galetti et al. 1997 Literature visual 1 adult Ribeirão Grande, 126 São Paulo Guapiara, Sete Barras, Intervales State Park 24/02/1995 Galetti et al. 1997 Literature visual 1 adult Eldorado and Iporanga 127 São Paulo Apiaí 1900 MZUSP MZUSP 2417 n.a. Occurrence in the State possible 128 São Paulo 1898 Ihering 1898 Literature n.a. - not on map occurence Zooparque Itatiba at Itatiba, 129 São Paulo n.a. PCGR PCGR captive pair without procedence Museu de Historia Natural de mounted 130 São Paulo n.a. MHNT n.a. adult/Ppale Taubaté - without procedence specimen A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. 
Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 APPENDIX 2 Databases, museums and personal communications of Crested Eagle records from outside Brazil used the following nomenclatures. Museums and Collections accessed with ORNIS – Online database of Ornithological Collections: (AMNH) American Museum of Natural History; (ROM) Royal Ontario Museum; (MCZ) Museum of Comparative Zoology; (CM) Carnegie Museum of Natural History; (FMNH) Field Museum of Natural History; (ANSP) Academy of Natural Sciences of Philadelphia; (USNM) United Sates National Musem; (LSUMZ) Louisiana State University Museum of Natural Science; (MHNIB) Museum of Natural History Itaipu Binacional; (XC) Xeno-canto Online Sound Collection; (MAC) Macaulay Library Collection; and (CEPEPE) Center for Propagation of Endangered Panamanian Species. Record Type: Occurrence: general region of occurrence cited in literature, without specific recor d details, number of sights, number or data on individuals; Plumage: Ppale – pale- morph, Pdark – dark-morph plumage, Pextdark – extreme dark-morph. Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest Sutter & Diaz in Whitacre et al. 131 Mexico CampecheCalakmul Ruins 1992 Literature visual adult soaring Montes Azules Biosphere Grosselet & Gutierrez-Carbonel 132 Mexico Chiapas 2004 Literature photo adult Reserve 2007 133 Mexico Chiapas Chiapas n.a. Whitacre et al. 2012 Literature occurence n.a. 134 Mexico Quintana Roo Quintana Roo n.a. Whitacre et al. 2012 Literature occurence n.a. 135 Belize Orange WalkChan Chich Lodge 1995 Hall 1995 Literature occurence n.a. 136 Belize ToledoToledo 1995 Howell et al. 2000 Literature occurence n.a. 137 Belize Cayo Cayo 1995 Howell et al. 2001 Literature occurence n.a. 138 Belize Orange Walk Orange 1995 Howell et al. 2002 Literature occurence n.a. 139 Belize Southeastern Region 2006 Jones & Komar 2006 Literature occurence n.a. 140 Guatemala Petén Flores 1981 Ellis & Whaley 1981 Literature occurence n.a. 141 Guatemala Petén Tikal National Park 1994 Whitacre et al. 2012 Literature nest 142 Guatemala Petén Tikal National Park 1995 Whitacre et al. 2012 Literature nest 143 Guatemala Petén Tikal National Park 2006 Grijalva & Eisermann 2006 Literature young 144 Guatemala Atlantic Region 2006 Eisermann & Avendaño 2007 Literature n.a. 145 Guatemala Petén Flores 1978ORNIS AMNH812849 skin n.a. complete 146 Guatemala Petén Flores 1966 ORNIS ROM115862 sleleton + n.a. partial skin 147 Honduras La Ceiba La Ceiba 1903 Bangs 1903 Literature collected young 148 Honduras San Pedro Sula San Pedro Sula 1968 Monroe 1968 Literature occurence n.a. A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest Quebrada Kahkatingni - near 149 Honduras 1999 Russell Thorstrom in GRIN 2013 Literature young Patuca River 150 Honduras Occurrence in Honduras 2002 Bonta & Anderson 2002 Literature occurence n.a. 151 Honduras La Ceiba La Ceiba 1902 ORNIS MCZ110535 skin n.a. Hormigero Community, 152 Nicaragua Jinotega Cerro Sasiaya, at Bosawas 2001 GRIN 2013 Literature occurence n.a. Biosphere Reserve 153 Nicaragua San Juan del Nicaragua Bartola Reserve 1994 Múnera-Roldán et al. 2007 Literature adult 154 Nicaragua San Juan del Nicaragua Bartola Reserve 1999 Múnera-Roldán et al. 
2007 Literature adult 155 Nicaragua Prinzapolka, Zelaya Alamikangban Community 2005 Kjeldsen 2005 Literature adult La Selva Biological Station, 156 Costa Rica Heredia 1989 Stiles & Skutch 1989 Literature occurence n.a. Sarapiquí region Sirena Biological Station, 157 Costa Rica Sirena Corcovado National Park, 1989 Stiles & Skutch 1989 Literature occurence n.a. Osa Peninsula 158 Costa Rica Límon Cuabre - Sixaola River 1910 Carrier 1910 Literature occurence n.a. 159 Costa Rica Puntarenas Cerro Cañas Gordas 1964 Slud 1964 Literature occurence n.a. La Finca Selva - Braulio 160 Costa Rica San Jose Limon 2004 Jones 2004 Literature occurence n.a. Carrillo National Park Rara Avis Jungle Lodge 161 Costa Rica San Jose Limon -Braulio Carrillo National 2004 Jones 2004 Literature occurence n.a. Park 162 Costa Rica Límon Tortuguero 2006 Jones & Komar 2006 Literature occurence n.a. 163 Costa Rica Límon Tortuguero 2005 G. Ocklind Pers. comm.photo pair Carlos Calvo Obando photo/ 164 Costa Rica Límon Tortuguero National Park 2013 Fundación Rapaces de Costa Rica, PCGR Database photo adult/ Ppale P Camacho Varella pers. comm. Ron Osborne photo/ Fundación Caño Harold - Tortuguero 165 Costa Rica Límon 2014 Rapaces de Costa Rica, P Camacho PCGR Database photo adult/ Ppale National Park Varella pers. comm. Rodolfo Vargas Leiton photo/ Crucitas - Cutris de San 166 Costa Rica Alajuela 2011 Fundación Rapaces de Costa Rica, PCGR Database photo adult/ Pdark Carlos P Camacho Varella pers. comm. 167 Costa Rica Límon Cuabre 1904ORNIS CMP23989 skin n.a. Caribbean slopes, Southeast 168 Panama 1965 Wetmore 1965 Literature occurence n.a. and East Panamá A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest 169 Panama Los Santos Azuero Peninsula 1989 Ridgely & Gwynne 1989 Literature occurence n.a. 170 Panama Los Santos/Mariato Cerro Hoya National Park 1989 Ridgely & Gwynne 1989 Literature occurence n.a. 171 Panama Panamá Province region 1989 Ridgely & Gwynne 1989 Literature occurence n.a. unsubstantiated 172 PanamaChiriqui Province region 1989 Ridgely & Gwynne 1989 Literature n.a. occurence Coiba Island, Gulf of unsubstantiated 173 Panama Veraguás 1989 Ridgely & Gwynne 1989 Literature n.a. Chiriquí occurence 174 Panama Panamá Canal Barro Colorado Island 1989 Ridgely & Gwynne 1989 Literature occurence n.a. 175 PanamaChiriquí Boquete 1989 Ridgely & Gwynne 1989 Literature occurence n.a. 176 Panama Panamá Canal Zone 1989 Ridgely & Gwynne 1989 Literature occurence n.a. egg laying Chiquita River region, 177 Panama Guna Yala 1989 Kiff et al. 1989 Literature by CEPEPE n.a. Central Panamá captive interspecific 178 Panama Dárien Quintín Community 2006 Vargas et al. 2006 Literature n.a. interation 179 PanamaCólon San Lorenzo National Park 2007 Jones & Komar 2007 Literature young Cana Camp - Darién 180 Panama Dárien 2010 E. Groenewoud IBC photo adult/Ppale National Park Rancho Frio Camp - Darién vocalization 181 Panama Dárien 2013 A. Spencer XC127521 nest National Park near nest 182 PanamaGamboa Canal Zone 1981 Van den Berg, A. B. Mac28459 vocalization female/Pdark 183 Panama Changuinola 1928ORNIS MCZ137642 skin n.a. 184 Panama Dárien Perme 1929ORNIS MCZ155152 skin female 185 Panama N.i. 
Banana River 1928 ORNIS MCZ137127 skin female 186 Panama Kuna Yala “Puerto” Obaldia 1930 ORNIS MCZ156514 skin male 187 Panama Kuna Yala San Blas - Port Obaldia 1935 ORNIS FMNH100685 male of pair male 188 Panama Kuna Yala San Blas - Port Obaldia 1935 ORNIS FMNH100729 female of pair female 189 Panama Panamá Canal Barro Colorado Island 1936ORNIS AMNH300600 skin n.a. Tapalisa, eastern of Panamá - 190 Panama Dárien 1915ORNIS AMNH135352 skin n.a. Tapalisa River Tapalisa, eastern of Panamá - 191 Panama Dárien 1915ORNIS AMNH135353 skin n.a. Tapalisa River 192 Colombia Chocó Region 1986 Hilty & Brown 1986 Literature occurence n.a. A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest 193 Colombia Chocó Baudó Mountains 1986 Hilty & Brown 1986 Literature occurence n.a. Achicaya River Valley 194 Colombia Valle del Cauca 1986 Hilty & Brown 1986 Literature occurence n.a. (Anchicaya) 195 Colombia Cordova Sinú River Valley 1986 Hilty & Brown 1986 Literature occurence n.a. 196 Colombia Cordova Cordova 1986 Hilty & Brown 1986 Literature occurence n.a. 197 Colombia Zúlia Perijá Moutains 1986 Hilty & Brown 1986 Literature occurence n.a. 198 Colombia La Guajira Carraipa 1986 Hilty & Brown 1986 Literature occurence n.a. 199 Colombia East of Andes region 1986 Hilty & Brown 1986 Literature occurence n.a. 200 Colombia Meta Vilavivencio region 1986 Hilty & Brown 1986 Literature occurence n.a. 201 Colombia Caquetá Caquetá 1986 Hilty & Brown 1986 Literature occurence n.a. Museum, no 202 Colombia Amazonas Letícia 1972Márquez et al. 2005 Literature n.a. data Museum, no 203 Colombia Caquetá Belén 1941 Márquez et al. 2005 Literature n.a. data Museum, no 204 Colombia Chocó Salaqui River 1940 Márquez et al. 2005 Literature n.a. data Museum, no 205 Colombia Chocó Juradó River 1940 Márquez et al. 2005 Literature n.a. data 206 Colombia Caquetá Morélia region n.a. ORNIS ANSP153087 skin n.a. Complete 207 Colombia Chocó Truandó River (Truanto) n.a. ORNIS USNM17781 specimen in n.a. alcohol Complete 208 Colombia Cordova Sinú River 1949 ORNIS USNM410536 specimen in n.a. alcohol Complete 209 Colombia Chocó Acandi, Gulf of Uraba 1949 ORNIS USNM425433 specimen in n.a. alcohol 210 Colombia Chocó Jampavado River 1940 ORNIS FMNH102242 skin n.a. 211 Colombia Chocó Juradó River 1940 ORNIS FMNH102243 skin n.a. 212 Colombia Antioquia Cuturu 1947 ORNIS FMNH190728 skin female 213 Ecuador PichinchaPichincha region 2001 Ridgely & Greenfield 2001 Literature recorded n.a. 214 Ecuador Base of Andes region 2001 Ridgely & Greenfield 2001 Literature occurence n.a. 215 Ecuador Esmeraldas Esmeraldas 2007 Muniz-Lopes et al. 2007 Literature occurence n.a. A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest 216 Ecuador Sucumbios Cuyabeno Wildlife Reserve 2007 Libor Vaincenbacher Photo IBC snake predation Ppale adult/chick/ 217 Ecuador Sucumbios Cuyabeno Wildlife Reserve 2014 R. Muniz-Lopes - PCAHE pers. comm visual nest 218 Ecuador Napo Napo Wildlife Center 2008 T. 
Cloudman list of species adultPdark adult female/ 219 Ecuador Not reported Quito Zoo at Guayllabamba 2011 Montalvo & Montalvo 2011 Literature captive Ppale 220 Bolivia Beni Beni 1994 M. Pearman 1994 in GRIN 2013 Literature occurence n.a. 221 Bolivia Beni Noel Kempff National Park 1998 Bates et al. 1998 Literature occurence n.a. 222 Bolivia La Paz La Paz 2003 Hennessey et al. 2003 Literature occurence n.a. 223 Bolivia Santa Cruz Santa Cruz 2003 Hennessey et al. 2003 Literature occurence n.a. 224 Bolivia Mérida Caparú Biological Station 2005 Vidoz et al. 2010 Literature occurence n.a. Gallery forest, Eastern region 225 Peru n.a. Clements & Shany 2001 Literature occurence n.a. of Andes Donated to Oklahoma City 226 Peru Amazonas Departmet 1978 Kiff et al. 1989 Literature female collected female/nest Zoo Reserva Tambopata - 227 Peru Madre de Dios 1994 Parker III et al. 1994 Literature occurence n.a. Candamo Reserva Tambopata - 228 Peru Madre de Dios 2007 Raine 2007 Literature nest Candamo Photo personal adults/chick/ 229 Peru Madre de Dios Amazon Manu Lodge 1977 R. Fabbri, pers. arch. archive nest Photo personal adults/chick/ 230 Peru Madre de Dios Amazon Manu Lodge 2006 R. Fabbri, pers. arch. archive nest Quebrada Blanco Biological predation on 231 Peru Cuzco 2001 Vazquez & Heymann 2001 Literature n.a. Station monkeys Photo personal 232 Peru Uacayali Pacaya municipally 2012 A. Morales young archive Centro de Reproducion 233 Peru Not reported 2013 J. A. Otero pers. comm. captive 6 individuals Huayco at Lima 234 Peru Iquitos Peru Lodge, Tahuayo River 2013 Peru Lodge list of species occurence n.a. 235 Peru Amazonas Departamento do Amazonas n.a. ORNIS LSUMZ84285 skin n.a. complete in 236 Peru Loreto Departamento de Loreto n.a. ORNIS LSUMZ114339 n.a. alcohol complete in 237 Peru Loreto Departamento de Loreto n.a. ORNIS LSUMZ114589 n.a. alcohol A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest 238 Peru Loreto Departamento de Loreto n.a. ORNIS LSUMZ118952 feathers n.a. 239 Venezuela North of Orinoco River 2003 Hilty 2003 Literature occurence n.a. 240 Venezuela Caura River 2003 Hilty 2003 Literature occurence n.a. 241 Venezuela Maracaibo Basin 2003 Hilty 2003 Literature occurence n.a. 242 Venezuela Sierra Perijá 2003 Hilty 2003 Literature occurence n.a. 243 Venezuela Zulia 2003 Hilty 2003 Literature occurence n.a. 244 Venezuela MeridaAndes 2003Hilty 2003 Literature occurence n.a. 245 Venezuela Lara 2003 Hilty 2003 Literature occurence n.a. 246 VenezuelaAmazonas Amazonas Province 2003 Hilty 2003 Literature occurence n.a. 247 Venezuela Bolivar 2003 Hilty 2003 Literature occurence n.a. 248 Venezuela Margarita Margarita Island 2003 Hilty 2003 Literature occurence n.a. 249 Venezuela Barinas Obispos municipality 2006 Uztcátagui et al. 2010 Literature occurence n.a. 250 VenezuelaBolivar Gran Sabana region 2011 Crease & Tepedino 2013 Literature nest n.a. 251 Guyana Lowland forest environments n.a. Braun et al. 2000 Literature occurence n.a. 252 Guyana Rupununi Chief Rewa Reserve (head) 2011 Pickles et al. 2011 Literature occurence n.a. Upper Takatu - Upper 253 Guyana Rupununi 1964ORNIS ROM94735 skin n.a. Essequibos Upper Takatu - Upper 254 Guyana Rupununi 1964 ORNIS ROM94725 skin n.a. 
Essequibos 255 Guyana Bartica Kalacoon (Kalakun) n.a. ORNIS AMNH804578 skin n.a. Forest areas, General 256 French Guyana n.a. Thiollay 2007 Literature n.a. occurrence predation on 257 French Guyana Cayenne Nouragues Field Station 1992 Julliot 1994 Literature n.a. monkey Approuague River, Regina 258 French Guyana Cayenne 2011 Johann Tascon Literature adult/Pextdark municipality 259 French Guyana Non reported Guyana Zoo, in Macouria n.a. Maxcobigo IBC photo male/Pdark Forest areas, General 260 Surinam n.a. Haverschimidt & Mees 1994 Literature occurence n.a. occurrence predation Raleigh Falls-Voltz Bergue 261 Surinam Sipaliwini District 1987 Trail 1987 Literature on Rupicola n.a. Nature Reserve rupicola A review of the distribution of the Crested Eagle, Morphnus guianensis (Daudin, 1800) (Accipitridae: Harpiinae), including range extensions Felipe Bittioli R. Gomes and Tânia M. Sanaiotti Revista Brasileira de Ornitologia, 23(1), 2015 Source/ Sex/Age/ No. Country County Locality Year Citation Record Collection number Plumage/Nest Resident in the country, 262 Argentina n.a. Mazar-Barnett & Pearman 2001 Literature n.a. General occurrence 263 Argentina Missiones El Piñalito Provincial Park 2001 Pearman 2001 Literature adult 264 Argentina Santa Ana Santa Ana 1913 Bertoni 1913 Literature occurence n.a. 265 Argentina Missiones Iguazú National Park 1980 Rumboll & Straneck (In Olrog,1985) Literature pair in display n.a. 266 Paraguay Alto Paraná region 2004 Del Castillo & Clay 2004 Literature occurence n.a. 267 Paraguay Itapúa San Rafael Nacional Park 1997 Madroño 1997 Literature pair displaying n.a. 268 Paraguay Itapúa San Rafael Nacional Park 1997 Madroño 1997 Literature adult adult captured, San Rafael Nacional Park - died 269 Paraguay Itapúa Del Castillo & Clay 2004 MHNIB872 housed in adult collected and donated to Zoo 2002 Museum Colônia Aurora, Region of N. Lopes in Del Castillo & Clay 270 Paraguay Itapúa 2003 Literature adult flying San Rafael Nacional Park 2004
Ornithology Research – Springer Journals
Published: Mar 1, 2015
Keywords: Conservation; Falconiformes; Neotropics; Raptor
Dietary intake and main food sources of vitamin D as a function of age, sex, vitamin D status, body composition and income in an elderly German cohort
Background: Elderly subjects are at risk of insufficient vitamin D status mainly because of diminished capacity for cutaneous vitamin D synthesis. In cases of insufficient endogenous production, vitamin D status depends on vitamin D intake.
Objective: The purpose of this study is to identify the main food sources of vitamin D in elderly subjects and to analyse whether contributing food sources differ by sex, age, vitamin D status, body mass index (BMI), or household income. In addition, we analysed the factors that influence dietary vitamin D intake in the elderly.
Design and subjects: This is a cross-sectional study in 235 independently living German elderly aged 66–96 years (BMI = 27 ± 4 kg/m²). Vitamin D intake was assessed by a 3-day estimated dietary record.
Results: The main sources of dietary vitamin D were fish/fish products followed by eggs, fats/oils, bread/bakery products, and milk/dairy products. Differences in contributing food groups by sex, age, vitamin D status, and BMI were not found. Fish contributed more to vitamin D intake in subjects with a household income of <1,500 €/month compared to subjects with higher income. In multiple regression analysis, fat intake and frequency of fish consumption were positive determinants of dietary vitamin D intake, whereas household income and percentage total body fat negatively affected vitamin D intake. Other parameters, including age, sex, physical activity, smoking, intake of energy, milk, eggs and alcohol, showed no significant association with vitamin D intake.
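To make the reported regression easier to picture, here is a minimal sketch of how such a model could be specified. The column names (`vitd_intake`, `fat_intake`, `fish_freq`, `income`, `body_fat_pct`) and the use of ordinary least squares are illustrative assumptions, not the authors' actual code or variable definitions.

```python
# Illustrative sketch only: hypothetical column names for a data frame
# holding one row per participant.
import pandas as pd
import statsmodels.formula.api as smf

def fit_vitamin_d_model(df: pd.DataFrame):
    """Multiple regression for dietary vitamin D intake (sketch).

    Assumed (hypothetical) columns:
      vitd_intake   daily vitamin D intake
      fat_intake    daily fat intake
      fish_freq     fish meals per week
      income        household income
      body_fat_pct  total body fat (%)
    """
    model = smf.ols(
        "vitd_intake ~ fat_intake + fish_freq + income + body_fat_pct",
        data=df,
    ).fit()
    return model

# Example usage with a real data set:
# print(fit_vitamin_d_model(df).summary())
```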
Conclusion: Low habitual dietary vitamin D intake does not affect vitamin D status in summer, and fish is the major contributor to vitamin D intake independent of sex, age, vitamin D status, BMI, and the income of subjects.
Keywords: 25-hydroxyvitamin D; diet; food sources; fish consumption; body composition
(Published: 17 September 2014)
Citation: Food & Nutrition Research 2014, 58: 23632 - http://dx.doi.org/10.3402/fnr.v58.23632
Responsible Editor: Anna Olofsdottir, University of Iceland, Iceland.
Authors retain copyright of their work, with first publication rights granted to SNF Swedish Nutrition Foundation.
As a liquid it can conform to the etched and porous structure of the anode and the grown oxide layer, and form a "tailor-made" cathode. Otherwise, the electrolyte has to deliver the oxygen for self-healing processes, and water is the best chemical substance to do that. Inverters rely on capacitors to provide a smooth power output at varying levels of current; however, electrolytic capacitors have a limited lifespan and age faster than dry components. The first reason for inverter failure is electro-mechanical wear on capacitors. In most cases, the effects of a low-energy surge aren't severe. The Japanese manufacturer Rubycon became a leader in the development of new water-based electrolyte systems with enhanced conductivity in the late 1990s.

The thing with capacitor failure is that it's highly dependent on the type of capacitor, as well as the application it's used in. Common values are 20 mfd and 40 mfd. The forming takes place whenever a positive voltage is applied to the anode, and generates an oxide layer whose thickness varies according to the applied voltage. Capacitors, especially large electrolytic capacitors in power supplies, do tend to fail over time, but it can take a very long time... like, 10+ years (and in some cases decades longer). When capacitors are in use, energy surges and high temperatures accelerate that wear. The new type of capacitor was called "Low-ESR", "Low-Impedance", "Ultra-Low-Impedance" or "High-Ripple Current" series in the data sheets. Faulty capacitors have been a problem since capacitors' initial development, but the first flawed capacitors linked to Taiwanese raw material problems were reported by the specialist magazine Passive Component Industry in September 2002. Shortly thereafter, two mainstream electronics journals reported the discovery of widespread prematurely failing capacitors from Taiwanese manufacturers in motherboards.

Continued operation of the capacitor can result in increased end termination resistance, additional heating, and eventual failure. Test equipment can diagnose failing capacitors. First is the electrolytic capacitor. This electrochemical behavior explains the self-healing mechanism of non-solid electrolytic capacitors. In electrolytic capacitors, the dielectric can crack in both low- and high-energy surges. So what is ESR, and why would I need to know about that? There are lots of great web pages that include instructions on how to replace the aluminum electrolytic capacitors to repair the monitor. This issue has dominated the development of electrolytic capacitors over many decades. To protect the metallic aluminium against the aggressiveness of the water, some phosphate compounds, known as inhibitors or passivators, can be used to produce long-term stable capacitors with high-aqueous electrolytes. This led me to watch Dave Jones of eevblog.com fame and his discussion of electrolytics.

Normally the anode foil is covered by the dielectric aluminium oxide (Al2O3) layer, which protects the base aluminium metal against the aggressiveness of aqueous alkali solutions. First, a strongly exothermic reaction transforms metallic aluminium (Al) into aluminium hydroxide, Al(OH)3 (2 Al + 6 H2O → 2 Al(OH)3 + 3 H2). This reaction is accelerated by a high electric field and by high temperatures, and is accompanied by a pressure buildup in the capacitor housing, caused by the released hydrogen gas. (Defective-capacitor symptom 3: domed tops.) It can be more than 10 years for a 1000 h/105 °C capacitor operating at 40 °C. (The "vent" is stamped into the top of the casing of a can-shaped capacitor, forming a seam that is meant to split to relieve pressure build-up inside, preventing an explosion.) Electrolytic capacitors also have a self-healing ability, although to a lesser extent than film capacitors.

[Figures: construction of a typical single-ended aluminium electrolytic capacitor with non-solid electrolyte; close-up cross-section diagram of an electrolytic capacitor showing the capacitor foils and oxide layers.]

Intermittent or outright failure is another symptom, and bad surface-mount capacitors are not always easy to identify. The scientist then developed a copy of this electrolyte. Overheating is a primary cause of a failed start capacitor. ZL series capacitors with the new water-based electrolyte can attain an impedance of 23 mΩ with a ripple current of 1820 mA, an overall improvement of 30%. In a nutshell, they fail more often than not because of excessive temperature. The leakage current will increase to drive this self-healing effect. If the problem is in the centrifugal switch, thermal switch, or capacitor, the motor is usually serviced and repaired.
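To answer the ESR question raised above: ESR is the real, resistive part of the capacitor's impedance, mostly contributed by the electrolyte. Dedicated ESR meters typically test with a small high-frequency signal, often around 100 kHz, because at that frequency the capacitive reactance of a large electrolytic is tiny compared with its ESR, so the reading is essentially the ESR alone. A back-of-the-envelope sketch follows; the 1000 µF value and the ESR figures in the comments are illustrative, not taken from any particular datasheet.

```python
import math

def capacitive_reactance(capacitance_f: float, freq_hz: float) -> float:
    """Magnitude of the capacitive reactance Xc = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_f)

C = 1000e-6   # 1000 uF electrolytic (illustrative)
f = 100e3     # ~100 kHz test frequency used by many ESR meters

xc = capacitive_reactance(C, f)
print(f"Xc at 100 kHz: {xc * 1000:.2f} milliohm")  # ~1.6 milliohm

# A healthy low-ESR part of this size is typically a few tens of milliohms,
# and a dried-out one can reach hundreds of milliohms or more, so the
# measured impedance at 100 kHz is dominated by ESR rather than by Xc.
```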
Non-solid aluminium electrolytic capacitors without visible symptoms, which have improperly formulated electrolyte, typically show two electrical symptoms: When examining a failed electronic device, the failed capacitors can easily be recognized by clearly visible symptoms that include the following:, Failed Chhsi capacitor with crusty electrolyte buildup on the top, Failed capacitors next to CPU motherboard socket, Failed Tayeh capacitors which have vented subtly through their aluminium tops, Failed electrolytic capacitors with swollen can tops and expelled rubber seals, dates of manufacture "0106" and "0206" (June 2001 and June 2002), Failed capacitor has exploded and exposed internal elements, and another has partially blown off its casing, Failed Choyo capacitors (black color) which have leaked brownish electrolyte onto the motherboard. The electrolyte should also provide oxygen for the forming processes and self-healing. Capacitors react to energy surges in various ways. Temperature is of great concern to any capacitor. This diversity of requirements for the liquid electrolyte results in a broad variety of proprietary solutions, with thousands of patented electrolytes. Symptoms of electrolytic capacitor malfunction in radio and television are most often some form of audible noise, light or dark lines within the picture scan, or outright power supply failure. , Most of the affected capacitors were produced from 1999 to 2003 and failed between 2002 and 2005. What is Video TDR Failure? Run Capacitors A run capacitor is an energy-saving device that is in the motor circuit at all times. The non-solid aluminium electrolytic capacitors with improperly formulated electrolyte mostly belonged to the so-called "low equivalent series resistance (ESR)", "low impedance", or "high ripple current" e-cap series. For more on capacitors, visit http://www.electronicproducts.com/passives.asp. H. Kaesche, Die Korrosion der Metalle - Physikalisch-chemische Prinzipien und aktuelle Probleme, Springer-Verlag, Berlin, 1966, Chang, Jeng-Kuei, Liao, Chi-Min, Chen, Chih-Hsiung, Tsai, Wen-Ta, Effect of electrolyte composition on hydration resistance of anodized aluminium oxide, Learn how and when to remove this template message, Center for Advanced Life Cycle Engineering, "Low-ESR Aluminium Electrolytic Failures Linked to Taiwanese Raw Material Problems", The Capacitor Plague, Posted on 26 November 2010 by PC Tools, "Faults & Failures: Leaking capacitors muck up motherboards", "Taiwanese component problems may cause mass recalls", Capacitor failures plague motherboard vendors, GEEK, 7 February 2003, "Mainboardhersteller steht für Elko-Ausfall gerade", Michael Singer, CNET News, PCs plagued by bad capacitors, "Suit Over Faulty Computers Highlights Dell's Decline", "Taiwanese Cap Makers Deny Responsibility", "Capacitor plague, identifizierte Hersteller (~identified vendors)", "Stolen formula for capacitors causing computers to burn out", "A. 
Albertsen, Electrolytic Capacitor Lifetime Estimation", "Aluminium electrolytic capacitors - Principles | ELNA", "The Reaction between Anodic Aluminium Oxide and Water", https://en.wikipedia.org/w/index.php?title=Capacitor_plague&oldid=993661557, Wikipedia articles needing page number citations from November 2015, Articles needing additional references from August 2018, All articles needing additional references, Articles with unsourced statements from November 2015, Creative Commons Attribution-ShareAlike License, capacitance value decreases to below the rated value, increased capacitance value, up to twice the rated value, which fluctuates after heating and cooling of the capacitor body, Bulging of the vent on top of the capacitor. However, the company would not reveal the name of the capacitor maker that supplied the faulty products. Many of the poorly designed capacitors made it to mass market. All electrolytic capacitors with non-solid electrolyte age over time, due to evaporation of the electrolyte. Many PC users were affected, and caused an avalanche of reports and comments on thousands of blogs and other web communities. This oxide layer is electrically insulating and serves as the dielectric of the capacitor. Many other equipment manufacturers unknowingly assembled and sold boards with faulty capacitors, and as a result the effect of the capacitor plague could be seen in all kinds of devices worldwide. , The non-solid aluminium electrolytic capacitors with improperly formulated electrolyte mostly belonged to the so-called "low equivalent series resistance (ESR)", "low impedance", or "high ripple current" e-cap series. It was only the electrolytics that were failing. If the energy is extremely high, however, a complete failure can occur. Inorganic Chemistry. After roughly 120 years of development billions of these inexpensive and reliable capacitors are used in electronic devices. A materials scientist working for Rubycon in Japan left the company, taking the secret water-based electrolyte formula for Rubycon's ZA and ZL series capacitors, and began working for a Chinese company. Fluid leaking 5. Capacitor Failure Modes Experience has shown that capacitor failures are second only to semiconductors and vacuum tubes in components prone to malfunction in electronic equipment. The first commercially used electrolytes in the mid-twentieth century were mixtures of ethylene glycol and boric acid. Aluminium electrolytic capacitors with non-solid electrolyte are generally called "electrolytic capacitors" or "e-caps". Unhappily this layer of electrolyte has electrical resistance which, along with the (negligible) resistance of the connecting leads and alumin(i)um foil plates, forms the capacitor’s Equivalent Series Resistance. The failure of electrolytic capacitors can be hazardous, resulting in an explosion or fire. The liquid electrolyte, which is the cathode of the capacitor, covers the irregular surface of the oxide layer of the anode perfectly, and makes the increased anode surface effectual, thus increasing the effective capacitance. [page needed] The initial self-healing process for building a new oxide layer is prevented by a defect or a weak dielectric point, and generated hydrogen gas escapes into the capacitor. If a run capacitor fails, the motor can display a variety of problems including not starting, overheating, and vibrating. For aluminum polymers, ESR impedance increases due to polymer degradation. 
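One reason the low-ESR, high-ripple-current series took the brunt of these failures is that ripple current heats a capacitor in proportion to its ESR (P = I² × ESR), so any rise in ESR feeds back into more internal heating. A rough sketch using the ZL-series figures quoted earlier; the ten-times-ESR multiplier for an aged part is an assumption for illustration only.

```python
def ripple_power_w(ripple_current_a: float, esr_ohm: float) -> float:
    """Power dissipated inside the capacitor by ripple current: P = I^2 * ESR."""
    return ripple_current_a ** 2 * esr_ohm

I_RIPPLE = 1.82          # 1820 mA ripple current, as quoted for the ZL series
ESR_NEW = 0.023          # 23 milliohm when new
ESR_AGED = ESR_NEW * 10  # assume ESR has risen 10x as the electrolyte dries out

print(f"New part:  {ripple_power_w(I_RIPPLE, ESR_NEW) * 1000:.0f} mW")   # ~76 mW
print(f"Aged part: {ripple_power_w(I_RIPPLE, ESR_AGED) * 1000:.0f} mW")  # ~762 mW

# Ten times the ESR means ten times the self-heating at the same ripple
# current, which in turn accelerates electrolyte loss: a runaway wear-out loop.
```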
I thought this is an electrolytic capacitor and it should have the weak-point; in order to avoid them building up too much pressure during a failure. To avoid this, film/foil capacitors with infinite dU/dT, or complex series construction metallized capacitors with high dU/dT, should be used in high-current applications. BTW, he is really funny and if you want to hear a rant on Electrolytic s then you need to watch that video. almost anhydrous electrolytes based on organic solvents, such as. I also found this initially but mention it here only in passing as it applied to other types of Capacitors and I thus far all I am seeing fail are electrolytic. An air conditioner that doesn’t blow cold air is one of the first signs of a problem many homeowners notice. I imagine the post is oriented towards electrolytic capacitors, but I’ll give a quick run down of the types you’re likely to see in consumer electronics. As it is known that aluminium can be dissolved by alkaline liquids, but not that which is mildly acidic, an energy dispersive X-ray spectroscopy (EDX or EDS) fingerprint analysis of the electrolyte of the faulty capacitors was made, which detected dissolved aluminium in the electrolyte. In the same year, the scientist's staff left China, stealing again the mis-copied formula and moving to Taiwan, where they would have created their own company, producing capacitors and propagating even more of this faulty formula of capacitor electrolytes. Reverse voltage will also damage a tantalum, as will extreme thermal shock from out-of-control mounting profiles or heating from excess ripple current. Thus, even in the first apparently water-free electrolytes, esterification reactions could generate a water content of up to 20 percent. Asymmetrical or stretched plastic jacket 4. Bob Parker, www.ludens.cl, and wikipedia.org do a much better job of explaining this than I and have built special circuitry to identify these out of spec capacitors. Next step is to purchase a ESR meter. Electrolytic capacitors that operate at a lower temperature can have a considerably longer lifespan. The normal process of oxide formation or self-healing is carried out in two reaction steps. But it is also a chemical mixture of solvents with acid or alkali additives, which must be non-corrosive (chemically inert) so that the capacitor, whose inner components are made of aluminium, remains stable over its expected lifetime. In 2001, a scientist working in the Rubycon Corporation in Japan stole a mis-copied formula for capacitors' electrolytes. , In the November/December 2002 issue of Passive Component Industry, following its initial story about defective electrolyte, reported that some large Taiwanese manufacturers of electrolytic capacitors were denying responsibility for defective products.. McGraw-Hill. The results of chemical analysis were confirmed by measuring electrical capacitance and leakage current in a long-term test lasting 56 days. These electrolytic capacitors are typically labeled "low-impedance", "low-ESR", or "high-ripple-current" with voltage ratings up to 100 V. Despite these advantages, researchers faced several challenges during development of water-based electrolytic capacitors. I think, I am going to with the kit from Bob Parker at http://members.ozemail.com.au/~bobpar/esrmeter.htm. In addition to the good conductivity of operating electrolytes, there are other requirements, including chemical stability, chemical compatibility with aluminium, and low cost. 
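Whichever meter you end up with, the readings only mean something when compared against the part's ratings: a capacitor is generally due for replacement once capacitance has fallen well below the rated value or ESR has climbed to a multiple of its original specification. The thresholds in the sketch below (70% of rated capacitance, twice the original ESR) are common rules of thumb, not universal limits; the real end-of-life criteria come from the specific series' datasheet.

```python
def looks_worn_out(measured_uf: float, rated_uf: float,
                   measured_esr_ohm: float, new_esr_ohm: float,
                   cap_floor: float = 0.70, esr_ceiling: float = 2.0) -> bool:
    """Simple end-of-life check against rule-of-thumb thresholds.

    cap_floor and esr_ceiling are illustrative defaults; real limits
    should be taken from the capacitor series' datasheet.
    """
    capacitance_bad = measured_uf < cap_floor * rated_uf
    esr_bad = measured_esr_ohm > esr_ceiling * new_esr_ohm
    return capacitance_bad or esr_bad

# Example with invented measurements for a nominal 1000 uF / 25 milliohm part:
print(looks_worn_out(measured_uf=640, rated_uf=1000,
                     measured_esr_ohm=0.18, new_esr_ohm=0.025))  # True
```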
The normal lifespan of a non-solid electrolytic capacitor of consumer quality, typically rated at 2000 h/85 °C and operating at 40 °C, is roughly 6 years. However, one independent laboratory analysis of defective capacitors has shown that many of the premature failures appear to be associated with high water content and missing inhibitors in the electrolyte, as described below. An electrolytic capacitor can fail gradually over a period of years as it dries out. The more I started looking at things that had failed around here, the more I began to see their point. Electrolytic capacitors also have a self-healing ability, although to a lesser extent than film capacitors. In any design, it is important to know how a capacitor will react to a surge or high temperatures to determine the most suitable component for the application. The capacitance usually decreases and the ESR usually increases. BLOG POST: https://tehnoblog.org/graphics-card-repair-how-i-fixed-gpu-card/ Dying GPU? ? A second aluminium foil strip, called the "cathode foil", serves to make electrical contact with the electrolyte. Patnaik, P. (2002). On air-handling equipment, the motor may start but will always fall short of normal operating speed. It is known that the "normal" course of building a stable aluminium oxide layer by the transformation of aluminium, through the intermediate step of aluminium hydroxide, can be interrupted by an excessively alkaline or basic electrolyte. Failure depends upon the capacitor type and enviroment (primarily voltage, heat and amount of ripple current). Since phosphate ions were missing and the electrolyte was also alkaline in the investigated Taiwanese electrolytes, the capacitor evidently lacked any protection against water damage, and the formation of more-stable alumina oxides was inhibited. Although the analysis methods were very sensitive in detecting such pressure-relieving compounds, no traces of such agents were found within the failed capacitors. Inhibitors—such as chromates, phosphates, silicates, nitrates, fluorides, benzoates, soluble oils, and certain other chemicals—can reduce the anodic and cathodic corrosion reactions. In e-caps using an alkaline electrolyte this aluminium hydroxide will not be transformed into the desired stable form of aluminium oxide. The marks were not easily linked to familiar companies or product brands. Siegmund, Bell System Technical Journal, v8, 1. Good comparable Japanese capacitors had an electrolyte that was acidic, with a pH of around 4. Auflage. The circuit required a very high voltage capacitor and so it used two 4.7uF 400V electrolytic capacitors in series, I replaced them with one 2.2uF 900V film capacitor. Shortly thereafter, two mainstream electronics journals reported the discovery of widespread prematurely failing capacitors, from Taiwanese manufacturers, in motherboards. I found that interesting because if I fail to see any hard visual evidence then I am going to be looking at the caps that are closer to high temperature locations on the circuit board. He mentioned something about his Bob Parkers ESR (Equivalent Series Resistance) meter and a google search opened up the flood gates about failing electrolytic caps and loads of information. Problems with Windows booting up? The spacer separates the foil strips to avoid direct metallic contact which would produce a short circuit. 
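Figures like these follow from the widely used rule of thumb that electrolytic capacitor life roughly doubles for every 10 °C the operating temperature sits below the rated temperature. The sketch below applies that rule to the two examples mentioned in the text; it is only an approximation, and manufacturers' lifetime equations add further terms for ripple current and applied voltage.

```python
HOURS_PER_YEAR = 24 * 365

def estimated_life_hours(rated_life_h: float, rated_temp_c: float,
                         ambient_temp_c: float) -> float:
    """10-degree rule of thumb: life doubles for every 10 C below the rating."""
    return rated_life_h * 2 ** ((rated_temp_c - ambient_temp_c) / 10)

# 2000 h / 85 C consumer-grade part running at 40 C:
# ~45,000 h, about 5 years, in the same ballpark as the figure quoted above.
print(estimated_life_hours(2000, 85, 40) / HOURS_PER_YEAR)

# 1000 h / 105 C part at 40 C: ~90,000 h, a bit over 10 years.
print(estimated_life_hours(1000, 105, 40) / HOURS_PER_YEAR)
```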
These electrolytes had a voltage-dependent life span, because at higher voltages the leakage current based on the aggressiveness of the water would increase exponentially; and the associated increased consumption of electrolyte would lead to a faster drying out. A radiation shield between the cap and the hot component prevents the hot component from accelerating failure mechanisms, which can be simply a shorter lifetime (or faster parameter drift), or the opening of the pressure relief vent in extreme cases. For electrolytic and double-layer capacitors, there will be impedance and ESR increase due to electrolyte loss. There are also a number of symptoms that will tell you if the capacitor on a motor is faulty: The motor will not start its load, but if you spin the load by hand, the motor will run properly. 41–63, A. Güntherschulze, H. Betz, Elektrolytkondensatoren, Verlag Herbert Cram, Berlin, 2. Handbook of Inorganic Chemicals. Film capacitors will have some oxidation of the metal conductors, increasing the dissipation factor. The foil is roughened by electrochemical etching to enlarge the effective capacitive surface. On a circuit board, capacitors should not be mounted close to heat sources. Dry e-caps are therefore electrically useless. The highly competitive market in digital data technology and high-efficiency power supplies rapidly adopted these new components because of their improved performance. Forming creates a very thin oxide barrier layer on the anode surface. The motherboard manufacturer ABIT Computer Corp. was the only affected manufacturer that publicly admitted to defective capacitors obtained from Taiwan capacitor makers being used in its products. A common failure mode in this application occurs when such a capacitor "leaks" dc, which upsets the operating point of the next stage. Cross-section side view of etched 10 V low voltage anode foil, SEM image of the rough anode surface of an unused electrolytic capacitor, showing the openings of pores in the anode, Ultra-thin-cross-section of an etched pore in a low-voltage anode foil, 100,000-fold magnification, light grey: aluminium, dark grey: amorphous aluminium oxide, light: pore, in which the electrolyte is active. -- I should state that I bought these from a Chinese website, so I'm not sure about the quality. High currents can cause this failure by evaporating the connection between the metallization and the end contact. Industrial espionage was implicated in the capacitor plague, in connection with the theft of an electrolyte formula. W. J. Bernard, J. J. Randall Jr., The Reaction between Anodic Aluminium Oxide and Water. Split vents Symptoms of defective capacitors may include: 1. But if capacitors are properly selected, they are also the least common.Often the useful life of capacitors is longer than the application itself. The capacitance usually decreases and the ESR usually increases. One of the aluminium foil strips, called the anode, chemically roughened and oxidized in a process called forming, holds a very thin oxide layer on its surface as an electrical insulator serving as the dielectric of the capacitor. However, the corrosion problems linked to water hindered, up to that time, the use of it in amounts larger than 20% of the electrolyte, the water-driven corrosion using the above-mentioned electrolytes being kept under control with chemical inhibitors that stabilize the oxide layer.. 
One problem of the forming or self-healing processes in non-solid aluminium electrolytics is that of corrosion, the electrolyte having to deliver enough oxygen to generate the oxide layer, with water, corrosive of aluminium, being the most efficient way. The first flawed capacitors linked to Taiwanese raw material problems were reported by the specialist magazine Passive Component Industry in September 2002. The capacitance should normally degrade to as low as 70% of the rated value, and the ESR increase to twice the rated value, over the normal life span of the component, before it should be considered as a "degradation failure". Capacitors for ac applications range from high-voltage oil-filled devices, such as the one shown in Figure 5.5, to low voltage, high capacitance devices of the type typically found in power… A 2003 article in The Independent claimed that the cause of the faulty capacitors was in fact due to a mis-copied formula. That is way too low to be using these caps in home appliances. Randomly occurring failures in capacitors during their use are the most important source of failures in capacitors. However, reactions do not come to a standstill, as more and more hydroxide grows in the pores of the anode foil, and the first reaction step produces more and more hydrogen gas in the can, increasing the pressure. Because not all manufacturers had offered recalls or repairs, do-it-yourself repair instructions were written and published on the Internet. But water will react quite aggressively and even violently with unprotected aluminium, converting metallic aluminium (Al) into aluminium hydroxide (Al(OH)3), via a highly exothermic reaction that gives off heat, causing gas expansion that can lead to an explosion of the capacitor. In 2005, Dell spent some US$420 million replacing motherboards outright and on the logistics of determining whether a system was in need of replacement.. There is no wear-out mechanism for solid aluminum or tantalum capacitors, which is a major advantage over wet aluminum capacitors. In the 1990s a third class of electrolytes was developed by Japanese researchers. During a wear-out period, failures increase per hour and become more predictable. While industrial customers confirmed the failures, they were not able to trace the source of the faulty components. Later, electrolytes were developed to work with water of up to 70% by weight. W. BONOMO, G. HOOPER, D. RICHARDSON, D. ROBERTS, and TH. But if capacitors are properly selected, they are also the least common.Often the useful life of capacitors is longer than the application itself. January 1229, pp. AC Capacitor: Reasons and Signs Associated with Failure and Defects | Insight from Your Trusted St. Paul, MN Heating and AC Repair Service Provider 8/17/2020. Broken or cracked vent, often accompanied with visible crusty rust-like brown or red dried electrolyte deposits. To avoid failures in high-temperature applications, the designer should use capacitors with lower losses, a larger size, or a higher temperature rating. Water, with its high permittivity of ε = 81, is a powerful solvent for electrolytes, and possesses high solubility for conductivity-enhancing concentrations of salt ions, resulting in significantly improved conductivity compared to electrolytes with organic solvents like GBL. 
When this failure occurs and a capacitor feeds a pot, the presence of dc will cause the pot to scratch when adjusted, and the pot … But even if the electrical parameters are out of their specifications, the assignment of failure to the electrolyte problem is not a certainty. Ch. The assembly is inserted into an aluminium can and sealed with a plug. This provides a reservoir of electrolyte to extend the lifetime of the capacitor. This page was last edited on 11 December 2020, at 20:21. E-caps can fail without any visible symptoms. It is known that water is extremely corrosive to pure aluminium and introduces chemical defects. I also found this great article when researching ESR titled, “Capacitor Testing, Safe Discharging and Other Related Information”. Many single-phase compressors require a start capacitor to assist in starting the motor. Strange, Carts USA 2006, The Effects of Electrolyte Composition on the Deformation Characteristics of Wet Aluminium ICD Capacitors, J. M. Sanz, J. M. Albella, J. M. Martinez-Duart, On the inhibition of the reaction between anodic aluminium oxide and water. The report of Hillman and Helmold proved that the cause of the failed capacitors was a faulty electrolyte mixture used by the Taiwanese manufacturers, which lacked the necessary chemical ingredients to ensure the correct pH of the electrolyte over time, for long-term stability of the electrolytic capacitors. However, if inhibitors are used in an insufficient amount, they tend to increase pitting. Because it has been customary in electrolytic capacitors to bind the excess hydrogen by using reducing or depolarizing compounds, such as aromatic nitrogen compounds or amines, to relieve the resulting pressure, the researchers then searched for compounds of this type. The name "electrolytic capacitor" derives from the electrolyte, the conductive liquid inside the capacitor. At the beginning of the 1990s, some Japanese manufacturers started the development of a new, low-ohmic water-based class of electrolytes. Electrolytics are the largest capacitors in the radio, with values from about 5 mfd (microfarads) to as much as 80 or even 200 mfd. The improved conductivity of the new electrolyte can be seen by comparing two capacitors, both of which have a nominal capacitance of 1000 μF at 16 V rated voltage, in a package with a diameter of 10 mm and a height of 20 mm. This completes the ‘outer’ electrical connection to the alumin(i)um oxide dielectric, which coats the anode foil. Ceramics will have capacitance loss due to oxide vacancy migration. They always show low capacitance values and very high ohmic ESR values. Up to the mid 1990s electrolytes could be roughly placed into two main groups: It was known that water is a very good solvent for low ohmic electrolytes. However, the pH value of the electrolyte is ideally about 7 (neutral); and measurements carried out as early as the 1970s have shown that leakage current is increased, due to chemically induced defects, when the pH value deviates from this ideal value. Its a difficult and time consuming process when they don’t show obvious signs of bulging or leaking as the photo above shows. After several years of development, researchers led by Shigeru Uzawa had found a mixture of inhibitors that suppressed the aluminium hydration. | 1 | 6 |