Glasses-free 3D is something available on just a handful of TVs, which manage it only through complicated hardware, and at no small cost.
Still, the effect is pretty smooth and no more tiresome for the eyes than glasses-assisted technology, from what we've been able to gather.
There is one more accomplishment that tech experts are looking for: holographic television, which offers a different perspective on the action depending on the angle from which one beholds the TV.
Researchers from the Massachusetts Institute of Technology (MIT) claim to have created something of the sort.
Called the Tensor Display, it uses several layers of liquid crystal displays (LCDs) with a refresh rate of 360 Hz.
The technique is different from the one used in Nintendo's 3DS, which has two layers of LCD screens (the bottom for light and dark bands and the top for the two slightly offset images).
The problem with this old method (a century old, really) is that the only known way of creating multiple perspectives relies on complicated hardware and algorithms. Hundreds of perspectives would have to be produced to suit a moving viewer, which means far too much information has to be displayed at once.
Every frame of the stereo-3D video would need the screen to flicker 10 times, each with a different pattern. Thus, a convincing stereo-3D illusion would need a 1,000 Hz refresh rate.
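Spelled out, the arithmetic behind that figure (the roughly 100 frames-per-second video rate is implied by the article's numbers rather than stated):

$$10\ \text{patterns per frame} \times 100\ \text{frames per second} = 1{,}000\ \text{Hz}.$$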
MIT's Tensor Display lowers that requirement by using a higher number of LCDs, although it does bring another problem: the pattern calculation becomes more complex. Fortunately, the researchers had a convenient factor to exploit: not all aspects of a scene change with the viewing angle. This reduced the amount of information that needed to be sent to the LCD screens.
The end result was a panel that produces stereo-3D images based on calculations similar to those behind X-ray computed tomography (CT), of all things (the technique used to produce 3D images of internal organs).
The Media Lab researchers will demo a Tensor Display prototype at Siggraph 2012 (5-9 August), made of three LCD panels. A second model will have two LCDs with a sheet of lenses between them (to refract light left and right), primarily for wider viewing angles (50 degrees rather than 20).
Practical and commercial applications should appear soon, or at least sooner than any alternatives.
“Holography works, it’s beautiful, nothing can touch its quality. The problem, of course, is that holograms don’t move,” said Douglas Lanman, a postdoc at the Media Lab. “To make them move, you need to create a hologram in real time, and to do that, you need little tiny pixels, smaller than anything we can build at large volume at low cost. So the question is, what do we have now? We have LCDs. They are incredibly mature, and they are cheap.”
What You Should Know About Using HBO In Diabetic Wounds
- Volume 16 - Issue 5 - May 2003
How HBO Can Facilitate Wound Healing
HBO consists of a patient breathing 100 percent oxygen while his or her entire body is enclosed in a pressure chamber. The topical application of oxygen is not recognized by the FDA as hyperbaric therapy and is not reimbursed by Medicare.
Oxygen is transported in two ways: chemically bound to hemoglobin and physically dissolved in the plasma. When the patient breathes air at sea level, the hemoglobin is already fully saturated, so increasing the amount of respired oxygen can affect only the plasma-dissolved oxygen. Breathing oxygen at an elevated atmospheric pressure produces an increase in the plasma-dissolved oxygen fraction, which is proportional to the atmospheric pressure of the respired gas.
Monoplace hyperbaric chambers are usually compressed with oxygen whereas multiplace chambers are compressed with air while the patient breathes 100 percent oxygen using a hood or aviator’s face mask. Typical treatments involve 90 minutes of oxygen breathing at 2.0 to 2.5 atmospheres absolute (ATA) with air breaks administered at 20- to 30-minute intervals in order to reduce the risk of central nervous system oxygen toxicity.
(While HBO treatment is remarkably safe, be aware that otologic and pulmonary barotrauma and central nervous system, pulmonary and ocular oxygen toxicity can occur. Central nervous system oxygen toxicity is rare, but it can manifest as seizures. Seizures are less likely to occur if there are brief periods of air breathing.)
Arterial PO2 elevations of 1500 mmHg or greater are achieved when the body is exposed to pressures of 2 to 2.5 ATA. Soft tissue and muscle PO2 levels can be elevated to about 300 mmHg. Oxygen diffusion varies in a direct linear relationship to the increased partial pressure of oxygen present in the circulating plasma caused by HBO. At pressures of 3 ATA, the diffusion radius of oxygen into the extravascular compartment is estimated to increase from 64 microns to about 247 microns at the pre-capillary arteriole.
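As a rough worked illustration (the plasma solubility figure of about 0.003 mL of O2 per dL per mmHg is a standard physiology value, not one quoted in this article): at a PO2 of 1,500 mmHg,

$$0.003\ \tfrac{\text{mL O}_2}{\text{dL}\cdot\text{mmHg}} \times 1500\ \text{mmHg} \approx 4.5\ \text{mL O}_2\ \text{per dL of plasma},$$

versus roughly 0.3 mL/dL when breathing air at sea level, a dissolved-oxygen level often cited as approaching resting tissue requirements even without hemoglobin transport.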
This significant level of hyperoxygenation allows for the reversal of localized tissue hypoxia, which may be secondary to ischemia or to other local factors within the compromised tissue. Hypoxia is a biochemical barrier to normal wound healing.
In the hypoxic wound, HBO treatment allows an acute correction of the pathophysiology related to oxygen deficiency and impaired wound healing. Using HBO increases oxygen levels within the marginally vascularized periwound compartment, enhancing leukocyte bactericidal function. It may also potentiate some antibiotic effects. There are direct toxic effects on anaerobic bacteria and suppression of exotoxin production. HBO also enhances collagen synthesis, cross-linking, and other matrix deposition.
What The Clinical Evidence Reveals
Additionally, recent evidence suggests that HBO may induce specific growth factor receptors (PDGF) and stimulate growth factor (VEGF) release. There is also evidence that employing HBO may ameliorate or prevent leukocyte-mediated ischemia reperfusion injury.10-13
There have been 13 published peer-reviewed studies (including seven randomized, controlled trials) of HBO in diabetic foot wounds. A total of 606 diabetic patients received HBO with a 71 percent bipedal limb salvage rate, compared to 463 control patients who had a 53 percent bipedal limb salvage rate. All diabetic wounds were Wagner III-IV. It is interesting to compare this to the becaplermin clinical trials that involved Wagner II ulcers. Control patients had healing rates of 25 percent while those receiving becaplermin had healing rates of 43 percent.
A large retrospective series of 1,144 diabetic foot ulcer patients demonstrated the effectiveness of using adjunctive HBO in modified Wagner III, IV and V (equivalent to Wagner grade II, III and V) ulcers, based on ulcer/wound improvement, healing and salvage of bipedal ambulation (see “The Impact Of HBO: What One Study Shows” above).14 Currently, CMS policy reimburses only for treatment of Wagner III and greater ulcers.
In Writing Secure PHP, I covered a few of the most common security holes in websites. It's time to move on, though, to a few more advanced techniques for securing a website. As techniques for 'breaking into' a site or crashing a site become more advanced, so must the methods used to stop those attacks.
Most hosting environments are very similar, and rather predictable. Many web developers are also very predictable. It doesn't take a genius to guess that a site's includes directory (and most dynamic sites use an includes directory for common files) is at www.website.com/includes/. If the site owner has allowed directory listing on the server, anyone can navigate to that folder and browse files.
Imagine for a second that you have a database connection script, and you want to connect to the database from every page on your site. You might well place that in your includes folder, and call it something like connect.inc. However, this is very predictable - many people do exactly this. Worst of all, a file with the extension ".inc" is usually rendered as text and output to the browser, rather than processed as a PHP script - meaning if someone were to visit that file in a browser, they'd be given your database login information.
Placing important files in predictable places with predictable names is a recipe for disaster. Placing them outside the web root can help to lessen the risk, but is not a foolproof solution. The best way to protect your important files from vulnerabilities is to place them outside the web root, in an unusually-named folder, and to make sure that error reporting is set to off (which should make life difficult for anyone hoping to find out where your important files are kept). You should also make sure directory listing is not allowed, and that all folders have a file named "index.html" in (at least), so that nobody can ever see the contents of a folder.
Never, ever, give a file the extension ".inc". If you must have ".inc" in the extension, use the extension ".inc.php", as that will ensure the file is processed by the PHP engine (meaning that anything like a username and password is not sent to the user). Always make sure your includes folder is outside your web root, and not named something obvious. Always make sure you add a blank file named "index.html" to all folders like include or image folders - even if you deny directory listing yourself, you may one day change hosts, or someone else may alter your server configuration - if directory listing is allowed, then your index.html file will make sure the user always receives a blank page rather than the directory listing. As well, always make sure directory listing is denied on your web server (easily done with .htaccess or httpd.conf).
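As a minimal sketch of that layout (the paths, folder name, and helper function here are illustrative, not from the article):

```php
<?php
// Public page, e.g. /var/www/html/index.php. The private folder sits OUTSIDE
// the web root, so its contents can never be served directly to a browser,
// even if directory listing is ever re-enabled by mistake.
require_once '/var/www/private_a7x9/db.inc.php'; // unusually named folder, .inc.php extension

$db = get_database_connection(); // hypothetical helper defined in db.inc.php
```

On Apache, directory listing itself can be switched off with a one-line 'Options -Indexes' in .htaccess or httpd.conf.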
Out of sheer curiosity, shortly after writing this section of this tutorial, I decided to see how many sites I could find in a few minutes vulnerable to this type of attack. Using Google and a few obvious search phrases, I found about 30 database connection scripts, complete with usernames and passwords. A little more hunting turned up plenty more open include directories, with plenty more database connections and even FTP details. All in, it took about ten minutes to find enough information to cause serious damage to around 50 sites, without even using these vulnerabilities to see if it were possible to cause problems for other sites sharing the same server.
Most site owners now require an online administration area or CMS (content management system), so that they can make changes to their site without needing to know how to use an FTP client. Often, these are placed in predictable locations (as covered in the last article), however placing an administration area in a hard-to-find location isn't enough to protect it.
Most CMSes allow users to change their password to anything they choose. Many users will pick an easy-to-remember word, often the name of a loved one or something similar with special significance to them. Attackers will use something called a "dictionary attack" (or "brute force attack") to break this kind of protection. A dictionary attack involves entering each word from the dictionary in turn as the password until the correct one is found.
The best way to protect against this is threefold. First, you should add a Turing test to the login page. Have a randomly generated series of letters and numbers on the page that the user must enter to log in. Make sure this series changes each time the user tries to log in, that it is an image (rather than simple text), and that it cannot be identified by an optical character recognition script.
Second, add in a simple counter. If you detect a certain number of failed logins in a row, disable logging in to the administration area until it is reactivated by someone responsible. If you only allow each potential attacker a small number of attempts to guess a password, they will have to be very lucky indeed to gain access to the protected area. This might be inconvenient for authentic users, however is usually a price worth paying.
Finally, make sure you track IP addresses of both those users who successfully login and those who don't. If you spot repeated attempts from a single IP address to access the site, you may consider blocking access from that IP address altogether.
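Here is one possible shape for the failed-login counter and IP tracking, sketched with PDO; the table name, column names, and thresholds are hypothetical and not taken from the article:

```php
<?php
// Returns true when login attempts from this IP address should be blocked.
function too_many_failures(PDO $db, string $ip, int $limit = 5): bool
{
    $stmt = $db->prepare(
        'SELECT COUNT(*) FROM failed_logins
         WHERE ip = ? AND attempted_at > (NOW() - INTERVAL 1 HOUR)'
    );
    $stmt->execute([$ip]);
    return (int) $stmt->fetchColumn() >= $limit;
}

// Call this every time a login attempt fails.
function record_failure(PDO $db, string $ip): void
{
    $db->prepare('INSERT INTO failed_logins (ip, attempted_at) VALUES (?, NOW())')
       ->execute([$ip]);
}
```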
One excellent way to limit the damage caused by someone who gains access to your database when they shouldn't be able to is to restrict what that database account is allowed to do. Modern databases like MySQL and SQL Server allow you to control what a user can and cannot do. You can give users (or not) permission to create data, edit, delete, and more using these permissions. Usually, I try to ensure that I only allow users to add and edit data.
If a site requires an item be deleted, I will usually set the front end of the site to only appear to delete the item. For example, you could have a numeric field called "item_deleted", and set it to 1 when an item is deleted. You can then use that to prevent users seeing these items. You can then purge these later if required, yourself, while not giving your users "delete" permissions for the database. If a user cannot delete or drop tables, neither can someone who finds out the user login to the database (though obviously they can still do damage).
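The soft-delete idea in code, assuming $db is a PDO connection and using illustrative table and column names:

```php
<?php
// "Delete" from the front end: flag the row rather than removing it.
$db->prepare('UPDATE items SET item_deleted = 1 WHERE item_id = ?')
   ->execute([$itemId]);

// Wherever the site lists items, show only rows that have not been flagged.
$items = $db->query('SELECT item_id, title FROM items WHERE item_deleted = 0')
            ->fetchAll(PDO::FETCH_ASSOC);

// Flagged rows can be purged later by an account that does hold DELETE rights.
```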
PHP contains a variety of commands with access to the operating system of the server, and that can interact with other programs. Unless you need access to these specific commands, it is highly recommended that you disable them entirely.
For example, the eval() function allows you to treat a string as PHP code and execute it. This can be a useful tool on occasion. However, if you use the eval() function on any input from the user, the user could cause all sorts of problems. Without careful input validation, you could be giving the user free rein to execute whatever commands he or she wants.
There are ways to get around this. Not using eval() is a good start. However, the php.ini file gives you a way to completely disable certain functions in PHP - "disable_functions". This directive of the php.ini file takes a comma-separated list of function names, and will completely disable these in PHP. Commonly disabled functions include ini_set(), exec(), fopen(), popen(), passthru(), readfile(), file(), shell_exec() and system().
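For example, a single php.ini line along these lines does the job; the exact list should be trimmed to whatever your own scripts genuinely need, since disabling a function your code relies on will break it:

```ini
; php.ini: comma-separated list of functions PHP will refuse to run
disable_functions = exec,passthru,shell_exec,system,popen,readfile,ini_set
```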
It may be (it usually is) worth enabling safe_mode on your server. This instructs PHP to limit the use of functions and operators that can be used to cause problems. If it is possible to enable safe_mode and still have your scripts function, it is usually best to do so.
Finally, Be Completely and Utterly Paranoid
Much as I hate to bring this point up again, it still holds true (and always will). Most of the above problems can be avoided through careful input validation. Some become obvious points to address when you assume everyone is out to destroy your site. If you are prepared for the worst, you should be able to deal with anything.
Ready for more? Try Writing Secure PHP, Part 3.
Right now, the accelerator is stopped for the annual maintenance shutdown. This is the opportunity to fix all problems that occurred during the past year both on the accelerator and the experiments. The detectors are opened and all accessible malfunctioning equipment is being repaired or replaced.
In the 27-km long LHC tunnel, surveyors are busy getting everything realigned to a high precision, while various repairs and maintenance operations are on their way. By early March, all magnets will have been cooled down again and prepared for operation.
The experimentalists are not only working on their detectors but also improving all aspects of their software: the detector simulations, event reconstruction algorithms, particle identification schemes and analysis techniques are all being revised.
By late March, the LHC will resume colliding protons with the goal of delivering about 16 inverse femtobarns of data, compared to 5 inverse femtobarns in 2011. This will enable the experiments to improve the precision of all measurements achieved so far, push all searches for new phenomena slightly further and explore areas not yet tackled. The hope is to discover particles associated with new physics revealing the existence of new phenomena. The CMS and ATLAS physicists are looking for dozens of hypothetical particles, the Higgs boson being the most publicized but only one of many.
When protons collide in the LHC accelerator, the energy released materializes in the form of massive but unstable particles. This is a consequence of the well-known equation E=mc², which simply states that energy (represented by E) and mass (m) are equivalent; each one can change into the other. The symbol c² represents the speed of light squared and acts like a conversion factor. This is why in particle physics we measure particle masses in units of energy like GeV (giga electronvolt) or TeV (tera electronvolt). One electronvolt is the energy acquired by an electron accelerated through a potential difference of one volt.
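For scale (standard reference values, not taken from the original post):

$$1\ \text{eV} \approx 1.6\times10^{-19}\ \text{J}, \qquad 1\ \text{TeV} = 10^{12}\ \text{eV}, \qquad m_{\text{proton}} \approx 0.938\ \text{GeV}/c^2,$$

so a 7 TeV collision carries several thousand times the rest-mass energy of the two colliding protons, and it is that surplus energy which can materialize as new, heavier particles.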
It is therefore easier to create lighter particles since less energy is required. Over the past few decades, we have already observed the lighter particles countless times in various experiments. So we know fairly well how many events containing them we should observe. We can tell when new particles are created when we see more events of a certain topology than what we expect from those well-known phenomena, which we refer to as the background.
We can claim that something additional and new is also occurring when we see an excess of events. Of course, the bigger the excess, the easier it is to claim something new is happening. This is the reason why we accumulate so many events, each one being a snapshot of the debris coming out of a proton-proton collision. We want to be sure the excess cannot be due to some random fluctuation.
Some of the particles we are looking for are expected to have a mass in the order of a few hundred GeV. This is the case for the Higgs boson and we already saw possible signs of its presence last year. If the observed excess continues to grow as we collect more data in 2012, it will be enough to claim the Higgs boson discovery beyond any doubt in 2012 or rule it out forever.
Other hypothetical particles may have masses as large as a few thousand GeV or equivalently, a few TeV. In 2011, the accelerator provided 7 TeV of energy at the collision point. The more energy the accelerator has, the higher the reach in masses, just like one cannot buy a 7000 CHF car with 5000 CHF. So to create a pair of particles with a mass of 3.5 TeV (or 3500 GeV), one needs to provide at least 7 TeV to produce them. But since some of the energy is shared among many particles, the effective limit is lower than the accelerator energy.
There are ongoing discussions right now to decide if the LHC will be operating at 8 TeV this year instead of 7 TeV as in 2011. The decision will be made in early February.
If CERN decides to operate at 8 TeV, the chances of finding very heavy particles will slightly increase, thanks to the extra energy available. This will be the case for searches for particles like the W’ or Z’, a heavier version of the well-known W and Z bosons. For these, collecting more data in 2012 will probably not be enough to push the current limits much farther. We will need to wait until the LHC reaches full energy at 13 or 14 TeV in 2015 to push these searches higher than in 2011 where limits have already been placed around 1 TeV.
For LHCb and ALICE, the main goal is not to find new particles. LHCb aims at making extremely precise measurements to see if there are any weak points in the current theoretical model, the Standard Model of particle physics. For this, more data will make a whole difference. Already in 2011, they saw the first signs of CP-violation involving charm quarks and hope to confirm this observation. This measurement could shed light on why matter overtook antimatter as the universe expanded after the Big Bang when matter and antimatter must have been created in equal amounts. They will also investigate new techniques and new channels.
Meanwhile, ALICE has just started analyzing the 2011 data taken in November with lead ion collisions. The hope is to better understand how the quark-gluon plasma formed right after the Big Bang. This year, a special run involving collisions of protons and lead ions should bring a new twist in this investigation.
Exploring new corners, testing new ideas, improving the errors on all measurements and most likely getting the final answer on the Higgs: that is what we are in for with the LHC in 2012. Let’s hope that in 2012 the oriental dragon, symbol of perseverance and success, will see our efforts bear fruit.
To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.
The subject of batteries for field shooters used to be as simple as charging them until the red light went out, slapping them on the camera and shooting until they died. Now, the typical ENG/EFP crew carries a much wider array of battery-operated devices. Notebook computers, cell and satellite phones, PDAs, belt-clipped radios, micro-mixers and even GPS receivers may accompany camcorders and batt-lights. Modern field shooters must know their way around battery systems.
Modern batteries communicate digitally with chargers like the Anton-Bauer Dual 2702 PowerCharger shown above while talking to the user through an LCD window.
Batteries are usually defined by the chemistry they use. The three most common types are nickel cadmium (NiCd), nickel metal hydride (NiMH), and lithium ion (Li-ion). Each has its strengths and weaknesses. We’ll compare their performance later in the article. But first, let’s define the specifications we use to judge them.
Regardless of the battery type involved, there are a few fundamental specifications that field crews will frequently encounter, including energy density, fast-charge time, self-discharge time, maintenance requirement and C-rate.
Energy density is a measure of how much power the battery will deliver for its weight, and is usually measured in watt-hours per kilogram (Wh/kg). This is one of the central factors in matching battery type to application.
Fast-charge time is another factor to consider. Usually measured as a fraction of the battery’s rated capacity over time, this parameter has seen dramatic advances with the advent of battery-centric charging using smart batteries and chargers.
Another primary factor in matching batteries to their uses is a spec called “self-discharge time,” usually measured as a percentage of capacity per month. This refers to the rate at which the fully charged battery will lose its charge while at rest. Self-discharge is an important parameter because this decline in voltage is not linear.
This photo shows the two most common camera battery mounts. The camera on the left has the Anton-Bauer Gold Mount. The other has the Sony V-mount.
Most battery types tend to lose a significant portion of their charge within the first 24 hours of storage, followed by a slower but steady discharge. Storage at higher-than-normal room temperatures will degrade internal resistance and accelerate self-discharge on any battery.
A significant specification is the maintenance requirement. This typically refers to how often an equalizing or topping charge should be applied. In the case of nickel-based batteries, the maintenance requirement will include “exercising” the battery by running it down to its end-of-discharge voltage and then fully recharging to combat the infamous memory effect in NiCd batteries.
The C-rate is a measurement of the charge and discharge current of the battery. A discharge of 1C will equal the published current capacity of the battery. A battery rated at 500 mAh (milliamp hours) will discharge at 1C to deliver that current for one hour. If discharged at 2C, the same battery should provide 1000 milliamps for a half hour. Note that the measurement is made from maximum capacity to the end-of-discharge level, not to 0V. On NiCds, for instance, the typical end-of-discharge level is 1V per cell. Li-ions generally discharge to 3V.
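Written as a formula (simply a restatement of the example above, assuming the battery delivers its full rated capacity):

$$t_{\text{run}} = \frac{\text{rated capacity (mAh)}}{n \times \text{rated capacity (mA)}} = \frac{1}{n}\ \text{hours at a discharge rate of } n\text{C},$$

so the 500 mAh pack runs for one hour at 1C (500 mA) and half an hour at 2C (1,000 mA).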
While there are many other battery specs, such as load current, cost-per-cycle, overcharge tolerance and cycle life, the specs mentioned above will form the basic stepping stones to a good battery-to-application match. Let’s see how the various battery chemistries compare on these main specs.
Despite the emergence of new battery types, nickel-cadmium (NiCd) batteries maintain a prominent place in powering professional camcorders, batt-lights and portable comm radios. This is due to their exceptional performance in high-current applications. NiCds also accept fast charges quite well compared to the other battery chemistries. Typical fast-charge time on NiCd units is one hour, while NiMH batteries will fast-charge in two to four hours and deliver about one-fourth the load current.
Cameras have shrunk while lenses and batteries have kept their size and weight, allowing each to balance the other. Without rear-mount batteries, smaller cameras would be front-heavy, and on shoulder-mounted cameras, balance rather than weight is the critical factor.
NiCd batteries will self-discharge slightly faster than NiMH and much faster than Li-ion types. The big edge that the NiMH and Li-ion batteries have over NiCd is in energy density. In applications that require a high power-to-weight ratio, the Li-ion is the king of these beasts, with a typical spec of 100Wh/kg to 130Wh/kg. By comparison, NiMHs offer a power-to-weight ratio ranging from 60Wh/kg to 120Wh/kg, while NiCds range from 45Wh/kg to 80Wh/kg.
The Achilles heel of NiCd batteries is their maintenance requirement. They must be regularly exercised (some harried shooters might say exorcised) to avoid the formation of crystals inside the battery and the resulting tendency to discharge only as far as the minimum voltage level to which they have been frequently run. Also, since cadmium is an environmentally toxic metal, NiCd batteries are increasingly seen as a liability. Some countries now severely limit their use due to disposal problems.
Memory or mismatch?
Frequently, what appears to be a memory effect may be a mismatch between the cutoff voltage level of the device and that of the battery. To get the full capacity of the battery, its end-of-discharge voltage must be higher than the cutoff voltage for the camcorder or other device being powered. A mismatch in these values will cause the device to quit while the battery still has power. Mimicking the memory effect, this will cause a nickel-based battery to be repeatedly recharged before reaching its own end-of-discharge voltage and eventually develop a real memory.
Getting simpler again
The latest “smart” batteries, chargers and cameras can communicate digitally. The battery can control the smart charger for the perfect charge cycle and the cameras can display all the needed power parameters right in the viewfinder. Just when the mix of battery chemistries and their characteristics was becoming increasingly complex, the advent of digital communication between the central components promises to make things a good bit easier.
Bennett Liles is a writer and TV production engineer in the Atlanta area.
Community inDetail, Part 3
Myths and Realities
Myth 1: Institutions are the best setting for some individuals with severe intellectual and developmental disabilities.
Four groups of people are often cited as the most difficult to serve in the community.
- Medically Fragile: Some institution residents have complex medical conditions such as seizure disorder, aspiration risk, and dysphagia, requiring intensive medical support. If skilled nursing and medical planning are provided, successful community placement of people with complex medical issues can be ensured (Kozma et al., 2003).
- Dual Diagnoses: Half of institution residents have a condition requiring psychiatric attention (Lakin et al., 2009). Often people with dual diagnoses need high levels of services and supports that require integrated interventions from both ID/DD and mental health providers. Often ID/DD providers do not have the capacity to provide treatment for mental health issues, and mental health providers do not have the capacity to provide self-care supports to address ID/DD issues. Joint system planning can be difficult because the two types of services are available through different funding streams (Day, 2009).
- Involved with the Criminal Justice System: Developmental services agencies are expected to serve a public safety function for these individuals. This can be challenging in the context of developing a system designed to promote self-determination and community participation (Bascom, 2009).
- Older People Who Have Spent Many Years in the Institution: Older residents who have spent many years in an institution present several challenges; they (or their parents or guardians) may feel that the institution is their home and they do not want to be uprooted. Many have never had the experience of living in the community.
Some states have developed specific strategies to meet the needs of challenging populations, including those with the most significant challenges. People with co-occurring developmental disabilities and mental illnesses and older adults with developmental disabilities are particularly vulnerable populations. They face barriers to services related to a lack of coordination and collaboration across service systems, as well as gaps in research, clinical expertise, and access to appropriate programs. This lack of coordination has many causes, including separate systems for financing services; a reluctance by mental health and developmental disabilities systems to allocate scarce resources for a high-needs population that could be served in another service system; established provider networks that are not cross-trained; and the evolution of advocacy movements emphasizing different priorities. In many cases, specific barriers to service may be both a cause and a result of the lack of coordination across systems.
In 2002, the Surgeon General addressed the needs of vulnerable populations in A National Blueprint to Improve the Health of Persons with [Mental Retardation].
States and advocates have implemented strategies and programs to address the needs of people with complex medical needs, people with dual diagnoses, and older adults with developmental disabilities. For example:
- To facilitate the closure of Agnews Developmental Center, California created 23 licensed homes in the community that provide sophisticated medical support (SB 962 homes). Although they are expensive (an average monthly cost of $15,000 per person), they seem to be meeting the needs of a medically fragile population (California Health and Human Services Agency, 2010).
- In 2008, Tennessee opened a 16-bed ICF/DD with medical services including 24-hour nursing care.
- Missouri advocates founded the Association on Aging with Developmental Disabilities to increase awareness of the importance of providing community-based services and support focusing on older adults with developmental disabilities.
- The Florida Department of Elder Affairs sponsored training for service providers on meeting the needs of aging people with developmental disabilities. (www.adrc-tae.org/tiki-download_file.php?fileId=30426)
- As part of a federal lawsuit settlement, the State of Hawaii is required to take specific steps to identify people with developmental disabilities within the mental health system and ensure that there are smooth discharges from the state psychiatric hospital.
- In 2008, the New Jersey Department of Human Services convened the Dual Diagnosis Task Force to examine and resolve the serious lack of services, unmet service needs, and other significant obstacles to receiving mental health and developmental disability services. The task force made recommendations on a framework for change that would enable the service system to effectively serve the needs of children and adults with developmental disabilities and co-occurring mental health and/or behavior disorders.
- Oregon and several other states use person-centered planning, coupled with individual budgeting, to adequately address complex individual needs.
- Maryland’s Rosewood Center placed 17 of the 30 court-committed individuals in the community and 13 in a secure residential facility to ensure public safety. In the community, the individuals were placed in small residences with a range of supports, including one-to-one supervision and/or awake overnight supervision, or creative monitoring in a small (up to three individuals) residential setting with day, vocational, or supported employment services. Monitoring may include oversight by another agency (regular reporting to a probation officer through the Department of Corrections) or monitoring devices (alarmed windows and doors) (Maryland Developmental Disabilities Administration, 2008).
Myth 2: The quality of care cannot be assured in a community-based residential setting.
Opponents of institutional closure argue that it is easier to monitor the quality of a small number of large institutions rather than a large number of smaller facilities. Proponents of deinstitutionalization admit that “in the early phases of deinstitutionalization, efforts to develop quality assurance strategies suited to community services were sometimes subordinated in the rush to meet court-ordered deadlines” (Bradley and Kimmich, 2003).
Most states have now developed mechanisms to monitor the quality of community-based services. However, no quality assurance mechanism is foolproof, and incidents of abuse, neglect, and even death occur in the community, just as they do in institutions. We have found no studies comparing the rate of adverse incidents in the community with the rate in institutional settings.
Family, friends, and neighbors play important roles in assuring safety and service quality for people in community-based settings. Several researchers found that family presence and participation in the person’s life can be an important safeguard for security and service quality (Lemay, 2009) and should be regarded as the most important and dependable source of quality assurance.
Although there are few specific federal requirements as to how states must assure quality, states must persuade the Centers for Medicare and Medicaid Services (CMS) that the state can assure health and welfare. CMS has established a Quality Framework that addresses access, person-centered planning and service delivery, provider capacity, participant safeguards, rights and responsibilities, outcomes and satisfaction, and system performance. Though it is not regulatory, it provides a framework for certain expectations of quality outcomes for HCBS Waiver program services.
In recent years, most states and communities have increased regulation or oversight of community-based services. Most states have multifaceted systems of quality assurance, including the participation of different stakeholders in and outside government and the service system. Systems of quality assurance include the following (from Bascom, 2009):
- Licensure: Group homes and other community residences where three or more unrelated people with disabilities live require licensure.
- Quality Management Reviews: Reviewers assess Medicaid-funded services to ensure compliance with state and federal Medicaid standards. In Vermont, for example, site visits are conducted every two years, with follow-up as appropriate.
- Guardianship: Public guardians who are provided to adults with developmental disabilities play distinct quality assurance functions. They are expected to have regular (in some states at least monthly) face-to-face contact with the people for whom they are guardians and to monitor their welfare and quality of life and advocate for appropriate services.
- Safety and Accessibility Checks: All residences of people with developmental disabilities are inspected for compliance with safety and accessibility standards.
- Consumer and Family Surveys: Annually, about 25 states participate in the National Core Indicators (NCI) survey, run by the National Association of State Directors of Developmental Disabilities Services and the Human Services Research Institute, which canvasses consumers and family members to measure the satisfaction of people receiving services and to measure what services people report receiving. (http://www2.hsri.org/nci)
- Critical Incident Reporting Process: Most states have a critical incident reporting process, whereby developmental disability service providers report to the state developmental disability agency when certain incidents take place, such as the death of someone receiving services; use of restrictive procedures; allegations of abuse, neglect, or exploitation; or criminal behavior by or against someone receiving services.
- Grievance and Appeals: The only formal federal requirement for developmental disability service providers is that they provide rights of appeal for eligibility decisions. However, many states require each developmental disability service provider to have written grievance and appeals procedures and to inform applicants and service recipients of that process.
- Abuse Complaints: Any human service provider is legally required to file an immediate report of any suspected abuse, neglect, or exploitation of a vulnerable adult.
- Medicaid Fraud Unit: The Medicaid Fraud Unit is a specially staffed unit within the Office of the Attorney General. It investigates allegations of criminal activity, including abuse, neglect, or exploitation, in any Medicaid-funded facility or involving a person receiving Medicaid-funded supports.
- Service Coordination: The role of service coordinator or case manager often includes the functions of monitoring and advocacy. In some states, the service coordinator is the focal point for individual-based quality assurance at the local level.
- Advocacy: Empowered service users and families are powerful components in the quality assurance chain. Self-advocacy groups work to empower people with disabilities to learn about their rights, step forward, and speak for themselves. In addition, advocacy organizations such as The Arc provide information, support, and advocacy for people with disabilities and their families.
- Other Organizations: Other organizations develop the capacity to monitor specific groups of people. For example, the Guardianship Trust in Vermont provides regular, structured individually based citizen monitoring of residential services provided by the state. Brandon Training School Association is an alliance of parents and other people concerned with the well-being of former residents of Brandon Training School.
Myth 3: Community-based settings do not offer the same level of safety as institutional settings.
All states take measures to make sure that people, whether living in institutions or in the community, are healthy, safe, and protected from harm. However, if the state’s safeguards are not rigorous, closely enforced, and monitored, people with developmental disabilities are not safe, regardless of where they live. Two significant factors increase the risk of abuse and neglect: isolation from family and a system that rewards compliant attitudes among people with developmental disabilities (Valenti-Hein and Schwartz, 1995).
The NCI 2009–2010 survey shows that the majority of people with ID/DD feel safe in their home, in their neighborhood, and their work/day program/daily activity. More than 90 percent of the individuals surveyed reported that they have someone to go to when they feel afraid. Nevertheless, some opponents of deinstitutionalization claim that the safeguards offered in the community are inadequate to ensure the physical safety of a very vulnerable population.
Based on newspaper reports, Protection and Advocacy investigations, and state investigations, it is clear that instances of abuse and neglect occur in community settings, and some of them result in unnecessary deaths. However, the same can be said about institutions. For example, the 2009 “fight club” incident, in which institution workers forced residents to fight one another while employees taped the incidents on their cell phones, made national news. In 2007, the Atlanta Journal-Constitution published an exposé on state mental health hospitals that revealed more than 100 suspicious deaths of patients during the previous five years (Judd, 2010). The 2002 death of Brian Kent in Kiley Center in Waukegan, Illinois, revealed a pattern of neglect caused by unprofessional attitudes, administrative indifference, lack of competence, and caregiver fatigue (Equip for Equality, 2008).
As systems of care become more sophisticated and mature, states are able to move toward increasing their quality assurance efforts to protect health and safety. Missouri, for example, has instituted a Health Identification Planning System, which represents the quality monitoring process for the discovery and remediation of health and safety concerns for individuals in Division of Developmental Disability community residential services. A Health Inventory tool is completed on all people when they enter a community placement and annually, as well as when there are significant health changes. Regional Office registered nurses complete Nursing Reviews on individuals with a defined score on their health inventory. These reviews evaluate the provider’s health supports and services, evaluate the individual’s response to treatment, and identify unmet health care needs.
Missouri also created an Office of Constituent Services to serve as an advocate for people with ID/DD.
Myth 4: Mortality rates are higher in the community for individuals with ID/DD than in institutions.
Older adults or adults who are medically fragile have a higher mortality rate regardless of where they live (or their geographic location). As a result, mortality comparisons are not straightforward and require complex statistical approaches. For example, a Massachusetts study on deaths showed that the average age at death varied across residential settings. The study indicated generally that the average age of death for each residential setting reflects the relative age and health status of the residents in each of the residential settings. The study also showed that mortality rates are lowest among people living at home or with family. (Center for Developmental Disabilities Evaluation and Research (CDDER), 2010). The study showed that people with developmental disabilities generally died of the same causes as the general population. Heart disease remained the leading cause of death and Alzheimer’s disease the second leading cause.
The Massachusetts Department of Developmental Services (DDS), in collaboration with the CDDER, has focused on the health status of people with developmental disabilities. Examples of projects they have taken on in Massachusetts include the following:
- Identification and customization of a health screening tool for use by direct supportive providers
- Development of Preventive Health Guidelines for Individuals with Mental Retardation
- Root Cause Analysis training and support
- Incident Management protocol development
- Mapping the community-based system of mental health and physical health supports
- Annual mortality reports
- Annual Quality Assurance reports and the development of web-based Quality Briefs
- Implementation of the DDS STOP Falls Pilot to identify patterns and risk factors for falls among people with ID/DD
- Implementation and evaluation of a pilot study of DDS’s new Health Promotion and Coordination initiative
- Support in development of training modules for community providers
- Quantitative analysis of clinical service capacity within the residential provider system
- Analysis of Medicaid pharmacy utilization claims data
An increasing number of states conduct mortality studies, review each death, and have proactively begun programs and initiatives to improve the health status of people with developmental disabilities. However, adults with developmental disabilities are more likely to develop chronic health conditions at younger ages than other adults due to biological factors related to syndromes and associated developmental disabilities, limited access to adequate health care, and lifestyle and environmental issues. They have higher rates of obesity, sedentary behaviors, and poor nutritional habits than the general population (Yamaki, 2005).
Most studies find that the mortality rate is comparable across settings or is favorable in community settings. For example:
- Conroy and Adler (1998) found improved survival for people leaving the Pennhurst Institution for life in the community and no evidence of transfer trauma.
- Lerman, Apgar, and Jordan (2003) found the death ratio of 150 movers who left a New Jersey institution was quite comparable to a matched group of 150 stayers after controlling for critical high risk variables.
- Heller et al. (1998) found that, although transitions from institutions or nursing homes to community settings may result in short-term stress and risks that may affect mortality (transfer trauma), the long-term survival rates improve.
- Hsieh et al. (2009) found that, regardless of residential location, those who had a greater variation in the physical environment and greater involvement in social activities had a lower risk of mortality.
Despite such findings, opponents of deinstitutionalization continue to use the mortality argument. In its advocacy literature, one group continues to cite Strauss, Eyman, and Grossman (1996) and Strauss, Kastner, and Shavelle (1998), who suggest that people with developmental disabilities, particularly those with severe disabilities, have higher mortality rates in the community than in institutions.
Subsequent studies did not reproduce these results. O’Brien and Zaharia (1998) question the accuracy of the database used by Strauss and colleagues, Durkin (1996) critiques Strauss’s methodology, and Lerman et al. (2003) review a number of unsuccessful attempts to reproduce the results.
Colon cancer, a malignant tumor of the large intestine, affects both men and women; 2-6% of all men and women develop it in their lifetime.
The vast majority of colon cancer cases are not hereditary. However, approximately 5 percent of individuals with colon cancer have a hereditary form. In those families, the chance of developing colon cancer is significantly higher than in the average person. Identifying those individuals and families that might be at-risk for hereditary colon and associated cancers can dramatically reduce the number of cancer diagnoses in these families.
Colon Cancer Genes
Several genes have been identified which contribute to a susceptibility to colon cancer. The two most common inherited colon cancer conditions are FAP and HNPCC.
- FAP (familial adenomatous polyposis)
Individuals with this syndrome develop many polyps in their colon (often over 100). People who inherit mutations in the APC gene have a nearly 100 percent chance of developing colon cancer by age 40. In addition, having FAP increases the risk of developing hepatoblastoma, desmoid tumors, fibromas, and other cancers. If a patient has more than 10 adenomatous polyps in their lifetime, a cancer risk assessment is appropriate.
- HNPCC (hereditary nonpolyposis colorectal cancer)
Individuals with an HNPCC gene mutation have an estimated 80 percent lifetime risk of developing colon or rectal cancer. There is also a 40-60 percent chance for endometrial cancer. Other cancer risks are increased as well.
Patients with the following characteristics should be referred for a cancer risk assessment:
- Patient diagnosed with colon cancer younger than age 50 years.
- Patient has multiple colon cancers or more than one HNPCC related cancer.*
- Patient has colon cancer and one relative with an HNPCC related tumor* under age 50 years.
- Patient has colon cancer and two or more first or second degree (parents, siblings, aunts, uncles, grandparents) relatives with HNPCC related* cancers at any age.
*Colon, endometrial, ovarian, stomach, small bowel, biliary tract or transitional cell of the renal pelvis.
Digital Audio Networking Demystified
The OSI model helps bring order to the chaos of various digital audio network options.
Credit: Randall Fung/Corbis
Networking has been a source of frustration and confusion for pro AV professionals for decades. Fortunately, the International Organization for Standardization, more commonly referred to as ISO, created a framework in the early 1980s called the Open Systems Interconnection (OSI) Reference Model, a seven-layer model that defines network functions, to help simplify matters.
Providing a common understanding of how to communicate to each layer, the OSI model (Fig. 1) is basically the foundation of what makes data networking work. Although it's not important for AV professionals to know the intricate details of each layer, it is vital to at least have a grasp of the purpose of each layer as well as general knowledge of the common protocols in each one. Let's take a look at the some key points.
The Seven Layers
Starting from the bottom up, the seven layers of the OSI Reference Model are Physical, Data Link, Network, Transport, Session, Presentation, and Application. The Physical layer is just that — the hardware's physical connection that describes its electrical characteristics. The Data Link layer is the logic connection, defining the type of network. For example, the Data Link layer defines whether or not it is an Ethernet or Asynchronous Transfer Mode (ATM) network. There is also more than one data network transport protocol. The Data Link layer is divided into two sub-layers: the Media Access Control (MAC) and the Logical Link Control (above the MAC as you move up the OSI Reference Model).
The seven layers of the Open Systems Interconnection (OSI) Reference Model for network functions.
Here is one concrete example of how the OSI model helps us understand networking technologies. Some people assume that any device with a CAT-5 cable connected to it is an Ethernet device. But it is Ethernet's Physical layer that defines an electrical specification and physical connection — CAT-5 terminated with an RJ-45 connector just happens to be one of them. For a technology to fully qualify as an Ethernet standard, it requires full implementation of both the Physical and Data Link layers.
The Network layer — the layer at which network routers operate — “packetizes” the data and provides routing information. The common protocol for this layer is the Internet Protocol (IP).
Layer four is the Transport layer. Keep in mind that this layer has a different meaning in the OSI Reference Model compared to how we use the term “transport” for moving audio around. The Transport layer provides protocols to determine the delivery method. The most popular layer four protocol is Transmission Control Protocol (TCP). Many discuss TCP/IP as one protocol, but actually they are two separate protocols on two different layers. TCP/IP is usually used as the data transport for file transfers or audio control applications.
Comparison of four digital audio technologies using the OSI model as a framework.
TCP provides a scheme in which the receiving device sends an acknowledgement for each packet it receives. If a packet of information goes missing, a message goes back to the sender asking it to resend. This feature is great for applications that are not time-dependent, but it is not useful in real-time applications like audio and video.
Streaming media technologies most common on the Web use another method called User Datagram Protocol (UDP), which simply streams the packets. The sender never knows if they actually arrive or not. Professional audio applications have not used UDP because they are typically Physical layer or Data Link layer technologies, not Transport layer. However, a newcomer to professional audio networking, Australia-based Audinate, has recently introduced the first professional audio networking technology to use UDP/IP over Ethernet with its product, called Dante.
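To make the difference concrete, here is a tiny, purely illustrative PHP sketch of UDP's fire-and-forget sending; real audio-over-Ethernet systems such as CobraNet or Dante implement this in dedicated hardware and firmware, and the address, port, and packet size below are made up:

```php
<?php
// UDP: just stream the packets; nothing tells the sender whether they arrived.
$sock  = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
$frame = random_bytes(1024); // stand-in for one packet's worth of audio samples
socket_sendto($sock, $frame, strlen($frame), 0, '192.168.1.50', 9000);
socket_close($sock);
// A TCP socket (SOCK_STREAM) would instead wait for acknowledgements and
// retransmit lost packets, exactly the behavior that gets in the way of real-time audio.
```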
The Session and Presentation layers are not commonly used in professional audio networks; therefore, they will not be covered in this article. Because these layers can be important to some integration projects, you may want to research the OSI model further to complete your understanding of this useful tool.
The purpose of the Application layer is to provide the interface tools that make networking useful. It is not used to move audio around the network. It controls, manages, and monitors audio devices on a network. Popular protocols are File Transfer Protocol (FTP), Telnet, Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), and Virtual Private Network (VPN), to name just a few.
Now that you have a basic familiarity with the seven layers that make up the OSI model, let's dig a little deeper into the inner workings of a digital audio network.
Breaking Down Audio Networks
Audio networking can be broken into in two main concepts: control and transport. Configuring, monitoring, and actual device control all fall into the control category and use several standard communication protocols. Intuitively, getting digital audio from here to there is the role of transport.
Control applications can be found in standard protocols of the Application layer. Application layer protocols that are found in audio are Telnet, HTTP, and Simple Network Management Protocol (SNMP). Telnet is short for TELetype NETwork and was one of the first Internet protocols. Telnet provides command-line style communication to a machine. One example of Telnet usage in audio is the Peavey MediaMatrix, which uses this technology, known as RATC, as a way to control MediaMatrix devices remotely.
SNMP is a protocol for monitoring devices on a network. There are several professional audio and video manufacturers that support this protocol, which provides a method for managing the status or health of devices on a network. SNMP is a key technology in Network Operation Center (NOC) monitoring. It is an Application layer protocol that communicates to devices on the network through UDP/IP protocols, which can be communicated over a variety of data transport technologies.
Control systems can be manufacturer-specific, such as Harman Pro's HiQnet, QSC Audio's QSControl, or third party such as Crestron's CresNet, where the control software communicates to audio devices through TCP/IP. In many cases, TCP/IP-based control can run on the same network as the audio signal transport, and some technologies (such as CobraNet and Dante) are designed to allow data traffic to coexist with audio traffic.
The organizing and managing of audio bits is the job of the audio Transport. This is usually done by the audio protocol. Aviom, CobraNet, and EtherSound are protocols that organize bits for transport on the network. The transport can be divided into two categories: logical and physical.
Purely physical layer technologies, such as Aviom, use hardware to organize and move digital bits. More often than not, a proprietary chip is used to organize and manage them. Ethernet-based technologies packetize the audio and send it to the Data Link and Physical layers to be transported on Ethernet devices. Ethernet is both a logical and physical technology that packetizes or “frames” the audio in the Data Link layer and sends it to the Physical layer to be moved to another device on the network. Ethernet's Physical layer also has a Physical layer chip, referred to as the PHY chip, which can be purchased from several manufacturers.
Comparing Digital Audio Systems
The more familiar you are with the OSI model, the easier it will be to understand the similarities and differences of the various digital audio systems. For many people, there is a tendency to gloss over the OSI model and just talk about networking-branded protocols. However, understanding the OSI model will bring clarity to your understanding of digital audio networking (Fig. 2).
Due to the integration of pro AV systems, true networking schemes are vitally important. A distinction must be made between audio networking and digital audio transports. Audio networks are defined as those meeting the commonly used standard protocols, where at least the Physical and Data Link layer technologies and standard network appliances (such as hubs and switches) can be used. There are several technologies that meet this requirement using IEEE 1394 (Firewire), Ethernet, and ATM technologies, to name a few. However, because Ethernet is widely deployed in applications ranging from large enterprises to the home, this will be the technology of focus. All other technologies that do not meet this definition will be considered digital audio transport systems, and not a digital audio network.
There are at least 15 schemes for digital audio transport systems and audio networking. Three of the four technologies presented here were selected because of their wide acceptance in the industry, as measured by the number of manufacturers that support them.
Let's compare four CAT-5/Ethernet technologies: Aviom, EtherSound, CobraNet, and Dante. This is not to be considered a “shoot-out” between technologies but rather a discussion to gain understanding of some of the many digital system options available to the AV professional.
As previously noted, Aviom is a Physical layer–only technology based on the classifications outlined above. It does use an Ethernet PHY chip, but doesn't meet the electrical characteristics of Ethernet. Therefore, it cannot be connected to standard Ethernet hubs or switches. Aviom uses a proprietary chip to organize multiple channels of audio bits to be transported throughout a system, and it falls in the classification of a digital audio transport system.
EtherSound and CobraNet are both 802.3 Ethernet– compliant technologies that can be used on standard data Ethernet switches. There is some debate as to whether EtherSound technology can be considered a true Ethernet technology because it requires a dedicated network. EtherSound uses a proprietary scheme for network control, and CobraNet uses standard data networking methods. The key difference for both the AV and data professional is that EtherSound uses a dedicated network, and CobraNet does not. There are other differences that may be considered before choosing between CobraNet and EtherSound, but both are considered to be layer two (Data Link) technologies.
Dante uses Ethernet, but it is considered a layer four technology (Transport). It uses UDP for audio transport and IP for audio routing on an Ethernet transport, commonly referred to as UDP/IP over Ethernet.
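As a rough illustration of what "UDP/IP over Ethernet" means for audio, the Python sketch below packs a block of PCM samples behind a simple sequence-number header and sends it as a UDP datagram. This is purely conceptual: it is not the actual Dante packet format, and the receiver address, port, and header layout are made up for the example.

# Illustrative sketch of sending PCM audio samples in UDP datagrams.
# This shows the "UDP/IP over Ethernet" layering idea only; it is not
# the actual Dante wire format. Addresses and framing are made up.
import socket
import struct
import math

SAMPLE_RATE = 48_000          # samples per second
SAMPLES_PER_PACKET = 64       # small packets keep per-packet buffering low
DEST = ("192.0.2.20", 5004)   # hypothetical receiver address and port

def make_packet(seq, samples):
    """Prepend a sequence number so the receiver can detect loss or reordering."""
    header = struct.pack("!I", seq)                       # 4-byte big-endian counter
    payload = struct.pack(f"!{len(samples)}h", *samples)  # 16-bit signed samples
    return header + payload

def tone(n, freq=440.0):
    """Generate n samples of a quiet 440 Hz test tone as 16-bit integers."""
    return [int(32767 * 0.2 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            for i in range(n)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(10):                                     # send ten packets as a demo
    sock.sendto(make_packet(seq, tone(SAMPLES_PER_PACKET)), DEST)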
At this point you may be asking yourself why does the audio industry have so many technologies? Why can't there be one standard like there is in the data industry?
The answer to the first question relates to time. Audio requires synchronous delivery of bits. Early Ethernet networks weren't much concerned with time. Ethernet is asynchronous, meaning there isn't a concern when and how data arrives as long as it gets there. Therefore, to put digital audio on a data network requires a way to add a timing mechanism. Time is an issue in another sense, in that your options depend on technology or market knowledge available at the time when you develop your solution. When and how you develop your solution leads to the question of a single industry standard.
Many people don't realize that the data industry does in fact have more than one standard: Ethernet, ATM, FiberChannel, and SONET. Each layer of the OSI model has numerous protocols for different purposes. The key is that developers follow the OSI model as a framework for network functions and rules for communicating between them. If the developer wants to use Ethernet, he or she is required to have this technology follow the rules for communicating to the Data Link layer, as required by the Ethernet standard.
Because one of the key issues for audio involves time, it's important to use it wisely.
There are two types of time that we need to be concerned with in networking: clock time and latency. Clock time in this context is a timing mechanism that is broken down into measurable units, such as milliseconds. In digital audio systems, latency is the time duration between when audio or a bit of audio goes into a system until the bit comes out the other side. Latency has many causes, but arguably the root cause in audio networks is the design of its timing mechanism. In addition, there is a tradeoff between the timing method and bandwidth. A general rule of thumb is that as the resolution of the timing mechanism increases, the more bandwidth that's required from the network.
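The tradeoff can be put into rough numbers. Assuming 48 kHz, 24-bit audio carried in Ethernet/IP/UDP packets (header sizes approximated, preamble and interframe gap ignored), the short calculation below shows how shrinking the packet size lowers the buffering delay but raises the per-channel bit rate, because the fixed headers are sent more often. The figures are illustrative only.

# Rough arithmetic for the latency-versus-bandwidth tradeoff: smaller
# packets mean less buffering delay, but the fixed per-packet headers
# then consume a larger share of the wire. Numbers are illustrative.
SAMPLE_RATE = 48_000        # Hz
BYTES_PER_SAMPLE = 3        # 24-bit audio
HEADER_BYTES = 46           # Ethernet (14 + 4 FCS) + IPv4 (20) + UDP (8); preamble/IFG ignored

def per_channel_rate(samples_per_packet):
    packets_per_second = SAMPLE_RATE / samples_per_packet
    payload = samples_per_packet * BYTES_PER_SAMPLE
    bits_per_second = packets_per_second * (payload + HEADER_BYTES) * 8
    latency_ms = 1000.0 * samples_per_packet / SAMPLE_RATE
    return latency_ms, bits_per_second / 1e6

for spp in (16, 64, 256):
    latency, mbps = per_channel_rate(spp)
    print(f"{spp:4d} samples/packet -> {latency:5.2f} ms buffering, {mbps:5.2f} Mbit/s per channel")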
Ethernet, being an asynchronous technology, requires a timing method to be added to support the synchronous nature of audio. The concepts and methodology of clocking networks for audio are key differences among the various technologies.
CobraNet uses a time mechanism called a beat packet. This packet is sent out in 1.33 millisecond intervals and communicates with CobraNet devices. Therefore, the latency of a CobraNet audio network can't be less than 1.33 milliseconds. CobraNet was introduced in 1995 when large-scale DSP-based digital systems started replacing analog designs in the market. Because the “sound system in a box” was new, there was great scrutiny of these systems. A delay or latency in some time-critical applications was noticed, considered to be a challenge of using digital systems. However, many believe that latency is an overly exaggerated issue in most applications where digital audio systems are deployed. In fact, this topic could be an article unto itself.
A little history of digital systems and networking provides some insight into why there are several networking technologies available today. In the late '90s, there were two “critical” concerns in the digital audio industry: Year 2000 compliance (Y2K) and latency. To many audio pros, using audio networks like CobraNet seemed impossible because of the delay, which at that time was approximately 5 milliseconds, or in video terms, less than a frame of video.
Enter EtherSound, introduced in 2001, which addressed the issue of latency by providing an Ethernet networking scheme with lower latency, greater bit depth, and a higher sampling rate than CobraNet. The market timing and concern over latency gave EtherSound an excellent entry point. But since reducing latency to 124 microseconds limits the bandwidth available for data traffic, a dedicated network is required for a 100 Mbit/s EtherSound network. Later, to meet market demands for lower latency, CobraNet introduced variable latency, with 1.33 milliseconds being the minimum. For the Ethernet technologies discussed thus far, there is a relationship between bit depth and sample rate on one hand and the clocking system on the other.
Audio is not the only industry with a need for real-time clocking schemes. Communications, military, and industrial applications also require multiple devices to be connected together on a network and function in real-time. A group was formed from these markets, and they took on the issue of real-time clocking while leveraging the widely deployed Ethernet technology. The outcome was the IEEE 1588 standard for a real-time clocking system for Ethernet networks in 2002.
As a late entry to the networking party, Audinate's Dante comes to the market with the advantage of using new technologies like IEEE 1588 to solve many of the current challenges in networking audio. Using this clocking technology in Ethernet allows Dante to provide sample-accurate timing and synchronization while achieving latency as low as 34 microseconds. Coming to the market later also has the benefit of Gigabit networking being widely supported, which provides the increased bandwidth that ultra-low latency requires. It should be noted here that EtherSound does have a Gigabit version, and CobraNet does work on Gigabit infrastructure with added benefits, but it is currently a Fast Ethernet technology.
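To see why Gigabit headroom matters for low-latency, high channel-count systems, the figures below estimate the raw audio payload at common professional settings (48 kHz, 24-bit). Packet overhead and protocol specifics are ignored, so these are illustrative lower bounds rather than measurements of any particular technology.

# Why Gigabit headroom matters: raw payload rate for a multichannel
# stream at common professional audio settings (illustrative figures;
# packet overhead and protocol specifics are ignored here).
def payload_mbps(channels, sample_rate=48_000, bit_depth=24):
    return channels * sample_rate * bit_depth / 1e6

for channels in (8, 32, 64, 128):
    print(f"{channels:3d} channels -> {payload_mbps(channels):6.1f} Mbit/s of audio payload")

# 64 channels of 24-bit / 48 kHz audio is ~73.7 Mbit/s before any packet
# overhead -- most of a 100 Mbit/s Fast Ethernet link, but only a small
# fraction of Gigabit.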
Dante provides a flexible solution to many of the current tradeoffs that force the choice of one system over another due to design requirements of latency versus bandwidth, because Dante can support different latencies, bit depths, and sample rates in the same system. For example, this allows a user to give a low-latency, higher-bandwidth assignment to in-ear monitoring while at the same time using a higher-latency assignment in areas where latency is less of a concern (such as front of house), thereby reducing the overall network bandwidth requirement.
The developers of CobraNet and Dante are both working toward advancing software so that AV professionals and end-users can configure, route audio, and manage audio devices on a network. The goal is to make audio networks “plug-and-play” for those that don't want to know anything about networking technologies. One of the advances to note is called “device discovery,” where the software finds all of the audio devices on the network so you don't have to configure them in advance. The software also has advance features for those who want to dive into the details of their audio system.
Digital audio systems and networking technologies will continue to change to meet market applications and their specific requirements. Aviom's initial focus was to create a personal monitoring system, and it developed a digital audio transport to better serve this application. Aviom's low-latency transport provided a solution to the market that made it the perfect transport for many live applications. CobraNet provides the AV professional with a solution to integrate audio, video, and data systems on an enterprise switched network. EtherSound came to the market by providing a low-latency audio transport using standard Ethernet 802.3 technology. Dante comes to the market after significant change and growth, leveraging Gigabit networking and new technologies like IEEE 1588 to solve many of the challenges of using Ethernet in real-time systems.
Networking audio and video can seem chaotic, but gaining an understanding of the OSI model helps bring order to the chaos. It not only provides an understanding of the various types of technology, but it also provides a common language to communicate for both AV and data professionals. Keeping it simple by using the OSI model as the foundation and breaking audio networking down into two functional parts (control and transport) will help you determine which networking technology will best suit your particular application.
Brent Harshbarger is the founder of m3tools located in Atlanta. He can be reached at [email protected]. | 1 | 5 |
<urn:uuid:a42971e3-6316-4a4b-b05f-388e48b4808d> | Frankly speaking, you cannot create a Linux partition larger than 2 TB using the fdisk command. This is fine for desktop and laptop users, but on a server you need larger partitions. For example, you cannot create a 3 TB or 4 TB partition (RAID based) using the fdisk command, because it will not allow you to create a partition greater than 2 TB. In this tutorial, you will learn how to create Linux filesystems greater than 2 terabytes to support enterprise-grade operation under any Linux distribution.
To solve this problem, use the GNU parted command with GPT. It supports Intel EFI/GPT partition tables. The GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a replacement for the outdated PC BIOS, one of the few remaining relics of the original IBM PC. EFI uses GPT where BIOS uses a Master Boot Record (MBR).
(Fig.01: Diagram illustrating the layout of the GUID Partition Table scheme. Each logical block (LBA) is 512 bytes in size. LBA addresses that are negative indicate position from the end of the volume, with −1 being the last addressable block. Image credit: Wikipedia)
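The 2 TB ceiling mentioned above follows from simple arithmetic: the MBR partition table stores sector counts in 32-bit fields, and disks conventionally present 512-byte logical sectors. A quick check (shown here in Python, though any calculator will do):

# Why MBR/fdisk stops near 2 TB: the classic partition table stores
# sector counts in 32-bit fields, and disks conventionally use
# 512-byte logical sectors.
max_sectors = 2**32                  # largest value a 32-bit LBA field can hold
sector_size = 512                    # bytes per logical sector
limit_bytes = max_sectors * sector_size
print(limit_bytes, "bytes")          # 2199023255552
print(limit_bytes / 10**12, "TB")    # ~2.2 TB (decimal), i.e. 2 TiB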
Linux GPT Kernel Support
EFI GUID Partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table will get corrupted. By default, Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature:
File Systems
  Partition Types
    [*] Advanced partition selection
    [*] EFI GUID Partition support (NEW)
    ....
Find Out Current Disk Size
Type the following command:
# fdisk -l /dev/sdb
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
Linux Create 3TB partition size
To create a partition start GNU parted as follows:
# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
Creates a new GPT disklabel i.e. partition table:
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)
Next, set the default unit to TB, enter:
(parted) unit TB
To create a 3TB partition size, enter:
(parted) mkpart primary 0.00TB 3.00TB
To print the current partitions, enter:
(parted) print

Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB  ext4         primary
Quit and save the changes, enter:
(parted) quit

Information: You may need to update /etc/fstab.
Use the mkfs.ext3 or mkfs.ext4 command to format the file system, enter:
# mkfs.ext3 /dev/sdb1
# mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566272 blocks
36628313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Type the following commands to mount /dev/sdb1, enter:
# mkdir /data
# mount /dev/sdb1 /data
# df -H
Filesystem      Size  Used  Avail Use% Mounted on
/dev/sdc1        16G  819M    14G   6% /
tmpfs           1.6G     0   1.6G   0% /lib/init/rw
udev            1.6G  123k   1.6G   1% /dev
tmpfs           1.6G     0   1.6G   0% /dev/shm
/dev/sdb1       3.0T  211M   2.9T   1% /data
Make sure you replace /dev/sdb1 with your actual RAID or disk name, or block Ethernet (ATA over Ethernet) device such as /dev/etherd/e0.0. Do not forget to update /etc/fstab, if necessary. Also note that booting from a GPT volume requires support in your BIOS / firmware; this is not supported on non-EFI platforms. I suggest you boot the server from another disk, such as an IDE / SATA / SSD disk, and store data on /data.
- How Basic Disks and Volumes Work (little outdated but good to understand basic concept)
- GUID Partition Table from the Wikipedia
- man pages parted
| 1 | 15 |
<urn:uuid:759ff0b9-9458-45d0-8deb-368c01089695> | Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include heat transfer problems and the resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure and a number of food constituents, and packaging and statutory issues.
Keywords: high pressure, food processing, non-thermal processing
Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize or blend all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that “high pressure kills microorganisms and preserves food” was discovered way back in 1899 and has been used with success in chemical, ceramic, carbon allotropy, steel/alloy, composite materials and plastic industries for decades, it was only in late 1980′s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to re- discover the application of high-pressure in food processing. The use of this technology has come about so quickly that it took only three years for two Japanese companies to launch products, which were processed using this technology. The ability of high pressure to inactivate microorganisms and spoilage catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high- pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of highpressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).
The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994).
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.
The effect of high pressure on microorganisms and proteins/ enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C/100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by a compression to 700 MPa, and brought back to the initial temperature by decompression.
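A back-of-envelope calculation, using the rules of thumb quoted above (about 3C per 100 MPa for water-like foods and 8-9C per 100 MPa for fatty foods) and assuming the rise is roughly linear, gives an idea of the temperatures reached during compression:

# Rough estimate of compression heating using the rules of thumb quoted
# above (~3 C per 100 MPa for water-like foods, ~8-9 C per 100 MPa for
# fatty foods). Real behaviour is not perfectly linear.
def temp_after_compression(t_initial_c, pressure_mpa, rise_per_100mpa=3.0):
    return t_initial_c + (pressure_mpa / 100.0) * rise_per_100mpa

print(temp_after_compression(90, 700))                       # water-like food: ~111 C
print(temp_after_compression(90, 700, rise_per_100mpa=8.5))  # fatty food: ~149.5 C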
As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (principle of Le Chatelier). Thus, under pressure, reaction equilibriums are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the “activation volume” of the reaction (i.e. volume of the activation complex less volume of reactants) is negative or positive. It is likely that pressure a\lso inhibits the availability of the activation energy required for some reactions, by affecting some other energy releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on protein/enzyme is reversible unlike temperature, in the range 100-400 MPa and is probably due to conformational changes and sub-unit dissociation and association process (Morild, 1981).
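The pressure dependence of a reaction rate constant is commonly written as d(ln k)/dP = -dV/(RT), where dV is the activation volume. The short calculation below, using an illustrative (not measured) activation volume of -20 mL/mol, shows how strongly a negative activation volume can accelerate a reaction at 400 MPa:

# Pressure dependence of a rate constant via the activation volume:
# d(ln k)/dP = -dV_act / (R*T), so k(P) = k0 * exp(-dV_act * dP / (R*T)).
# A negative activation volume (the activated complex is more compact
# than the reactants) means pressure speeds the reaction up.
import math

R = 8.314            # J/(mol*K)
T = 298.15           # K
dV_act = -20e-6      # m^3/mol, i.e. -20 mL/mol (hypothetical value)
dP = 400e6           # Pa, a 400 MPa pressure step

acceleration = math.exp(-dV_act * dP / (R * T))
print(f"rate constant increases by a factor of ~{acceleration:.0f}")  # roughly 25x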
For both the pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeast and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent, a decrease below ambient temperature, increases the inactivation rates of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the treated raw materials and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).
Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination or inhibition of germinated bacterial spores and inactivation of piezo-resistive microorganisms can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperature in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spores forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6), will most probably rely on a combination of high pressure and other forms of relatively mild treatments.
High-pressure application leads to the effective reduction of the activity of food quality related enzymes (oxidases), which ensures high quality and shelf stable products. Sometimes, food constituents offer piezo-resistance to enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high-pressure causes negligible impairment of nutritional values, taste, color flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of the large molecules such as proteins, enzymes, polysaccharides, and nucleic acid may be altered (Balci and Wilbey, 1999).
High pressure reduces the rate of the browning (Maillard) reaction. It consists of two reactions: the condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration by high pressure (5-50 MPa at 50C), because pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent because of rearrangement of water molecules surrounding amino acid residues in a denatured state (Okamoto, Kawamura, and Hayashi, 1990).
The capability and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.
EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS
Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzymes (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.
Pectin methylesterase (PME) is an enzyme that normally tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. An instantaneous pressure kill was dependent only on pressure level, while a secondary inactivation effect depended on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in sufficiently faster inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5C followed first-order kinetics, and the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75C. The inactivation rates were dramatically reduced as soon as the processing pressure was raised. High inactivation rates were obtained at a pressure higher than 700 MPa. Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than that from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min.
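For first-order inactivation, the decimal reduction time D gives residual activity directly, via log10(A/A0) = -t/D. Using the two extreme D-values quoted above for orange juice PME, and ignoring the pressure-resistant fraction reported in the same study, a 5-minute hold gives very different outcomes at the two conditions:

# First-order inactivation expressed with decimal reduction times (D):
# log10(A/A0) = -t/D. The two D-values are the extremes quoted above for
# orange-juice PME; the pressure-resistant enzyme fraction reported in
# the same study is ignored in this simple sketch.
def residual_activity(t_min, d_min):
    return 10 ** (-t_min / d_min)

hold = 5.0  # minutes of pressure holding
for label, d in (("600 MPa / 50C", 4.6), ("400 MPa / 25C", 117.5)):
    print(f"{label}: {100 * residual_activity(hold, d):5.1f}% PME activity left after {hold:.0f} min")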
Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectinmethylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating the possibilities for rheology improvements by the application of high pressure.
Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional conversion model, while a biphasic model was used to estimate the inactivation rate constant of both the fractions at more drastic conditions of temperature/ pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed.
Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first order kinetics over a range of pressure and temperature (650800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.
The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat labile form of the enzyme but did not inactivate the heat stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressure (0.1 to 900 MPa) and temperature (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at a temperature higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca++ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozean, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the combined effect of heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At a lower pressure (<300 MPa) and higher temperature (>50C), an antagonistic effect of pressure and heat was observed.
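A generic form of the kind of combined pressure-temperature model referred to above couples an Arrhenius term in temperature with an Eyring term in pressure around a reference condition. The sketch below is only a template; every parameter value in it is a made-up placeholder, not a fitted constant from the cited studies.

# Generic combined pressure-temperature model for an inactivation rate
# constant: Arrhenius in temperature and Eyring in pressure around a
# reference condition. All parameter values below are hypothetical.
import math

R = 8.314  # J/(mol*K)

def rate_constant(T, P, k_ref, Ea, Va, T_ref, P_ref):
    """k(T, P) relative to a reference temperature (K) and pressure (Pa)."""
    arrhenius = math.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))   # temperature dependence
    eyring = math.exp(-(Va / (R * T)) * (P - P_ref))            # pressure dependence
    return k_ref * arrhenius * eyring

# Example call with made-up parameters: k_ref = 0.05 per min at 25 C and 0.1 MPa
k = rate_constant(T=318.15, P=600e6, k_ref=0.05, Ea=80e3, Va=-10e-6,
                  T_ref=298.15, P_ref=0.1e6)
print(f"k = {k:.3g} per min at 45 C and 600 MPa (illustrative only)")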
High pressures induced conformational changes in polygalacturonase (PG) causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan ( 1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchies without compromising quality. PG was inactivated by the application of pressure higher than 200 MPa for l min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure temperature inactivation (300-600 MPa/50 -50C) of tomato PG was described by a fractional conversion model, which points to Ist-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that in the combination of pressure-temperature (5- 55C/100-600 MPa), the inactivation of the heat labile portion of purified tomato PG followed first order kinetics. The heat stable fraction of the enzyme showed pressure stability very similar to that of heat labile portion.
Peelers, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that effect of high-pressure was identical on heat stable and heat labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas, no PG was found in pressure treated tomato juice or pieces.
Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of nigh pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for enzyme was found to be 55-60C and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature.
Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE and PG in diced tomatoes. Processing conditions used were 400,600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa. PE was very resistant to the pressure treatment.
Polyphenoloxidase and Peroxidase
Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati ( 1997) demonstrated that there was a limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high- pressure processing treatments in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apple, avocados, grapes, pears, and plums at pH 6-7. These PPO differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800 and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO enzyme, in order to develop high pressure-processed red grape juice having stable shelf-life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressure at 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, both temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in case of POD, temperature as well as interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa, reduced activity by 28 and 43%, respectively.
Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first order kinetics. A third degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high temperature (≥45C)-low pressure (≥300 MPa) region where an antagonistic effect was observed.
Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa at 60C for 10 min. In Tris buffer at pH 6.8 after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. The inactivation of the enzyme is because of induced change at the active site causing loss of activity without major conformational changes. This loss of activity was due to oxidation of the thiolate ion present at the active site.
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on activity of 3 different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. The changes in conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment.
Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01 ) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was sh\own to completely inactivate amylase. The importance of the pressure pulse and pressure hold approach for inactivation of amylase was also demonstrated.
High pressure denatures protein depending on the protein type, processing conditions, and the applied pressure. During the process of denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible for the pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion pair bonds, and unfolding of molecules. At higher pressure, oligomeric proteins tend to dissociate into subunits becoming vulnerable to proteolysis. Monomeric proteins do not show any changes in proteolysis with increase in pressure (Thakur and Nelson, 1998).
High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and intermolecular bonds within or between the molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions that are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).
Figure 1 General scheme for pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).
When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures, the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native versus denatured protein shown in Fig. 1. A practical consequence is that under elevated pressures, proteins usually denature at room temperature rather than only at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures when lower temperatures are used (Messens, Van Camp, and Huyghebaert, 1997).
The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk and the size distribution of spherical casein micelles decrease from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that the pressure treatment at 25C considerably reduced the micelle size, while pressurization at higher temperature progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressure >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 1O0C, the casein micelle size decreased slightly on heating, whereas, at higher temperatures, the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa and micelle size dropped by 50% at 400 or 600 MPa.
Huppertz, Fox, and Kelly (2004b) demonstrated that the high- pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alphas 1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure, treated milk at 5C dissociation of casein was largely irreversible, but at 20C, considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased levels of alphas 1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different to those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that the casein micelle size was not influenced by pressures less than 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater, irreversibly reduced the size to 50% ofthat in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa.
Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 50C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Tansidico, Bonomi, Vecchio, Pittia, Rovere, and DaIl’Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin, when solutions of the protein at neutral pH and at different concentrations, were exposed to pressure. Only minor irreversible structural modifications were evident even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of betalactoglobulin. Thermal and pH sensitivity of pressure treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high- pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gmez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), the β-lactoglobulin AB was completely hydrolyzed by pronase and α-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second order kinetics while for alpha-lactalbumin it was 2.5. Alpha- lactalbumin was more resistant to denaturation than beta- lactoglobulin. The activation volume for denaturation of beta- lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules.
Drake, Harison, Apslund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet weight yield of cheese from pressure treated milk were higher than pasteurized or raw milk cheese. The microbial quality was comparable and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times over control samples. Pressurization at 400 MPa led to coagulation times similar to those of control, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than their non-pressure treated counterparts.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta- lactoglobulin and alpha-lactalbumin of whey protein followed 3rd and 2nd order kinetics, respectively. Isothermal pressure denaturation of both beta-lactoglobulin A and B did not differ significantly and an increase in temperature resulted in an increase in thedenaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single step non- linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high- pressure induced denaturation of alpha-lactalbumin and beta- lactoglobulin in dairy systems. The higher level of pressure- induced denaturation of both proteins in milk as compared to whey was due to the absence of casein micelles and colloidal calcium phosphate in the whey.
The conformation of BSA was reported to remain fairly stable at 400 MPa due to a high number of disulfide bonds which are known to stabilize its three dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and the duration of pressure treatment (30-80C for 2-20 min) had a relatively lesser effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure leading to the incorporation of alpha- and gamma-gliadins in the glutenin aggregate. The change in secondary structure following high- pressure treatment was also reported.
The pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatment up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds.
Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range of 6-10, although the maximum was not at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in the case of minced meat packed in air at pressures less than 300 MPa, while the oxidation increased proportionally at higher pressures. However, on exposure to higher pressure, minced meat in contact with air oxidized rapidly. Pressures > 300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L color values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and the a-value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of meat increased and the oxymyoglobin decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, or oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.
The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure treatment, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25C, kept its original color, and was soft and adhesive. The hardness of the pressure-treated gel increased and adhesiveness decreased with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, while it resulted in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity. The pressure-induced changes in these properties were shown to be dependent on the pressure-temperature combination and the pH of the solution. Speroni, Puppo, Chapleau, Lamballerie, Castellani, Aon, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20C to low-density lipoproteins did not change the solubility even when the pH was changed, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.
The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, in comparison with heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). The treatment of soya milk at 500 MPa for 30 min changed it from a liquid state to a solid state, whereas at lower pressures and at 500 MPa for 10 minutes, the milk remained in a liquid state but indicated improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Anon, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) on soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. The change in the secondary structure leading to a more disordered structure was also reported. At pH 3.0, by contrast, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Anon, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (e.g. oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressure higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the levels of depletion flocculation. However, a similar effect was not observed at pH 3. Due to the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased irrespective of the pH conditions. Moreover, the ability of the protein to be adsorbed at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that the application of high pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits, which in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had the desirable strength and a cross-linked network microstructure.
Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for elderly persons, athletes, etc. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed an opposite trend. High pressure processing at 100 MPa increased the antioxidative activity of soy whey protein, but decreased the antioxidative activity of the hydrolysates. High pressure processing increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. It was shown that the highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After the hydrolysis, 5 peptides under 14 kDa with trypsin and chymotrypsin, and 11 peptides with pepsin, were reported.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.
Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with extended shelf life. The application of high pressure (600 MPa for 20 min at 80C) reduced the irradiation dose required for a one log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. The irradiation treatment of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) deactivated the bacteria and yeast in kefir, while leaving the proteins and lipids unchanged.
The exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells and 400 MPa for 30 min for Bacillus subtilis spores, after the alternating current treatment, resulted in reduced surviving fractions of both organisms. The combined effect was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).
The pretreatment with ultrasonic waves (100 W/cm2 for 25 min at 25C) followed by high pressure (400 MPa for 25 min at 15C) was shown to result in complete inactivation of Rhodotorula rubra. Neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3 log reduction of supercritical CO2 pretreated cultures. The effect of the pretreatment on germination of Bacillus subtilis endospores was monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed.
Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high pressure as a nonthermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high-pressure treatment (300 MPa) resulted in complete destruction of aerobes. The increase in high pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50C for pectinmethylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments. The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied. No significant effect on E. coli survivors was seen with CO2 addition. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high pressure processing (600 MPa, 25C) of fresh orange juice increased the rate of PME inactivation in Valencia orange juice. The addition of CO2 reduced the treatment time required to achieve an equivalent reduction in PME activity from 346 s to 111 s, but the overall degree of PME inactivation remained unaltered.
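If PME loss is treated as roughly first order in time, the reported cut from 346 s to 111 s for the same degree of inactivation corresponds to about a threefold rise in the apparent rate constant. A quick back-of-the-envelope check (Python; the first-order assumption and the 1-log target are assumptions of this sketch, not statements from the study):

import math

# Same log-reduction achieved in 111 s instead of 346 s implies k rose ~3x,
# if inactivation follows N/N0 = exp(-k t).
t_without_co2, t_with_co2 = 346.0, 111.0     # seconds, figures quoted above
speedup = t_without_co2 / t_with_co2
# For an example target of 90% loss of activity (1 log):
k_without = math.log(10) / t_without_co2
k_with = math.log(10) / t_with_co2
print(f"rate-constant ratio ~ {speedup:.1f}x (k: {k_without:.4f} -> {k_with:.4f} per s)")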
Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At the pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20C, but had no effect on the inactivation at 40C.
The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa in 15 min at 20C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by the application of consecutive, short pressure treatments interrupted by brief decompressions. The combined effect of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 g/ml and nisin, 400 g/ml) resulted in increased lethality for Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH PRESSURE ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing, can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations.
Eshtiaghi and Knorr (1993) employed high pressure around ambient temperatures to develop a blanching process similar to hot water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20C) to the potato sample not only caused blanching but also resulted in a four-log cycle reduction in microbial count whilst retaining 85% of ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl2 solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure treated sample was comparable with a 3 min hot water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Fair, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70C) resulted in no significant increase in the drying rate during fluidized bed drying of green beans and carrot. However, the drying rate significantly increased in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato. The effects of chemical pre-treatment (NaOH and HCl treatment) on the rates of dehydration of paprika were compared with products pre-treated by applying high pressure or high intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25C) and high intensity electric field pulses (2.4 kV/cm, pulse width 300 s, 10 pulses, pulse frequency 1 Hz) were found to result in drying rates comparable with chemical pre-treatments. The latter pre-treatments, however, eliminated the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. Application of high pressures causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Fair, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal as well as solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. Compression and decompression occurring during high pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to the cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopic examination showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization and hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization. The maximum value of diffusion coefficient observed represented an eight-fold increase over the values at ambient pressure.
The synergistic effect of cell permeabilization due to high pressure and osmotic stress as the dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in the case of samples treated at 400 MPa. The distribution of relative moisture (M/Mo) and solid (S/So) content as well as the cell permeabilization index (Zp) (shown in Fig. 5) indicate that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003).
Most dehydrated foods are rehydrated before consumption. Loss of solids during rehydration is a major problem associated with the use of dehydrated foods. Rastogi, Angersbach, Niranjan, and Knorr (2000c) have studied the transient variation of moisture and solid content during rehydration of dried pineapples, which were subjected to high pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, and so was the release of the cellular components, which form a gel-network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure treated pineapple (a) control; (b) 300 MPa; (c) 700 MPa. (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake kinetics during soaking of glutinous rice. The results showed that the length and the diameter of the rice grains were positively correlated with soaking time, pressure, and temperature. The water uptake kinetics was shown to follow the well-known Fickian model. The overall rates of water uptake and the equilibrium moisture content were found to increase with pressure and temperature.
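As a rough illustration of the Fickian treatment mentioned above, the unaccomplished moisture ratio for a sphere can be written as a series in exp(-n^2 pi^2 De t / r^2), and an effective diffusivity De can be regressed from a soaking curve. The sketch below (Python) uses invented data and an assumed kernel radius, so it shows the method rather than any reported values:

import numpy as np
from scipy.optimize import curve_fit

# Fickian uptake into a sphere of radius r (a rough stand-in for a rice kernel):
# MR(t) = (M_eq - M_t)/(M_eq - M_0) = (6/pi^2) * sum_{n>=1} (1/n^2) exp(-n^2 pi^2 De t / r^2)
def fick_sphere(t, De, r=1.0e-3, terms=50):
    n = np.arange(1, terms + 1)[:, None]
    series = np.sum(np.exp(-(n * np.pi) ** 2 * De * t / r**2) / n**2, axis=0)
    return (6.0 / np.pi**2) * series

# Hypothetical soaking data: time (s) vs. unaccomplished moisture ratio.
t_obs = np.array([0, 600, 1800, 3600, 7200, 14400], float)
mr_obs = np.array([1.0, 0.78, 0.55, 0.36, 0.17, 0.05])

De_fit, _ = curve_fit(fick_sphere, t_obs, mr_obs, p0=[1e-10])
print(f"effective diffusivity ~ {De_fit[0]:.2e} m^2/s")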
Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20C) on the water uptake of soybeans and the resulting changes in their microstructure. The NMR analysis indicated that water mobility in high-pressure soaked soybean was more restricted and its distribution was much more uniform than in controls. The SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption and disrupted the individual spherical protein body structures. Additionally, the DSC and SDS-PAGE analysis revealed that proteins were partially denatured during the high pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking times equivalent to those achieved by soaking for 60-90 min.
Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high pressure (33, 400 and 700 MPa for 3 min at 24 and 55C) and irradiation (2 and 5 kGy) pre-treatments on hydration behavior of navy beans by soaking the treated beans in water at 24 and 55C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was the highest in irradiated beans (5 kGy) followed by high-pressure treatment (33 MPa, 3 min at 55C). Within the experimental range of the study, Peleg’s model was found to satisfactorily describe the rate of water absorption of navy beans.
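Peleg's model, referred to above, expresses moisture uptake as M(t) = M0 + t/(k1 + k2 t), where 1/k1 sets the initial absorption rate and M0 + 1/k2 gives the equilibrium moisture content. A minimal fitting sketch follows (Python; the soaking data and initial moisture below are hypothetical, not the navy-bean measurements):

import numpy as np
from scipy.optimize import curve_fit

# Peleg's two-parameter absorption model: M(t) = M0 + t / (k1 + k2 * t)
def peleg(t, k1, k2, m0=0.12):
    return m0 + t / (k1 + k2 * t)

# Hypothetical soaking curve: time (h) vs. moisture (kg water / kg dry matter).
t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8])
m = np.array([0.44, 0.63, 0.86, 1.07, 1.17, 1.23, 1.30, 1.33])

(k1, k2), _ = curve_fit(peleg, t, m, p0=[0.5, 0.8])
print(f"k1 = {k1:.2f}, k2 = {k2:.2f}, predicted equilibrium moisture = {0.12 + 1/k2:.2f}")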
A reduction of 40% in oil uptake during frying was observed, when thermally blanched frozen potatoes were replaced by high pressure blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid Liquid Extraction
The application of high pressure leads to rearrangement in tissue architecture, which results in increased extractability even at ambient temperature. Extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared to extraction at 100C as well as atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disi | 1 | 36 |
<urn:uuid:bb74f096-0199-42c6-9dec-aa8f42f0a803> | WHAT IS DVI ?
DVI stands for (D)igital (V)ideo (I)nterface.
DVI is a popular form of video interface technology made to
maximize the quality of flat panel LCD monitors and modern video graphics cards.
It was a replacement for the short-lived P&D
Plug & Display standard, and a step up from the digital-only
DFP format for older flat panels. DVI cables are very
popular with video card manufacturers, and most
cards nowadays include one or two DVI output ports.
In addition to being used as the standard computer interface,
the DVI standard was, for a short while, the digital transfer method of choice
for HDTVs and other high-end video
displays for TV, movies, and DVDs. Likewise, even a few
top-end DVD players have featured DVI outputs in addition
to the high-quality analog Component Video. The digital market
has now settled on the HDMI interface
for high-definition media delivery, leaving DVI more exclusive to
the computer market.
WHAT ARE THE DVI FORMATS ?
There are three types of DVI connections: DVI-Digital, DVI-Analog, and DVI-Integrated
DVI-D - True Digital Video
If you are connecting a DVI computer to a DVI monitor, this is the cable you want.
DVI-D cables are used for direct digital connections between
source video (namely, video cards) and LCD monitors. This provides a faster, higher-quality image
than with analog, due to the nature of the digital format.
All video cards initially produce a digital video signal, which
is converted into analog at the VGA output. The analog signal
travels to the monitor and is re-converted to a digital
signal. DVI-D eliminates the analog conversion process and
improves the connection between source and display.
DVI-A - High-Res Analog
DVI-I - The Best of Both Worlds
DVI-I cables are integrated cables which are capable of
transmitting either a digital-to-digital signal or an
analog-to-analog signal. This makes it a more versatile cable, being
usable in either digital or analog situations.
Like any other format, DVI digital and analog formats are
non-interchangeable. This means that a DVI-D cable will not
work on an analog system, nor a DVI-A on a digital system.
To connect an analog source to a digital display, you'll need
a VGA to DVI-D electronic convertor.
To connect a digital output to an analog monitor, you'll need
to use a DVI-D to VGA convertor (currently unavailable).
WHAT ARE SINGLE AND DUAL LINKS ?
The Digital formats are available in DVI-D Single-Link and
Dual-Link as well as DVI-I Single-Link and Dual-Link format
connectors. These DVI cables send information using a digital information
format called TMDS (transition minimized differential signaling).
Single link cables use one 165 MHz TMDS transmitter, while dual
links use two. The dual link DVI pins effectively double the
transmission capacity and provide an increase in speed and signal quality;
i.e. a DVI single link 60-Hz LCD can display a resolution of 1920 x 1200,
while a DVI dual link can display a resolution of 2560 x 1600.
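Those resolution limits can be sanity-checked from the pixel clock: each TMDS link carries up to 165 megapixels per second, and a dual link doubles that. A rough calculation follows (Python; the ~12% blanking overhead is an assumption similar to reduced-blanking timings, and actual timings vary):

# Rough sanity check of the single- vs dual-link limits quoted above.
TMDS_CLOCK_HZ = 165e6          # max pixel clock per link
BLANKING = 1.12                # assumed active-to-total pixel overhead (reduced blanking)

def max_refresh_hz(width, height, links=1):
    return TMDS_CLOCK_HZ * links / (width * height * BLANKING)

for w, h, links in [(1920, 1200, 1), (2560, 1600, 2)]:
    print(f"{w}x{h} on {links} link(s): roughly {max_refresh_hz(w, h, links):.0f} Hz maximum")

Both cases work out to at least 60 Hz, which is consistent with the figures quoted above.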
HOW FAR IS THE DVI MAXIMUM LENGTH?
The official DVI specification mandates that all DVI equipment must
maintain a signal at 5 meters (16 feet) in length. But many manufacturers
are putting out much stronger cards and bigger monitors, so the maximum
length possible is never exact.
Although the mandated DVI spec is 5 meters, we do carry cables up to 25 feet,
and have successfully extended them even longer than that (although results do vary
depending on hardware). For guaranteed signal quality on long runs, you should
consider using a powered DVI signal booster.
There is a common misconception regarding digital video cables, which is the
belief that an "all digital" signal is an either-or result: either the cable
works, or it doesn't. In reality, while there is no signal degradation in digital
video like there is with analog, cable quality and length can make a difference in signal stability.
When a DVI run is unstable, you may see artifacts and "sparkling" pixels on your display;
further degradation tends to flicker out or shake, and the ultimate sign of loss
is a blank display. In-house tests on varying equipment have produced strong signals up
to 9 and 10 meters long. Tests at 12 meters generally resulted in signal noise and
an unusable image on the display, and anything longer rendered no image at all.
Keep in mind that when using DVI-I cables at extensive lengths, you
may not be seeing a digitally-clear image on your screen. Because analog
has a much longer run, your display may auto-switch once the digital signal
is too weak. For this reason, long runs are best done with VGA (for analog)
or HDMI (for digital).
If you have no option other than DVI, make sure you're getting the best
image by using DVI-D cables and verifying that your display is set to digital input.
HOW DO I KNOW WHICH CABLE TO USE?
Determining which type of DVI cable to use for your products is
critical in getting the right cable the first time. Check both of
the female DVI plugs to determine what signals they are compatible with.
If you still have questions, look at our DVI cable guide for an easy-to-use chart to
help you find the right cable for you.
- If one or both connections are DVI-D, you need a DVI-D cable.
- If one or both connections are DVI-A, you need a DVI-A cable.
- If one connection is DVI and the other is VGA, and the DVI is analog-compatible, you need a DVI to VGA cable or a DVI/VGA adaptor.
- If both connections are DVI-I, you may use any DVI cable, but a DVI-I cable is recommended.
- If one connection is analog and the other connection is digital, there is no way to connect them with
a single cable. You'll have to use an electronic converter box, such as our analog VGA to digital DVI/HDMI converter. (These rules are summed up in the short helper sketched below.)
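For readers who prefer it spelled out in code, here is a tiny, hypothetical helper that encodes the rules above (Python; the function name and port labels are ours, not part of any DVI specification):

# Connector types are the port markings on each device: "DVI-D", "DVI-A", "DVI-I", or "VGA".
def pick_cable(a: str, b: str) -> str:
    analog_only = {"DVI-A", "VGA"}
    if "DVI-D" in (a, b) and (a in analog_only or b in analog_only):
        return "No single cable will work; use an electronic converter box."
    if "VGA" in (a, b):
        return "DVI to VGA cable (or a DVI/VGA adaptor)."
    if "DVI-D" in (a, b):
        return "DVI-D cable."
    if "DVI-A" in (a, b):
        return "DVI-A cable."
    return "Any DVI cable works; DVI-I is recommended."

print(pick_cable("DVI-I", "DVI-D"))   # -> DVI-D cable.
print(pick_cable("DVI-D", "VGA"))     # -> converter box needed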
HOW TO RECOGNIZE A DVI CABLE
There are two variables in every DVI connector, and
each represents one characteristic.
The flat pin on one side denotes whether the cable is
digital or analog:
- A flat pin with four surrounding pins is either DVI-I or DVI-A
- A flat pin alone denotes DVI-D
The pinsets vary depending on whether the cable is single-link, dual-link, or analog:
- Two separated 9-pin sets (rows of 6) for a single-link cable
- A solid 24-pin set (rows of 8) for a dual-link cable
- A separated 8-pin and 4-pin set is for DVI-A.
DVI Connector Guide
| DVI-D Single Link | DVI-A | DVI-I Single Link |
| Digital Only | Analog Only | Digital & Analog |
| Two sets of nine pins, and a solitary flat blade | One set of eight pins and one set of four pins, with four contacts around the blade | Two sets of nine pins and four contacts around the blade |
| DVI-D Dual Link | DVI-I Dual Link |
| Digital Only | Digital & Analog |
| Three rows of eight pins and a solitary flat blade | Three rows of eight pins and four contacts around the blade |
List of DataPro DVI Cables:
1141 DVI-D Single Digital Video Cable - for simple computer/monitor setups
1142 DVI-D Dual Digital Video Cable - the best DVI cable for most applications
1149 DVI-D Dual Digital Video Extension Cable - for a longer DVI connection
1143 DVI-I Panelmount Extension Cable - for installing a DVI port on a plate or bulkhead
1145 DVI-I Analog to VGA/SVGA Video Cable - for connecting a DVI computer to a VGA monitor
1145-A DVI Analog Male to VGA Female Adaptor - for converting a DVI port into a VGA port
1145-B DVI Analog Female to VGA Male Adaptor - for converting a VGA port into a DVI port
1146 DVI-A Analog Video Cable - for analog-only signals over a DVI connector
1148 DVI-I Dual Digital & Analog - for dual digital/analog data capabilities
1140 DVI-I DIG/ANA Extension Cable (M/F) - for extending both digital and analog signals
Written by Anthony van Winkle
for DataPro International Inc.
Unauthorized duplication strictly prohibited.
© 1986-2013 DataPro International Inc | 1 | 9 |
<urn:uuid:942e4267-e67d-4dfb-af01-91d9b0b0d555> | A rather well known scientist named Ray Kurzweil believes the concept of nanoscopic electronic devices is feasible by 2030. These tiny devices (similar to devices referred to as “nanites” on Star Trek: The Next Generation) will enter the bloodstream and augment various bodily functions, including the possibility of transmitting information in the brain a short distance outside the body. It all sounds like science fiction, but Kurzweil believes it will be possible to “reverse engineer” the brain by 2020, and a US$1,000 investment in computing power in 2029 will provide 1,000 times the computing capacity of a human brain. I'm not sure what to say here. I believe wholeheartedly that true artificial intelligence will be achieved within my lifetime (the next 40-50 years), and I have no doubts that these kinds of devices will actually come into existence in the decades to come. I'm not sure about the timeline. 2020, 2029 … those aren't too far off. In fact, if I look back to my high school years I realize that I graduated over 15 years ago! That seems like yesterday to me. But, we aren't flying around in air-vehicles or using anti-gravitation, and the oft-promised dream of cold fusion has never reared its head. 2030? I'd have to say no. 2130? I'd have to say absolutely. Read more at EE Times.
USER COMMENTS 29 comment(s)
|Jeez Rick… (2:44pm EST Fri Sep 27 2002)
….I'm feeling kind of old here myself. If some of this stuff does come to pass I imagine whatever concept we have of privacy will be gone–inhale a couple of nanites in your daily dose of room dust and start transmitting your thoughts to the world. (Or for the paranoid having your thoughts controlled.)
I still kind of wonder how artificial intelligence will actually appear; our ideas of instinct and emotion may appear outmoded to it, as would the concept of self, since barring something truly exotic like an organically grown computer, an artificial intelligence could replicate (or delete) itself on a whim, and shut down and come back. Then again maybe it'll all just flop like the George Jetson style Jetpack……. - by Ziwiwiwiwiwiwiwiwiwi
|So, if we all own an AI…. (4:22pm EST Fri Sep 27 2002)
…with 1,000 times the smarts of any of us, will anyone actually LISTEN to the beastie?
I will gladly let it pick stocks for me as long as half of all gains are banked immediately and it has no credit – the built-in self-limiting nature of this scheme can save me from total ruin.
Still, as much as I despise Congress, the idea of replacing them with, oh, say, three supercomputers, each with a million of these 1,000-brain A.I.s, somehow gives me pause. - by UrGeek
|Ziwi (4:29pm EST Fri Sep 27 2002)
That all depends on what you define as intelligence. There's a species called the sphex (basically a type of wasp) that, upon bringing food back to its “lair”, will drop the food, check out the lair, and bring the food in once the coast is clear. Sounds like a very intelligent mechanism. However, if you move the food while the sphex is checking its home, he will come out, reposition the food, go back into its home again, and when the coast is clear get the food. See how easily an act that could easily be considered intelligent turns into stupid repetition? What really makes something intelligent? - by SDB
|On the other hand… (4:30pm EST Fri Sep 27 2002)
…I would LOVE to have it proofread and autocorrect my posts. I would gladly pay $1,000 for that. I seem to have developed some weird neural deficit in the last 10 years with plural and singular and with my tenses. And programming assembler killed all of my spelling skills in the 80's.
A good grammar A.I., hidden transparently in Mozilla or even in the opsys, would be cool.
|Yeah, live in the now (4:34pm EST Fri Sep 27 2002)
“$1000 investment in computing power in 2029 will provide 1,000 times the computing capacity of a human brain” sounds like a complete load of bollocks to me.
'Feasible' is such a vague word that it almost has no weight when talking about anything like this… - by Wee wee
|Old tag line (4:37pm EST Fri Sep 27 2002)
I used to use the following tag line (back in my FidoNet/BBS days):
- by Rick C. Hodgin
|As far as AI goes… (5:07pm EST Fri Sep 27 2002)
There are many different theories of the “mind”. Physicalists (either token or type) believe that anything construed as a mental event is strictly a result of physical events in the brain (such as C-Fibre 812 firing). If this theory is correct, it should be fairly easy to deconstruct the brain. While I give no weight to dualists or realists, I do believe Nagel brings up an excellent concept in his “What is it like to be a bat?” Even if we do create a being that passes the Turing Test, or whatever other level is prescribed to dub it as intelligent, it seems necessarily the case that these instances of AI will be totally different than people. Most of you say that this is obvious, but you're looking too closely to the surface. Paralleled with Nagel's argument, if you were to spend a day as an artificially intelligent robot, you would never be able to accurately describe it to anyone else.
Now this is important because it would seem that at best a machine could be pseudointelligent and not artificially intelligent. The dichotomy splits like this: pseudointelligent beings are able to fully communicate with a type intelligent being, while artificially intelligent beings would be intelligent but would ultimately not be able to fully communicate with any other form of intelligence other than its own. So there are a lot of questions we still need to answer. What is personal identity? What do we mean by intelligence? How do we know if something is intelligent? Anyone see the movie The Imposter? Kurzweil makes the claim that we will have the technology to do so, and he goes far enough to say that we will. What he neglects is that we must answer these questions before we know our method of creating intelligence is good. - by SDB
|re: SDB (6:11pm EST Fri Sep 27 2002)
You have a good point there, as I've often wondered: if they do make a machine that is intelligent and coaxed to respond and behave as a human, how would it really be seeing the world? It'd be kind of like trying to explain full color vision to someone born colorblind; you really couldn't. The thing I'm concerned about is how this intelligence would view naturally born life. With none of our instincts or built-in behaviors, reinforced as much by evolution as culture, what would be this thing's prime motivator? Survival? Winning at chess? And how could it really empathize with living beings (emotion) without having any basis to do so? - by Ziwiwiwiwiwiwiwiwiwi
|maybe that fast (12:08am EST Sat Sep 28 2002)
but the brain is not just speed: it is also self-powered, generates chemicals, and regulates thousands of systems indirectly! When you can get a chip to power itself, regulate its own temp, walk, talk and think, then it will be as powerful as the human brain. As AMD and Mac fans know, speed is not the end-all be-all of performance; it's also quality of coding and actual performance. - by some guy
|SDB (11:10am EST Sat Sep 28 2002)
“What is personal identity?”
The recognition of “self” as being separate and distinct from other things.
“What do we mean by intelligence?”
Intelligence is the ability to convert into objects capable of being processed any form of input that can be received, and that objectification must be able to closely emulate, anticipate and predict events witnessed in the real world. The ability to take objects, either those converted from input or ones dredged up autonomously, and process them in ways that don't follow real-world examples. In short, the ability to deconstruct, process and reconstruct.
“How do we know if something is intelligent?”
It will be capable of understanding something it has not previously been told about.
Why is this the test? Because the ability to objectify the environment/input AI is exposed to must be at a level sufficient to deal with something more than baseline instruction. In short, you can't teach a machine to be intelligent. You have to create the environment which allows AI to propagate.
I've long held the belief (and argument) that true AI does not require a specific conveyance, ie it doesn't have to be a brain, or a computer or anything else. Anything capable of:
1) Objectifying its input
will be intelligent. It could be a jar of semi-conducting ooze. If configured correctly, it could be intelligent. It could be a planet. If configured correctly, it could be intelligent. It could be a computer. If configured correctly, it could be intelligent.
Items commonly associated with AI, such as emotion, feelings, instinct and personal identity, are not required for intelligence to exist. One would think/expect that self-awareness would naturally follow intelligence, but it doesn't have to.
- by Rick C. Hodgin
|AI – Artificial Intelligence (11:19am EST Sat Sep 28 2002)
I finally saw the movie. I have to say I was damned impressed. It was an excellent example of what AI will be like. I think the nail was hit squarely on the head.
Now, the above statement is excepting (placing to the side) the ridiculous notions there at the end where the little boy waits an eternity for something that can never happen (let alone for a day).
But, the concepts conveyed on how it acted were amazingly well done. And, knowing man, the concept of a Flesh Fair, I think, was dead on. People are people, dumb, frightened animals trying to grasp that which they won't take the time to understand.
- by Rick C. Hodgin
|re:People are people, dumb, frightened animals (1:23am EST Sun Sep 29 2002)
And we are looking to reverse-engineer these beasts AND make them 1,000 times more powerful!
We must be mad…
MAD you hear me MAD - by Bill T.
|We're missing someone… (10:01am EST Sun Sep 29 2002)
Where's GoatGuy? He posted a comment a couple of weeks ago about Star Trek fictional “science” never having an impact on real scientific endeavors. He asked someone to cite an example for a bag of donuts. Darn it! GoatGuy, if you see this article, what do you think? - by Lt Cox
|Hmm (1:03pm EST Sun Sep 29 2002)
Hmm, two little points to make.
Firstly, with regard to nanites: the main purpose for these will probably have little to do with AI, or even what Kurzweil says. People are already making small implants which can regulate medicinal injections into someone, so they don't have to bother taking a pill every two hours. Using such things to monitor your health, prescribe drugs, cure illnesses (even cancer? I can just imagine a horde of nanites attacking a tumour for some reason). As for monitoring your brain, that would require a far greater understanding of our synapses than we have now, and a whole load more. If “internal thought monitoring” does happen, I doubt it will even be in the same century as nanites first appear.
As for my 2p on Artificial Intelligence, I think the characteristic that most people treasure as giving humans “intelligence” is our ability to be creative, not act like mechanical robots (despite humans being horribly predictable on the whole), but even that could be faked on a computer relatively easily. I don't think that a computer you can have an “interesting” conversation with is that far off; I actually believe we could possibly do it with the computing power we have nowadays. Just a decent internet filtering device and some fancy programming; I actually doubt it would be that hard. It wouldn't be intelligent by definitions, but I wouldn't notice.
Just some (rather long) thoughts :-) - by Sev
|So who gets the bag of donuts? (4:43pm EST Sun Sep 29 2002)
We really can thank SciFi for one major contribution: “The Names of Things”. The term 'nanite' is brilliant. SciFi's “phaser” became the root of the law-enforcement 'taser'. Virtual reality was at first driven almost exclusively by Gibson's (and other authors') vision of a near future. Submarines were envisioned by Verne well before science got around to inventing practical vessels.
I just don't know where to send the donuts.
- by GoatGuy
|Nanites, Technological Progress, et al (5:11pm EST Sun Sep 29 2002)
As well thought out and fascinating as the conjecture is, it seems to have as a central axiom the assumption that mankind will continue to progress at its present pace. Just the same, it is possible that mankind will fold inward on itself, grasping wildly at fading strands of knowledge like so much light torn asunder from the skies in the wake of a star's collapse.
To say that mankind's illogical, irrational, and irascible heritage will blithely dance to the dirge of science is to assume the dominance of thought and reason over the darker aspects of human nature. The consequence of a layman's assumption is little, but they who would understand the universe must first understand themselves, lest they reach too far and, knowing all yet not enough, drown the world with the tears of the afflicted. - by Thanatos
|Thanatos (9:19pm EST Sun Sep 29 2002)
Not all men have that darker aspect to their nature. Men, in general, do. A man, in and of himself, doesn't always.
Regardless of whether or not progress continues at its present pace, man's search for more will continue. Even if it's interrupted for 500 years due to some catastrophe, it will again surface.
The only thing that will stop it is man himself (or something unforeseeable like an alien attack).
- by Rick C. Hodgin
|O YA!!!!!!!!!!!! (11:02pm EST Sun Sep 29 2002)
In 20 years we will be able to shit gold also, I am working on that. Right now I have it down to: I eat gold and I will shit it, but I figure in 20 years I will eat a steak and shit gold. Beat that… - by Gold
|Nanites (nanoprobes) in the bloodstream (8:37am EST Mon Sep 30 2002)
Oooh.. isn't that what the Borg injected into organisms' bloodstreams to “assimilate” them? I suspect a government ploy to turn us all into mindless drones to do their bidding. :) - by 7of9
|A few issues for Rick (1) (9:07am EST Mon Sep 30 2002)
“What is personal identity?”
“The recognition of “self” as being separate and distinct from other things.”
In philosophy there is a refutation of this known as the teletransporter argument. Basically it says that Bob walks into the teletransporter. The teletransporter scans the entire physical state of Bob's body. It then communicates this information to another teletransporter in another room. The first teletransporter decomposes Bob's body into a bucket of atoms for later use and the second teletransporter rebuilds Bob's body from its own bucket of atoms. Unfortunately there was a glitch in the system and this time it made two identical copies at the same time. Both recognize themselves as Bob. Both have identical memories of “Bob's past” and in all other ways are exactly identical to the original Bob.
|Borg (9:14am EST Mon Sep 30 2002)
The Borg inject nanoprobes, not nanites. :)
- by Rick C. Hodgin
|SDB (9:16am EST Mon Sep 30 2002)
One of the problems with such a teletransporter, where it would unbuild your body then rebuild it later, is that *neither* of the two would be Bob. Bob got destroyed earlier. The tougher question is: if your body is rebuilt from a totally different set of atoms, is it still you? - by Sev
|SDB (9:24am EST Mon Sep 30 2002)
This was addressed in an episode of TNG. Riker beamed up to the ship, while a kind of transporter accident duplicated Riker by beaming another one of himself down onto the planet. Where there was one, now there are two.
Both were alive. Both were viable. Both had identical memories of the past (up until the point they were created as separate and distinct individuals through the transporter accident).
Which one is Riker? They both are. The recognition of “self” as being separate and distinct from other things is the only criteria for personal identity. It doesn't matter if that personal identity is based on a duplication through happenstance, mechanical failures or outright wizardry.
- by Rick C. Hodgin
|Liquid Lung and Liver (9:41am EST Mon Sep 30 2002)
I read an article somewhere talking about bloodstream machines. They had the idea of a blood-borne machine that could convert the carbon dioxide being carried back to be breathed out back into oxygen, to give the body all the oxygen it needed.
So people could hold their breath underwater for long periods of time, or run long distances without fatigue, or work on breaking the 3-minute mile.
On the more practical side, the tobacco companies could give it to their victims to restore their health and avert the big lawsuits.
|(2) (10:06am EST Mon Sep 30 2002)
“Intelligence is the ability to convert into objects capable of being processed any form of input that can be received, and that objectification must be able to closely emulate, anticipate and predict events witnessed in the real world. The ability to take objects, either those converted from input or ones dredged up autonomously, and process them in ways that don't follow real-world examples. In short, the ability to deconstruct, process and reconstruct.”
Allow me to pick out a few main ideas from this. I believe some of the things you are characterizing in this definition are consciousness, knowledge, and synthesis. In this case we are using consciousness as a means to collect not just input, but as you stated “any form of input that can be received”. It must have “knowledge” (in quotes because I have yet to see a good epistemological view, and discussing current ones would be exhaustive and pointless) in order to objectify and categorize what it sees. Finally, and most importantly, is the ability to process input by synthesizing it with existing knowledge.
|Personal Identity (10:21am EST Mon Sep 30 2002)
“Which one is Riker? They both are. The recognition of “self” as being separate and distinct from other things is the only criteria for personal identity. It doesn't matter if that personal identity is based on a duplication through happenstance, mechanical failures or outright wizardry.”
Personal Identity implies a uniqueness of person. It would be absurd to have two identical identities. Now, I assume by your words that each Riker has a unique personal identity in the fact that each Riker views itself as separate from all other beings, including the other Riker. Look at it this way though, both Rikers recognize their individuality in an absolutely identical fashion. If you asked either Riker what makes it separate and distinct from all other living things, they will both answer the same. This, as stated before, is a direct contradiction to the definition of identity. Some will argue that their identities are different because though they share identical pasts, from a certain point they are each experiencing separate sets of input, thus when you ask them this question, their answers will vary in some way as “a fork in the road”. This is assuming too much, though. All things being equal, there is a point where the road forks and both Rikers exist at that point. Were the question to be administered at that point, the outcome would be as first stated. I would say that to have personal identity, you must recognize yourself as separate and distinct, but the identity itself is how you arrived at that conclusion. - by SDB
|(3) (10:31am EST Mon Sep 30 2002)
How do we know if something is intelligent?
What if someone built a block machine that emulated all the qualities you outlined in your answer? This is a common counterargument to the Turing Test. While it is not physically possible to create such a machine, it is logically possible, and thus it should qualify when the question is a matter of theory.
|Rick (8:07am EST Tue Oct 01 2002)
“The Borg inject nanoprobes, not nanites. :)”
But what is the difference?
|Cryogenics (11:32am EST Sun Jan 26 2003)
It's only a matter of time before nanoscopic electronic devices become a reality. The only thing that could stop such an invention is a complete breakdown of our society. Though that's a real possibility, I will forgo the argument in favor of the positive outlook that we will at least continue on into 2200 with our current system intact.
The theory of nanoscopic electronic devices has moved out of the “if” category and into the “when” category. The medical possibilities of reengineering our atomic structure are endless. But the most profound application has yet to be mentioned here.
Cryogenics. The freezing process slows down molecular decay and, for all intents and purposes, halts cell degradation. There is a few percent worth of cell damage that occurs during deep freezing. Nanoscopic electronic devices could reverse this damage. One could easily remain in cryogenic stasis until the technology catches up to the level necessary for reanimation.
Since it is now only a matter of when nanoscopic electronic devices are able to manipulate the subatomic universe, cryogenics suddenly has a whole new importance. - by Kameron | 1 | 10 |
<urn:uuid:cc353426-9876-4a68-8b5d-4656b97db5eb> Report: Chance of a Catastrophic Solar Storm Is 1 in 8; Would Take Down Power Grid, Food Transportation, Water Utilities, Financial Systems
March 8, 2012
By Mac Slavo
According to a recent study published by Space Weather: The International Journal of Research and Applications, we have roughly a 12% chance of getting hit with a solar storm so powerful that it could take down the national power grid and yield catastrophic consequences for the general population. Pete Riley, a senior scientist at Predictive Science in San Diego, is the author of the study, which looks at the probability of the occurrence of extreme space weather events:
- Probability of a Carrington event occurring over next decade is ~12%
- Space physics datasets often display a power-law distribution
- Power-law distribution can be exploited to predict extreme events
By virtue of their rarity, extreme space weather events, such as the Carrington event of 1859, are difficult to study, their rates of occurrence are difficult to estimate, and prediction of a specific future event is virtually impossible. Additionally, events may be extreme relative to one parameter but normal relative to others. In this study, we analyze several measures of the severity of space weather events (flare intensity, coronal mass ejection speeds)…
By showing that the frequency of occurrence scales as an inverse power of the severity of the event, and assuming that this relationship holds at higher magnitudes, we are able to estimate the probability that an event larger than some criteria will occur within a certain interval of time in the future. For example, the probability of another Carrington event occurring within the next decade is ∼12%.
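The idea behind that ~12% figure is straightforward to reproduce in outline: fit a power law to the frequency of moderate events, extrapolate the rate out to Carrington-class severity, and convert that rate into the chance of seeing at least one such event in a decade. The sketch below (Python) uses invented numbers purely to show the arithmetic, not Riley's actual data; with these toy values it happens to land near the quoted figure.

import numpy as np

# Invented numbers purely for illustration.
alpha = 1.5                       # fitted power-law index of the severity distribution
rate_ref = 10.0                   # observed events per decade at reference severity s_ref
s_ref, s_extreme = 1.0, 20.0      # severity in arbitrary units (e.g. scaled CME speed)

rate_extreme = rate_ref * (s_extreme / s_ref) ** (-alpha)   # extrapolated events/decade
p_at_least_one = 1.0 - np.exp(-rate_extreme)                # Poisson chance of >= 1 event
print(f"expected {rate_extreme:.2f} Carrington-class events per decade")
print(f"P(at least one in the next decade) = {p_at_least_one:.0%}")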
The 1859 Carrington Event, as described by Wired Science, may have been a marvel to observers and caused some setbacks in the developing telegraph infrastructure at the time, but a similar occurrence today could be a global game changer:
At the time of the Carrington Event, telegraph stations caught on fire, their networks experienced major outages and magnetic observatories recorded disturbances in the Earth’s field that were literally off the scale.
In today’s electrically dependent modern world, a similar scale solar storm could have catastrophic consequences. Auroras damage electrical power grids and may contribute to the erosion of oil and gas pipelines. They can disrupt GPS satellites and disturb or even completely black out radio communication on Earth.
During a geomagnetic storm in 1989, for instance, Canada’s Hydro-Quebec power grid collapsed within 90 seconds, leaving millions without power for up to nine hours.
The potential collateral damage in the U.S. of a Carrington-type solar storm might be between $1 trillion and $2 trillion in the first year alone, with full recovery taking an estimated four to 10 years, according to a 2008 report from the National Research Council.
The post-storm effects of such an event are underestimated by the majority of the world’s population, including our political leadership. Like an electro magentic pulse attack, according to the National Research Council a massive enough solar storm could have long term effects that ”would likely include, for example, disruption of the transportation, communication, banking, and finance systems, and government services; the breakdown of the distribution of potable water owing to pump failure; and the loss of perishable foods and medications because of lack of refrigeration.”
The worst case scenario has been outlined by the Center for Security Policy, which suggests that an EMP, or a solar storm that results in similar magnetic discharge across the United States, could potentially leave 90% of Americans dead within the first year:
“Within a year of that attack, nine out of 10 Americans would be dead, because we can’t support a population of the present size in urban centers and the like without electricity,” said Frank Gaffney, president of the Center for Security Policy. “And that is exactly what I believe the Iranians are working towards.”
In the documentary Urban Danger, Congressman Roscoe Bartlett warns of the threat posed by a downed power grid and urges his fellow citizens to take action to protect themselves from the consequences that would follow:
We could have events in the future where the power grid will go down and it’s not, in any reasonable time, coming back up. For instance, if when the power grid went down some of our large transformers were destroyed, damaged beyond use, we don’t make any of those in this country. They’re made overseas and you order one and 18 months to two years later they will deliver it. Our power grid is very vulnerable. It’s very much on edge. Our military knows that.
There are a number of events that could create a situation in the cities where civil unrest would be a very high probability. And, I think that those who can, and those who understand, need to take advantage of the opportunity when these winds of strife are not blowing to move their families out of the city.
For many, a 1 in 8 chance of a catastrophic event occurring in a decade’s time may be nothing to worry about.
For the emergency, disaster and preparedness minded individual, however, a massive solar storm with the potential to take out our modern day power grid and utility infrastructure is just one in a variety of potentially catastrophic natural and man-made scenarios that could lead to the collapse of life in America as we know it today.
Though any given event on its own may have a low probability of occurrence, when combined with other potentialities like economic collapse, currency collapse, global or regional military conflict, Super EMP, political destabilization, massive earthquakes (such as on the New Madrid fault), tsunamis, asteroids, pandemic, and cyber attacks, the odds of a game-changing paradigm shift in our lifetimes rise significantly.
Here's a pop quiz: What kinds of food can kids with cystic fibrosis (say: sis-tik fy-bro-sus) eat?
mac 'n cheese
all of the above
If you picked #5, you're right! Kids with cystic fibrosis (CF) can eat all of these foods — and they usually need to eat more of them than most kids do.
We'll explain why in a minute, but first let's look at what CF is.
CF is a disease that affects epithelial (say: eh-puh-thee-lee-ul) cells, which are found in lots of places in the body, like the lungs and the digestive system. Problems in these cells can upset the balance of salt and water in the body. This causes the body to make abnormally thick mucus (like the kind in your nose when you have a really bad cold), which can clog up the lungs and make it hard for kids with CF to breathe.
This problem can also affect the digestive system and block the body from getting the good stuff from food it needs, like fats and vitamins. This means kids with CF may be short for their age and not weigh enough. Kids with CF may get sick a lot more often than other kids because of these lung and digestive problems.
CF is a genetic (say: juh-neh-tik) disease. This means that you can't catch CF. Kids with CF are born with it because a gene for the disease is passed on to them by both of their parents, who each carry a CF gene in their bodies, but don't have CF themselves.
The good news is that when kids with CF eat well and take their medicines, they can keep themselves healthier.
All kids need to eat well to grow up healthy and strong. But kids with CF need to eat more than most other kids, so they and their parents often work with a CF dietitian (say: dy-uh-tih-shun) to plan what they should eat. A dietitian is someone who knows all about food and nutrition.
Each kid is different, but most kids with CF will eat three meals a day plus snacks to make sure that they get all of the calories they need. This isn't all that different from other kids, but the meals and snacks that a kid with CF eats should have more calories and fat in them. It's also very important that a kid with CF not miss meals.
So what do kids with CF use these extra calories for? Like everyone else, kids with CF need calories to grow, to gain weight, and to have energy to play. But kids with CF need extra calories because their bodies have a hard time absorbing fat and nutrients (say: noo-tree-entz) in food. Instead of being absorbed, some of these important things can pass right out of the body in bowel movements. Kids with CF also may need more calories to help their bodies fight the lung infections they tend to get.
Let's take a closer look at some important nutrients and where to find them.
It's All in the Nutrients
Nutrients are the things in food that help keep our bodies running well. Kids with CF have some nutrients that they need to make sure they eat each day. These include:
Iron. Iron is essential for carrying oxygen to all the body's cells. You can find iron in some cereals, meats, dried fruits, and dark green vegetables.
Zinc. Zinc is important for growth, healing, and staying healthy. You'll find zinc in meats, liver, eggs, and seafood.
Calcium. Calcium helps build strong bones. Milk, yogurt, cheese, and calcium-fortified juices are rich in calcium.
Salt. Kids with CF lose a lot of salt in their sweat, especially during hot weather and when they exercise. A good way to replace this salt is by adding salt to food and eating salty snacks. During hot weather and when kids play sports, they may need sports drinks during and after practice or gym class.
All kids need to eat a balanced diet of regular meals and snacks that include plenty of fruits, veggies, grains, dairy products, and protein. But kids with CF need to work with their CF dietitian and their parents to create meal plans. Meal plans are important to ensure that kids with CF get all the calories and nutrients they need.
This might sound hard, but here are some simple tips. Click on the links for some great recipes that a grownup can help you make:
For some kids with CF, eating lots of great meals isn't enough — they may need a little extra help.
Some kids with CF need to take vitamins, especially for vitamins A, D, E, and K. These vitamins help kids stay healthy. But to do their work, they have to be absorbed by the body and dissolved in fat. Because most kids with CF have trouble absorbing fat into their bodies, they often have low levels of these vitamins and need to take larger amounts of them as pills.
Most kids with CF need to take pills that contain enzymes (say: en-zimes). Someone takes enzymes because his or her pancreas (say: pan-kree-us) doesn't work properly. The pancreas is a gland that's connected to the small intestine (say: in-tes-tun). It makes juices containing enzymes that help the small intestine digest fat, starch, and protein. If the pancreas can't make these juices normally, the problem is called pancreatic insufficiency (say: pan-kree-ah-tik in-suh-fih-shun-see).
Most kids with CF will have pancreatic insufficiency by the time they are 8 or 9 years old. It's important for these kids to take enzymes before they eat most foods. The enzymes will help these kids digest their food better.
Many people want to eat less and lose weight. Kids with CF have the opposite problem: They have to eat when they aren't hungry, don't feel like eating, or none of their friends are eating. If you are a kid with CF, remember that eating well and taking your enzymes and vitamins will help you have the energy to do all the great things you want to do with your friends, from playing soccer to going to sleepovers.
And if you have a friend with CF, now you know why he or she digs in every day at lunch. Maybe you can dig into a healthy lunch, too — and help other kids understand why eating right helps someone with CF stay healthy and strong.
I hear people talk about "defragging" their computers all the time as a way to make it faster, but I'm not really sure what that means. What does defragging do and is it something I need to do to my computer? How often?
"Defrag" is short for "defragment," which is a maintenance task required by your hard drives.
Most hard drives have spinning platters, with data stored in different places around the platter. When your computer writes data to your drive, it does so in "blocks" that are ordered sequentially from one side of the drive's platter to the other. Fragmentation happens when a file gets split between blocks that are far away from each other. The hard drive then takes longer to read that file because the read head has to "visit" multiple spots on the platter. Defragmentation puts those blocks back in sequential order, so your drive head doesn't have to run around the entire platter to read a single file.
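As a toy illustration of why those scattered blocks cost time (the block numbers below are made up, not measured from any real drive):

```python
# Toy model: total head travel to read one file's blocks, in "block positions".
contiguous = [100, 101, 102, 103]
fragmented = [100, 950, 402, 7]

def head_travel(blocks):
    return sum(abs(b - a) for a, b in zip(blocks, blocks[1:]))

print(head_travel(contiguous))  # 3    -> one short sequential sweep
print(head_travel(fragmented))  # 1793 -> long seeks all over the platter
```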
When You Should (and Shouldn't) Defragment
Fragmentation doesn't cause your computer to slow down as much as it used to—at least not until it's very fragmented—but the simple answer is yes, you should still defragment your computer. However, what you need to do depends on a few factors. Here's what you need to know.
If You Use a Solid-State Drive: No Defragmentation Necessary
If you have a solid-state drive (SSD) in your computer, you do not need to defragment it. Solid-state drives, unlike regular hard drives, don't use a spinning platter to store data, and it doesn't take any extra time to read from different parts of the drive. So, defragmentation won't offer any performance increases (though SSDs do require their own maintenance).
Of course, if you have other non-solid-state drives in your computer, those will still need defragmentation.
If You Use Windows 7 or 8
Windows 7 and Windows 8 automatically defragment your hard drives for you on a schedule, so you shouldn't have to worry about it yourself. To make sure everything's running smoothly, open up the Start menu or Start screen and type "defrag." Open up Windows' Disk Defragmenter and make sure it's running on a schedule as intended. It should tell you when it was last run and whether your drives have any fragmentation.
Note: A lot of you are finding that Windows 7's "automatic" defrag leaves a lot to be desired. All the more reason you should check in with Disk Defragmenter every once in a while and make sure it's doing its job! Windows 8 seems to be much better about running it regularly.
Note that in Windows 8, you'll see your SSDs in the Disk Defragmenter, but it doesn't actually defrag them; it's just performing other SSD-related maintenance. So don't worry if it's checked off along with the other drives.
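If you'd rather check from a script than click through the UI, something along these lines should work; it simply shells out to Windows' own defrag tool. The /A (analyze-only) and /U (show progress) switches are my assumption of the Windows 7/8 command-line options, and the command needs an elevated (administrator) prompt:

```python
import subprocess

def analyze_drive(letter="C:"):
    # Analysis only -- this does not defragment anything.
    result = subprocess.run(["defrag", letter, "/A", "/U"],
                            capture_output=True, text=True)
    print(result.stdout)      # the report includes the fragmentation percentage
    return result.returncode  # non-zero usually means "run me as administrator"

if __name__ == "__main__":
    analyze_drive("C:")
```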
If You Use Windows XP
If you're on Windows XP, you'll need to defragment your drives yourself. Just open the Start menu, click Run, type
Dfrg.msc and press Enter. You'll open up the Disk Defragmenter, from which you can defragment each of your drives individually. You should do this about once a week or so, but if you want, you can set it to run on a schedule using Windows' Task Scheduler.
If you're using an SSD, you should really upgrade to Windows 7, since XP doesn't have any built-in tools for SSD maintenance.
If You Use a Mac
If you use a Mac, then you probably don't need to manually defragment, since OS X will do it automatically for you (at least for small files). However, sometimes defragging—particularly if you have a lot of very large files—can help speed up a slow Mac.
When You Should Use a Third-Party Defragmenting Tool
We've talked a bit before about the best defragmenting tools, since Windows' built-in Disk Defragmenter isn't the only one. However, for the vast majority of people, Windows' built-in tools are just fine. Third-party tools are useful if you want to see which files are fragmented or defragment system files (like if you're trying to shrink a drive), but are otherwise unnecessary for most users. So kick back, let your scheduled defragger do its thing, and forget about it!
Page Layout / Keylining
Prior to the introduction of electronic publishing, page layouts were composed
as mechanicals (also called paste-ups or keylines). Artists and design
professionals (sometimes called keyliners) would assemble various elements on
the paste-up board. The elements could include type, which was set on a phototypesetter
and output in galleys
of type on phototypesetting paper; boxes drawn for photo placement; and line
art which was shot in a camera onto a stat, or as a photographic print. Each
element was pasted into position using wax, spray mount, or rubber cement. Crop
marks and registration marks were drawn on the board and then a tissue overlay
was attached over the layout to be used as a guide for indicating the location
of color breaks or writing special instructions. The resulting layout was referred
to as "camera-ready art".
After completing the layout, a negative was produced. Black and white photos
were shot separately onto film using a screen. Color photos were sent to a color
separator where the full color image was separated
into 4 films with each film representing one of the basic color components of
the image. The separate films were then stripped up with the rest of the job.
Solid black or rubylith windows were cut to allow for a clear window on the
negative where the photos would be placed.
Today, artwork is prepared primarily on a computer. Apple Macintosh
has been the computer most widely used by the printing industry in
their art departments, although PC's are increasingly being used.
Photos can be scanned into the computer and retouched or color corrected.
Type is easily set into a page layout program and images are imported.
The color is added directly to the type and the graphics. The resulting
file is then output to create a proof, film, or plates. The computer
has shortened the process of preparing artwork significantly and making
changes is faster and easier.
It is important to learn the system of measurement used in the graphic
arts and publishing industries. The system of measurement is based
upon the point and the pica. A point is defined as exactly
1/72 of an inch. There are 12 points in a pica and 6 picas to the
inch. Type and rules are specified in points and page elements are
measured in picas and points.
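The arithmetic is simple enough to script. Here is a small sketch using nothing beyond the conversions just stated (72 points per inch, 12 points per pica):

```python
# Quick unit helpers for the point/pica system described above:
# 1 point = 1/72 inch, 12 points = 1 pica, 6 picas = 1 inch.
POINTS_PER_INCH = 72
POINTS_PER_PICA = 12

def inches_to_points(inches):
    return inches * POINTS_PER_INCH

def picas_to_points(picas):
    return picas * POINTS_PER_PICA

def points_to_picas_and_points(points):
    """Express a measurement the way layout artists write it: picas plus leftover points."""
    picas, leftover = divmod(points, POINTS_PER_PICA)
    return int(picas), leftover

print(inches_to_points(8.5) / POINTS_PER_PICA)   # 51.0 picas
print(inches_to_points(11) / POINTS_PER_PICA)    # 66.0 picas
print(points_to_picas_and_points(100))           # 100 pt -> (8 picas, 4 pt)
```

So a standard 8.5 x 11 inch page works out to 51 x 66 picas.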
Consider the following items concerning the text in your project:
- Choose a typeface or font that complements and suits your project.
There are many fonts from which to choose, and picking the right
font and style makes a big difference. Use only Adobe PostScript
Type 1 typefaces and avoid using TrueType. When working with the
Macintosh, do not use fonts that are named after cities, such as
Geneva, New York, Chicago, etc. They are "Screen fonts"
only and are needed by the Mac system and are not to be used for printing.
- Avoid using "ALL CAPS" or keep it to a minimum. Type
that is set in all caps is more difficult to read than type set
in upper and lowercase. When type is set in caps, all the letters
have a similar size and shape, making it hard for the eye to distinguish
the letterforms. The use of italic or bold may be a good alternative
in order to highlight words.
- If you create a logo or headline in an illustration program using
a typeface that is not one of the standard fonts, convert it to
outlines before importing it into the page layout program.
- Style sheets should be used to maintain a consistent look for the
design. They save time on large projects, and they allow changes
to be made easily to the entire document with just one change in
the style sheets.
- Set tabs and use them to space across your page instead of using
several spaces. If you use spaces, the lines of text may be misaligned.
- Use hanging indents for bulleted and numbered lists. Set the tab
in the place where the text begins after a bullet or number and
then set the left indent to be the same amount and the first line
indent to a negative number of the same amount. Do not put in a
hard return and tab over. Doing this makes it difficult to make
changes or edit the text later.
- Do not use more than one space between sentences.
- Do not use the straight quote and apostrophe marks on the keyboard.
Some programs will let you set the default to automatically convert
to "curly quotes". If you are not using a program that will convert
them for you, then use the following special command keys (a short
script for automating the conversion appears after this list):
Macintosh:
Opening double quote = option-[
Closing double quote = option-shift-[
Opening single quote = option-]
Closing single quote = option-shift-]
Inch = Symbol font option-, (comma)
Foot = Symbol font option-4
PC (hold down the Alt key and use numbers on the Num keypad):
Opening double quote = alt-0147
Closing double quote = alt-0148
Opening single quote = alt-0145
Closing single quote = alt-0146
Inch = Symbol font alt-0178
Foot = Symbol font alt-0162
- Use ligatures, which combine two letters into a single character, such
as fi and fl.
- Use reverse type sparingly, but if you do use it, the font should
be sans serif type and least 10 point. Avoid using a thin type face,
which can fill in easily and get lost in the background.
- Be careful when overprinting type on a four-color or tinted image.
Avoid overprinting on busy areas in photos. Dark type can get lost
in dark areas of the photo. Outlining the type in a contrasting
color will help to emphasize the type.
- Make sure small text is composed as a solid color so that it is
easy to read.
- Use the actual bold, italic, or bold-italic fonts in the font menu
instead of selecting them on the style palette. If the printer font is
not available for that style, it will still look fine on the screen and
also on a laser or inkjet printer, but it will not print correctly on a
PostScript printer. Never apply a bold style to a font that is already
bold because it will result in misaligned spacing when it is output.
- Do not use the outline option in the style palette. Instead, use your
illustration program to create outline type: type the text, apply a
stroke twice as thick as the desired outline, copy it, paste the copy
on top, and give the copy a fill color and a stroke of none. If
you apply just a stroke to the text without pasting a copy on top,
the letters will appear thinner. Note that you may have to adjust
the kerning and tracking.
- Do not use the shadow style because the tint value and the color
cannot be controlled. Create shadows in an illustration program.
- Use the baseline shift feature only for certain characters to
be moved up or down. Do not use it to shift whole paragraphs. Instead,
use the "leading", "space before", and "space after" settings.
- Instead of hitting the tab key to indent the first line, use the
"indent first line" feature. Never type 5 spaces for an indent.
- Change the hyphenation and justification settings. Typically the
default settings are set too loose.
- Set your leading instead of using the "auto" leading,
which may give you uneven and uncontrolled spacing.
- Avoid having widows and orphans. A widow is a single word floating
at the top of a new page or column, and an orphan is a lone word on
a line at the end of a paragraph.
- Always proofread your copy and have another person proofread it
as well. Know and use the proofreaders' marks.
- Apply kerning and tracking to change the look of your text.
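Here is the straight-to-curly quote cleanup mentioned in the list above, sketched as a small script. It is a generic illustration rather than a feature of any particular layout program, and copy with unusual apostrophe placement may still need a manual pass:

```python
import re

def smarten_quotes(text):
    # Apostrophes inside words (don't, it's) become right single quotes.
    text = re.sub(r"(?<=\w)'(?=\w)", "\u2019", text)
    # A quote preceded by start-of-text or whitespace opens; any other closes.
    text = re.sub(r'(^|\s)"', "\\1\u201c", text)
    text = text.replace('"', "\u201d")
    text = re.sub(r"(^|\s)'", "\\1\u2018", text)
    text = text.replace("'", "\u2019")
    return text

print(smarten_quotes('He said, "It\'s a \'keyline\' job."'))
# -> He said, “It’s a ‘keyline’ job.”
```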
Rules / Boxes
When drawing rules and boxes, consider the following points:
- Print fine rules in solid colors only.
- Never use "hairline" for rules or boxes. It isn't actually
a size and the results will vary. The thinnest rule you should use
is .5 pt.
- Draw your boxes by using the box tool, not by drawing 4 lines.
The corners may not connect. "Close enough" on the screen
may not be the same when it is output on an imagesetter.
- Do not cover an unwanted part of an image with a white box. Just
because you can't see it on the screen doesn't mean it's not there.
The covered part of the image will still take RIP time to process.
- Use the rulers and grids for accuracy.
- Computer graphics are grouped into two main categories: vector
graphics and bitmap images. Vector-art graphics consist of lines
and curves created with software programs like Adobe Illustrator®
and Macromedia FreeHand and are saved as EPS files. The resolution
they print at depends on the printing device. Bitmap images (also
called raster images) are created or scanned into Adobe Photoshop®
and saved as either TIFF or EPS images. The resolution of the images
is set in Photoshop, depending on the type of output you will be
using. There are other programs that can be used to create graphics,
but those mentioned are the most popular programs used by service
bureaus and printing companies.
- The files can then be imported or placed into your layout in the
desktop publishing software. They should not be renamed after they
have been placed, or they will appear to be missing. If you send
your files to be output elsewhere, make sure to include the EPS
or TIFF files with the layout file. They are linked files and the
external files are needed when it is time to output.
- Additional information is listed below to assist you when working with graphics:
- Scale, rotate, and crop your images in your image editing software
before you import them. If you edit your image in the page layout
program, it takes much longer to RIP the files, and at times it may
make the file impossible to RIP at all.
- The resolution of the image should be twice the line screen rating
(lines per inch) that will be used for printing the product. For
example, most commercial printers use a 150-175 line screen, so the
resolution of the image should be 300-350 dpi. Check with the
printer to see what line screen they use. Resolution for scanned
line art should be 1200 dpi if you are outputting on a high-end
imagesetter. (A short script for checking these numbers appears
after this list.)
- Before resizing images, consider the resolution that is desired
for the output. The maximum recommended enlargement is 200%, unless
the final image is to be printed at a resolution less than the resolution
of the original scan. If you don't know the exact size you'll need,
it is better to make it larger and reduce it rather than to enlarge it.
- Crop out the unwanted parts of the graphic in the graphics program
rather than in the layout program. The entire imported file has
to be processed on the RIP even if it isn't being used. White borders
around the image count! Also, do not cover something up with a white
box if you don't want it to print. Crop out the unwanted part instead.
- OPI/FPO or APR - Open Prepress Interface/For
Position Only or Automatic Picture Replacement - This concept uses
a low resolution version of an image to be used in your layout program.
The high resolution version is stored on a server and it is swapped
out when it is sent to the RIP. Using a low resolution file in the
layout saves on file size and time. You can make adjustments such
as cropping, resizing, rotating, flopping, etc., although it is
better to have them made on the actual image first. Changes made
to the image itself, such as color changes, can only be accomplished
on the high-res version. Never change the file names of either file
because changing the name will lose the connection.
- Save your images as CMYK, not RGB, if your job is going to be
printed on a press.
- When placing bitmap files in Quark, never set the background of
the picture box to "none". It should always be set to white.
- PhotoCD - When a photo is placed on a PhotoCD, realize that the
images are stored as RGB images rather than CMYK, which is the format
needed for printing. The photo can be converted in image editing
programs such as Photoshop, but the results can be unpredictable,
if you are not experienced in editing images.
- When you create an EPS or TIFF file in Photoshop and import it
into a page layout program, a white background will be placed around
the image. If you want a transparent background, you will need to
create a clipping path in Photoshop.
- Do not place an EPS within another EPS file; this causes major
problems in the RIP. Use copy and paste instead.
- Remember, when you place an EPS file which includes text, the
font must also be provided in order for the file to be output.