Frankly speaking, you cannot create a Linux partition larger than 2 TB using the fdisk command, because fdisk works with MBR partition tables. This is fine for desktop and laptop users, but on a server you often need a larger partition. For example, you cannot create a 3 TB or 4 TB partition (RAID based) with fdisk. In this tutorial, you will learn how to create Linux filesystems larger than 2 TB to support enterprise-grade operations under any Linux distribution.

To solve this problem, use the GNU parted command with a GPT partition table. parted supports Intel EFI/GPT partition tables. The GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a replacement for the outdated PC BIOS, one of the few remaining relics of the original IBM PC. EFI uses GPT where BIOS uses a Master Boot Record (MBR).

(Fig.01: Diagram illustrating the layout of the GUID Partition Table scheme. Each logical block (LBA) is 512 bytes in size. Negative LBA addresses indicate position from the end of the volume, with −1 being the last addressable block. Image credit: Wikipedia)

Linux GPT Kernel Support

EFI GUID Partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table may get corrupted. By default, Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature:

File Systems > Partition Types
[*] Advanced partition selection
[*] EFI GUID Partition support (NEW)

Find Out Current Disk Size

Type the following command:
# fdisk -l /dev/sdb

Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Linux Create 3TB Partition Size

To create a partition, start GNU parted as follows:
# parted /dev/sdb

GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

Create a new GPT disklabel i.e. partition table:
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)

Next, set the default unit to TB, enter:
(parted) unit TB

To create a 3TB partition size, enter:
(parted) mkpart primary 0.00TB 3.00TB

To print the current partitions, enter:
(parted) print

Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB  ext4         primary

Quit and save the changes, enter:
(parted) quit

Information: You may need to update /etc/fstab.
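The interactive session above can also be run non-interactively with parted's -s (script) option, which is convenient on servers. This is only a sketch of the same steps, assuming the disk is /dev/sdb exactly as in the example; like the interactive session, it destroys any existing data on that disk:

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb unit TB mkpart primary 0.00TB 3.00TB
# parted -s /dev/sdb print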
Format the File System

Use the mkfs.ext3 or mkfs.ext4 command to format the file system, enter:
# mkfs.ext3 /dev/sdb1
OR
# mkfs.ext4 /dev/sdb1

Sample outputs:
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566272 blocks
36628313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount the File System

Type the following commands to mount /dev/sdb1, enter:
# mkdir /data
# mount /dev/sdb1 /data
# df -H

Sample outputs:
Filesystem      Size   Used  Avail  Use%  Mounted on
/dev/sdc1        16G   819M    14G    6%  /
tmpfs           1.6G      0   1.6G    0%  /lib/init/rw
udev            1.6G   123k   1.6G    1%  /dev
tmpfs           1.6G      0   1.6G    0%  /dev/shm
/dev/sdb1       3.0T   211M   2.9T    1%  /data

Make sure you replace /dev/sdb1 with the actual RAID or disk name or block Ethernet device, such as /dev/etherd/e0.0. Do not forget to update /etc/fstab, if necessary (see the sample entry at the end of this section). Also note that booting from a GPT volume requires support in your BIOS / firmware; this is not supported on non-EFI platforms. I suggest you boot the server from another disk, such as an IDE / SATA / SSD disk, and store data on /data.

See also:
- How Basic Disks and Volumes Work (a little outdated, but good for understanding the basic concepts)
- GUID Partition Table at Wikipedia
- parted man page
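As a sketch of the /etc/fstab update mentioned above (assuming the partition is /dev/sdb1, formatted as ext4 and mounted on /data, exactly as in this example), append a line such as:

/dev/sdb1    /data    ext4    defaults    0    2

On production systems it is safer to reference the filesystem by UUID (as reported by the blkid command) instead of the /dev/sdb1 device name, since device names can change between reboots.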
Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D

Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with gamma irradiation, alternating current, ultrasound, and carbon dioxide or antimicrobial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure and a number of food constituents, and packaging and statutory issues.

Keywords: high pressure, food processing, non-thermal processing

Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that “high pressure kills microorganisms and preserves food” was discovered as far back as 1899, and high pressure has been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials, and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to re-discover the application of high pressure in food processing. The adoption of this technology has come about so quickly that it took only three years for two Japanese companies to launch products processed using it.
The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).

The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential of this technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994). Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. the lowering of the freezing point with increasing pressure (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998).

The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.

The effect of high pressure on microorganisms and proteins/enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products.
The application of high pressure increases the temperature of the liquid component of the food by approximately 3°C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9°C per 100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air, such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90°C to 105-120°C by compression to 700 MPa, and brought back to the initial temperature by decompression.

As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice versa (the principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the “activation volume” of the reaction (i.e. the volume of the activation complex less the volume of the reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting some other energy-releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25°C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on proteins/enzymes is reversible, unlike that of temperature, in the range 100-400 MPa, and is probably due to conformational changes and sub-unit dissociation and association processes (Morild, 1981).

For both pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeasts and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane.
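Returning to the activation-volume argument above: in transition-state terms, the pressure dependence of a rate constant is usually written as follows. This is a textbook thermodynamic relation, added here only to make the sign argument explicit; it is not an equation stated in the review:

\[
\left(\frac{\partial \ln k}{\partial P}\right)_T = -\frac{\Delta V^{\ddagger}}{RT}
\]

A reaction whose activated complex is more compact than its reactants (negative activation volume ΔV‡) is therefore accelerated by pressure, while one with a positive activation volume is retarded, consistent with Le Chatelier's principle.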
For instance, with respect to membrane permeabilization, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa the nucleus could no longer be recognized and the loss of intracellular material was almost complete (Farr, 1990). Changes induced in the cell morphology of microorganisms are reversible at low pressures, but irreversible at higher pressures, where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent a decrease below ambient temperature, increases the inactivation rate of microorganisms during high pressure processing. Temperatures in the range 45 to 50°C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat-sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).

Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination, or inhibition of germinated bacterial spores and inactivation of piezo-resistant microorganisms, can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121°C, in conjunction with pressures of 500-800 MPa, have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatment.

High-pressure application leads to the effective reduction of the activity of food quality-related enzymes (oxidases), which ensures high quality and shelf-stable products. Sometimes, food constituents offer piezo-resistance to enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999).

High pressure reduces the rate of the browning (Maillard) reaction. This reaction consists of two stages: a condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration by high pressure (5-50 MPa at 50°C), and pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991).
Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in the denatured state (Okamoto, Kawamura, and Hayashi, 1990). The capability and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology, alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.

EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS

Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.

Pectin methylesterase (PME) is an enzyme which normally tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. An instantaneous pressure kill was dependent only on pressure level, and a secondary inactivation effect was dependent on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50°C) for various process holding times.
PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50°C and 400 MPa/25°C, respectively. Pressures in excess of 500 MPa resulted in sufficiently fast inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on the inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5°C followed first-order kinetics, and the effect of pressure treatment at 600-700 MPa and 10°C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75°C. The inactivation rates were dramatically reduced as soon as the processing pressure was raised. High inactivation rates were obtained at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than PME from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min. Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectin methylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvements by the application of high pressure. Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional-conversion model, while a biphasic model was used to estimate the inactivation rate constants of both fractions at more drastic conditions of temperature/pressure (10-64°C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54°C, an antagonistic effect of pressure and temperature was observed. Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first-order kinetics over a range of pressures and temperatures (650-800 MPa, 10-40°C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.

The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme but did not inactivate the heat-stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice.
Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressures (0.1 to 900 MPa) and temperatures (15 to 65°C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75°C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca2+ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozean, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50°C), an antagonistic effect of pressure and heat was observed.

High pressures induce conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchi without compromising quality. PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa, 5-50°C) of tomato PG was described by a fractional-conversion model, which points to first-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that, over the combined pressure-temperature range of 100-600 MPa and 5-55°C, the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion. Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140°C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces. Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80°C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60°C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature. Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45°C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa.
PE was very resistant to the pressure treatment.

Polyphenoloxidase and Peroxidase

Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high-pressure processing treatments in conjunction with a mild thermal treatment (40-50°C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apple, avocado, grape, pear, and plum at pH 6-7. These PPOs differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25°C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60°C) on POD and PPO enzymes, in order to develop high pressure-processed red grape juice with a stable shelf life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60°C, with pressures of 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in the case of POD, temperature as well as the interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa reduced activity by 28 and 43%, respectively. Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high temperature (≥45°C)-low pressure (≤300 MPa) region, where an antagonistic effect was observed.

Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa and 60°C for 10 min. In Tris buffer at pH 6.8, after treatment at 800 MPa and 20°C, papain activity loss was approximately 24%. The inactivation of the enzyme is because of an induced change at the active site, causing loss of activity without major conformational changes. This loss of activity was due to oxidation of the thiolate ion present at the active site.
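For reference, the kinetic descriptors used throughout the enzyme studies above (D-values, first-order models, fractional-conversion models) can be summarized as follows; these are standard definitions rather than equations reproduced from the review. Simple first-order inactivation of an activity A and its decimal reduction time D are

\[
\frac{A(t)}{A_0} = e^{-kt} = 10^{-t/D}, \qquad D = \frac{\ln 10}{k},
\]

while the fractional-conversion model used when a pressure-stable fraction with residual activity \(A_\infty\) survives is

\[
A(t) = A_\infty + (A_0 - A_\infty)\,e^{-kt}.
\]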
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on the activity of three different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. The changes in conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than the amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment. Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a biphasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure-pulse and pressure-hold approaches for inactivation of amylase was also demonstrated.

High pressure denatures proteins depending on the protein type, the processing conditions, and the applied pressure. During the process of denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible for pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds, and unfolding of molecules. At higher pressures, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis. Monomeric proteins do not show any changes in proteolysis with increase in pressure (Thakur and Nelson, 1998). High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and inter-molecular bonds within or between molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions that are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures, above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).

Figure 1 General scheme for the pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).

When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native versus denatured protein shown in Fig. 1. A practical consequence is that under elevated pressures, proteins can denature at room temperature rather than only at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure.
Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures using lower temperatures (Messens, Van Camp, and Huyghebaert, 1997).

The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, and the size distribution of spherical casein micelles decreases from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20°C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25°C considerably reduced the micelle size, while pressurization at higher temperatures progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 10°C, the casein micelle size decreased slightly on heating, whereas at higher temperatures the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa. Huppertz, Fox, and Kelly (2004b) demonstrated that high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha-s1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure-treated milk at 5°C, dissociation of casein was largely irreversible, but at 20°C considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased the levels of alpha-s1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different from those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that casein micelle size was not influenced by pressures below 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa. Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to the application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa.
Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 5°C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to the formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Transidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. The thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24°C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8°C appeared to be unchanged, whereas at 24°C the alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), beta-lactoglobulin AB was completely hydrolyzed by pronase and alpha-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second-order kinetics, while for alpha-lactalbumin the reaction order was 2.5. Alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrates the unfolding of the protein molecules.

Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet weight yield of cheese from pressure-treated milk were higher than for pasteurized or raw milk cheese. The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times relative to control samples. Pressurization at 400 MPa led to coagulation times similar to those of controls, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than the non-pressure-treated counterparts.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70°C) denaturation of beta-lactoglobulin and alpha-lactalbumin in whey protein followed 3rd- and 2nd-order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulin A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk as compared to whey was attributed to the absence of casein micelles and colloidal calcium phosphate in the whey. The conformation of BSA was reported to remain fairly stable at 400 MPa due to a high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and the duration of pressure treatment (30-80°C for 2-20 min) had a relatively lesser effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins into the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported.

The pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatment up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high pressure-induced denaturation of myosin led to the formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds. Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range 6-10, although the maximum effect was not at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near its isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in the case of minced meat packed in air at pressures less than 300 MPa, while oxidation increased proportionally at higher pressures. On exposure to higher pressures, minced meat in contact with air oxidized rapidly. Pressures >300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995).
Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L color values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and the a-value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of the meat increased and the oxymyoglobin content decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, or oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.

The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa, and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa, with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure treatment, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25°C, kept its original color, and was soft and adhesive. The hardness of the pressure-treated gel increased, and its adhesiveness decreased, with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, while it resulted in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity. The pressure-induced changes in these properties were shown to depend on the pressure, the temperature, and the pH of the solution.
Speroni, Puppo, Chapleau, Lamballerie, Castellani, Añón, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20°C to low-density lipoproteins did not change their solubility even when the pH was changed, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.

The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, in comparison with heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). The treatment of soya milk at 500 MPa for 30 min changed it from a liquid state to a solid state, whereas at lower pressures, and at 500 MPa for 10 minutes, the milk remained in a liquid state but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Añón, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. A change in the secondary structure, leading to a more disordered structure, was also reported. At pH 3.0, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Añón, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10°C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (e.g. oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the level of depletion flocculation. However, a similar effect was not observed at pH 3. Due to the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased, irrespective of the pH conditions. Moreover, the ability of the protein to be adsorbed at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that the application of high pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits that in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had the desirable strength and a cross-linked network microstructure.

Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for elderly persons, athletes, etc. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed the opposite trend.
High pressure processing at 100 MPa increased the antioxidative activity of soy whey protein, but decreased the antioxidative activity of the hydrolysates. High pressure processing increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37°C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. It was shown that the highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After the hydrolysis, 5 peptides under 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.

COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS

Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.

Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with an extended shelf life. The application of high pressure (600 MPa for 20 min at 80°C) reduced the irradiation dose required for a one-log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. The irradiation treatment of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) deactivated the bacteria and yeast in kefir, while leaving the proteins and lipids unchanged.

The exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells, and at 400 MPa for 30 min for Bacillus subtilis spores, following the alternating current treatment, resulted in reduced surviving fractions of both organisms. The combined effect was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).

Pretreatment with ultrasonic waves (100 W/cm2 for 25 min at 25°C) followed by high pressure (400 MPa for 25 min at 15°C) was shown to result in complete inactivation of Rhodotorula rubra. Neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).

Carbon Dioxide and Argon

Heinz and Knorr (1995) reported a 3-log reduction of supercritical CO2-pretreated cultures. The effect of the pretreatment on germination of Bacillus subtilis endospores was monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed. Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high pressure as a nonthermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes.
The increase in high pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectin methylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50°C for pectin methylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments. The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied. No significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high pressure processing (600 MPa, 25°C) of fresh orange juice increases the rate of PME inactivation in Valencia orange juice. The addition of CO2 reduced the treatment time required to achieve an equivalent reduction in PME activity from 346 s to 111 s, but the overall degree of PME inactivation remained unaltered. Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At a pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20°C, but had no effect on the inactivation at 40°C. The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa in 15 min at 20°C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by the application of consecutive, short pressure treatments interrupted by brief decompressions. The combined effect of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 µg/ml and nisin, 400 µg/ml) resulted in increased lethality for Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH PRESSURE ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations. Eshtiaghi and Knorr (1993) employed high pressure around ambient temperature to develop a blanching process similar to hot water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20°C) to potato samples not only caused blanching but also resulted in a four-log-cycle reduction in microbial count whilst retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl2 solution to the medium also improved the texture and the density.
The leaching of potassium from the high-pressure treated sample was comparable with a 3 min hot water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Fair, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70°C) resulted in no significant increase in the drying rate during fluidized bed drying of green beans and carrot. However, the drying rate significantly increased in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato. The effects of chemical pre-treatment (NaOH and HCl treatment) on the rates of dehydration of paprika were compared with products pre-treated by applying high pressure or high intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25°C) and high intensity electric field pulses (2.4 kV/cm, pulse width 300 µs, 10 pulses, pulse frequency 1 Hz) were found to result in drying rates comparable with chemical pre-treatments. The latter pre-treatments, however, eliminated the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. Application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Fair, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. Compression and decompression during the high pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopic examination showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization and hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization.
The maximum value of the diffusion coefficient observed represented an eight-fold increase over the values at ambient pressure. The synergistic effect of cell permeabilization due to high pressure and osmotic stress as the dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in the case of samples treated at 400 MPa. The distribution of relative moisture (M/M0) and solid (S/S0) content as well as the cell permeabilization index (Zp) (shown in Fig. 5) indicates that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). Most dehydrated foods are rehydrated before consumption. Loss of solids during rehydration is a major problem associated with the use of dehydrated foods. Rastogi, Angersbach, Niranjan, and Knorr (2000c) studied the transient variation of moisture and solid content during rehydration of dried pineapples, which were subjected to high pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25°C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in the water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, and so was the release of the cellular components, which form a gel network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure treated pineapple (a) control; (b) 300 MPa; (c) 700 MPa. (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake kinetics during soaking of glutinous rice. The results showed that the length and the diameter of the rice were positively correlated with soaking time, pressure, and temperature. The water uptake kinetics was shown to follow the well-known Fickian model. The overall rates of water uptake and the equilibrium moisture content were found to increase with pressure and temperature. Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20°C) on the water uptake of soybeans and the resulting changes in their microstructure. NMR analysis indicated that water mobility in high-pressure soaked soybeans was more restricted and its distribution was much more uniform than in controls. SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption and disrupted the individual spherical protein body structures. Additionally, DSC and SDS-PAGE analyses revealed that proteins were partially denatured during high pressure soaking.
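As a side note, the 'well-known Fickian model' referred to above is usually fitted to soaking data in a form such as the series solution of Fick's second law shown below. The spherical geometry, effective diffusivity D_eff and grain radius r used here are illustrative assumptions for a generic kernel, not the exact formulation adopted in the cited study.

    % Unsteady-state Fickian water uptake (illustrative sketch; spherical geometry assumed)
    \[
    \mathrm{MR} = \frac{M_e - M_t}{M_e - M_0}
    = \frac{6}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2}
      \exp\!\left(-\frac{n^2 \pi^2 D_{\mathrm{eff}}\, t}{r^2}\right)
    \]

Here M_t is the moisture content at time t, M_0 the initial and M_e the equilibrium moisture content; fitting the leading term of the series to soaking curves gives an apparent D_eff, which is how the pressure and temperature dependence of water uptake is normally quantified.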
Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25°C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking times equivalent to those achieved by soaking for 60-90 min. Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high pressure (33, 400, and 700 MPa for 3 min at 24 and 55°C) and irradiation (2 and 5 kGy) pre-treatments on the hydration behavior of navy beans by soaking the treated beans in water at 24 and 55°C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was the highest in irradiated beans (5 kGy), followed by high-pressure treatment (33 MPa, 3 min at 55°C). Within the experimental range of the study, Peleg's model was found to satisfactorily describe the rate of water absorption of navy beans. A reduction of 40% in oil uptake during frying was observed when thermally blanched frozen potatoes were replaced by high pressure blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid Liquid Extraction
The application of high pressure leads to rearrangement of the tissue architecture, which results in increased extractability even at ambient temperature. Extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared to extraction at 100°C and atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index.
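For reference, Peleg's model mentioned above for navy bean hydration is an empirical two-parameter equation; the generic form is sketched below with the usual symbols (M_0 initial moisture, k_1 a rate constant, k_2 a capacity constant), not the fitted values reported in the study.

    % Peleg's empirical hydration model (generic form)
    \[
    M(t) = M_0 + \frac{t}{k_1 + k_2\, t},
    \qquad
    M_{\infty} = M_0 + \frac{1}{k_2}
    \]

The reciprocal of k_1 gives the initial water absorption rate, while 1/k_2 sets the asymptotic moisture gain, which is why the model is convenient for comparing pre-treatments.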
1
12
<urn:uuid:8146b9ea-7ecd-41c5-998c-4e12edee7f49>
There are limited data on the nutritional status of Asian patients with various aetiologies of cirrhosis. This study aimed to determine the prevalence of malnutrition and to compare nutritional differences between various aetiologies. A cross-sectional study of adult patients with decompensated cirrhosis was conducted. Nutritional status was assessed using standard anthropometry, serum visceral proteins, and subjective global assessment (SGA). Thirty-six patients (mean age 59.8 ± 12.8 years; 66.7% males; 41.6% viral hepatitis; Child-Pugh C 55.6%) with decompensated cirrhosis were recruited. Malnutrition was prevalent in 18 (50%) patients and the mean caloric intake was low at 15.2 kcal/kg/day. SGA grade C, as compared to SGA grade B, was associated with significantly lower anthropometric values in males (BMI 18.1 ± 1.6 vs 26.3 ± 3.5 kg/m², p < 0.0001; MAMC 19.4 ± 1.5 vs 24.5 ± 3.6 cm, p = 0.002) and females (BMI 19.4 ± 2.7 vs 28.9 ± 4.3, p = 0.001; MAMC 18.0 ± 0.9 vs 28.1 ± 3.6, p < 0.0001), but not with visceral proteins. The SGA demonstrated a trend towards more malnutrition in Child-Pugh C compared to Child-Pugh B liver cirrhosis (40% grade C vs 25% grade C, p = 0.48). Alcoholic cirrhosis had a higher proportion of SGA grade C (41.7%) compared to viral (26.7%) and cryptogenic (28.6%) cirrhosis, but this was not statistically significant. Significant malnutrition in Malaysian patients with advanced cirrhosis is common. Alcoholic cirrhosis may involve more malnutrition than other aetiologies of cirrhosis. Cirrhosis of the liver is a devastating condition, commonly the result of decades of chronic inflammation from toxins (e.g. alcohol), viral infection (e.g. hepatitis B) or immune-mediated disease (e.g. autoimmune disease). As a result of the complex pathophysiological processes associated with cirrhosis, it results in significant morbidity, such as gastrointestinal bleeding from portal hypertension, and eventual mortality in many patients. The prognosis of patients with advanced cirrhosis is grim, with a 5-year survival rate of <10%. Patients with decompensated liver cirrhosis form the majority of cases admitted to gastroenterology units world-wide and represent a significant burden on health-care resources. In addition to the associated morbidity highlighted above, protein-energy malnutrition (PEM) has often been observed in patients with liver cirrhosis [3,4]. Previous studies in Western patients have documented malnutrition rates from 20% in compensated liver cirrhosis up to 60% in decompensated liver cirrhosis. Causes of malnutrition in liver cirrhosis are known to include a reduction in oral intake (for various reasons), increased protein catabolism and insufficient synthesis, and malabsorption/maldigestion associated with portal hypertension [3,5,6]. Although a consequence of the disease, malnutrition alone can lead to further morbidity in patients with liver cirrhosis. Increased rates of septic complications, poorer quality of life, and a reduced life span have all been observed in cirrhotics with poorer nutritional status compared to those without [7,8]. In Asia, the high prevalence of chronic hepatitis B infection has resulted in large numbers of people developing liver cirrhosis with its associated complications. Most of the data on malnutrition in patients with cirrhosis have been derived from Western patients, in whom chronic alcohol ingestion has been the commonest aetiology.
Alcoholic patients are known to develop malnutrition for reasons other than liver damage per se. It is uncertain, therefore, whether Asian patients with cirrhosis have the same degree of malnutrition, and its resultant morbidity, as patients with cirrhosis from other parts of the world. The aims of this study were: a) to determine the prevalence of malnutrition in Malaysian patients with cirrhosis using standard nutritional assessment tools and b) to compare nutritional differences between various aetiologies. Local institutional ethics committee approval was sought before commencement of the study. A cross-sectional study of Asian patients admitted for decompensation of cirrhosis to this tertiary institution, between August 2006 and March 2007, was undertaken. The inclusion criteria were adults aged 18 years and above, admitted for decompensation of cirrhosis. Patients with hepatocellular carcinoma and severe, i.e. grade 3 or 4, hepatic encephalopathy were excluded. Eligible patients were given an information sheet in both English and Malay detailing the objectives and nature of the study. Informed consent was obtained from all patients prior to participation. Cirrhosis was diagnosed based on a combination of clinical features, blood profile, and radiological imaging. Clinical features were those of portal hypertension, i.e. ascites and/or gastrointestinal varices. Blood profile included evidence of thrombocytopenia and/or coagulopathy. Radiological features, on either trans-abdominal ultrasound or computerized tomography, had to demonstrate a small shrunken liver with or without splenomegaly and intra-abdominal varices. Severity of liver disease was graded according to the Child-Pugh score, with grades A (mild) to C (severe) indicating the degree of hepatic reserve and function. Nutritional assessment was based on the following: anthropometry, visceral proteins, lean body mass, and subjective global assessment (SGA). All measurements were taken by the same investigator, to avoid any inter-observer variation. All patients in the study had a baseline body mass index (BMI), i.e. weight (kg)/height (m)², recorded. Although a crude measure of nutritional status, BMI was used as a baseline comparison between cirrhotic patients and the local healthy population. Further anthropometric measurements included the following: mid-arm circumference (MAC), triceps skinfold thickness (TST), mid-arm muscle circumference (MAMC), and handgrip strength. MAC was measured to the nearest centimeter with a measuring tape at the right arm. TST, an established measure of fat stores, was measured to the nearest millimeter at the right arm using a Harpenden skinfold caliper (Baty Ltd, British Indicators) in a standard manner. Three measurements were taken for both TST and MAC, with average values calculated and recorded. Mid-arm muscle circumference (MAMC), an established measure of muscle protein mass, was calculated from MAC and TST using the standard formula MAMC = MAC - (3.1415 × TSF). Handgrip strength, a simple and effective tool for measuring nutritional status, was measured with a hydraulic hand dynamometer (JAMAR) in kilogram-force (kgf). Three measurements were made on each arm and an average taken from all measurements. A combination of handgrip strength <30 kgf and MAMC <23 cm had previously been shown to have a 94% sensitivity and 97% negative predictive value in identifying malnourished patients.
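To make the arithmetic of these indices concrete, a minimal sketch is given below; the example values and the millimetre-to-centimetre conversion for TST are illustrative assumptions and are not patient data or thresholds from this study.

    import math

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    def mamc_cm(mac_cm: float, tst_mm: float) -> float:
        """Mid-arm muscle circumference via MAMC = MAC - (pi x TSF).

        TST is recorded in millimetres, so it is converted to centimetres
        here (an assumption) so both terms share the same unit.
        """
        return mac_cm - math.pi * (tst_mm / 10.0)

    # Illustrative values only
    print(round(bmi(55.0, 1.65), 1))      # 20.2 kg/m^2
    print(round(mamc_cm(24.5, 12.0), 1))  # 20.7 cm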
Serum albumin concentration is the most frequently used laboratory measure of nutritional status. Although non-specific, it has been used to assess changes in nutritional status and to stratify the risk of malnutrition. A reduction in serum albumin in the absence of other causes has been shown to represent liver damage, and hence it forms part of a classic liver function test panel. Serum transferrin has a half-life of 9 days and can be used as a marker for malnutrition. Good correlation between transferrin level and the Child-Pugh score has been demonstrated previously, and a reduced level of serum transferrin is additionally indicative of decreased caloric intake.
Subjective global assessment
Subjective global assessment (SGA) is a simple evaluation tool that allows physicians to incorporate clinical findings and subjective patient history into a nutritional assessment. Based on history taking and physical examination, nutritional ratings of patients are obtained as follows: well-nourished (A), moderately malnourished (B), and severely malnourished (C). The SGA has been shown to be a valid and useful clinical nutritional assessment tool for patients with various medical conditions. Malnutrition was defined as MAMC <5th percentile for purposes of standardization with the literature and for accurate comparisons with other cirrhotic populations. However, it is recognized that other markers of malnutrition, such as handgrip strength and the SGA, have been used, albeit in fewer studies.
Dietary intake and assessment
Each patient's oral intake during hospitalization was assessed by the dietary recall method every three days for two weeks, and an average intake was calculated and recorded. The objective was to determine the adequacy of caloric intake per patient with minimum reporting bias. Calculation of the calories in food and drink intake (composition of the diet) was based on local reference data. All data were entered into Statistical Package for the Social Sciences (SPSS) version 13.0 (Chicago, Illinois, USA) software for analysis. Continuous variables were expressed as means with standard deviation and analysed with Student's t-test or the Mann-Whitney test where appropriate, whilst categorical data were analysed using the χ² test. For the comparison of nutritional status in cirrhotic patients of various aetiologies, the SGA was also utilized, as this has been shown to reliably identify malnutrition-related muscle dysfunction. Statistical significance was assumed at a p value of <0.05. A total of 36 patients with decompensated liver cirrhosis were recruited during the study period. The basic demography and clinical features are highlighted in Table 1. The mean age of the patients was 59.8 ± 12.8 years and the most common reason for admission was tense ascites requiring paracentesis. Viral hepatitis (n = 15, 41.6%) and alcoholic liver disease (n = 12, 33.3%) were the most common aetiologies of cirrhosis. 7/12 patients with alcoholic liver disease had an active alcohol intake at the time of the study. All patients had advanced liver disease, with 16 (44.4%) cases of Child-Pugh B and 20 (55.6%) cases of Child-Pugh C cirrhosis.
Table 1. Patient profile
Malnutrition, i.e. MAMC <5th percentile, was prevalent in 18/36 (50%) patients and the mean caloric intake of all cirrhotic patients was low at 15.2 kcal/kg/day. Biochemically, the mean serum albumin (20.6 ± 6.0 g/l) and the mean serum transferrin (1.6 ± 0.7 g/l) were lower than normal values.
24 (66.7%) patients had SGA grade B nutritional status and 12 (33.3%) were SGA grade C, i.e. all patients had some level of malnutrition based on the SGA scale. Table 2 illustrates the nutritional parameters of the subjects according to SGA grades. As expected, in both male and female patients with cirrhosis, mean values of anthropometric measurements such as BMI, MAMC, and TST demonstrated significant differences between cirrhotic patients in SGA grades B and C, with a higher SGA grade correlating well with lower anthropometric values. However, this difference was not observed with visceral proteins such as serum albumin or transferrin (Table 2).
Table 2. Nutritional parameters in patients with cirrhosis according to gender and Subjective Global Assessment grades
Differences in nutritional status between Child-Pugh B and C liver cirrhosis were assessed with the SGA (Table 3). There was a higher proportion of patients with SGA grade C among Child-Pugh C cirrhotic patients compared to Child-Pugh B cirrhotic patients, although this was not statistically significant (40% vs 25%, p = 0.48). However, serum albumin (17.9 ± 4.4 vs 24.1 ± 6.0 g/L, p = 0.001) and transferrin (1.3 ± 0.6 vs 2.0 ± 0.5 g/L, p < 0.0001) levels were significantly lower in patients with Child-Pugh C liver cirrhosis compared to those with Child-Pugh B disease. Caloric intake was further observed to be significantly less in patients with Child-Pugh C disease compared to patients with Child-Pugh B disease (13.3 ± 4.9 vs 17.6 ± 5.7 kcal/kg/day, p = 0.018).
Table 3. Subjective Global Assessment in varying severity and aetiologies of cirrhosis
Aetiology of liver disease and nutritional parameters
The incidence of malnutrition, defined as MAMC <5th percentile, in the different aetiologies of cirrhosis was as follows: alcoholic liver disease n = 9/12 (75%), viral hepatitis (hepatitis B and C combined) n = 5/15 (33.3%), cryptogenic n = 2/7 (28.6%), and autoimmune n = 1/2 (50%). Differences in nutritional status between the various aetiologies of cirrhosis were examined with the SGA (Table 3). Excluding the extremely small number of autoimmune cirrhotic patients, there was a non-statistically significant increase in the proportion of SGA grade C cases in patients with an alcoholic aetiology (41.7%), compared to those with a viral (26.7%) or cryptogenic (28.6%) aetiology of cirrhosis. This study of nutritional assessment in Malaysian patients with advanced cirrhosis has several limitations. The sample size was small, resulting in some limitations to the relevance of the results from the study. Furthermore, the study was conducted on a selected group of patients with cirrhosis, namely those with advanced end-stage disease who had been admitted to hospital for decompensation. Additionally, a significant proportion of patients with ascites did not have dry weight measurements done, which could have influenced BMI and calorie calculation results. Nevertheless, this study provides useful nutritional data which are currently lacking among Asian patients with advanced cirrhosis. This study demonstrated that the prevalence of malnutrition, defined by MAMC <5th percentile, was 50% in Malaysian patients with advanced cirrhosis. The patients with cirrhosis exhibited a range of nutritional abnormalities, with protein-energy malnutrition in 50% (MAMC <5th percentile) and fat store depletion in 30% (TST <5th percentile).
BMI measurements in less malnourished cirrhotic patients were not different from the general population, mainly because ascites and peripheral oedema contributed significantly to body weight in cirrhotic patients, and true lean body mass was not taken into account. The poor caloric intake of 15.2 kcal/kg/day is lower than the recommended level (24-40 kcal/kg/day), and may have been one of the causes of this malnutrition, although other factors are well recognized [3,5]. The level of malnutrition identified in this study appears to be comparable to published data from Italy (34% of cirrhotics with MAMC <5th percentile), a hospital-based study of 315 patients from France (58.7% of Child-Pugh C cirrhotic patients with MAMC <5th percentile), and a previous study from Thailand (38% of cirrhotics with TSF <10th percentile). These data suggest that nutritional deficiencies in cirrhosis are likely to be uniform worldwide, regardless of the ethnic distribution or socioeconomic status (believed to be higher in Western patients compared to Asians) of the population involved. This study further supported the utility of the SGA in Asian patients with cirrhosis. Although anthropometric tools such as the MAMC and handgrip strength are known to be better predictors of malnutrition in adult patients with cirrhosis, these tools are not necessarily practical for everyday use. The SGA, compared to standard anthropometry, is much more applicable in clinical practice and has previously been demonstrated to be highly predictive of malnutrition in advanced cirrhosis. We demonstrated in this study that SGA grade C patients with cirrhosis had significantly lower anthropometric measurements compared to SGA grade B cases, indicating that the SGA was able to differentiate nutritional status fairly well. In terms of clinical severity, we were able to demonstrate a trend towards a higher proportion of SGA grade C in patients with Child-Pugh C cirrhosis compared to Child-Pugh B disease. The lack of statistical significance in this observation was probably a result of the small sample size of our study population, i.e. a type II statistical error. Furthermore, the caloric intake in patients with more advanced cirrhosis was significantly lower, with a likelihood of more malnutrition in this group. In this study, we demonstrated that serum visceral protein levels did not differ significantly between SGA grades B and C, but varied markedly between Child-Pugh B and C liver disease. This indicated that visceral proteins were not influenced by nutritional status but more by the severity of hepatic dysfunction. Differences in malnutrition between various aetiologies of cirrhosis were explored in this study. The frequency of malnutrition in alcohol-related cirrhosis was higher than in other aetiologies, and the SGA demonstrated a trend towards more severe malnutrition in adults with alcoholic cirrhosis compared to other types of cirrhosis. The latter was not statistically significant, probably as a result of the small number of patients in this study. One possible explanation for this finding was that 7/12 alcoholic patients were still actively consuming alcohol at the time of the study, leading to more severe nutritional deficiencies in these patients, as previously reported. Our findings are in agreement with studies that have been conducted in larger populations.
In a study of 1402 patients with cirrhosis in Italy, there was a higher incidence of malnutrition in alcoholic cirrhosis patients compared to other aetiologies of liver cirrhosis. In a Thai study of 60 patients with cirrhosis, the degree of malnutrition was higher in patients with alcoholic cirrhosis, and these patients had more complications of cirrhosis compared to other aetiologies. In summary, malnutrition in Malaysian patients with various aetiologies of cirrhosis is common, together with an inadequate caloric intake. Clinical assessment with the SGA demonstrated a trend towards more malnutrition with increasing clinical severity and in alcohol-related liver disease, although this was not statistically significant. Serum visceral proteins were not found to be an appropriate tool for nutritional assessment in adults with decompensated cirrhosis. A study with a larger sample is required to substantiate these findings. The authors declare that they have no competing interests. MLST designed the study, performed data collection, data analysis and drafted the manuscript. KLG provided administrative support. SHMT provided technical support. SR assisted in data analysis and interpretation. SM assisted in data interpretation and critical revision of the manuscript. All authors reviewed and approved the final version of the manuscript. This study was funded by the following bodies:
1. Long-Term Research Fund (Vote F), University of Malaya (Vote no: FQ020/2007A)
2. Educational grant from the Malaysian Society of Gastroenterology and Hepatology
Ann Intern Med 1967, 66(1):165-198.
Coltorti M, Del Vecchio-Blanco C, Caporaso N, Gallo C, Castellano L: Liver cirrhosis in Italy. A multicentre study on presenting modalities and the impact on health care resources. National Project on Liver Cirrhosis Group. Ital J Gastroenterol 1991, 23(1):42-48.
J Med Assoc Thai 2001, 84(7):982-988.
Southeast Asian J Trop Med Public Health 1979, 10(4):621-626.
J Fla Med Assoc 1979, 66(4):463-465.
Med J Malaysia 2000, 55(1):108-128.
Figueiredo FA, Dickson ER, Pasha TM, Porayko MK, Therneau TM, Malinchoc M, DiCecco SR, Francisco-Ziller NM, Kasparova P, Charlton MR: Utility of standard nutritional parameters in detecting body cell mass depletion in patients with end-stage liver disease.
1
2
<urn:uuid:9d2d92e7-d38b-4d17-b328-511f440971a5>
We've worked out an exclusive deal for our members to bring you this product at a price lower than what everyone else pays anywhere on the internet! The definitive answer to correct lip-sync error for up to four sources! When you watch TV or movies, do you ever notice how picture and sound are sometimes OUT OF SYNC? The presenter's lips don't move quite at the same time as their voice? Irritating, isn't it? This is known as lip-sync error. Even if you haven't consciously noticed lip-sync error (we avoid this impossibility by subconsciously looking away), research at Stanford University discovered that it causes a negative impact on our perception of the characters and story. Lip-sync error affects a huge number of displays, including modern plasma TVs, LCD screens, DLP TVs and digital projectors. The Felston DD740 solves the frustrating problem of lip-sync error for anyone with an A/V amplifier or home theater system.
What causes lip sync error?
There are many causes, but most boil down to the video signal being delayed more than the audio signal, allowing speech to be heard 'before' the lip movement that produced it is seen. Digital image processing within broadcasts and within modern displays delays video and allows audio to arrive too soon. Sound "before" the action that produces it can never occur in nature and is therefore very disturbing when the brain tries to process this conflicting and impossible visual and aural information. Most people initially only notice lip-sync error when it exceeds 40 to 75 ms, but this varies enormously and really depends upon the individual's defence mechanism - how far he can look away from the moving lips so as to ignore the increasing lip-sync error. We call the value at which it is noticed consciously their "threshold of recognition". An individual's "threshold of recognition" falls greatly once it has been reached and lip-sync error has been noticed. At that point their defence mechanism can no longer compensate and the sync problem enters their conscious mind. The same person who was never bothered by a 40 ms lip-sync error may, after noticing a 120 ms error, become far more sensitive and notice errors only a small fraction of their previous "threshold". Many people can "see" lip-sync errors as small as one millisecond and some can even detect 1/3 ms errors.
How do you fix lip sync error?
The only way to correct lip-sync error caused by delayed video is to delay audio by an equal amount. The Felston DD740 digital audio delay solves lip-sync error by letting you add an audio delay to compensate for all the cumulative video delays - no matter what their cause - at the touch of a button on its remote. It connects between four digital audio sources and your AV receiver (or digital speaker system), allowing you to delay the audio to match the video, achieving "perfect lip-sync". Unlike the audio delay feature found in most A/V receivers, the DD740 is designed for easy "on-the-fly" adjustment while viewing, with no image disturbance. This makes fine tuning for perfect lip-sync practical as it changes between programs or discs, and the DD740's 680 ms delay corrects the larger lip-sync errors common in HDTV.
Why doesn't HDMI 1.3+ fix this?
The widely misunderstood "automatic lip-sync correction" feature of HDMI 1.3 does nothing more than "automatically" set the same fixed delay most receivers set manually. It does nothing to correct a/v sync error already in broadcasts or discs, which changes from program to program and disc to disc.
Ironically, it can make lip-sync error "worse" when audio arrives delayed.
Does the Felston DD740 work with HD lossless audio found on Blu-ray discs?
No. The Felston DD740 is a S/PDIF coax/toslink device. Lossless audio such as DTS-HD Master Audio and Dolby TrueHD found on Blu-ray discs is only available over HDMI.
Is a similar delay box available for HDMI?
We are not aware of any similar audio delay boxes that accept HDMI. There are other manufacturers of s/pdif delay boxes similar to the Felston DD740, but both are over twice the price and don't offer as many features (neither has numeric pad delay entry, 36 presets, or 1/3 ms adjustment). An HDMI delay box would need to be an HDMI "repeater" (often called a splitter) since the HD audio is HDCP encrypted along with the video. It would require an HDMI "receiver" chip (like TVs have) to decrypt the audio and video data and a "huge" memory to store it for delay, but it would also require an HDMI "transmitter" chip (like a Blu-ray player has) to HDCP encrypt the re-aligned audio and video for output. If an HDMI delay box ever comes to market it will no doubt be expensive, but like our other products we would offer it to our members at the best price on the internet.
Felston DD740 Features
680 ms delay (340 ms for 96 kHz signals)
On-the-fly adjustment with no image overlay
Tweaking in 1 ms and 1/3 ms steps
36 preset delays for instant recall
Fully featured remote control with numeric keypad for discrete delay entry
Discrete input switching, with input's last delay restored
Automatic optical-to-coax/coax-to-optical conversion
4 digital audio inputs, 2 digital audio outputs (optical and coax)
Adjustable display brightness
Discrete IR commands for integration with learning remotes
No effect on audio quality thanks to bit-perfect reproduction
The Felston digital audio delays solve lip-sync error by allowing you to delay the digital audio signal to match the delay* in the video signal, thereby restoring perfect lip-sync. The delay unit is inserted in the digital audio path between your video source (DVD/Blu-ray disc player, DVR, etc.) and your AV receiver as in the diagram above. Since the DD740's "bit-perfect reproduction" does not change the digital audio signal, it is compatible with PCM and all present and future s/pdif surround sound formats at both 48 kHz and 96 kHz. Since there is nothing in the video or audio signal to define when they are in sync, it is a subjective adjustment, and this is where the remote control excels. It remembers the last delay setting used on each input and includes 36 presets where common delays can be stored for instant recall. But most importantly, the + and - buttons allow dynamic "on-the-fly" delay adjustments while watching, with no image disturbance - an essential feature allowing tweaking for "perfect sync". These are necessary features for true lip-sync correction and are not generally available on even the most expensive AV receivers that claim a lip-sync delay feature. At first thought it might appear the DD740 audio delay could not correct for "already delayed audio in the arriving signal", but in conjunction with the video delay of your LCD, DLP, or plasma display it actually can - up to the display's video delay. That is, if your display delays video 100 ms your DD740 will correct lip-sync errors from 100 ms audio lagging to 580 ms audio leading.
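The arithmetic behind that example is simple enough to sketch; the function below assumes only the two figures quoted above (the display's own video delay and the unit's 680 ms maximum) and is not based on any published Felston formula.

    def correctable_range_ms(display_video_delay_ms: float,
                             max_device_delay_ms: float = 680.0):
        """Return (max audio-lagging error, max audio-leading error) that can be corrected.

        Audio arriving late is tolerated up to the display's own video delay;
        audio arriving early can be delayed by whatever headroom remains.
        """
        max_audio_lag = display_video_delay_ms
        max_audio_lead = max_device_delay_ms - display_video_delay_ms
        return max_audio_lag, max_audio_lead

    # With a display that delays video by 100 ms:
    print(correctable_range_ms(100.0))  # (100.0, 580.0), matching the example above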
* Normally lip-sync error is due to video delays in both the arriving signal and the display, allowing audio to arrive too soon; but when broadcasters over-correct for the video delay they added, the arriving signal might have audio delayed instead of video.
Using the DD740
In standby mode, audio passes through with no delay while coax-to-optical and optical-to-coax conversion remains active. When the DD740 is switched on, the signal is output with your last selected delay. When you notice lip-sync error, correcting it is simply a matter of adding or subtracting audio delay. The plus and minus delay buttons allow adjustment in 1 ms steps (or even 1/3 ms). You adjust while watching your program and there is no image disturbance at all as you press the buttons and shift the audio into alignment with the video. As you use your DD740 you will notice that different sources, different discs, and different broadcasts require different delays for perfect lip-sync, so the DD740 includes 36 delay presets (9 per input) to remember these commonly used settings, making it easy to get to the optimum delay quickly. It also features direct numeric entry, so if you know the desired delay you just enter the numbers. That feature is even more valuable when used with programmable learning remote controls (e.g. Pronto, Harmony, URC, etc.) since it allows full control of the DD740 using its comprehensive discrete IR commands. A/V receivers do not offer all these DD740 features, but the most important and overriding advantage of the DD740 is the ease of delay adjustment while watching, with no image overlays to disrupt your viewing. With an A/V receiver that forces you to use a set-up menu overlaying your image every time you need to adjust the delay, perfect lip-sync just isn't practical.
But my amplifier only has an optical input!
No problem. The DD740 transmits the selected source to both outputs simultaneously. This means an a/v amplifier with just one input (optical or coax) can be used with four digital audio sources (two coax and two optical).
Which types of audio signal can the DD740 delay?
In order to solve lip sync issues, the DD740 delays digital audio signals passing between your source equipment (e.g. disc player, set-top box) and your home theater amplifier via a DIGITAL AUDIO CABLE. Digital audio cable is either optical (toslink) or a coax cable fitted with a single RCA phono socket at each end. NOTE: The DD740 is not directly compatible with ANALOG (stereo) audio signals. Analog audio signals use a pair of leads that connect to two RCA phono sockets (usually one with a red plastic insert and one white). However, if you use a home theater amplifier then analog sources can be used with the DD740 via an adaptor.
Can the DD740 delay DTS?
Yes. In fact the DD740 can delay any s/pdif digital audio (coax and optical) format that is used today, i.e. Dolby Digital, DTS, Dolby Digital EX, DTS 96/24, PCM, etc.
Does the DD740 reduce sound quality?
Absolutely not. There is no change at all in the quality of audio when using a DD740, as the audio is being transmitted digitally. The DD740 simply stores the digital bits coming in and then outputs them, unchanged, after the delay period. Since the data is digital, a perfect copy is made with absolutely no deterioration in sound quality.
Can I use the DD740 with an analog (stereo) source?
Yes! All that is needed is a low cost, third-party analog-to-digital converter. This simply connects between the analog audio source and the DD740.
Such a converter costs about US$40 (25 GBP) or less depending on your location. Please note, the output from the DD740 is still digital audio, and so you will still need an AV amplifier with a digital audio input or a speaker system that accepts s/pdif digital audio input.
How can I connect more than 4 digital audio sources?
If you need to connect more than 4 digital sources, or for example have 3 sources that each require optical connections, then third-party adapters are available. For instance, to connect an optical (toslink) source to a coax input of the DD740, a simple optical-to-coax converter may be used. These are available at low prices both from online stores and from audio accessory shops in the high street, priced at approximately US$30. Alternatively, a powered digital audio switch may be used. AVOID the use of mechanical toslink switches and splitters since they can reduce light levels and degrade the digital audio signals reaching the DD740, and may cause occasional dropouts in sound or not work at all. Toslink switches that do not require external power are definitely "mechanical", but remote controlled powered switches may also be mechanical internally. Powered switches that offer coax-to-toslink and/or toslink-to-coax conversion will not be mechanical and should work fine. A suitable powered digital audio switch is the Midiman CO2, for example. It will connect to two digital sources, one coax and one optical. The CO2's output connects to any of the DD740's inputs, leaving the other three inputs available for a total of 5 sources. When the time comes to use one of the inputs connected via the CO2, simply move its switch to the source required.
How do I use a learning remote control with the DD740?
The DD740 includes features to allow extensive control by learning remotes.
What is the longest cable I can use with the DD740?
We recommend that, for best results, all cables (coax and optical) are kept to the shortest lengths practical. It is not possible to say exactly what the maximum length of cable is that may be used, since that will depend on the quality and condition of the cable and also on the equipment at the other end. However, as a guide, a maximum length of 5 metres (15 feet) is advisable for any cable connected to the DD740. In particular, we recommend that the DD740 is positioned near to your digital audio source and connected to it using short cables. Problems that may occur if a cable is too long include audio drop-outs (occasional short periods of silence) or loss of audio altogether.
Audio Delay Capabilities:
0 - 680 milliseconds in 1 ms or 0.33 ms steps (32-48 kHz sample rate signals)
0 - 340 milliseconds in 1 ms or 0.33 ms steps (96 kHz sample rate signals)
36 user-programmable presets (9 per input)
Remote control handset included? YES
Full functionality available from handset
May be integrated with learning remote controls, including additional discrete command codes
2 x Digital Audio In (coaxial) RCA phono socket (75 ohm)
2 x Digital Audio In (optical) toslink socket
Digital Audio Out (coaxial) RCA phono socket (75 ohm)
Digital Audio Out (optical) toslink socket
DC power supply socket: 9V DC (+ve center pole), 200mA from power adaptor
Less than 2 Watts for the DD740 from 9V DC. AC power consumption will depend upon the country-specific power adaptor used with the unit but will not exceed 5 watts in any case.
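The halving of the maximum delay at 96 kHz is what you would expect from a fixed-size sample buffer; the snippet below illustrates that relationship with an assumed buffer size and is not a description of the DD740's actual internals.

    def max_delay_ms(buffer_samples: int, sample_rate_hz: int) -> float:
        """Maximum delay achievable by a fixed-length FIFO at a given sample rate."""
        return buffer_samples / sample_rate_hz * 1000.0

    # Assume a buffer sized to give 680 ms at 48 kHz (illustrative figure only)
    BUFFER = int(0.680 * 48_000)                 # 32,640 samples per channel
    print(round(max_delay_ms(BUFFER, 48_000)))   # 680 ms at 48 kHz
    print(round(max_delay_ms(BUFFER, 96_000)))   # 340 ms at 96 kHz, half as in the spec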
Size: 5.7" (145mm) x 4.1" (105mm) x 1.4" (35mm) Weight: 9.9oz (280g) approx Integration with learning remotes The DD740 is compatible with learning remote controls, providing seamless integration with your A/V system. Every learning remote is capable of replicating the IR commands of the DD740's own remote control. Please refer to the instructions that accompany your learning remote for details of how to do this. In addition, the DD740 has extra IR commands that can be programmed into more sophisticated learning remotes such as the Philips Pronto and ProntoNEO. By sequencing these commands, total control of the DD740 may be achieved. For example, turning on the DD740, selecting the input, selecting the delay preset and even setting its display brightness, all from a single button press on your learning remote. The full set of IR commands available are: Power On* (discrete) Power Off* (discrete) Digit 0-9 (discrete) Input A-D (discrete) Preset 1-5 (discrete) Preset 6-9* (discrete) Brightness 1-5* (discrete) *Only available when using a suitable programmable learning remote control. DD740 digital audio delay 2 x AAA batteries You cannot post new topics in this forum You cannot reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum You cannot attach files in this forum You can download files in this forum
1
47
<urn:uuid:b789c5c0-7b8a-4b55-8230-92805ced4a38>
Kelso, Scottish Borders
Scottish Gaelic: Cealsaidh
Kelso seen from the Cobby Tweedside meadow
Kelso shown within the Scottish Borders
Distance to Edinburgh: 44 mi (71 km)
Distance to London: 350 mi (560 km)
Council area: Scottish Borders
Lieutenancy area: Roxburgh, Ettrick and Lauderdale
Sovereign state: United Kingdom
UK Parliament constituency: Berwickshire, Roxburgh and Selkirk
Scottish Parliament constituency: Ettrick, Roxburgh and Berwickshire
Kelso (Scottish Gaelic: Cealsaidh, Scots: Kelsae) is a market town and civil parish in the Scottish Borders area of Scotland. It lies where the rivers Tweed and Teviot have their confluence. The parish has a population of 6,385. Kelso's main tourist draws are the ruined Kelso Abbey and Floors Castle, a William Adam designed house completed in 1726. The bridge at Kelso was designed by John Rennie, who later built London Bridge. The town of Kelso came into being as a direct result of the creation of Kelso Abbey in 1128. The town's name stems from the fact that the earliest settlement stood on a chalky outcrop, and the town was known as Calkou (or perhaps Calchfynydd) in those early days. Standing on the opposite bank of the river Tweed from the now-vanished royal burgh of Roxburgh, Kelso and its sister hamlet of Wester Kelso were linked to the burgh by a ferry at Wester Kelso. A small hamlet existed before the completion of the Abbey in 1128, but the settlement started to flourish with the arrival of the monks. Many were skilled craftsmen, and they helped the local population as the village expanded. The Abbey controlled much of life in the Kelso-area burgh of barony, called Holydean, until the Reformation in the 16th century. After that, the power and wealth of the Abbey declined. The Kerr family of Cessford took over the barony and many of the Abbey's properties around the town. By the 17th century, they virtually owned Kelso. In Roxburgh Street is the outline of a horseshoe petrosomatoglyph where the horse of Charles Edward Stuart cast a shoe as he was riding through the town on his way to Carlisle in 1745. He is also said to have planted a white rosebush in his host's garden, descendants of which are still said to flourish in the area. For some period of time the Kelso parish was able to levy a tax of 2 pence on every Scottish pint sold within the town. The power to do this was extended for 21 years in 1802 under the Kelso Two Pennies Scots Act, when the money was being used to replace a bridge across the river Tweed that had been destroyed by floods. Kelso High School provides secondary education to the town, and primary education is provided by Edenside Primary and Broomlands Primary. The town offers much in the way of sport and recreation: the River Tweed at Kelso is renowned for its salmon fishing, and there are two eighteen-hole golf courses as well as a National Hunt (jumping) horse racing track. The course is known as "Britain's Friendliest Racecourse"; racing first took place in Kelso in 1822. In 2005 the town hosted the 'World Meeting of 2CV Friends' in the grounds of nearby Floors Castle. Over 7,000 people took over the town and are said to have brought in more than 2 million pounds to the local economy. According to a letter dated October 17, 1788, 'The workmen now employed in digging the foundations of some religious houses which stood upon St.
James' Green, where the great annual fair of that name is now held in the neighbourhood of this town, have dug up two sone [sic] coffins of which the bones were entire, several pieces of painted glass, a silver coin of Robert II, and other antique relics'. The town's rugby union team (Kelso RFC) is highly respected, and its annual rugby sevens tournament takes place in early May. Famous former players include John Jeffrey, Roger Baird, Andrew Ker and Adam Roxburgh, all of whom featured in sevens teams that dominated the Borders circuit in the 1980s, including several wins in the blue ribbon event at Melrose. Kelso RFC also hold an annual rugby fixture, the oldest unbroken fixture between a Scottish and a Welsh side. The opposition comes from Pontrhydyfen, a small village nestled in the beautiful South Wales Valleys, famous as the birthplace of the film actor Richard Burton and the vocalist Ivor Emmanuel. The fixture was founded some 47 years ago by Ian Henderson, a local Kelso businessman, and Tom Owen, fixture secretary of Pontrhydyfen RFC. The two teams currently play for the DT Owen Cup; the clubs alternate the fixture, playing one year in Kelso (the first fixture venue) and the following year in Pontrhydyfen. This fixture has nurtured generations of friendships, and its 50th anniversary will be held in 2013. Other clubs claim to have the longest-running fixture between a Scottish and a Welsh side, but this is the longest unbroken fixture. Every year in July, the town celebrates the border tradition of Common Riding, known as Kelso Civic Week. The festival lasts a full week and is headed by the Kelsae Laddie with his Right and Left Hand Men. The Laddie and his followers visit neighbouring villages on horseback, with the climax being the Yetholm Ride on the Saturday. There are many competitions and social events every day. There have been many songs written about Kelso (or Kelsae), most notably "Kelsae Bonnie Kelsae", but the most recent one is "Yetholm Day", composed by Gary Cleghorn, a young follower of Civic Week for many years. The song tells the story of the Kelsae Laddie and his followers on the Saturday ride-out to Kirk Yetholm and Town Yetholm. Kelso also hosts its annual fair on the first weekend of September; the weekend includes drinking, dancing, street entertainers, live music, stalls and a free music concert, and attracts around 10,000 people to the town over the whole weekend. As a fund-raiser for Kelso Civic Week, Gary Cleghorn has involved ex-Laddies and locals in singing some of the old Kelso songs, plus some new songs by local artists, on a CD, "Songs of Kelso", which is sold in the town by local shops and public houses. Sir Walter Scott attended Kelso Grammar School in 1783 and said of the town, "it is the most beautiful if not the most romantic village in Scotland". Another attraction is the Cobby Riverside Walk, which goes from the town centre to Floors Castle along the banks of the Tweed, passing the point where it is joined by the River Teviot. Kelso has two bridges that span the River Tweed. "Rennie's Bridge" was completed in 1803 to replace an earlier one washed away in the floods of 1797; it was built by John Rennie of Haddington, who later went on to build Waterloo Bridge in London, and his bridge in Kelso is a smaller and earlier version of Waterloo Bridge.
The bridge was the cause of local rioting in 1854, when the Kelso population objected to paying tolls even though the cost of construction had been covered; the Riot Act was read, and three years later tolls were abolished. Hunter's Bridge, a kilometre downstream, is a modern construction built to divert vehicles around the town and take much of the heavy traffic that has damaged Rennie's bridge. Famous people from Kelso have included the civil engineer Sir James Brunlees (1816–1892), who constructed many railways in the United Kingdom as well as designing the docks at Avonmouth and Whitehaven. Sir William Fairbairn (1789–1874) was another engineer, who built the first iron-hulled steamship, the Lord Dundas, and constructed over 1000 bridges using the tubular steel method which he pioneered. Thomas Pringle, the writer, poet and abolitionist, was born at nearby Blakelaw, a 500-acre (2.0 km²) farmstead four miles (6 km) to the south of the town, where his father was the tenant.
Floors Castle
Floors Castle is a large stately home just outside Kelso. It is a popular visitor attraction. Adjacent to the house there is a large walled garden with a cafe, a small garden centre and the Star Plantation. Kelso is twinned with two cities abroad:
1
2
<urn:uuid:25ec666d-9f36-4a1e-95d6-7f96b2b578be>
The Voortrekker Road (H2-2):
The Voortrekker Road (H2-2) south-east of Pretoriuskop was used by Carolus Trichardt, the son of the Voortrekker Louis Trichardt. He was commissioned in 1849 by the Transvaal Government of the time to open up a regular route between the northern interior and Delagoa Bay. Albasini's caravans were the main users of this road. Over the years his porters transported thousands of kilograms of goods from the coast and carried back loads of ivory. The trader Fernandes da Neves accompanied one of Albasini's caravans in 1860 and reported that it took 24 days to complete the 250-mile journey between the coast and Pretoriuskop. The trip employed 150 Tsongas, each of whom carried 40 lb of trade goods; 68 porters carried food and camping equipment and 17 elephant hunters kept guard. The Voortrekker Road was improved in 1896 by the trader Alois Nelmapius to cater for the transport of supplies to Lydenburg and Mac Mac, where gold had been discovered. The road was used extensively by transport riders on their way to what was then known as Portuguese East Africa (today Mozambique). The road descends from the mountainous Pretoriuskop sourveld into the rolling hills of mixed bushwillow woodland, past a number of geological features. It is a good drive if you want to get into the game-rich plains south of Skukuza. It was on this road that the little terrier Jock of the Bushveld was born. Jock's story was written by his master, Sir Percy Fitzpatrick, a former transport rider who became a politician and businessman in the early 20th century. The main landmark on this road is Ship Mountain (662 m), used as a navigational aid by early pioneers and travellers. It is geologically distinct from the surrounding granite countryside because it is made up of gabbro, a hardier relative of basalt. According to an old tale, Ship Mountain was a fort used by the Sotho people in the 18th century to protect themselves and their cattle against Swazi raiders coming in from the south. Sotho warriors would hide families and livestock in the caves on the top of Ship Mountain and then use rocks to stop the Swazi from getting to the top. At the foot of Ship Mountain was a trading store run by Chief Manungu, who was part of Joao Albasini's trading empire. Along this road you can still see the fence of the boma which was used during the first white rhino relocation in Kruger Park, in which 2 bulls and 2 cows were released in October 1961. In the 1960s 320 white rhino were released into southern Kruger Park from the Umfolozi Game Reserve in KwaZulu-Natal, and a further 12 were released in the north. The Pretoriuskop area is one of the best locations to track and see white rhino. The Voortrekker Road roughly follows a line of thornveld to Afsaal. The significance of this is that the grass is more palatable than the surrounding sourveld and the wildlife can more easily be seen. Even though there are not a lot of animals in this part of the Park, numbers along this line are far higher than in the surrounding sourveld. By the 1900s almost all of the game had been shot out of the region by early hunters. It was so bad that when Stevenson-Hamilton surveyed the area in 1902, he noted that the only wild animal he saw between Ship Mountain and Skukuza was a single reedbuck. Closest Rest Camps:
<urn:uuid:37b03aa5-fd62-4141-b7d5-39e4fd6a71d2>
What do professional designers really do? This question needs to be asked in order to answer why you need a design education and what you need to study.

The projects created by designers give form to the communication between their client and an audience. To do this, designers ask: What is the nature of the client? What is the nature of the audience? How does the client want to be perceived by the audience? Designers also explore the content of the message the client wishes to send, and they determine the appropriate form and media to convey that message. They manage the communication process, from understanding the problem to finding the solution. In other words, designers develop and implement overall communication strategies for their clients.

Some of the projects presented here will probably seem familiar because of their broad exposure in the media. Others, which are limited to a particular audience, may surprise you. You'll see that design arrests attention, identifies, persuades, sells, educates, and gives visual delight. There is a streak of pragmatism in American culture: our society tends to focus on results. The processes that went into creating these design projects are often invisible, but the designer's own words describe the significant strategies. It's clear that some projects, because of their size, would be inconceivable without considerable project management skills. And the range of content clearly demonstrates the designer's need for a good liberal arts education to aid in understanding and communicating diverse design content.

The projects that follow represent various media, such as print (graphic design's historic medium) and three-dimensional graphic design media, including environmental graphic design, exhibitions, and signage. Electronic media, such as television and computers, as well as film and video are also represented. Various kinds of communication are included, from corporate communications to publishing and government communications. Some projects focus on a specialization within design, such as corporate identity programs or type design. Information design and interface design (the design of computer screens for interactivity) reflect the contemporary need to streamline information and to use new media.

Three designer roles are also highlighted. Developed over a lifetime, these careers go beyond the commonly understood role of the designer. The corporate executive oversees design for a large company; the university professor teaches the next generation of designers and thus influences the future of the field; the design entrepreneur engages in design initiation as an independent business. Consider these and other design-related roles as you plan your studies and early job experience.

The projects and designers presented here were selected to illustrate the range of graphic design activities and to represent the exceptional rather than the ordinary. Seeing the best can give you a glimpse into the possibilities that await you in the competitive, creative, and rewarding field of design.

Digital design is the creation of highly manipulated images on the computer. These images then make their final appearance in print. Although computers have been around since the forties, they were not practical tools for designers until the first Macintoshes came out in 1984. April Greiman was an early computer enthusiast who believes that graphic design has always been involved with technology.
After all, Gutenberg's fifteenth-century invention of movable type created a design as well as an information revolution.

Greiman's first interest was video, which led naturally to the computer and its possibilities. She bought her first Mac as a toy, but soon found it an indispensable creative tool. "I work intuitively and play with technology," says Greiman. "I like getting immediate feedback from the computer screen, and I like to explore alternative color and form quickly on-screen. Artwork that exists as binary signals seems mysterious to me. It is an exhilarating medium!" She wants to design everything and to control and play with all kinds of sensory experience. Designers working with digital design need to be more than technicians. Consequently, their studies focus on perception, aesthetics, and visual form-making as well as on technology.

I didn't have the math skills (so I thought) to become an architect. My high school training in the arts was in the "commercial art" realm. Later at an art school interview I was told I was strong in graphic design. So as not to humiliate myself, not knowing what graphic design was, I just proceeded onwards: the "relaxed forward-bent" approach, my trademark! -April Greiman

The book remains our primary way of delivering information. Its form has not changed for centuries, and its internal organization (table of contents, chapters, glossaries, and so forth) is so commonplace that we take it for granted. But now a challenger has appeared: the computer. No longer merely a tool for preparing art for the printer, the computer is an information medium in itself. Computer-based design delivers information according to the user's particular interest. Information is restructured into webs that allow entry from different points, a system that may be more like our actual thinking processes than the linear order of the book is. On the computer, the designer can use time and sound in addition to text and image to draw attention, to animate an explanation, or to present an alternative way to understand a concept. This new technology demands designers who can combine analysis with intuition.

Clement Mok does just that. He is a certified Apple software developer (he can program) and a graphic designer comfortable in most media. QuickTime system software, recently released by Apple, supports the capability to do digital movies on the Macintosh. As system software, it is really invisible. "Providing users with this great technology isn't enough," says Mok. "You also have to give them ideas for what they can do and samples they can use." Mok addressed this problem by developing QuickClips, a CD library of three hundred film clips ranging from excerpts of classic films to original videos and animations created by his staff. These fifteen- to ninety-second movies can be incorporated into user-created presentations. It is like having a small video store in your computer. With QuickClips, Mok opened new avenues for presentation with the computer.

It is easy to overlook type design because it is everywhere. Typically we read for content and ignore the familiar structural forms of our alphabet and its formal construction in a typeface. Only when the characters are very large, or are presented to us in an unusual way, do we pay attention to the beautiful curves and rhythms of repetition that form our visible language. Since Gutenberg's invention of movable type in the mid-fifteenth century, the word has become increasingly technological in its appearance.
Early type was cast in metal, but today's new type design is often created digitally on the computer through a combination of visual and mathematical manipulation. The history of culture can be told through the history of the letterform. The lineage of many typefaces can be traced back to Greek inscriptions, medieval scribal handwriting, or early movable type.

Lithos, which means "stone" in Greek, was designed by Carol Twombly as a classically inspired typeface. She examined Greek inscriptions before attempting to capture the spirit of these letterforms in a type system for contemporary use. Lithos was not an exact copy from history, nor was it created automatically on the computer. Hand sketches and settings that used the typeface in words and sentences were developed and evaluated. Some were judged to be too stiff, some "too funky," but finally one was just right. These were the early steps in the search for the form and spirit of the typeface. Later steps included controlling the space between letters and designing the variations in weight for a bold font. Twombly even designed foreign-language variations. Clearly, patience and a well-developed eye for form and system are necessities for a type designer.

As a kid, when I wasn't climbing trees, skiing, or riding horses, I was drawing and sculpting simple things. I wanted a career involving art of some kind. The restrictions of two-dimensional communication appealed to my need for structure and my desire to have my work speak for me. The challenge of communicating an idea or feeling within the further confines of the Latin alphabet led me from graphic design into type design. -Carol Twombly

Most people have had the experience of losing themselves in a film but probably haven't given much thought to the transition we go through mentally and emotionally as we move from reality to fantasy. Film titles help to create this transition. The attention narrows, the "self" slips away, and the film washes over the senses. Film titles set the dramatic stage; they tune our emotions to the proper pitch so that we enter into the humor, mystery, or pathos of a film with hardly a blink.

Rich Greenberg is a traditionally schooled designer who now works entirely in film. His recent Dracula titles are a classic teaser. He begins with the question: What is this film about? Vampires. What signals vampires for most of us? Blood. Greenberg believes that a direct approach using the simplest idea is usually the best. "What I do in film is the opposite of what is done with the print image. Dracula is a very good example of the process. There is very little information on the screen at any time, and you let the effect unfold slowly so the audience doesn't know what they're looking at until the very end. In print, everything has to be up front because you have so little time to get attention. In film you hold back; otherwise it would be boring. The audience is captive at a film - I can play with their minds."

Special effects are also of interest to Greenberg. In Predator, the designer asked, How can I create a feeling of fear? He began by exploring the particular possibilities for horror that depend on a monster's ability to camouflage himself so he seems to disappear into the environment. The designer's visual problem was to find a way for the object to be there and not be there. It was like looking into the repeating, diminishing image in a barber's mirror. To complicate matters, the effect needed to work just as well when the monster was in motion.
Whether designing opening titles or special effects that will appear throughout a film, designers have to keep their purpose in mind. According to Greenberg, "Nobody goes to a film for the effects; they go for the story. Effects must support the story."

Motion graphics, such as program openings or graphic demonstrations within a television program, require the designer to choreograph space and time. Images, narration, movement, sound and music are woven into a multisensory communication. Chris Pullman at WGBH draws an analogy between creating a magazine, with its cover, table of contents, letters to the editor, and articles, and creating a television program like Columbus and the Age of Discovery. In both cases, the designer must find a visual vocabulary to provide common visual features.

Columbus opens slowly and smoothly, establishing a time and a place. A ship rocking on the waves becomes a kind of "wallpaper" on which to show credits. The opening is a reference to what happened - it speaks of ships, ocean, New World, Earth - without actually telling the story. In contrast, the computer-graphic map sequences are technical animation and a critical part of the storytelling. Was Columbus correct in his vision of the landmass west of Europe? Something was there, but what and how big? Was it the Asian landmass Columbus had promised to find? In 1520, Magellan's expedition sailed around the Americas through the strait that now bears his name - and found 5,000 more miles of sea travel to Japan! Columbus had made a colossal miscalculation. The designer needed to visualize this error. Authentic ancient maps established the perspective of the past; computer animation provided the story as we understand it today and extended the viewer's perspective with a three-dimensional presentation.

Pullman created a 3-D database with light source and ocean detail for this fifty-seven-second sequence. "The move was designed to follow the retreating edge of darkness, as the sun revealed the vastness of the Pacific Ocean and the delicate track of Magellan's expedition snaked west. As the Pacific finally fills the whole frame, the music, narration, and camera work conspire to create that one goose-bump moment. In video, choreography, not composition, is the key."

Objects, statistics, documentary photographs, labels, lighting, text and headlines, color, space, and place - these are the materials of exhibition design. The designer's problem is how to frame these materials with a storyline that engages and informs an audience and makes the story come alive. The Ellis Island Immigration Museum provides an example of how exhibition designers solve such a problem.

The museum at Ellis Island honors the many thousands of immigrants who passed through this processing center on their way to becoming United States citizens. It also underscores our diversity as a nation. The story is told from two perspectives: the personal quest for a better life, which focuses on individuals and families, and the mass migration itself, a story of epic proportions.

Tom Geismar wanted to evoke a strong sense of the people who moved through the spaces of Ellis Island. In the entry to the baggage room, he used space as a dramatic device to ignite the viewer's curiosity. Using a coarse screen like that used in old newspapers, Geismar enlarged old photographs to life size and then mounted these transparent images on glass. The result is an open space in which ghostly people from the past seem to appear. The problem of how to dramatize statistical information was another challenge.
The exhibit Where We Came From: Sources of Immigration uses three-dimensional bar charts to show the number of people coming from various continents in twenty-year intervals; the height of the vertical element signals volume. The Peopling of America, a thematic flag of one thousand faces, shows Americans today. The faces are mounted on two sides of a prism; the third side of the prism is an American flag. This striking design becomes a focal point for the visitor and is retained as a powerful memory.

Exhibit design creates a story in space. Designers who work in this field tend to enjoy complexity and are skilled in composition and visual framing, model making, and the use of diagrams, graphics, and maps.

Even as an adolescent, I was interested in "applied art." I was attracted to the combination of "art" (drawing, painting, etc.) and its practical application. While there was no established profession at the time (or certainly none that I knew of), my eyes were opened by the Friend-Heftner book Graphic Design and my taste more fully formed under a group of talented teachers in graduate school. I still enjoy the challenge of problem solving.

As people become more mobile - exploring different countries, cities, sites, and buildings - complex signage design helps them locate their destinations and work out a travel plan. One large and multifaceted tourist attraction that recently revamped its signage design is the world-famous Louvre Museum in Paris, France. In addition to the complexity of the building and its art collection, language and cultural differences proved to be fundamental design problems in developing a signage system for the Louvre.

Carbone Smolan Associates was invited to compete for this project sponsored by the French government. In his proposal, Ken Carbone emphasized his team's credentials, their philosophy regarding signage projects, and their conceptual approach to working on complex projects. Carbone Smolan Associates won the commission because they were sensitive to French culture, they were the only competitor to ask questions, and their proposal was unique in developing scenarios for how museum visitors would actually use the signage system.

The seventeenth-century Louvre, with its strikingly modern metal-and-glass entryway designed in the 1980s, presented a visual contrast of classicism and modernity. Should the signage harmonize with the past or emphasize the present? The design solution combines Granjon, a seventeenth-century French typeface, with a modern counterpart.

The signage design also had to address an internal navigational problem: how would visitors find their way through the various buildings? To add to the potential confusion, art collections are often moved around within the museum. The designers came up with an innovative plan: they created "neighborhoods" within the Louvre, neighborhoods that remained the same regardless of the collection currently in place. The signage identified the specific neighborhoods; the design elements of a printed guide (available in five languages) related each neighborhood to a particular Louvre environment. It's clear that signage designers need skills in design systems and planning as well as in diagramming and model making.

Design simply provided the broadest range of creative opportunities. It also appealed to my personal interest in two- and three-dimensional work including everything from a simple poster to a major exhibit.
-Ken Carbone

Packaging performs many functions: it protects, stores, displays, announces a product's identity, promotes, and sometimes instructs. But today, given increased environmental concern and waste-recycling needs, packaging has come under scrutiny. The functions packaging has traditionally performed remain; what is needed now is environmentally responsive design. Fitch Richardson Smith developed just such a design - really an "un-packaging" strategy - for the Gardena line of watering products. A less-is-more strategy was ideally suited to capture the loyalty of an environmentally aware consumer: a gardener.

The designers' approach was to eliminate individual product packaging by using sturdy, corrugated, precut shipping bins as point-of-purchase displays. Hangtags on individual products were designed to answer the customer's questions at point-of-sale and to be saved for use-and-care instructions at home. This approach cut costs and reduced environmental impact in both manufacturing and consumption. What's more, Gardena discovered that customers liked being able to touch and hold the products before purchase. Retailers report that this merchandising system reduces space needs, permits tailoring of the product assortment, and minimizes the burden on the sales staff. A modular system, it is expandable and adaptable and can be presented freestanding or on shelves or pegboards. The graphics are clear, bright, and logical, reinforcing the systematic approach to merchandising and information design.

Contemporary environmental values are clearly expressed in this packaging solution. The product connects with consumers who care about their gardens, and the packaging-design solution relates to their concern about the Earth. Package designers tend to have a strong background in three-dimensional design, design and product management, and design planning.

Environmental graphics establish a particular sense of place through the use of two- and three-dimensional forms, graphics, and signage. The 1984 Olympics is an interesting example of a project requiring this kind of design treatment. The different communication needs of the various Olympics participants - athletes, officials, spectators, support crews, and television viewers - together with the project's brief use, combined to create an environmental-design problem of daunting dimension and complexity.

In the 1984 Los Angeles Games, the focus was on how a multicultural American city could embrace an international event. Arrangements were basic and low-budget. Events, planned to be cost-conscious and inclusive, were integrated into Los Angeles rather than isolated from it. Old athletic stadiums were retrofitted rather than replaced with new ones. These ideas and values, as well as the celebratory, international nature of the Olympics, needed to be expressed in its environmental design.

One of the most important considerations was to design a visual system that would provide identity and unity for individual events that were scattered throughout an existing urban environment. Through the use of color and light, the visual system highlighted the geographic and climatic connection between Los Angeles and the Mediterranean environment of the original Greek Games. The graphics expressed celebration, while the three-dimensional physical forms were a kind of "instant architecture": sonotubes, scaffolding, and existing surfaces were signed and painted with the visual system.
The clarity and exuberance of the system brought the pieces together in a cohesive, immediately recognizable way. Under the direction of Sussman/Prejza, the design took form in workshops and warehouses all over the city. Logistics - the physical scope of the design and the time required for its development and installation - demanded that the designers exhibit not only skill with images, symbols, signs, and model making but also considerable organizational ability.

Strategic design planners are interested in the big picture. They help clients create innovation throughout an industry rather than in one individually designed object or communication. First, the strategic design planners develop a point of view about what the client needs to do. Then they orchestrate the use of a wide variety of design specialties. The end result integrates these specialties into an entire vision for the client and the customer. This approach unites business goals, such as customer satisfaction or increasing market share, with specific design performance.

The scope of strategic design planning is illustrated by one Doblin Group project. Customer satisfaction was the goal of the Amoco Customer-Driven Facility. Larry Keeley, a strategic design planner at the Doblin Group, relates that "the idea was to reconceive the nature of the gas station. And like many design programs, this one began with a rough sketch that suggested how gas stations might function very differently." The design team needed to go beyond giving Amoco a different "look." They needed to consider customer behavior, the quality of the job for employees, the kinds of fuel the car of the future might use, and thousands of other details. Everything was to be built around the convenience and comfort of customers. Keeley and his team collaborated with other design and engineering firms to analyze, prototype, and pilot-test the design.

The specific outcomes of the project include developments that are not often associated with graphic design. For example, the project developed new construction materials as well as station-operation methods that are better for the environment and the customer. A gas nozzle that integrated the display of dispensed gas with a fume-containment system was also developed. This system was designed to be particularly user-friendly to handicapped or elderly customers. For Amoco itself, software-planning tools were developed to help the company decide where to put gas stations so that they become good neighbors. These new kinds of gas stations are now in operation and are a success.

Creating a visual system is like designing a game. You need to ask: What is the purpose? What are the key elements and relationships? What are the rules? And where are the opportunities for surprise? With over 350 national parks and millions of visitors, the United States National Park Service (NPS) needed a publication system to help visitors orient themselves no matter which park they were in, to understand the geological or historical significance of the park, and to better access its recreational opportunities. The parts of the system had to work individually and as a whole.

Systems design involves considerations of user needs, communication consistency, design processes, production requirements, and economies of scale, including the standardization of sizes. Rather than examining and designing an isolated piece, the designer of a system considers the whole, abstracting its requirements and essential elements to form a kind of game plan for the creation of its parts.
When Massimo Vignelli was hired to work with the NPS design staff, they agreed on a publication system with six elements: a limited set of formats; full-sheet presentations; park names used as logotypes; horizontal organization for text, maps, and images; standardized, open, asymmetric typographic layout; and a master grid to coordinate design with printing. The system supports simple, bold graphics like Liberty or detailed information like Shenandoah Park, with its relief map, text, and photographs.

A well-conceived system is not a straitjacket; it leaves room for imaginative solutions. It releases the designer from solving the same problem again and again and directs creative energy to the unique aspects of a communication. To remain vital and current, the system must anticipate problems and opportunities. Designers working in this area need design-planning skills as well as creativity with text, images, symbols, signs, diagrams, graphs, and maps.

Educational publishing isn't just textbooks anymore. Traditional materials are now joined by a number of new options. Because children and teenagers grow up with television and computers, they are accustomed to interactive experiences. This, plus the fact that students learn best in different ways - some by eye and some by ear - makes educational publishing an important challenge for designers.

Ligature believes that combining visual and verbal learning components in a cooperative, creative environment is of paramount importance in developing educational materials. Ligature uses considerate instructional design, incorporating fine art, illustrations, and diagrams, to produce educational products that are engaging, substantive, relevant, and effective.

A Ligature project for a middle school language arts curriculum presents twelve thematic units in multiple ways: as a full-color magazine, a paperback anthology, an audiotape, several videotapes, a language arts survival guide giving instruction on writing, software, fine art transparencies, and a teacher's guide containing suggestions for integrating these materials. These rich learning resources encourage creativity on the part of both teachers and students and allow a more interactive approach to learning.

Middle school students are in transition from child to adult. The central design issue was to create materials that look youthful but not childish, that are fresh, fun, and lively, yet look "grown up." The anthology has few illustrations and looks very adult, while the magazine uses type and many lively images as design elements.

In educational publishing, multidisciplinary creative teams use prototype testing to explore new ideas. Materials are also field-tested on teachers and students. Designers going into instructional systems development need to be interested in information, communication, planning, and teamwork.

What makes you pick up a particular magazine? What do you look at first? What keeps you turning the pages? In general, your answers probably involve some combination of content (text) and design (images, typography, and other graphic elements). Magazine designers ask those same questions for every issue they work on; then they try to imagine the answers of their own particular audience - their slice of the magazine market. At Rolling Stone, designers work in conjunction with the art director, editors, and photo editors to add a "visual voice" to the text. They think carefully about their audience and use a variety of images and typefaces to keep readers interested.
"We try to pull the reader in with unique and lively opening pages and follow through with turnpages that have a good balance of photos and pullquotes to keep the reader interested," says deputy art director Gail Anderson. Designers also select typefaces that suggest the appropriate mood for each story. The designers work on their features from conception to execution, consulting with editors to help determine the amount of space that each story needs. They also work with the copy and production departments on text changes, letterspacing, type, and the sizing of art.

At the beginning of the two-week cycle, designers start with printouts of feature stories. They select photographs and design a headline. Over the course of the next two to three days, they design the layouts. At the same time, each Rolling Stone designer is responsible for one or more of the magazine's departments and lays out those pages as well. Eventually both editors and designers sign off on various stages of the production process and examine final proofs.

Anderson is excited about how the new technology has changed the role of magazine designers. "We now have the freedom to set and design type ourselves, to experiment with color and see the results instantly, and to work in what feels like 3-D. The designer's role has certainly expanded, and I think it is taken more seriously than it was even a few years ago." Magazine designers should enjoy working with both type and images, be attuned to content concerns and able to work well with editors, have technological expertise, and be able to tolerate tight deadlines.

Drawing - deciding what is significant detail, what can be suggested, and what needs dramatic development - is a skill that all designers need in order to develop their own ideas and share them with others. Many designers use drawing as the core of their work. Milton Glaser is such a designer.

Keeping a creative edge and searching for new opportunities for visual development are important aspects of a lively design practice. When Glaser felt an urge to expand his drawing vocabulary and to do more personally satisfying work, he found himself attracted to the impressionist artist Claude Monet. Glaser liked the way Monet looked: his physical characteristics expressed something familiar and yet mysterious. Additionally, Monet's visual vocabulary was foreign to Glaser, whose work is more linear and graphic. While many designers would be intimidated by Monet's stature in the art world, Glaser was not, because he was consciously seeking an opportunity for visual growth.

In a sense, Glaser's drawings of Monet were a lark - an invention done lightly. Glaser worked directly from nature, from photographs, and from memory in order to open himself to new possibilities. The drawings, forty-eight in all, were done over a year and a half and then were shown in a gallery in Milan. They became the catalog for a local printer who wanted to demonstrate his color fidelity and excellence in printing; they also demonstrate flexibility of vision: the selection of detail, the balancing of light and shadow, and the varying treatments of figure and ground.

Drawing is a rich and immediate way to represent the world, but drawing can also illustrate ideas in partnership with design.

Creating the key graphic element that identifies a product or service and separates it from its competitors is a challenging design problem. The identity needs to be clear and memorable. It should be adaptable to extreme changes in scale, from a matchbox to a large illuminated sign.
And it must embody the character and quality of what it identifies. This capturing of an intangible is an important feature of identity design, but it is also a subtle one.

Hotel Hankyu International is the flagship hotel for the Hankyu Corporation, a huge, diversified Japanese company. It is relatively small for a luxury hotel, with only six floors of accommodations. The client wanted to establish the hotel as an international hotel, rather than a Japanese hotel. In Japan, "international" means European or American. Consequently, the client did not look to Japanese designers but instead hired Pentagram, with the understanding that the hotel's emblem would be a flower, since flowers are universally associated with pleasure. The identity was commissioned first, before other visual decisions (such as those about the interior architecture) were made. Here the graphic designer could set the visual agenda.

Rather than one flower, six flowers were designed as the identity, one for each floor. To differentiate itself in its market, this small luxury hotel benefited from an extravagant design. Each flower is made up of four lines that emerge from the base of a square. The flowers are reminiscent of the 1920s Art Deco period, which suggests sophistication and world travel. Color and related typefaces link the flowers. One typeface is a custom-designed, slim Roman alphabet with proportions similar to those of the flowers. The other consists of Japanese characters and was designed by a Japanese designer.

The identity appears on signage, room folders, stationery, packaging, and other hotel amenities. It is clear and memorable and conveys a sense of luxury. Designers working with identity design need to be skilled manipulators of visual abstraction, letterforms, and design systems.

Systems design seeks to unify and coordinate all aspects of a complex communication. It strives to achieve consistent verbal and visual treatment and to reduce production time and cost. Systems design requires a careful problem-solving approach to handling complexity.

Caterpillar Inc. is a worldwide heavy-equipment and engine manufacturer. Its most visible and highly used document is the Specalog, a product-information book containing specifications, sales and marketing information, and a competitive-product reference list. A Specalog is produced for each of fifty different product types and translated into twenty-six languages. The catalog output totals seventy million pages annually. Before Siegel & Gale took on Specalog, no formal guidelines existed, so the pages took too much time to create and were inconsistent with Caterpillar's literature strategy and corporate image. Bringing systematic order and clarity to this mountain of information was Siegel & Gale's task.

First they asked questions: What do customers and dealers need to know? What do the information producers (Caterpillar's product units) want? An analysis of existing Specalogs revealed problems with both verbal and visual language: there was no clear organization for content; language was generic; product images were taken from too great a distance; and specifications charts lacked typographic clarity. The brochures of Caterpillar's competitors were also analyzed so as not to miss opportunities to make Specalog distinctive. These activities resulted in a clear set of design guidelines.

A working prototype was tested with customers and dealers. Following revisions, the new design was implemented worldwide.
Its significant features include an easy-to-use template system compatible with existing Macintosh computers (thus allowing for local-market customization), a thirty-percent saving in production time and cost, and increased approval by both customers and dealers. Achieving standardization while encouraging customization is a strategy in many large international organizations. Designers involved with projects like this study information design, design planning, and evaluation techniques.

Designers are problem solvers who create solutions regardless of the medium. But designers create within the confines of reality. The challenge is to push the limits of reality to achieve the most effective solution. -Lorena Cummings

Whether they are large or small, corporations need to remind their public who they are, what they are doing, and how well they are doing it. Even the venerable Wall Street banking firm of J.P. Morgan needs to assert itself so the public remembers its existence and service. Corporate communications serve this function, and the design of these messages goes a long way toward establishing the corporate image.

Usually corporate communications include identity programs and annual reports, but there are also other opportunities to communicate the corporate message. Since 1918, J.P. Morgan has published a unique guide that keeps up with the changing world of commerce and travel. The World Holiday and Time Guide covers over two hundred countries and keeps the traveler current with twenty-four time zones. In the Guide, the international businessperson can find easy-to-read tables and charts giving the banking hours as well as opening and closing business times for weekdays and holidays. Specific cultural holidays, such as Human Rights Day (December 10) in Namibia and National Tree Planting Day (March 23) in Lesotho, are included.

The seventy-five-year history of the Guide is also an informal chronicle of world change. It has described the rise and decline of Communism and the liberation of colonial Africa and Asia; today it keeps up with the recent territorial changes in Europe. The covers of the Guide invite the user to celebrate travel and cultural diversity; the interior format is a model of clarity and convenience.

In-house design groups have two functions: they provide a design service for their company and they maintain the corporate image. Because projects are often annual, responsibility for them moves around the design group, helping to sustain creativity and to generate a fresh approach to communication. Consequently, the Guide is the work of several designers. To work in corporate communications, designers need skills relating to typography, information design, and print design.

My early exposure to a design studio made me aware of the design profession as an opportunity to apply analytical abilities to an interest in the fine arts. Graduate design programs made it possible for me to delve more deeply into the aspects of design I found personally interesting. Since then, the nature of the design profession, which constantly draws the designer into a wide range of subjects and problems, has continued to interest me in each new project. It's been this opportunity to satisfy personal interests while earning a living that has made design my long-term career choice. -Won Chung

Just as profit-oriented corporations need to present a carefully defined visual identity to their public, so must a nonprofit organization like the Walker Art Center.
Even with limited resources, this museum uses graphic designers to present its best face to the public. For twenty years the Walker Art Center presented itself in a quiet, restrained, and neutral manner. It was a model of contemporary corporate graphics. But times change, and like many American museums, the Walker is now taking another look at its role in society. The questions the Walker is considering include: What kind of museum is this? Who is its audience? How does the museum tell its story to its audience? What should its visual identity and publications look like? Identity builds expectation. Does the identity established by the museum's communications really support the programs the Walker offers?

The stock-in-trade of the Walker Art Center includes exhibitions and the performing arts for audiences ranging from children to scholars, educational programs, and avant-garde programming in film and video. As the museum's programming becomes even more varied, the old "corporate" identity represented by a clean, utilitarian design no longer seems appropriate. To better represent the expanded range of art and audience at the museum, The Design Studio, an internal laboratory for design experimentation at the Walker, is purposely blurring aspects of high and low culture and using more experimental typefaces and more eclectic communication approaches. Posters, catalogs, invitations to exhibitions, and mailers for film and performing art programs often have independent design and typographic approaches, while the calendar and members' magazine provide a continuity of design.

Publication design, symbol and identity systems, and type and image relationships are among the areas of expertise necessary for in-house museum designers.

I like the way words look, the way ideas can become things. I like the social, activist, practical, and aesthetic aspects of design. -Laurie Haycock Makela

How do you get around in an unfamiliar city? What if the language is completely different from English? What kind of guidebook can help you bridge the communication gap? Access Tokyo is a successful travel guide to one of the most complex cities in the world. It is also an example of information design, the goal of which is clarity and usefulness.

Richard Wurman began the Tokyo project as an innocent, without previous experience in that city. His challenge was to see if he could understand enough about Tokyo to make major decisions about what to include in a guidebook. He also needed to develop useful instruction to help the English-speaking tourist get around. Ignorance (lack of information) and intelligence (knowing how to find that information) led him to ask the questions that brought insight and order to his project. Using his skill in information and book design, Wurman drew on his own experience as a visitor to translate the experience of Tokyo for others.

Access Tokyo presents the historical, geographical, and cultural qualities that make Tokyo unique, as well as resources and locations for the outsider. Maps are a particular challenge since they require reducing information to its essential structure. The map for the Yamanote Line, a subway that rings Tokyo, is clear and memorable. The guide is bilingual because of the language gap between English, with its Roman alphabet, and Japanese, with its ideographic signs. The traveler can read facts of interest in English but can also show the Japanese translation to a cab driver.

Wurman also wanted to get the cultural viewpoint across.
To this end, he asked Japanese architects, painters, and designers to contribute graphics to the project. The colorful tangram (a puzzle made by cutting a square of paper into five triangles, a square, and a rhomboid) is abstract in a very Japanese way. Access Tokyo bridges the culture chasm as well as the information gap.

"One who organizes, manages, and assumes the risks of a business enterprise." This dictionary definition of the word entrepreneur is a bland description of a very interesting possibility in design. A design entrepreneur extends the general definition: he or she must have a particular vision of an object and its market. While many designers believe they could be their own best client, few act on this notion. Tibor Kalman of M&Co. acted: he was a design entrepreneur.

Kalman's firm, M&Co., was not without clients in the usual sense. Their innovative graphics for the Talking Heads music video "(Nothing but) Flowers" demonstrate that creativity and even fun are possible in traditional design work for clients. But somehow this wasn't enough for Kalman. He was frustrated with doing the packaging, advertising, and promotion for things he often viewed critically. He wanted to do the "real thing," the object itself.

Kalman started with a traditional object, a wristwatch. He then applied his own particular sense of humor and elegant restraint to the "ordinary" watch in order to examine formal ideas about time. The Pie watch gives only a segment, or slice, of time, while the Ten One 4 wristwatch is such a masterpiece of understatement that it is in the permanent design collection of the Museum of Modern Art. Other variations include Romeo (with Roman numerals rather than Arabic ones), Straphanger (with the face rotated ninety degrees to accommodate easy reading on the subway), and Bug (with bugs substituted for the usual numerals). These few examples give a sense of the wry humor that transforms an ordinary object into a unique personal pleasure.

Entrepreneurial design requires creativity and business savvy along with design and project-management skills. Of course, an innovative concept is also a necessity. Vision and risk-taking are important attributes for the design entrepreneur.

I became a designer by accident; it was less boring than working in a store. I do have some regrets, however, as I would prefer to be in control of content rather than form. -Tibor Kalman

If you are a take-charge person with vision, creativity, and communication and organizational skills, becoming a design executive might be a good long-term career goal. Obviously, no one starts out with this job; it takes years to grow into it. A brief review of Robert Blaich's career can illustrate what being a design executive is all about.

Educated as an architect, Blaich became involved with marketing when he joined Herman Miller, a major American furniture maker. Then he assumed a product-planning role and began to consciously build design talent for the organization. By the time he was vice president of design and communication, Blaich was running Herman Miller's entire design program (including communication, product, and architectural design). In a sense, he was responsible for the totality of design at the company.

In 1980 Blaich came to Philips Electronics N.V., an international manufacturer of entertainment and information systems. Located in the Netherlands, Philips is the world's twenty-eighth largest corporation and was seen by many Americans as a stodgy foreign giant. The president of Philips asked Blaich to take the corporation in new directions.
By the time Blaich left in 1992, design was a strategic part of Philips's operation and its dull image was reinvigorated and unified. What's more, the corporation now saw its key functions as research, design, manufacturing, marketing, and human resources, in that order. Design's number-two position reflected a new understanding of its importance. Today, as president of Blaich Associates, Blaich is a consultant for Philips and responsible for corporate identity and for strategic notions of design.

Just what do corporate design executives do? They look at design from a business point of view, critique work, support new ideas, foster creativity and collaboration, bring in new talent, and develop new design capabilities. They are design activists within the corporation.

The best teaching is about learning, exploring, and making connections. Teachers in professional programs are almost never exclusively educators; they also practice design. Sheila Levrant de Bretteville is a case in point. She is a professor of graphic design at Yale University and owner of The Sheila Studio. Both her teaching and her design are geared toward hopeful and inspiring work.

Looking at a student assignment and at one of de Bretteville's own design projects illuminates the interplay of teaching and practice. De Bretteville saw the windows of abandoned stores in New Haven as an opportunity to communicate across class and color lines. She chose the theme of "grandparents," which formed a connection between her Yale students and people of the community. The windows became large posters that told stories of grandparents as immigrants, as labor leaders, as the very aged, and more. The project gave students the opportunity to explore the requirements of space, materials, and information.

De Bretteville's project, Biddy Mason: Time and Place, is an example of environmental design. Located in Los Angeles, it explores the nine decades of Biddy Mason's life: as a slave prior to her arrival in California and as a free woman in Los Angeles, where she later lived and worked and founded the AME church. "I wanted to celebrate this woman's perseverance and generosity," says de Bretteville. "Now everyone who comes to this place will know about her and the city that benefited from her presence here." A designed tactile environment, which included the imperfections in the slate and concrete wall, required working with processes and materials that were often unpredictable, like the struggles Biddy faced in her life.

Biddy Mason and the grandparent windows connect design practice and teaching as de Bretteville encourages her students to use their knowledge, skills, and passion to connect to the community through design.

Graphic Design: A Career Guide and Education Directory
Edited by Sharon Helmer Poggenpohl
The American Institute of Graphic Arts

Suppose you want to announce or sell something, amuse or persuade someone, explain a complicated system or demonstrate a process. In other words, you have a message you want to communicate. How do you "send" it?
<urn:uuid:218e2553-d742-457b-9271-eb2fb6409684>
more a grant application, really)

The aim is to create a complete philosophy of mathematics based directly on applied mathematics, taking the view that mathematics is not about other-worldly entities like numbers or sets, nor a mere language of science, but a direct science of structural features of the real world like symmetry, continuity and ratios. Applied mathematicians take it for granted that they are studying certain real features of the world - properties like symmetry and continuity. Modern developments in mathematics such as chaos theory and computer simulation have confirmed that view, but traditional philosophy of mathematics has remained fixated instead on complicated formal results concerning the simplest mathematical entities, numbers and sets. Using straightforward examples that exhibit the richness of the mathematical study of complexity, the grant project will develop an Aristotelian realist philosophy of mathematics that challenges the usual Platonist and other classical options. In argument readable by an educated philosophical or scientific audience, it shows how mathematics finds the necessities hidden below the surface of our world.

For most of the twentieth century, the philosophy of mathematics was dominated by the competing schools of logicism, formalism and intuitionism, all of which emphasised the role of human thought and symbols in creating mathematics. Dating from around 1900, they were generally regarded as unsatisfactory, especially in explaining applied mathematics (Körner 1962). For example logicism, the theory developed by Frege and Russell that mathematics is just logic, proved untenable on technical grounds as well as giving no insight into how trivial logical truths could prove so useful in dealing with the real world. Those schools shared this problem with Platonism, the traditional alternative according to which mathematics is about an abstract or other-worldly realm inhabited by numbers, sets and so on; Platonism always found it hard to explain the mysterious connection between that other world and the real objects of our world which are counted and weighed. Platonism also has significant epistemological problems, being susceptible to Benacerraf's challenge (1973). The challenge is to explain how knowledge of mathematics is possible, given (i) a broadly causal approach to epistemology, and (ii) the view that mathematical objects are abstract. Despite this difficulty, many working mathematicians continue to find Platonism attractive, in part because it seems to be the only realist position available.

By the time of Eugene Wigner's celebrated 1960 article "The unreasonable effectiveness of mathematics in the natural sciences", it was clear that new directions in the philosophy of mathematics were needed. In the last thirty years, there has been a diverse range of responses to the impasse, but there has been no agreement on what is the leading direction, or even consensus within particular schools on whether the problem of the applicability of mathematics is adequately solved. Much of the best work has been in a Platonist direction. Works such as Colyvan 2001a and 2001b have shown that Platonism has substantial resources and is not easily dismissed, while Steiner 1998 presented a direct Platonist attack on the problem of the applicability of mathematics.
Nevertheless we believe (for reasons to be developed more fully in our project) that these authors have not succeeded in dealing with the argument advanced originally by Aristotle: that sciences of the real world should be able to deal with real properties directly, and that reference to abstract objects in another world creates philosophical difficulties without being necessary for explaining the necessary interconnections between the real properties. In particular we believe an adequate epistemology for realism has yet to be developed.

For this reason we also disagree with the school led by Resnik (1997) and Shapiro (1997, 2004) (surveyed in Reck and Price 2000 and Parsons 2004). Although, like us, they accept the slogan "mathematics is the science of structure" and they have made many perceptive observations on the way mathematics looks at structure and patterns, their theory is in our view vitiated as a complete philosophy of mathematics by their tendency to regard "structures" as a kind of Platonist entity similar to numbers and sets. There have also been nominalist philosophies of mathematics (Field 1980, Azzouni 1994, Chihara 2004), which we believe are subject to the insurmountable obstacles that dog nominalism in general. As with the Platonists, they speak as if Platonism and nominalism are the only alternatives, whereas Aristotelian realists believe those two schools make the same error, of supposing that everything that exists is an individual (whether physical or abstract). The nominalists did, however, usefully describe some possibilities of discussing mathematical realities without reference to Platonist abstract entities.

One of the more important developments in philosophy of mathematics in the last quarter of the twentieth century is the rise of indispensability arguments for mathematical realism. According to the Quine-Putnam indispensability arguments, we must believe in the existence of mathematical objects if we accept our best physical theories at face value, because our best physical theories make indispensable reference to mathematical objects. We agree that indispensability arguments are important but believe their significance has been misunderstood because of the Platonism-or-nominalism dichotomy being assumed. That dichotomy encourages a fundamentalist attitude to mathematical language, as if numbers must either exist fully as abstract entities, or not exist in any way at all. Some subtlety is needed as to what exactly is concluded to be indispensable (Baker 2003). Moreover, care must be taken so as to make room in naturalism for the distinctive methods employed in generating mathematical knowledge (Maddy 1992). Instead, we will argue, mathematical language is indeed about some real aspects of the world, but not about abstract objects. Mathematics does not stand to natural science as a tool stands to a constructed entity; rather the object of scientific study exemplifies, or instantiates, a mathematical structure. (What to say of mathematical structures that have no physical instantiation is an issue that we will also consider carefully.) Thus a (pure) quantum state is a vector, and a space-time is a differentiable manifold, and both facts constrain the object in very definite, mathematically understood, ways.
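To make the last point concrete with the simplest possible case (the example is ours, added for illustration rather than taken from the proposal): a pure state of a single qubit just is a unit vector in a two-dimensional complex Hilbert space, and the normalisation condition is a structural constraint on the physical system itself, not on some separate abstract surrogate of it.

\[
  \lvert \psi \rangle \;=\; \alpha\,\lvert 0 \rangle + \beta\,\lvert 1 \rangle ,
  \qquad \alpha, \beta \in \mathbb{C},
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]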
We will be guided by more hopeful developments from a number of Australian authors (Armstrong 1988, 1991, Forrest and Armstrong 1987, Bigelow 1988, Bigelow and Pargetter 1990, Michell 1994, Mortensen 1998), supported by a few overseas writings that are not explicitly in the philosophy of mathematics (Dennett 1991, Devlin 1994, Mundy 1987). They hark back to the old theory of medieval and early modern Aristotelians that mathematics is the "science of quantity", a view still visible in some basic developments of nineteenth-century mathematics (Newstead 2001) but thereafter ignored. This work is situated in the Australian realist theory of universals defended by D.M. Armstrong. Lengths, weights, time intervals and so on are real properties of things, and so are the relations between those properties. So a ratio such as 2.71, for example, is conceived to be the (real) relation that can be shared by pairs of lengths, pairs of weights and pairs of time intervals. A similar analysis is given of whole numbers like 4, which is a real relation between a heap of, say, parrots, and the "unit-making" property, being-a-parrot.

This school of thought has unfortunately been little noticed outside Australia, a situation we hope to remedy. It has also confined itself to analysing only the most simple and traditional mathematical entities, such as numbers and sets, thus ignoring the richer mathematical structures like symmetry and network topology, and the more applied mathematical sciences such as operations research, where, we believe, the strengths of a structuralist philosophy of mathematics are both more obvious and better connected with the concerns of practitioners.

Those concerns have broadened in ways that demand to be considered philosophically. The last sixty years have seen the creation of a number of new "formal" or "mathematical" sciences, or "sciences of complexity" - operations research, theoretical computer science, information theory, descriptive statistics, mathematical ecology, control theory and others. Theorists of science have almost ignored them, despite the remarkable fact that (from the way the practitioners speak) they seem to have come upon the "philosophers' stone": a way of converting knowledge about the real world into certainty, merely by thinking (Franklin 1994). In these sciences, and more generally in the natural sciences, there has been a growing appreciation of the role of "systems concepts": "ecosystem", "water cycle", "energy balance", "feedback" and "equilibrium" are all systems concepts. They provide the language for studying complex interactions. They are generalisable to other complex systems, such as those in business, and so show the relevance of scientific systems thinking to the wider world. They unify and give a perspective on science itself, and on its connections with the science of complexity, mathematics (Franklin 2000). The present project will give the first extended philosophical consideration to the full range of this body of knowledge.

The part of the project most undeveloped so far is its epistemology. Once it is established that mathematics deals with structural aspects of the world, how are those aspects known? Where Platonism has immense difficulties in explaining how we could know about entities such as numbers, which it takes to be in "another world", Aristotelian approaches give promise of a more direct epistemology, since one can sense symmetry (for example) as well as one can sense colour.
Realising that promise is difficult, however, since one needs to integrate an Aristotelian theory of abstraction (the cognition of one feature of reality, say colour, in abstraction from others, such as shape) with what is known from cognitive psychology about pattern recognition and the comparison of modalities (for example, how the brain compares felt and seen shape). The well-known role of proof in establishing mathematical knowledge needs to be integrated as well. Again, there is little work at present on that topic.

After recalling the general reasons for accepting an Aristotelian realist position on universals (these reasons are developed by other writers, but still need collecting and expounding in a way relevant to the mathematical case), and illustrating them in the examples just mentioned, we will be in a position to develop the core of the theory: that mathematics is a science of certain real properties. One task is to distinguish two substantially different kinds of properties that are both objects of mathematics. An older theory held that mathematics is the "science of quantity"; a newer one holds that it studies structure or patterns. Both quantity and structure are real features of the world, but different ones, and both are studied by mathematics. The division between the two roughly corresponds to the division between elementary and higher mathematics.

The first component of the project will consist in an investigation of the indispensability argument and its relation to quantum mechanics. For while quantum mechanics presents an argument for realism about the complex number field, it also suggests that this field has primacy. And since this field subsumes the natural numbers and the reals, it suggests a significant limitation of the "science of quantity" conception, since that conception is inextricably linked with linearly ordered fields, and the complex field admits no such ordering. We believe this represents an area of hitherto untapped connections and arguments that is capable of throwing great light on the relation between physics and mathematics. Thus one thing we will be concerned with is the significance of the Montgomery-Odlyzko law. This law suggests that the eigenvalues of a random Hermitian matrix (such as might be found in certain quantum mechanical problems) have the same spacing properties as the non-trivial zeroes of the Riemann zeta function, which are not spaced randomly (a rough numerical illustration is given in the sketch below). This is now fairly widely confirmed, and it suggests a connection between very different areas of science: between the traditional a priori and the traditional a posteriori. (What role quantum mechanics is itself playing in this connection is still an unsolved question; Professor Barry Mazur of Harvard has made some interesting comments to us on this problem.) We will develop arguments that the Aristotelian realist view has the greatest chance of explaining this connection, just as it has the best chance of explaining what we call inverse indispensability in general. On this argument there is also an "unreasonable dependence" of mathematics on physics. The discovery of the infinite number of exotic differential structures on four-dimensional manifolds (making dimension four unique in differential geometry) offers a very striking example of this phenomenon, since the exotic structures arose out of mathematical physics. This inverse indispensability can only really be explained, we argue, on the Aristotelian view.
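The Montgomery-Odlyzko correspondence admits a rough numerical illustration. The following sketch is ours, not part of the proposal: it draws a large random Hermitian (GUE) matrix, computes the normalised spacings of its bulk eigenvalues, and compares their empirical distribution with the Wigner surmise, the same distribution reported for the normalised spacings of the zeroes of the Riemann zeta function. Proper spectral "unfolding" is omitted for brevity, so the match is only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Draw a Hermitian matrix from the Gaussian Unitary Ensemble (GUE).
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2
eigenvalues = np.sort(np.linalg.eigvalsh(h))

# Keep the central half of the spectrum, where the eigenvalue density is
# roughly constant, and normalise the gaps to mean spacing 1.
bulk = eigenvalues[n // 4 : 3 * n // 4]
spacings = np.diff(bulk)
spacings /= spacings.mean()

def wigner_surmise(s):
    """Approximate GUE nearest-neighbour spacing density."""
    return (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi)

# Compare the empirical spacing histogram with the surmise.
hist, edges = np.histogram(spacings, bins=30, range=(0.0, 3.0), density=True)
centres = (edges[:-1] + edges[1:]) / 2
for c, density in zip(centres[::6], hist[::6]):
    print(f"s = {c:4.2f}   empirical = {density:5.3f}   surmise = {wigner_surmise(c):5.3f}")
```

The agreement one observes is what the Montgomery-Odlyzko law leads one to expect; the philosophically interesting question, for the project, is why a distribution arising in quantum-mechanical models should also govern the zeroes of the zeta function.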
After establishing our metaphysical case, arguing for our view that mathematics studies structural aspects of the real world, we will move to epistemological issues. Theory on how mathematics can be known is an underdeveloped part of structuralist philosophies of mathematics, and is well recognised as a major difficulty for realist philosophies of mathematics in general. In this second component of the project, we will show that Benacerraf's challenge can be overcome by our brand of realism. The fundamental dilemma for realists was identified by Benacerraf (1973) as the problem of providing a naturalistic (or broadly causal) epistemology for mathematics, if mathematics indeed refers to something real: how can those objects affect us, so that we can know about them? That is very difficult to explain on a Platonist view, since Platonic objects do not have causal power. Aristotelian views such as ours permit a much more plausible and direct answer, since structural features of real things, such as symmetry, can affect us in the same way as, for example, their colour, and so can be directly perceived. On our Aristotelian view, the objects of mathematics do not exist outside of space and time, but are immanent in space and time. Consequently, we hold that some simple mathematical ideas are indeed acquired in a causal manner.

It is true that some of the more complicated entities spoken of in mathematics, such as the Hilbert spaces of quantum mechanics, do not seem to be directly perceivable. In order to move from simple perception of patterns to sophisticated mathematical theorising, it is necessary to form abstract ideas of structures and quantities. Therefore, we (and especially the research assistant employed by the grant) will pay special attention to the role of abstraction in generating mathematical knowledge. Aristotelians hold that mathematicians abstract, or "separate in thought", features of objects that they perceive in the real world. We will survey various interpretations of abstraction, and present a theory on which abstraction draws attention to mathematical features of existing physical objects (but does not bring into existence any kind of Platonist "abstract objects"). We anticipate the objection that the natural world does not have the perfectly precise structures needed in mathematics. We will therefore consider the role of idealisation in abstraction, and compare it to the uses of idealisation in physics (e.g. massless points and frictionless planes). In neither case should idealisation undermine the reality of the phenomena studied. We will also rebut various objections that have been raised against the meaningfulness or possibility of abstraction, notably by Frege (1884/1950). Frege's objections are an important reason for the neglect of the Aristotelian approach. However, we will demonstrate that Frege's criticisms do not touch Aristotelian realism. In particular, objections having to do with how the individuality of mathematical objects is preserved if they are obtained by abstraction do not apply to our theory: since mathematical objects are universals, they are not individual particulars and so are not subject to this objection.

The several components of the project cohere very well, since a proper understanding of the indispensability of mathematics and physics to one another yields rich results in both metaphysics and epistemology. Finally, the theoretical work of the project is complemented throughout by the extensive knowledge of a working mathematician.
The main lines along which our argument should proceed are clear, but there is much detailed work to be done to consider and reinterpret existing material, and to ensure coherence between the various parts of the project: metaphysical, epistemological, mathematical and quantum-mechanical. We anticipate that the understanding of the relation of mathematics and physics produced by consideration of the indispensability argument in the first part of the project will shape our epistemology in the second part. Throughout, our findings will be grounded in the examples of a working mathematician.

In the light of this plan, we anticipate the three years of work on the grant being structured as follows:

Year 1: CI Franklin to complete current writing on "quantity" as an object of mathematics; CI Heathcote to research and write on issues relating to quantum mechanics; both CIs to work with the research assistant on initial research on the epistemological issues of abstraction, pattern recognition and proof.

Year 2: Research assistant to work intensively on epistemology, with input from the CIs; research assistant or CI Heathcote to visit Cambridge and St Andrews for conferences; submission of several academic papers to journals; planning of the book and negotiation with possible publishers.

Year 3: Completion and submission of the book containing the full work, probably to Oxford University Press.

Armstrong, D.M., 1988, 'Are quantities relations? A reply to Bigelow and Pargetter', Philosophical Studies 54, 305-16.
Armstrong, D.M., 1991, 'Classes are states of affairs', Mind 100, 189-200.
Azzouni, J., 1994, Metaphysical Myths, Mathematical Practice, Cambridge University Press, Cambridge.
Baker, A., 2003, 'The indispensability argument and multiple foundations of mathematics', Philosophical Quarterly 53, 49-67.
Benacerraf, P., 1965, 'What numbers could not be', Philosophical Review 74, 495-512.
Benacerraf, P., 1973, 'Mathematical truth', Journal of Philosophy 70, 661-79.
Bigelow, J., 1988, The Reality of Numbers: A Physicalist's Philosophy of Mathematics, Clarendon, Oxford.
Bigelow, J. and R. Pargetter, 1990, Science and Necessity, Cambridge University Press, Cambridge.
Chihara, C.S., 2004, A Structural Account of Mathematics, Clarendon, Oxford.
Colyvan, M., 2001a, The Indispensability of Mathematics, Oxford University Press, New York.
Colyvan, M., 2001b, 'The miracle of applied mathematics', Synthese 127, 265-77.
Dennett, D., 1991, 'Real patterns', Journal of Philosophy 88, 27-51.
Devlin, K.J., 1994, Mathematics: The Science of Patterns, Scientific American Library, New York.
Field, H., 1980, Science Without Numbers: A Defence of Nominalism, Princeton University Press, Princeton.
Fine, K., 2001, Limits of Abstraction, Oxford University Press, Oxford.
Forrest, P. and D.M. Armstrong, 1987, 'The nature of number', Philosophical Papers 16.
Frege, G., 1884/1950, Foundations of Arithmetic, Blackwell, Oxford.
Franklin, J., 1989, 'Mathematical necessity and reality', Australasian Journal of Philosophy 67, 286-294.
Franklin, J., 2000, 'Diagrammatic reasoning and modelling in the imagination: the secret weapons of the Scientific Revolution', in 1543 and All That: Image and Word, Change and Continuity in the Proto-Scientific Revolution, ed. G. Freeland and A. Corones, Kluwer, Dordrecht, pp. 53-115.
Franklin, J., 1994, 'The formal sciences discover the philosophers' stone', Studies in History and Philosophy of Science 25.
Franklin, J., 2000, 'Complexity theory, mathematics and the unity of science', History, Philosophy and New South Wales Science Teaching Third Annual Conference, ed. M. Matthews, pp. 91-4.
Franklin, J., 2003, Corrupting the Youth: A History of Philosophy in Australia, Macleay Press, Sydney.
Hale, B., 1996, 'Structuralism's unpaid epistemological debts', Philosophia Mathematica.
Heathcote, A., 1990, 'Unbounded operators and the incompleteness of quantum mechanics', Philosophy of Science S90, 523-34.
Körner, S., 1962, The Philosophy of Mathematics: An Introduction, Harper, New York.
Mac Lane, S., 1986, Mathematics: Form and Function, Springer, New York.
Maddy, P., 1990, Realism in Mathematics, Clarendon Press, Oxford.
Maddy, P., 1992, 'Indispensability and mathematical practice', Journal of Philosophy 89.
Maddy, P., 1997, Naturalism in Mathematics, Clarendon Press, Oxford.
Michell, J., 1994, 'Numbers as quantitative relations and the traditional theory of measurement', British Journal for the Philosophy of Science 45, 389-406.
Mortensen, C., 1998, 'On the possibility of science without numbers', Australasian Journal of Philosophy 76.
Mundy, B., 1987, 'The metaphysics of quantity', Philosophical Studies 51, 29-54.
Newstead, A.G.J., 2001, 'Aristotle and modern mathematical theories of the continuum', in D. Sfendoni-Mentzou, ed., Aristotle and Contemporary Science, vol. 2, Lang.
Parsons, C., 2004, 'Structuralism and metaphysics', Philosophical Quarterly 54.
Quine, W.V., 1951/1980, 'Two dogmas of empiricism', in From a Logical Point of View, Harvard University Press, Cambridge, MA.
Reck, E. and M. Price, 2000, 'Structures and structuralism in contemporary philosophy of mathematics', Synthese 125, 341-383.
Resnik, M.D., 1997, Mathematics as a Science of Patterns, Clarendon, Oxford.
Shapiro, S., 1997, Philosophy of Mathematics: Structure and Ontology, Oxford University Press, New York.
Shapiro, S., 2004, 'Foundations of mathematics: metaphysics, epistemology, structure', Philosophical Quarterly 54, 16-37.
Steiner, M., 1975, Mathematical Knowledge, Cornell University Press, Ithaca.
Steiner, M., 1998, The Applicability of Mathematics as a Philosophical Problem, Harvard University Press, Cambridge, MA.
Weyl, H., 1952, Symmetry, Princeton University Press, Princeton.
Wigner, E., 1960, 'The unreasonable effectiveness of mathematics in the natural sciences', Communications on Pure and Applied Mathematics 13, 1-14.