To discuss the visibility of defects in topographic images according to theory, consider the exemplary case of a single dislocation: it will give rise to contrast in topography only if the lattice planes involved in diffraction are distorted in some way by the existence of the dislocation. This is true in the case of an edge dislocation if the scattering vector of the Bragg reflection used is parallel to the Burgers vector of the dislocation, or at least has a component in the plane perpendicular to the dislocation line, but not if it is parallel to the dislocation line. In the case of a screw dislocation, the scattering vector has to have a component along the Burgers vector, which is now parallel to the dislocation line. As a rule of thumb, a dislocation will be invisible in a topograph if the scalar product of the scattering vector and the Burgers vector (g · b) is zero. (A more precise rule has to distinguish between screw and edge dislocations and also to take the direction of the dislocation line into account – see e.g. [http://www.msel.nist.gov/practiceguides/SP960_10.pdf].) If a defect is visible, it often produces not just one but several distinct images on the topograph. Theory predicts three images of single defects: the so-called direct image, the kinematical image, and the intermediary image. For details see e.g. (Authier 2003).
3
Analytical Chemistry
Deconvolution can be used to apparently improve spectral resolution. In the case of NMR spectra, the process is relatively straightforward, because the line shapes are Lorentzian, and the convolution of a Lorentzian with another Lorentzian is also Lorentzian. The Fourier transform of a Lorentzian is a decaying exponential. In the co-domain (time) of the spectroscopic domain (frequency), convolution becomes multiplication. Therefore, a convolution of two Lorentzians becomes a multiplication of two exponentials in the co-domain. Since, in FT-NMR, the measurements are made in the time domain, division of the data by an exponential is equivalent to deconvolution in the frequency domain. A suitable choice of exponential results in a reduction of the half-width of a line in the frequency domain. This technique has been rendered all but obsolete by advances in NMR technology. A similar process has been applied for resolution enhancement of other types of spectra, with the disadvantage that the spectrum must first be Fourier transformed and then transformed back after the deconvoluting function has been applied in the spectrum's co-domain.
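The effect can be sketched numerically. The following minimal example (all values invented for illustration) builds a synthetic FID containing two Lorentzian lines and divides it by a decaying exponential — that is, multiplies it by a growing one — before Fourier transformation:

```python
import numpy as np

# Simulated FID: Lorentzian lines appear as decaying complex exponentials
# in the time domain. (Frequencies and decay rate are illustrative only.)
n, dt = 4096, 1e-3           # points, dwell time (s)
t = np.arange(n) * dt
R = 8.0                      # decay rate (s^-1); Lorentzian FWHM = R/pi Hz
fid = (np.exp(2j*np.pi*50*t) + np.exp(2j*np.pi*60*t)) * np.exp(-R*t)

# "Deconvolution": dividing the data by exp(-r*t), i.e. multiplying by
# exp(+r*t) with r < R, leaves lines decaying at the smaller rate R - r,
# hence a reduced half-width in the frequency domain.
r = 6.0
fid_sharp = fid * np.exp(r*t)

spec = np.abs(np.fft.fft(fid))
spec_sharp = np.abs(np.fft.fft(fid_sharp))

def fwhm_bins(s):
    """Total number of bins above half the maximum peak height."""
    return int(np.sum(s > s.max() / 2))
```

Because the Lorentzian line width is proportional to the time-domain decay rate, removing part of that decay narrows both peaks; in practice, choosing r too close to R amplifies noise at long acquisition times, which is the main limit of the method.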
7
Physical Chemistry
Cellular metabolism is represented by a large number of metabolic reactions involving the conversion of the carbon source (usually glucose) into the building blocks needed for macromolecular biosynthesis. These reactions form metabolic networks within cells, which can then be used to study metabolism. For these networks to interact, a tight connection between them is necessary. This connection is provided by the use of common cofactors such as ATP, ADP, NADH and NADPH. In addition, the sharing of some metabolites between different networks tightens the connections between them further.
1
Biochemistry
Steel is an alloy composed of between 0.2 and 2.0 percent carbon, with the balance being iron. From prehistory through the creation of the blast furnace, iron was produced from iron ore as wrought iron, 99.82–100 percent Fe, and the process of making steel involved adding carbon to iron, usually in a serendipitous manner, in the forge, or via the cementation process. The introduction of the blast furnace reversed the problem. A blast furnace produces pig iron — an alloy of approximately 90 percent iron and 10 percent carbon. When the process of steel-making starts with pig iron instead of wrought iron, the challenge is to remove a sufficient amount of carbon to bring it into the 0.2 to 2.0 percent range for steel. Before about 1860, steel was an expensive product, made in small quantities and used mostly for swords, tools and cutlery; all large metal structures were made of wrought or cast iron. Steelmaking was centered in Sheffield and Middlesbrough, Britain, which supplied the European and American markets. The introduction of cheap steel was due to the Bessemer and the open hearth processes, two technological advances made in England. In the Bessemer process, molten pig iron is converted to steel by blowing air through it after it has been removed from the furnace. The air blast burns the carbon and silicon out of the pig iron, releasing heat and causing the temperature of the molten metal to rise. Henry Bessemer demonstrated the process in 1856 and had a successful operation going by 1864. By 1870 Bessemer steel was widely used for ship plate. By the 1850s, the speed, weight, and quantity of railway traffic were limited by the strength of the wrought iron rails in use. The solution was to turn to steel rails, which the Bessemer process made competitive in price. Experience quickly proved steel had much greater strength and durability and could handle the increasingly heavy and faster engines and cars.
After 1890 the Bessemer process was gradually supplanted by open-hearth steelmaking, and by the middle of the 20th century it was no longer in use. The open-hearth process originated in the 1860s in Germany and France. The usual open-hearth process used pig iron, ore, and scrap, and became known as the Siemens-Martin process. The process allowed closer control over the composition of the steel; also, a substantial quantity of scrap could be included in the charge. The crucible process remained important for making high-quality alloy steel into the 20th century. By 1900 the electric arc furnace was adapted to steelmaking, and by the 1920s the falling cost of electricity allowed it to largely supplant the crucible process for specialty steels.
8
Metallurgy
The unique relationship between the compressibility factor and the reduced temperature, T_r, and the reduced pressure, p_r, was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties. As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, T_r, and reduced pressure, p_r, should have the same compressibility factor. The reduced temperature and pressure are defined by T_r = T/T_c and p_r = p/p_c. Here T_c and p_c are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, with T_c being the temperature above which it is not possible to liquify a given gas and p_c the minimum pressure required to liquify a given gas at its critical temperature. Together they define the critical point of a fluid, above which distinct liquid and gas phases of a given fluid do not exist. The pressure-volume-temperature (PVT) data for real gases varies from one pure gas to another. However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms, many of the graphs exhibit similar isotherm shapes. In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, p_r and T_r, are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam.
There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson–Obert graphs. Such graphs are said to have an accuracy within 1–2 percent for Z values greater than 0.6 and within 4–6 percent for Z values of 0.3–0.6. The generalized compressibility factor graphs may be considerably in error for strongly polar gases, which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for Z may be in error by as much as 15–20 percent. The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior, and the reduced pressure and temperature for those three gases should be redefined in the following manner to improve the accuracy of predicting their compressibility factors when using the generalized graphs: T_r = T/(T_c + 8) and p_r = p/(p_c + 8), where the temperatures are in kelvins and the pressures are in atmospheres.
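As a sketch, the reduced coordinates (and the redefinition for the quantum gases) can be computed as follows; the critical constants used here are assumed literature values, not taken from the text:

```python
# Reduced coordinates for use with generalized compressibility charts.
def reduced_state(T_K, p_atm, Tc_K, pc_atm, quantum=False):
    """Return (T_r, p_r). quantum=True applies the +8 K / +8 atm shift
    used for H2, He, and Ne, as described above."""
    if quantum:
        return T_K / (Tc_K + 8.0), p_atm / (pc_atm + 8.0)
    return T_K / Tc_K, p_atm / pc_atm

# Methane (assumed Tc ~ 190.6 K, pc ~ 45.4 atm) at 300 K and 50 atm:
Tr, pr = reduced_state(300.0, 50.0, 190.6, 45.4)

# Hydrogen (assumed Tc ~ 33.2 K, pc ~ 12.8 atm), with the quantum correction:
Tr_h2, pr_h2 = reduced_state(300.0, 50.0, 33.2, 12.8, quantum=True)
```

With T_r and p_r in hand, Z is then read off the generalized chart at the corresponding isotherm.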
7
Physical Chemistry
The oxidase catalyzes the transfer of four electrons from reduced plastoquinone to molecular oxygen to form water. The net reaction is written below: 2 QH₂ + O₂ → 2 Q + 2 H₂O. Analysis of substrate specificity revealed that the enzyme almost exclusively catalyzes the oxidation of reduced plastoquinone over other quinones such as ubiquinone and duroquinone. Additionally, iron is essential for the catalytic function of the enzyme and cannot be substituted by another metal cation like Cu, Zn, or Mn at the catalytic center. It is unlikely that four electrons could be transferred at once in a single iron cluster, so all of the proposed mechanisms involve two separate two-electron transfers from reduced plastoquinone to the di-iron center. In the first step common to all proposed mechanisms, one plastoquinone is oxidized and both irons are reduced from iron(III) to iron(II). Four different mechanisms are proposed for the next step, oxygen capture. One mechanism proposes a peroxide intermediate, after which one oxygen atom is used to create water and another is left bound in a diferryl configuration. Upon one more plastoquinone oxidation, a second water molecule is formed and the irons return to a +3 oxidation state. The other mechanisms involve the formation of Fe(III)-OH or Fe(IV)-OH and a tyrosine radical. These radical-based mechanisms could explain why over-expression of the PTOX gene causes increased generation of reactive oxygen species.
5
Photochemistry
Fatty acids are broken down to acetyl-CoA by means of beta oxidation inside the mitochondria, whereas fatty acids are synthesized from acetyl-CoA outside the mitochondria, in the cytosol. The two pathways are distinct, not only in where they occur, but also in the reactions that occur, and the substrates that are used. The two pathways are mutually inhibitory, preventing the acetyl-CoA produced by beta-oxidation from entering the synthetic pathway via the acetyl-CoA carboxylase reaction. Nor can it be converted to pyruvate, as the pyruvate dehydrogenase complex reaction is irreversible. Instead the acetyl-CoA produced by the beta-oxidation of fatty acids condenses with oxaloacetate, to enter the citric acid cycle. During each turn of the cycle, two carbon atoms leave the cycle as CO₂ in the decarboxylation reactions catalyzed by isocitrate dehydrogenase and alpha-ketoglutarate dehydrogenase. Thus each turn of the citric acid cycle oxidizes an acetyl-CoA unit while regenerating the oxaloacetate molecule with which the acetyl-CoA had originally combined to form citric acid. The decarboxylation reactions occur before malate is formed in the cycle. Only plants possess the enzymes to convert acetyl-CoA into oxaloacetate, from which malate can be formed to ultimately be converted to glucose. However, acetyl-CoA can be converted to acetoacetate, which can decarboxylate to acetone (either spontaneously, or catalyzed by acetoacetate decarboxylase). It can then be further metabolized to isopropanol, which is excreted in breath/urine, or by CYP2E1 into hydroxyacetone (acetol). Acetol can be converted to propylene glycol. This converts to pyruvate (by two alternative enzymes), or to propionaldehyde, or to L-lactaldehyde and then L-lactate (the common lactate isomer). Another pathway turns acetol to methylglyoxal, then to pyruvate, or to D-lactaldehyde (via D-lactoyl-glutathione or otherwise) and then D-lactate.
D-lactate metabolism (to glucose) is slow or impaired in humans, so most of the D-lactate is excreted in the urine; thus D-lactate derived from acetone can contribute significantly to the metabolic acidosis associated with ketosis or isopropanol intoxication. L-Lactate can complete the net conversion of fatty acids into glucose. The first experiment to show conversion of acetone to glucose was carried out in 1951. This and further experiments used carbon isotopic labelling. Up to 11% of the glucose can be derived from acetone during starvation in humans. The glycerol released into the blood during the lipolysis of triglycerides in adipose tissue can only be taken up by the liver. Here it is converted into glycerol 3-phosphate by the action of glycerol kinase, which hydrolyzes one molecule of ATP per glycerol molecule which is phosphorylated. Glycerol 3-phosphate is then oxidized to dihydroxyacetone phosphate, which is, in turn, converted into glyceraldehyde 3-phosphate by the enzyme triose phosphate isomerase. From here the three carbon atoms of the original glycerol can be oxidized via glycolysis, or converted to glucose via gluconeogenesis.
1
Biochemistry
haemagglutination activity domain - haemolysin expression modulating protein family - hairpin - haploid - haploinsufficiency - HdeA family - helix-loop-helix - helminth protein - hematopoietic stem cell - hemophilia - heteroduplex DNA - heterozygous - highly conserved sequence - Hirschsprung's disease - histone - HLA-Y - hnRNA - holoprosencephaly - homologous recombination - homology - homozygous - host strain (bacterial) - HspQ protein domain - human artificial chromosome - Human Genome Project - human immunodeficiency virus - HumHot - Huntington's disease - hybridization - hybridoma - hydrophilicity plot - hydroxydechloroatrazine ethylaminohydrolase -
1
Biochemistry
Biological soil uptake is the dominant sink of atmospheric H₂. Both aerobic and anaerobic microbial metabolisms consume H₂ by oxidizing it in order to reduce other compounds during respiration. Aerobic H₂ oxidation is known as the Knallgas reaction. Anaerobic H₂ oxidation often occurs during interspecies hydrogen transfer, in which H₂ produced during fermentation is transferred to another organism, which uses the H₂ to reduce CO₂ to CH₄ or acetate, sulfate to H₂S, or Fe(III) to Fe(II). Interspecies hydrogen transfer keeps H₂ concentrations very low in most environments because fermentation becomes less thermodynamically favorable as the partial pressure of H₂ increases.
1
Biochemistry
Igor V. Komarov graduated with distinction from Taras Shevchenko National University of Kyiv, and started to work at the same university in 1986, first as an engineer. He obtained his Candidate of Sciences degree in 1991 in organic chemistry at Taras Shevchenko National University of Kyiv under the supervision of Mikhail Yu. Kornilov; the candidate thesis was devoted to the use of lanthanide shift reagents in NMR spectroscopy. Afterwards, he was a postdoctoral fellow at the University Chemical Laboratory in Cambridge (1996–1997, United Kingdom) and at the [https://www.catalysis.de/home/ Institut für Organische Katalyseforschung] in Rostock (2000–2001, Germany). He holds the Supramolecular Chemistry Chair of the Institute of High Technologies at Taras Shevchenko National University. Komarov earned his Doctor of Sciences degree in 2003; the title of his thesis is "Design and synthesis of model compounds: study of stereoelectronic, steric effects, reactive intermediates, catalytic enantioselective hydrogenation and dynamic protection of functional groups". He is also a scientific advisor for Enamine Ltd. and Lumobiotics GmbH. Igor V. Komarov was awarded the title of Professor in 2007.
0
Organic Chemistry
The stability of the Keggin structure allows the metals in the anion to be readily reduced. Depending on the solvent, the acidity of the solution and the charge on the α-Keggin anion, it can be reversibly reduced in one- or multiple-electron steps. For example, the silicotungstate anion can be reduced to a −20 state. Some of the corresponding acids, such as silicotungstic acid, are as strong as sulfuric acid and can be used in its place as an acid catalyst.
7
Physical Chemistry
In chemistry and thermodynamics, the standard enthalpy of formation or standard heat of formation of a compound is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements in their reference state, with all substances in their standard states. The standard pressure value of 1 bar (100 kPa) is recommended by IUPAC, although prior to 1982 the value 1.00 atm (101.325 kPa) was used. There is no standard temperature. Its symbol is ΔfH°. The superscript Plimsoll on this symbol indicates that the process has occurred under standard conditions at the specified temperature (usually 25 °C or 298.15 K). Standard states are defined for various types of substances. For a gas, it is the hypothetical state the gas would assume if it obeyed the ideal gas equation at a pressure of 1 bar. For a gaseous or solid solute present in a diluted ideal solution, the standard state is the hypothetical state of concentration of the solute of exactly one mole per liter (1 M) at a pressure of 1 bar extrapolated from infinite dilution. For a pure substance or a solvent in a condensed state (a liquid or a solid) the standard state is the pure liquid or solid under a pressure of 1 bar. For elements that have multiple allotropes, the reference state usually is chosen to be the form in which the element is most stable under 1 bar of pressure. One exception is phosphorus, for which the most stable form at 1 bar is black phosphorus, but white phosphorus is chosen as the standard reference state for zero enthalpy of formation. For example, the standard enthalpy of formation of carbon dioxide is the enthalpy of the following reaction under the above conditions: C(s, graphite) + O₂(g) → CO₂(g). All elements are written in their standard states, and one mole of product is formed. This is true for all enthalpies of formation.
The standard enthalpy of formation is measured in units of energy per amount of substance, usually stated in kilojoules per mole (kJ mol⁻¹), but also in kilocalories per mole, joules per mole or kilocalories per gram (any combination of these units conforming to the energy per mass or amount guideline). All elements in their reference states (oxygen gas, solid carbon in the form of graphite, etc.) have a standard enthalpy of formation of zero, as there is no change involved in their formation. The formation reaction is a constant pressure and constant temperature process. Since the pressure of the standard formation reaction is fixed at 1 bar, the standard formation enthalpy or reaction heat is a function of temperature. For tabulation purposes, standard formation enthalpies are all given at a single temperature, 298 K, represented by the symbol ΔfH°₂₉₈.
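Because elements in their reference states have zero formation enthalpy, tabulated ΔfH° values give reaction enthalpies directly by Hess's law: ΔH°rxn = Σ ΔfH°(products) − Σ ΔfH°(reactants). A minimal sketch, using assumed textbook values in kJ/mol:

```python
# Standard formation enthalpies at 298 K (kJ/mol, assumed literature values).
dHf = {
    "CH4(g)": -74.9,
    "O2(g)": 0.0,          # element in its reference state
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

def reaction_enthalpy(reactants, products):
    """Hess's law: sum over products minus sum over reactants.
    reactants/products: dict of species -> stoichiometric coefficient."""
    return (sum(n * dHf[s] for s, n in products.items())
            - sum(n * dHf[s] for s, n in reactants.items()))

# Combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2},
                       {"CO2(g)": 1, "H2O(l)": 2})
```

The result, about −890 kJ/mol, matches the familiar heat of combustion of methane to liquid water, illustrating why tabulating ΔfH°₂₉₈ for compounds suffices to derive heats of arbitrary reactions.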
7
Physical Chemistry
ATP has recently been proposed to act as a biological hydrotrope and has been shown to affect proteome-wide solubility.
1
Biochemistry
The Central Pollution Control Board of India released the Air (Prevention and Control of Pollution) Act in 1981, amended in 1987, to address concerns about air pollution in India. While the document does not differentiate between VOCs and other air pollutants, the CPCB monitors "oxides of nitrogen (NOₓ), sulphur dioxide (SO₂), fine particulate matter (PM₁₀) and suspended particulate matter (SPM)".
0
Organic Chemistry
Proteins formed by the translation of mRNA (messenger RNA, which carries coded information from DNA for protein synthesis) play a major role in regulating gene expression. To understand how they regulate gene expression, it is necessary to identify the DNA sequences that they interact with. Techniques have been developed to identify sites of DNA-protein interactions. These include ChIP-sequencing, CUT&RUN sequencing and Calling Cards.
1
Biochemistry
In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ). Energy balance, through biosynthetic reactions, can be measured with the following equation: :Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage) The first law of thermodynamics states that energy can be neither created nor destroyed, but energy can be converted from one form to another. So, when a calorie of food energy is consumed, one of three particular effects occurs within the body: a portion of that calorie may be stored as body fat, triglycerides, or glycogen; transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP – a coenzyme) or related compounds; or dissipated as heat.
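The balance equation above amounts to simple bookkeeping; a minimal sketch (the intake and expenditure figures are invented for illustration):

```python
# 1 Calorie (kcal) ~ 4.18 kJ, per the conversion stated in the text.
KJ_PER_KCAL = 4.18

def stored_energy_change(intake_kcal, expended_kcal):
    """Energy intake = energy expended + change in stored energy,
    so the change in stored energy is just the difference."""
    return intake_kcal - expended_kcal

# Hypothetical day: 2500 kcal consumed, 2300 kcal expended.
delta = stored_energy_change(2500, 2300)   # kcal added to fat/glycogen stores
delta_kj = delta * KJ_PER_KCAL             # the same surplus in kilojoules
```

A positive result is energy added to body stores; a negative one is energy drawn from them, consistent with the first-law statement above.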
1
Biochemistry
Ge was born in China. She attended Peking University for her undergraduate studies, where she studied chemistry. After graduating in 1997 Ge moved to the United States, where she joined Cornell University as a doctoral student. Here she started to work on mass spectrometry, using electron-capture dissociation to study proteins. She worked under the supervision of Tadhg Begley and Fred McLafferty. After completing her doctorate, Ge worked as a research scientist at Wyeth.
1
Biochemistry
In the book, Lane discusses what he considers to be a major gap in biology: why life operates the way that it does, and how it began. In his view as a biochemist, the core question is about energy, as all cells handle energy in the same way, relying on a steep electrochemical gradient across the very small thickness of a membrane in a cell – to power all the chemical reactions of life. The electrical energy is transformed into forms that the cell can use by a chain of energy-handling structures including ancient proteins such as cytochromes, ion channels, and the enzyme ATP synthase, all built into the membrane. Once evolved, this chain has been conserved by all living things, showing that it is vital to life. He argues that such an electrochemical gradient could not have arisen in ordinary conditions, such as the open ocean or Darwin's "warm little pond". He argues instead (following Günter Wächtershäuser) that life began in deep-sea hydrothermal vents, as these contain chemicals that effectively store energy that cells could use, as long as the cells provided a membrane to generate the needed gradient by maintaining different concentrations of chemicals on either side. Once cells similar to bacteria (the first prokaryotes, cells without a nucleus) had emerged, he writes, they stayed like that for two and a half billion years. Then, just once, cells jumped in complexity and size, acquiring a nucleus and other organelles, and complex behavioural features including sex, which he notes have become universal in complex (eukaryotic) life forms including plants, animals, and fungi. The book is illustrated with 37 figures taken by permission from a wide variety of research sources. They include a timeline, photographs, cladograms, electron flow diagrams and diagrams of the life cycle of cells and their chromosomes.
1
Biochemistry
The du Noüy Padday rod consists of a rod, usually on the order of a few millimeters square, making a small ring. The rod is often made from a composite metal material that may be roughened to ensure complete wetting at the interface. The rod is cleaned with water, alcohol and a flame, or with strong acid, to ensure complete removal of surfactants. The rod is attached to a scale or balance via a thin metal hook. The Padday method uses the maximum pull force method, i.e. the maximum force due to the surface tension is recorded as the probe is first immersed ca. one mm into the solution and then slowly withdrawn from the interface. The main forces acting on a probe are the buoyancy (due to the volume of liquid displaced by the probe) and the mass of the meniscus adhering to the probe. This is an old, reliable, and well-documented technique. An important advantage of the maximum pull force technique is that the receding contact angle on the probe is effectively zero. The maximum pull force is obtained when the buoyancy force reaches its minimum. The surface tension measurement used in the Padday devices based on the du Noüy ring/maximum pull force method is explained further here: The force acting on the probe can be divided into two components: : i) buoyancy stemming from the volume displaced by the probe, and : ii) the mass of the meniscus of the liquid adhering to the probe. The latter is in equilibrium with the surface tension force, i.e. m g = P γ cos θ, where :* P is the perimeter of the probe, :* γ is the surface tension and m g the weight of the meniscus under the probe. In the situation considered here the volume displaced by the probe is included in the meniscus. :* θ is the contact angle between the probe and the solution that is measured, and is negligible for the majority of solutions with Kibron’s probes. Thus, the force measured by the balance is given by F = F_b + P γ cos θ, where :* F is the force acting on the probe and :* F_b is the force due to buoyancy.
At the point of detachment the volume of the probe immersed in the solution vanishes, and thus also the buoyancy term. This is observed as a maximum in the force curve, which relates to the surface tension through γ = F_max / (P cos θ). The above derivation holds for ideal conditions. Non-idealities, e.g. from a defective probe shape, are partly compensated in the calibration routine using a solution with known surface tension.
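A minimal sketch of the final relation γ = F_max / (P cos θ); the probe dimensions and force reading below are hypothetical illustration values, not from the text:

```python
import math

def surface_tension(F_max_N, perimeter_m, contact_angle_deg=0.0):
    """gamma = F_max / (P * cos(theta)), in N/m.
    The default theta = 0 reflects the effectively zero receding
    contact angle noted above."""
    return F_max_N / (perimeter_m * math.cos(math.radians(contact_angle_deg)))

# Hypothetical 2 mm x 2 mm square rod -> perimeter 8 mm,
# with a maximum pull force of 0.58 mN at detachment:
gamma = surface_tension(0.58e-3, 8e-3)   # N/m
```

For these illustrative numbers γ comes out near 0.07 N/m, the right order of magnitude for clean water, which is the kind of sanity check the known-surface-tension calibration provides in practice.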
7
Physical Chemistry
If a solid body is modeled by a constant hard-core field, the corresponding expressions simplify accordingly. For the hard solid potential, the defining parameter is the position of the potential discontinuity, i.e. the location of the hard wall.
7
Physical Chemistry
The rising level of the Caspian Sea between 1995 and 1996 reduced the number of habitats for rare species of aquatic vegetation. This has been attributed to a general lack of seeding material in newly formed coastal lagoons and water bodies. Many rare and endemic plant species of Russia are associated with the tidal areas of the Volga delta and riparian forests of the Samur River delta. The shoreline is also a unique refuge for plants adapted to the loose sands of the Central Asian deserts. The principal limiting factors to the successful establishment of plant species are hydrological imbalances within the surrounding deltas, water pollution, and various land reclamation activities. Changes in the water level of the Caspian Sea are an indirect reason why plants may fail to establish. These affect aquatic plants of the Volga Delta, such as Aldrovanda vesiculosa and the native Nelumbo caspica. About 11 plant species are found in the Samur River delta, including the unique liana forests that date back to the Tertiary period. Since 2019, UNESCO has listed the lush Hyrcanian forests of Mazandaran, Iran, as a World Heritage Site under criterion (ix).
2
Environmental Chemistry
Electrolytic refining of copper was first patented in England by James Elkington in 1865 and the first electrolytic copper refinery was built by Elkington in Burry Port, South Wales in 1869. There were teething problems with the new technology. For example, the early refineries had trouble producing firm deposits on the cathodes. As a result, there was much secrecy between refinery operators as each strove to maintain a competitive edge. The nature of the cathode used to collect the copper is a critical part of the technology. The properties of copper are highly susceptible to impurities. For example, an arsenic content of 0.1% can reduce the conductivity of copper by 23% and a bismuth content of just 0.001% makes copper brittle. The material used in the cathode must not contaminate the copper being deposited, or it will not meet the required specifications. The current efficiency of the refining process depends, in part, on how close the anodes and cathodes can be placed in the electrolytic cell. This, in turn, depends on the straightness of both the anode and the cathode. Bumps and bends in either can lead to short-circuiting or otherwise affect the current distribution and also the quality of the cathode copper. Prior to the development of the Isa Process technology, the standard approach was to use a starter sheet of high-purity copper as the initial cathode. These starter sheets are produced in special electrolytic cells by electrodeposition of copper for 24 hours onto a plate made of copper coated with oil (or treated with other similar face-separation materials) or of titanium. Thousands of sheets could be needed every day, and the original method of separating the starter sheet from the “mother plate” (referred to as “stripping”) was entirely manual. Starter sheets are usually quite light. For example, the starter sheets used in the CRL refinery weighed 10 pounds (4.53 kilograms). Thus, they are thin and need to be handled carefully to avoid bending. 
Over time, the formation of starter sheets was improved by mechanisation, but there was still a high labour input. Once the starter sheets were formed, they had to be flattened, to reduce the likelihood of short circuits, and then cut, formed and punched to make loops from which the starter sheets are hung from conductive copper hanger bars in the electrolytic cells (see Figure 1). The starter sheets are inserted in the refining cells and dissolved copper electrodeposits on them to produce the cathode copper product (see Figure 2). Because of the manufacturing cost of the starter sheets, refineries using them tend to keep them in the cells as long as possible, usually 12–14 days. On the other hand, the anodes normally reside in the cells for 24–28 days, meaning that there are two cathodes produced from each anode. The starter sheets have a tendency to warp, due to the mechanical stresses they encounter and often need to be removed from the refining cells after about two days to be straightened in presses before being returned to the cells. The tendency to warp leads to frequent short-circuiting. Due to their limitations, it is difficult for copper produced on starter sheets to meet modern specifications for the highest-purity copper.
8
Metallurgy
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone. The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice. There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist. Within the Brillouin zone, a constant-energy surface represents the loci of all the -points (that is, all the electron momentum values) that have the same energy. 
The Fermi surface is a special constant-energy surface that separates the unfilled orbitals from the filled ones at zero kelvin.
3
Analytical Chemistry
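The membership definition above (a k-point belongs to the first Brillouin zone when it is at least as close to the origin as to any other reciprocal lattice point) translates directly into a brute-force test. Below is a minimal Python sketch for a 2D square lattice; the function name `in_first_bz`, the basis vectors, and the search cutoff `nmax` are illustrative choices, not from the text:

```python
import itertools
import math

def in_first_bz(k, b1, b2, nmax=2):
    """Return True if the 2D point k lies in the first Brillouin zone,
    i.e. k is at least as close to the origin as to every other
    reciprocal lattice point G = n1*b1 + n2*b2 (searched up to |n| <= nmax)."""
    d0 = math.hypot(k[0], k[1])  # distance to the origin (the G = 0 point)
    for n1, n2 in itertools.product(range(-nmax, nmax + 1), repeat=2):
        if (n1, n2) == (0, 0):
            continue
        gx = n1 * b1[0] + n2 * b2[0]
        gy = n1 * b1[1] + n2 * b2[1]
        if math.hypot(k[0] - gx, k[1] - gy) < d0 - 1e-12:
            return False  # some other reciprocal lattice point is strictly closer
    return True

# Square lattice with spacing a = 1: reciprocal basis vectors have length 2*pi,
# so the first Brillouin zone is the square |kx| <= pi, |ky| <= pi.
b1, b2 = (2 * math.pi, 0.0), (0.0, 2 * math.pi)
print(in_first_bz((0.1, 0.2), b1, b2))   # well inside the zone -> True
print(in_first_bz((4.0, 0.0), b1, b2))   # beyond pi along kx -> False
```

For a small `nmax` this is exact for points near the zone, because only nearby reciprocal lattice points can be closer than the origin.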
Wild Fermentation: The Flavor, Nutrition, and Craft of Live-Culture Foods is a 2003 book by Sandor Katz that discusses the ancient practice of fermentation. While most of the conventional literature assumes the use of modern technology, Wild Fermentation focuses more on the practice and culture of fermenting food. The term "wild fermentation" refers to the reliance on naturally occurring bacteria and yeast to ferment food. For example, conventional bread making requires the use of a commercial, highly specialized yeast, while wild-fermented bread relies on naturally occurring cultures that are found on the flour, in the air, and so on. Similarly, the book's instructions on sauerkraut require only cabbage and salt, relying on the cultures that naturally exist on the vegetable to perform the fermentation. The book also discusses some foods that are not, strictly speaking, wild ferments such as miso, yogurt, kefir, and nattō. Beyond food, the book includes some discussion of social, personal, and political issues, such as the legality of raw milk cheeses in the United States. Newsweek has referred to Wild Fermentation as the "fermentation bible".
1
Biochemistry
Since the dawn of modern chemistry, humic substances have been among the most studied natural materials. Despite long study, their molecular structure remains elusive. The traditional view is that humic substances are heteropolycondensates, in varying associations with clay. A more recent view is that relatively small molecules also play a role. Humic substances account for 50–90% of cation exchange capacity. Like clay, char and colloidal humus hold cation nutrients.
9
Geochemistry
SINEs are classified as non-LTR retrotransposons because they do not contain long terminal repeats (LTRs). There are three types of SINEs common to vertebrates and invertebrates: CORE-SINEs, V-SINEs, and AmnSINEs. SINEs have 50-500 base pair internal regions which contain a tRNA-derived segment with A and B boxes that serve as an internal promoter for RNA polymerase III.
1
Biochemistry
The traditional synthetic route uses Raney nickel and has been further improved over time, for example by the use of ibuprofen and AlCl3. Overall, it is a cost-effective method with moderate reaction conditions that is easy to handle and suitable for industrial production.
4
Stereochemistry
At neutral pH, thiocarboxylic acids are fully ionized. They are about 100 times more acidic than the analogous carboxylic acids: for PhC(O)SH the pKa is 2.48, versus 4.20 for PhC(O)OH, and for thioacetic acid the pKa is near 3.4, versus 4.72 for acetic acid. The conjugate base of thioacetic acid, thioacetate, is a reagent for installing thiol groups via displacement of alkyl halides to give thioesters, which in turn are susceptible to hydrolysis. Thiocarboxylic acids react with various nitrogen functional groups, such as organic azide, nitro, and isocyanate compounds, to give amides under mild conditions. This method avoids the need for a highly nucleophilic aniline or other amine to initiate an amide-forming acyl substitution, but requires synthesis and handling of the unstable thiocarboxylic acid. Unlike the Schmidt reaction or other nucleophilic-attack pathways, the reaction with an aryl or alkyl azide begins with a [3+2] cycloaddition; the resulting heterocycle expels N2 and the sulfur atom to give the monosubstituted amide.
0
Organic Chemistry
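The pKa comparisons in the passage above can be made concrete with the Henderson–Hasselbalch relation. This is a minimal sketch using only the pKa values quoted in the text; the helper names `fraction_ionized` and `acidity_ratio` are hypothetical:

```python
def fraction_ionized(pH, pKa):
    """Henderson-Hasselbalch: fraction of the acid present as its conjugate base."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

def acidity_ratio(pKa_weaker, pKa_stronger):
    """Ratio of acid dissociation constants, Ka(stronger) / Ka(weaker)."""
    return 10 ** (pKa_weaker - pKa_stronger)

# pKa values quoted in the text
print(acidity_ratio(4.20, 2.48))    # PhC(O)SH vs PhC(O)OH: ~52-fold
print(acidity_ratio(4.72, 3.4))     # thioacetic vs acetic acid: ~21-fold
print(fraction_ionized(7.0, 2.48))  # ~0.99997: essentially fully ionized at pH 7
```

The computed ratios (roughly 20- to 50-fold for these pairs) show why "about 100 times more acidic" is an order-of-magnitude statement rather than an exact factor.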
The large array of tests can be categorised into sub-specialities of: *General or routine chemistry – commonly ordered blood chemistries (e.g., liver and kidney function tests). *Special chemistry – elaborate techniques such as electrophoresis, and manual testing methods. *Clinical endocrinology – the study of hormones, and diagnosis of endocrine disorders. *Toxicology – the study of drugs of abuse and other chemicals. *Therapeutic drug monitoring – measurement of therapeutic medication levels to optimize dosage. *Urinalysis – chemical analysis of urine for a wide array of diseases, along with other fluids such as CSF and effusions. *Fecal analysis – mostly for detection of gastrointestinal disorders.
1
Biochemistry
The first decade of the 20th century brought the basics of quantum theory (Planck, Einstein) and the interpretation of the spectral series of hydrogen by Lyman in the VUV and by Paschen in the infrared. Ritz formulated the combination principle. In 1912, a year before Niels Bohr, John William Nicholson created an atomic model that was both nuclear and quantum, in which he showed that electron oscillations in his atom matched the solar and nebular spectral lines. Bohr had been working on his atom during this period, but his model had only a single ground state and no spectra until he incorporated the Nicholson model and referenced the Nicholson papers in his model of the atom. In 1913, Bohr formulated his quantum mechanical model of the atom. This stimulated empirical term analysis. Bohr published a theory of hydrogen-like atoms that could explain the observed wavelengths of spectral lines as arising from electrons transitioning between different energy states. In 1937, E. Lehrer created the first fully automated spectrometer, which helped measure spectral lines more accurately. With the development of more advanced instruments, such as photo-detectors, scientists were then able to measure the specific wavelength absorption of substances more accurately.
7
Physical Chemistry
While RBS is generally used to measure the bulk composition and structure of a sample, it is possible to obtain some information about the structure and composition of the sample surface. When the signal is channeled to remove the bulk signal, careful manipulation of the incident and detection angles can be used to determine the relative positions of the first few layers of atoms, taking advantage of blocking effects. The surface structure of a sample can be changed from the ideal in a number of ways. The first layer of atoms can change its distance from subsequent layers (relaxation); it can assume a different two-dimensional structure than the bulk (reconstruction); or another material can be adsorbed onto the surface. Each of these cases can be detected by RBS. For example, surface reconstruction can be detected by aligning the beam in such a way that channeling should occur, so that only a surface peak of known intensity should be detected. A higher-than-usual intensity or a wider peak will indicate that the first layers of atoms are failing to block the layers beneath, i.e. that the surface has been reconstructed. Relaxations can be detected by a similar procedure with the sample tilted so the ion beam is incident at an angle selected so that first-layer atoms should block backscattering at a diagonal; that is, from atoms which are below and displaced from the blocking atom. A higher-than-expected backscattered yield will indicate that the first layer has been displaced relative to the second layer, or relaxed. Adsorbate materials will be detected by their different composition, changing the position of the surface peak relative to the expected position. RBS has also been used to measure processes which affect the surface differently from the bulk by analyzing changes in the channeled surface peak. A well-known example of this is the RBS analysis of the premelting of lead surfaces by Frenken, Maree and van der Veen. 
In an RBS measurement of the Pb(110) surface, a well-defined surface peak, stable at low temperatures, was found to become wider and more intense as the temperature increased past two-thirds of the bulk melting temperature. The peak reached the bulk height and width as the temperature reached the melting temperature. This increase in the disorder of the surface, which makes deeper atoms visible to the incident beam, was interpreted as pre-melting of the surface, and computer simulations of the RBS process produced similar results when compared with theoretical pre-melting predictions. RBS has also been combined with nuclear microscopy, in which a focused ion beam is scanned across a surface in a manner similar to a scanning electron microscope. The energetic analysis of backscattered signals in this kind of application provides compositional information about the surface, while the microprobe itself can be used to examine features such as periodic surface structures.
7
Physical Chemistry
Mono-BOC-cystamine (mono BOC protected cystamine) is a tert-butyloxycarbonyl (BOC) derivative of cystamine used as crosslinker in biotechnology and molecular biology applications. This compound was originally reported by Hansen et al.
1
Biochemistry
Most of the methyl chloride present in the environment ends up being released to the atmosphere. After release into the air, its atmospheric lifetime is about 10 months, with multiple natural sinks such as the ocean, transport to the stratosphere, and soil. When methyl chloride is instead released to water, it is rapidly lost by volatilization; the volatilization half-life in a river, a lagoon and a lake is 2.1 h, 25 h and 18 days, respectively. The amount of methyl chloride entering the stratosphere is estimated to be 2 x 10 tonnes per year, representing 20–25% of the total amount of chlorine emitted to the stratosphere annually.
2
Environmental Chemistry
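The volatilization half-lives quoted above can be turned into fractions remaining after a given time, assuming simple first-order (exponential) loss; that kinetic assumption is part of this sketch, not stated in the text:

```python
def fraction_remaining(t_hours, half_life_hours):
    """First-order (exponential) loss: fraction still present after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

# Volatilization half-lives quoted in the text (lake: 18 days = 432 h)
for name, half_life in [("river", 2.1), ("lagoon", 25.0), ("lake", 18 * 24.0)]:
    print(name, fraction_remaining(24.0, half_life))
```

After one day, almost nothing remains in the fast-mixing river case, while most of the compound is still present in the lake case, consistent with the wide spread of the quoted half-lives.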