Dataset Viewer
Auto-converted to Parquet
text: large_string
id: large_string
score: float64
tokens: int64
format: large_string
topic: large_string
fr_ease: float64
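The schema above lists each column with its Arrow data type. As a quick, hedged sketch of how this auto-converted Parquet split could be inspected (the file path below is a hypothetical placeholder; the actual dataset name and file layout are not shown on this page):

```python
# Minimal sketch: load the auto-converted Parquet split and check that the
# columns match the schema above. The path is a hypothetical placeholder.
import pandas as pd

df = pd.read_parquet("data/train-00000-of-00001.parquet")

print(df.dtypes)   # expect: text (object), id (object), score (float64),
                   #         tokens (int64), format (object), topic (object),
                   #         fr_ease (float64)
print(df[["id", "score", "tokens", "format", "topic", "fr_ease"]].head())
```

Each record shown below corresponds to one row: the extracted web text plus its quality score, token count, format label, topic label, and Flesch reading ease score.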
Session 40 - The Interstellar Medium. Display session, Tuesday, June 09. Gamma Ray Burst (GRB) explosions can make kpc-size shells and holes in the interstellar media (ISM) of spiral galaxies if much of the energy heats the local gas to above 10^7 K. Disk blowout is probably the major cause of energy loss in this case, but the momentum acquired during the pressurized expansion phase can be large enough that the bubble still snowplows to a kpc diameter. This differs from the standard model for the origin of such shells by multiple supernovae, which may have problems with radiative cooling, evaporative losses, and disk blow-out. Evidence for giant shells with energies of ~10^53 ergs is summarized. Some contain no obvious central star clusters and may be GRB remnants, although sufficiently old clusters would be hard to detect. The expected frequency of GRBs in normal galaxies can account for the number of such shells.
<urn:uuid:e2300ad5-01dd-4e80-92b3-7ec88785cc9d>
2.765625
208
Content Listing
Science & Tech.
47.385488
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production. What Is Wind Shear? Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of wind direction and speed. I think that we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in terms of the three dimensions that it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens–the wind speed and direction vary with height–wind shear is occurring. Wind Shear and Supercell Thunderstorms This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form. All thunderstorms are produced by a powerful updraft–a surge of air that rises from the ground into the upper levels of the atmosphere–and when this updraft forms in an area where wind shear is present, the updraft is influenced by the different speed and direction of the wind above, pushing the column of air in the updraft into a more vertical alignment. Rain’s Influence on Tornado Production Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes the part of the rotating air that was forced in its direction by the stronger wind aloft downward, and the result is a horizontal column of rotating air. That’s Not a Tornado! I know what you’re thinking: you’ve seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air. This Can Be a Tornado You’re right, but remember the updraft that is driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air. (NOAA image showing vertical column of air in a supercell thunderstorm) The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear. (NOAA image showing tornado formation in supercell thunderstorm)
<urn:uuid:7400301c-e625-46d5-be90-1020cf8d52f8>
4.15625
573
Personal Blog
Science & Tech.
45.080294
Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration Goswami, Nandu and Roma, Peter G. and De Boever, Patrick and Clément, Gilles and Hargens, Alan R. and Loeppky, Jack A. and Evans, Joyce M. and Stein, T. Peter and Blaber, Andrew P. and Van Loon, Jack J.W.A. and Mano, Tadaaki and Iwase, Satoshi and Reitz, Guenther and Hinghofer-Szalkay, Helmut G. (2012) Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration. Planetary and Space Science, Epub ahead of print (in press). Elsevier. DOI: 10.1016/j.pss.2012.07.030. Full text not available from this repository. Due to its proximity to Earth, the Moon is a promising candidate for the location of an extra-terrestrial human colony. In addition to being a high-fidelity platform for research on reduced gravity, radiation risk, and circadian disruption, the Moon qualifies as an isolated, confined, and extreme (ICE) environment suitable as an analogue for studying the psychosocial effects of long-duration human space exploration missions and for understanding these processes. In contrast, the various Antarctic research outposts such as Concordia and McMurdo serve as valuable platforms for studying biobehavioral adaptations to ICE environments, but are still Earth-bound and thus lack the low-gravity and radiation risks of space. The International Space Station (ISS), itself now considered an analogue environment for long-duration missions, better approximates the habitable infrastructure limitations of a lunar colony than most Antarctic settlements, in an altered gravity setting. However, the ISS is still protected against cosmic radiation by the Earth's magnetic field, which prevents high exposures due to solar particle events and reduces exposures to galactic cosmic radiation. On the Moon the ICE conditions are intensified: radiation of all energies capable of inducing performance degradation is present, along with reduced gravity and lunar dust. The interaction of reduced gravity, radiation exposure, and ICE conditions may affect biology and behavior--and ultimately mission success--in ways the scientific and operational communities have yet to appreciate; therefore, a long-term or permanent human presence on the Moon would ultimately provide invaluable high-fidelity opportunities for integrated multidisciplinary research and for preparations for a manned mission to Mars.
Title: Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration
Journal or Publication Title: Planetary and Space Science
In Open Access: No
In ISI Web of Science: Yes
Volume: Epub ahead of print (in press)
Keywords: Physiology, Orthostatic tolerance, Muscle deconditioning, Behavioural health, Psychosocial adaptation, Radiation, Lunar dust, Genes, Proteomics
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space, Raumfahrt
HGF - Program Themes: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research area: Space, Raumfahrt
DLR - Program: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research theme (Project): W - Vorhaben MSL-Radiation (old), R - Vorhaben MSL-Radiation
Institutes and Institutions: Institute of Aerospace Medicine > Radiation Biology
Deposited By: Kerstin Kopp
Deposited On: 27 Aug 2012 08:05
Last Modified: 07 Feb 2013 20:40
<urn:uuid:25dbfda6-18d6-4e04-9bf5-fe7dcc73d69b>
3.09375
887
Academic Writing
Science & Tech.
24.740737
Science -- Asher et al. 307 (5712): 1091: We describe several fossils referable to Gomphos elkema from deposits close to the Paleocene-Eocene boundary at Tsagan Khushu, Mongolia. Gomphos shares a suite of cranioskeletal characters with extant rabbits, hares, and pikas but retains a primitive dentition and jaw compared to its modern relatives. Phylogenetic analysis supports the position of Gomphos as a stem lagomorph and excludes Cretaceous taxa from the crown radiation of placental mammals. Our results support the hypothesis that rodents and lagomorphs radiated during the Cenozoic and diverged from other placental mammals close to the Cretaceous-Tertiary boundary. Lagomorphs are rabbits, hares, and pikas. This might be referred to as a "missing link" of the rodents. Why do we care? Most mammals are rodents, and this tells us about the evolution of the most successful group of mammals. Cool!
<urn:uuid:fa9d11c3-ad57-40a6-8915-a8b1cd687729>
2.921875
220
Personal Blog
Science & Tech.
36.115
Basic Use
To make a new number, a simple initialization suffices:
var foo = 0; // or whatever number you want
foo = 1;  // foo = 1
foo += 2; // foo = 3 (the two gets added on)
foo -= 2; // foo = 1 (the two gets removed)
Number literals define the number value. In particular:
They appear as a set of digits of varying length.
Negative literal numbers have a minus sign before the set of digits.
Floating point literal numbers contain one decimal point, and may optionally use E notation with the character e.
An integer literal may be prefixed with "0" to indicate that the number is in base 8. (8 and 9 are not octal digits; if either is found, the integer is read in the normal base 10.)
An integer literal may also be prefixed with "0x" to indicate a hexadecimal number.
The Math Object
Unlike strings, arrays, and dates, numbers aren't objects. The Math object provides numeric functions and constants as methods and properties. The methods and properties of the Math object are referenced using the dot operator in the usual way, for example:
var varOne = Math.ceil(8.5);
var varPi = Math.PI;
var sqrt3 = Math.sqrt(3);
Methods
random(): Generates a pseudo-random number between 0 (inclusive) and 1 (exclusive).
var myInt = Math.random();
max(int1, int2): Returns the larger of the two numbers passed as arguments.
var myInt = Math.max(8, 9);
document.write(myInt); // 9
min(int1, int2): Returns the smaller of the two numbers passed as arguments.
var myInt = Math.min(8, 9);
document.write(myInt); // 8
floor(float): Returns the greatest integer less than or equal to the number passed as an argument.
var myInt = Math.floor(90.8);
document.write(myInt); // 90
ceil(float): Returns the least integer greater than or equal to the number passed as an argument.
var myInt = Math.ceil(90.8);
document.write(myInt); // 91
round(float): Returns the closest integer to the number passed as an argument.
var myInt = Math.round(90.8);
document.write(myInt); // 91
<urn:uuid:eecdd55e-49d8-40e4-9834-6f3dce28fa4c>
3.96875
508
Documentation
Software Dev.
72.693517
Refraction and Acceleration Name: Christopher S. Why is it that when light travels from a more dense to a less dense medium, its speed is higher? I've read answers to this question in your archives but, sadly, still don't get it. One answer (Jasjeet S Bagla) says that we must not ask the question because light is massless, hence questions of acceleration don't make sense. It does, however, seem to be OK to talk about different speeds of light. If you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? Bagla goes on to say that it depends on how the em fields behave in a given medium. It begs the question: what is it about, say, Perspex and air that makes light accelerate, oops, travel at different speeds? If you're dealing with the same ray of light, one is forced to speak of acceleration, no? What other explanation is there for final velocity > initial velocity? Arthur Smith mentioned a very small "evanescent" component that travels ahead at c. Where can I learn more about this? Sorry for the long question. I understand that F = ma and if there is no m, you cannot talk about a, but, again, you have one velocity higher than another for the same thing. I need to know more than "that's just the way em fields are!"
An explanation that satisfies me relates to travel through an interactive medium. When light interacts with an atom, the photon of light is absorbed and then emitted. For a moment, the energy of the light is within the atom. This causes a slight delay. Light travels at the standard speed of light until interacting with another atom. It is absorbed and emitted, causing another slight delay. The average effect is taking more time to travel a meter through glass than through air. This works like a slower speed. An individual photon does not actually slow down. It gets delayed repeatedly by the atoms of the medium. A more dense medium has more atoms per meter to cause such delays. Dr. Ken Mellendorf, Illinois Central College
Congratulations on not being willing to accept "that is just the way em fields are!" The answer to your inquiry is not all that simple (my opinion), but I won't attempt a full answer in the limited space allowed here, not to mention my own limitations of knowledge. Like so many "simple" physics questions, I find the most lucid, but accurate, explanation in Richard Feynman's "Lectures on Physics," which most libraries will have: Volume I, Chapters 31-1 through 31-6, which describe refraction, dispersion, and diffraction. The "answer" has to do with how matter alters the electric field of incident radiation, but I won't pretend to be able to do a better job than Feynman.
The answer is that you are not dealing with the same ray of light. In vacuum a photon just keeps going at the speed of light. In a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. The excited atom/molecule then can jump to a lower energy state, emitting a photon while doing so. This can obviously make light appear to travel slower in a medium. In detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. Why, for example, do the emitted photons tend to travel in the same direction? Best, Richard J. Plano
Update: June 2012
<urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>
3.03125
794
Q&A Forum
Science & Tech.
58.858511
Giant Manta Ray (Manta birostris) Divers often describe the experience of swimming beneath a manta ray as like being overtaken by a huge flying saucer. This ray is the biggest in the world, but like the biggest shark, the whale shark, it is a harmless consumer of plankton. When feeding, it swims along with its cavernous mouth wide open, beating its huge triangular wings slowly up and down. On either side of the mouth, which is at the front of the head, there are two long paddles, called cephalic lobes. These lobes help funnel plankton into the mouth. A stingerless whiplike tail trails behind. Giant manta rays tend to be found over high points like seamounts where currents bring plankton up to them. Small fish called remoras often travel attached to these giants, feeding on food scraps along the way. Giant mantas are ovoviviparous, so the eggs develop and hatch inside the mother. These rays can leap high out of the water, to escape predators, clean their skin of parasites or communicate.
<urn:uuid:f3984201-a44a-42d6-802f-de566b1e8a6e>
3.09375
238
Knowledge Article
Science & Tech.
55.646214
Gallium metal is silver-white and melts at approximately body temperature (Wikipedia image).
Atomic Number: 31 | Atomic Radius: 187 pm (Van der Waals)
Atomic Symbol: Ga | Melting Point: 29.76 °C
Atomic Weight: 69.72 | Boiling Point: 2204 °C
Electron Configuration: [Ar]4s²3d¹⁰4p¹ | Oxidation States: 3
From the Latin word Gallia, France; also from Latin, gallus, a translation of "Lecoq," a cock. Predicted and described by Mendeleev as ekaaluminum, and discovered spectroscopically by Lecoq de Boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in KOH. Gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. Some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium. It is one of four metals -- the others being mercury, cesium, and rubidium -- which can be liquid near room temperature and, thus, can be used in high-temperature thermometers. It has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures. There is a strong tendency for gallium to supercool below its freezing point. Therefore, seeding may be necessary to initiate solidification. Ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. The metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies. High-purity gallium is attacked only slowly by mineral acids. Gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. It is widely used in doping semiconductors and producing solid-state devices such as transistors. Magnesium gallate containing divalent impurities, such as Mn+2, is finding use in commercial ultraviolet-activated powder phosphors. Gallium arsenide is capable of converting electricity directly into coherent light. Gallium readily alloys with most metals, and has been used as a component in low-melting alloys. Its toxicity appears to be of a low order, but it should be handled with care until more data are available.
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
3.46875
546
Knowledge Article
Science & Tech.
38.890701
If superparticles were to exist, the decay would happen far more often. This test is one of the "golden" tests for supersymmetry, and it is one that, on the face of it, this hugely popular theory among physicists has failed. Prof Val Gibson, leader of the Cambridge LHCb team, said that the new result was "putting our supersymmetry theory colleagues in a spin". The results are in fact completely in line with what one would expect from the Standard Model. There is already concern that the LHCb's sister detectors might have been expected to detect superparticles by now, yet none have been found so far. This certainly does not rule out SUSY, but it is getting to the same level as cold fusion if a positive experimental result does not come soon.
<urn:uuid:72def0d3-296d-49d8-bdf5-73c351dd6672>
2.6875
163
Personal Blog
Science & Tech.
46.709545
Let f and g be two differentiable functions. We will say that f and g are proportional if and only if there exists a constant C such that f = Cg. Clearly any function is proportional to the zero-function. If the constant C is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. Define the Wronskian of f and g to be W(f,g) = fg' - f'g, that is, W(f,g)(t) = f(t)g'(t) - f'(t)g(t). The following formula is very useful (see the reduction of order technique): (g/f)' = W(f,g)/f^2. Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions, while the Wronskian does....
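To make the criterion concrete, here is a small sketch (my own illustration, not part of the original page) that computes W(f,g) = fg' - f'g symbolically with SymPy; it vanishes for a proportional pair and is nonzero for an independent one:

```python
# Illustration only: the Wronskian W(f, g) = f*g' - f'*g, computed from its definition.
from sympy import symbols, sin, cos, exp, diff, simplify

t = symbols('t')

def wronskian2(f, g):
    """Wronskian of two functions of t."""
    return simplify(f * diff(g, t) - diff(f, t) * g)

print(wronskian2(exp(t), 3 * exp(t)))   # 0   -> g = 3*f, proportional (linearly dependent)
print(wronskian2(sin(t), cos(t)))       # -1  -> not proportional (linearly independent)
```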
<urn:uuid:b7bc34b8-0f1f-4df8-8e8d-e56fc9c8fec5>
2.6875
180
Knowledge Article
Science & Tech.
38.502318
Forecast Texas Fire Danger (TFD) The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map. Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE
<urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d>
3.015625
136
Knowledge Article
Science & Tech.
31.717
x^(2/3) + y^(2/3) = a^(2/3)
x = a cos^3(t), y = a sin^3(t)
The astroid only acquired its present name in 1836, in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle. The length of the astroid is 6a and its area is 3πa^2/8. The gradient of the tangent T from the point with parameter p is -tan(p). The equation of this tangent T is x sin(p) + y cos(p) = a sin(2p)/2. Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a. The astroid can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a. It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes. It is therefore a glissette.
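The constant tangent-segment property stated above is easy to verify numerically. The following is a small sketch of my own (not from the original page), sampling several parameter values p and checking that the tangent x sin(p) + y cos(p) = a sin(2p)/2 always cuts a segment of length a between the axes:

```python
# Numerical check (illustrative only) that the tangent to the astroid cuts a
# segment of constant length a between the x- and y-axes.
import math

a = 2.0
for p in (0.3, 0.7, 1.1, 1.4):            # avoid p = 0, pi/2, where the tangent lies on an axis
    rhs = a * math.sin(2 * p) / 2          # tangent line: x*sin(p) + y*cos(p) = rhs
    X = rhs / math.sin(p)                  # x-intercept (y = 0)
    Y = rhs / math.cos(p)                  # y-intercept (x = 0)
    print(f"p = {p:.1f}  ->  |XY| = {math.hypot(X, Y):.6f}")   # always 2.000000 (= a)
```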
<urn:uuid:367a0525-d005-4467-93f1-a7ac123614d1>
2.71875
409
Knowledge Article
Science & Tech.
54.846538
Science Fair Project Encyclopedia The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid (HCl) contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions. The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride. Other examples of inorganic covalently bonded chlorides which are used as reactants are: - phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in the laboratory. - Disulfur dichloride (S2Cl2) - used for vulcanization of rubber. Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb>
4.59375
320
Knowledge Article
Science & Tech.
27.864975
Convective heat flux is a flux depending on the temperature difference between the body and the adjacent fluid (liquid or gas) and is triggered by the *FILM card. It takes the form q = h (T - T0), where q is the flux normal to the surface, h is the film coefficient, T is the body temperature and T0 is the environment fluid temperature (also called the sink temperature). Generally, the sink temperature is known. If it is not, it is an unknown in the system. Physically, the convection along the surface can be forced or free. Forced convection means that the mass flow rate of the adjacent fluid (gas or liquid) is known and its temperature is the result of heat exchange between body and fluid. This case can be simulated by CalculiX by defining network elements and using the *BOUNDARY card for the first degree of freedom in the midside node of the element. Free convection, for which the mass flow rate is an unknown too and a result of temperature differences, cannot be simulated. guido dhondt 2012-10-06
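As a small numeric illustration (my own, not CalculiX input syntax, with made-up values), the film condition is just a linear law in the temperature difference:

```python
# Illustrative only: evaluate the convective (film) flux q = h * (T - T0)
# with made-up values; units are W/(m^2*K), K, and W/m^2.
h = 25.0      # film coefficient
T = 350.0     # body surface temperature
T0 = 300.0    # sink (fluid) temperature

q = h * (T - T0)
print(f"convective flux q = {q} W/m^2")   # 1250.0
```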
<urn:uuid:47d24057-e332-41de-bbe6-0338e16b49a6>
3.3125
249
Tutorial
Science & Tech.
41.094375
RR Lyrae star
RR Lyrae star, any of a group of old giant stars of the class called pulsating variables (see variable star) that pulsate with periods of about 0.2–1 day. They belong to the broad Population II class of stars (see Populations I and II) and are found mainly in the thick disk and halo of the Milky Way Galaxy and often in globular clusters. There are several subclasses—designated RRa, RRb, RRc, and RRd—based on the manner in which the light varies with time. The intrinsic luminosities of RR Lyrae stars are relatively well-determined, which makes them useful as distance indicators.
<urn:uuid:ca821097-b750-4e33-85da-b6754420e0dc>
2.921875
171
Knowledge Article
Science & Tech.
63.468978
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry. Easily Characterize Promoter Activity The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color—in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured the protein is bright red. The protein’s color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation). Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color. pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer. Detecting Timer Fluorescent Protein You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody. You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions. Terskikh, A., et al. (2000) Science 290(5496):1585–1588.
<urn:uuid:fee85558-8ff7-41a4-9a52-a042d84e5f3a>
2.6875
499
Knowledge Article
Science & Tech.
36.829775
Killing Emacs means ending the execution of the Emacs process. If you started Emacs from a terminal, the parent process normally resumes control. The low-level primitive for killing Emacs is kill-emacs. This command calls the hook kill-emacs-hook, then exits the Emacs process and kills it. If exit-data is an integer, that is used as the exit status of the Emacs process. (This is useful primarily in batch operation; see Batch Mode.) If exit-data is a string, its contents are stuffed into the terminal input buffer so that the shell (or whatever program next reads input) can read them. The kill-emacs function is normally called via the higher-level command C-x C-c (save-buffers-kill-terminal). See Exiting. It is also called automatically if Emacs receives a SIGHUP operating system signal (e.g., when the controlling terminal is disconnected), or if it receives a SIGINT signal while running in batch mode (see Batch Mode). kill-emacs-hook: This normal hook is run by kill-emacs, before it kills Emacs. Because kill-emacs can be called in situations where user interaction is impossible (e.g., when the terminal is disconnected), functions on this hook should not attempt to interact with the user. If you want to interact with the user when Emacs is shutting down, use kill-emacs-query-functions, described below. When Emacs is killed, all the information in the Emacs process, aside from files that have been saved, is lost. Because killing Emacs inadvertently can lose a lot of work, the save-buffers-kill-terminal command queries for confirmation if you have buffers that need saving or subprocesses that are running. It also runs the abnormal hook kill-emacs-query-functions: when save-buffers-kill-terminal is killing Emacs, it calls the functions in this hook, after asking the standard questions and before calling kill-emacs. The functions are called in order of appearance, with no arguments. Each function can ask for additional confirmation from the user. If any of them returns nil, save-buffers-kill-emacs does not kill Emacs, and does not run the remaining functions in this hook. Calling kill-emacs directly does not run this hook.
<urn:uuid:af93ad35-c5de-4297-a667-afc7347bbc6c>
2.6875
488
Documentation
Software Dev.
51.422678
Boulder trails are common to the interior of Menelaus crater as materials erode from higher topography and roll toward the crater floor. Downhill is to the left, image width is 500 m, LROC NAC M139802338L [NASA/GSFC/Arizona State University]. Most boulder trails are relatively high reflectance, but running through the center of this image is a lower reflectance trail. This trail is smaller than the others, and its features may be influenced by factors such as the mass of the boulder, its speed as it traveled downhill, and the elevation from which it originated. For example, is the boulder trail less distinct than the others because the boulder was smaller? What about the spacing of boulder tracks? The spacing of bounce-marks along boulder trails may say something about boulder mass and boulder speed. But why is this boulder trail low reflectance when all of the surrounding trails are higher reflectance? Perhaps this boulder trail is lower reflectance because the boulder gently bounced as it traveled downhill, and barely disturbed a thin layer of regolith? The contrast certainly appears similar to the astronauts' footprints and paths around the Apollo landing sites. Or, maybe the boulder fell apart during its downhill travel and the trail is simply made up of pieces of the boulder - we just don't know yet. LROC WAC context of Menelaus crater at the boundary between Mare Serenitatis and the highlands (dotted line). The arrow marks the location of today's featured image at the contact between the crater floor and the NE crater wall [NASA/GSFC/Arizona State University]. What do you think? Why don't you follow the trail to its source in the full LROC NAC frame and see if you can find any other low reflectance trails?
<urn:uuid:ce50e516-2229-404a-b328-7d80cdfd0d33>
3.25
362
Comment Section
Science & Tech.
50.615374
The Giant Squid, scientifically known as Architeuthis dux, is the largest of all invertebrates. Scientists believe it can be as long as 18 metres (60 feet). This specimen was collected by Dr Gordon Williamson, who worked as the resident ship's biologist for the whaling company Salvesons. He examined the stomach contents of 250 Sperm Whales (Physeter macrocephalus), keeping the largest squid beak and discarding the smaller ones until he ended up with this magnificent specimen.
<urn:uuid:03dc2cd4-80be-4c32-8ff8-4b196542656b>
3.03125
105
Knowledge Article
Science & Tech.
43.41975
SMOP is a C-based interpreter (runloop) that executes what different compilers (like Mildew) produce. If you want to help SMOP, you can just take on one of the low-level S1P implementations and write it. If you have any questions ask ruoso or pmurias at #perl6 @ irc.freenode.org. The slides for the talk "Perl 6 is just a SMOP" are available; they introduce a bit of the reasoning behind SMOP. A newer version of the talk, presented at YAPC::EU 2008, is also available. SMOP is an alternative implementation of a C engine to run Perl 6. It is focused on taking the most pragmatic approach possible while still being able to support all Perl 6 features. Its core resembles Perl 5 in some ways, and it differs from Parrot in many ways, including the fact that SMOP is not a Virtual Machine. SMOP is simply a runtime engine that happens to have an interpreter run loop. The main difference between SMOP and Parrot (besides the not-being-a-VM thing) is that SMOP is, from the bottom up, an implementation of the Perl 6 OO features, in such a way that SMOP should be able to do a full bootstrap of the Perl 6 type system. Parrot, on the other hand, has a much more static low-level implementation (the PMC). The same way PGE is a project on top of Parrot, SMOP will need a grammar engine for itself. SMOP is the implementation that is stressing the meta object protocol more than any other implementation, and so far that has been a very fruitful exercise, with Larry making many clarifications on the object system thanks to SMOP.
Important topics on SMOP
- SMOP doesn't recurse in the C stack, and it doesn't actually define a mandatory paradigm (stack-based or register-based). SMOP has a Polymorphic Eval that allows you to switch from one interpreter loop to another using Continuation Passing Style. See SMOP Stackless.
- SMOP doesn't define an object system of its own. The only thing it defines is the concept of the SMOP Responder Interface, which then encapsulates whatever object system. This feature is fundamental to implementing the SMOP Native Types.
- SMOP is intended to bootstrap itself from the low level to the high level. This is achieved by the fact that everything in SMOP is an Object. This way, even the low-level objects can be exposed to the high-level runtime. See SMOP OO Bootstrap.
- SMOP won't implement a parser of its own; it will use STD or whatever parser gets ported to its runtime first.
- In order to enable the bootstrap, the runtime has a set of SMOP Constant Identifiers that are available for the sub-language compilers to use.
- There are some special SMOP Values Not Subject to Garbage Collection.
- A new interpreter implementation, SMOP Mold, replaced SLIME.
- The "official" SMOP Perl 6 compiler is mildew - it lives in v6/mildew.
- Currently there exists an old Elf backend which targets SMOP - it lives in misc/elfish/elfX.
SMOP GSoC 2009
See the Old SMOP Changelog
<urn:uuid:9ef4d308-fa15-4196-86db-2db8b4c54358>
2.875
694
Knowledge Article
Software Dev.
53.614756
The word vivisection was first coined in the 1800s to denote the experimental dissection of live animals - or humans. It was created by activists who opposed the practice of experimenting on animals. The Roman physician Celsus claimed that in Alexandria in the 3rd century BCE physicians had performed vivisections on sentenced criminals, but vivisection on humans was generally outlawed. Experimenters frequently used living animals. Most early modern researchers considered this practice acceptable, believing that animals felt no pain. Even those who opposed vivisection in the early modern period did not usually do so out of consideration for the animals, but because they thought that this practice would coarsen the experimenter, or because they were concerned that animals stressed under experimental conditions did not represent the normal state of the body. Prompted by the rise of experimental physiology and the increasing use of animals, an anti-vivisection movement started in the 1860s. Its driving force, the British journalist Frances Power Cobbe (1822-1904), founded the British Victoria Street Society in 1875, which gave rise to the British government's Cruelty to Animals Act of 1876. This law regulated the use of live animals for experimental purposes. R A Kopaladze, 'Ivan P. Pavlov's view on vivisection', Integr. Physiol. Behav. Sci., 4 (2000), pp 266-271 C Lansbury, The Old Brown Dog: Women, Workers, and Vivisection in Edwardian England (Madison: University of Wisconsin Press, 1985) P Mason, The Brown Dog Affair: The Story of a Monument that Divided the Nation (London: Two Stevens, 1997) N A Rupke, (ed.) Vivisection in Historical Perspective (London: Crooms Helm, 1987) The science of the functioning of living organisms and their component parts.
<urn:uuid:302a84f1-d0b1-4e14-8e71-b2ded9ee5190>
3.71875
392
Knowledge Article
Science & Tech.
36.06538
The Weekly Newsmagazine of Science Volume 155, Number 19 (May 8, 1999) By J. Raloff Canadian scientists have identified the likely culprit behind some historic, regional declines in Atlantic salmon. The researchers find that a near-ubiquitous water pollutant can render young, migrating fish unable to survive a life at sea. Heavy, late-spring spraying of forests with a pesticide laced with nonylphenol during the 1970s and '80s was the clue that led the biologists to unmask that chemical's role in the transitory decline of salmon in eastern Canada. Though these sprays have ended, concentrations of nonylphenols in forest runoff then were comparable to those in the effluent of some pulp mills, industrial facilities, and sewage-treatment plants today. Downstream of such areas, the scientists argue, salmon and other migratory fish may still be at risk. Nonylphenols are surfactants used in products from pesticides to dishwashing detergents, cosmetics, plastics, and spermicides. Because waste-treatment plants don't remove nonylphenols well, these chemicals can build up in downstream waters (SN: 1/8/94, p. 24). When British studies linked ambient nonylphenol pollution to reproductive problems in fish (SN: 2/26/94, p. 142), Wayne L. Fairchild of Canada's Department of Fisheries and Oceans in Moncton, New Brunswick, became concerned. He recalled that an insecticide used on local forests for more than a decade had contained large amounts of nonylphenols. They helped aminocarb, the oily active ingredient in Matacil 1.8D, dissolve in water for easier spraying. Runoff of the pesticide during rains loaded the spawning and nursery waters of Atlantic salmon with nonylphenols. Moreover, this aerial spraying had tended to coincide with the final stages of smoltification, the fish's transformation for life at sea. To probe for effects of forest spraying, Fairchild and his colleagues surveyed more than a decade of river-by-river data on fish. They overlaid these numbers with archival data on local aerial spraying with Matacil 1.8D or either of two nonylphenol-free pesticides. One contained the same active ingredient, aminocarb, as Matacil 1.8D does. Most of the lowest adult salmon counts between 1973 and 1990 occurred in rivers where smolts would earlier have encountered runoff of Matacil 1.8D, Fairchild's group found. In 9 of 19 cases of Matacil 1.8D spraying for which they had good data, salmon returns were lower than in the 5 years before and the 5 years after, they report in the May Environmental Health Perspectives. No population declines were associated with the other two pesticides. The researchers have now exposed smolts in the laboratory to various nonylphenol concentrations, including some typical of Canadian rivers during the 1970s. The fish remained healthy until they entered salt water, at which point they exhibited a failure-to-thrive syndrome. "They looked like they were starving," Fairchild told Science News. Within 2 months, he notes, 20 to 30 percent died. Untreated smolts adjusted normally to salt water and fattened up. Steffen S. Madsen, a fish ecophysiologist at Odense University in Denmark, is not surprised, based on his own experiments. To move from fresh water to the sea, a fish must undergo major hormonal changes that adapt it for pumping out excess salt. A female preparing to spawn in fresh water must undergo the opposite change.
Since estrogen triggers her adaptation, Madsen and a colleague decided to test how smolts would respond to estrogen or nonylphenol, an estrogen mimic. In the lab, they periodically injected salmon smolts with estrogen or nonylphenol over 30 days, and at various points placed them in seawater for 24 hours. Salt in the fish's blood skyrocketed during the day-long trials, unlike salt in untreated smolts. "Our preliminary evidence indicates that natural and environmental estrogens screw up the pituitary," Madsen says. The gland responds by making prolactin, a hormone that drives freshwater adaptation. Judging by Fairchild's data, Madsen now suspects that any fish that migrates between fresh and salt water may be similarly vulnerable to high concentrations of pollutants that mimic estrogen. From Science News, Vol. 155, No. 19, May 8, 1999, p. 293. Copyright © 1999, Science Service.
<urn:uuid:3ac50003-34df-4326-9ff5-f4278ff44a0b>
3.109375
978
Truncated
Science & Tech.
47.450967
Gaia theory is a class of scientific models of the geo-biosphere in which life as a whole fosters and maintains suitable conditions for itself by helping to create an environment on Earth suitable for its continuity. The first such theory was created by the atmospheric scientist and chemist, Sir James Lovelock, who developed his hypotheses in the 1960s before formally publishing the concept, first in the New Scientist (February 13, 1975) and then in the 1979 book "Quest for Gaia". He hypothesized that the living matter of the planet functioned like a single organism and named this self-regulating living system after the Greek goddess, Gaia, using a suggestion of novelist William Golding. Gaia "theories" have non-technical predecessors in the ideas of several cultures. Today, "Gaia theory" is sometimes used among non-scientists to refer to hypotheses of a self-regulating Earth that are non-technical but take inspiration from scientific models. Among some scientists, "Gaia" carries connotations of lack of scientific rigor and quasi-mystical thinking about the planet Earth, and therefore Lovelock's hypothesis was received initially with much antagonism by much of the scientific community. No controversy exists, however, that life and the physical environment significantly influence one another. Gaia theory today is a spectrum of hypotheses, ranging from the undeniable (Weak Gaia) to the radical (Strong Gaia). At one end of this spectrum is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it is a self-organizing system, which works in such a way as to keep its systems in some kind of meta-equilibrium that is broadly conducive to life. The history of evolution, ecology and climate show that the exact characteristics of this equilibrium intermittently have undergone rapid changes, which are believed to have caused extinctions and felled civilisations. Biologists and earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions tend to have counterbalancing effects on environmental change. Opponents of this view sometimes point to examples of life's actions that have resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one. However, proponents will point out that those atmospheric composition changes created an environment even more suitable to life. Some go a step further and hypothesize that all lifeforms are part of a single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. While it is arguable that the Earth as a unit does not match the generally accepted biological criteria for life itself (Gaia has not yet reproduced, for instance), many scientists would be comfortable characterising the earth as a single "system". The most extreme form of Gaia theory is that the entire Earth is a single unified organism; in this view the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life.
Scientists contend that there is no evidence at all to support this last point of view, and it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively see homeostasis as an activity that requires conscious control, although this is not so. Much more speculative versions of Gaia theory, including all versions in which it is held that the Earth is actually conscious or part of some universe-wide evolution, are currently held to be outside the bounds of science. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gaia".
<urn:uuid:7a3fa081-9c60-42a7-8ec4-1d8c386b4009>
3.4375
794
Knowledge Article
Science & Tech.
23.657602
Giant Water Scavenger Beetle
Geographical Range: North America
Scientific Name: Hydrophilus triangularis
Conservation Status: Not listed by IUCN
The name says it all. This large beetle lives in water, where it scavenges vegetation and insect parts. The insect can store a supply of air within its silvery belly, much like a deep-sea diver stores air in a tank.
<urn:uuid:469863a4-9f80-47c2-ad04-ee7f0adecfd5>
3.078125
91
Knowledge Article
Science & Tech.
34.880113
WAKING the GIANT Bill McGuire While we transmit more than two million tweets a day and nearly one hundred trillion emails each year, we're also emitting record amounts of carbon dioxide (CO2). Bill McGuire, professor of geophysical and climate hazards at University College London, expects our continued rise in greenhouse gas emissions to awaken a slumbering giant: the Earth's crust. In Waking the Giant: How a Changing Climate Triggers Earthquakes, Tsunamis and Volcanoes (Oxford University Press), he explains that when the Earth's crust (or geosphere) becomes disrupted by rising temperatures and a CO2-rich atmosphere, natural disasters strike more frequently and with catastrophic force. Applying a "straightforward presentation of what we know about how climate and the geosphere interact," the book links previous warming periods 20,000 to 5,000 years ago with a greater abundance of tsunamis, landslides, seismic activity and volcanic eruptions. McGuire urgently warns of the "tempestuous future of our own making" as we progressively inch toward a similar climate. Despite his scientific testimony to Congress stating "what is going on in the Arctic now is the biggest and fastest thing that Nature has ever done" and the "incontrovertible" data that the Earth's climate draws a lively response from the geosphere, brutal weather events are still not widely seen as being connected to human influence. Is our global population sleepwalking toward imminent destruction, he asks, until "it is obvious, even to the most entrenched denier, that our climate is being transformed?"
<urn:uuid:46ed79e4-97dd-492f-bf29-99304e01f4ee>
3.046875
330
Nonfiction Writing
Science & Tech.
28.729356
Sidereal Time is the time it takes for celestial bodies to ascend and descend in the night sky. We know that celestial bodies are, in reality, fixed in their positions. The reason for their dramatic movement in the night is the rotation of the Earth. This is the same reason why the Sun and the Moon seem to rise and set. For the longest time, this motion caused many philosophers and astronomers to assume that the Earth was the center of the Universe. Fortunately, later astronomers like Copernicus were able to discern the true movements of the Earth, Moon, and Sun, helping to explain their apparent motion. The time that it takes for a star, planet or other fixed celestial body to ascend and descend in the night sky is also called the sidereal period. Coincidentally, this time corresponds to the time it takes for the Earth to rotate one revolution, which is just under 24 hours. Sidereal time is not like solar time, which is measured by the movement of the Sun, or the lunar cycle, which takes about 28 days. It is the relative angle of a celestial object to the prime meridian and the vernal equinox of the Earth. If these terms are confusing, here is what they mean. In cartography, the Earth is bisected by two major lines of longitude and latitude. These lines are the 0 degree points on the globe. The 0 degree point for latitude is the Equator, the line along which the Earth is perfectly bisected. It cuts through South America and Africa. The 0 degree point for longitude is the prime meridian. Its exact location is Greenwich, UK. The equinoxes are essentially the times of the year when the sun rises and sets at the exact same point of the horizon at the equator. This means that these are the only times the solar day is equally divided into 12 hours of day and 12 hours of night. The hour angle for a celestial object relative to this meridian is what we call sidereal time. This angle changes with the rotation of the Earth, creating a pattern of ascension and descent for celestial bodies in the Earth's sky. With the knowledge of sidereal time, astronomers can predict the positions of stars. The values for the sidereal time of celestial objects are compiled in a table or star chart called an ephemeris. With this guide to sidereal time, astronomers can find a celestial object regardless of the change in its position over the year. There are also some great resources on the net. The U.S. Naval Observatory has an online clock to help you find out the sidereal time in your area. There is also a great explanation on the astronomy section of the Cornell University site.
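As a rough worked example (my own illustration, not from the article): relative to the stars the Earth completes about 366.25 rotations per year, but only about 365.25 relative to the Sun, so a sidereal day comes out shorter than a solar day by roughly four minutes:

```python
# Rough illustration: length of a sidereal day from the ratio of sidereal
# rotations (~366.25) to solar days (~365.25) in one year.
solar_day_s = 24 * 3600
sidereal_day_s = solar_day_s * 365.25 / 366.25

h, rem = divmod(sidereal_day_s, 3600)
m, s = divmod(rem, 60)
print(f"sidereal day ~ {int(h)}h {int(m)}m {s:.0f}s")                                # ~ 23h 56m 4s
print(f"shorter than a solar day by ~ {(solar_day_s - sidereal_day_s) / 60:.1f} min")  # ~ 3.9
```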
<urn:uuid:678e8811-82bd-4c27-af17-f540e64bc52a>
3.75
564
Knowledge Article
Science & Tech.
54.148823
Here's the way the NWS defines it: Forecasts issued by the National Weather Service routinely include a "PoP" (probability of precipitation) statement, which is often expressed as the "chance of rain" or "chance of precipitation". http://www.srh.noaa.gov/ffc/?n=pop ZONE FORECASTS FOR NORTH AND CENTRAL GEORGIA NATIONAL WEATHER SERVICE PEACHTREE CITY GA 119 PM EDT THU MAY 8 2008 INCLUDING THE CITIES OF...ATLANTA...CONYERS...DECATUR... 119 PM EDT THU MAY x 2008 .THIS AFTERNOON...MOSTLY CLOUDY WITH A 40 PERCENT CHANCE OF SHOWERS AND THUNDERSTORMS. WINDY. HIGHS IN THE LOWER 80S. NEAR STEADY TEMPERATURE IN THE LOWER 80S. SOUTH WINDS 15 TO 25 MPH. .TONIGHT...MOSTLY CLOUDY WITH A CHANCE OF SHOWERS AND THUNDERSTORMS IN THE EVENING...THEN A SLIGHT CHANCE OF SHOWERS AND THUNDERSTORMS AFTER MIDNIGHT. LOWS IN THE MID 60S. SOUTHWEST WINDS 5 TO 15 MPH. CHANCE OF RAIN 40 PERCENT. What does this "40 percent" mean? ...will it rain 40 percent of the time? ...will it rain over 40 percent of the area? The "Probability of Precipitation" (PoP) describes the chance of precipitation occurring at any point you select in the area. How do forecasters arrive at this value? Mathematically, PoP is defined as follows: PoP = C x A where "C" = the confidence that precipitation will occur somewhere in the forecast area, and where "A" = the percent of the area that will receive measurable precipitation, if it occurs at all. So... in the case of the forecast above, if the forecaster knows precipitation is sure to occur (confidence is 100%), he/she is expressing how much of the area will receive measurable rain. (PoP = "C" x "A", or "1" times ".4", which equals .4 or 40%.) But, most of the time, the forecaster is expressing a combination of degree of confidence and areal coverage. If the forecaster is only 50% sure that precipitation will occur, and expects that, if it does occur, it will produce measurable rain over about 80 percent of the area, the PoP (chance of rain) is 40%. (PoP = .5 x .8, which equals .4 or 40%.) In either event, the correct way to interpret the forecast is: there is a 40 percent chance that rain will occur at any given point in the area.
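The arithmetic itself is a one-liner; here is a tiny sketch (my own illustration) reproducing the two cases worked out above:

```python
# PoP = C x A: forecaster confidence that precipitation occurs somewhere in the
# area, times the fraction of the area expected to receive measurable rain.
def pop(confidence: float, areal_coverage: float) -> float:
    return confidence * areal_coverage

print(pop(1.0, 0.4))   # 0.4 -> 40%: certain it rains, over 40% of the area
print(pop(0.5, 0.8))   # 0.4 -> 40%: 50% sure it rains, over 80% of the area if it does
```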
<urn:uuid:64f70112-bac2-48dc-87e7-d1404797fade>
3.421875
616
Comment Section
Science & Tech.
73.397381
A compiler is a computer program that takes code and either generates object code or translates code in one language into another language. When it translates code into another language, the target language is usually either compiled (into object code), interpreted, or even compiled again into yet another language. Object code can be run on your computer as a regular program. In the days when compute time cost thousands of dollars, compilation was done by hand. Now compilation is usually done by a program.
<urn:uuid:880d3bad-144c-4602-89ac-2eec0a853e79>
3.40625
102
Knowledge Article
Software Dev.
25.430852
GloMax®-Multi Jr Method for DNA Quantitation Using Hoechst 33258
Quantitation of DNA is an important step for many practices in molecular biology. Common techniques that use DNA, such as sequencing, cDNA synthesis and cloning, RNA transcription, transfection, nucleic acid labeling (e.g., random prime labeling), etc., all benefit from a defined template concentration. Failure to produce results from these techniques sometimes can be attributed to an incorrect estimate of the DNA template used. The concentration of a nucleic acid most commonly is measured by UV absorbance at 260nm (A260). Absorbance methods are limited in sensitivity, however, due to a high level of background interference.
<urn:uuid:8cdb1656-8511-466e-b3f6-681a7cf80615>
2.734375
149
Knowledge Article
Science & Tech.
28.8033
.NET Type Design Guidelines
This tutorial—.NET Type Design Guidelines—is from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, by Krzysztof Cwalina and Brad Abrams. Copyright © 2006 Microsoft Corp. All rights reserved. This article is reproduced by permission and has been edited especially for C# Online.NET. (This article was written and annotated by members of the Microsoft Common Language Runtime (CLR) and .NET teams and other experts.)
Type Design Guidelines in .NET
From the CLR perspective, there are only two categories of types—reference types and value types—but for the purpose of framework design discussion we divide types into more logical groups, each with its own specific design rules. Figure 4-1 shows these logical groups.
Classes are the general case of reference types. They make up the bulk of types in the majority of frameworks. Classes owe their popularity to the rich set of object-oriented features they support and to their general applicability. Base classes and abstract classes are special logical groups related to extensibility. Extensibility and base classes are covered in Chapter 6.
Interfaces are types that can be implemented both by reference types and value types. This allows them to serve as roots of polymorphic hierarchies of reference types and value types. In addition, interfaces can be used to simulate multiple inheritance, which is not natively supported by the CLR.
Structs are the general case of value types and should be reserved for small, simple types, similar to language primitives.
Enums are a special case of value types used to define short sets of values, such as days of the week, console colors, and so on.
Static classes are types intended as containers for static members. They are commonly used to provide shortcuts to other operations.
Delegates, exceptions, attributes, arrays, and collections are all special cases of reference types intended for specific uses, and guidelines for their design and usage are discussed elsewhere in this book.
- DO ensure that each type is a well-defined set of related members, not just a random collection of unrelated functionality. It is important that a type can be described in one simple sentence. A good definition should also rule out functionality that is only tangentially related.
Annotation: If you have ever managed a team of people you know that they don't do well without a crisp set of responsibilities. Well, types work the same way. I have noticed that types without a firm and focused scope tend to be magnets for more random functionality, which, over time, makes a small problem a lot worse. It becomes more difficult to justify why the next member with even more random functionality does not belong in the type. As the focus of the members in a type blurs, the developer's ability to predict where to find a given functionality is impaired, and therefore so is productivity.
Annotation: Good types are like good diagrams: what has been omitted is as important to clarity and usability as what has been included. Every additional member you add to a type starts at a net negative value and only by proven usefulness does it go from there to positive. If you add too much in an attempt to make the type more useful to some, you are just as likely to make the type useless to everyone.
Annotation: When I was learning OOP back in the early 1980s, I was taught a mantra that I still honor today: if things get too complicated, make more types. Sometimes, I find that I am thinking really hard trying to define a good set of methods for a type. When I start to feel that I'm spending too much time on this or when things just don't seem to fit together well, I remember my mantra and I define more, smaller types where each type has well-defined functionality. This has worked extremely well for me over the years. On the flip side, sometimes types do end up being dumping grounds for various loosely related functions. The .NET Framework offers several types like this, such as
<urn:uuid:6c35af72-3e52-40ad-bf2e-d5f5676c535e>
3
868
Documentation
Software Dev.
42.544528
The Physics Help Forum is not working today, at least not from my ISP, so this goes here. It's basically a math deal anyway: The formula to calculate the force on a point of mass, let's call them planets, that results from its being gravitationally attracted by another point of mass is Newton's:
F = G*m1*m2/d^2
where F is the force on the planet that results from the gravitational attraction exerted upon it by the other planet, G is Newton's gravity constant, m1 and m2 are the respective masses of the planets, and d is the distance between them. For simplicity's sake let's say all the planets considered are of the same mass m, so we can write m^2 instead of m1*m2. Now, if I'm not mistaken, the formula for calculating the force on a planet resulting from the gravitational attraction of more than two planets is:
F_j = sum over k (k ≠ j) of G*m^2/(d_jk)^2
where F_j is the force on the jth planet resulting from the gravitational attraction of the other planets, and d_jk is the distance between the jth planet and the kth planet. My question is "where is the vector addition?" That is, when considering the force on one planet that results from the gravitational attraction of many other planets, we have to take into account not only the distance of the other planets from planet j but also their position with respect to it (right?). Take for example the simple case of three planets in the same plane. Planet j is at the origin. Planet k is one unit to the right of j on the x axis, while planet l is one unit up the y axis. If the masses all equal 1, then, by the formula above, the force on planet j would be:
F_j = G/1^2 + G/1^2 = 2G
But the force is a function of both the distance and the position, right? So we must consider not only the Gravitational Forces individually exerted upon j by k and l, but also the angle at which these forces are exerted. That is, we must add the vectors. To add vectors you just plug the x-value and y-value sums of the added vectors into Pythagoras' formula. The force on planet j should therefore be:
F_j = sqrt( (F_jk*cos(θ_k) + F_jl*cos(θ_l))^2 + (F_jk*sin(θ_k) + F_jl*sin(θ_l))^2 )
(where θ_x is the angle subtended by a line drawn from planet j to planet x, i.e. θ_k = 0° and θ_l = 90°). So, what am I missing here? I am fully aware that I, and not Newton, am missing something here. Someone please help point this out for me.
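For what it is worth, the component-wise bookkeeping the poster is asking about is easy to check numerically; the following Python sketch is mine (not from the thread) and uses unit masses with G = 1 purely for illustration, contrasting a scalar sum of magnitudes with a proper vector sum.

import math

# Positions of the other planets relative to planet j at the origin (unit masses, G = 1).
others = [(1.0, 0.0), (0.0, 1.0)]   # planet k on the x axis, planet l on the y axis

fx = fy = 0.0
for (x, y) in others:
    d = math.hypot(x, y)
    f = 1.0 / d**2          # magnitude of G*m^2/d^2 with G = m = 1
    fx += f * x / d         # project the force along the unit vector toward the attractor
    fy += f * y / d

print(sum(1.0 / math.hypot(x, y)**2 for x, y in others))  # scalar sum of magnitudes: 2.0
print(math.hypot(fx, fy))                                 # vector sum magnitude: ~1.414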
<urn:uuid:7d99e7e1-4e2a-4168-989f-9de25f473394>
3.53125
480
Q&A Forum
Science & Tech.
58.705377
New study challenges previous findings that humans are an altruistic anomaly, and positions chimpanzees as cooperative, especially when their partners are patient. Researchers at the Yerkes National Primate Research Center have shown chimpanzees have a significant bias for prosocial behavior. This, the study authors report, is in contrast to previous studies that positioned chimpanzees as reluctant altruists and led to the widely held belief that human altruism evolved in the last six million years only after humans split from apes. The current study findings are available in the online edition of Proceedings of the National Academy of Sciences. According to Yerkes researchers Victoria Horner, PhD, Frans de Waal, PhD, and their colleagues, chimpanzees may not have shown prosocial behaviors in other studies because of design issues, such as the complexity of the apparatus used to deliver rewards and the distance between the animals. “I have always been skeptical of the previous negative findings and their over-interpretation,” says Dr. de Waal. “This study confirms the prosocial nature of chimpanzees with a different test, better adapted to the species,” he continues.
<urn:uuid:5d537746-8ad2-44d6-8586-ae6a035cf9b2>
3.09375
228
Personal Blog
Science & Tech.
26.0775
Elements | Blogs Wednesday, September 7, 2011 Is There Oxygen in Space? Yes, this summer astronomers using the Herschel Telescope identified oxygen molecules in space. They found these molecules in the Orion nebula, 1,344 light years away. Oxygen is the third most abundant element in the universe. Until now, scientists have only seen individual oxygen atoms in space. We do not breathe individual oxygen atoms, but rather oxygen molecules. (A molecule is a group of atoms banded together and it is the smallest unit of chemical compound that can take part in a chemical reaction.) Oxygen molecules make up 20% of the air we breathe. Scientists theorize that the oxygen molecules were locked up in water ice that... Thursday, March 10, 2011 I'm Atoms (Scientific Cover of Jason Mraz's I'm Yours) Here in Chicago it has been gray for the last three weeks – no sun, just melting snow and rain. This song made our day. It has sunshine, great music and atoms! The lyrics include fabulous lines such as: “Atoms bond together to form molecules Most of what’s surrounding me and you…” This science verse has been set to the music of Jason Mraz’s “I’m Yours”. This is a must watch! Saturday, February 26, 2011 The Deep Carbon Observatory Here at SuperSmart Carbon, we love learning about carbon. Apparently, we are not alone. There is a project being launched called the Deep Carbon Observatory that is being funded by the Alfred P. Sloan Foundation. The purpose of this group is to study carbon deep inside the earth. Carbon makes up somewhere from 0.7% to 3.2% of the earth’s elements. We know that there is carbon trapped under the earth’s crust, but we don’t know how much. The Deep Carbon Observatory is going to study how much carbon there is in the earth and what happens to it. Another question is what form is the... Friday, February 25, 2011 Where does gas come from? Carbon! (We always love it when the answer is carbon.) The gas we use to power our cars comes from decomposing organic matter. What does that mean? All life has carbon in it -- this includes everything living from you and me to zebras, tapeworms, tulips and seaweed. Since all living things have carbon in them, they are referred to as organic matter. Non-organic matter includes things like rocks, water and metals. When something organic dies, it goes into the earth’s surface. For example, when a leaf falls off a tree, it settles on the ground. Over the next months, it slowly rots and... Friday, February 11, 2011 How to Name an Element After Yourself Here on the SuperSmart Carbon blog, I will talk about the elements a lot because "Carbon" is an element. SuperSmart Carbon is a blue guy with a green hat and in this blog, he looks like he is 1 1/2 inches high. He has two rings around him with six yellow spheres. Although cute, SuperSmart Carbon does not exactly look like elements in the real world. Elements are really, really, small. You cannot see them with the naked eye, or even with a microscope. Although you can't see elements, they are all around you. Everything is made up of elements: the computer you are reading this blog on, the table the computer sits on, the air you...
<urn:uuid:b5177112-be1e-4086-9d85-858522f9c4b9>
2.921875
735
Content Listing
Science & Tech.
66.67267
Air Mass: An extensive body of the atmosphere whose physical properties, particularly temperature and humidity, exhibit only small and continuous differences in the horizontal. It may extend over an area of several million square kilometres and over a depth of several kilometres.
Backing Wind: Counter-clockwise change of wind direction, in either hemisphere.
Beaufort Scale: Wind force scale, originally based on the state of the sea, expressed in numbers from 0 to 12.
Fetch: Distance along a large water surface trajectory over which a wind of almost uniform direction and speed blows.
Fog: Suspension of very small, usually microscopic water droplets in the air, generally reducing the horizontal visibility at the Earth's surface to less than 1 km.
Front: The interface or transition zone between air masses of different densities (temperature and humidity).
Gale Force Wind: Wind with a speed between 34 and 47 knots. Beaufort scale wind force 8 or 9.
Gust: Sudden, brief increase of the wind speed over its mean value.
Haze: Suspension in the atmosphere of extremely small, dry particles which are invisible to the naked eye but numerous enough to give the sky an opalescent appearance.
High: Region of the atmosphere where the pressures are high relative to those in the surrounding region at the same level.
Hurricane: Name given to a warm core tropical cyclone with maximum surface winds of 118 km/h (64 knots) or greater in the North Atlantic, the Caribbean, the Gulf of Mexico and in the Eastern North Pacific Ocean.
Knot: Unit of speed equal to one nautical mile per hour (1.852 km/h).
Land Breeze: Wind of coastal regions, blowing at night from the land towards a large water surface as a result of the nocturnal cooling of the land surface.
Line Squall: Squall which occurs in a line.
Low: Region of the atmosphere in which the pressures are lower than those of the surrounding regions at the same level.
Mist: Suspension in the air of microscopic water droplets which reduce the visibility at the Earth's surface.
Pressure: Force per unit area exerted by the atmosphere on any surface by virtue of its weight; it is equivalent to the weight of a vertical column of air extending above a surface of unit area to the outer limit of the atmosphere.
Ridge: Region of the atmosphere in which the pressure is high relative to the surrounding region at the same level.
Sea Breeze: Wind in coastal regions, blowing by day from a large water surface towards the land as a result of diurnal heating of the land surface.
Sea Fog: Fog which forms in the lower part of a moist air mass moving over a colder surface (water).
Sea State: Local state of agitation of the sea due to the combined effects of wind and swell.
Squall: Atmospheric phenomenon characterized by an abrupt and large increase of wind speed with a duration of the order of minutes which diminishes suddenly. It is often accompanied by showers or thundershowers.
Storm Force Wind: Wind with a wind speed between 48 and 63 knots. Beaufort scale wind force 10 or 11.
Storm Surge: The difference between the actual water level under the influence of a meteorological disturbance (storm tide) and the level which would have been attained in the absence of the meteorological disturbance (i.e. astronomical tide).
Swell: Any system of water waves which has left its generating area.
Thunderstorm: Sudden electrical discharge manifested by a flash of light and a sharp or rumbling sound. Thunderstorms are associated with convective clouds and are, more often than not, accompanied by precipitation in the form of rain showers, hail, occasionally snow, snow pellets, or ice pellets.
Tropical Cyclone: Generic term for a non-frontal synoptic scale cyclone originating over tropical or sub-tropical waters with organized convection and definite cyclonic surface wind circulation.
Tropical Depression: Wind speed up to 33 knots.
Tropical Disturbance: Light surface winds with indications of cyclonic circulation.
Tropical Storm: Maximum wind speed of 34 to 47 knots.
Trough: An elongated area of relatively low atmospheric pressure.
Veering: Clockwise change of wind direction, in either hemisphere.
Visibility: Greatest distance at which a black object of suitable dimensions can be seen and recognized against the horizon sky during daylight, or could be seen and recognized during the night if the general illumination were raised to the normal daylight level.
Waterspout: A phenomenon consisting of an often violent whirlwind revealed by the presence of a cloud column or inverted cloud cone (funnel cloud), protruding from the base of a cumulonimbus, and of a bush composed of water droplets raised from the surface of the sea. Its behaviour is characterized by a tendency to dissipate upon reaching shore.
Wave Height: Vertical distance between the trough and crest of a wave.
Wave Period: Time between the passage of two successive wave crests past a fixed point.
<urn:uuid:c43d0fad-4182-427f-88ff-559827fbce8b>
3.484375
1,023
Structured Data
Science & Tech.
32.817154
Science Fair Project Encyclopedia
The sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. The inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples. The notion of a sampling frequency applies only to samplers in which each sample is taken periodically; there is no rule that prevents a sampler from taking samples at a non-periodic rate. If a signal has a bandwidth of 100 Hz, then to avoid aliasing the sampling frequency must be greater than 200 Hz. In some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter. This process is known as oversampling. In digital audio, common sampling rates are:
- 8,000 Hz - telephone, adequate for human speech
- 11,025 Hz
- 22,050 Hz - radio
- 44,100 Hz - compact disc
- 48,000 Hz - digital sound used for films and professional audio
- 96,000 or 192,000 Hz - DVD-Audio, some LPCM DVD audio tracks, BD-ROM (Blu-ray Disc) audio tracks, and HD-DVD (High-Definition DVD) audio tracks
In digital video, which uses a CCD as the sensor, the sampling rate is defined by the frame/field rate, rather than the notional pixel clock. All modern TV cameras use CCDs, and the image sampling frequency is the repetition rate of the CCD integration period.
- 13.5 MHz - CCIR 601, D1 video
See also: Continuous signal vs. Discrete signal, Digital control, Sample and hold, Sample (signal), Sampling (information theory), Signal (information theory)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
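To make the aliasing statement concrete, here is a short Python sketch (illustrative only, not from the article, and assuming NumPy is available): a 300 Hz tone sampled at 400 Hz, well below its Nyquist rate of 600 Hz, produces exactly the same samples as a 100 Hz tone.

import numpy as np

fs = 400.0                            # sampling rate in Hz
t = np.arange(8) / fs                 # a few sample instants

x_300 = np.cos(2 * np.pi * 300 * t)   # 300 Hz tone, undersampled
x_100 = np.cos(2 * np.pi * 100 * t)   # 100 Hz tone, adequately sampled

print(np.allclose(x_300, x_100))      # True: the 300 Hz tone aliases to 100 Hz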
<urn:uuid:d25b5562-8f30-4fd1-bc51-46f94956427e>
3.984375
414
Knowledge Article
Science & Tech.
55.315025
The life-giving ideas of chemistry are not reducible to physics. Or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. And, most importantly, they lose their chemical utility—their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. I'm thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation-reduction. As well as some theoretical ideas I've been involved in personally—through-bond coupling, orbital symmetry control, the isolobal analogy. Consider the notion of oxidation state. If you had to choose two words to epitomize the same-and-not-the-same nature of chemistry, would you not pick ferrous and ferric? The concept evolved at the end of the 19th century (not without confusion with "valency"), when the reality of ions in solution was established. As did a multiplicity of notations—ferrous iron is iron in an oxidation state of +2 (or is it 2+?) or Fe(II). Schemes for assigning oxidation states (sometimes called oxidation numbers) adorn every introductory chemistry text. They begin with the indisputable: In compounds, the oxidation states of the most electronegative elements (those that hold on most tightly to their valence electrons), oxygen and fluorine for example, are –2 and –1, respectively. After that the rules grow ornate, desperately struggling to balance wide applicability with simplicity. The oxidation-state scheme had tremendous classificatory power (for inorganic compounds, not organic ones) from the beginning. Think of the sky blue color of chromium(II) versus the violet or green of chromium(III) salts, the four distinctly colored oxidation states of vanadium. Oliver Sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. And not only boys. But there was more to oxidation states than just describing color. Or balancing equations. Chemistry is transformation. The utility of oxidation states dovetailed with the logic of oxidizing and reducing agents—molecules and ions that with ease removed or added electrons to other molecules. Between electron transfer and proton transfer you have much of reaction chemistry. I want to tell you how this logic leads to quite incredible compounds, but first let's look for trouble. Not for molecules—only for the human beings thinking about them. Those Charges are Real, Aren't They? Iron is not only ferrous or ferric, but also comes in oxidation states ranging from +6 (in BaFeO4) to –2 (in Fe(CO)42–, a good organometallic reagent). Is there really a charge of +6 on the iron in the first compound and a –2 charge in the carbonylate? Of course not, as Linus Pauling told us in one of his many correct (among some incorrect) intuitions. Such large charge separation in a molecule is unnatural. Those iron ions aren't bare—the metal center is surrounded by more or less tightly bound "ligands" of other simple ions (Cl– for instance) or molecular groupings (CN–, H2O, PH3, CO). The surrounding ligands act as sources or sinks of electrons, partly neutralizing the formal charge of the central metal atom. At the end, the net charge on a metal ion, regardless of its oxidation state, rarely lies outside the limits of +1 to –1. Actually, my question should have been countered critically by another: How do you define the charge on an atom? A problem indeed. 
A Socratic dialogue on the concept would bring us to the unreality of dividing up electrons so they are all assigned to atoms and not partly to bonds. A kind of tortured pushing of quantum mechanical, delocalized reality into a classical, localized, electrostatic frame. In the course of that discussion it would become clear that the idea of a charge on an atom is a theoretical one, that it necessitates definition of regions of space and algorithms for divvying up electron density. And that discussion would devolve, no doubt acrimoniously, into a fight over the merits of uniquely defined but arbitrary protocols for assigning that density. People in the trade will recognize that I'm talking about "Mulliken population analysis" or "natural bond analysis" or Richard Bader's beautifully worked out scheme for dividing up space in a molecule. What about experiment? Is there an observable that might gauge a charge on an atom? I think photoelectron spectroscopies (ESCA or Auger) come the closest. Here one measures the energy necessary to promote an inner-core electron to a higher level or to ionize it. Atoms in different oxidation states do tend to group themselves at certain energies. But the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom. An oxidation state bears little relation to the actual charge on the atom (except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to +26). This doesn't stop the occasional theoretician today from making a heap of a story when the copper in a formal Cu(III) complex comes out of a calculation bearing a charge of, say, +0.51. Nor does it stop oxidation states from being just plain useful. Many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. Oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to "bookkeep" electrons in the course of a reaction. Even if that electron, whether added or removed, spends a good part of its time on the ligands. But enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. Let's look at some beautiful chemistry of extreme oxidation states. Incredible, But True Recently, a young Polish postdoctoral associate, Wojciech Grochala, led me to look with him at the chemical and theoretical design of novel high-temperature superconductors. We focused on silver (Ag) fluorides (F) with silver in oxidation states II and III. The reasoning that led us there is described in our forthcoming paper. For now let me tell you about some chemistry that I learned in the process. I can only characterize this chemistry as incredible but true. (Some will say that I should have known about it, since it was hardly hidden, but the fact is I didn't.) Here is what Ag(II), unique to fluorides, can do. In anhydrous HF solutions it oxidizes Xe to Xe(II), generates C6F6+ salts from perfluorobenzene, takes perfluoropropylene to perfluoropropane, and liberates IrF6 from its stable anion. These reactions may seem abstruse to a nonchemist, but believe me, it's not easy to find a reagent that would accomplish them. Ag(III) is an even stronger oxidizing agent. It oxidizes MF6– (where M=Pt or Ru) to MF6. 
Here is what Neil Bartlett at the University of California at Berkeley writes of one reaction: "Samples of AgF3 reacted incandescently with metal surfaces when frictional heat from scratching or grinding of the AgF3 occurred." Ag(II), Ag(III) and F are all about equally hungry for electrons. Throw them one, and it's not at all a sure thing that the electron will wind up on the fluorine to produce fluoride (F–). It may go to the silver instead, in which case you may get some F2 from the recombination of F atoms. Not that everyone can (or wants to) do chemistry in anhydrous HF, with F2 as a reagent or being produced as well. In a recent microreview, Thomas O'Donnell says (with some understatement), "... this solvent may seem to be an unlikely choice for a model solvent system, given its reactivity towards the usual materials of construction of scientific equipment." (And its reactivity with the "materials of construction" of human beings working with that equipment!) But, O'Donnell goes on to say, "... with the availability of spectroscopic and electrochemical equipment constructed from fluorocarbons such as Teflon and Kel-F, synthetic sapphire and platinum, manipulation of and physicochemical investigation of HF solutions in closed systems is now reasonably straightforward." For this we must thank the pioneers in the field—generations of fluorine chemists, but especially Bartlett and Boris Zemva of the University of Ljubljana. Bartlett reports the oxidation of AgF2 to AgF4– (as KAgF4) using photochemical irradiation of F2 in anhydrous HF (made less acidic by adding KF to the HF). And Zemva used Kr2+ (in KrF2) to react with AgF2 in anhydrous HF in the presence of XeF6 to make XeF5+AgF4–. What a startling list of reagents! To appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. You can find some of Neil Bartlett's commentary in the article that Wojciech and I wrote, and in an interview with him. Charge It, Please Chemists are always changing things. How to tune the propensity of a given oxidation state to oxidize or reduce? One way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. The syntheses of the silver fluorides cited above contain some splendid examples of this strategy. Let me use Bartlett's words again, just explaining that "electronegativity" gauges in some rough way the tendency of an atom to hold on to electrons. (High electronegativity means the electron is strongly held, low electronegativity that it is weakly held.) It's easy to make a high oxidation state in an anion because an anion is electron-rich. The electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. That, in turn, is lower than it is in a cation. If I take silver and I expose it to fluorine in the presence of fluoride ion, in HF, and expose it to light to break up F2 to atoms, I convert the silver to silver(III), AgF4-. This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than AgF4- because the electronegativity in the neutral AgF3 is much higher than it is in the anion. If I can now take away a fluoride ion, and make a cation, I drive the electronegativity even further up. With such a cation, for example, AgF2+, I can steal the electron from PtF6- and make PtF6.... 
This is an oxidation that even Kr(II) is unable to bring about. Simple, but powerful reasoning. And it works. A World Record? Finally, a recent oxidation-state curiosity: What is the highest oxidation state one could get in a neutral molecule? Pekka Pyykkö and coworkers suggest cautiously, but I think believably, that octahedral UO6, that is U(XII), may exist. There is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in UO6. What wonderful chemistry has come—and still promises to come—from the imperfect logic of oxidation states! © Roald Hoffmann I am grateful to Wojciech Grochala, Robert Fay and Debra Rolison for corrections and comments. Thanks to Stan Marcus for suggesting the title of this column.
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
3.046875
2,582
Comment Section
Science & Tech.
42.922943
You have to like the attitude of Thomas Henning (Max-Planck-Institut für Astronomie). The scientist is a member of a team of astronomers whose recent work on planet formation around TW Hydrae was announced this afternoon. Their work used data from ESA’s Herschel space observatory, which has the sensitivity at the needed wavelengths for scanning TW Hydrae’s protoplanetary disk, along with the capability of taking spectra for the telltale molecules they were looking for. But getting observing time on a mission like Herschel is not easy and funding committees expect results, a fact that didn’t daunt the researcher. Says Henning, “If there’s no chance your project can fail, you’re probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off.” I would guess the relevant powers that be are happy with this team’s gamble. The situation is this: TW Hydrae is a young star of about 0.6 Solar masses some 176 light years away. The proximity is significant: This is the closest protoplanetary disk to Earth with strong gas emission lines, some two and a half times closer than the next possible subjects, and thus intensely studied for the insights it offers into planet formation. Out of the dense gas and dust here we can assume that tiny grains of ice and dust are aggregating into larger objects and one day planets. Image: Artist’s impression of the gas and dust disk around the young star TW Hydrae. New measurements using the Herschel space telescope have shown that the mass of the disk is greater than previously thought. Credit: Axel M. Quetz (MPIA). The challenge of TW Hydrae, though, has been that the total mass of the molecular hydrogen gas in its disk has remained unclear, leaving us without a good idea of the particulars of how this infant system might produce planets. Molecular hydrogen does not emit detectable radiation, while basing a mass estimate on carbon monoxide is hampered by the opacity of the disk. For that matter, basing a mass estimate on the thermal emissions of dust grains forces astronomers to make guesses about the opacity of the dust, so that we’re left with uncertainty — mass values have been estimated anywhere between 0.5 and 63 Jupiter masses, and that’s a lot of play. Error bars like these have left us guessing about the properties of this disk. The new work takes a different tack. While hydrogen molecules don’t emit measurable radiation, those hydrogen molecules that contain a deuterium atom, in which the atomic nucleus contains not just a proton but an additional neutron, emit significant amounts of radiation, with an intensity that depends upon the temperature of the gas. Because the ratio of deuterium to hydrogen is relatively constant near the Sun, a detection of hydrogen deuteride can be multiplied out to produce a solid estimate of the amount of molecular hydrogen in the disk. The Herschel data allow the astronomers to set a lower limit for the disk mass at 52 Jupiter masses, the most useful part of this being that this estimate has an uncertainty ten times lower than the previous results. A disk this massive should be able to produce a planetary system larger than the Solar System, which scientists believe was produced by a much lighter disk. When Henning spoke about taking risks, he doubtless referred to the fact that this was only the second time hydrogen deuteride has been detected outside the Solar System. 
The pitch to the Herschel committee had to be persuasive to get them to sign off on so tricky a detection. But 36 Herschel observations (with a total exposure time of almost seven hours) allowed the team to find the hydrogen deuteride they were looking for in the far-infrared. Water vapor in the atmosphere absorbs this kind of radiation, which is why a space-based detection is the only reasonable choice, although the team evidently considered the flying observatory SOFIA, a platform on which they were unlikely to get approval given the problematic nature of the observation. Now we have much better insight into a budding planetary system that is taking the same route our own system did over four billion years ago. What further gains this will help us achieve in testing current models of planet formation will be played out in coming years. The paper is Bergin et al., “An Old Disk That Can Still Form a Planetary System,” Nature 493 (31 January 2013), pp. 644–646 (preprint). Be aware as well of Hogerheijde et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 6054 (2011), p. 338. The latter, many of whose co-authors also worked on the Bergin paper, used Herschel data to detect cold water vapor in the TW Hydrae disk, with this result: Our Herschel detection of cold water vapor in the outer disk of TW Hya demonstrates the presence of a considerable reservoir of water ice in this protoplanetary disk, sufficient to form several thousand Earth oceans worth of icy bodies. Our observations only directly trace the tip of the iceberg of 0.005 Earth oceans in the form of water vapor. Clearly, TW Hydrae has much to teach us. Addendum: This JPL news release notes that although a young star, TW Hydrae had been thought to be past the stage of making giant planets: “We didn’t expect to see so much gas around this star,” said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. “Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters,” Bergin said.
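The bookkeeping behind "multiplying out" an HD measurement is simple enough to sketch. The Python fragment below is my own back-of-the-envelope illustration, not taken from the paper: it assumes all deuterium is locked in HD, all hydrogen in H2, a local interstellar D/H ratio of roughly 1.5e-5, and a made-up HD mass as input.

D_TO_H = 1.5e-5                      # assumed atomic D/H ratio (order-of-magnitude local ISM value)

def h2_mass_from_hd(m_hd):
    # n(HD)/n(H2) ~ 2*(D/H); the molecular masses of HD and H2 are 3 and 2 atomic units.
    n_hd = m_hd / 3.0
    n_h2 = n_hd / (2.0 * D_TO_H)
    return 2.0 * n_h2

# Hypothetical HD mass, expressed in Jupiter masses:
print(h2_mass_from_hd(2.3e-3))       # ~51 Jupiter masses of H2 for this made-up input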
<urn:uuid:a225f201-6f03-4503-bb76-bd2fde1838a7>
3.515625
1,210
Knowledge Article
Science & Tech.
46.712272
Consider four vectors F1, F2, F3, and F4, where their magnitudes are F1 = 43 N, F2 = 36 N, F3 = 19 N, and F4 = 54 N. Let θ1 = 120°, θ2 = −130°, θ3 = 20°, and θ4 = −67°, measured from the positive x axis with the counter-clockwise angular direction as positive. What is the magnitude of the resultant vector F, where F = F1 + F2 + F3 + F4? Answer in units of N. What is the direction of this resultant vector F? Note: Give the angle in degrees, using counterclockwise as the positive angular direction, measured from the positive x axis. Answer in units of degrees. I worked out the first part of the question by using trigonometric rules. My x value = −5.68671 and my y value = −33.5474. The magnitude came out to 34.026 N. I tried finding the direction by using θ = tan⁻¹(y/x) but I can't get the right answer.
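The numbers quoted in the post are enough to check the result directly; this Python sketch is mine (not part of the original question, and it reads θ3 as 20°). The key point is that atan2 keeps track of the quadrant, whereas tan⁻¹(y/x) does not when both components are negative.

import math

forces = [43.0, 36.0, 19.0, 54.0]        # magnitudes in N
angles = [120.0, -130.0, 20.0, -67.0]    # directions in degrees from the +x axis

x = sum(f * math.cos(math.radians(a)) for f, a in zip(forces, angles))
y = sum(f * math.sin(math.radians(a)) for f, a in zip(forces, angles))

print(math.hypot(x, y))                  # magnitude of the resultant, ~34.0 N
print(math.degrees(math.atan2(y, x)))    # direction, ~-99.6 degrees (third quadrant)
print(math.degrees(math.atan(y / x)))    # plain arctangent gives ~80.4 degrees, the wrong quadrant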
<urn:uuid:6424f806-15f1-4352-8ed4-15e67ff2dc91>
3.375
267
Q&A Forum
Science & Tech.
80.950653
An electron is a subatomic particle of spin 1/2. It couples with photons and, thus, is electrically charged. It is a lepton with a rest mass of 9.109 × 10^-31 kg and an electric charge of −1.602 × 10^-19 C, which is the smallest known charge possible for an isolated particle (confined quarks have fractional charge). The electric charge of the electron e is used as a unit of charge in much of physics. Electron pairs within an orbital system have opposite spins due to the Pauli exclusion principle; this characteristic spin pairing allows electrons to exist in the same quantum orbital, as the opposing magnetic dipole moments induced by each of the electrons ensure that they are attracted together. Current theories consider the electron as a point particle, as no evidence for internal structure has been observed. As a theoretical construct, electrons have been able to explain other observed phenomena, such as the shell-like structure of an atom, energy distribution around an atom, and energy beams (electron and positron beams). - ↑ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8 - ↑ Mauritsson, J. "Electron filmed for the first time ever". Lunds Universitet. Retrieved 2008-09-17. http://www.atomic.physics.lu.se/research/attosecond_physics - ↑ Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 981-02-3500-3.
<urn:uuid:e1790b63-dd2a-43d8-ae60-c3a435647df2>
3.859375
352
Knowledge Article
Science & Tech.
58.2225
Math is the basis for music, but for those of us who aren’t virtuosic at either, the connection isn’t always easy to grasp. Which is what makes the videos of Vi Hart, a “mathemusician” with a dedicated YouTube following, so wonderful. Hart explains complex phenomena--from cardioids to Carl Gauss--using simple (and often very funny) means. As Maria Popova pointed out yesterday, Hart’s latest video is a real doozy. In it, she uses a music box and a Möbius strip to explain space-time, showing how the two axes of musical notation (pitch and tempo) correspond to space and time. Using the tape notation as a model for space-time, she cuts and folds it to show the finite ways you can slice and dice the axes. Then, she shows us how you can loop the tape into a continuous strip of twinkling notes: If you fold space-time into a Mobius strip, you get your melody, and then the inversion, the melody played upside down. And then right side up again. And so on. So rather than folding and cutting up space-time, just cut and tape a little loop of space-time, to be played over, and over. It’s a pretty magical observation, and it makes even me--the prototypical math dunce--wish I’d tried harder. Yet there’s still time: Hart works for the Khan Academy, a nonprofit that offers free educational videos about math, biology, and more. Check it out. [H/t Brain Pickings]
<urn:uuid:a37519b2-ce71-4875-976f-9b4e9a28090c>
3.28125
346
Personal Blog
Science & Tech.
59.43732
The clock Command
The clock command has facilities for getting the current time, formatting time values, and scanning printed time strings to get an integer time value. The clock command was added in Tcl 7.5. Table 13-1 summarizes the clock command:
Table 13-1. The clock command.
| clock clicks | A system-dependent high resolution counter. |
| clock format value ?-format str? | Formats a clock value according to str. |
| clock scan string ?-base clock? ?-gmt boolean? | Parses a date string and returns a seconds value. The clock value determines the date. |
| clock seconds | Returns the current time in seconds. |
The following command prints the current time:
clock format [clock seconds]
=> Sun Nov 24 14:57:04 1996
The clock seconds command returns the current time, in seconds since a starting epoch. The clock format command formats an integer value into a date string. It takes an optional argument that controls the format. The format string contains % keywords that are replaced with the year, month, day, date, hours, minutes, and seconds, in various formats. The default string is:
%a %b %d %H:%M:%S %Z %Y
Tables 13-2 and 13-3 summarize the clock formatting strings:
Table 13-2. Clock formatting keywords.
| %% | Inserts a %. |
| %a | Abbreviated weekday name (Mon, Tue, etc.). |
| %A | Full weekday name (Monday, Tuesday, etc.). |
| %b | Abbreviated month name (Jan, Feb, etc.). |
| %B | Full month name. |
| %c | Locale specific date and time (e.g., Nov 24 16:00:59 1996). |
| %d | Day of month (01-31). |
| %H | Hour in 24-hour format (00-23). |
| %I | Hour in 12-hour format (01-12). |
| %j | Day of year (001-366). |
| %m | Month number (01-12). |
| %M | Minute (00-59). |
| %p | AM/PM indicator. |
| %S | Seconds (00-59). |
| %U | Week of year (00-52) when Sunday starts the week. |
| %w | Weekday number (Sunday = 0). |
| %W | Week of year (01-52) when Monday starts the week. |
| %x | Locale specific date format (e.g., Feb 19 1997). |
| %X | Locale specific time format (e.g., 20:10:13). |
| %y | Year without century (00-99). |
| %Y | Year with century (e.g., 1997). |
| %Z | Time zone name. |
Table 13-3. UNIX-specific clock formatting keywords.
| %D | Date as %m/%d/%y (e.g., 02/19/97). |
| %e | Day of month (1-31), no leading zeros. |
| %h | Abbreviated month name. |
| %n | Inserts a newline. |
| %r | Time as %I:%M:%S %p (e.g., 02:39:29 PM). |
| %R | Time as %H:%M (e.g., 14:39). |
| %t | Inserts a tab. |
| %T | Time as %H:%M:%S (e.g., 14:34:29). |
The clock clicks command returns the value of the system's highest resolution clock. The units of the clicks are not defined. The main use of this command is to measure the relative time of different performance tuning trials. The following command counts the clicks per second over 10 seconds, which will vary from system to system:
Example 13-1 Calculating clicks per second.
set t1 [clock clicks]
after 10000 ;# See page 218
set t2 [clock clicks]
puts "[expr ($t2 - $t1)/10] Clicks/second"
=> 1001313 Clicks/second
The clock scan command parses a date string and returns a seconds value. The command handles a variety of date formats. If you leave off the year, the current year is assumed.
Year 2000 Compliance
Tcl implements the standard interpretation of two-digit year values, which is that 70-99 are 1970-1999 and 00-69 are 2000-2069. Versions of Tcl before 8.0 did not properly deal with two-digit years in all cases. Note, however, that Tcl is limited by your system's time epoch and the number of bits in an integer. On Windows, Macintosh, and most UNIX systems, the clock epoch is January 1, 1970.
A 32-bit integer can count enough seconds to reach forward into the year 2037, and backward to the year 1903. If you try to clock scan a date outside that range, Tcl will raise an error because the seconds counter will overflow or underflow. In this case, Tcl is just reflecting limitations of the underlying system. If you leave out a date, clock scan assumes the current date. You can also use the -base option to specify a date. The following example uses the current time as the base, which is redundant:
clock scan "10:30:44 PM" -base [clock seconds]
The date parser allows these modifiers: year, month, fortnight (two weeks), week, day, hour, minute, second. You can put a positive or negative number in front of a modifier as a multiplier. For example:
clock format [clock scan "10:30:44 PM 1 week"]
=> Sun Dec 01 22:30:44 1996
clock format [clock scan "10:30:44 PM -1 week"]
=> Sun Nov 17 22:30:44 1996
You can also use tomorrow, yesterday, today, now, last, this, next, and ago as modifiers.
clock format [clock scan "3 years ago"]
=> Wed Nov 24 17:06:46 1993
Both clock format and clock scan take a -gmt option that uses Greenwich Mean Time. Otherwise, the local time zone is used.
clock format [clock seconds] -gmt true
=> Sun Nov 24 09:25:29 1996
clock format [clock seconds] -gmt false
=> Sun Nov 24 17:25:34 1996
<urn:uuid:f36d7530-13dd-4d6a-8426-ea739f255160>
3.765625
1,432
Documentation
Software Dev.
94.315313
This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100). The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined. Examples of magnitude values for well-known objects are:
Sun: -26.7 (about 400 000 times brighter than full Moon!)
Brightest Iridium flares: -8
Venus (at brightest): -4.4
International Space Station: -2
Sirius (brightest star): -1.44
Limit of human eye: +6 to +7
Limit of 10x50 binoculars: +9
Limit of Hubble Space Telescope: +30
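Because five magnitudes correspond to a factor of exactly 100, the brightness ratio between any two magnitudes follows immediately; the small Python sketch below is an illustration of that relationship (the function name is mine, not from the source).

def brightness_ratio(m_faint, m_bright):
    # Each 5-magnitude step is exactly a factor of 100, so one magnitude is 100**(1/5) ~ 2.512.
    return 100.0 ** ((m_faint - m_bright) / 5.0)

# Example: Sirius (-1.44) compared with the naked-eye limit (+6).
print(brightness_ratio(6.0, -1.44))   # ~950, so Sirius is roughly a thousand times brighter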
<urn:uuid:a13e5774-8a15-4ad6-bc01-def7c66a2edb>
4.25
260
Structured Data
Science & Tech.
60.330227
Range: Vancouver - Baja Calif. Depth: 6-18 (38) m.
The Sea Grape
Commonly known as "sea grapes," Botryocladia (botryo = grape, cladia = branches) pseudodichotoma is an abundant member of the RHODOPHYTA (red algae). The following phylogeny consists of links to lists of common characteristics which justify Botryocladia's inclusion:
- thallus is 10-30 cm tall
- elongate, pyriform (pear-shaped), sacchate (sack-like) branches
- sacchate branches are 4-7 cm long and 6-25 mm in diameter
- branches contain a colorless, acidic, polysaccharide and protein mucilage which makes them buoyant and therefore better able to compete for light
- 3 cell layers: pigmented cortical cells, unpigmented medium-sized gelatinous cells, and unpigmented large gelatinous medullar cells
Specialized gland cells cluster in groups of 10-20 on the inward-facing surface of medullar cells, which in pseudodichotoma are noticeably smaller than their neighbors. It is easy to view secretory cells under a microscope by making cross-sections with a razor blade. As with all Florideophyceae, B. pseudodichotoma has a tri-phasic life cycle. Cells of the diploid tetrasporophyte undergo meiosis to create cruciate tetraspores (3.88 million/day). Each of the 4 spores can grow into a haploid gametophyte (male or female). The mature male gametophyte emits spermatia which fertilize cells on the female gametophyte. Where fertilization has succeeded, a diploid carposporophyte grows on the female gametophyte. The carposporophyte has a pore opening to the outside through which it releases diploid carpospores. These carpospores settle and grow into
<urn:uuid:5af214eb-c261-4fff-a47c-c2ca3a8e2822>
2.875
452
Knowledge Article
Science & Tech.
26.541283
Joined: 16 Mar 2004
Posted: Tue Aug 04, 2009 2:40 pm  Post subject: Immune Responses Jolted into Action by Nanohorns
The immune response triggered by carbon nanotube-like structures could be harnessed to help treat infectious diseases and cancers, say researchers. The way tiny structures like nanotubes can trigger sometimes severe immune reactions has troubled researchers trying to use them as vehicles to deliver drugs inside the body in a targeted way. White blood cells can efficiently detect and capture nanostructures, so much research is focused on allowing nanotubes and similar structures to pass unmolested in the body. But a French-Italian research team plans to use nanohorns, a cone-shaped variety of carbon nanotubes, to deliberately provoke the immune system. They think that the usually unwelcome immune response could kick-start the body into fighting a disease or cancer more effectively. To test their theory, Alberto Bianco and Hélène Dumortier at the CNRS Institute in Strasbourg, France, in collaboration with Maurizio Prato at the University of Trieste, Italy, gave carbon nanohorns to mouse white blood cells in a Petri dish. The macrophage cells' job is to swallow foreign particles. After 24 hours, most of the macrophages had swallowed some nanohorns. But they had also begun to release reactive oxygen compounds and other small molecules that signal to other parts of the immune system to become more active. The researchers think they could tune that cellular distress call to a particular disease or cancer, by filling the interior of nanohorns with particular antigens, like ice cream filling a cone. "The nanohorns would deliver the antigen to the macrophages while also triggering a cascade of pro-inflammatory effects," Dumortier says. "This process should initiate an antigen-specific immune response." "There is still a long way to go before this interesting approach might become safe and effective," says Ruth Duncan at Cardiff University, UK. "Safety would ultimately depend on proposed dose, the frequency of dose and the route of administration," she says. Dumortier agrees more work is needed, but adds that the results so far suggest that nanohorns are less toxic to cells than normal nanotubes can be. "No sign of cell death was visible upon three days of macrophage culture in the presence of nanohorns," Dumortier says. Recent headline-grabbing results suggest that nanotubes much longer than they are wide can cause similar inflammation to asbestos. But nanohorns do not take on such proportions and so would not be expected to have such an effect. Journal reference: Advanced Materials (DOI: 10.1002/adma.200702753) Source: New Scientist /...
<urn:uuid:5cade7be-722d-4875-86c2-cdb3dd43ad4f>
3.390625
593
Comment Section
Science & Tech.
32.083152
Atomic oxygen, a corrosive space gas, finds many applications on Earth. An Atomic Innovation for Artwork Oxygen may be one of the most common substances on the planet, but recent space research has unveiled a surprising number of new applications for the gas, including restoring damaged artwork. It all started with a critical problem facing would-be spacecraft: the gasses just outside the Earth’s atmosphere are highly corrosive. While most oxygen atoms on Earth’s surface occur in pairs, in space the pair is often split apart by short-wave solar radiation, producing singular atoms. Because oxygen so easily bonds with other substances, it is highly corrosive in atomic form, and it gradually wears away the protective layering on orbiting objects such as satellites and the International Space Station (ISS). To combat this destructive gas, NASA recreated it on Earth and applied it to different materials to see what would prove most resistant. The coatings developed through these experiments are currently used on the ISS. During the tests, however, scientists also discovered applications for atomic oxygen that have since proved a success in the private sector. Breathing New Life into Damaged Art In their experiments, NASA researchers quickly realized that atomic oxygen interacted primarily with organic materials. Soon after, they partnered with churches and museums to test the gas’s ability to restore fire-damaged or vandalized art. Atomic oxygen was able to remove soot from fire-damaged artworks without altering the paint. It was first tested on oil paintings: In 1989, an arson fire at St. Alban’s Episcopal Church in Cleveland nearly destroyed a painting of Mary Magdalene. Although the paint was blistered and charred, atomic oxygen treatment plus a reapplication of varnish revitalized it. And in 2002, a fire at St. Stanislaus Church (also in Cleveland) left two paintings with soot damage, but atomic oxygen removed it. Buoyed by the successes with oil paints, the engineers also applied the restoration technique to acrylics, watercolors, and ink. At Pittsburgh’s Carnegie Museum of Art, where an Andy Warhol painting, Bathtub, has been kissed by a lipstick-wearing vandal, a technician successfully removed the offending pink mark with a portable atomic oxygen gun. The only evidence that the painting had been treated—a lightened spot of paint—was easily restored by a conservator. A Genuine Difference-maker When the successes in art restoration were publicized, forensic analysts who study documents became curious about using atomic oxygen to detect forgeries. They found that it can assist analysts in figuring out whether important documents such as checks or wills have been altered, by revealing areas of overlapping ink created in the modifications. The gas has biomedical applications as well. Atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery. Such contaminants contribute to inflammation that can lead to joint loosening and pain, or even necessitate removing the implant. Previously, there was no known chemical process that fully removed these inflammatory toxins without damaging the implants. Atomic oxygen, however, can oxidize any organic contaminants and convert them into harmless gases, leaving a contaminant-free surface. Thanks to NASA’s work, atomic oxygen—once studied in order to keep it at bay in space—is being employed in surprising, powerful ways here on Earth. To learn more about this NASA spinoff, read the original article
<urn:uuid:672eb588-eeaa-401f-81e0-1a0e5c9d984f>
3.703125
714
Knowledge Article
Science & Tech.
27.007077
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong THE ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest recorded bird flight in 1975 when it was sucked into the engine of a plane flying at 11,264 metres. Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
<urn:uuid:ad635de7-8a5e-4c98-be53-8c463594f176>
3.28125
207
Truncated
Science & Tech.
59.637347
New Zealand grasshoppers belong to the subfamily Catantopinae. A number of species are present including the common small Phaulacridium of the more coastal areas, the larger species of Sigaus of the tussock lands, and the alpine genera Paprides and Brachaspis, which include some quite large species. These inhabit the alpine areas of the South Island, some preferring scree and others tussock areas. They apparently survive the rigorous alpine winter conditions both as nymphs and as adults, and it is possible that they can withstand complete freezing. All species are plant feeders and lay batches of eggs or pods in short holes in the ground which they excavate with their abdomen. After hatching, the young nymphs moult four or five times before becoming adult. by Graeme William Ramsay, M.SC., PH.D., Entomology Division, Department of Scientific and Industrial Research, Nelson.
<urn:uuid:feefb68d-09c3-45d7-bc1b-52166c84268c>
3.515625
196
Knowledge Article
Science & Tech.
45.262532
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert. The hyperbolic functions are:
sinh x = (e^x - e^(-x))/2
cosh x = (e^x + e^(-x))/2
tanh x = sinh x / cosh x
coth x = cosh x / sinh x
sech x = 1 / cosh x
csch x = 1 / sinh x
Via complex numbers the hyperbolic functions are related to the circular functions as follows:
sinh x = -i sin(ix)
cosh x = cos(ix)
tanh x = -i tan(ix)
where i is the imaginary unit defined as i^2 = -1. Note that, by convention, sinh^2 x means (sinh x)^2, not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. Other notations for the hyperbolic cotangent function exist, though coth x is far more common. Hyperbolic sine and cosine satisfy the identity
cosh^2 x - sinh^2 x = 1
which is similar to the Pythagorean trigonometric identity. It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B. The basic antiderivatives are
∫ sinh x dx = cosh x + C and ∫ cosh x dx = sinh x + C.
For a full list of integrals of hyperbolic functions, see the list of integrals of hyperbolic functions. In the above expressions, C is called the constant of integration. It is possible to express the above functions as Taylor series:
sinh x = x + x^3/3! + x^5/5! + x^7/7! + ...
cosh x = 1 + x^2/2! + x^4/4! + x^6/6! + ...
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle. This construction is consistent with the identity cosh^2 t - sinh^2 t = 1 and the property that cosh t ≥ 1 for all t. The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent). The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola. The function cosh x is an even function, that is symmetric with respect to the y-axis. The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0. The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems
sinh(x + y) = sinh x cosh y + cosh x sinh y
cosh(x + y) = cosh x cosh y + sinh x sinh y
the "double angle formulas"
sinh 2x = 2 sinh x cosh x
cosh 2x = cosh^2 x + sinh^2 x = 2 cosh^2 x - 1
and the "half-angle formulas"
sinh^2(x/2) = (cosh x - 1)/2
cosh^2(x/2) = (cosh x + 1)/2
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x). The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity. From the definitions of the hyperbolic sine and cosine, we can derive the following identities:
e^x = cosh x + sinh x
e^(-x) = cosh x - sinh x
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials. Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
e^(ix) = cos x + i sin x
e^(-ix) = cos x - i sin x
so that, for example, cosh(ix) = cos x and sinh(ix) = i sin x.
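As a quick numerical check of the exponential definitions and the identity cosh^2 x - sinh^2 x = 1 given above, the short Python sketch below (mine, not part of the article) evaluates both for a few arguments.

import math

for x in [-2.0, 0.0, 0.5, 3.0]:
    sinh = (math.exp(x) - math.exp(-x)) / 2       # definition via exponentials
    cosh = (math.exp(x) + math.exp(-x)) / 2
    assert abs(sinh - math.sinh(x)) < 1e-9 and abs(cosh - math.cosh(x)) < 1e-9
    print(x, cosh**2 - sinh**2)                   # prints 1.0 (up to rounding) each time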
<urn:uuid:34eefbfb-968b-4240-9caa-0182a3ca0559>
4.0625
1,119
Knowledge Article
Science & Tech.
37.831287
This is one of my favorite stories. In short, one of John Burk’s (@occam98) students wanted to launch a space balloon. If you want all the details, this post at Quantum Progress pretty much says it all. The part that makes this story so cool is that it was the student who did all of the set up and fundraising and stuff. Love it. Oh, and the student is apparently named “M.” I wonder if the student is either one of the Men in Black or a James Bond scientist. Ok, you know what I do, right? I need to add something. Here is a very nice video of the space balloon launch. You know I like to use pictures for data from time to time, right? One problem is that I don’t know much about cameras. There, I said it. Really, almost all of my photos are made with my phone. That is what makes the phone so great, you almost always have your camera with you. To make these pictures useful for physics, it helps to know the angular size of the picture. Here is a diagram so you can see what I am talking about: There are 20 seconds left on the clock. Your team is down by 2 points such that a field goal would win it. The ball is spotted on the hash mark at the 15 yard line and it is first down. What to do? Should you call a run play so that the ball is in the center of the field? Or should the ball be kicked from where it is? So there is the question. Is it better to kick the ball from an angle or move back and kick it head on? Let me just look at one aspect of this situation. What is the angular size of the goal post from the location of the kicker? I am not looking at the height of the horizontal goal post – I will assume the kicker can get the ball over this. This was on reddit. It is an image from google maps showing an aircraft. Not surprisingly, there are lots of aircraft that get caught by the cameras in mid flight. But what about the colors? Is this some rainbow-unicorn plane? I am not sure of the exact details, but this rainbow effect is from the camera. I am not sure why, but this camera is capturing red, green, and blue (and probably white) colors separately at different times. Here is the actual link to the google map. The first thing that comes to my mind is – I wonder how fast the plane was moving. That question is difficult to answer because I don’t know how much time was between each ‘color filter’ photo. Oh well, I will proceed anyway. First, some info. Reading through the very insightful reddit comments, it seems the commenters are certain that the plane is an Embraer ERJ 145. Really, all I need is the length. Wikipedia lists it with a 29.87 m length and a 20.04 meter wingspan. From the image, does the rainbow plane have the same ratio of length to wingspan as listed? Ok, not quite the same. Maybe that is close enough. The one thing is that the image clearly has some distortion. Either the plane is turning or the image has been adjusted to make it look like it is a top down view. Well, surfing around a bit I couldn’t find another plane that was close in length/wing span ratio. I am going with ERJ 145. If I scale the image from the length of the plane, how “far” between the different colors? Here is a plot of the 4 color images. Note that for this image, I put the axis along the fuselage of the plane. The points are the locations of the back tip of one of the wings. The first cool thing that I can learn from this is that there must have been a cross-wind. The aircraft is not traveling in the direction that it is heading. Of course this is not uncommon, planes do this all the time.
Oh, let me note that I am assuming the aircraft is far enough away from the satellite that the multiple colors are due to the motion of the plane and not the satellite. This is probably a good assumption since the houses below are not rainbow colored.

What about the speed? If it is moving at a constant velocity, then the speed is just the change in position divided by the change in time. I know the changes in position. So, let me just call the change in time 1 cs (cs for camera-second). This means that the plane’s speed would be 1.8 m/cs. Ok, let’s just play a game. What if the time between frames was 1/100th of a second? That would mean that the speed would be 180 m/s or about 400 mph. That is possible since wikipedia lists the max speed at around 550 mph. If the time between images is 1/30th of a second (I picked that because that is a common frame rate for video) then the speed would be 54 m/s (120 mph). That doesn’t seem too low. I would imagine the landing speed would be around that speed (or maybe a little lower – but what do I know?)

But WAIT – there is more. Can I determine the altitude of the plane? Well, suppose I have two objects of two different lengths that are two different distances from a camera. Here is an example. My notation here looks a little messy, but both objects have a length (L) and a distance from the camera (r). They also have an angular size, denoted by θ. About angular size, I can write the following: for small angles, the angular size is approximately the length divided by the distance, θ ≈ L/r. I don’t know the distances from the camera and I don’t know the angles. But, I can sort of measure the angles. Suppose I measure the number of pixels each object takes up in the photo. Then the angular size could be written as θ1 = c·p1, where p1 is the pixel size of an object and c is some constant for that particular camera. Now I can re-write these angular equations and divide so that I get rid of the c. This gives the ratio of distances r1/r2 = (L1·p2)/(L2·p1). I can get values for all the stuff on the right of that equation. Here are my values (object 1 is the plane and object 2 is the background – really, I will just use the scale provided by google maps). Oh, one more thing. I am not going to measure the pixel length but rather some arbitrary length of the same scale.

L1 = 29.87 m
p1 = 1 unit
L2 = 10 m
p2 = 0.239 unit

Putting in my values above I get the ratio of the distances from the camera as about 0.71. Now I just need one of the r’s – ideally it would be r2 (the distance the camera is from the ground). Wikipedia says that the satellite images are typically taken from an aircraft flying 800-1500 feet high. So, suppose r2 = 1500 feet (457 meters). In this case the altitude of the rainbow plane would be around 1000 feet. 1000 feet would mean that the rainbow plane is probably landing (or taking off). It looks like Teterboro Airport is quite close and the rainbow plane is heading that way. I claim landing.

So, here is what I can say:

Airspeed. Really, I don’t have a definite answer. Like I said before it depends on the camera rate. If I had to pick (and I don’t) I would say that the rainbow plane is going 120 mph and the time between different colored images is 1/30th of a second.

Altitude. If I go with the higher value of the typical google-map planes (like the google map cars but with wings) then the altitude would be around 1000 feet. This lower altitude is why I used the lower value for the airspeed.

Windspeed. Now I am changing my answer for windspeed. I am going to pretend like there is no wind. The perpendicular motion of the colored images could be due to the motion of the google-map plane.
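As a cross-check on the arithmetic above, here is a minimal C++ sketch of the same estimates. It assumes the values quoted in the post (1.8 m of wing-tip displacement per colored frame, the 29.87 m fuselage and the 10 m map scale bar with their measured apparent sizes, and a 457 m camera height); the two frame-time values are just the guesses played with above, and the last number printed is the camera-to-plane distance, which the post then reads as an altitude of roughly 1000 feet.

#include <cstdio>

int main() {
    // Wing-tip displacement between successive color frames, scaled from the
    // 29.87 m fuselage length (value quoted in the post).
    const double dx_per_frame = 1.8;                       // meters per frame

    // Two guesses for the time between the color exposures.
    const double frame_times[] = {1.0 / 100.0, 1.0 / 30.0};  // seconds
    for (double dt : frame_times) {
        double v = dx_per_frame / dt;                      // speed in m/s
        std::printf("dt = %.4f s -> v = %.0f m/s (about %.0f mph)\n",
                    dt, v, v * 2.23694);
    }

    // Angular size argument: theta ~ L/r and theta ~ c*p, so
    // r_plane / r_ground = (L_plane * p_ground) / (L_ground * p_plane).
    const double L_plane = 29.87, p_plane = 1.0;    // ERJ 145 length, apparent size
    const double L_ground = 10.0, p_ground = 0.239; // map scale bar, apparent size
    const double ratio = (L_plane * p_ground) / (L_ground * p_plane);

    const double r_ground = 457.0;        // assumed camera height in meters (~1500 ft)
    const double r_plane = ratio * r_ground;   // camera-to-plane distance
    std::printf("r_plane/r_ground = %.2f -> r_plane = %.0f m (about %.0f ft)\n",
                ratio, r_plane, r_plane * 3.28084);
    return 0;
}

Running this prints speeds of roughly 180 m/s and 54 m/s for the two frame-time guesses, and a distance ratio of about 0.71.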
<urn:uuid:ab1372c3-67f1-40bb-a97d-79e3d444774a>
2.8125
1,668
Personal Blog
Science & Tech.
77.914116
If you really want to hit a home run with a global warming story, manage to link climate change to the beloved rainforest of the Amazon. The rainforest there is considered by many to be the “lungs of the planet,” the rainforest surely contains a cure for any ailment imaginable, all species in the place are critical to the existence of life on the Earth, and the people of the Amazon are surely the most knowledgeable group on the planet regarding how to care for Mother Earth. The global warming alarmists have taken full advantage of the Amazon and they are very quick to suggest that the Amazon ecosystem is extremely sensitive to climate change. Furthermore, not only can climate change impact the Amazon, but global climate itself is strongly linked to the state of the Amazon rainforest. But, as usual, there is more to this story than meets the eye (or, rather, the press).

For instance, a headline last year from USA Today sounded the alarm declaring “Amazon hit by climate chaos of floods, drought”. In the first few sentences, we learn that “Across the Amazon basin, river dwellers are adding new floors to their stilt houses, trying to stay above rising floodwaters that have killed 44 people and left 376,000 homeless. Flooding is common in the world’s largest remaining tropical wilderness, but this year the waters rose higher and stayed longer than they have in decades, leaving fruit trees entirely submerged. Only four years ago, the same communities suffered an unprecedented drought that ruined crops and left mounds of river fish flapping and rotting in the mud. Experts suspect global warming may be driving wild climate swings that appear to be punishing the Amazon with increasing frequency.”

This piece is typical of thousands of other news stories about calamities in the Amazon that are immediately blamed on global warming. Other headlines quickly found include “Ocean Warming - Not El Niño - Drove Severe Amazon Drought in 2005” or “Amazon Droughts Will Accelerate Global Warming” or “Amazon Could Shrink by 85% due to Climate Change, Scientists Say.” Notice that climate change can cause droughts and floods in the Amazon PLUS droughts in the Amazon can cause global warming (by eliminating trees that could uptake atmospheric carbon dioxide). Throughout many of these stories, the words “delicate” and “irreversible” are used over and over.

As we have discussed countless times in other essays, climate models are predicting the greatest warming in the mid-to-high latitudes of the Northern Hemisphere during the winter season. The Amazon is not located in a part of the Earth expected to have substantial warming due to the buildup of greenhouse gases. Somewhat surprisingly, the IPCC Technical Summary comments “The sign of the precipitation response is considered less certain over both the Amazon and the African Sahel. These are regions in which there is added uncertainty due to potential vegetation-climate links, and there is less robustness across models even when vegetation feedbacks are not included.” Basically, the models are not predicting any big changes in precipitation in the Amazon due to the change in atmospheric composition, nor are the models predicting any big change in temperature. Should the people of the Amazon deforest the place down to a parking lot, there is evidence that precipitation would decrease. There is a lot going on in the Amazon – deforestation, elevated carbon dioxide levels, global warming, and all these reported recent droughts and floods.
One would think that the entire place is a wreck! A recent article in Hydrological Processes might come as a huge surprise to the climate change crusade. The first two sentences of the abstract made this one an immediate favorite at World Climate Report. The author has the nerve to write “Rainfall and river indices for both the northern and southern Amazon were used to identify and explore long-term climate variability on the region. From a statistical analysis of the hydrometeorological series, it is concluded that no systematic unidirectional long-term trends towards drier or wetter conditions have been identified since the 1920s.” We should leave it at that!

The author is José Marengo with Brazil’s “Centro de Ciência do Sistema Terrestre/Instituto Nacional de Pesquisas Espaciais”; the work was funded by the Brazilian Research Council and the “UK Global Opportunity Fund-GOF-Dangerous Climate Change”. Very interesting – we suspect the “Dangerous Climate Change” group was not happy with the first two sentences of the abstract.

José Marengo begins the piece noting “The main objective of this study is the assessment of long-term trends and cycles in precipitation in the entire Amazon basin, and over the northern and southern sections. It was addressed by analysing rainfall and streamflow indices, dating from the late 1920s”. Figure 1 shows his subregions within the greater Amazon basin.

Figure 1. Orientation map showing the rainfall network used in this study for (a) northern Amazonia (NAR) and (b) southern Amazonia (SAR) (from Marengo, 2009).

The bottom line here is amazing. The author writes “The analysis of the annual rainfall time series in the Amazon represented by the NAR and SAR indices indicates slight negative trends for the northern Amazon and positive trends for the southern Amazon. However, they are weak and significant at 5% only in the southern Amazon” (Figure 2). So, nothing is happening out of the ordinary in the north and the south is getting wetter. There is definitely variability around the weak trends, but it all seems to be related to natural variability, not deforestation or global warming.

Figure 2. Historical hydrometeorological indices for the Amazon basin. They are expressed as anomalies normalized by the standard deviation from the long-term mean, (a) northern Amazonia, (b) southern Amazonia. The thin line represents the trend. The broken line represents the 10-year moving average (from Marengo, 2009).

Marengo notes “Since 1929, long-term tendencies and trends, some of them statistically significant, have been detected in a set of regional-average rainfall time series in the Amazon basin and supported by the analysis of some river streamflow time series. These long-term variations are more characteristic of decadal and multi-decadal modes, indicators of natural climate variability, rather than any unidirectional trend towards drier conditions (as one would expect, due to increased deforestation or to global warming).” [emphasis added] José – nice work, have a Cuervo on us!!!

Marengo, J.A. 2009. Long-term trends and cycles in the hydrometeorology of the Amazon basin since the late 1920s. Hydrological Processes, 23, 3236-3244.
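For readers who want to see what “a weak trend, significant at 5%” means operationally, here is a small, self-contained C++ sketch of the usual ordinary-least-squares trend test on an annual anomaly series. The data array below is made up for illustration only (it is not Marengo’s series); the point is just the mechanics: fit a slope against year, compute its standard error, and compare the t statistic to roughly 2 for a two-sided 5% test when the series is reasonably long.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical normalized rainfall anomalies, one value per year (illustration only).
    std::vector<double> y = {0.3, -0.1, 0.4, -0.5, 0.2, 0.1, -0.3, 0.6,
                             -0.2, 0.0, 0.5, -0.4, 0.1, 0.2, -0.1, 0.3};
    const int n = static_cast<int>(y.size());

    // Ordinary least squares of anomaly against year index.
    const double xbar = (n - 1) / 2.0;
    double ybar = 0.0;
    for (double v : y) ybar += v;
    ybar /= n;

    double sxy = 0.0, sxx = 0.0;
    for (int i = 0; i < n; ++i) {
        sxy += (i - xbar) * (y[i] - ybar);
        sxx += (i - xbar) * (i - xbar);
    }
    const double slope = sxy / sxx;            // anomaly units per year

    // Residual variance and the standard error of the slope.
    double sse = 0.0;
    for (int i = 0; i < n; ++i) {
        double fit = ybar + slope * (i - xbar);
        sse += (y[i] - fit) * (y[i] - fit);
    }
    const double se = std::sqrt(sse / (n - 2)) / std::sqrt(sxx);
    const double t = slope / se;

    // |t| above about 2 is roughly the two-sided 5% threshold for long series.
    std::printf("slope = %.4f per year, t = %.2f -> %s at the 5%% level\n",
                slope, t, std::fabs(t) > 2.0 ? "significant" : "not significant");
    return 0;
}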
<urn:uuid:1d043e2c-548a-4380-aff6-44daad02285d>
2.84375
1,450
Personal Blog
Science & Tech.
33.824703
Consider the following in Haskell: let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p" Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes.

In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in. But the version in quotes needs the quotation marks to be 'escaped' using backslash so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character and use C's convenient %c mechanism to print it. For example, the kind of program this produces is sketched at the end of this post. Again we were lucky, C provides this great %c mechanism.

What do you need in a language to be sure you can write a self-replicator? It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.

We'll use capital letters to represent programs. Typically these mean 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x). P is a program and [P] is a function. We'll consider both programs that take arguments like the P I just mentioned, and also programs, R, that take no arguments, so [R] is simply the output or return value of the program R.

Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x. In other words [P(x)] = [P](x). P(x) is, of course, another program. There are also many ways of implementing P(x). We could simply evaluate [P](x) and write a program to simply print this out or return it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whatever we choose is irrelevant to the following discussion.

So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.

With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given an input X, P acts just like an interpreter for the programming language as it outputs the same thing as an interpreter would given input X.

So here's a proof: Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P].
So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)). Now the fun starts:

[P](Q0(Q0)) = [Q0](Q0) (by definition of Q0)
            = [Q0(Q0)] (by definition of P(x))

In other words, Q0(Q0) is our fixed point. So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run!

What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language. We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too.

So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.

Another example of an application of these methods is Futamura's theorem which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus.
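To make the %c trick from earlier concrete, here is a minimal two-line self-replicator in that C style, written so that it also compiles as C++ (this is an illustration of the quote-and-newline-by-ASCII-code workaround, not the literal output of the Q0(Q0) construction, which would be much longer). The string s holds the program's second line with %c placeholders for the newline (ASCII 10) and the double quote (ASCII 34); printing s with itself as the %s argument reproduces the source, provided the file is saved exactly as these two lines with a trailing newline. No comments appear in the program itself, since any comment would also have to be reproduced by the output.

#include <cstdio>
int main(){const char*s="#include <cstdio>%cint main(){const char*s=%c%s%c;std::printf(s,10,34,s,34,10);return 0;}%c";std::printf(s,10,34,s,34,10);return 0;}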
<urn:uuid:e9f8736e-fa3e-4ea6-b907-b80b1d97b5d9>
3.171875
1,215
Personal Blog
Software Dev.
69.541045
Gold has been known since prehistory. The symbol is derived from Latin aurum (gold). Ionization energies: Au I 9.2 eV, Au II 20.5 eV, Au III 30.0 eV.

Absorption lines of Au I
In the sun, the equivalent width of Au I 3122(1) is 0.005.

Behavior in non-normal stars
The probable detection of Au I was announced by Jaschek and Malaroda (1970) in one Ap star of the Cr-Eu-Sr subgroup. Fuhrmann (1989) detected Au through the ultimate line of Au II at 1740(2) in several Bp stars of the Si and Ap stars of the Cr-Eu-Sr subgroups. The presence of Au seems to be associated with that of platinum and mercury.

Au has one stable isotope, Au197, and 20 short-lived isotopes and isomers. Au can only be produced by the r process.

Published in "The Behavior of Chemical Elements in Stars", Carlos Jaschek and Mercedes Jaschek, 1995, Cambridge University Press.
<urn:uuid:8d506fc6-f879-413c-9824-20930fe8e0a0>
3.75
240
Structured Data
Science & Tech.
77.703306
Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96

Harris, M. P.; Wanless, S.; Rothery, P. 2000. Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96. Atlantic Seabirds, 2, 133-150. Full text not available from this repository.

On the Isle of May between 1986 and 1996, the average adult survival of Shags Phalacrocorax aristotelis was 82.1%, Common Guillemots Uria aalge 95.2%, Razorbills Alca torda 90.5%, Puffins Fratercula arctica 91.6% and Kittiwakes Rissa tridactyla 88.2%. Shags, Razorbills and Puffins all had a single year of exceptionally low survival but these years did not coincide. In contrast, Kittiwake survival declined significantly over the period and there was evidence that substantial non-breeding occurred in several years. Breeding success of Kittiwakes also declined, which gives rise to concern for its future status. Given a high enough level of resighting, return rates (the proportion of birds known to be alive one year that were seen the next year) on a year-by-year basis provide a reasonable indication of relative changes in adult survival.

Programmes: CEH Programmes pre-2009 publications > Other
CEH Sections: Biodiversity & Population Processes
Additional Keywords: Shag, Phalacrocorax aristotelis, Common Guillemot, Uria aalge, Razorbill, Alca torda, Puffin, Fratercula arctica, Kittiwake, Rissa tridactyla
NORA Subject Terms: Zoology
Date made live: 08 Dec 2008 21:30
<urn:uuid:c2223b59-5dd0-474f-acd5-a52f82c794e8>
2.765625
516
Academic Writing
Science & Tech.
31.554757
I’ve been looking for a good, easy-to-read document outlining the latest climate science research and putting it in context for Copenhagen, and I think I’ve found it. Today in Sydney, the Climate Change Research Centre, a unit of the University of New South Wales, released The Copenhagen Diagnosis. It’s free to download or view online in a nice rich text format, so credit to the centre for making it accessible in multiple attractive formats. But most praise has to be reserved for the 26 contributing authors who have laid out the science to make it easy to understand for a layman like myself. Chapters cover aspects of climate science including “the atmosphere”, “permafrost and hydrates” and “global sea level”. Throughout are scattered common questions about climate change and answers designed to clear up confusion. An example: “Are we just in a natural warming phase, recovering from the ‘little ice age’?”

The document, once pictures and the reference section are included, is a slim 50 pages. If you want something to get yourself up to speed on the science ahead of Copenhagen this could well be the document to download. It’s even better if you have a colleague willing to run across the road and get it bound for you as I have! The executive summary of the Copenhagen Diagnosis, which I’ve excerpted below, gives the basics you need to know if even 50 pages is too much to handle as we head into the highly-stressful (for everyone other than academics) end of year period. The diplomats and politicians soon to board flights to Denmark could do worse than slip a copy of The Copenhagen Diagnosis into their cabin luggage.

The most significant recent climate change findings are:

Surging greenhouse gas emissions: Global carbon dioxide emissions from fossil fuels in 2008 were nearly 40% higher than those in 1990. Even if global emission rates are stabilized at present-day levels, just 20 more years of emissions would give a 25% probability that warming exceeds 2°C, even with zero emissions after 2030. Every year of delayed action increases the chances of exceeding 2°C warming.

Recent global temperatures demonstrate human-induced warming: Over the past 25 years temperatures have increased at a rate of 0.19°C per decade, in very good agreement with predictions based on greenhouse gas increases. Even over the past ten years, despite a decrease in solar forcing, the trend continues to be one of warming. Natural, short-term fluctuations are occurring as usual, but there have been no significant changes in the underlying warming trend.

Acceleration of melting of ice-sheets, glaciers and ice-caps: A wide array of satellite and ice measurements now demonstrate beyond doubt that both the Greenland and Antarctic ice-sheets are losing mass at an increasing rate. Melting of glaciers and ice-caps in other parts of the world has also accelerated since 1990.

Rapid Arctic sea-ice decline: Summer-time melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models.

Current sea-level rise underestimated: Satellites show recent global average sea-level rise (3.4 mm/yr over the past 15 years) to be ~80% above past IPCC predictions. This acceleration in sea-level rise is consistent with a doubling in contribution from melting of glaciers, ice caps, and the Greenland and West-Antarctic ice-sheets.
Sea-level predictions revised: By 2100, global sea-level is likely to rise at least twice as much as projected by Working Group 1 of the IPCC AR4; for unmitigated emissions it may well exceed 1 meter. The upper limit has been estimated as ~ 2 meters sea level rise by 2100. Sea level will continue to rise for centuries after global temperatures have been stabilized, and several meters of sea level rise must be expected over the next few centuries.

Delay in action risks irreversible damage: Several vulnerable elements in the climate system (e.g. continental ice-sheets, Amazon rainforest, West African monsoon and others) could be pushed towards abrupt or irreversible change if warming continues in a business-as-usual way throughout this century. The risk of transgressing critical thresholds (’tipping points’) increases strongly with ongoing climate change. Thus waiting for higher levels of scientific certainty could mean that some tipping points will be crossed before they are recognized.

The turning point must come soon: If global warming is to be limited to a maximum of 2 °C above pre-industrial values, global emissions need to peak between 2015 and 2020 and then decline rapidly. To stabilize climate, a decarbonized global society — with near-zero emissions of CO2 and other long-lived greenhouse gases — needs to be reached well within this century. More specifically, the average annual per-capita emissions will have to shrink to well under 1 metric ton CO2 by 2050. This is 80-95% below the per-capita emissions in developed nations in 2000.
<urn:uuid:6de73326-296f-4b7a-b8ba-84761d55c25e>
2.78125
1,051
Personal Blog
Science & Tech.
41.26094
Classifying Critical Points

So let’s say we’ve got a critical point x₀ of a multivariable function f: ℝⁿ → ℝ. That is, a point where the differential df(x₀) vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We’ll assume here that f is twice continuously differentiable in some region around x₀.

The analogue of the second derivative for multivariable functions is the second differential d²f. This function assigns to every point x a bilinear function of two displacement vectors u and v, and it measures the rate at which the directional derivative in the direction of v is changing as we move in the direction of u. That is,

d²f(x)(u, v) = [D_u D_v f](x)

If we choose coordinates on ℝⁿ given by an orthonormal basis {e_i}, we can write the second differential in terms of coordinates:

d²f(x)(u, v) = Σ_{i,j} ∂²f/∂x_i ∂x_j (x) u_i v_j

The matrix of these second partial derivatives is often called the “Hessian” of f at the point x.

As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.

So let’s go back and assume we’re working with such a basis. This means that our second partial derivatives are particularly simple. We find that for i ≠ j we have ∂²f/∂x_i ∂x_j = 0, and for i = j, the second partial derivative ∂²f/∂x_i² is an eigenvalue λ_i, which we can assume (without loss of generality) are nondecreasing. That is, λ₁ ≤ λ₂ ≤ … ≤ λₙ.

Now, if all of these eigenvalues are positive at a critical point x₀, then the Hessian is positive-definite. That is, given any direction v we have d²f(x₀)(v, v) > 0. On the other hand, if all of the eigenvalues are negative, the Hessian is negative definite; given any direction v we have d²f(x₀)(v, v) < 0. In the former case, we’ll find that f has a local minimum in a neighborhood of x₀, and in the latter case we’ll find that f has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at x₀ we’ll call a “saddle” (sketch the graph of, for example, f(x, y) = x² − y² near (0, 0) to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can’t be a local extremum.

We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call “generic” critical points, since if we “wiggle” the function a bit (while maintaining a critical point at x₀) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test.

Unfortunately, the analogy doesn’t extend, in that the sign of the Hessian determinant isn’t instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign — denoting a local maximum or a local minimum — while a negative determinant denotes eigenvalues of different signs — denoting a saddle. This much is included in multivariable calculus courses, although usually without a clear explanation why it works.

So, given a direction vector v so that d²f(x₀)(v, v) > 0, then since d²f(x)(v, v) is continuous in x, there will be some neighborhood N of x₀ so that d²f(x)(v, v) > 0 for all x in N. In particular, there will be some range of t so that x₀ + tv is in N.

For any such point x₀ + tv we can use Taylor’s theorem with the second-order remainder, together with df(x₀) = 0, to tell us that

f(x₀ + tv) = f(x₀) + (1/2) d²f(x₀ + σtv)(tv, tv)

for some σ in (0, 1). And from this we see that f(x₀ + tv) > f(x₀) for every t so that x₀ + tv is in N. A similar argument shows that if d²f(x₀)(v, v) < 0 then f(x) < f(x₀) for any point x near x₀ in the direction of v.

Now if the Hessian is positive-definite then every direction v from x₀ gives us d²f(x₀)(v, v) > 0, and so every point x near x₀ satisfies f(x) > f(x₀). If the Hessian is negative-definite, then every point x near x₀ satisfies f(x) < f(x₀). And if the Hessian has both positive and negative eigenvalues then within any neighborhood of x₀ we can find some directions in which f(x) > f(x₀) and some in which f(x) < f(x₀).
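For reference, here is the same classification written out compactly in LaTeX; this is just a restatement of the argument above for a C² function f near a critical point x₀, nothing beyond it.

% Hessian of f at the critical point x_0, and the second differential it represents.
\[
  H_{ij}(x_0) = \frac{\partial^2 f}{\partial x_i\,\partial x_j}(x_0),
  \qquad
  d^2 f(x_0)(u, v) = \sum_{i,j=1}^{n} H_{ij}(x_0)\, u_i v_j .
\]
% With eigenvalues \lambda_1 \le \cdots \le \lambda_n of H(x_0):
\[
  \begin{cases}
    \lambda_1 > 0 & \text{(positive-definite): local minimum,} \\
    \lambda_n < 0 & \text{(negative-definite): local maximum,} \\
    \lambda_1 < 0 < \lambda_n & \text{saddle (not an extremum),} \\
    \det H(x_0) = 0 & \text{degenerate: the test is inconclusive.}
  \end{cases}
\]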
<urn:uuid:1470b6e0-0c2a-416e-a3f3-01bb7910efed>
2.6875
931
Academic Writing
Science & Tech.
42.500034
Science Fair Project Encyclopedia

Cryonics is the practice of preserving organisms, or at least their brains, for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped. An organism held in such a state (either frozen or vitrified) is said to be cryopreserved. Barring social disruptions, cryonicists believe that a perfectly vitrified person can be expected to remain physically viable for at least 30,000 years, after which time cosmic ray damage is thought to be irreparable. Many scientists in the field, most notably Ralph Merkle and Brian Wowk, hold that molecular nanotechnology has the potential to extend even this limit many times over.

To its detractors, the justification for cryonics is unclear, given the primitive state of preservation technology. Advocates counter that even a slim chance of revival is better than no chance. In the future, they speculate, not only will conventional health services be improved, but they will also quite likely have expanded even to the conquering of old age itself (see links at the bottom). Therefore, if one could preserve one's body (or at least the contents of one's mind) for, say, another hundred years, one might well be resuscitated and live indefinitely long. But critics of the field contend that, while an interesting technical idea, cryonics is currently little more than a pipedream, that current "patients" will never be successfully revived, and that decades of research, at least, must occur before cryonics is to be a legitimate field with any hope of success.

Probably the most famous cryopreserved patient is Ted Williams. The popular urban legend that Walt Disney was cryopreserved is false; he was cremated, and interred at Forest Lawn Memorial Park Cemetery. Robert Heinlein, who wrote enthusiastically of the concept, was cremated and his ashes distributed over the Pacific Ocean. Timothy Leary was a long-time cryonics advocate, and signed up with a major cryonics provider. He changed his mind, however, shortly before his death, and so was not cryopreserved.

Obstacles to success

Damage from ice formation

Cryonics has traditionally been dismissed by mainstream cryobiology, of which it is arguably a part. The reason generally given for this dismissal is that the freezing process creates ice crystals, which damage cells and cellular structures—a condition sometimes called "whole body freezer burn"—so as to render any future repair impossible. Cryonicists have long argued, however, that the extent of this damage was greatly exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the body with cryoprotectant chemicals (traditionally glycerol) that inhibit ice crystal formation.

According to cryonicists, however, the freezer burn objection became moot around the turn of the millennium, when cryobiologists Greg Fahy and Brian Wowk, of Twenty-First Century Medicine, developed major improvements in cryopreservation technology, including new cryoprotectants and new cryoprotectant solutions, that greatly improved the feasibility of eliminating ice crystal formation entirely, allowing vitrification (preservation in a glassy rather than frozen state). In a glass, the molecules do not rearrange themselves into grainy ice crystals as the solution cools, but instead become locked together while still randomly arranged as in a fluid, forming a "solid liquid" as the temperature falls below the glass transition temperature.
Alcor Life Extension Foundation, the world's largest cryonics provider, has since been using these cryoprotectants, along with a new, faster cooling method, to vitrify whole human brains. They continue to use the less effective glycerol-based freezing for patients who opt to have their whole bodies preserved, since vitrification of an entire body is beyond current technical capabilities. The only other full-service cryonics provider in the world, the Cryonics Institute, is currently testing its own vitrification solution. Current solutions being used for vitrification are stable enough to avoid crystallization even when a vitrified brain is warmed up. This has recently allowed brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found. However, if the circulation of the brain is compromised, protective chemicals may not be able to reach all parts of the brain, and freezing may occur either during cooling or during warming. Cryonicists argue, however, that injury caused during cooling can be repaired before the vitrified brain is warmed back up, and that damage during rewarming can be prevented by adding more cryoprotectant in the solid state, or by improving rewarming methods. Some critics have speculated that because a cryonics patient has been declared legally dead, their organs are dead, and thus unable to allow cryoprotectants to reach the majority of cells. Cryonicists respond that it has been empirically demonstrated that, so long as the cryopreservation process begins immediately after legal death is declared, the individual organs (and perhaps even the patient as a whole) remain biologically alive, and vitrification (particularly of the brain) is quite feasible. Critics have often quipped that it is easier to revive a corpse than a cryonically frozen body. Many cryonicists might actually agree with this, provided that the "corpse" were fresh, but they would argue that such a "corpse" may actually be biologically alive, under optimal conditions. A declaration of legal death does not mean that life has suddenly ended—death is a gradual process, not a sudden event. Rather, legal death is a declaration by medical personnel that there is nothing more they can do to save the patient. But if the body is clearly biologically dead, having been sitting at room temperature for a period of time, or having been traditionally embalmed, then cryonicists would hold that such a body is far less revivable than a cryonically preserved patient, since any process of resuscitation will depend on the quality of the structural and molecular preservation of the brain, which is largely destroyed by ischemic damage (from lack of blood flow) within minutes or hours of cardiac arrest, if the body is left to sit at room temperature. Traditional embalming also largely destroys this crucial neurological structure. Cryonicists would also point out that the definitions of "death" and "corpse" currently in use may change with future medical advances, just as they have changed in the past, and so they generally reject the idea that they are trying to "raise the dead", viewing their procedures instead as highly experimental medical procedures, whose efficacy is yet to be either demonstrated or refuted. Some also suggest that if technology is developed that allows mind transfer, revival of the frozen brain might not even be required; the mind of the patient could instead be "uploaded" into an entirely new substrate. 
The biggest drawback to current vitrification practice is a cost issue. Because the only really cost-effective means of storing a cryopreserved person is in liquid nitrogen, possibly large-scale fracturing of the brain occurs, a result of cooling to −196°C, the temperature of liquid nitrogen. Fracture-free vitrification would require inexpensive storage at a temperature significantly below the glass transition temperature of about −125°C, but high enough to avoid fracturing (−150°C is about right). Alcor is currently developing such a storage system. Alcor believes, however, that even before such a storage system is developed, the current vitrification method is far superior to traditional glycerol-based freezing, since the fractures are very clean breaks that occur even with traditional glycerol cryoprotection, and the loss of neurological structure is still less than that caused by ice formation, by orders of magnitude.

While cryopreservation arrangements can be expensive (currently ranging from $28,000 to $150,000), most cryonicists pay for it with life insurance. The elderly, and others who may be uninsurable for health reasons, will often pay for the procedure through their estate. Others simply invest their money over a period of years, accepting the risk that they might die in the meantime. All in all, cryonics is actually quite affordable for the vast majority of those in the industrialized world who really want it, especially if they make arrangements while still young.

Even assuming perfect cryopreservation techniques, many cryonicists would still regard eventual revival as a long shot. In addition to the many technical hurdles that remain, the likelihood of obtaining a good cryopreservation is not very high because of logistical problems. The likelihood of the continuity of cryonics organizations as businesses, and the threat of legislative interference in the practice, don't help the odds either. Most cryonicists, therefore, regard their cryopreservation arrangements as a kind of medical insurance—not certain to keep them alive, but better than no chance at all and still a rational gamble to take.

Brain vs. whole-body cryopreservation

During the 1980s, the problems associated with crystallization were becoming better appreciated, and the emphasis shifted from whole body to brain-only or "neuropreservation", on the assumption that the rest of the body could be regrown, perhaps by cloning of the person's DNA or by using embryonic stem cell technology. The main goal now seems to be to preserve the information contained in the structure of the brain, on which memory and personal identity depend. Available scientific and medical evidence suggests that the mechanical structure of the brain is wholly responsible for personal identity and memories (for instance, spinal cord injury victims, organ transplant patients, and amputees appear to retain their personal identity and memories). Damage caused by freezing and fracturing is thought to be potentially repairable in the future, using nanotechnology, which will enable the manipulation of matter at the molecular level. To critics, this appears to be a kind of futuristic deus ex machina, but while the engineering details remain speculative, the rapidity of scientific advances over the past century, and more recently in the field of nanotechnology itself, suggest to some that there may be no insurmountable problems. And the cryopreserved patient can wait a long time.
With the advent of vitrification, the importance of nanotechnology to the cryonics movement may begin to decrease. Some critics, and even some cryonicists, question this emphasis on the brain, arguing that during neuropreservation some information about the body's phenotype will be lost and the new body may feel "unwanted", and that in case of brain damage the body may serve as a crude backup, helping restore indirectly some of the memories. Partly for this reason, the Cryonics Institute preserves only whole bodies. Some proponents of neuropreservation agree with these concerns, but still feel that lower costs and better brain preservation justify preserving only the brain.

Historically, cryonics began in 1962 with the publication of The Prospect of Immortality by Robert Ettinger. In the 1970s, the damage caused by crystallization was not well understood. Two early organizations went bankrupt, allowing their patients to thaw out, bringing the matter to the public eye, at which point the problem with cellular damage became more well known and the practice gained something of the reputation of a scam. During the 1980s, the extent of the damage from the freezing process became much clearer and better known, and the emphasis of the movement began to shift from whole-body to neuropreservation.

Alcor currently preserves about 60 human bodies and heads in Scottsdale, Arizona. Before the company moved to Arizona from Riverside, California in 1994, it was the center of several controversies, including a county coroner's ruling that a client was murdered with barbiturates before her head was removed by the company's staff. Alcor contended that the drug was administered after her death. No charges were ever filed.

- engineered negligible senescence
- life extension
- Interstellar travel
- Immortality Institute

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:808b609d-c9b2-4043-aeea-548f59273c25>
3.4375
2,487
Knowledge Article
Science & Tech.
20.021667
Right Triangles, Bearings, and other Applications
<urn:uuid:1a8136cc-012a-4ac8-b554-48f3c8db5bfe>
2.75
79
Truncated
Science & Tech.
59.965833
Last August, a 3,000-pound, eight-by-22-foot robotic platform was launched into the Hudson River just north of Denning’s Point Peninsula in Beacon, N.Y. On board the floating platform are state-of-the-art sensors that will provide continuous air and water monitoring including barometric pressure, wind speed and direction, water depth, temperature, salinity and flow rate. The sensors will also measure the levels of hydrogen contaminants, dissolved oxygen, and chlorophyll-a (a green pigment found in algae). The data will be transferred in real time to researchers who can track fluctuations in these measurements. The information provides a detailed record of the overall health of the river. This will alert scientists and environmentalists to escalating pollution levels or to episodic events that can be problematic, such as algae blooms, which can lead to hypoxia. Hypoxia is characterized by a low concentration of oxygen that is exacerbated by increases in nutrients or a particular set of physical conditions. It is associated with fish kills among other problems.

This technology, which promises to revolutionize the way bodies of water are monitored, was developed by a team of scientists and researchers headed up by James Bonner ’85, professor of civil & environmental engineering and director of Clarkson’s Center for the Environment (CCE). “Our goal is to eventually cover the entire 315-mile river from Mt. Marcy to New York City with a network of sensors,” explains Bonner. “The technology will allow us to create a cyber-infrastructure that stores and processes a great deal of data about the Hudson River. Scientists and engineers around the world will be able to access this information via the Internet.”

Bonner began the development of this real-time monitoring technology at the Shoreline Environmental Research Facility at Texas A&M University where he served as founding director. While in Corpus Christi, Bonner and fellow researchers developed sensing systems that they used to monitor the Gulf of Mexico. Since joining the Clarkson faculty in 2007, Bonner (who holds a Ph.D. from Clarkson) has continued his NSF-funded research program with an eye toward transferring the technology to map and monitor the ecological health of the rivers, Great Lakes and the St. Lawrence Seaway.

The Hudson River monitoring project is a joint partnership between Clarkson University; the Beacon Institute for Rivers and Estuaries, a not-for-profit environmental research organization; and IBM. Last year, Bonner was named the Beacon Institute’s REON Director of Research and will lead the development and implementation of the River and Estuary Observatory Network (REON). The Hudson River project is the first step in a larger plan to develop a technology-based monitoring and forecasting network for rivers and estuaries.

“Tremendous human impact occurs in the regions where rivers and estuaries meet the ‘coastal margin’ — coastal wetlands, bays and shorelines,” explains Bonner. “In the United States, this region is home to 70 percent of the population and 20 of its 25 largest cities. It is also where most industry and ports are found. Damage to these ecosystems comes from this increased density of anthropogenic activity associated with pollution from industry, farms and the surrounding communities.” For example, hypoxia generally occurs in aquatic systems where the water is poorly mixed, excluding oxygen and trapping pollutants in the “hypolimnion” — the dense bottom layer in a stratified body of water.
Chemical reactions within the hypolimnion and with bottom sediments deplete the benthic oxygen, so aerobic organisms such as fish, oysters, clams and other bottom-dwelling organisms perish. “This problem is a growing national concern; for example, increasing areas of the Gulf of Mexico (thousands of square miles), portions of the Great Lakes, embayments such as Corpus Christi Bay and other near-shore areas are experiencing hypoxia,” says Bonner.

IBM is working with Bonner and the Beacon Institute to develop the cyber framework that will store the data and provide assessment tools, which researchers around the world will be able to use. “Scientists will be able to analyze data and develop models on any environmental parameter of interest.”

For Bonner, one of the most exciting aspects of the project is the way it will transform environmental science and engineering. “The old-fashioned method of retrieving data by collecting samples at discrete locations at only a few times gives a static, incomplete and aliased view or understanding. With this technology, we’ll be able to get real-time data that reflects the constantly changing, dynamic environment of the river. The information will be far more reliable.”
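The contrast Bonner draws between discrete grab samples and continuous sensing is easy to illustrate. The sketch below is purely hypothetical (the article does not describe REON's data formats or software): it simply scans a stream of dissolved-oxygen readings and flags a sustained drop below an illustrative threshold, the kind of short-lived event that a monthly sample could easily miss.

#include <cstdio>
#include <vector>

int main() {
    // Hypothetical dissolved-oxygen readings in mg/L, one per sampling interval.
    std::vector<double> dissolved_oxygen = {8.1, 7.9, 7.6, 6.8, 5.2, 4.4,
                                            3.9, 4.1, 4.8, 6.0, 7.2, 7.8};
    const double alert_threshold = 5.0;   // illustrative alert level, mg/L
    const int min_consecutive = 3;        // require a sustained excursion

    int run = 0;
    for (std::size_t i = 0; i < dissolved_oxygen.size(); ++i) {
        run = (dissolved_oxygen[i] < alert_threshold) ? run + 1 : 0;
        if (run == min_consecutive) {
            // In a real system this would trigger a notification to researchers.
            std::printf("Possible low-oxygen event starting at reading %zu\n",
                        i + 1 - min_consecutive);
        }
    }
    return 0;
}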
<urn:uuid:02237b71-3d97-43b4-b615-8779adad0180>
3.03125
982
Knowledge Article
Science & Tech.
32.182778
I saw some tutorial pages on the internet about how to read files using C++, but I'm kind of confused because there isn't anything in the code indicating where the file is from. So I think I need some explanation.

It will open the file in the current (working) folder. If you want to open a file that is in another folder you may write the full path: ifstream ifs("C:\\some_folder\\some_file"); There is a version of the constructor (and the open() function) which takes std::string, if you use them.

What you pass is actually the file path, so you can give a full path or a relative path. If you just specify the filename, that is a relative path. Relative paths are relative to the working directory of the program. If you start your program by double-clicking on the executable file, the working directory will be the directory where the executable file is located. If you start your program from the command line, the working directory will be the directory that you set using the cd command. If you start your program from an IDE, the working directory is often set to the project directory (not the source directory) but this can differ between IDEs.
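To make the path behavior concrete, here is a minimal sketch; the file name "data.txt" is just a placeholder for whatever file you want to read. It opens the file by a relative path, which is resolved against the working directory described above, and echoes it line by line.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Relative path: resolved against the program's working directory.
    std::ifstream in("data.txt");
    // An absolute (Windows-style) path works the same way:
    //   std::ifstream in("C:\\some_folder\\data.txt");

    if (!in) {
        std::cerr << "Could not open data.txt\n";
        return 1;
    }

    std::string line;
    while (std::getline(in, line)) {
        std::cout << line << '\n';   // echo each line of the file
    }
    return 0;
}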
<urn:uuid:539cacc7-a7b0-4ae5-a649-fc47d6f41c8c>
3.375
242
Q&A Forum
Software Dev.
49.120155
THE FRAGILE FAUNA OF ILLINOIS CAVES
by Steven J. Taylor and Donald W. Webb

Illinois has several hundred caves, many of them in nearly pristine condition. This unique and fragile environment is home to a diverse array of creatures, including organisms that are completely limited to the cave environment, species that may be found in similar habitats above ground, and the many animals that accidentally wander, fall, or are washed into caves. Many cave animals are highly adapted for the unique and harsh living conditions they encounter underground.

Caves can be found in four distinct karst regions: in the Mississippian limestone of the Shawnee Hills, in the Salem Plateau and in the Lincoln Hills, and in the Ordovician limestone of the Driftless Area. These caves have been forming through the interaction of geology, vegetation, and rainfall for the past 300 million years. Shallow seas covered much of Illinois during the Mississippian Period. When the seas receded, forests grew over the exposed sedimentary rocks; and rainwater-which had become slightly acidic through interaction with carbon dioxide from both the atmosphere and the bacterial breakdown of organic material-then seeped into cracks and bedding planes. As the limestone dissolved, conduits formed. These conduits eventually developed the geologic features characteristic of karst terrain-caves, sinking streams, springs, and sinkholes.

INTO THE TWILIGHT ZONE

Caves can be divided into three ecological zones. The entrance zone is similar in light, temperature, and relative humidity to the surrounding surface habitat, and the creatures that live there resemble the animals that live in the moist shaded areas near the cave. Here we find the eastern phoebe (Sayornis phoebe), a small gray bird whose nest is constructed on bare bedrock walls out of mosses and other debris. In the leaf litter, we find many animals of the forest floor: redbacked salamanders, harvestmen (or daddy-longlegs), snails, earthworms, millipedes, centipedes, beetles, ants, and springtails. Cave entrances are often funnel shaped or have sheer vertical walls, and organisms and organic debris tend to concentrate at the bottom. The entrance zone also provides a highly protected environment for overwintering organisms.

Deeper inside the cave, in the twilight zone, there is much less light, and photosynthesizing plants are no longer able to grow. The temperature and relative humidity fluctuate here, but the environment is usually damp and cool. Many animals from the entrance zone wander into the twilight zone, but most of these creatures must eventually return to the land above. Several species of cave crickets are common in this part of the cave, sometimes appearing in large numbers on walls or ceilings.

In larger caves, there is a dark zone characterized by constant temperature (about 54-58°F in Illinois) and the absence of light. Here, the relative humidity approaches the saturation point. Many animals in the dark zone are capable of completing their entire life cycles without leaving the cave although food is scarce in the absence of photosynthesis. In this zone, there are fewer species of organisms. Creatures who live here eat primarily organic debris-wood, leaves, and accidental animals. Dark-zone dwellers get some of their nutrients from the feces of bats and cave crickets, animals that leave the cave at night to feed on the surface. Raccoons, common cave explorers in Illinois, also leave their waste behind.
A wide array of bacteria and fungi feast upon these nutrient-rich items. Other animals then feed upon the fungi and bacteria. Springtails, minute insects typically overlooked by the casual observer, are important fungus feeders, and a variety of beetles, flies, and millipedes get their nourishment this way as well. These organisms may then become the prey of cave-inhabiting spiders, harvestmen, predacious fly larvae known as webworms, and an occasional cave salamander. In the winter, pickerel frogs, mosquitoes, and some moths move into caves to wait for warmer weather.

ADAPTING AND SURVIVING

Common cave inhabitants include (left to right) the moth, Scoliopteryx libatrix, which does not have a common name; the cave salamander (Eurycea lucifuga); and the monorail worm (Macrocera nobilis).

Animals that live in caves vary greatly in their degree of adaptation to the cave environment. Accidental animals live there only temporarily; they will either leave or die. Animals that frequent caves but must return to the surface at some point in their life cycles are known as trogloxenes. Bats and cave crickets are two examples. Troglophiles are animals that can complete their entire life cycles within a cave, but they may also be found in cool, moist habitats outside of caves. Two troglophilic vertebrates found in or near Illinois caves are the cave salamander (Eurycea lucifuga) and the spring cavefish (Forbesichthys agassizi).

Diane Tecic, district heritage biologist for the Illinois Department of Natural Resources, looks for cave-adapted organisms in organic debris with Illinois caver Tim Sickbert.

Most cave animals are trogloxenes and troglophiles; only 20 to 30% of the animals in North American caves are troglobites. Troglobites are animals that live exclusively in caves; they are especially interesting because of their unique morphological, physiological, behavioral, and life-history adaptations. Many troglobites, for example, lack body pigment. Because they live where there is no light, there is no evolutionary advantage for them in maintaining the colors that might be characteristic of their relatives and ancestors that live above ground. In cave-adapted species, the evolutionary pressure to maintain functional eyes is also greatly reduced, and these species have been under strong selective pressure to evolve other means of sensing their surroundings. Their legs and antennae usually have more sensory nerve endings than related above-ground species. These appendages serve important tactile functions and are often greatly elongated in cave-dwelling creatures.

Adaptations that allow species to exist in an environment with very low nutrient input are not as obvious. Many cave-adapted species produce fewer offspring than their surface-inhabiting relatives, but individual eggs may contain more nutrients. In some species, timing of reproduction may be synchronized with spring flooding and its new supply of nutrients. Other species, lacking the above-ground seasonal cues of temperature and photoperiod, may reproduce year-round. Cave adaptations may include a reduced metabolic rate, allowing animals to live on limited food resources for long periods of time. Illinois has many troglobitic invertebrates but no troglobitic vertebrates. As cave-adapted species become specialized, they also tend to become geographically isolated. The geological and hydrological history of some areas may divide species into isolated populations, and these populations, over time, may evolve into distinct species.
During glacial periods, caves can serve as refugia for some aquatic, soil-, and litter-inhabiting animals. These species may become "stranded" in caves when glaciers retreat and surface conditions are not suitable for recolonization.

VULNERABILITY OF CAVE ENVIRONMENTS

Human disturbance affects cave ecosystems just as it affects other ecosystems. As a result of changes we make on the surface, we unknowingly alter cave environments, destroying unique and valuable organisms before we even know of their existence. The public knows very little about caves and the organisms that inhabit them. Small wonder then that the importance of protecting groundwater, caves, and cave life is not fully appreciated. It is not uncommon to find sinkholes filled with trash, serving as natural garbage cans for rural waste disposal. Visitors sometimes permanently damage caves with graffiti, break stalactites and stalagmites, and carelessly

The very adaptations that allow troglobites to survive in the harsh cave environment make these animals more vulnerable to changes made by humans. The reduced metabolic rates that allow these animals to survive in a nutrient-poor environment also make them less competitive when organic enrichment is introduced in the form of fertilizers, livestock and agricultural waste, and human sewage. In Illinois, this effect is commonly seen in stream-inhabiting amphipods (small shrimplike animals) and isopods (small crustaceans related to terrestrial pillbugs or sowbugs). These groups contain troglobites that are highly adapted to cave environments; they also contain more opportunistic troglophilic species, which have a competitive advantage in the presence of high levels of organic waste.

Amphipods and isopods feed on small particles of organic debris and on decomposers such as bacteria and fungi. Because they ingest large quantities of this material, they are exposed to contamination from a variety of pollutants. In Illinois, samples of these animals collected in 1992 were found to contain dieldrin and breakdown products of DDT. They were also found to contain moderate levels of mercury, although mercury was not detected in any water samples from the same sites.

Sedimentation also threatens aquatic species. Topsoil run-off from rural development and agricultural fields enters caves readily when vegetative buffers around sinkholes are too small or nonexistent. This sediment fills the spaces in gravel streambeds, eliminating the microhabitats that allow many cave-dwelling species to exist. As a result, cave streams with high sediment loads tend to contain few species.

Sometimes, humans can't easily see the value of these subterranean systems, especially when their own interests conflict with the health of cave communities. Such a conflict is occurring now in our most biologically and hydrologically significant karst area, the Salem Plateau of Monroe and St. Clair counties. As part of the greater St. Louis metropolitan area, the Salem Plateau is experiencing rapid population growth. Scientists can estimate the level and types of threats that this growth brings to the biological integrity of the region, but it's much more difficult to develop protected areas, educational programs, and new regulatory mechanisms within the existing political, social, and geographic framework. Illinois caves are a high priority for conservation because cave organisms face serious threats from agriculture and increasing urbanization.
Also, the unique and fragile cave environment provides a home for organisms found nowhere else in the world. It is not usually possible to include the entire drainage basin of significant caves within nature preserves or other conservation easements. To manage a cave effectively, scientists must understand the hydrology of a cave's subterranean conduits. This knowledge is gained by doing extensive dye-tracing studies and cave mapping. Both of these activities are time- and labor-intensive. Already, the drainage basins of some of our largest cave systems are being compromised by agriculture and rural housing projects. Educating the public, particularly politicians, farmers, and children, about land use and the impact of human activities is key to the long-term health of cave communities. We must also enact appropriate regulations for rural residential development, especially wastewater treatment, and for agricultural activities in a karst landscape.
For more information on cave conservation and management, contact the National Speleological Society, 2813 Cave Avenue, Huntsville, AL 35810-4431, or Steven Taylor or Donald Webb at the Center for Biodiversity, Illinois Natural History Survey, 607 East Peabody Drive, Champaign, IL 61820. Steven J. Taylor is an aquatic entomologist in the Center for Biodiversity at the Illinois Natural History Survey in Champaign. Donald W. Webb is an insect systematist, also at the Center for Biodiversity.
A GOOD NEIGHBOR POLICY
In a few caves in Monroe and St. Clair counties, you can find a small shrimplike creature that exists nowhere else in the world. The Illinois cave amphipod has made our corner of the world its home, but it may not be here long unless humans take steps to protect its environment. This unassuming cave creature has been proposed for listing as a federally endangered species.
Cave amphipods inhabit the bottoms of pools and riffles in large cave streams, where they creep among cobbles and under stones, feeding on decaying leaf litter and organic debris. Food is scarce in this environment, and the amphipods have developed chemosensory structures that detect the odor of food sources, such as dead or injured animals. Injured or dying amphipods are vulnerable to such predators as flatworms, cave salamanders, and even other amphipods. But the greatest threat these vulnerable creatures face is the deterioration of their environment.
The Illinois cave amphipod lives near the greater St. Louis metropolitan area, a region that has been experiencing dramatic population growth for the past 10 years. Continued urbanization without appropriate sewage treatment and disposal is especially threatening to the amphipod's existence. Other serious threats are siltation and the presence of agricultural chemicals in subterranean streams.
Fortunately for the amphipod, the quality of life for people on the land above depends on water quality in the streams below. Because agricultural chemicals and bacteria associated with sewage have been found in well water, springs, and cave streams in this area, a concerted effort is being made to improve the water quality in this karst region. Efforts to provide communities with safe drinking water could also provide a healthy cave environment and help ensure the further existence of our underground neighbor, the Illinois cave amphipod.
<urn:uuid:df2ab0ff-bb86-415b-be4c-863c8014597f>
3.78125
3,029
Knowledge Article
Science & Tech.
21.357917
There are many types of biomass (organic matter such as plants, residue from agriculture and forestry, and the organic component of municipal and industrial wastes) that can now be used to produce fuels, chemicals, and power. Wood has been used to provide heat for thousands of years, but biomass can also be converted into fuels, chemicals, and electricity, and this flexibility has resulted in increased use of biomass technologies. According to the Energy Information Administration, 53% of all renewable energy consumed in the United States was biomass-based. Biomass technologies break down organic matter to release stored energy from the sun.
Biofuels are liquid or gaseous fuels produced from biomass. Most biofuels are used for transportation, but some are used as fuels to produce electricity. The expanded use of biofuels offers an array of benefits for our energy security, economic growth, and environment. Current biofuels research focuses on new forms of biofuels such as ethanol and biodiesel, and on biofuels conversion processes.
Ethanol, an alcohol, is made primarily from the starch in corn grain. It is most commonly used as an additive to petroleum-based fuels to reduce toxic air emissions and increase octane. Today, roughly half of the gasoline sold in the United States includes 5%-10% ethanol.
Biodiesel use is relatively small, but its benefits to air quality are significant. Biodiesel is produced through a process that combines organically derived oils with alcohol (ethanol or methanol) in the presence of a catalyst to form ethyl or methyl ester. The biomass-derived ethyl or methyl esters can be blended with conventional diesel fuel or used as a neat fuel (100% biodiesel).
Biomass resources include any plant-derived organic matter that is available on a renewable basis. These materials are commonly referred to as biomass feedstocks. Biomass feedstocks include dedicated energy crops, agricultural crops, forestry residues, aquatic crops, biomass processing residues, municipal waste, and animal waste.
Dedicated Energy Crops
Herbaceous energy crops are perennials that are harvested annually after taking 2 to 3 years to reach full productivity. These include such grasses as switchgrass, miscanthus (also known as elephant grass or e-grass), bamboo, sweet sorghum, tall fescue, kochia, wheatgrass, and others. Short-rotation woody crops are fast-growing hardwood trees that are harvested within 5 to 8 years of planting. These include hybrid poplar, hybrid willow, silver maple, eastern cottonwood, green ash, black walnut, sweetgum, and sycamore.
Agricultural crops include currently available commodity products such as cornstarch and corn oil, soybean oil and meal, wheat starch, and vegetable oils. They generally yield sugars, oils, and extractives, although they can also be used to produce plastics as well as other chemicals and products.
Agriculture Crop Residues
Agriculture crop residues include biomass materials, primarily stalks and leaves, that are not harvested or removed from fields in commercial use. Examples include corn stover (stalks, leaves, husks, and cobs), wheat straw, and rice straw. With approximately 80 million acres of corn planted annually, corn stover is expected to become a major feedstock for biopower.
Forestry residues include biomass not harvested or removed from logging sites in commercial hardwood and softwood stands, as well as material resulting from forest management operations such as pre-commercial thinning and removal of dead and dying trees.
There are a variety of aquatic biomass resources, such as algae, giant kelp, other seaweed, and marine microflora.
Biomass Processing Residues
Biomass processing yields byproducts and waste streams that are collectively called residues and have significant energy potential. Residues are simple to use because they have already been collected. For example, the processing of wood for products or pulp produces unused sawdust, bark, branches, and leaves/needles.
Residential, commercial, and institutional post-consumer waste contains a significant proportion of plant-derived organic material that constitutes a renewable energy resource. Waste paper, cardboard, wood waste, and yard waste are examples of biomass resources in municipal waste.
Farms and animal-processing operations create animal wastes that constitute a complex source of organic materials with environmental consequences. These wastes can be used to make many products, including energy.
Some biomass feedstocks, such as municipal waste, are found throughout the United States. Others, such as energy crops, are concentrated in the eastern half of the country. As technologies develop to more efficiently process complex feedstocks, the biomass resource base will expand.
Collecting Gas from Landfills
Landfills can be a source of energy. Organic waste produces a gas called methane as it decomposes, or rots. Methane is the same energy-rich gas that is in natural gas, the fuel sold by natural gas utility companies. It is colorless and odorless. Natural gas utilities add an odorant (a bad smell) so people can detect seeping gas, but methane escaping from landfills can be dangerous to people or the environment. New rules require landfills to collect methane gas as a pollution and safety measure.
<urn:uuid:43454431-e724-4640-b136-09b9e018b7c6>
3.828125
1,230
Knowledge Article
Science & Tech.
24.609679
Introduction
fox, carnivorous mammal of the dog family, found throughout most of the Northern Hemisphere. It has a pointed face, short legs, long, thick fur, and a tail about one half to two thirds as long as the head and body, depending on the species. Solitary most of the year, foxes do not live in dens except in the breeding season; they sleep concealed in grasses or thickets, their tails curled around them for warmth. During the breeding season a fox pair establishes a den, often in a ground burrow made by another animal, in which the young are raised; the male hunts for the family. The young are on their own after about five months; the adults probably find new mates each season.
Foxes feed on insects, earthworms, small birds and mammals, eggs, carrion, and vegetable matter, especially fruits. Unlike other members of the dog family, which run down their prey, foxes usually hunt by stalking and pouncing. They are known for their raids on poultry but are nonetheless very beneficial to farmers as destroyers of rodents. Foxes are occasionally preyed upon by larger carnivores, such as wolves and bobcats, as well as by humans and their dogs; birds of prey may capture the young.
Despite extensive killing of foxes, most species continue to flourish. In Europe this is due in part to the regulatory laws passed for the benefit of hunters. Mounted foxhunting, with dogs, became popular in the 14th cent. and was later introduced into the Americas; special hunting dogs, called foxhounds, have been bred for this sport. In 2005 Great Britain banned foxhunting in which the hounds kill the fox.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:b06f1991-fec6-49bf-b55b-75db6d59f18d>
3.5
382
Knowledge Article
Science & Tech.
50.583511
Here is a fun one: There was a man who greatly enjoyed golf, and he could make a perfectly consistent swing. So out of curiosity he decided to challenge a mathematician. First he brought the mathematician to a golf course, with his golf club, a tee, and a ball. He set the ball on the tee, all ready to swing, and then asked the mathematician, “Write me a formula where z is the total distance the ball will travel, assuming there is no wind, the ground is level, the ball starts one inch off the ground, and I hit it with x force at y angle, all before I hit the ball.” He then swung his club, hit the ball, and much to his surprise the mathematician succeeded. Not only did the mathematician have a flawless formula, but he also had the shortest formula he could have possibly written. What was his formula?
Last edited by TheTick (2013-02-28 15:50:15)
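For readers who want to see the physics the golfer is invoking, here is a minimal sketch of the standard no-drag projectile-range formula with a one-inch launch height. It is not the riddle's punchline, and treating the golfer's "x force" as a launch speed v0 in m/s is an assumption of the sketch, since a force alone does not determine a speed.

```python
import math

def carry_distance(v0_mps: float, angle_deg: float, h0_m: float = 0.0254, g: float = 9.81) -> float:
    """No-drag horizontal distance for a ball launched h0_m above level ground."""
    theta = math.radians(angle_deg)
    vx = v0_mps * math.cos(theta)                           # horizontal speed (the riddle's x, reinterpreted as a speed)
    vy = v0_mps * math.sin(theta)                           # vertical speed at launch angle y
    t_flight = (vy + math.sqrt(vy**2 + 2 * g * h0_m)) / g   # time until the ball returns to the ground
    return vx * t_flight                                    # z = horizontal speed times flight time

# Example: a 70 m/s launch at 12 degrees lands about 203 m away (ignoring drag).
print(round(carry_distance(70.0, 12.0), 1))
```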
<urn:uuid:070e6cdd-a083-43f2-9577-27e03e835620>
2.765625
201
Comment Section
Science & Tech.
65.922443
New on IBM developerWorks, there's an article looking at integrating the Scilab software with PHP to perform more complicated mathematical processing.
Scripting languages like Ruby, Python, and PHP power modern-day server-side Web development. These languages are great because you can easily and rapidly build Web sites. However, their downfall is their inefficiency with complicated algorithms, such as those found in mathematics and the sciences. [...] In this article, we'll investigate one particular way to merge the power of a particular bit of scientific software - Scilab - with the ease of development and Web-friendliness of a server-side language: PHP.
The script runs the Scilab tool from the command line, calling it via something like exec and parsing the output to send the results back to the viewer. They show how to create two pages: one with form elements that let the user interact with the script, and one that helps you generate a graph based on the results.
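The article's examples are in PHP, but the shell-out-and-parse pattern it describes is easy to sketch in any server-side language. Below is a minimal Python sketch of the same idea, assuming a local Scilab installation; the command-line flags shown (-nb to suppress the banner, -nwni for a console-only session, -e to execute an instruction) can vary between Scilab versions, so treat them as assumptions and check your installation's help output.

```python
import subprocess

def run_scilab(expression: str, timeout_s: int = 30) -> str:
    """Evaluate one Scilab expression by shelling out and capturing what it prints."""
    cmd = ["scilab", "-nb", "-nwni", "-e", f"disp({expression}); exit"]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
    if result.returncode != 0:
        raise RuntimeError(f"Scilab failed: {result.stderr.strip()}")
    return result.stdout.strip()   # the caller parses/formats this for the web page

if __name__ == "__main__":
    # e.g. compute a determinant server-side and hand the text back to the viewer
    print(run_scilab("det([1 2; 3 4])"))
```

In a PHP page, exec() or proc_open() plays the role of subprocess.run here, with the same caveat: validate or whitelist any user-supplied input before it reaches the command line.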
<urn:uuid:134f1f86-6c7d-48c9-abb1-a1be577339f4>
2.6875
199
Truncated
Software Dev.
42.139
Gamma ray bursts are believed to be the most energetic phenomena in the universe. In one second they can emit more than 100 times the energy that the sun does throughout its entire 10 billion year life. This energy output is short lived, however, and within days the burst has faded forever beyond the reach of our telescopes. Despite 3000 bursts having been detected through their gamma ray emission, only 30 have been seen with ground-based telescopes, and only one of these has been observed within an hour.
In an ambitious project to detect the gamma ray bursts in the crucial first minute of their occurrence, the School of Physics has entered a collaboration with the University of Michigan, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory to place a robotic telescope, ROTSE-III, at Siding Spring Observatory. The telescope is triggered into action by a signal relayed through the Internet from an earth-orbiting satellite. The specially designed mounting for ROTSE-III allows it to point to any position in the sky and take an image within 5-10 seconds. The images are then automatically analysed for any new or rapidly varying sources, and this information is made available to other observatories throughout the world within minutes. The precise positions provided by ROTSE-III are essential to allow the world's largest telescopes to observe the gamma ray bursts.
Initial work for the new telescope occurred in March 2001. The enclosure and weather station were installed in April 2001, with the telescope itself to be delivered in mid-2002.
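To get a feel for the energy scale quoted in the first paragraph, here is a quick back-of-the-envelope calculation using standard textbook values for the Sun's luminosity and the 10-billion-year lifetime mentioned in the text; the constants are common reference figures, not values taken from the article.

```python
# Rough check of the "100 times the Sun's lifetime output" comparison above.
SOLAR_LUMINOSITY_W = 3.8e26      # watts, standard textbook value
SECONDS_PER_YEAR = 3.156e7
LIFETIME_YEARS = 10e9            # the 10-billion-year lifetime quoted in the text

sun_lifetime_j = SOLAR_LUMINOSITY_W * SECONDS_PER_YEAR * LIFETIME_YEARS
burst_energy_j = 100 * sun_lifetime_j

print(f"Sun over 10 Gyr: {sun_lifetime_j:.1e} J ({sun_lifetime_j * 1e7:.1e} erg)")
print(f"100x that      : {burst_energy_j:.1e} J ({burst_energy_j * 1e7:.1e} erg)")
# -> roughly 1e44 J and 1e46 J, i.e. about 1e51 erg and 1e53 erg respectively.
```

The arithmetic only illustrates the scale of the claim; it says nothing about how such energies are produced.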
<urn:uuid:41af5c95-84cb-4b31-990a-6fbb28055062>
3.875
327
Knowledge Article
Science & Tech.
29.343642