| text | id | score | tokens | format | topic | fr_ease |
|---|---|---|---|---|---|---|
Session 40 - The Interstellar Medium.
Display session, Tuesday, June 09
Gamma Ray Burst (GRB) explosions can make kpc-size shells and holes in the interstellar medium (ISM) of spiral galaxies if much of the energy heats the local gas to above 10^7 K. Disk blowout is probably the major cause of energy loss in this case, but the momentum acquired during the pressurized expansion phase can be large enough that the bubble still snowplows to a kpc diameter. This differs from the standard model for the origin of such shells by multiple supernovae, which may have problems with radiative cooling, evaporative losses, and disk blowout. Evidence for giant shells with energies of ~10^53 ergs is summarized. Some contain no obvious central star clusters and may be GRB remnants, although sufficiently old clusters would be hard to detect. The expected frequency of GRBs in normal galaxies can account for the number of such shells.
|
<urn:uuid:e2300ad5-01dd-4e80-92b3-7ec88785cc9d>
| 2.765625 | 208 |
Content Listing
|
Science & Tech.
| 47.385488 |
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production.
What is Wind Shear
Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of wind direction and speed. I think we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in three dimensions, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens--when wind speed and direction vary with height--wind shear is occurring.
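The definition above can be made concrete with a little arithmetic: treat the wind at each height as a vector and subtract. The heights and wind values below are made up for illustration.

```python
import math

def wind_vector(speed, direction_deg):
    # Meteorological convention: direction is where the wind blows FROM
    rad = math.radians(direction_deg)
    return (-speed * math.sin(rad), -speed * math.cos(rad))

sfc = wind_vector(5.0, 180.0)     # 5 m/s southerly wind at the surface
aloft = wind_vector(25.0, 270.0)  # 25 m/s westerly wind at 6 km

# Wind shear is the vector difference between the two levels
du, dv = aloft[0] - sfc[0], aloft[1] - sfc[1]
shear = math.hypot(du, dv)
print(round(shear, 1))  # 25.5 -- magnitude of the bulk shear, m/s
```

Because the two winds point in different directions, the shear magnitude (about 25.5 m/s here) is larger than the simple speed difference of 20 m/s.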
Wind Shear and Supercell Thunderstorms
This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form.
All thunderstorms are produced by a powerful updraft--a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, the updraft is influenced by the different speed and direction of the wind above, which pushes the column of air in the updraft into a more vertical alignment.
Rain’s Influence on Tornado Production
Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes down the part of the rotating air that was forced in its direction by the stronger wind aloft, and the result is a horizontal column of rotating air.
That’s Not a Tornado!
I know what you're thinking: you've seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air.
This Can Be a Tornado
You’re right, but remember the updraft that is driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air.
(NOAA image showing vertical column of air in a supercell thunderstorm)
The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear.
(NOAA image showing tornado formation in supercell thunderstorm)
|
<urn:uuid:7400301c-e625-46d5-be90-1020cf8d52f8>
| 4.15625 | 573 |
Personal Blog
|
Science & Tech.
| 45.080294 |
Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration
Goswami, Nandu and Roma, Peter G. and De Boever, Patrick and Clément, Gilles and Hargens, Alan R. and Loeppky, Jack A. and Evans, Joyce M. and Stein, T. Peter and Blaber, Andrew P. and Van Loon, Jack J.W.A. and Mano, Tadaaki and Iwase, Satoshi and Reitz, Guenther and Hinghofer-Szalkay, Helmut G. (2012) Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration. Planetary and Space Science, Epub ahead of print (in press). Elsevier. DOI: 10.1016/j.pss.2012.07.030.
Full text not available from this repository.
Due to its proximity to Earth, the Moon is a promising candidate for the location of an extra-terrestrial human colony. In addition to being a high-fidelity platform for research on reduced gravity, radiation risk, and circadian disruption, the Moon qualifies as an isolated, confined, and extreme (ICE) environment suitable as an analogue for studying the psychosocial effects of long-duration human space exploration missions. By contrast, the various Antarctic research outposts such as Concordia and McMurdo serve as valuable platforms for studying biobehavioral adaptations to ICE environments, but are still Earth-bound, and thus lack the reduced gravity and radiation risks of space. The International Space Station (ISS), itself now considered an analogue environment for long-duration missions, better approximates the habitable infrastructure limitations of a lunar colony than most Antarctic settlements, and does so in an altered-gravity setting. However, the ISS is still protected against cosmic radiation by the Earth's magnetic field, which prevents high exposures due to solar particle events and reduces exposures to galactic cosmic radiation. On the Moon, the ICE conditions are intensified: radiation of all energies capable of inducing performance degradation is present, along with reduced gravity and lunar dust. The interaction of reduced gravity, radiation exposure, and ICE conditions may affect biology and behavior--and ultimately mission success--in ways the scientific and operational communities have yet to appreciate. A long-term or permanent human presence on the Moon would therefore provide invaluable high-fidelity opportunities for integrated multidisciplinary research and for preparations for a manned mission to Mars.
|Title:||Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration|
|Journal or Publication Title:||Planetary and Space Science|
|In Open Access:||No|
|In ISI Web of Science:||Yes|
|Volume:||Epub ahead of print (in press)|
|Keywords:||Physiology, Orthostatic tolerance, Muscle deconditioning, Behavioural health, Psychosocial adaptation, Radiation, Lunar dust, Genes, Proteomics|
|HGF - Research field:||Aeronautics, Space and Transport, Aeronautics, Space and Transport|
|HGF - Program:||Space, Raumfahrt|
|HGF - Program Themes:||W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums|
|DLR - Research area:||Space, Raumfahrt|
|DLR - Program:||W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums|
|DLR - Research theme (Project):||W - Vorhaben MSL-Radiation (old), R - Vorhaben MSL-Radiation|
|Institutes and Institutions:||Institute of Aerospace Medicine > Radiation Biology|
|Deposited By:||Kerstin Kopp|
|Deposited On:||27 Aug 2012 08:05|
|Last Modified:||07 Feb 2013 20:40|
|
<urn:uuid:25dbfda6-18d6-4e04-9bf5-fe7dcc73d69b>
| 3.09375 | 887 |
Academic Writing
|
Science & Tech.
| 24.740737 |
Science -- Asher et al. 307 (5712): 1091:
We describe several fossils referable to Gomphos elkema from deposits close to the Paleocene-Eocene boundary at Tsagan Khushu, Mongolia. Gomphos shares a suite of cranioskeletal characters with extant rabbits, hares, and pikas but retains a primitive dentition and jaw compared to its modern relatives. Phylogenetic analysis supports the position of Gomphos as a stem lagomorph and excludes Cretaceous taxa from the crown radiation of placental mammals. Our results support the hypothesis that rodents and lagomorphs radiated during the Cenozoic and diverged from other placental mammals close to the Cretaceous-Tertiary boundary.
Lagomorphs are rabbits, hares, and pikas. This might be referred to as a "missing link" of the rodents. Why do we care? Most mammals are rodents, and this tells us about the evolution of the most successful group of mammals. Cool!
|
<urn:uuid:fa9d11c3-ad57-40a6-8915-a8b1cd687729>
| 2.921875 | 220 |
Personal Blog
|
Science & Tech.
| 36.115 |
Basic Use
To make a new number, a simple initialization suffices:
var foo = 0; // or whatever number you want
foo = 1;  //foo = 1
foo += 2; //foo = 3 (the two gets added on)
foo -= 2; //foo = 1 (the two gets removed)
Number literals define the number value. In particular:
- They appear as a set of digits of varying length.
- Negative literal numbers have a minus sign before the set of digits.
- Floating point literal numbers contain one decimal point, and may optionally use E notation with the character e.
- An integer literal may be prepended with "0" to indicate that the number is in base-8. (8 and 9 are not octal digits, and if found, cause the integer to be read in the normal base-10.)
- An integer literal may also be prepended with "0x" to indicate a hexadecimal (base-16) number.
The Math Object
Unlike strings, arrays, and dates, numbers aren't objects. The Math object provides numeric functions and constants as methods and properties. The methods and properties of the Math object are referenced using the dot operator in the usual way, for example:
var varOne = Math.ceil(8.5); var varPi = Math.PI; var sqrt3 = Math.sqrt(3);
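The literal forms described earlier can be seen in a short sketch (standard ECMAScript semantics assumed; the legacy leading-0 octal form is noted but avoided, since strict mode rejects it):

```javascript
var dec = 42;    // a set of digits: base-10 integer
var neg = -42;   // minus sign before the digits
var flt = 3.14;  // floating point: one decimal point
var exp = 2.5e3; // E notation: 2.5 * 10^3 = 2500
var hex = 0x1F;  // "0x" prefix: hexadecimal, 31 in base-10
// A leading "0" (e.g. 010) marks legacy octal -- not used here,
// because strict mode treats it as a syntax error.
console.log(exp); // 2500
console.log(hex); // 31
```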
Methods
random() Generates a pseudo-random number greater than or equal to 0 and less than 1.
var myInt = Math.random();
max(int1, int2) Returns the highest number from the two numbers passed as arguments.
var myInt = Math.max(8, 9); document.write(myInt); //9
min(int1, int2) Returns the lowest number from the two numbers passed as arguments.
var myInt = Math.min(8, 9); document.write(myInt); //8
floor(float) Returns the greatest integer less than or equal to the number passed as an argument.
var myInt = Math.floor(90.8); document.write(myInt); //90
ceil(float) Returns the least integer greater than or equal to the number passed as an argument.
var myInt = Math.ceil(90.8); document.write(myInt); //91
round(float) Returns the closest integer to the number passed as an argument.
var myInt = Math.round(90.8); document.write(myInt); //91
|
<urn:uuid:eecdd55e-49d8-40e4-9834-6f3dce28fa4c>
| 3.96875 | 508 |
Documentation
|
Software Dev.
| 72.693517 |
Refraction and Acceleration
Name: Christopher S.
Why is it that when light travels from a more dense to a
less dense medium, its speed is higher? I've read answers to this
question in your archives but, sadly, still don't get it. One answer
(Jasjeet S Bagla) says that we must not ask the question because light is
massless, hence questions of acceleration don't make sense. It does,
however, seem to be OK to talk about different speeds of light. If you
start at one speed and end at a higher one, why is one not allowed to
talk about acceleration? Bagla goes on to say that it depends on how the
em fields behave in a given medium. It begs the question: what is it
about, say, Perspex and air that makes light accelerate, oops, travel at
different speeds? If you're dealing with the same ray of light, one is
forced to speak of acceleration, no? What other explanation is there for
final velocity>initial velocity? Arthur Smith mentioned a very small
"evanescent" component that travels ahead at c. Where can I learn more
about this? Sorry for the long question. I understand that F=ma and if
there is no m, you cannot talk about a, but, again, you have one velocity
higher than another for the same thing. I need to know more than "that's
just the way em fields are!"
An explanation that satisfies me relates to travel through an interactive
medium. When light interacts with an atom, the photon of light is absorbed
and then emitted. For a moment, the energy of the light is within the atom.
This causes a slight delay. Light travels at the standard speed of light
until interacting with another atom. It is absorbed and emitted, causing
another slight delay. The average effect is taking more time to travel a
meter through glass than through air. This works like a slower speed. An
individual photon does not actually slow down. It gets delayed repeatedly by
the atoms of the medium. A more dense medium has more atoms per meter to cause these delays.
Dr. Ken Mellendorf
Illinois Central College
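The average-delay picture above corresponds to an effective speed v = c/n, where n is the refractive index of the medium. A quick sketch (the index values are typical textbook figures, assumed for illustration):

```python
# Effective speed of light in a medium: v = c / n
c = 299_792_458.0  # speed of light in vacuum, m/s

# (name, refractive index) -- typical values, assumed for illustration
media = [("air", 1.0003), ("water", 1.33), ("Perspex", 1.49)]

for name, n in media:
    v = c / n
    print(f"{name}: n = {n}, v = {v:.3e} m/s")
```

Nothing accelerates here: the photon always moves at c between interactions, and a larger n simply encodes more accumulated delay per meter.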
Congratulations on not being willing to accept "that is just the way em
fields are!" The answer to your inquiry is not all that simple (my opinion),
and I won't try to give it in the limited space allowed here, not to mention
my own limitations of knowledge.
Like so many "simple" physics questions, I find the most lucid, but
accurate, explanation in
Richard Feynman's, "Lectures on Physics" which most libraries will have.
Volume I, Chapter 31-1 through 31-6, which describes refraction, dispersion,
diffraction. The "answer" has to do with how matter alters the electric
field of incident radiation, but I won't pretend to be able to do a better
job than Feynman.
The answer is that you are not dealing with the same ray of light. In
vacuum a photon just keeps going at the speed of light. In a medium,
however, it interacts with the atoms, often being absorbed while bumping
an atomic or molecular motion into a higher energy state. The excited
atom/molecule then can jump to a lower energy state, emitting a photon
while doing so. This can obviously make light appear to travel slower in a medium.
In detail, it is a very complicated question, requiring at least a
graduate course in electromagnetism to begin to understand. Why, for
example, do the emitted photons tend to travel in the same direction?
Best, Richard J. Plano
Update: June 2012
|
<urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>
| 3.03125 | 794 |
Q&A Forum
|
Science & Tech.
| 58.858511 |
Giant Manta Ray
Giant Manta Ray Manta birostris
Divers often describe the experience of swimming beneath a manta ray as like being overtaken by a huge flying saucer. This ray is the biggest in the world, but like the biggest shark, the whale shark, it is a harmless consumer of plankton.
When feeding, it swims along with its cavernous mouth wide open, beating its huge triangular wings slowly up and down. On either side of the mouth, which is at the front of the head, there are two long paddles, called cephalic lobes. These lobes help funnel plankton into the mouth. A stingerless whiplike tail trails behind.
Giant manta rays tend to be found over high points like seamounts where currents bring plankton up to them. Small fish called remoras often travel attached to these giants, feeding on food scraps along the way. Giant mantas are ovoviviparous, so the eggs develop and hatch inside the mother. These rays can leap high out of the water, to escape predators, clean their skin of parasites or communicate.
|
<urn:uuid:f3984201-a44a-42d6-802f-de566b1e8a6e>
| 3.09375 | 238 |
Knowledge Article
|
Science & Tech.
| 55.646214 |
|Gallium metal is silver-white and melts at approximately body temperature (Wikipedia image).|
|Atomic Number:||31||Atomic Radius:||187 pm (Van der Waals)|
|Atomic Symbol:||Ga||Melting Point:||29.76 °C|
|Atomic Weight:||69.72||Boiling Point:||2204 °C|
|Electron Configuration:||[Ar]4s²3d¹⁰4p¹||Oxidation States:||3|
From the Latin word Gallia, France; also from Latin, gallus, a translation of "Lecoq," a cock. Predicted and described by Mendeleev as ekaaluminum, and discovered spectroscopically by Lecoq de Boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in KOH.
Gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. Some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium.
It is one of four metals -- along with mercury, cesium, and rubidium -- which can be liquid near room temperature and, thus, can be used in high-temperature thermometers. It has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures.
There is a strong tendency for gallium to supercool below its freezing point. Therefore, seeding may be necessary to initiate solidification.
Ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. The metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies.
High-purity gallium is attacked only slowly by mineral acids.
Gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. It is widely used in doping semiconductors and producing solid-state devices such as transistors.
Magnesium gallate containing divalent impurities, such as Mn+2, is finding use in commercial ultraviolet-activated powder phosphors. Gallium arsenide is capable of converting electricity directly into coherent light. Gallium readily alloys with most metals, and has been used as a component in low-melting alloys.
Its toxicity appears to be of a low order, but it should be handled with care until more data are available.
|
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
| 3.46875 | 546 |
Knowledge Article
|
Science & Tech.
| 38.890701 |
If superparticles were to exist, the decay would happen far more often. This test is one of the "golden" tests for supersymmetry, and it is one that, on the face of it, this hugely popular theory among physicists has failed.
Prof Val Gibson, leader of the Cambridge LHCb team, said that the new result was "putting our supersymmetry theory colleagues in a spin".
The results are in fact completely in line with what one would expect from the Standard Model. There is already concern that the LHCb's sister detectors might have been expected to detect superparticles by now, yet none have been found so far. This certainly does not rule out SUSY, but it will approach the status of cold fusion if a positive experimental result does not come soon.
|
<urn:uuid:72def0d3-296d-49d8-bdf5-73c351dd6672>
| 2.6875 | 163 |
Personal Blog
|
Science & Tech.
| 46.709545 |
Let f and g be two differentiable functions. We will say that f and g are proportional if and only if there exists a constant C such that f = Cg. Clearly any function is proportional to the zero-function. If the constant C is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. The following statements are equivalent:
(1) f and g are proportional;
(2) f g' - f' g = 0.
Therefore, we have the following:
Define the Wronskian of f and g to be W(f,g) = f g' - f' g, that is
W(f,g)(x) = f(x) g'(x) - f'(x) g(x).
The following formula is very useful (see reduction of order technique):
(f/g)' = -W(f,g)/g², wherever g ≠ 0.
Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions, while the Wronskian does generalize to any number of functions.
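The zero-Wronskian criterion can be sketched numerically. Here derivatives are approximated by central differences; sin and cos are used as a test pair, and the `wronskian` helper is an illustrative name, not a standard library function:

```python
import math

def wronskian(f, g, x, h=1e-6):
    """W(f, g)(x) = f(x) g'(x) - f'(x) g(x), derivatives via central differences."""
    df = (f(x + h) - f(x - h)) / (2 * h)
    dg = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * dg - df * g(x)

# Proportional pair (g = 3 f): the Wronskian vanishes everywhere
print(abs(wronskian(math.sin, lambda x: 3 * math.sin(x), 1.0)) < 1e-6)  # True

# Independent pair: W(sin, cos) = -sin^2 - cos^2 = -1 at every x
print(round(wronskian(math.sin, math.cos, 1.0), 6))  # -1.0
```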
|
<urn:uuid:b7bc34b8-0f1f-4df8-8e8d-e56fc9c8fec5>
| 2.6875 | 180 |
Knowledge Article
|
Science & Tech.
| 38.502318 |
Forecast Texas Fire Danger (TFD)
The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map.
Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE
|
<urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d>
| 3.015625 | 136 |
Knowledge Article
|
Science & Tech.
| 31.717 |
x^(2/3) + y^(2/3) = a^(2/3)
x = a cos^3(t), y = a sin^3(t)
The astroid only acquired its present name in 1836 in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle.
The length of the astroid is 6a and its area is 3πa²/8.
The gradient of the tangent T from the point with parameter p is -tan(p). The equation of this tangent T is
x sin(p) + y cos(p) = a sin(2p)/2
Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a.
It can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a.
It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes. It is therefore a glissette.
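The stated length 6a and area 3πa²/8 can be checked numerically from the parametric form x = a cos³(t), y = a sin³(t); the choice a = 2 below is arbitrary:

```python
import math

a = 2.0
N = 100_000
dt = 2 * math.pi / N

length = 0.0
area = 0.0
for i in range(N):
    t = i * dt
    # |r'(t)| = 3a |sin t cos t| for x = a cos^3 t, y = a sin^3 t
    length += 3 * a * abs(math.sin(t) * math.cos(t)) * dt
    # Shoelace: area = (1/2) * integral of (x y' - y x') dt
    #         = integral of (3 a^2 / 2) sin^2 t cos^2 t dt
    area += 1.5 * a * a * (math.sin(t) * math.cos(t)) ** 2 * dt

print(round(length, 3))  # 12.0   (= 6a)
print(round(area, 3))    # 4.712  (= 3*pi*a^2/8)
```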
|
<urn:uuid:367a0525-d005-4467-93f1-a7ac123614d1>
| 2.71875 | 409 |
Knowledge Article
|
Science & Tech.
| 54.846538 |
Science Fair Project Encyclopedia
The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions.
The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride.
Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in a laboratory.
- Disulfur dichloride (S2Cl2) - used for vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
|
<urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb>
| 4.59375 | 320 |
Knowledge Article
|
Science & Tech.
| 27.864975 |
Convective heat flux is a flux depending on the temperature difference between the body and the adjacent fluid (liquid or gas) and is triggered by the *FILM card. It takes the form
q = h (T - T_0)
where q is the flux normal to the surface, h is the film coefficient, T is the body temperature and T_0 is the environment fluid temperature (also called sink temperature). Generally, the sink temperature is known. If it is not, it is an unknown in the system. Physically, the convection along the surface can be forced or free. Forced convection means that the mass flow rate of the adjacent fluid (gas or liquid) is known and its temperature is the result of heat exchange between body and fluid. This case can be simulated by CalculiX by defining network elements and using the *BOUNDARY card for the first degree of freedom in the midside node of the element. Free convection, for which the mass flow rate is an unknown too and a result of temperature differences, cannot be simulated.
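The film condition is just a linear relation between flux and the body-sink temperature difference; a quick numerical sketch (all values illustrative, not from any particular CalculiX model):

```python
# Convective (film) heat flux: q = h * (T_body - T_sink)
h = 25.0        # film coefficient, W/(m^2 K)
T_body = 350.0  # body surface temperature, K
T_sink = 300.0  # environment (sink) temperature, K

q = h * (T_body - T_sink)
print(q)  # 1250.0 -- W/m^2, out of the body since T_body > T_sink
```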
guido dhondt 2012-10-06
|
<urn:uuid:47d24057-e332-41de-bbe6-0338e16b49a6>
| 3.3125 | 249 |
Tutorial
|
Science & Tech.
| 41.094375 |
RR Lyrae star
RR Lyrae star, any of a group of old giant stars of the class called pulsating variables (see variable star) that pulsate with periods of about 0.2–1 day. They belong to the broad Population II class of stars (see Populations I and II) and are found mainly in the thick disk and halo of the Milky Way Galaxy and often in globular clusters. There are several subclasses—designated RRa, RRb, RRc, and RRd—based on the manner in which the light varies with time. The intrinsic luminosities of RR Lyrae stars are relatively well-determined, which makes them useful as distance indicators.
|
<urn:uuid:ca821097-b750-4e33-85da-b6754420e0dc>
| 2.921875 | 171 |
Knowledge Article
|
Science & Tech.
| 63.468978 |
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry.
Easily Characterize Promoter Activity
The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color—in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured the protein is bright red. The protein’s color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation).
Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color.
pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/ enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer.
Detecting Timer Fluorescent Protein
You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody.
You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions.
1. Terskikh, A., et al. (2000) Science 290(5496):1585–1588.
|
<urn:uuid:fee85558-8ff7-41a4-9a52-a042d84e5f3a>
| 2.6875 | 499 |
Knowledge Article
|
Science & Tech.
| 36.829775 |
Killing Emacs means ending the execution of the Emacs process.
If you started Emacs from a terminal, the parent process normally
resumes control. The low-level primitive for killing Emacs is
kill-emacs.
This command calls the hook kill-emacs-hook, then exits the Emacs process and kills it.
If exit-data is an integer, that is used as the exit status of the Emacs process. (This is useful primarily in batch operation; see Batch Mode.)
If exit-data is a string, its contents are stuffed into the terminal input buffer so that the shell (or whatever program next reads input) can read them.
The kill-emacs function is normally called via the
higher-level command C-x C-c
(save-buffers-kill-terminal). See Exiting. It is also called automatically if Emacs receives a
SIGHUP operating system signal (e.g., when the
controlling terminal is disconnected), or if it receives a
SIGINT signal while running in batch mode (see Batch Mode).
This normal hook is run by kill-emacs, before it kills Emacs.
Because kill-emacs can be called in situations where user interaction is impossible (e.g., when the terminal is disconnected), functions on this hook should not attempt to interact with the user. If you want to interact with the user when Emacs is shutting down, use
kill-emacs-query-functions, described below.
When Emacs is killed, all the information in the Emacs process,
aside from files that have been saved, is lost. Because killing Emacs
inadvertently can lose a lot of work, the
save-buffers-kill-terminal command queries for confirmation if
you have buffers that need saving or subprocesses that are running.
It also runs the abnormal hook kill-emacs-query-functions.
When save-buffers-kill-terminal is killing Emacs, it calls the functions in this hook, after asking the standard questions and before calling
kill-emacs. The functions are called in order of appearance, with no arguments. Each function can ask for additional confirmation from the user. If any of them returns nil,
save-buffers-kill-emacs does not kill Emacs, and does not run the remaining functions in this hook. Calling
kill-emacs directly does not run this hook.
|
<urn:uuid:af93ad35-c5de-4297-a667-afc7347bbc6c>
| 2.6875 | 488 |
Documentation
|
Software Dev.
| 51.422678 |
Boulder trails are common to the interior of Menelaus crater as materials erode from higher topography and roll toward the crater floor. Downhill is to the left, image width is 500 m, LROC NAC M139802338L [NASA/GSFC/Arizona State University].
Most boulder trails are relatively high reflectance, but running through the center of this image is a lower reflectance trail. This trail is smaller than the others, and its features may be influenced by factors such as mass of the boulder, boulder speed as it traveled downhill, and elevation from which the boulder originated. For example, is the boulder trail less distinct than the others because the boulder was smaller? What about the spacing of boulder tracks? The spacing of bounce-marks along boulder trails may say something about boulder mass and boulder speed. But why is this boulder trail low reflectance when all of the surrounding trails are higher reflectance? Perhaps this boulder trail is lower reflectance because the boulder gently bounced as it traveled downhill, and barely disturbed a thin layer of regolith? The contrast certainly appears similar to the astronauts' footprints and paths around the Apollo landing sites. Or, maybe the boulder fell apart during its downhill travel and the trail is simply made up of pieces of the boulder - we just don't know yet.
LROC WAC context of Menelaus crater at the boundary between Mare Serenitatis and the highlands (dotted line). The arrow marks the location of today's featured image at contact between the crater floor and NE crater wall [NASA/GSFC/Arizona State University].
What do you think? Why don't you follow the trail to its source in the full LROC NAC frame and see if you can find any other low reflectance trails.
|
<urn:uuid:ce50e516-2229-404a-b328-7d80cdfd0d33>
| 3.25 | 362 |
Comment Section
|
Science & Tech.
| 50.615374 |
The Giant Squid, scientifically known as Architeuthis dux, is the largest of all invertebrates. Scientists believe it can be as long as 18 metres (60 feet). This specimen was collected by Dr Gordon Williamson, who worked as the resident ship's biologist for the whaling company Salvesons. He examined the stomach contents of 250 Sperm Whales (Physeter macrocephalus), keeping the largest squid beak and discarding the smaller, until he ended up with this magnificent specimen.
|
<urn:uuid:03dc2cd4-80be-4c32-8ff8-4b196542656b>
| 3.03125 | 105 |
Knowledge Article
|
Science & Tech.
| 43.41975 |
SMOP is a C-based interpreter (runloop) that executes what different compilers (like Mildew) produce.
If you want to help SMOP, you can just take on one of the low-level S1P implementations and write it. If you have any questions, ask ruoso or pmurias at #perl6 @ irc.freenode.org.
The slides for the talk "Perl 6 is just a SMOP" are available; the talk introduces a bit of the reasoning behind SMOP. A newer version, presented at YAPC::EU 2008, is also available.
SMOP is an alternative implementation of a C engine to run Perl 6. It takes the most pragmatic approach possible while still aiming to support all Perl 6 features. Its core resembles Perl 5 in some ways, and it differs from Parrot in many ways, including the fact that SMOP is not a virtual machine. SMOP is simply a runtime engine that happens to have an interpreter run loop.
The main difference between SMOP and Parrot (besides the not-being-a-VM thing) is that SMOP is, from the bottom up, an implementation of the Perl 6 OO features, such that SMOP should be able to do a full bootstrap of the Perl 6 type system. Parrot, on the other hand, has a much more static low-level implementation (the PMC).
The same way PGE is a project on top of Parrot, SMOP will need a grammar engine for itself.
SMOP is the implementation that stresses the meta object protocol more than any other, and so far that has been a very fruitful exercise, with Larry making many clarifications to the object system thanks to SMOP.
Important topics on SMOP
- SMOP doesn't recurse in the C stack, and it doesn't actually define a mandatory paradigm (stack-based or register-based). SMOP has a Polymorphic Eval, that allows you to switch from one interpreter loop to another using Continuation Passing Style. See SMOP Stackless.
- SMOP doesn't define an object system of its own. The only thing it defines is the concept of the SMOP Responder Interface, which then encapsulates whatever object system is used. This feature is fundamental to implementing the SMOP Native Types.
- SMOP is intended to bootstrap itself from the low-level to the high-level. This is achieved by the fact that everything in SMOP is an Object. This way, even the low-level objects can be exposed to the high level runtime. See SMOP OO Bootstrap.
- SMOP won't implement a parser of its own; it will use STD or whatever parser gets ported to its runtime first.
- In order to enable the bootstrap, the runtime has a set of SMOP Constant Identifiers that are available for the sub-language compilers to use.
- There are some special SMOP Values Not Subject to Garbage Collection.
- A new interpreter implementation, SMOP Mold, has replaced SLIME.
- The "official" smop Perl 6 compiler is mildew - it lives in v6/mildew
- Currently there exists an old Elf backend which targets SMOP - it lives in misc/elfish/elfX
SMOP GSoC 2009
See the Old SMOP Changelog
|
<urn:uuid:9ef4d308-fa15-4196-86db-2db8b4c54358>
| 2.875 | 694 |
Knowledge Article
|
Software Dev.
| 53.614756 |
The word vivisection was first coined in the 1800s to denote the experimental dissection of live animals - or humans. It was created by activists who opposed the practice of experimenting on animals. The Roman physician Celsus claimed that in Alexandria in the 3rd century BCE physicians had performed vivisections on sentenced criminals, but vivisection on humans was generally outlawed. Experimenters frequently used living animals. Most early modern researchers considered this practice acceptable, believing that animals felt no pain. Even those who opposed vivisection in the early modern period did not usually do so out of consideration for the animals, but because they thought that this practice would coarsen the experimenter, or because they were concerned that animals stressed under experimental conditions did not represent the normal state of the body.
Prompted by the rise of experimental physiology and the increasing use of animals, an anti-vivisection movement started in the 1860s. Its driving force, the British journalist Frances Power Cobbe (1822-1904), founded the British Victoria Street Society in 1875, which gave rise to the British government's Cruelty to Animals Act of 1876. This law regulated the use of live animals for experimental purposes.
|
<urn:uuid:302a84f1-d0b1-4e14-8e71-b2ded9ee5190>
| 3.71875 | 392 |
Knowledge Article
|
Science & Tech.
| 36.06538 |
The Weekly Newsmagazine of Science
Volume 155, Number 19 (May 8, 1999)
By J. Raloff
Canadian scientists have identified the likely culprit behind some historic, regional declines in Atlantic salmon. The researchers find that a near-ubiquitous water pollutant can render young, migrating fish unable to survive a life at sea.
Heavy, late-spring spraying of forests with a pesticide laced with nonylphenol during the 1970s and '80s was the clue that led the biologists to unmask that chemical's role in the transitory decline of salmon in East Canada. Though these sprays have ended, concentrations of nonylphenols in forest runoff then were comparable to those in the effluent of some pulp mills, industrial facilities, and sewage-treatment plants today. Downstream of such areas, the scientists argue, salmon and other migratory fish may still be at risk.
Nonylphenols are surfactants used in products from pesticides to dishwashing detergents, cosmetics, plastics, and spermicides. Because waste-treatment plants don't remove nonylphenols well, these chemicals can build up in downstream waters (SN: 1/8/94, p. 24).
When British studies linked ambient nonylphenol pollution to reproductive problems in fish (SN: 2/26/94, p. 142), Wayne L. Fairchild of Canada's Department of Fisheries and Oceans in Moncton, New Brunswick, became concerned. He recalled that an insecticide used on local forests for more than a decade had contained large amounts of nonylphenols. They helped aminocarb, the oily active ingredient in Matacil 1.8D, dissolve in water for easier spraying.
Runoff of the pesticide during rains loaded the spawning and nursery waters of Atlantic salmon with nonylphenols. Moreover, this aerial spraying had tended to coincide with the final stages of smoltification, the fish's transformation for life at sea.
To probe for effects of forest spraying, Fairchild and his colleagues surveyed more than a decade of river-by-river data on fish. They overlaid these numbers with archival data on local aerial spraying with Matacil 1.8D or either of two nonylphenol-free pesticides. One contained the same active ingredient, aminocarb, as Matacil 1.8D does.
Most of the lowest adult salmon counts between 1973 and 1990 occurred in rivers where smolts would earlier have encountered runoff of Matacil 1.8D, Fairchild's group found. In 9 of 19 cases of Matacil 1.8D spraying for which they had good data, salmon returns were lower than in the 5 years before and the 5 years after, they report in the May Environmental Health Perspectives. No population declines were associated with the other two pesticides.
The researchers have now exposed smolts in the laboratory to various nonylphenol concentrations, including some typical of Canadian rivers during the 1970s. The fish remained healthy until they entered salt water, at which point they exhibited a failure-to-thrive syndrome.
"They looked like they were starving," Fairchild told Science News. Within 2 months, he notes, 20 to 30 percent died. Untreated smolts adjusted normally to salt water and fattened up.
Steffen S. Madsen, a fish ecophysiologist at Odense University in Denmark, is not surprised, based on his own experiments.
To move from fresh water to the sea, a fish must undergo major hormonal changes that adapt it for pumping out excess salt. A female preparing to spawn in fresh water must undergo the opposite change. Since estrogen triggers her adaptation, Madsen and a colleague decided to test how smolts would respond to estrogen or nonylphenol, an estrogen mimic.
In the lab, they periodically injected salmon smolts with estrogen or nonylphenol over 30 days, and at various points placed them in seawater for 24 hours. Salt in the fish's blood skyrocketed during the day-long trials, unlike salt in untreated smolts. "Our preliminary evidence indicates that natural and environmental estrogens screw up the pituitary," Madsen says. The gland responds by making prolactin, a hormone that drives freshwater adaptation.
Judging by Fairchild's data, Madsen now suspects that any fish that migrates between fresh and salt water may be similarly vulnerable to high concentrations of pollutants that mimic estrogen.
From Science News, Vol. 155, No. 19, May 8, 1999, p. 293. Copyright © 1999, Science Service.
|
<urn:uuid:3ac50003-34df-4326-9ff5-f4278ff44a0b>
| 3.109375 | 978 |
Truncated
|
Science & Tech.
| 47.450967 |
Gaia theory is a class of scientific models of the geo-biosphere in which life as a whole fosters and maintains suitable conditions for itself by helping to create an environment on Earth suitable for its continuity. The first such theory was created by the atmospheric scientist and chemist Sir James Lovelock, who developed his hypotheses in the 1960s before formally publishing the concept, first in the New Scientist (February 13, 1975) and then in the 1979 book Gaia: A New Look at Life on Earth. He hypothesized that the living matter of the planet functioned like a single organism and named this self-regulating living system after the Greek goddess Gaia, at the suggestion of novelist William Golding.
Gaia "theories" have non-technical predecessors in the ideas of several cultures. Today, "Gaia theory" is sometimes used among non-scientists to refer to hypotheses of a self-regulating Earth that are non-technical but take inspiration from scientific models. Among some scientists, "Gaia" carries connotations of a lack of scientific rigor and quasi-mystical thinking about the planet Earth, so Lovelock's hypothesis was initially received with much antagonism by much of the scientific community. No controversy exists, however, that life and the physical environment significantly influence one another.
Gaia theory today is a spectrum of hypotheses, ranging from the undeniable (Weak Gaia) to the radical (Strong Gaia).
At one end of this spectrum is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it is a self-organizing system, which works in such a way as to keep its systems in some kind of meta-equilibrium that is broadly conducive to life. The history of evolution, ecology and climate show that the exact characteristics of this equilibrium intermittently have undergone rapid changes, which are believed to have caused extinctions and felled civilisations.
Biologists and earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions tend to have counterbalancing effects on environmental change. Opponents of this view sometimes point to examples of life's actions that have resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one. However, proponents will point out that those atmospheric composition changes created an environment even more suitable to life.
Some go a step further and hypothesize that all lifeforms are part of a single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. While it is arguable that the Earth as a unit does not match the generally accepted biological criteria for life itself (Gaia has not yet reproduced, for instance), many scientists would be comfortable characterising the earth as a single "system".
The most extreme form of Gaia theory is that the entire Earth is a single unified organism; in this view the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. Scientists contend that there is no evidence at all to support this last point of view, and it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively see homeostasis as an activity that requires conscious control, although this is not so.
Much more speculative versions of Gaia theory, including all versions in which it is held that the Earth is actually conscious or part of some universe-wide evolution, are currently held to be outside the bounds of science.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gaia".
|
<urn:uuid:7a3fa081-9c60-42a7-8ec4-1d8c386b4009>
| 3.4375 | 794 |
Knowledge Article
|
Science & Tech.
| 23.657602 |
Giant Water Scavenger Beetle
|Geographical Range||North America|
|Scientific Name||Hydrophilus triangularis|
|Conservation Status||Not listed by IUCN|
The name says it all. This large beetle lives in water, where it scavenges vegetation and insect parts. The insect can store a supply of air within its silvery belly, much like a deep-sea diver stores air in a tank.
|
<urn:uuid:469863a4-9f80-47c2-ad04-ee7f0adecfd5>
| 3.078125 | 91 |
Knowledge Article
|
Science & Tech.
| 34.880113 |
WAKING the GIANT Bill McGuire
While we transmit more than two million tweets a day and nearly one hundred trillion emails each year, we're also emitting record amounts of carbon dioxide (CO2). Bill McGuire, professor of geophysical and climate hazards at University College London, expects our continued rise in greenhouse gas emissions to awaken a slumbering giant: the Earth's crust. In Waking the Giant: How a Changing Climate Triggers Earthquakes, Tsunamis and Volcanoes (Oxford University Press), he explains that when the Earth's crust (or geosphere) is disrupted by rising temperatures and a CO2-rich atmosphere, natural disasters strike more frequently and with catastrophic force.
Applying a "straightforward presentation of what we know about how climate and the geosphere interact," the book links previous warming periods 20,000 to 5,000 years ago with a greater abundance of tsunamis, landslides, seismic activity and volcanic eruptions. McGuire urgently warns of the "tempestuous future of our own making" as we progressively inch toward a similar climate.
Despite his scientific testimony to Congress stating that "what is going on in the Arctic now is the biggest and fastest thing that Nature has ever done," and the "incontrovertible" data showing that the Earth's climate draws a lively response from the geosphere, brutal weather events are still not widely seen as being connected to human influence. Is our global population sleepwalking toward imminent destruction, he asks, until "it is obvious, even to the most entrenched denier, that our climate is being transformed?"
|
<urn:uuid:46ed79e4-97dd-492f-bf29-99304e01f4ee>
| 3.046875 | 330 |
Nonfiction Writing
|
Science & Tech.
| 28.729356 |
Sidereal time is the time it takes for celestial bodies to ascend and descend in the night sky. We know that celestial bodies are, in reality, fixed in their positions. The reason for their dramatic movement at night is the rotation of the Earth, which is also why the Sun and the Moon seem to rise and set. For the longest time, this motion caused many philosophers and astronomers to assume that the Earth was the center of the Universe. Fortunately, later astronomers like Copernicus were able to discern the true movements of the Earth, Moon, and Sun, helping to explain their apparent motion. The time it takes for a star, planet, or other fixed celestial body to ascend and descend in the night sky is also called its sidereal period. This time corresponds to the time it takes for the Earth to complete one rotation relative to the stars, which is just under 24 hours (about 23 hours and 56 minutes).
Sidereal time is not like solar time, which is measured by the movement of the Sun, or the lunar cycle, which takes about 28 days. It is the hour angle of a celestial object relative to the prime meridian at the vernal equinox. If these terms are confusing, here is what they mean. In cartography, the Earth is bisected by two major reference lines, the 0-degree points of latitude and longitude. The 0-degree line of latitude is the Equator, where the Earth is perfectly bisected; it cuts through South America and Africa. The 0-degree line of longitude is the prime meridian, which passes through Greenwich, UK. The equinoxes are essentially the times of the year when the Sun rises and sets at the exact same points on the horizon at the equator; these are the only times the solar day is equally divided into 12 hours of day and 12 hours of night. The hour angle of a celestial object relative to this meridian is what we call sidereal time. This angle changes with the rotation of the Earth, creating a pattern of ascension and descent for celestial bodies in the Earth's sky.
With the knowledge of sidereal time, astronomers can predict the positions of stars. The values for the sidereal time of celestial objects are compiled in a table or star chart called an ephemeris. With this guide, astronomers can find a celestial object regardless of the change in its position over the year.
There are also some great resources on the net. The U.S. Naval Observatory has an online clock to help you find the sidereal time in your area, and there is a great explanation in the astronomy section of the Cornell University site.
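As a rough illustration (my own sketch, using a widely quoted simplified approximation rather than anything from the article above), Greenwich Mean Sidereal Time can be estimated directly from the Julian date, and local sidereal time then follows by adding the observer's east longitude:

```python
def gmst_degrees(jd):
    """Approximate Greenwich Mean Sidereal Time (degrees) for Julian date jd.

    Linear approximation around the J2000.0 epoch (JD 2451545.0). The rate
    360.98564736629 deg/day (slightly more than 360) reflects the fact that
    a sidereal day is about 3 min 56 s shorter than a solar day.
    """
    return (280.46061837 + 360.98564736629 * (jd - 2451545.0)) % 360.0


def lst_degrees(jd, east_longitude_deg):
    """Local sidereal time: GMST shifted by the observer's east longitude."""
    return (gmst_degrees(jd) + east_longitude_deg) % 360.0


# At the J2000.0 epoch itself the formula returns its constant term:
print(gmst_degrees(2451545.0))  # 280.46061837
```

This linear form is good to a fraction of a degree over a few decades around J2000; precise work adds small quadratic correction terms.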
|
<urn:uuid:678e8811-82bd-4c27-af17-f540e64bc52a>
| 3.75 | 564 |
Knowledge Article
|
Science & Tech.
| 54.148823 |
Here's the way the NWS defines it:
Forecasts issued by the National Weather Service routinely include a "PoP" (probability of precipitation) statement, which is often expressed as the "chance of rain" or "chance of precipitation".
http://www.srh.noaa.gov/ffc/?n=pop
ZONE FORECASTS FOR NORTH AND CENTRAL GEORGIA
NATIONAL WEATHER SERVICE PEACHTREE CITY GA
119 PM EDT THU MAY 8 2008
INCLUDING THE CITIES OF...ATLANTA...CONYERS...DECATUR...
.THIS AFTERNOON...MOSTLY CLOUDY WITH A 40 PERCENT CHANCE OF
SHOWERS AND THUNDERSTORMS. WINDY. HIGHS IN THE LOWER 80S. NEAR
STEADY TEMPERATURE IN THE LOWER 80S. SOUTH WINDS 15 TO 25 MPH.
.TONIGHT...MOSTLY CLOUDY WITH A CHANCE OF SHOWERS AND
THUNDERSTORMS IN THE EVENING...THEN A SLIGHT CHANCE OF SHOWERS
AND THUNDERSTORMS AFTER MIDNIGHT. LOWS IN THE MID 60S. SOUTHWEST
WINDS 5 TO 15 MPH. CHANCE OF RAIN 40 PERCENT.
What does this "40 percent" mean? ...will it rain 40 percent of of the time? ...will it rain over 40 percent of the area?
The "Probability of Precipitation" (PoP) describes the chance of precipitation occurring at any point you select in the area.
How do forecasters arrive at this value?
Mathematically, PoP is defined as follows:
PoP = C x A, where "C" = the confidence that precipitation will occur somewhere in the forecast area, and "A" = the percent of the area that will receive measurable precipitation, if it occurs at all.
So... in the case of the forecast above, if the forecaster knows precipitation is sure to occur ( confidence is 100% ), he/she is expressing how much of the area will receive measurable rain. ( PoP = "C" x "A" or "1" times ".4" which equals .4 or 40%.)
But, most of the time, the forecaster is expressing a combination of degree of confidence and areal coverage. If the forecaster is only 50% sure that precipitation will occur, and expects that, if it does occur, it will produce measurable rain over about 80 percent of the area, the PoP (chance of rain) is 40%. ( PoP = .5 x .8 which equals .4 or 40%. )
In either event, the correct way to interpret the forecast is: there is a 40 percent chance that rain will occur at any given point in the area.
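The two worked examples above can be reproduced with a few lines of code (a sketch for illustration only; the function and parameter names are mine, not the NWS's):

```python
def pop(confidence, areal_coverage):
    """Probability of precipitation: PoP = C x A.

    confidence:     forecaster's confidence (0-1) that precipitation
                    will occur somewhere in the forecast area.
    areal_coverage: fraction (0-1) of the area expected to receive
                    measurable precipitation, if it occurs at all.
    """
    if not (0.0 <= confidence <= 1.0 and 0.0 <= areal_coverage <= 1.0):
        raise ValueError("both inputs must be fractions between 0 and 1")
    return confidence * areal_coverage


# Certain it will rain somewhere, covering 40% of the area:
print(pop(1.0, 0.4))  # 0.4 -> "40 percent chance of rain"

# Only 50% sure, but covering 80% of the area if it does rain:
print(pop(0.5, 0.8))  # 0.4 -> also a 40 percent chance
```

Both forecasts yield the same 40% PoP even though they describe quite different situations, which is exactly the ambiguity the article is unpacking.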
|
<urn:uuid:64f70112-bac2-48dc-87e7-d1404797fade>
| 3.421875 | 616 |
Comment Section
|
Science & Tech.
| 73.397381 |
A compiler is a computer program that either generates object code from source code or translates code in one language into another language. When it translates code into another language, that target language is usually either compiled (into object code), interpreted, or even compiled again into yet another language. Object code can be run on your computer as a regular program. In the days when compute time cost thousands of dollars, compilation was done by hand. Now compilation is usually done by a program.
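As an illustrative sketch only (not from the article), a toy compiler can translate arithmetic source code into instructions for a simple stack machine, which a small runloop then executes:

```python
import ast  # reuse Python's parser as the compiler front end


def compile_expr(src):
    """Compile an arithmetic expression into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
    out = []

    def emit(node):
        if isinstance(node, ast.Constant):
            out.append(("PUSH", node.value))
        elif isinstance(node, ast.BinOp) and type(node.op) in ops:
            emit(node.left)                 # post-order traversal: operands
            emit(node.right)                # are pushed before the operator
            out.append((ops[type(node.op)],))
        else:
            raise ValueError("unsupported construct")

    emit(ast.parse(src, mode="eval").body)
    return out


def run(code):
    """Interpret the compiled instructions on a value stack."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[instr[0]])
    return stack[0]


print(run(compile_expr("2 + 3 * 4")))  # 14
```

The split mirrors the article's point: `compile_expr` is the translator, and `run` plays the role of the interpreter that executes what the compiler produced.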
|
<urn:uuid:880d3bad-144c-4602-89ac-2eec0a853e79>
| 3.40625 | 102 |
Knowledge Article
|
Software Dev.
| 25.430852 |
GloMax®-Multi Jr Method for DNA Quantitation Using Hoechst 33258
Quantitation of DNA is an important step for many practices in molecular biology. Common techniques that use DNA, such as sequencing, cDNA synthesis and cloning, RNA transcription, transfection, nucleic acid labeling (e.g., random prime labeling), etc., all benefit from a defined template concentration. Failure to produce results from these techniques sometimes can be attributed to an incorrect estimate of the DNA template used. The concentration of a nucleic acid most commonly is measured by UV absorbance at 260nm (A260). Absorbance methods are limited in sensitivity, however, due to a high level of background interference.
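For example, the textbook A260 rule of thumb (an absorbance of 1.0 corresponds to roughly 50 µg/mL of double-stranded DNA) can be written as a short helper. This is a generic sketch of the standard conversion, not part of the GloMax protocol, and the function names are mine:

```python
def dsdna_conc_ug_per_ml(a260, dilution_factor=1.0, background_a320=0.0):
    """Estimate dsDNA concentration (ug/mL) from UV absorbance at 260 nm.

    Uses the standard approximation that A260 = 1.0 corresponds to about
    50 ug/mL of double-stranded DNA. Subtracting an A320 reading corrects
    for turbidity; as the article notes, absorbance methods still have
    limited sensitivity because of background interference.
    """
    corrected = a260 - background_a320
    if corrected < 0:
        raise ValueError("background reading exceeds the A260 reading")
    return corrected * 50.0 * dilution_factor


print(dsdna_conc_ug_per_ml(0.25))                      # 12.5 ug/mL
print(dsdna_conc_ug_per_ml(0.25, dilution_factor=10))  # 125.0 ug/mL
```

Fluorescent dyes such as Hoechst 33258 sidestep this background problem, which is why the method above is used when absorbance alone is too insensitive.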
|
<urn:uuid:8cdb1656-8511-466e-b3f6-681a7cf80615>
| 2.734375 | 149 |
Knowledge Article
|
Science & Tech.
| 28.8033 |
.NET Type Design Guidelines
|This tutorial—.NET Type Design Guidelines—is from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, by Krzysztof Cwalina, Brad Abrams. Copyright © 2006 Microsoft Corp.. All rights reserved. This article is reproduced by permission. This tutorial has been edited especially for C# Online.NET. Read the book review!|
(This article was written and annotated by members of the Microsoft Common Language Runtime (CLR) and .NET teams and other experts.)
Type Design Guidelines in .NET
From the CLR perspective, there are only two categories of types—reference types and value types—but for the purpose of framework design discussion we divide types into more logical groups, each with its own specific design rules. Figure 4-1 shows these logical groups.
Classes are the general case of reference types. They make up the bulk of types in the majority of frameworks. Classes owe their popularity to the rich set of object-oriented features they support and to their general applicability. Base classes and abstract classes are special logical groups related to extensibility. Extensibility and base classes are covered in Chapter 6.
Interfaces are types that can be implemented both by reference types and value types. This allows them to serve as roots of polymorphic hierarchies of reference types and value types. In addition, interfaces can be used to simulate multiple inheritance, which is not natively supported by the CLR.
Structs are the general case of value types and should be reserved for small, simple types, similar to language primitives.
Enums are a special case of value types used to define short sets of values, such as days of the week, console colors, and so on.
Static classes are types intended as containers for static members. They are commonly used to provide shortcuts to other operations.
Delegates, exceptions, attributes, arrays, and collections are all special cases of reference types intended for specific uses, and guidelines for their design and usage are discussed elsewhere in this book.
- DO ensure that each type is a well-defined set of related members, not just a random collection of unrelated functionality.
- It is important that a type can be described in one simple sentence. A good definition should also rule out functionality that is only tangentially related.
|If you have ever managed a team of people, you know that they don't do well without a crisp set of responsibilities. Well, types work the same way. I have noticed that types without a firm and focused scope tend to be magnets for more random functionality, which, over time, makes a small problem a lot worse. It becomes more difficult to justify why the next member with even more random functionality does not belong in the type. As the focus of the members in a type blurs, the developer's ability to predict where to find a given functionality is impaired, and therefore so is productivity.|
|Good types are like good diagrams: What has been omitted is as important to clarity and usability as what has been included. Every additional member you add to a type starts at a net negative value and only by proven usefulness does it go from there to positive. If you add too much in an attempt to make the type more useful to some, you are just as likely to make the type useless to everyone.|
| When I was learning OOP back in the early 1980s, I was taught a mantra that I still honor today: If things get too complicated, make more types. Sometimes, I find that I am thinking really hard trying to define a good set of methods for a type. When I start to feel that I'm spending too much time on this or when things just don't seem to fit together well, I remember my mantra and I define more, smaller types where each type has well-defined functionality. This has worked extremely well for me over the years. On the flip side, sometimes types do end up being dumping grounds for various loosely related functions. The .NET Framework offers several types like this, such as |
|
<urn:uuid:6c35af72-3e52-40ad-bf2e-d5f5676c535e>
| 3 | 868 |
Documentation
|
Software Dev.
| 42.544528 |
The Physics Help Forum is not working today, at least not from my ISP, so this goes here. It's basically a math deal anyway:
The formula to calculate the force on a point mass (let's call them planets) that results from its being gravitationally attracted by another point mass is Newton's:

F = G * m1 * m2 / d^2

where F is the force on the planet resulting from the gravitational attraction exerted upon it by the other planet, G is Newton's gravity constant, m1 and m2 are the respective masses of the planets, and d is the distance between them.
For simplicity's sake let's say all the planets considered are of the same mass m, so we can write m^2 instead of m1 * m2.
Now, if I'm not mistaken, the formula for calculating the force on a planet resulting from the gravitational attraction of more than two planets is:

F_j = sum over all k != j of ( G * m^2 / d_jk^2 )

where F_j is the force on the jth planet resulting from the gravitational attraction of the other planets, and d_jk is the distance between the jth planet and the kth planet.
My question is "where is the vector addition?" That is, when considering the force on one planet that results from the gravitational attraction of many other planets, we have to take into account not only the distance of the other planets from planet j but also their position with respect to it (right?).
Take for example the simple case of three planets in the same plane. Planet j is at the origin. Planet k is one unit to the right of j on the x axis, while planet l is one unit up the y axis. If the masses all equal 1, then, by the formula above, the force on planet j would be:

F_j = G * (1/1^2 + 1/1^2) = 2G

But force is a function of both the distance and the position, right? So we must consider not only the gravitational forces individually exerted upon j by k and l, but also the angle at which these forces are exerted. That is, we must add the vectors. To add vectors you just plug the sums of the x values and y values of the added vectors into Pythagoras' formula. The force on planet j should therefore be:

F_j = sqrt( (F_k*cos(theta_k) + F_l*cos(theta_l))^2 + (F_k*sin(theta_k) + F_l*sin(theta_l))^2 )

(where theta_x is the angle subtended by a line drawn from planet j to planet x, i.e. theta_k = 0 and theta_l = pi/2)
So, what am I missing here? I am fully aware that I, and not Newton, am missing something here. Someone please help point this out for me.
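To check the vector-addition point numerically (a sketch with my own function names, using G = 1 and unit masses as in the example), summing force components gives sqrt(2), not the 2 that the scalar formula suggests:

```python
import math


def net_force(target, others, G=1.0, m=1.0):
    """Magnitude of the net gravitational force on `target` (a 2-D point)
    from the point masses in `others`, computed by proper vector addition."""
    fx = fy = 0.0
    for x, y in others:
        dx, dy = x - target[0], y - target[1]
        d = math.hypot(dx, dy)
        f = G * m * m / d**2      # magnitude of this pairwise force
        fx += f * dx / d          # x component (cosine of the angle)
        fy += f * dy / d          # y component (sine of the angle)
    return math.hypot(fx, fy)


# Planet j at the origin; k one unit along x, l one unit along y:
print(net_force((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0)]))  # 1.4142... = sqrt(2)
# A naive scalar sum of the two magnitudes would give 2.0 instead.
```

So the scalar sum-over-distances formula gives only the sum of magnitudes; the physically meaningful net force requires resolving each pairwise pull into components first.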
|
<urn:uuid:7d99e7e1-4e2a-4168-989f-9de25f473394>
| 3.53125 | 480 |
Q&A Forum
|
Science & Tech.
| 58.705377 |
New study challenges previous findings that humans are an altruistic anomaly, and positions chimpanzees as cooperative, especially when their partners are patient.
Researchers at the Yerkes National Primate Research Center have shown that chimpanzees have a significant bias for prosocial behavior. This, the study authors report, is in contrast to previous studies that positioned chimpanzees as reluctant altruists and led to the widely held belief that human altruism evolved in the last six million years, only after humans split from apes. The current study findings are available in the online edition of Proceedings of the National Academy of Sciences.
According to Yerkes researchers Victoria Horner, PhD, Frans de Waal, PhD, and their colleagues, chimpanzees may not have shown prosocial behaviors in other studies because of design issues, such as the complexity of the apparatus used to deliver rewards and the distance between the animals.
“I have always been skeptical of the previous negative findings and their over-interpretation,” says Dr. de Waal. “This study confirms the prosocial nature of chimpanzees with a different test, better adapted to the species,” he continues.
|
<urn:uuid:5d537746-8ad2-44d6-8586-ae6a035cf9b2>
| 3.09375 | 228 |
Personal Blog
|
Science & Tech.
| 26.0775 |
Elements | Blogs
Wednesday, September 7, 2011
Is There Oxygen in Space?
Yes, this summer astronomers using the Herschel Telescope identified oxygen molecules in space. They found these molecules in the Orion nebula, 1,344 light years away. Oxygen is the third most abundant element in the universe. Until now, scientists have only seen individual oxygen atoms in space. We do not breathe individual oxygen atoms, but rather oxygen molecules. (A molecule is a group of atoms bonded together; it is the smallest unit of a chemical compound that can take part in a chemical reaction.) Oxygen molecules make up 20% of the air we breathe. Scientists theorize that the oxygen molecules were locked up in water ice that...
Thursday, March 10, 2011
I'm Atoms (Scientific Cover of Jason Mraz's I'm Yours)
Here in Chicago it has been gray for the last three weeks – no sun, just melting snow and rain. This song made our day. It has sunshine, great music and atoms! The lyrics include fabulous lines such as: “Atoms bond together to form molecules Most of what’s surrounding me and you…” This science verse has been set to the music of Jason Mraz’s “I’m Yours”. This is a must watch!
Saturday, February 26, 2011
The Deep Carbon Observatory
Here at SuperSmart Carbon, we love learning about carbon. Apparently, we are not alone. There is a project being launched called the Deep Carbon Observatory that is being funded by the Alfred P. Sloan Foundation. The purpose of this group is to study carbon deep inside the earth. Carbon makes up somewhere from 0.7% to 3.2% of the earth’s elements. We know that there is carbon trapped under the earth’s crust, but we don’t know how much. The Deep Carbon Observatory is going to study how much carbon there is in the earth and what happens to it. Another question is what form is the...
Friday, February 25, 2011
Where does gas come from?
Carbon! (We always love it when the answer is carbon.) The gas we use to power our cars comes from decomposing organic matter. What does that mean? All life has carbon in it -- this includes everything living from you and me to zebras, tapeworms, tulips and seaweed. Since all living things have carbon in them, they are referred to as organic matter. Non-organic matter includes things like rocks, water and metals. When something organic dies, it goes into the earth’s surface. For example, when a leaf falls off a tree, it settles on the ground. Over the next months, it slowly rots and...
Friday, February 11, 2011
How to Name an Element After Yourself
Here on the SuperSmart Carbon blog, I will talk about the elements a lot because "Carbon" is an element. SuperSmart Carbon is a blue guy with a green hat and in this blog, he looks like he is 1 1/2 inches high. He has two rings around him with six yellow spheres. Although cute, SuperSmart Carbon does not exactly look like elements in the real world. Elements are really, really, small. You cannot see them with the naked eye, or even with a microscope. Although you can't see elements, they are all around you. Everything is made up of elements: the computer you are reading this blog on, the table the computer sits on, the air you...
|
<urn:uuid:b5177112-be1e-4086-9d85-858522f9c4b9>
| 2.921875 | 735 |
Content Listing
|
Science & Tech.
| 66.67267 |
Air Mass: An extensive body of the atmosphere whose physical properties, particularly temperature and humidity, exhibit only small and continuous differences in the horizontal. It may extend over an area of several million square kilometres and over a depth of several kilometres.
Backing Wind: Counter-clockwise change of wind direction, in either hemisphere.
Beaufort Scale: Wind force scale, originally based on the state of the sea, expressed in numbers from 0 to 12.
Fetch: Distance along a large water surface trajectory over which a wind of almost uniform direction and speed blows.
Fog: Suspension of very small, usually microscopic water droplets in the air, generally reducing the horizontal visibility at the Earth's surface to less than 1 km.
Front: The interface or transition zone between air masses of different densities (temperature and humidity).
Gale Force Wind: Wind with a speed between 34 and 47 knots. Beaufort scale wind force 8 or 9.
Gust: Sudden, brief increase of the wind speed over its mean value.
Haze: Suspension in the atmosphere of extremely small, dry particles which are invisible to the naked eye but numerous enough to give the sky an opalescent appearance.
High: Region of the atmosphere where the pressures are high relative to those in the surrounding region at the same level.
Hurricane: Name given to a warm-core tropical cyclone with maximum surface winds of 118 km/h (64 knots) or greater in the North Atlantic, the Caribbean, the Gulf of Mexico and in the Eastern North Pacific Ocean.
Knot: Unit of speed equal to one nautical mile per hour (1.852 km/h).
Land Breeze: Wind of coastal regions, blowing at night from the land towards a large water surface as a result of the nocturnal cooling of the land surface.
Line Squall: Squall which occurs in a line.
Low: Region of the atmosphere in which the pressures are lower than those of the surrounding regions at the same level.
Mist: Suspension in the air of microscopic water droplets which reduce the visibility at the Earth's surface.
Pressure: Force per unit area exerted by the atmosphere on any surface by virtue of its weight; it is equivalent to the weight of a vertical column of air extending above a surface of unit area to the outer limit of the atmosphere.
Ridge: Region of the atmosphere in which the pressure is high relative to the surrounding region at the same level.
Sea Breeze: Wind in coastal regions, blowing by day from a large water surface towards the land as a result of diurnal heating of the land surface.
Sea Fog: Fog which forms in the lower part of a moist air mass moving over a colder surface (water).
Sea State: Local state of agitation of the sea due to the combined effects of wind and swell.
Squall: Atmospheric phenomenon characterized by an abrupt and large increase of wind speed with a duration of the order of minutes which diminishes suddenly. It is often accompanied by showers or thundershowers.
Storm Force Wind: Wind with a speed between 48 and 63 knots. Beaufort scale wind force 10 or 11.
Storm Surge: The difference between the actual water level under the influence of a meteorological disturbance (storm tide) and the level which would have been attained in the absence of the meteorological disturbance (i.e. the astronomical tide).
Swell: Any system of water waves which has left its generating area.
Thunderstorm: Sudden electrical discharge manifested by a flash of light and a sharp or rumbling sound. Thunderstorms are associated with convective clouds and are most often accompanied by precipitation in the form of rain showers, hail, occasionally snow, snow pellets, or ice pellets.
Tropical Cyclone: Generic term for a non-frontal synoptic-scale cyclone originating over tropical or sub-tropical waters with organized convection and definite cyclonic surface wind circulation.
Tropical Depression: Tropical cyclone with wind speeds up to 33 knots.
Tropical Disturbance: Light surface winds with indications of cyclonic circulation.
Tropical Storm: Tropical cyclone with maximum wind speeds of 34 to 47 knots.
Trough: An elongated area of relatively low atmospheric pressure.
Veering: Clockwise change of wind direction, in either hemisphere.
Visibility: Greatest distance at which a black object of suitable dimensions can be seen and recognized against the horizon sky during daylight, or could be seen and recognized during the night if the general illumination were raised to the normal daylight level.
Waterspout: A phenomenon consisting of an often violent whirlwind revealed by the presence of a cloud column or inverted cloud cone (funnel cloud), protruding from the base of a cumulonimbus, and of a bush composed of water droplets raised from the surface of the sea. Its behaviour is characterized by a tendency to dissipate upon reaching shore.
Wave Height: Vertical distance between the trough and crest of a wave.
Wave Period: Time between the passage of two successive wave crests past a fixed point.
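Several of the wind entries above form a ladder of speed thresholds. A small sketch (not part of the glossary) that classifies a speed in knots and converts units using the glossary's own definition of a knot:

```python
KMH_PER_KNOT = 1.852   # from the Knot entry above

def wind_category(knots):
    """Thresholds taken from the glossary entries above."""
    if knots >= 64:
        return "hurricane force"
    if knots >= 48:
        return "storm force"
    if knots >= 34:
        return "gale force"
    return "below gale force"

print(wind_category(40))                # gale force
# 64 kn is about 118.5 km/h, matching the Hurricane entry's 118 km/h figure.
print(round(64 * KMH_PER_KNOT, 1))      # 118.5
```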
|
<urn:uuid:c43d0fad-4182-427f-88ff-559827fbce8b>
| 3.484375 | 1,023 |
Structured Data
|
Science & Tech.
| 32.817154 |
Science Fair Project Encyclopedia
The sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. The inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples.
The notion of a sampling frequency applies only to samplers that take samples periodically; nothing forbids a sampler from taking samples at a non-periodic rate.
If a signal has a bandwidth of 100 Hz then to avoid aliasing the sampling frequency must be greater than 200 Hz.
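To see that folding numerically, here is a small sketch (mine, not part of the encyclopedia entry): a pure tone at frequency f, sampled at fs, is indistinguishable from a tone folded into the band from 0 to fs/2.

```python
def alias_frequency(f, fs):
    """Apparent frequency of a pure tone at f Hz when sampled at fs Hz.

    Sampling folds every frequency into the band [0, fs/2].
    """
    f_folded = f % fs
    return min(f_folded, fs - f_folded)

# The article's example: a 100 Hz component sampled below 200 Hz aliases ...
print(alias_frequency(100, 150))   # 50
# ... while sampling above twice the bandwidth leaves it intact.
print(alias_frequency(100, 250))   # 100
```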
In some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter. This process is known as oversampling.
In digital audio, common sampling rates are:
- 8,000 Hz - telephone, adequate for human speech
- 11,025 Hz
- 22,050 Hz - radio
- 44,100 Hz - compact disc
- 48,000 Hz - digital sound used for films and professional audio
- 96,000 or 192,000 Hz - DVD-Audio, some LPCM DVD audio tracks, BD-ROM (Blu-ray Disc) audio tracks, and HD-DVD (High-Definition DVD) audio tracks
In digital video, which uses a CCD as the sensor, the sampling rate is defined by the frame/field rate, rather than by the notional pixel clock. All modern TV cameras use CCDs, and the image sampling frequency is the repetition rate of the CCD integration period.
- 13.5 MHz - CCIR 601, D1 video
- Continuous signal vs. Discrete signal
- Digital control
- Sample and hold
- Sample (signal)
- Sampling (information theory)
- Signal (information theory)
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
|
<urn:uuid:d25b5562-8f30-4fd1-bc51-46f94956427e>
| 3.984375 | 414 |
Knowledge Article
|
Science & Tech.
| 55.315025 |
The life-giving ideas of chemistry are not reducible to physics. Or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. And, most importantly, they lose their chemical utility—their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. I'm thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation-reduction. As well as some theoretical ideas I've been involved in personally—through-bond coupling, orbital symmetry control, the isolobal analogy.
Consider the notion of oxidation state. If you had to choose two words to epitomize the same-and-not-the-same nature of chemistry, would you not pick ferrous and ferric? The concept evolved at the end of the 19th century (not without confusion with "valency"), when the reality of ions in solution was established. As did a multiplicity of notations—ferrous iron is iron in an oxidation state of +2 (or is it 2+?) or Fe(II). Schemes for assigning oxidation states (sometimes called oxidation numbers) adorn every introductory chemistry text. They begin with the indisputable: In compounds, the oxidation states of the most electronegative elements (those that hold on most tightly to their valence electrons), oxygen and fluorine for example, are –2 and –1, respectively. After that the rules grow ornate, desperately struggling to balance wide applicability with simplicity.
The oxidation-state scheme had tremendous classificatory power (for inorganic compounds, not organic ones) from the beginning. Think of the sky blue color of chromium(II) versus the violet or green of chromium(III) salts, the four distinctly colored oxidation states of vanadium. Oliver Sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. And not only boys.
But there was more to oxidation states than just describing color. Or balancing equations. Chemistry is transformation. The utility of oxidation states dovetailed with the logic of oxidizing and reducing agents—molecules and ions that with ease removed or added electrons to other molecules. Between electron transfer and proton transfer you have much of reaction chemistry.
I want to tell you how this logic leads to quite incredible compounds, but first let's look for trouble. Not for molecules—only for the human beings thinking about them.
Those Charges are Real, Aren't They?
Iron is not only ferrous or ferric, but also comes in oxidation states ranging from +6 (in BaFeO4) to –2 (in Fe(CO)4^2–, a good organometallic reagent).
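The bookkeeping behind such assignments is just charge balance: the oxidation states of all the atoms in a species must sum to its overall charge. A toy illustration (my sketch, not Hoffmann's):

```python
def oxidation_state(overall_charge, known):
    """Solve for one unknown oxidation state by charge balance.

    `known` lists (oxidation_state, count) pairs for every other
    atom or ligand in the species.
    """
    return overall_charge - sum(ox * n for ox, n in known)

# BaFeO4: Ba is +2, each O is -2, and the compound is neutral -> Fe(VI).
print(oxidation_state(0, [(+2, 1), (-2, 4)]))   # 6
# Fe(CO)4^2-: the CO ligands count as neutral -> Fe(-II).
print(oxidation_state(-2, [(0, 4)]))            # -2
```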
Is there really a charge of +6 on the iron in the first compound and a –2 charge in the carbonylate? Of course not, as Linus Pauling told us in one of his many correct (among some incorrect) intuitions. Such large charge separation in a molecule is unnatural. Those iron ions aren't bare—the metal center is surrounded by more or less tightly bound "ligands" of other simple ions (Cl– for instance) or molecular groupings (CN–, H2O, PH3, CO). The surrounding ligands act as sources or sinks of electrons, partly neutralizing the formal charge of the central metal atom. At the end, the net charge on a metal ion, regardless of its oxidation state, rarely lies outside the limits of +1 to –1.
Actually, my question should have been countered critically by another: How do you define the charge on an atom? A problem indeed. A Socratic dialogue on the concept would bring us to the unreality of dividing up electrons so they are all assigned to atoms and not partly to bonds. A kind of tortured pushing of quantum mechanical, delocalized reality into a classical, localized, electrostatic frame. In the course of that discussion it would become clear that the idea of a charge on an atom is a theoretical one, that it necessitates definition of regions of space and algorithms for divvying up electron density. And that discussion would devolve, no doubt acrimoniously, into a fight over the merits of uniquely defined but arbitrary protocols for assigning that density. People in the trade will recognize that I'm talking about "Mulliken population analysis" or "natural bond analysis" or Richard Bader's beautifully worked out scheme for dividing up space in a molecule.
What about experiment? Is there an observable that might gauge a charge on an atom? I think photoelectron spectroscopies (ESCA or Auger) come the closest. Here one measures the energy necessary to promote an inner-core electron to a higher level or to ionize it. Atoms in different oxidation states do tend to group themselves at certain energies. But the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom.
An oxidation state bears little relation to the actual charge on the atom (except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to +26). This doesn't stop the occasional theoretician today from making a heap of a story when the copper in a formal Cu(III) complex comes out of a calculation bearing a charge of, say, +0.51.
Nor does it stop oxidation states from being just plain useful. Many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. Oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to "bookkeep" electrons in the course of a reaction. Even if that electron, whether added or removed, spends a good part of its time on the ligands.
But enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. Let's look at some beautiful chemistry of extreme oxidation states.
Incredible, But True
Recently, a young Polish postdoctoral associate, Wojciech Grochala, led me to look with him at the chemical and theoretical design of novel high-temperature superconductors. We focused on silver (Ag) fluorides (F) with silver in oxidation states II and III. The reasoning that led us there is described in our forthcoming paper. For now let me tell you about some chemistry that I learned in the process. I can only characterize this chemistry as incredible but true. (Some will say that I should have known about it, since it was hardly hidden, but the fact is I didn't.)
Here is what Ag(II), unique to fluorides, can do. In anhydrous HF solutions it oxidizes Xe to Xe(II), generates C6F6+ salts from perfluorobenzene, takes perfluoropropylene to perfluoropropane, and liberates IrF6 from its stable anion. These reactions may seem abstruse to a nonchemist, but believe me, it's not easy to find a reagent that would accomplish them.
Ag(III) is an even stronger oxidizing agent. It oxidizes MF6– (where M=Pt or Ru) to MF6. Here is what Neil Bartlett at the University of California at Berkeley writes of one reaction: "Samples of AgF3 reacted incandescently with metal surfaces when frictional heat from scratching or grinding of the AgF3 occurred."
Ag(II), Ag(III) and F are all about equally hungry for electrons. Throw them one, and it's not at all a sure thing that the electron will wind up on the fluorine to produce fluoride (F–). It may go to the silver instead, in which case you may get some F2 from the recombination of F atoms.
Not that everyone can (or wants to) do chemistry in anhydrous HF, with F2 as a reagent or being produced as well. In a recent microreview, Thomas O'Donnell says (with some understatement), "... this solvent may seem to be an unlikely choice for a model solvent system, given its reactivity towards the usual materials of construction of scientific equipment." (And its reactivity with the "materials of construction" of human beings working with that equipment!) But, O'Donnell goes on to say, "... with the availability of spectroscopic and electrochemical equipment constructed from fluorocarbons such as Teflon and Kel-F, synthetic sapphire and platinum, manipulation of and physicochemical investigation of HF solutions in closed systems is now reasonably straightforward."
For this we must thank the pioneers in the field—generations of fluorine chemists, but especially Bartlett and Boris Zemva of the University of Ljubljana. Bartlett reports the oxidation of AgF2 to AgF4– (as KAgF4) using photochemical irradiation of F2 in anhydrous HF (made less acidic by adding KF to the HF). And Zemva used Kr2+ (in KrF2) to react with AgF2 in anhydrous HF in the presence of XeF6 to make XeF5+AgF4–. What a startling list of reagents!
To appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. You can find some of Neil Bartlett's commentary in the article that Wojciech and I wrote, and in an interview with him.
Charge It, Please
Chemists are always changing things. How to tune the propensity of a given oxidation state to oxidize or reduce? One way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. The syntheses of the silver fluorides cited above contain some splendid examples of this strategy. Let me use Bartlett's words again, just explaining that "electronegativity" gauges in some rough way the tendency of an atom to hold on to electrons. (High electronegativity means the electron is strongly held, low electronegativity that it is weakly held.)
It's easy to make a high oxidation state in an anion because an anion is electron-rich. The electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. That, in turn, is lower than it is in a cation. If I take silver and I expose it to fluorine in the presence of fluoride ion, in HF, and expose it to light to break up F2 into atoms, I convert the silver to silver(III), AgF4-. This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than AgF4- because the electronegativity in the neutral AgF3 is much higher than it is in the anion. If I can now take away a fluoride ion, and make a cation, I drive the electronegativity even further up. With such a cation, for example, AgF2+, I can steal the electron from PtF6- and make PtF6.... This is an oxidation that even Kr(II) is unable to bring about.
Simple, but powerful reasoning. And it works.
A World Record?
Finally, a recent oxidation-state curiosity: What is the highest oxidation state one could get in a neutral molecule? Pekka Pyykkö and coworkers suggest cautiously, but I think believably, that octahedral UO6, that is U(XII), may exist. There is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in UO6.
What wonderful chemistry has come—and still promises to come—from the imperfect logic of oxidation states!
© Roald Hoffmann
I am grateful to Wojciech Grochala, Robert Fay and Debra Rolison for corrections and comments. Thanks to Stan Marcus for suggesting the title of this column.
|
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
| 3.046875 | 2,582 |
Comment Section
|
Science & Tech.
| 42.922943 |
You have to like the attitude of Thomas Henning (Max-Planck-Institut für Astronomie). The scientist is a member of a team of astronomers whose recent work on planet formation around TW Hydrae was announced this afternoon. Their work used data from ESA’s Herschel space observatory, which has the sensitivity at the needed wavelengths for scanning TW Hydrae’s protoplanetary disk, along with the capability of taking spectra for the telltale molecules they were looking for. But getting observing time on a mission like Herschel is not easy and funding committees expect results, a fact that didn’t daunt the researcher. Says Henning, “If there’s no chance your project can fail, you’re probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off.”
I would guess the relevant powers that be are happy with this team’s gamble. The situation is this: TW Hydrae is a young star of about 0.6 Solar masses some 176 light years away. The proximity is significant: This is the closest protoplanetary disk to Earth with strong gas emission lines, some two and a half times closer than the next possible subjects, and thus intensely studied for the insights it offers into planet formation. Out of the dense gas and dust here we can assume that tiny grains of ice and dust are aggregating into larger objects and one day planets.
Image: Artist’s impression of the gas and dust disk around the young star TW Hydrae. New measurements using the Herschel space telescope have shown that the mass of the disk is greater than previously thought. Credit: Axel M. Quetz (MPIA).
The challenge of TW Hydrae, though, has been that the total mass of the molecular hydrogen gas in its disk has remained unclear, leaving us without a good idea of the particulars of how this infant system might produce planets. Molecular hydrogen does not emit detectable radiation, while basing a mass estimate on carbon monoxide is hampered by the opacity of the disk. For that matter, basing a mass estimate on the thermal emissions of dust grains forces astronomers to make guesses about the opacity of the dust, so that we’re left with uncertainty — mass values have been estimated anywhere between 0.5 and 63 Jupiter masses, and that’s a lot of play.
Error bars like these have left us guessing about the properties of this disk. The new work takes a different tack. While hydrogen molecules don’t emit measurable radiation, those hydrogen molecules that contain a deuterium atom, in which the atomic nucleus contains not just a proton but an additional neutron, emit significant amounts of radiation, with an intensity that depends upon the temperature of the gas. Because the ratio of deuterium to hydrogen is relatively constant near the Sun, a detection of hydrogen deuteride can be multiplied out to produce a solid estimate of the amount of molecular hydrogen in the disk.
The Herschel data allow the astronomers to set a lower limit for the disk mass at 52 Jupiter masses, the most useful part of this being that this estimate has an uncertainty ten times lower than the previous results. A disk this massive should be able to produce a planetary system larger than the Solar System, which scientists believe was produced by a much lighter disk. When Henning spoke about taking risks, he doubtless referred to the fact that this was only the second time hydrogen deuteride has been detected outside the Solar System. The pitch to the Herschel committee had to be persuasive to get them to sign off on so tricky a detection.
But 36 Herschel observations (with a total exposure time of almost seven hours) allowed the team to find the hydrogen deuteride they were looking for in the far-infrared. Water vapor in the atmosphere absorbs this kind of radiation, which is why a space-based detection is the only reasonable choice, although the team evidently considered the flying observatory SOFIA, a platform on which they were unlikely to get approval given the problematic nature of the observation. Now we have much better insight into a budding planetary system that is taking the same route our own system did over four billion years ago. What further gains this will help us achieve in testing current models of planet formation will be played out in coming years.
The paper is Bergin et al., “An Old Disk That Can Still Form a Planetary System,” Nature 493 (31 January 2013), pp. 644–646 (preprint). Be aware as well of Hogerheijde et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 334, no. 6054 (2011), p. 338. The latter, many of whose co-authors also worked on the Bergin paper, used Herschel data to detect cold water vapor in the TW Hydrae disk, with this result:
Our Herschel detection of cold water vapor in the outer disk of TW Hya demonstrates the presence of a considerable reservoir of water ice in this protoplanetary disk, sufficient to form several thousand Earth oceans worth of icy bodies. Our observations only directly trace the tip of the iceberg of 0.005 Earth oceans in the form of water vapor.
Clearly, TW Hydrae has much to teach us.
Addendum: This JPL news release notes that although a young star, TW Hydrae had been thought to be past the stage of making giant planets:
“We didn’t expect to see so much gas around this star,” said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. “Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters,” Bergin said.
|
<urn:uuid:a225f201-6f03-4503-bb76-bd2fde1838a7>
| 3.515625 | 1,210 |
Knowledge Article
|
Science & Tech.
| 46.712272 |
Consider four vectors ~F1, ~F2, ~F3, and ~F4, where their magnitudes are F1 = 43 N, F2 = 36 N, F3 = 19 N, and F4 = 54 N. Let θ1 = 120°, θ2 = −130°, θ3 = 20°, and θ4 = −67°, measured from the positive x axis with the counter-clockwise angular direction as positive.

What is the magnitude of the resultant vector ~F, where ~F = ~F1 + ~F2 + ~F3 + ~F4? Answer in units of N. What is the direction of this resultant vector ~F?

Note: Give the angle in degrees, use counterclockwise as the positive angular direction, between the limits
from the positive
x axis. Answer in units of °.

I worked out the first part of the question by using trigonometric rules. My X value = −5.68671 and my Y value = −33.5474. The magnitude came out to 34.026 N. I tried finding the direction by using θ = tan⁻¹(y/x) but I can't get the right answer.
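The missing step is the quadrant: tan⁻¹(y/x) returns the same angle for (x, y) and (−x, −y), and here both components are negative, so the resultant lies in the third quadrant. A two-argument arctangent resolves this; a quick Python check (my sketch, not part of the original problem set) reproduces the poster's numbers:

```python
import math

# Magnitudes (N) and directions (degrees CCW from the +x axis).
forces = [(43, 120), (36, -130), (19, 20), (54, -67)]

x = sum(f * math.cos(math.radians(t)) for f, t in forces)
y = sum(f * math.sin(math.radians(t)) for f, t in forces)

magnitude = math.hypot(x, y)   # ~34.03 N, matching the poster's value
# atan2 inspects the signs of y and x separately, so the third-quadrant
# answer comes out correctly as about -99.6 degrees; plain atan(y/x)
# returns about +80.4 degrees, which is off by 180.
angle = math.degrees(math.atan2(y, x))

print(round(magnitude, 2), round(angle, 1))   # 34.03 -99.6
```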
|
<urn:uuid:6424f806-15f1-4352-8ed4-15e67ff2dc91>
| 3.375 | 267 |
Q&A Forum
|
Science & Tech.
| 80.950653 |
An electron is a subatomic particle of spin 1/2. It couples to photons and is therefore electrically charged. It is a lepton with a rest mass of 9.109 × 10^-31 kg and an electric charge of -1.602 × 10^-19 C, which is the smallest known charge possible for an isolated particle (confined quarks have fractional charge). The electric charge of the electron, e, is used as a unit of charge in much of physics.
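The rest mass and charge quoted above are enough to recover a familiar benchmark, the electron's rest energy of about 511 keV (a quick cross-check of my own, not part of the article):

```python
m_e = 9.109e-31   # electron rest mass in kg (from above)
c = 2.998e8       # speed of light in m/s
e = 1.602e-19     # elementary charge in C (from above)

rest_energy_joules = m_e * c**2                  # E = m c^2
rest_energy_keV = rest_energy_joules / e / 1e3   # J -> eV -> keV
print(round(rest_energy_keV))                    # 511, the textbook value
```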
Electron pairs within an orbital system have opposite spins due to the Pauli exclusion principle; this characteristic spin pairing allows two electrons to occupy the same quantum orbital, as the opposing magnetic dipole moments induced by each of the electrons ensure that they are attracted together.
Current theories consider the electron as a point particle, as no evidence for internal structure has been observed.
As a theoretical construct, electrons have been able to explain other observed phenomena, such as the shell-like structure of an atom, energy distribution around an atom, and energy beams (electron and positron beams).
- ↑ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8
- ↑ Mauritsson, J.. "Electron filmed for the first time ever". Lunds Universitet. Retrieved 2008-09-17. http://www.atomic.physics.lu.se/research/attosecond_physics
- ↑ Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 981-02-3500-3.
|
<urn:uuid:e1790b63-dd2a-43d8-ae60-c3a435647df2>
| 3.859375 | 352 |
Knowledge Article
|
Science & Tech.
| 58.2225 |
Math is the basis for music, but for those of us who aren’t virtuosic at either, the connection isn’t always easy to grasp. Which is what makes the videos of Vi Hart, a “mathemusician” with a dedicated YouTube following, so wonderful. Hart explains complex phenomena--from cardioids to Carl Gauss--using simple (and often very funny) means.
As Maria Popova pointed out yesterday, Hart’s latest video is a real doozy. In it, she uses a music box and a Möbius strip to explain space-time, showing how the two axes of musical notation (pitch and tempo) correspond to space and time. Using the tape notation as a model for space-time, she cuts and folds it to show the finite ways you can slice and dice the axes. Then, she shows us how you can loop the tape into a continuous strip of twinkling notes:
If you fold space-time into a Mobius strip, you get your melody, and then the inversion, the melody played upside down. And then right side up again. And so on. So rather than folding and cutting up space-time, just cut and tape a little loop of space-time, to be played over, and over.
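In pitch-interval terms, the Möbius trick is just a reflection followed by concatenation. A toy rendering (my sketch, not Hart's notation):

```python
# A melody written as semitone offsets from some reference pitch.
melody = [0, 4, 7, 12]

# One trip around the Möbius strip plays the melody; the second trip
# plays it upside down (the pitch axis reflected); then the loop repeats.
inversion = [-p for p in melody]
one_full_loop = melody + inversion

print(one_full_loop)   # [0, 4, 7, 12, 0, -4, -7, -12]
```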
It’s a pretty magical observation, and it makes even me--the prototypical math dunce--wish I’d tried harder. Yet there’s still time: Hart works for the Khan Academy, a nonprofit that offers free educational videos about math, biology, and more. Check it out.
[H/t Brain Pickings]
|
<urn:uuid:a37519b2-ce71-4875-976f-9b4e9a28090c>
| 3.28125 | 346 |
Personal Blog
|
Science & Tech.
| 59.43732 |
The clock Command
The clock command has facilities for getting the current time, formatting time values, and scanning printed time strings to get an integer time value. The clock command was added in Tcl 7.5. Table 13-1 summarizes the clock command:
Table 13-1. The clock command.
|clock clicks||A system-dependent high resolution counter.|
|clock format value ?-format str?||Formats a clock value according to str.|
|clock scan string ?-base clock? ?-gmt boolean?||Parses date string and return seconds value. The clock value determines the date.|
|clock seconds||Returns the current time in seconds.|
The following command prints the current time:
clock format [clock seconds]
=> Sun Nov 24 14:57:04 1996
The clock seconds command returns the current time, in seconds since a starting epoch. The clock format command formats an integer value into a date string. It takes an optional argument that controls the format. The format strings contains % keywords that are replaced with the year, month, day, date, hours, minutes, and seconds, in various formats. The default string is:
%a %b %d %H:%M:%S %Z %Y
Tables 13-2 and 13-3 summarize the clock formatting strings:
Table 13-2. Clock formatting keywords.
|%%||Inserts a %. |
|%a||Abbreviated weekday name (Mon, Tue, etc.). |
|%A||Full weekday name (Monday, Tuesday, etc.). |
|%b||Abbreviated month name (Jan, Feb, etc.). |
|%B||Full month name. |
|%c||Locale specific date and time (e.g., Nov 24 16:00:59 1996).|
|%d||Day of month (01–31). |
|%H||Hour in 24-hour format (00–23). |
|%I||Hour in 12-hour format (01–12). |
|%j||Day of year (001–366). |
|%m||Month number (01–12). |
|%M||Minute (00–59). |
|%p||AM/PM indicator. |
|%S||Seconds (00–59). |
|%U||Week of year (00–52) when Sunday starts the week.|
|%w||Weekday number (Sunday = 0). |
|%W||Week of year (01–52) when Monday starts the week. |
|%x||Locale specific date format (e.g., Feb 19 1997).|
|%X||Locale specific time format (e.g., 20:10:13).|
|%y||Year without century (00–99).|
|%Y||Year with century (e.g. 1997).|
|%Z||Time zone name.|
Table 13-3. UNIX-specific clock formatting keywords.
|%D||Date as %m/%d/%y (e.g., 02/19/97).|
|%e||Day of month (1–31), no leading zeros. |
|%h||Abbreviated month name. |
|%n||Inserts a newline. |
|%r||Time as %I:%M:%S %p (e.g., 02:39:29 PM).|
|%R||Time as %H:%M (e.g., 14:39).|
|%t||Inserts a tab. |
|%T||Time as %H:%M:%S (e.g., 14:34:29).|
The clock clicks command returns the value of the system's highest resolution clock. The units of the clicks are not defined. The main use of this command is to measure the relative time of different performance tuning trials. The following command counts the clicks per second over 10 seconds, which will vary from system to system:
Example 13-1 Calculating clicks per second.
set t1 [clock clicks]
after 10000 ;# See page 218
set t2 [clock clicks]
puts "[expr ($t2 - $t1)/10] Clicks/second"
=> 1001313 Clicks/second
The clock scan command parses a date string and returns a seconds value. The command handles a variety of date formats. If you leave off the year, the current year is assumed.
Year 2000 Compliance
Tcl implements the standard interpretation of two-digit year values, which is that 70–99 are 1970–1999 and 00–69 are 2000–2069. Versions of Tcl before 8.0 did not properly deal with two-digit years in all cases. Note, however, that Tcl is limited by your system's time epoch and the number of bits in an integer. On Windows, Macintosh, and most UNIX systems, the clock epoch is January 1, 1970. A 32-bit integer can count enough seconds to reach forward into the year 2037, and backward to the year 1903. If you try to clock scan a date outside that range, Tcl will raise an error because the seconds counter will overflow or underflow. In this case, Tcl is just reflecting limitations of the underlying system.
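The two-digit-year convention is just a windowing rule; here is a sketch of it in Python (an illustration of the convention, not Tcl's actual code):

```python
def window_year(yy: int) -> int:
    """Map a two-digit year to a full year using the standard window:
    70-99 -> 1970-1999, 00-69 -> 2000-2069."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy if yy >= 70 else 2000 + yy

print(window_year(70), window_year(99), window_year(0), window_year(69))
# -> 1970 1999 2000 2069
```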
If you leave out a date, clock scan assumes the current date. You can also use the -base option to specify a date. The following example uses the current time as the base, which is redundant:
clock scan "10:30:44 PM" -base [clock seconds]
The date parser allows these modifiers: year, month, fortnight (two weeks), week, day, hour, minute, second. You can put a positive or negative number in front of a modifier as a multiplier. For example:
clock format [clock scan "10:30:44 PM 1 week"]
=> Sun Dec 01 22:30:44 1996
clock format [clock scan "10:30:44 PM -1 week"]
=> Sun Nov 17 22:30:44 1996
You can also use tomorrow, yesterday, today, now, last, this, next, and ago, as modifiers.
clock format [clock scan "3 years ago"]
=> Wed Nov 24 17:06:46 1993
Both clock format and clock scan take a -gmt option that uses Greenwich Mean Time. Otherwise, the local time zone is used.
clock format [clock seconds] -gmt true
=> Sun Nov 24 09:25:29 1996
clock format [clock seconds] -gmt false
=> Sun Nov 24 17:25:34 1996
|
<urn:uuid:f36d7530-13dd-4d6a-8426-ea739f255160>
| 3.765625 | 1,432 |
Documentation
|
Software Dev.
| 94.315313 |
|This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100).
The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined.
Examples of magnitude values for well-known objects are:
|Sun||-26.7 (about 400 000 times brighter than full Moon!)|
|Brightest Iridium flares||-8|
|Venus (at brightest)||-4.4|
|International Space Station||-2|
|Sirius (brightest star)||-1.44|
|Limit of human eye||+6 to +7|
|Limit of 10x50 binoculars||+9|
|Limit of Hubble Space Telescope||+30|
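The logarithmic scale translates directly into a brightness ratio: a difference of Δm magnitudes corresponds to a factor of 100^(Δm/5). A quick numeric check (the full Moon's magnitude of about −12.7 is an assumed value, not from the table above):

```python
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """Brightness ratio implied by a magnitude difference of (m_faint - m_bright)."""
    return 100 ** ((m_faint - m_bright) / 5)

print(round(brightness_ratio(5, 0)))           # 5 magnitudes -> exactly 100x
print(round(brightness_ratio(1, 0), 2))        # 1 magnitude  -> about 2.51x
print(round(brightness_ratio(-12.7, -26.7)))   # -> 398107, i.e. "about 400 000" (Moon vs. Sun)
```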
|
<urn:uuid:a13e5774-8a15-4ad6-bc01-def7c66a2edb>
| 4.25 | 260 |
Structured Data
|
Science & Tech.
| 60.330227 |
Range: Vancouver – Baja Calif.; depth: 6–18 (38) m.
Table of Contents
The Sea Grape
Commonly known as "sea grapes," Botryocladia pseudodichotoma (botryo = grape, cladia = branches) is an abundant member of the RHODOPHYTA (red algae). The following phylogeny consists of links to lists of common characteristics that justify Botryocladia's inclusion:
- thallus is 10-30 cm. tall
- elongate, pyriform (pear-shaped), sacchate (sack-like) branches
- sacchate branches are 4-7 cm long and 6-25 mm in diameter
- branches contain a colorless, acidic, polysaccharide and protein mucilage which makes them buoyant and therefore better able to compete for light
- 3 cell layers
- pigmented cortical cells
- unpigmented medium sized gelatinous cells
- unpigmented large gelatinous medullary cells (specialized gland cells cluster in groups of 10–20 on the inward-facing surface of the medullary cells and, in pseudodichotoma, are noticeably smaller than their neighbors). It is easy to view the secretory cells under a microscope by making cross-sections with a razor blade.
As with all Florideophyceae, B. pseudodichotoma has a tri-phasic life cycle: cells of the diploid tetrasporophyte undergo meiosis to create cruciate tetraspores (3.88 million/day). Each of the four spores can grow into a haploid gametophyte (male or female).
The mature male gametophyte emits spermatia which fertilize cells on the female gametophyte. Where fertilization has succeeded, a diploid carposporophyte grows on the female gametophyte.
The carposporophyte has a pore opening to the outside through which it releases diploid carpospores. These carpospores settle and grow into
|
<urn:uuid:5af214eb-c261-4fff-a47c-c2ca3a8e2822>
| 2.875 | 452 |
Knowledge Article
|
Science & Tech.
| 26.541283 |
Joined: 16 Mar 2004
|Posted: Tue Aug 04, 2009 2:40 pm Post subject: Immune Responses Jolted into Action by Nanohorns
|The immune response triggered by carbon nanotube-like structures could be harnessed to help treat infectious diseases and cancers, say researchers.
The way tiny structures like nanotubes can trigger sometimes severe immune reactions has troubled researchers trying to use them as vehicles to deliver drugs inside the body in a targeted way.
White blood cells can efficiently detect and capture nanostructures, so much research is focused on allowing nanotubes and similar structures to pass unmolested in the body.
But a French-Italian research team plans to use nanohorns, a cone-shaped variety of carbon nanotubes, to deliberately provoke the immune system.
They think that the usually unwelcome immune response could kick-start the body into fighting a disease or cancer more effectively.
To test their theory, Alberto Bianco and Hélène Dumortier at the CNRS Institute in Strasbourg, France, in collaboration with Maurizio Prato at the University of Trieste, Italy, gave carbon nanohorns to mouse white blood cells in a Petri dish. The macrophage cells' job is to swallow foreign particles.
After 24 hours, most of the macrophages had swallowed some nanohorns. But they had also begun to release reactive oxygen compounds and other small molecules that signal to other parts of the immune system to become more active.
The researchers think they could tune that cellular distress call to a particular disease or cancer, by filling the interior of nanohorns with particular antigens, like ice cream filling a cone.
"The nanohorns would deliver the antigen to the macrophages while also triggering a cascade of pro-inflammatory effects," Dumortier says. "This process should initiate an antigen-specific immune response."
"There is still a long way to go before this interesting approach might become safe and effective," says Ruth Duncan at Cardiff University , UK . "Safety would ultimately depend on proposed dose, the frequency of dose and the route of administration," she says.
Dumortier agrees more work is needed, but adds that the results so far suggest that nanohorns are less toxic to cells than normal nanotubes can be. "No sign of cell death was visible upon three days of macrophage culture in the presence of nanohorns," Dumortier says.
Recent headline-grabbing results suggest that nanotubes much longer than they are wide can cause similar inflammation to asbestos . But nanohorns do not take on such proportions and so would not be expected to have such an effect.
Journal reference: Advanced Materials (DOI: 10.1002/adma.200702753)
Source: New Scientist /...
Subscribe to the IoN newsletter.
|
<urn:uuid:5cade7be-722d-4875-86c2-cdb3dd43ad4f>
| 3.390625 | 593 |
Comment Section
|
Science & Tech.
| 32.083152 |
Atomic oxygen, a corrosive space gas, finds many applications on Earth.
An Atomic Innovation for Artwork
Oxygen may be one of the most common substances on the planet, but recent space research has unveiled a surprising number of new applications for the gas, including restoring damaged artwork.
It all started with a critical problem facing would-be spacecraft: the gasses just outside the Earth’s atmosphere are highly corrosive. While most oxygen atoms on Earth’s surface occur in pairs, in space the pair is often split apart by short-wave solar radiation, producing singular atoms. Because oxygen so easily bonds with other substances, it is highly corrosive in atomic form, and it gradually wears away the protective layering on orbiting objects such as satellites and the International Space Station (ISS).
To combat this destructive gas, NASA recreated it on Earth and applied it to different materials to see what would prove most resistant. The coatings developed through these experiments are currently used on the ISS.
During the tests, however, scientists also discovered applications for atomic oxygen that have since proved a success in the private sector.
Breathing New Life into Damaged Art
In their experiments, NASA researchers quickly realized that atomic oxygen interacted primarily with organic materials. Soon after, they partnered with churches and museums to test the gas’s ability to restore fire-damaged or vandalized art. Atomic oxygen was able to remove soot from fire-damaged artworks without altering the paint.
It was first tested on oil paintings: In 1989, an arson fire at St. Alban’s Episcopal Church in Cleveland nearly destroyed a painting of Mary Magdalene. Although the paint was blistered and charred, atomic oxygen treatment plus a reapplication of varnish revitalized it. And in 2002, a fire at St. Stanislaus Church (also in Cleveland) left two paintings with soot damage, but atomic oxygen removed it.
Buoyed by the successes with oil paints, the engineers also applied the restoration technique to acrylics, watercolors, and ink. At Pittsburgh’s Carnegie Museum of Art, where an Andy Warhol painting, Bathtub, had been kissed by a lipstick-wearing vandal, a technician successfully removed the offending pink mark with a portable atomic oxygen gun. The only evidence that the painting had been treated—a lightened spot of paint—was easily restored by a conservator.
A Genuine Difference-maker
When the successes in art restoration were publicized, forensic analysts who study documents became curious about using atomic oxygen to detect forgeries. They found that it can assist analysts in figuring out whether important documents such as checks or wills have been altered, by revealing areas of overlapping ink created in the modifications.
The gas has biomedical applications as well. Atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery. Such contaminants contribute to inflammation that can lead to joint loosening and pain, or even necessitate removing the implant. Previously, there was no known chemical process that fully removed these inflammatory toxins without damaging the implants. Atomic oxygen, however, can oxidize any organic contaminants and convert them into harmless gases, leaving a contaminant-free surface.
Thanks to NASA’s work, atomic oxygen—once studied in order to keep it at bay in space—is being employed in surprising, powerful ways here on Earth.
To learn more about this NASA spinoff, read the original article
|
<urn:uuid:672eb588-eeaa-401f-81e0-1a0e5c9d984f>
| 3.703125 | 714 |
Knowledge Article
|
Science & Tech.
| 27.007077 |
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong
THE ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest recorded bird flight in 1975 when it was sucked into the engine of a plane flying at 11,264 metres.
Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
|
<urn:uuid:ad635de7-8a5e-4c98-be53-8c463594f176>
| 3.28125 | 207 |
Truncated
|
Science & Tech.
| 59.637347 |
New Zealand grasshoppers belong to the subfamily Catantopinae. A number of species are present including the common small Phaulacridium of the more coastal areas, the larger species of Sigaus of the tussock lands, and the alpine genera Paprides and Brachaspis, which include some quite large species. These inhabit the alpine areas of the South Island, some preferring scree and others tussock areas. They apparently survive the rigorous alpine winter conditions both as nymphs and as adults, and it is possible that they can withstand complete freezing. All species are plant feeders and lay batches of eggs or pods in short holes in the ground which they excavate with their abdomen. After hatching, the young nymphs moult four or five times before becoming adult.
by Graeme William Ramsay, M.SC., PH.D., Entomology Division, Department of Scientific and Industrial Research, Nelson.
|
<urn:uuid:feefb68d-09c3-45d7-bc1b-52166c84268c>
| 3.515625 | 196 |
Knowledge Article
|
Science & Tech.
| 45.262532 |
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on.
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.
The hyperbolic functions are:
sinh x = (e^x − e^(−x)) / 2
cosh x = (e^x + e^(−x)) / 2
tanh x = sinh x / cosh x
with coth, sech, and csch defined as the corresponding reciprocals.
Via complex numbers the hyperbolic functions are related to the circular functions as follows:
sinh x = −i sin(ix), cosh x = cos(ix), tanh x = −i tan(ix),
where i is the imaginary unit, defined by i^2 = −1.
Note that, by convention, sinh^2 x means (sinh x)^2, not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. Another notation for the hyperbolic cotangent function is ctnh x, though coth x is far more common.
Hyperbolic sine and cosine satisfy the identity
cosh^2 x − sinh^2 x = 1,
which is similar to the Pythagorean trigonometric identity.
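The identity cosh^2 x − sinh^2 x = 1 is easy to confirm numerically (a spot check, not a proof):

```python
import math

# Spot-check cosh^2 x - sinh^2 x = 1 at a few points.
for x in (-2.0, 0.0, 0.5, 1.3, 5.0):
    assert abs(math.cosh(x)**2 - math.sinh(x)**2 - 1) < 1e-9
print("cosh^2 x - sinh^2 x = 1 holds at all sample points")
```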
It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B.
For a full list of integrals of hyperbolic functions, see list of integrals of hyperbolic functions
In the above expressions, C is called the constant of integration.
It is possible to express the above functions as Taylor series:
sinh x = x + x^3/3! + x^5/5! + x^7/7! + ...
cosh x = 1 + x^2/2! + x^4/4! + x^6/6! + ...
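A truncated series can be compared against the library functions; here are the first few terms of the standard sinh series, x + x^3/3! + x^5/5! + ... (my own sketch):

```python
import math

def sinh_series(x: float, terms: int = 8) -> float:
    # sinh x = sum over n >= 0 of x^(2n+1) / (2n+1)!, truncated to `terms` terms
    return sum(x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

print(abs(sinh_series(0.8) - math.sinh(0.8)) < 1e-12)  # True near 0, where the series converges fast
```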
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle.
The parametrization (cosh t, sinh t) of the hyperbola follows from the identity cosh^2 t − sinh^2 t = 1 and the property that cosh t ≥ 1 for all t.
The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent).
The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola.
The function cosh x is an even function, that is symmetric with respect to the y-axis.
The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields, for example, the addition theorems
sinh(x + y) = sinh x cosh y + cosh x sinh y
cosh(x + y) = cosh x cosh y + sinh x sinh y
the "double angle formulas"
and the "half-angle formulas"
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x).
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.
From the definitions of the hyperbolic sine and cosine, we can derive the following identities:
e^x = cosh x + sinh x
e^(−x) = cosh x − sinh x
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials.
Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
e^(ix) = cos x + i sin x
e^(−ix) = cos x − i sin x
so that, for example, cosh(ix) = cos x and sinh(ix) = i sin x.
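Python's cmath module can verify the complex-argument relationships, e.g. the standard identities sinh z = −i sin(iz) and cosh z = cos(iz), for an arbitrary complex point:

```python
import cmath

z = 0.7 + 0.2j
assert abs(cmath.sinh(z) - (-1j) * cmath.sin(1j * z)) < 1e-12  # sinh z = -i sin(iz)
assert abs(cmath.cosh(z) - cmath.cos(1j * z)) < 1e-12          # cosh z = cos(iz)
print("complex-argument identities hold")
```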
|
<urn:uuid:34eefbfb-968b-4240-9caa-0182a3ca0559>
| 4.0625 | 1,119 |
Knowledge Article
|
Science & Tech.
| 37.831287 |
This is one of my favorite stories. In short, one of John Burk’s (@occam98) students wanted to launch a space balloon. If you want all the details, this post at Quantum Progress pretty much says it all. The part that makes this story so cool is that it was the student who did all of the set up and fundraising and stuff. Love it. Oh, and the student is apparently named “M.” I wonder if the student is either one of the Men in Black or a James Bond scientist.
Ok, you know what I do, right? I need to add something. Here is a very nice video of the space balloon launch.
You know I like to use pictures for data from time to time, right? One problem is that I don’t know much about cameras. There, I said it. Really, almost all of my photos are made with my phone. That is what makes the phone so great, you almost always have your camera with you.
To make these pictures useful for physics, it helps to know the angular size of the picture. Here is a diagram so you can see what I am talking about:
There are 20 seconds left on the clock. Your team is down by 2 points such that a field goal would win it. The ball is spotted on the hash mark at the 15 yard line and it is first down. What to do? Should you call a run play so that the ball is in the center of the field? Or should the ball be kicked from where it is?
So there is the question. Is it better to kick the ball from an angle or move back and kick it head on? Let me just look at one aspect of this situation. What is the angular size of the goal post from the location of the kicker? I am not looking at the height of the horizontal goal post – I will assume the kicker can get the ball over this.
This was on reddit. It is an image from google maps showing an aircraft. Not surprising, there are lots of aircraft that get caught by the cameras in mid flight. But what about the colors? Is this some rainbow-unicorn plane? I am not sure of the exact details, but this rainbow effect is from the camera. I am not sure why, but this camera is capturing red, green, and blue (and probably white) colors separately at different times. Here is the actual link to the google map.
The first thing that comes to my mind is – I wonder how fast the plane was moving. That question is difficult to answer because I don’t know how much time was between each ‘color filter’ photo. Oh well, I will proceed anyway. First, some info. Reading through the very insightful reddit comments, it seems the commenters are certain that the plane is an Embraer ERJ 145. Really, all I need is the length. Wikipedia lists it with a 29.87 m length and a 20.04 meter wingspan. From the image, does the rainbow plane have the same ratio of length to wingspan as listed?
Ok, not quite the same. Maybe that is close enough. The one thing is that the image clearly has some distortion. Either the plane is turning or the image has been adjusted to make it look like it is a top down view. Well, surfing around a bit I couldn’t find another plane that was close in length/wingspan ratio. I am going with the ERJ 145.
If I scale the image from the length of the plane, how “far” between the different colors? Here is a plot of the 4 color images.
Note that for this image, I put the axis along the fuselage of the plane. The points are the locations of the back tip of one of the wings. The first cool thing that I can learn from this is that there must have been a cross-wind. The aircraft is not traveling in the direction that it is heading. Of course this is not uncommon, planes do this all the time. Oh, let me note that I am assuming the aircraft is far enough away from the satellite that the multiple colors are due to the motion of the plane and not the satellite. This is probably a good assumption since the houses below are not rainbow colored.
What about the speed? If it is moving at a constant velocity, then:
I know the changes in position. So, let me just call the change in time 1 cs (cs for camera-second). This means that the plane’s speed would be 1.8 m/cs. Ok, let’s just play a game. What if the time between frames was 1/100th of a second? That would mean that the speed would be 180 m/s or 400 mph. That is possible since wikipedia lists the max speed at around 550 mph. If the time between images is 1/30th of a second (I picked that because that is a common frame rate for video) then the speed would be 54 m/s (120 mph). That doesn’t seem too low. I would imagine the landing speed would be around that speed (or maybe a little lower – but what do I know?)
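The speed estimate is just displacement over time for a few candidate frame intervals; a quick sketch (the 1.8 m per "camera-second" displacement is the post's measured value, and the frame intervals are guesses):

```python
displacement_m = 1.8     # measured shift between color frames, in meters
MPH_PER_MS = 2.23694     # 1 m/s expressed in mph

for dt in (1/100, 1/30):  # candidate times between color exposures, in seconds
    v = displacement_m / dt
    print(f"dt = {dt:.4f} s -> {v:.0f} m/s = {v * MPH_PER_MS:.0f} mph")
# dt = 0.0100 s -> 180 m/s = 403 mph
# dt = 0.0333 s -> 54 m/s = 121 mph
```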
But WAIT – there is more. Can I determine the altitude of the plane? Well, suppose I have two objects of two different lengths that are two different distances from a camera. Here is an example.
My notation here looks a little messy, but both objects have a length (L) and a distance from the camera (r). They also have an angular size, denoted by θ. About angular size, I can write the following (using the small-angle approximation): θ1 = L1/r1 and θ2 = L2/r2.
I don’t know the distances from the camera and I don’t know the angles. But, I can sort of measure the angles. Suppose I measure the number of pixels each object takes up in the photo. Then the angular size could be written as:
Where p1 is the pixel size of an object and c is some constant for that particular camera. Now I can re-write these angular equations and divide so that I get rid of the c.
I can get values for all the stuff on the right of that equation. Here are my values (object 1 is the plane and object 2 is the background – really, I will just use the scale provided by google maps). Oh, one more thing. I am not going to measure the pixel length but rather some arbitrary length of the same scale.
L1 = 29.87 m
p1 = 1 unit
L2 = 10 m
p2 = 0.239 unit
Putting in my values above I get the ratio of the distances from the camera as:
Now I just need one of the r‘s – ideally it would be r2 (the distance the camera is from the ground). Wikipedia says that the satellite images are typically taken from an aircraft flying 800-1500 feet high. So, suppose r2 = 1500 feet (457 meters). In this case the altitude of the rainbow plane would be:
1000 feet would mean that the rainbow plane is probably landing (or taking off). It looks like Teterboro Airport is quite close and the rainbow plane is heading that way. I claim landing.
So, here is what I can say:
Airspeed. Really, I don’t have a definite answer. Like I said before it depends on the camera rate. If I had to pick (and I don’t) I would say that the rainbow plane is going 120 mph and the time between different colored images is 1/30th of a second.
Altitude. If I go with the higher value of the typical google-map planes (like the google map cars but with wings) then the altitude would be around 1000 feet. This lower altitude is why I used the lower value for the airspeed.
Windspeed. Now I am changing my answer for windspeed. I am going to pretend like there is no wind. The perpendicular motion of the colored images could be due to the motion of the google-map plane.
|
<urn:uuid:ab1372c3-67f1-40bb-a97d-79e3d444774a>
| 2.8125 | 1,668 |
Personal Blog
|
Science & Tech.
| 77.914116 |
If you really want to hit a home run with a global warming story, manage to link climate change to the beloved rainforest of the Amazon. The rainforest there is considered by many to be the “lungs of the planet,” the rainforest surely contains a cure for any ailment imaginable, all species in the place are critical to the existence of life on the Earth, and the people of the Amazon are surely the most knowledgeable group on the planet regarding how to care for Mother Earth.
The global warming alarmists have taken full advantage of the Amazon and they are very quick to suggest that the Amazon ecosystem is extremely sensitive to climate change. Furthermore, not only can climate change impact the Amazon, but global climate itself is strongly linked to the state of the Amazon rainforest.
But, as usual, there is more to this story than meets to eye (or, rather, the press).
For instance, a headline last year from USA Today sounded the alarm declaring “Amazon hit by climate chaos of floods, drought”. In the first few sentences, we learn that “Across the Amazon basin, river dwellers are adding new floors to their stilt houses, trying to stay above rising floodwaters that have killed 44 people and left 376,000 homeless. Flooding is common in the world’s largest remaining tropical wilderness, but this year the waters rose higher and stayed longer than they have in decades, leaving fruit trees entirely submerged. Only four years ago, the same communities suffered an unprecedented drought that ruined crops and left mounds of river fish flapping and rotting in the mud. Experts suspect global warming may be driving wild climate swings that appear to be punishing the Amazon with increasing frequency.”
This piece is typical of thousands of other news stories about calamities in the Amazon that are immediately blamed on global warming. Other headlines quickly found include “Ocean Warming - Not El Niño - Drove Severe Amazon Drought in 2005” or “Amazon Droughts Will Accelerate Global Warming” or “Amazon Could Shrink by 85% due to Climate Change, Scientists Say.” Notice that climate change can cause droughts and floods in the Amazon PLUS droughts in the Amazon can cause global warming (by eliminating trees that could uptake atmospheric carbon dioxide). Throughout many of these stories, the words “delicate” and “irreversible” are used over and over.
As we have discussed countless times in other essays, climate models are predicting the greatest warming in the mid-to-high latitudes of the Northern Hemisphere during the winter season. The Amazon is not located in a part of the Earth expected to have substantial warming due to the buildup of greenhouse gases. Somewhat surprisingly, the IPCC Technical Summary comments “The sign of the precipitation response is considered less certain over both the Amazon and the African Sahel. These are regions in which there is added uncertainty due to potential vegetation-climate links, and there is less robustness across models even when vegetation feedbacks are not included.” Basically, the models are not predicting any big changes in precipitation in the Amazon due to the change in atmospheric composition, nor are the models predicting any big change in temperature. Should the people of the Amazon deforest the place down to a parking lot, there is evidence that precipitation would decrease. There is a lot going on in the Amazon – deforestation, elevated carbon dioxide levels, global warming, and all these reported recent droughts and floods. One would think that the entire place is a wreck!
A recent article in Hydrological Processes might come as a huge surprise to the climate change crusade. The first two sentences of the abstract made this one an immediate favorite at World Climate Report. The author has the nerve to write “Rainfall and river indices for both the northern and southern Amazon were used to identify and explore long-term climate variability on the region. From a statistical analysis of the hydrometeorological series, it is concluded that no systematic unidirectional long-term trends towards drier or wetter conditions have been identified since the 1920s.” We should leave it at that!
The author is José Marengo with Brazil’s “Centro de Ciência do Sistema Terrestre/Instituto Nacional de Pesquisas Espaciais”; the work was funded by the Brazilian Research Council and the “UK Global Opportunity Fund-GOF-Dangerous Climate Change”. Very interesting – we suspect the “Dangerous Climate Change” group was not happy with the first two sentences of the abstract.
José Marengo begins the piece noting “The main objective of this study is the assessment of long-term trends and cycles in precipitation in the entire Amazon basin, and over the northern and southern sections. It was addressed by analysing rainfall and streamflow indices, dating from the late 1920s”. The Figure 1 shows his subregions within the greater Amazon basin.
Figure 1. Orientation map showing the rainfall network used on this study for (a) northern Amazonia (NAR) and (b) southern Amazonia (SAR) (from Marengo, 2009).
The bottom line here is amazing. The author writes “The analysis of the annual rainfall time series in the Amazon represented by the NAR and SAR indices indicates slight negative trends for the northern Amazon and positive trends for the southern Amazon. However, they are weak and significant at 5% only in the southern Amazon” (Figure 2). So, nothing is happening out of the ordinary in the north and the south is getting wetter. There is definitely variability around the weak trends, but it all seems to be related to natural variability, not deforestation or global warming.
Figure 2. Historical hydrometeorological indices for the Amazon basin. They are expressed as anomalies normalized by the standard deviation from the long-term mean, (a) northern Amazonia, (b) southern Amazonia. The thin line represents the trend. The broken line represents the 10-year moving average (from Marengo, 2009).
Marengo notes “Since 1929, long-term tendencies and trends, some of them statistically significant, have been detected in a set of regional-average rainfall time series in the Amazon basin and supported by the analysis of some river streamflow time series. These long-term variations are more characteristic of decadal and multi-decadal modes, indicators of natural climate variability, rather than any unidirectional trend towards drier conditions (as one would expect, due to increased deforestation or to global warming).” [emphasis added]
José – nice work, have a Cuervo on us!!!
Marengo, J.A. 2009. Long-term trends and cycles in the hydrometeorology of the Amazon basin since the late 1920s. Hydrological Processes, 23, 3236-3244.
Consider the following in Haskell:
let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p"
Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes. In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in. But the version in quotes needs the quotation marks to be 'escaped' using backslash, so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character and use C's convenient %c mechanism to print it. For example:
Again we were lucky, C provides this great %c mechanism. What do you need in a language to be sure you can write a self-replicator?
It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.
We'll use capital letters to represent programs. Typically these are 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x). P is a program and [P] is a function. We'll consider both programs that take arguments like the P I just mentioned, and also programs, R, that take no arguments, so [R] is simply the output or return value of the program R.
Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x instead. In other words [P(x)] = [P](x). P(x) is, of course, another program. There are also many ways of implementing P(x). We could evaluate [P](x) and write a program that simply prints this out or returns it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whatever we choose is irrelevant to the following discussion. So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.
With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given the input X, P acts just like an interpreter for the programming language: it outputs the same thing an interpreter would, given input X.
So here's a proof:
Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P]. So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)).
Now the fun starts:
[P](Q0(Q0)) = [Q0](Q0) (by definition of Q0)
= [Q0(Q0)] (by definition of P(x))
In other words Q0(Q0) is our fixed point.
So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run! What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language. We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too.
So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.
Another example of an application of these methods is Futamura's theorem which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus.
Gold has been known since prehistory. The symbol is derived from Latin aurum (gold).
Ionization energies: AuI 9.2 eV, AuII 20.5 eV, AuIII 30.0 eV.
Absorption lines of AuI
In the sun, the equivalent width of AuI 3122(1) is 0.005.
Behavior in non-normal stars
The probable detection of Au I was announced by Jaschek and Malaroda (1970) in one Ap star of the Cr-Eu-Sr subgroup. Fuhrmann (1989) detected Au through the ultimate line of Au II at 1740(2) in several Bp stars of the Si and Ap stars of the Cr-Eu-Sr subgroups. The presence of Au seems to be associated with that of platinum and mercury.
Au has one stable isotope, Au197, and 20 short-lived isotopes and isomers.
Au can only be produced by the r process.
Published in "The Behavior of Chemical Elements in Stars", Carlos Jaschek and Mercedes Jaschek, 1995, Cambridge University Press.
Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96
Harris, M. P.; Wanless, S.; Rothery, P. 2000. Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96. Atlantic Seabirds, 2, 133-150. Full text not available from this repository.
On the Isle of May between 1986 and 1996, the average adult survival of Shags Phalacrocorax aristotelis was 82.1%, Common Guillemots Uria aalge 95.2%, Razorbills Alca torda 90.5%, Puffins Fratercula arctica 91.6% and Kittiwakes Rissa tridactyla 88.2%. Shags, Razorbills and Puffins all had a single year of exceptionally low survival but these years did not coincide. In contrast, Kittiwake survival declined significantly over the period and there was evidence that substantial non-breeding occurred in several years. Breeding success of Kittiwakes also declined, which gives rise to concern for its future status. Given a high enough level of resighting, return rates (the proportion of birds known to be alive one year that were seen the next year) on a year-by-year basis provide a reasonable indication of relative changes in adult survival.
|Programmes:||CEH Programmes pre-2009 publications > Other|
|CEH Sections:||_ Biodiversity & Population Processes|
|Additional Keywords:||Shag, Phalacrocorax aristotelis, Common Guillemot, Uria aalge, Razorbill, Alca torda, Puffin, Fratercula arctica, Kittiwake, Rissa tridactyla|
|NORA Subject Terms:||Zoology|
|Date made live:||08 Dec 2008 21:30|
I’ve been looking for a good, easy to read document outlining the latest climate science research and putting it in context for Copenhagen and I think I’ve found it.
Today in Sydney, the Climate Change Research Centre, a unit of the University of New South Wales, released The Copenhagen Diagnosis. It’s free to download or view online in a nice rich text format so credit to the centre for making it accessible in multiple attractive formats. But most praise has to be reserved for the 26 contributing authors who have laid out the science to make it easy to understand for a layman like myself. Chapters cover aspects of climate science including “the atmosphere”, “permafrost and hydrates” and “global sea level”.
Throughout are scattered common questions about climate change and answers designed to clear up confusion. An example: “Are we just in a natural warming phase, recovering from the ‘little ice age’?”
The document, once pictures and the reference section are included, is a slim 50 pages. If you want something to get yourself up to speed on the science ahead of Copenhagen, this could well be the document to download. It's even better if you have a colleague willing to run across the road and get it bound for you as I have!
The executive summary of the Copenhagen Diagnosis, which I've excerpted below, gives the basics you need to know if even 50 pages is too much to handle as we head into the highly stressful (for everyone other than academics) end-of-year period.
The diplomats and politicians soon to board flights to Denmark could do worse than slip a copy of The Copenhagen Diagnosis into their cabin luggage.
The most significant recent climate change findings are:
Surging greenhouse gas emissions: Global carbon dioxide emissions from fossil fuels in 2008 were nearly 40% higher than those in 1990. Even if global emission rates are stabilized at present-day levels, just 20 more years of emissions would give a 25% probability that warming exceeds 2°C, even with zero emissions after 2030. Every year of delayed action increases the chances of exceeding 2°C warming.
Recent global temperatures demonstrate human-induced warming: Over the past 25 years temperatures have increased at a rate of 0.19°C per decade, in very good agreement with predictions based on greenhouse gas increases. Even over the past ten years, despite a decrease in solar forcing, the trend continues to be one of warming. Natural, short-term fluctuations are occurring as usual, but there have been no significant changes in the underlying warming trend.
Acceleration of melting of ice-sheets, glaciers and ice-caps: A wide array of satellite and ice measurements now demonstrate beyond doubt that both the Greenland and Antarctic ice-sheets are losing mass at an increasing rate. Melting of glaciers and ice-caps in other parts of the world has also accelerated since 1990.

Rapid Arctic sea-ice decline: Summer-time melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models.
Current sea-level rise underestimated: Satellites show recent global average sea-level rise (3.4 mm/yr over the past 15 years) to be ~80% above past IPCC predictions. This acceleration in sea-level rise is consistent with a doubling in contribution from melting of glaciers, ice caps, and the Greenland and West-Antarctic ice-sheets.
Sea-level predictions revised: By 2100, global sea-level is likely to rise at least twice as much as projected by Working Group 1 of the IPCC AR4; for unmitigated emissions it may well exceed 1 meter. The upper limit has been estimated as ~ 2 meters sea level rise by 2100. Sea level will continue to rise for centuries after global temperatures have been stabilized, and several meters of sea level rise must be expected over the next few centuries.
Delay in action risks irreversible damage: Several vulnerable elements in the climate system (e.g. continental ice-sheets, Amazon rainforest, West African monsoon and others) could be pushed towards abrupt or irreversible change if warming continues in a business-as-usual way throughout this century. The risk of transgressing critical thresholds (’tipping points’) increases strongly with ongoing climate change. Thus waiting for higher levels of scientific certainty could mean that some
tipping points will be crossed before they are recognized.
The turning point must come soon: If global warming is to be limited to a maximum of 2 °C above pre-industrial values, global emissions need to peak between 2015 and 2020 and then decline rapidly. To stabilize climate, a decarbonized global society — with near-zero emissions of CO2 and other long-lived greenhouse gases — needs to be reached well within this century. More specifically, the average annual per-capita emissions will have to shrink to well under 1 metric ton CO2 by 2050. This is 80-95% below the per-capita emissions in developed nations in 2000.
Classifying Critical Points
So let's say we've got a critical point $x_0$ of a multivariable function $f$. That is, a point where the differential $df(x_0)$ vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We'll assume here that $f$ is twice continuously differentiable in some region around $x_0$.
The analogue of the second derivative for multivariable functions is the second differential $d^2f$. This function assigns to every point $x$ a bilinear function of two displacement vectors $u$ and $v$, and it measures the rate at which the directional derivative in the direction of $v$ is changing as we move in the direction of $u$. That is,

$\displaystyle d^2f(x)(u,v) = \left[D_u(D_v f)\right](x)$

If we choose coordinates on $\mathbb{R}^n$ given by an orthonormal basis $\{e_1,\dots,e_n\}$, we can write the second differential in terms of coordinates

$\displaystyle d^2f(x)(u,v) = \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x^i\,\partial x^j}\bigg|_x\, u^i v^j$

The matrix $H_{ij} = \frac{\partial^2 f}{\partial x^i\,\partial x^j}$ is often called the “Hessian” of $f$ at the point $x$.
As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.
So let's go back and assume we're working with such a basis. This means that our second partial derivatives are particularly simple. We find that for $i \neq j$ we have

$\displaystyle \frac{\partial^2 f}{\partial x^i\,\partial x^j} = 0$

and for $i = j$, the second partial derivative is an eigenvalue

$\displaystyle \frac{\partial^2 f}{\partial x^i\,\partial x^i} = \lambda_i$

which we can assume (without loss of generality) are nondecreasing. That is, $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$.
Now, if all of these eigenvalues are positive at a critical point $x_0$, then the Hessian is positive-definite. That is, given any direction $v$ we have $d^2f(x_0)(v,v) > 0$. On the other hand, if all of the eigenvalues are negative, the Hessian is negative-definite; given any direction $v$ we have $d^2f(x_0)(v,v) < 0$. In the former case, we'll find that $f$ has a local minimum in a neighborhood of $x_0$, and in the latter case we'll find that $f$ has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at $x_0$ we'll call a “saddle” (sketch the graph of $x^2 - y^2$ near the origin to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can't be a local extremum.
We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call “generic” critical points, since if we “wiggle” the function a bit (while maintaining a critical point at $x_0$) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test. Unfortunately, the analogy doesn't extend, in that the sign of the Hessian determinant isn't instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign — denoting a local maximum or a local minimum — while a negative determinant denotes eigenvalues of different signs — denoting a saddle. This much is included in multivariable calculus courses, although usually without a clear explanation of why it works.
So, given a direction vector $v$ so that $d^2f(x_0)(v,v) > 0$: since $f$ is twice continuously differentiable, $d^2f(x)(v,v)$ depends continuously on $x$, so there will be some neighborhood $N$ of $x_0$ so that $d^2f(x)(v,v) > 0$ for all $x \in N$. In particular, there will be some range of $t$ so that $x_0 + tv \in N$. For any such point we can use Taylor's theorem with the second-order (Lagrange) remainder to tell us that

$\displaystyle f(x_0 + tv) = f(x_0) + df(x_0)(tv) + \tfrac{1}{2}\,d^2f(\xi)(tv,tv) = f(x_0) + \tfrac{t^2}{2}\,d^2f(\xi)(v,v)$

for some $\xi$ on the segment between $x_0$ and $x_0 + tv$, where the first-order term vanishes because $df(x_0) = 0$. And from this we see that $f(x) > f(x_0)$ for every $x = x_0 + tv$ that lies in $N$. A similar argument shows that if $d^2f(x_0)(v,v) < 0$ then $f(x) < f(x_0)$ for any $x$ near $x_0$ in the direction of $v$.
Now if the Hessian is positive-definite then every direction $v$ from $x_0$ gives us $d^2f(x_0)(v,v) > 0$, and so every point $x$ near $x_0$ satisfies $f(x) > f(x_0)$. If the Hessian is negative-definite, then every point $x$ near $x_0$ satisfies $f(x) < f(x_0)$. And if the Hessian has both positive and negative eigenvalues then within any neighborhood of $x_0$ we can find some directions in which $f(x) > f(x_0)$ and some in which $f(x) < f(x_0)$.
Science Fair Project Encyclopedia
Cryonics is the practice of preserving organisms, or at least their brains, for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped.
An organism held in such a state (either frozen or vitrified) is said to be cryopreserved. Barring social disruptions, cryonicists believe that a perfectly vitrified person can be expected to remain physically viable for at least 30,000 years, after which time cosmic ray damage is thought to be irreparable. Many scientists in the field, most notably Ralph Merkle and Brian Wowk, hold that molecular nanotechnology has the potential to extend even this limit many times over.
To its detractors, the justification for cryonics is unclear, given the primitive state of preservation technology. Advocates counter that even a slim chance of revival is better than no chance. In the future, they speculate, not only will conventional health services be improved, but they will also quite likely have expanded even to the conquering of old age itself (see links at the bottom). Therefore, if one could preserve one's body (or at least the contents of one's mind) for, say, another hundred years, one might well be resuscitated and live indefinitely long. But critics of the field contend that, while an interesting technical idea, cryonics is currently little more than a pipedream, that current "patients" will never be successfully revived, and that decades of research, at least, must occur before cryonics is to be a legitimate field with any hope of success.
Probably the most famous cryopreserved patient is Ted Williams. The popular urban legend that Walt Disney was cryopreserved is false; he was cremated, and interred at Forest Lawn Memorial Park Cemetery. Robert Heinlein, who wrote enthusiastically of the concept, was cremated and his ashes distributed over the Pacific Ocean. Timothy Leary was a long-time cryonics advocate, and signed up with a major cryonics provider. He changed his mind, however, shortly before his death, and so was not cryopreserved.
Obstacles to success
Damage from ice formation
Cryonics has traditionally been dismissed by mainstream cryobiology, of which it is arguably a part. The reason generally given for this dismissal is that the freezing process creates ice crystals, which damage cells and cellular structures—a condition sometimes called "whole body freezer burn"—so as to render any future repair impossible. Cryonicists have long argued, however, that the extent of this damage was greatly exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the body with cryoprotectant chemicals (traditionally glycerol) that inhibit ice crystal formation.
According to cryonicists, however, the freezer burn objection became moot around the turn of the millennium, when cryobiologists Greg Fahy and Brian Wowk, of Twenty-First Century Medicine developed major improvements in cryopreservation technology, including new cryoprotectants and new cryoprotectant solutions, that greatly improved the feasibility of eliminating ice crystal formation entirely, allowing vitrification (preservation in a glassy rather than frozen state). In a glass, the molecules do not rearrange themselves into grainy ice crystals as the solution cools, but instead become locked together while still randomly arranged as in a fluid, forming a "solid liquid" as the temperature falls below the glass transition temperature. Alcor Life Extension Foundation, the world's largest cryonics provider, has since been using these cryoprotectants, along with a new, faster cooling method, to vitrify whole human brains. They continue to use the less effective glycerol-based freezing for patients who opt to have their whole bodies preserved, since vitrification of an entire body is beyond current technical capabilities. The only other full-service cryonics provider in the world, the Cryonics Institute, is currently testing its own vitrification solution.
Current solutions being used for vitrification are stable enough to avoid crystallization even when a vitrified brain is warmed up. This has recently allowed brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found. However, if the circulation of the brain is compromised, protective chemicals may not be able to reach all parts of the brain, and freezing may occur either during cooling or during warming. Cryonicists argue, however, that injury caused during cooling can be repaired before the vitrified brain is warmed back up, and that damage during rewarming can be prevented by adding more cryoprotectant in the solid state, or by improving rewarming methods.
Some critics have speculated that because a cryonics patient has been declared legally dead, their organs are dead, and thus unable to allow cryoprotectants to reach the majority of cells. Cryonicists respond that it has been empirically demonstrated that, so long as the cryopreservation process begins immediately after legal death is declared, the individual organs (and perhaps even the patient as a whole) remain biologically alive, and vitrification (particularly of the brain) is quite feasible.
Critics have often quipped that it is easier to revive a corpse than a cryonically frozen body. Many cryonicists might actually agree with this, provided that the "corpse" were fresh, but they would argue that such a "corpse" may actually be biologically alive, under optimal conditions. A declaration of legal death does not mean that life has suddenly ended—death is a gradual process, not a sudden event. Rather, legal death is a declaration by medical personnel that there is nothing more they can do to save the patient. But if the body is clearly biologically dead, having been sitting at room temperature for a period of time, or having been traditionally embalmed, then cryonicists would hold that such a body is far less revivable than a cryonically preserved patient, since any process of resuscitation will depend on the quality of the structural and molecular preservation of the brain, which is largely destroyed by ischemic damage (from lack of blood flow) within minutes or hours of cardiac arrest, if the body is left to sit at room temperature. Traditional embalming also largely destroys this crucial neurological structure.
Cryonicists would also point out that the definitions of "death" and "corpse" currently in use may change with future medical advances, just as they have changed in the past, and so they generally reject the idea that they are trying to "raise the dead", viewing their procedures instead as highly experimental medical procedures, whose efficacy is yet to be either demonstrated or refuted. Some also suggest that if technology is developed that allows mind transfer, revival of the frozen brain might not even be required; the mind of the patient could instead be "uploaded" into an entirely new substrate.
The biggest drawback to current vitrification practice is a costs issue. Because the only really cost-effective means of storing a cryopreserved person is in liquid nitrogen, possibly large-scale fracturing of the brain occurs, a result of cooling to −196°C, the temperature of liquid nitrogen. Fracture-free vitrification would require inexpensive storage at a temperature significantly below the glass transition temperature of about −125°C, but high enough to avoid fracturing (−150°C is about right). Alcor is currently developing such a storage system. Alcor believes, however, that even before such a storage system is developed, the current vitrification method is far superior to traditional glycerol-based freezing, since the fractures are very clean breaks that occur even with traditional glycerol cryoprotection, and the loss of neurological structure is still less than that caused by ice formation, by orders of magnitude.
While cryopreservation arrangements can be expensive (currently ranging from $28,000 to $150,000), most cryonicists pay for it with life insurance. The elderly, and others who may be uninsurable for health reasons, will often pay for the procedure through their estate. Others simply invest their money over a period of years, accepting the risk that they might die in the meantime. All in all, cryonics is actually quite affordable for the vast majority of those in the industrialized world who really want it, especially if they make arrangements while still young.
Even assuming perfect cryopreservation techniques, many cryonicists would still regard eventual revival as a long shot. In addition to the many technical hurdles that remain, the likelihood of obtaining a good cryopreservation is not very high because of logistical problems. The likelihood of the continuity of cryonics organizations as businesses, and the threat of legislative interference in the practice, don't help the odds either. Most cryonicists, therefore, regard their cryopreservation arrangements as a kind of medical insurance—not certain to keep them alive, but better than no chance at all and still a rational gamble to take.
Brain vs. whole-body cryopreservation
During the 1980s, the problems associated with crystallization were becoming better appreciated, and the emphasis shifted from whole body to brain-only or "neuropreservation", on the assumption that the rest of the body could be regrown, perhaps by cloning of the person's DNA or by using embryonic stem cell technology. The main goal now seems to be to preserve the information contained in the structure of the brain, on which memory and personal identity depends. Available scientific and medical evidence suggests that the mechanical structure of the brain is wholly responsible for personal identity and memories (for instance, spinal cord injury victims, organ transplant patients, and amputees appear to retain their personal identity and memories). Damage caused by freezing and fracturing is thought to be potentially repairable in the future, using nanotechnology, which will enable the manipulation of matter at the molecular level. To critics, this appears a kind of futuristic deus ex machina, but while the engineering details remain speculative, the rapidity of scientific advances over the past century, and more recently in the field of nanotechnology itself, suggest to some that there may be no insurmountable problems. And the cryopreserved patient can wait a long time. With the advent of vitrification, the importance of nanotechnology to the cryonics movement may begin to decrease.
Some critics, and even some cryonicists, question this emphasis on the brain, arguing that during neuropreservation some information about the body's phenotype will be lost and the new body may feel "unwanted", and that in case of brain damage the body may serve as a crude backup, helping restore indirectly some of the memories. Partly for this reason, the Cryonics Institute preserves only whole bodies. Some proponents of neuropreservation agree with these concerns, but still feel that lower costs and better brain preservation justify preserving only the brain.
Historically, cryonics began in 1962 with the publication of The Prospect of Immortality by Robert Ettinger. In the 1970s, the damage caused by crystallization was not well understood. Two early organizations went bankrupt, allowing their patients to thaw out, bringing the matter to the public eye, at which point the problem with cellular damage became more well known and the practice gained something of the reputation of a scam. During the 1980s, the extent of the damage from the freezing process became much clearer and better known, and the emphasis of the movement began to shift from whole-body to neuropreservation.
Alcor currently preserves about 60 human bodies and heads in Scottsdale, Arizona. Before the company moved to Arizona from Riverside, California in 1994, it was the center of several controversies, including a county coroner's ruling that a client was murdered with barbiturates before her head was removed by the company's staff. Alcor contended that the drug was administered after her death. No charges were ever filed.
- engineered negligible senescence
- life extension
- interstellar travel
- Immortality Institute
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Right Triangles, Bearings, and other Applications
Last August, a 3,000-pound, eight-by-22 foot-robotic platform was launched into the Hudson River just north of Denning’s Point Peninsula in Beacon, N.Y.
On board the floating platform are state-of-the-art sensors that will provide continuous air and water monitoring including barometric pressure, wind speed and direction, water depth, temperature, salinity and flow rate. The sensors will also measure the levels of hydrogen contaminants, dissolved oxygen, and chlorophyll-a (a green pigment found in algae). The data will be transferred in real time to researchers who can track fluctuations in these measurements.
The information provides a detailed record of the overall health of the river. This will alert scientists and environmentalists to escalating pollution levels or to episodic events that can be problematic, such as algae blooms, which can lead to hypoxia. Hypoxia is characterized by a low concentration of oxygen that is exacerbated by increases in nutrients or a particular set of physical conditions. It is associated with fish kills among other problems.
This technology, which promises to revolutionize the way bodies of water are monitored, was developed by a team of scientists and researchers headed up by James Bonner ’85, professor of civil & environmental engineering and director of Clarkson’s Center for the Environment (CCE).
“Our goal is to eventually cover the entire 315-mile river from Mt. Marcy to New York City with a network of sensors,” explains Bonner. “The technology will allow us to create a cyber-infrastructure that stores and processes a great deal of data about the Hudson River. Scientists and engineers around the world will be able to access this information via the Internet.”
Bonner began the development of this real-time monitoring technology at the Shoreline Environmental Research Facility at Texas A&M University where he served as founding director. While in Corpus Christi, Bonner and fellow researchers developed sensing systems that they used to monitor the Gulf of Mexico. Since joining the Clarkson faculty in 2007, Bonner (who holds a Ph.D. from Clarkson) has continued his NSF-funded research program with an eye toward transferring the technology to map and monitor the ecological health of the rivers, Great Lakes and the St. Lawrence Seaway.
The Hudson River monitoring project is a joint partnership between Clarkson University; the Beacon Institute for Rivers and Estuaries, a not-for-profit environmental research organization; and IBM. Last year, Bonner was named the Beacon Institute’s REON Director of Research and will lead the development and implementation of the River and Estuary Observatory Network (REON). The Hudson River project is the first step in a larger plan to develop a technology-based monitoring and forecasting network for rivers and estuaries.
“Tremendous human impact occurs in the regions where rivers and estuaries meet the ‘coastal margin’ — coastal wetlands, bays and shorelines,” explains Bonner. “In the United States, this region is home to 70 percent of the population and 20 of its 25 largest cities. It is also where most industry and ports are found. Damage to these ecosystems comes from this increased density of anthropogenic activity associated with pollution from industry, farms and the surrounding communities.”
For example, hypoxia generally occurs in aquatic systems where the water is poorly mixed, excluding oxygen and trapping pollutants in the “hypolimnion” — the dense bottom layer in a stratified body of water. Chemical reactions within the hypolimnion and with bottom sediments deplete the benthic oxygen, so aerobic organisms such as fish, oysters, clams, and other bottom-dwelling organisms perish. “This problem is a growing national concern; for example, increasing areas of the Gulf of Mexico (thousands of square miles), portions of the Great Lakes, and embayments such as Corpus Christi Bay and other near-shore areas are experiencing hypoxia,” says Bonner.
IBM is working with Bonner and the Beacon Institute to develop the cyber framework that will store the data and provide assessment tools, which researchers around the world will be able to use. “Scientists will be able to analyze data and develop models on any environmental parameter of interest.”
For Bonner, one of the most exciting aspects of the project is the way it will transform environmental science and engineering. “The old-fashioned method of retrieving data by collecting samples at discrete locations at only a few times gives a static, incomplete and aliased view or understanding. With this technology, we’ll be able to get real-time data that reflects the constantly changing, dynamic environment of the river. The information will be far more reliable.”
|
<urn:uuid:02237b71-3d97-43b4-b615-8779adad0180>
| 3.03125 | 982 |
Knowledge Article
|
Science & Tech.
| 32.182778 |
I saw some tutorial pages on the Internet about how to read files using C++, but I'm kind of confused because there isn't anything in the code to indicate where the file comes from. So I think I need some explanation.
It will open the file in the current (working) folder. If you want to open a file that is in another folder, you may write the full path: ofstream ofs("C:\\some_folder\\some_file");
There is a version of the constructor (and the open() function) that takes std::string, if you prefer to use it.
What you pass is actually the file path, so you can give a full path, or a relative path. If you just specify the filename that is a relative path. Relative paths are relative to the working directory of program. If you start your program by double clicking on the executable file the working directory will be the directory where the executable file is located. If you start your program from the command line the working directory will be the directory that you set using the cd command. If you start your program from an IDE the working directory is often set to the project directory (not the source directory) but this can differ between IDEs.
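To make the answers above concrete, here is a minimal sketch (the file and folder names are made up for illustration) showing that a bare filename is a relative path resolved against the working directory, and that the stream's state tells you whether the open succeeded:

```cpp
#include <fstream>
#include <string>

// Write one line to `path`. A bare filename like "notes.txt" is a relative
// path, resolved against the program's current working directory.
bool write_line(const std::string& path, const std::string& text) {
    std::ofstream out(path);   // the std::string overload of the constructor
    if (!out) return false;    // e.g. a folder in the path does not exist
    out << text << '\n';
    return static_cast<bool>(out);
}

// Read the first line back; returns an empty string if the file cannot be opened.
std::string read_first_line(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    if (!std::getline(in, line)) return "";
    return line;
}
```

Opening with a full path such as `C:\\some_folder\\some_file` works the same way; only the directory the relative name resolves against changes with the working directory.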
|
<urn:uuid:539cacc7-a7b0-4ae5-a649-fc47d6f41c8c>
| 3.375 | 242 |
Q&A Forum
|
Software Dev.
| 49.120155 |
THE FRAGILE FAUNA OF ILLINOIS CAVES
by Steven J. Taylor and Donald W. Webb
Illinois has several hundred caves, many of them in nearly pristine condition.
This unique and fragile environment is home to a diverse array of creatures,
including organisms that are completely limited to the cave environment,
species that may be found in similar habitats above ground, and the many
animals that accidentally wander, fall, or are washed into caves. Many
cave animals are highly adapted for the unique and harsh living conditions
they encounter underground.
Illinois caves can be found in four distinct karst regions: in the Mississippian
limestone of the Shawnee Hills, in the Salem Plateau and in the Lincoln
Hills, and in the Ordovician limestone of the Driftless Area. These caves
have been forming through the interaction of geology, vegetation, and rainfall
for the past 300 million years. Shallow seas covered much of Illinois
during the Mississippian Period. When the seas receded, forests grew over
the exposed sedimentary rocks; and rainwater-which had become slightly
acidic through interaction with carbon dioxide from both the atmosphere
and the bacterial breakdown of organic material-then seeped into cracks
and bedding planes. As the limestone dissolved, conduits formed. These
conduits eventually developed the geologic features characteristic of
karst terrain-caves, sinking streams, springs, and sinkholes.
INTO THE TWILIGHT ZONE
Caves can be divided into three ecological zones. The entrance zone is
similar in light, temperature, and relative humidity to the surrounding
surface habitat, and the creatures that live there resemble the animals
that live in the moist shaded areas near the cave. Here we find the eastern
phoebe (Sayornis phoebe), a small gray bird whose nest is constructed
on bare bedrock walls out of mosses and other debris. In the leaf litter,
we find many animals of the forest floor: redbacked salamanders, harvestmen
(or daddy-longlegs), snails, earthworms, millipedes, centipedes, beetles,
ants, and springtails. Cave entrances are often funnel shaped or have
sheer vertical walls, and organisms and organic debris tend to concentrate
at the bottom. The entrance zone also provides a highly protected environment
for overwintering organisms.
Deeper inside the cave, in the twilight zone, there is much less light,
and photosynthesizing plants are no longer able to grow. The temperature
and relative humidity fluctuate here, but the environment is usually damp
and cool. Many animals from the entrance zone wander into the twilight
zone, but most of these creatures must eventually return to the land above.
Several species of cave crickets are common in this part of the cave,
sometimes appearing in large numbers on walls or ceilings.
In larger caves, there is a dark zone characterized by constant temperature
(about 54-58°F in Illinois) and the absence of light. Here, the relative
humidity approaches the saturation point. Many animals in the dark zone
are capable of completing their entire life cycles without leaving the
cave although food is scarce in the absence of photosynthesis. In this
zone, there are fewer species of organisms. Creatures who live here eat
primarily organic debris-wood, leaves, and accidental animals. Dark-zone
dwellers get some of their nutrients from the feces of bats and cave crickets,
animals that leave the cave at night to feed on the surface. Raccoons,
common cave explorers in Illinois, also leave their waste behind. A wide
array of bacteria and fungi feast upon these nutrient-rich items. Other
animals then feed upon the fungi and bacteria. Springtails, minute insects
typically overlooked by the casual observer, are important fungus feeders,
and a variety of beetles, flies, and millipedes get their nourishment
this way as well. These organisms may then become the prey of cave-inhabiting
spiders, harvestmen, predacious fly larvae known as webworms, and an occasional
cave salamander. In the winter, pickerel frogs, mosquitoes, and some moths
move into caves to wait for warmer weather.
ADAPTING AND SURVIVING
Common cave inhabitants
include (left to right) the moth, Scoliopteryx libatrix,
which does not have a common name; the cave salamander (Eurycea
lucifuga); and the monorail worm (Macrocera nobilis).
Animals that live in caves vary greatly in their degree of adaptation
to the cave environment. Accidental animals live there only temporarily;
they will either leave or die. Animals that frequent caves but must return
to the surface at some point in their life cycles are known as trogloxenes.
Bats and cave crickets are two examples. Troglophiles are animals that
can complete their entire life cycles within a cave, but they may also
be found in cool, moist habitats outside of caves. Two troglophilic vertebrates
found in or near Illinois caves are the cave salamander (Eurycea lucifuga)
and the spring cavefish (Forbesichthys agassizii).
Diane Tecic, district
heritage biologist for the Illinois Department of Natural Resources,
looks for cave-adapted organisms in organic debris with Illinois
caver Tim Sickbert.
Most cave animals are trogloxenes and troglophiles; only 20 to 30% of
the animals in North American caves are troglobites. Troglobites are animals
that live exclusively in caves; they are especially interesting because
of their unique morphological, physiological, behavioral, and life-history
adaptations. Many troglobites, for example, lack body pigment. Because
they live where there is no light, there is no evolutionary advantage
for them in maintaining the colors that might be characteristic of their
relatives and ancestors that live above ground. In cave-adapted species,
the evolutionary pressure to maintain functional eyes is also greatly
reduced, and these species have been under strong selective pressure to
evolve other means of sensing their surroundings. Their legs and antennae
usually have more sensory nerve endings than related above-ground species.
These appendages serve important tactile functions and are often greatly
elongated in cave-dwelling creatures.
Adaptations that allow species to exist in an environment with very low
nutrient input are not as obvious. Many cave-adapted species produce fewer
offspring than their surface-inhabiting relatives, but individual eggs
may contain more nutrients. In some species, timing of reproduction may
be synchronized with spring flooding and its new supply of nutrients.
Other species, lacking the above-ground seasonal cues of temperature and
photoperiod, may reproduce year-round. Cave adaptations may include a
reduced metabolic rate, allowing animals to live on limited food resources
for long periods of time. Illinois has many troglobitic invertebrates
but no troglobitic vertebrates.
As cave-adapted species become specialized, they also tend to become
geographically isolated. The geological and hydrological history of some
areas may divide species into isolated populations, and these populations,
over time, may evolve into distinct species. During glacial periods, caves
serve as refugia for some aquatic, soil-, and litter-inhabiting animals.
These species may become "stranded" in caves when glaciers retreat and
surface conditions are not suitable for recolonization.
VULNERABILITY OF CAVE ENVIRONMENTS
Human disturbance affects cave ecosystems just as it affects other ecosystems.
As a result of changes we make on the surface, we unknowingly alter cave
environments, destroying unique and valuable organisms before we even
know of their existence. The public knows very little about caves and
the organisms that inhabit them. Small wonder then that the importance
of protecting groundwater, caves, and cave life is not fully appreciated.
It is not uncommon to find sinkholes filled with trash, serving as natural
garbage cans for rural waste disposal. Visitors sometimes permanently
damage caves with graffiti and carelessly break stalactites and stalagmites.
The very adaptations that allow troglobites to survive in the harsh cave
environment make these animals more vulnerable to changes made by humans.
The reduced metabolic rates that allow these animals to survive in a nutrient-poor
environment also make them less competitive when organic enrichment is
introduced in the form of fertilizers, livestock and agricultural waste,
and human sewage. In Illinois, this effect is commonly seen in stream-inhabiting
amphipods (small shrimplike animals) and isopods (small crustaceans related
to terrestrial pillbugs or sowbugs). These groups contain troglobites
that are highly adapted to cave environments; they also contain more opportunistic
troglophilic species, which have a competitive advantage in the presence
of high levels of organic waste.
Amphipods and isopods feed on small particles of organic debris and on
decomposers such as bacteria and fungi. Because they ingest large quantities
of this material, they are exposed to contamination from a variety of
pollutants. In Illinois, samples of these animals collected in 1992 were
found to contain dieldrin and breakdown products of DDT. They were also
found to contain moderate levels of mercury, although mercury was not
detected in any water samples from the same sites.
Sedimentation also threatens aquatic species. Topsoil run-off from rural
development and agricultural fields enters caves readily when vegetative
buffers around sinkholes are too small or nonexistent. This sediment fills
the spaces in gravel streambeds, eliminating the microhabitats that allow
many cave-dwelling species to exist. As a result, cave streams with high
sediment loads tend to contain few species.
Sometimes, humans can't easily see the value of these subterranean systems,
especially when their own interests conflict with the health of cave communities.
Such a conflict is occurring now in our most biologically and hydrologically
significant karst area, the Salem Plateau of Monroe and St. Clair counties.
As part of the greater St. Louis metropolitan area, the Salem Plateau
is experiencing rapid population growth. Scientists can estimate the level
and types of threats that this growth brings to the biological integrity
of the region, but it's much more difficult to develop protected areas,
educational programs, and new regulatory mechanisms within the existing
political, social, and geographic framework. Illinois caves are a high
priority for conservation because cave organisms face serious threats
from agriculture and increasing urbanization. Also, the unique and fragile
cave environment provides a home for organisms found nowhere else
in the world.
It is not usually possible to include the entire drainage basin of significant
caves within nature preserves or other conservation easements. To manage
a cave effectively, scientists must understand the hydrology of a cave's
subterranean conduits. This knowledge is gained by doing extensive dye
tracing studies and cave mapping. Both of these activities are time- and
labor-intensive. Already, the drainage basins of some of our largest cave
systems are being compromised by agriculture and rural housing projects.
Educating the public-particularly politicians, farmers, and children-about
land use and the impact of human activities is key to the long-term health
of cave communities. We must also enact appropriate regulations for rural
residential development-especially wastewater treatment-and for agricultural
activities in a karst landscape.
For more information on cave conservation and management, contact the
National Speleological Society, 2813 Cave Avenue, Huntsville, AL 35810-4431,
or Steven Taylor or Donald Webb at the Center for Biology, Illinois Natural
History Survey, 607 East Peabody Drive, Champaign, IL 61820.
Steven J. Taylor is an aquatic entomologist in the
Center for Biodiversity at the Illinois Natural History Survey in Champaign.
Donald W. Webb is an insect systematist, also at the Center for Biodiversity.
A GOOD NEIGHBOR POLICY
In a few caves in Monroe and St. Clair counties, you can find a
small shrimplike creature that exists nowhere else in the world.
The Illinois cave amphipod has made our corner of the world its
home, but it may not be here long unless humans take steps to protect
its environment. This unassuming cave creature has been proposed
for listing as a federally endangered species.
Cave amphipods inhabit the bottoms of pools and riffles in large
cave streams, where they creep among cobbles and under stones, feeding
on decaying leaf litter and organic debris. Food is scarce in this
environment, and the amphipods have developed chemosensory structures
that detect the odor of food sources, such as dead or injured animals.
Injured or dying amphipods are vulnerable to such predators as
flatworms, cave salamanders, and even other amphipods. But
the greatest threat these vulnerable creatures face is the
deterioration of the environment. The Illinois cave amphipod
lives near the greater St. Louis metropolitan area, a region
that has been experiencing dramatic population growth for
the past 10 years. Continued urbanization without appropriate
sewage treatment and disposal is especially threatening to
the amphipod's existence. Other serious threats are siltation
and the presence of agricultural chemicals in subterranean streams.
Fortunately for the amphipod, the quality of life for people on
the land above depends on water quality in streams below. Because
agricultural chemicals and bacteria associated with sewage have
been found in well water, springs, and cave streams in this area,
a concerted effort is being made to improve the water quality in
this karst region. Efforts to provide communities with safe drinking
water could also provide a healthy cave environment and help ensure
the further existence of our underground neighbor, the Illinois cave amphipod.
|
<urn:uuid:df2ab0ff-bb86-415b-be4c-863c8014597f>
| 3.78125 | 3,029 |
Knowledge Article
|
Science & Tech.
| 21.357917 |
There are many types of biomass—organic matter such as plants,
residue from agriculture and forestry, and the organic component of
municipal and industrial wastes—that can now be used to produce fuels,
chemicals, and power. Wood has been used to provide heat for thousands of
years. This flexibility has resulted in increased use of biomass
technologies. According to the Energy Information Administration, 53% of
all renewable energy consumed in the United States was biomass-based in
Biomass technologies break down organic matter to release stored energy
from the sun.
Biofuels are liquid or gaseous fuels produced from biomass. Most biofuels
are used for transportation, but some are used as fuels to produce
electricity. The expanded use of biofuels offers an array of benefits for
our energy security, economic growth, and environment.
Current biofuels research focuses on new forms of biofuels such as
ethanol and biodiesel, and on biofuels conversion processes.
Ethanol—an alcohol—is made primarily from the starch in corn grain. It
is most commonly used as an additive to petroleum-based fuels to reduce
toxic air emissions and increase octane. Today, roughly half of the
gasoline sold in the United States includes 5%-10% ethanol.
Biodiesel use is relatively small, but its benefits to air quality are
Biodiesel is produced through a process that combines
organically-derived oils with alcohol (ethanol or methanol) in the
presence of a catalyst to form ethyl or methyl ester. The biomass-derived
ethyl or methyl esters can be blended with conventional diesel fuel or
used as a neat fuel (100% biodiesel).
Biomass resources include any plant-derived organic matter that is
available on a renewable basis. These materials are commonly referred to as feedstocks.
Biomass feedstocks include dedicated energy crops, agricultural crops,
forestry residues, aquatic crops, biomass processing residues, municipal
waste, and animal waste.
Dedicated energy crops
Herbaceous energy crops are perennials that are harvested annually after
taking 2 to 3 years to reach full productivity. These include such grasses
as switchgrass, miscanthus (also known as elephant grass or e-grass),
bamboo, sweet sorghum, tall fescue, kochia, wheatgrass, and others.
Short-rotation woody crops are fast-growing hardwood trees that are
harvested within 5 to 8 years of planting. These include hybrid poplar,
hybrid willow, silver maple, eastern cottonwood, green ash, black walnut,
sweetgum, and sycamore.
Agricultural crops include currently available commodity products such as
cornstarch and corn oil, soybean oil and meal, wheat starch, and vegetable
oils. They generally yield sugars, oils, and extractives, although they
can also be used to produce plastics as well as other chemicals and
Agriculture Crop Residues
Agriculture crop residues include biomass materials, primarily stalks and
leaves, that are not harvested or removed from fields in commercial use.
Examples include corn stover (stalks, leaves, husks, and cobs), wheat
straw, and rice straw. With approximately 80 million acres of corn planted
annually, corn stover is expected to become a major feedstock for biopower
Forestry residues include biomass not harvested or removed from logging
sites in commercial hardwood and softwood stands as well as material
resulting from forest management operations such as pre-commercial
thinning and removal of dead and dying trees.
There are a variety of aquatic biomass resources, such as algae, giant
kelp, other seaweed, and marine microflora.
Biomass Processing Residues
Biomass processing yields byproducts and waste streams that are
collectively called residues and have significant energy potential.
Residues are simple to use because they have already been collected. For
example, the processing of wood for products or pulp produces unused
sawdust, bark, branches, and leaves/needles.
Residential, commercial, and institutional post-consumer waste contains a
significant proportion of plant-derived organic material that constitute a
renewable energy resource. Waste paper, cardboard, wood waste, and yard
waste are examples of biomass resources in municipal waste.
Farms and animal-processing operations create animal wastes that
constitute a complex source of organic materials with environmental
consequences. These wastes can be used to make many products, including
Some biomass feedstocks, such as municipal waste, are found throughout
the United States. Others, such as energy crops, are concentrated in the
eastern half of the country. As technologies develop to more efficiently
process complex feedstocks, the biomass resource base will expand.
Collecting Gas from Landfills
Landfills can be a source of energy. Organic waste produces a gas called
methane as it decomposes, or rots.
Methane is the same
energy-rich gas that is in natural gas, the fuel sold by natural gas
utility companies. It is colorless and odorless. Natural gas utilities add
an odorant (bad smell) so people can detect seeping gas. Because methane can be
dangerous to people and the environment, new rules require landfills to
collect methane gas as a pollution and safety measure.
Compiled from The British Antarctic Study, NASA, Environment Canada,
UNEP, EPA, and other sources as stated and credited. Researched by Charles
Welch. Updated daily. This website is a project of The Ozone Hole Inc.,
a 501(c)(3) nonprofit organization: http://www.theozonehole.com
|
<urn:uuid:43454431-e724-4640-b136-09b9e018b7c6>
| 3.828125 | 1,230 |
Knowledge Article
|
Science & Tech.
| 24.609679 |
Introduction
fox, carnivorous mammal of the dog family, found throughout most of the Northern Hemisphere. It has a pointed face, short legs, long, thick fur, and a tail about one half to two thirds as long as the head and body, depending on the species. Solitary most of the year, foxes do not live in dens except in the breeding season; they sleep concealed in grasses or thickets, their tails curled around them for warmth. During the breeding season a fox pair establishes a den, often in a ground burrow made by another animal, in which the young are raised; the male hunts for the family. The young are on their own after about five months; the adults probably find new mates each season.
Foxes feed on insects, earthworms, small birds and mammals, eggs, carrion, and vegetable matter, especially fruits. Unlike other members of the dog family, which run down their prey, foxes usually hunt by stalking and pouncing. They are known for their raids on poultry but are nonetheless very beneficial to farmers as destroyers of rodents.
Foxes are occasionally preyed upon by larger carnivores, such as wolves and bobcats, as well as by humans and their dogs; birds of prey may capture the young. Despite extensive killing of foxes, most species continue to flourish. In Europe this is due in part to the regulatory laws passed for the benefit of hunters. Mounted foxhunting, with dogs, became popular in the 14th cent. and was later introduced into the Americas; special hunting dogs, called foxhounds, have been bred for this sport. Great Britain banned foxhunting in which the hounds kill the fox in 2005.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Vertebrate Zoology
|
<urn:uuid:b06f1991-fec6-49bf-b55b-75db6d59f18d>
| 3.5 | 382 |
Knowledge Article
|
Science & Tech.
| 50.583511 |
Here is a fun one,
There was a man who greatly enjoyed golf. He also could make a perfectly consistent swing. So out of curiosity he decided to challenge a mathematician. First he brought the mathematician to a golf field, with his golf club, a tee, and a ball. He set the ball on the tee, all ready to swing, and then asked the mathematician, “Write me a formula where z is the total distance the ball will travel, assuming there is no wind, the ground is level, the ball starts one inch off the ground, and I hit it with x force at y angle, all before I hit the ball.” He then swung his club, hit the ball, and much to his surprise the mathematician succeeded. Not only did the mathematician have a flawless formula, but he also had the shortest formula he could have possibly written. What was his formula?
Last edited by TheTick (2013-02-28 15:50:15)
|
<urn:uuid:070e6cdd-a083-43f2-9577-27e03e835620>
| 2.765625 | 201 |
Comment Section
|
Science & Tech.
| 65.922443 |
New on IBM developerWorks, there's an article looking at integrating the Scilab software into PHP to perform some more complicated mathematical processing.
Scripting languages like Ruby, Python, and PHP power modern-day server-side Web development. These languages are great because you can easily and rapidly build Web sites. However, their downfall is their inefficiency with complicated algorithms, such as those found in mathematics and the sciences. [...] In this article, we'll investigate one particular way to merge the power of a particular bit of scientific software - Scilab - with the ease of development and Web-friendliness of a server-side language: PHP.
Your script calls the Scilab tool from the command line via something like exec and parses the output to spit the results back out to the viewer. The article shows how to create two pages: one with form elements that allow the user to interact with the script, and one that helps you generate a graph based on some results.
|
<urn:uuid:134f1f86-6c7d-48c9-abb1-a1be577339f4>
| 2.6875 | 199 |
Truncated
|
Software Dev.
| 42.139 |
Gamma ray bursts
are believed to be the most energetic phenomena in the universe.
In one second they can emit more than 100 times the energy that
the sun does throughout its entire 10 billion year life. This energy
output is short lived, however, and within days the burst has faded
forever beyond the reach of our telescopes.
Of the roughly 3000 bursts that have been detected through their gamma ray emission,
only 30 have been seen with ground-based telescopes, and only one
of these has been observed within an hour.
In an ambitious
project to detect the gamma ray bursts in the crucial first minute
of their occurrence, the School of Physics has entered a collaboration
with the University of Michigan, Los Alamos National Laboratories,
and Lawrence Livermore National Laboratory, to place a robotic telescope,
ROTSE-III, at Siding Spring Observatory.
ROTSE-III is triggered into action by a signal relayed through the Internet from
an earth-orbiting satellite. The specially designed mounting for
ROTSE-III allows it to point to any position in the sky and take
an image within 5-10 seconds. The images are then automatically
analysed for any new or rapidly varying sources, and this information
is made available to other observatories throughout the world within
minutes. The precise positions provided by ROTSE-III are essential
to allow the world's largest telescopes to observe the gamma ray bursts.
for the new telescope occurred in March 2001. The enclosure and
weather station were installed in April 2001, with the telescope
itself to be delivered in mid-2002.
|
<urn:uuid:41af5c95-84cb-4b31-990a-6fbb28055062>
| 3.875 | 327 |
Knowledge Article
|
Science & Tech.
| 29.343642 |
During this tutorial you will be asked to perform calculations involving trigonometric functions. You will need a calculator to proceed.
The purpose of this tutorial
is to review with you the elementary properties of the trigonometric functions.
Facility with this subject is essential to success in all branches of science,
and you are strongly urged to review and practice the concepts presented
here until they are mastered. Let us consider the right-angle triangle
shown in Panel 1. The angle at C is a right angle and the angle
A we will call θ. The lengths of the
sides of the triangle we will denote as p, q and r. From your elementary
geometry, you know several things about this triangle. For example, you
know the Pythagorean relation,
q² = p² + r². That is, the square of the length of the side opposite the right angle, which we call the hypotenuse, is equal to the sum of the squares of the lengths of the other two sides.
We know other things. For example, we know that if the lengths of the three sides of any triangle p, q and r are specified, then the whole triangle is determined, angles included. If you think about this for a moment, you will see it is correct. If I give you three sticks of fixed length and told you to lay them down in a triangle, there's only one triangle which you could make. What we would like to have is a way of relating the angles in the triangle, say θ, to the lengths of the sides.
It turns out that there's no simple analytic way to do this. Even though the triangle is specified by the lengths of the three sides, there is not a simple formula that will allow you to calculate the angle θ. We must specify it in some new way.
To do this, we define three ratios of the sides of the triangle.
One ratio we call the sine of theta, written sin(θ), and it is defined as the ratio of the side opposite θ to the hypotenuse, that is r/q.
The cosine of θ, written cos(θ), is the side adjacent to θ over the hypotenuse, that is, p/q.
This is really enough, but because it simplifies our mathematics later on, we define the tangent of θ, written tan(θ), as the ratio of the opposite to the adjacent sides, that is r/p. This is not an independent definition since you can readily see that the tangent of θ is equal to the sine of θ divided by the cosine of θ. Verify for yourself that this is correct.
All scientific calculators provide this information. The first thing to ensure is that your calculator is set to the angular measure that you want. Angles are usually measured in either degrees or radians (see tutorial on DIMENSIONAL ANALYSIS). The angle 2º is a much different angle than 2 radians since 180º = π radians = 3.1416... radians. Make sure that your calculator is set to degrees.
Now suppose that we want the sine of 24º. Simply press 24 followed by the [sin] key and the display should show the value 0.4067. Therefore, the sine of 24º is 0.4067. That is, in a triangle like Panel 1 where θ = 24º, the ratio of the sides r to q is 0.4067. Next set your calculator to radians and find the sine of 0.42 radians. To do this enter 0.42 followed by the [sin] key. You should obtain a value of 0.4078. This is nearly the same value as you obtained for the sine of 24º. Using the relation above, you should confirm that 24º is close to 0.42 radians.
Obviously, using your calculator to find values of sines is very simple. Now find the sine of 42º 24 minutes. The sine of 42º 24 minutes is 0.6743. Did you get this result? If not, remember that 24 minutes corresponds to 24/60 or 0.4º. The total angle is then 42.4º.
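The same degrees-and-minutes exercise can be reproduced in code. A Python sketch of the 42º 24' example (note that `math.sin` expects radians):

```python
import math

# 42 degrees 24 minutes: 24 minutes of arc is 24/60 = 0.4 degrees.
degrees, minutes = 42, 24
angle_deg = degrees + minutes / 60    # 42.4 degrees
angle_rad = math.radians(angle_deg)   # convert, since math.sin takes radians

print(round(math.sin(angle_rad), 4))  # 0.6743
```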
The determination of cosines and tangents on your calculator is similar. It is now possible for us to solve simple problems concerning triangles. For example, in Panel 2, the length of the hypotenuse is 3 cm and the angle θ is 24º. What is the length of the opposite side r? The sine of 24º, as we saw, is 0.4067, and it is also, by definition, r/3. So sin(24º) = 0.4067 = r/3, and therefore r = 3 × 0.4067 = 1.22 cm.
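A Python sketch of this calculation, using Panel 2's numbers:

```python
import math

q = 3.0                    # hypotenuse, in cm (Panel 2)
theta = math.radians(24)   # θ = 24º, converted to radians

r = q * math.sin(theta)    # sin(θ) = r/q, so r = q·sin(θ)
print(round(r, 2))         # 1.22 (cm)
```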
Conversely, suppose you knew that the opposite side was 2 cm long and the hypotenuse was 3 cm long, as in Panel 3. What is the angle θ? First determine the sine of θ. You should find that the sine of θ is 2/3, which equals 0.6667. Now we need to determine what angle has 0.6667 as its sine.
If you want your answer in degrees, be sure that your calculator is set to degrees. Then enter 0.6667 followed by the [INV] key and then the [sin] key. You should obtain a value of 41.8º. If your calculator doesn't have an [INV] key, it probably has a [2ndF] key, and the inverse sine can be found using it.
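In code, the [INV][sin] keystrokes correspond to the inverse sine (arcsine). A Python sketch of the Panel 3 example:

```python
import math

# sin(θ) = 2/3; recover θ with the inverse sine.
# math.asin returns radians, so convert to degrees afterwards.
theta_deg = math.degrees(math.asin(2 / 3))
print(round(theta_deg, 1))   # 41.8
```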
One very important use of these trigonometric functions is the calculation of the components of vectors. In Panel 4 is shown a vector OA in an xy reference frame. We would like to find the y component of this vector, that is, the projection OB of the vector on the y axis. Obviously, OB = CA, and CA/OA = sin(θ), so CA = OA sin(θ). Similarly, the x component of OA is OC, and OC/OA = cos(θ), so OC = OA cos(θ).
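A Python sketch of resolving a vector into components; the length and angle here are made-up illustrative values, since Panel 4 does not specify any:

```python
import math

# Hypothetical vector OA of length 5 units at θ = 30º from the x axis.
OA = 5.0
theta = math.radians(30)

oc = OA * math.cos(theta)   # x component: OC = OA·cos(θ)
ob = OA * math.sin(theta)   # y component: OB = OA·sin(θ)

print(round(oc, 3), round(ob, 3))   # 4.33 2.5
```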
There are many relations among the trigonometric functions which are important, but one in particular you will find used quite often. Panel 1 has been repeated as Panel 5 for you. Let us look at the sum cos²θ + sin²θ. From the figure, this is (p/q)² + (r/q)², which equals (p² + r²)/q². The Pythagorean theorem tells us that p² + r² = q², so we have (p² + r²)/q² = q²/q² = 1. Therefore, we have:

sin²θ + cos²θ = 1
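A quick numerical spot-check of the identity sin²θ + cos²θ = 1 for a few arbitrary angles:

```python
import math

# The identity should hold for any angle θ (in radians here).
for theta in (0.0, 0.42, math.radians(24), math.radians(140), -1.3):
    assert abs(math.sin(theta) ** 2 + math.cos(theta) ** 2 - 1.0) < 1e-12

print("identity holds")   # identity holds
```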
Our discussion so far has been limited to angles between 0 and 90º. One can, using the calculator, find the sine of larger angles (e.g. 140º) or negative angles (e.g. -32º) directly. Sometimes, however, it is useful to find the corresponding angle between 0 and 90º. Panel 6 will help us here.
In this xy reference frame, the angle θ is clearly between 90º and 180º, and the angle a, which is 180º − θ (a is marked with a double arc), can be dealt with directly. In this case, we say that the magnitudes of the sine, cosine and tangent of θ are those of the supplement a, and we only have to examine whether each is positive or negative.
For example, what are the sine, cosine and tangent of 140º? The supplement is 180º − 140º = 40º. Find the sine, the cosine and the tangent of 40º.
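A Python sketch confirming the supplement rule for 140º, including the signs:

```python
import math

theta = math.radians(140)   # an angle between 90º and 180º
supp = math.radians(40)     # its supplement: 180º - 140º = 40º

# The magnitudes match those of the supplement; only the signs differ.
assert math.isclose(math.sin(theta), math.sin(supp))    # sine is positive
assert math.isclose(math.cos(theta), -math.cos(supp))   # cosine is negative
assert math.isclose(math.tan(theta), -math.tan(supp))   # tangent is negative

print(round(math.sin(theta), 4), round(math.cos(theta), 4))  # 0.6428 -0.766
```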
|May20-06, 05:24 PM||#1|
Stuck on couple related rates problems..
1. A ship with a long anchor chain is anchored in 11 fathoms of water. The anchor chain is being wound in at a rate of 10 fathoms/minute, causing the ship to move toward the spot directly above the anchor resting on the seabed. The hawsehole (the point of contact between ship and chain) is located 1 fathom above the water line. At what speed is the ship moving when there are exactly 13 fathoms of chain still out?
For this problem I started with this drawing.. http://img.photobucket.com/albums/v4...n/untitled.jpg
And then from there, I had no idea where to go... the hawsehole being 1 fathom above the water really gets to me, perhaps making the above drawing void. Another thing I don't understand is that it says it's anchored in 11 fathoms of water.. how could the question be asking what speed the boat would be moving if it were at 13 fathoms?
2. A ladder 41 feet long was leaning against a vertical wall and begins to slip. Its top slides down the wall while its bottom moves along the level ground at a constant speed of 4 ft/sec. How fast is the top of the ladder moving when it is 9 feet above the ground?
For this one.. I didn't even know what to do.. of course I drew a triangle, hypotenuse of 41 and the vertical side of 9 feet.. and then.......?
Mainly, I think problems such as these are really easy, but I have a really hard time picturing the problem or drawing it out. I don't know which numbers apply to dx/dt and dy/dt..
|May20-06, 05:48 PM||#2|
And they are asking what the speed is when there are 13 fathoms of *chain* still out, which is the length of the hypotenuse on your triangle. Of course this length will be larger than or equal to 12 fathoms (it will be equal to 12 fathoms when the boat is right above the anchor)
If we call "L" the length of the hypotenuse, then what you want is to write dx/dt in terms of dL/dt (which is the number they give you). All you have to do is write an expression relating x and L (and other known values), isolate x in terms of those constants and L, and differentiate both sides with respect to t. You will get dx/dt = an expression in terms of constants, L, and dL/dt.
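A numerical sketch of the approach described above, assuming the chain runs in a straight line from the hawsehole to the anchor (so the fixed vertical leg is 11 + 1 = 12 fathoms):

```python
import math

# Idealization: L² = x² + h², with h fixed.
h = 12.0       # vertical drop from hawsehole to seabed, fathoms
L = 13.0       # chain still out (the hypotenuse), fathoms
dL_dt = 10.0   # rate the chain is wound in, fathoms/minute

x = math.sqrt(L ** 2 - h ** 2)   # horizontal distance: 5 fathoms

# Differentiating L² = x² + h² with respect to t gives
# 2L·dL/dt = 2x·dx/dt, so dx/dt = (L/x)·dL/dt.
dx_dt = (L / x) * dL_dt
print(dx_dt)   # 26.0 (fathoms/minute)
```

Note that the ship moves faster than the chain is wound in, which is the usual surprise in this classic problem.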
Last July (2012), I heard from a colleague working at the edge of the Greenland ice sheet, and from another colleague working up at the Summit. Both were independently writing to report the exceptional conditions they were witnessing. The first report was that the bridge over the Watson River by the town of Kangerlussuaq, on the west coast of Greenland, was being breached by the high volumes of meltwater coming down from the ice sheet. The second was that a new melt layer was forming at the highest point of the ice sheet, where it very rarely melts.
A front loader being swept off a bridge into the Watson River, Kangerlussuaq, Greenland, in July 2012. Fortunately, nobody was in it at the time. Photo: K. Choquette
I’ve been remiss in not writing about these observations until now. I’m prompted to do so by the publication in Nature today (January 23, 2013) of another new finding about Greenland melt. This paper isn’t about the modern climate, but about the climate of the last interglacial period. It has relevance to the modern situation though, a point to which I’ll return at the end of this post.
In honour of scientist and astronomer Nicolaus Copernicus (1473-1543), the discovering team around Professor Sigurd Hofmann suggested the name copernicium with the element symbol Cp for the new element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Center for Heavy Ion Research) in Darmstadt. It was Copernicus who discovered that the Earth orbits the Sun, thus paving the way for our modern view of the world. Thirteen years ago, element 112 was discovered by an international team of scientists at the GSI accelerator facility. A few weeks ago, the International Union of Pure and Applied Chemistry, IUPAC, officially confirmed their discovery. In around six months, IUPAC will officially endorse the new element's name. This period is set to allow the scientific community to discuss the suggested name copernicium before the IUPAC naming.
"After IUPAC officially recognized our discovery, we – that is all scientists involved in the discovery – agreed on proposing the name copernicium for the new element 112. We would like to honor an outstanding scientist, who changed our view of the world", says Sigurd Hofmann, head of the discovering team.
Copernicus was born in 1473 in Torun and died in 1543 in Frombork, Poland. Working in the field of astronomy, he realized that the planets circle the Sun. His discovery refuted the then accepted belief that the Earth was the center of the universe. His finding was pivotal for the discovery of the gravitational force, which is responsible for the motion of the planets. It also led to the conclusion that the stars are incredibly far away and the universe inconceivably large, as the size and position of the stars do not change even though the Earth is moving. Furthermore, the new world view inspired by Copernicus had an impact on the human self-concept in theology and philosophy: humankind could no longer be seen as the center of the world.
With its planets revolving around the Sun on different orbits, the solar system is also a model for other physical systems. The structure of an atom is like a microcosm: its electrons orbit the atomic nucleus like the planets orbit the Sun. Exactly 112 electrons circle the atomic nucleus in an atom of the new element "copernicium".
Element 112 is the heaviest element in the periodic table, 277 times heavier than hydrogen. It is produced by nuclear fusion, by bombarding a lead target with zinc ions. As the element decays after just a split second, its existence can only be proved with the help of extremely fast and sensitive analysis methods. Twenty-one scientists from Germany, Finland, Russia and Slovakia were involved in the experiments that led to the discovery of element 112.
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. The discovering teams at GSI already named five of them: element 107 is called bohrium, element 108 hassium, element 109 meitnerium, element 110 darmstadtium, and element 111 is named roentgenium.
The new element 112 discovered by GSI has been officially recognized and will be named by the Darmstadt group in due course. Their suggestion should be made public over this summer.
The element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Centre for Heavy Ion Research) in Darmstadt, has been officially recognized as a new element by the International Union of Pure and Applied Chemistry (IUPAC). IUPAC confirmed the recognition of element 112 in an official letter to the head of the discovering team, Professor Sigurd Hofmann. The letter furthermore asks the discoverers to propose a name for the new element. Their suggestion will be submitted within the next weeks. In about 6 months, after the proposed name has been thoroughly assessed by IUPAC, the element will receive its official name. The new element is approximately 277 times heavier than hydrogen, making it the heaviest element in the periodic table.
“We are delighted that now the sixth element – and thus all of the elements discovered at GSI during the past 30 years – has been officially recognized. During the next few weeks, the scientists of the discovering team will deliberate on a name for the new element”, says Sigurd Hofmann. 21 scientists from Germany, Finland, Russia and Slovakia were involved in the experiments around the discovery of the new element 112.
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. GSI has already named their officially recognized elements 107 to 111: element 107 is called Bohrium, element 108 Hassium, element 109 Meitnerium, element 110 Darmstadtium, and element 111 is named Roentgenium.
Recommendation for the Naming of Element of Atomic Number 110
Prepared for publication by J. Corish and G. M. Rosenblatt
A joint IUPAC-IUPAP Working Party confirms the discovery of element number 110, made by the collaboration of Hofmann et al. from the Gesellschaft für Schwerionenforschung mbH (GSI) in Darmstadt, Germany.
In accord with IUPAC procedures, the discoverers have proposed a name and symbol for the element. The Inorganic Chemistry Division Committee now recommends this proposal for acceptance. The proposed name is darmstadtium with symbol Ds. This proposal lies within the long established tradition of naming an element after the place of its discovery.
Given all the evidence presently available, we believe it entirely reasonable that Mars is inhabited with living organisms and that life independently originated there
The conclusion of a study by the National Academy of Sciences in March 1965, after 88 years of surveying the red planet through blurry telescopes. Four months later, NASA’s Mariner 4 spacecraft would beam back the first satellite images of Mars confirming the opposite.
After Earth and Mars were born four and a half billion years ago, they both contained all the elements necessary for life. Both initially had surface water and an atmosphere, but scientists now believe Mars lost its atmosphere four billion years ago, with Earth getting an oxygenated atmosphere around half a billion years later.
According to the chief scientist on NASA’s Curiosity mission, if life ever existed on Mars it was most likely microscopic and lived more than three and a half billion years ago. But even on Earth, fossils that old are vanishingly rare. “You can count them on one hand,” he says. “Five locations. You can waste time looking at hundreds of thousands of rocks and not find anything.”
The impact of a 40kg meteor on the Moon on March 17 was bright enough to see from Earth without a telescope, according to NASA, who captured the impact through a Moon-monitoring telescope.
Now NASA’s Lunar Reconnaissance Orbiter will try and search out the impact crater, which could be up to 20 metres wide.
|
<urn:uuid:132d7809-ba28-4c89-8ce0-867a2a81c1e6>
| 4.1875 | 300 |
Content Listing
|
Science & Tech.
| 42.244446 |
By Alexander Villafania, INQUIRER.NET. In the aftermath of perhaps the worst typhoon to strike Metro Manila in recent years, environmental groups are blaming climate change for the effects of “Ondoy” (international name “Ketsana”). In separate statements, the World Wildlife Fund (WWF) and Greenpeace warned that such a disaster could be repeated unless comprehensive measures are taken immediately.
Greenpeace, in its statement, reiterated its call for industrialized countries to put up money to fund climate change measures, especially in disaster-prone countries such as the Philippines. Greenpeace Climate and Energy Campaigner Amalie Obusan noted that the disaster in the Philippines happened between two international climate change meetings, the recently concluded G20 Summit and the upcoming United Nations Framework Convention on Climate Change (UNFCCC) summit. “While world leaders are pussyfooting on their commitments, countries like ours are left to experience the ravages of climate change,” Obusan said.
In a separate statement, WWF-Philippines Vice Chair Jose Lorenzo Tan called for a reduction in fossil fuel consumption, which is blamed for contributing to climate change. Tan said the country is not equipped to take the brunt of another similar disaster, so measures must be taken to help mitigate its effects. “Planning must start from scenarios of the future, rather than from the present. Collectively, we must identify 'next practices', because today's 'best practice' will no longer suffice. We must start small, learn fast and scale rapidly,” Tan said.
The Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) reported that Ondoy dropped the heaviest rainfall on Metro Manila in recent history, a record 34.1 centimeters (13 inches) of water in less than six hours. The previous record was set in 1967, with 33.4 centimeters of rain over the course of 24 hours.
By Dennis Posadas. There are interesting developments in Chinese cleantech, and I will discuss some headlines of interest that have been reported recently. While I will continue to write about Philippine cleantech efforts in renewables and energy efficiency, it is also important to take note of what is happening in the region, and maybe some implications for us. The first is a news report in the New York Times that First Solar, a company that makes thin-film solar photovoltaics, bagged a contract to build the world's largest solar installation in Mongolia. The rated capacity of the solar plant will be 2 GW (or 2,000 MW if you prefer), and it will be built using the non-silicon technology of First Solar. Thin films like cadmium telluride are typically deposited on surfaces like glass, and do not require silicon. The upside of thin films is that you can make them into windows and basically coat a building with them, at a cheaper price. The downside is that they are only around 7% efficient, as compared to the 11% efficiency of silicon-based solar photovoltaics, which means you need more cells and more space (e.g. land). Another is that cadmium is poisonous, and so while there is no danger of leaching during the active life of the solar cell, the cells have to be disposed of properly once they are past their useful life of around 25 years. The implication for us is that this particular project, because the winner was a thin-film solar technology (which we do not make here as far as I know), did not result in additional business for the local Philippine operations of SunPower and Solaria, which make silicon-based photovoltaics. However, if the 2 GW China project is an indication of future opportunities, maybe it will be good for the industry as a whole. The second, featured both in MIT Technology Review and the New York Times, is what the Chinese are doing with clean coal.
It appears that most of the plants being built in China these days are advanced-technology clean coal plants, which do not burn the coal directly (which releases carbon dioxide) but instead, using an old pre-World War II process, convert coal into synthetic gas (similar to natural gas). China has the world's third largest coal reserves, after the US and Russia. US Energy Secretary and Nobel Laureate Steven Chu has promised to prioritize its adoption in the US as well. It is important to stress that while the carbon dioxide emissions have been cut by a large percentage, these new plants still emit carbon dioxide. The Chinese have even built a small experimental plant to remove the carbon dioxide from power emissions and use it for soft-drink carbonation. What a creative way to do carbon capture and storage! Store it in our bodies when we drink it. Of course, we will eventually release it back to the atmosphere. But seriously, the Chinese are also looking at carbon capture and storage (CCS), although I have not seen any major advances yet in China in this arena. The implication here for us is that if the Chinese can develop a better way, or an alternative to CCS that cuts the carbon emissions of coal, then maybe coal can have a second life, particularly since we have a lot of it. But that is, in my opinion, still in the realm of research. I do not expect to see carbon capture and storage in the Philippines for a long time; it is still very, very expensive, unless someone comes up with a breakthrough. In wind, China has doubled its capacity in the past few years and will become the world's largest market for wind equipment. Interestingly enough, India, through a company called Suzlon Energy (you may have seen their commercials on CNN), is now giving US and European wind players like GE and Vestas a run for their money. Locally, I think we should pursue the development of micro-wind and micro-hydro systems.
In electric vehicles, Fortune recently did a profile on a company called BYD (Build Your Dreams), which Warren Buffett recently invested in. In solar photovoltaics, Suntech, a Wuxi-based company which was started by local government funds, is now one of the largest solar cell manufacturers in the world. The key learning for us here is that Suntech was started by Chinese local government funds, not even national government funds. The figure mentioned in Fortune was $4m, which is doable even here. Maybe that is a learning we can use, but I am not sure if local laws will permit that. Finally, the UK Guardian recently reported that US President Barack Obama may be in China this November to sign a major US-China cleantech alliance accord, prior to the December Copenhagen climate summit. While it is hard to convince the US Senate, which has to contend with a strong oil, gas and coal industry lobby, to go green, it appears that the Chinese see green as a way, not just to improve their worldwide image in the climate arena, but to actually make some serious green (as in greenbacks) out of it. The question there is: where does that leave us? __________________________________________________________________________ Dennis Posadas is the editor of Cleantech Asia Online, and the author of Jump Start: A Technopreneurship Fable (Singapore: Pearson Prentice Hall, 2009)
By Dennis Posadas. While I appreciate the enthusiasm of groups like Greenpeace and WWF about enabling as much clean/renewable energy as we can put into the system, given that we have a new renewable energy law, there are also a few mindset changes we need to put into place. I am all for renewable energy; however, as a trained engineer, I also realize that there are some hurdles that need to be overcome. The first is that some renewable energy sources, like solar and wind, while abundant, are also intermittent. The sun doesn't always shine, and the wind doesn't always blow. On the other hand, cogeneration and biomass plants, which are clean sources, can be stable if enough heat or biomass material is forecast and planned. For solar and wind, if we want 24x7 use, we need to make sure that there is an energy storage mechanism of some type. The most common energy storage device is of course a battery. For bigger solar and wind systems, running in the megawatt range, batteries would have to be connected together, so that probably won't be practical. Concentrated solar plants (CSPs) that employ banks of mirrors in the desert use some type of liquid like molten salt. Another possibility is to use pumped storage, as at Lake Caliraya. When power is available, it is used to pump water up to an elevated lake. During nighttime, the lake water can be released to drive a generating turbine. Other schemes involve compressed air (in the US) or, as in the case of some wind systems, natural gas turbines. But for many systems, the storage technique employed is simply to connect the renewable energy system to the grid. Now as we increase the percentage of renewable energy systems that connect directly to the grid, we have to remember again that these are intermittent. You can't exactly tell the sun to shine at 6:00am, or the wind to start blowing at 9:00pm.
So there has to be a way to prevent blowups of circuit breakers or fuses, a way to plan when each energy source will come on stream. There is a role for software and intelligent grid systems that work with meteorological information to determine whether there is a high or low likelihood that the wind or sun will be available at a certain time. The grid itself, and its components, will have to be redesigned to take into account the higher occurrence of intermittent turn-on and turn-off of power sources, many of them renewable. Appliances may need to have chips in them, telling them whether the power at a given hour is mostly coming from renewable sources or not. Meralco's plan, for example, to offer Internet over broadband lines is indicative of this. The common perception is that they plan mainly to offer broadband services to the public through their power lines. Actually, it is not as simple as that. The Internet over power lines can also be used to command and control equipment, such as chillers in malls, to turn on or to idle at a certain time. The grid needs to be intelligent, to handle the intermittent nature of clean/renewable energy systems. There will be a lot of new capabilities, already being experienced in places like California and Europe, that we will soon have here. Our electric meters (“kontadors”), for example, will run backwards and forwards. So if we decide to install solar panels or wind turbines on our roofs, not only can we be consumers, we can also be mini power producers supplying to Meralco. The amount we sold is then subtracted from the amount we consumed. The more that citizens, private industry, and government invest in these mini and private renewable energy systems, the less need there will be for big, and often carbon-emitting, power plants. In other words, power generation will be decentralized to many small renewable power producers, as opposed to a few large ones. Now who will pay for that?
Some cities in the US consider solar panels part of the house (roof) and allow citizens to simply add a little extra to their real estate tax and amortize the solar panels over 25 years. The payment can actually be taken from the savings generated by the panels, so in effect a no-cash-out scheme is feasible. Are we ready for that? We all want reduced carbon emissions. But we don't get there by simply joining token Earth Hour or Earth Day celebrations. We also need to do some work, and take the time to educate ourselves. ___________________________________________________________ Dennis Posadas is the editor of Cleantech Asia Online, and the author of Jump Start: A Technopreneurship Fable (Singapore: Pearson Prentice Hall, 2009)
By Alexander Villafania, INQUIRER.NET. After over 50 years, mathematical genius Alan Turing could get the justice he deserves after being prosecuted as a homosexual. Two separate online petitions for an apology by the British government were set up by supporters of Alan Turing, the British cryptanalyst who broke the codes of the legendary German Enigma machines during World War II. The first petition was created by computer scientist John Graham-Cumming. In his blog, Graham-Cumming said he wanted all records about Turing to be released by the British government. He also said he wanted Turing to get a posthumous knighthood. So far, his petition has gathered about 22,800 supporters. The deadline for signing the online petition is January 20, 2010. The second petition demanded an apology from the British government for Turing, who was alleged to have been prosecuted because of his homosexuality. It was started by Cameron Buckner in support of Graham-Cumming's first petition, and so far has 8,700 signatories. Based on the records of the British National Archives, Turing joined the British government's Government Code and Cypher School during World War II specifically to decipher the Enigma machine used by the Germans. His paper “On Computable Numbers” led to the creation of the “Turing machine,” a thought experiment that simulated the logic of a computer algorithm. Turing's work on computational algorithms thus led to the future development of computer science concepts, as well as the modern computer. But in 1952 Turing was arrested for being a homosexual and was subjected to chemical castration using estrogen injections. He died in 1954 after consuming a cyanide-laced apple.
Our Changing Ocean
Vast and powerful though the ocean is, people have changed it. It's a different ocean now.
The ocean’s enduring surface beauty hides its plight. But the ocean today is a diminished version of a much healthier ocean of not so long ago.
The ocean is the source of about half the oxygen we breathe, much of the water we drink, and much of the food we eat. (If you don’t eat fish, consider this: about a third of the world fish catch gets made into feed for chickens, pigs, and other livestock.) Changes to the ocean undermine the health and well-being of people and wildlife worldwide.
These changes include depletion from overfishing, warming, ocean acidification caused by the same carbon dioxide that is warming the atmosphere and the upper sea, chemical pollution, plastic debris, loss of wetlands, coastal mangrove forests, and coral reefs, and invasive species.
Each of these alone is serious.
Can any particular part of the oceans survive these things happening all at once? The answer is: “it depends.”
There is still time to reverse course and restore the ocean to a healthy balance. Many dedicated people and organizations, including Blue Ocean Institute, are working actively to solve the oceans’ problems.
Be a part of this hopeful work. Jump in and help save the oceans!
Dive into our Issues section to learn more. Being knowledgeable will help you decide what part of the solution is just for you.
Why the Oceans?
Simple: the ocean supports life on this planet. It feeds us, produces the oxygen we breathe, maintains our climate, cycles vital nutrients through countless ecosystems, and provides food and medicines. The ocean provides jobs, food, energy, and recreation. As if that weren't enough, the ocean is beautiful and inspiring. And that would be enough.
People
Climate change is the defining environmental issue of our time and our children’s time. Into one crowded elevator go conservation of nature, human health, the prospects for agriculture, international stability, national security, and of course energy policy and technology. Climate change reflects our intensifying presence on the surface of this planet. It wraps together everything
Carbon dioxide from burning fossil fuels is changing the oceans’ chemistry. This is ocean acidification. The head of the National Oceanic and Atmospheric Administration calls ocean acidification global warming’s equally evil twin. The oceans are absorbing up to a million tons of carbon dioxide every hour. The good news: less carbon dioxide in the air
Carbon dioxide from burning fossil fuels is not only changing the oceans’ chemistry and warming the atmosphere, it is also warming the oceans. There’s a third more carbon dioxide in the air than at the start of the Industrial Revolution. The carbon acts like insulation in the atmosphere, or like glass in a greenhouse —
Overfishing is depleting the world’s oceans and having a negative impact on marine biodiversity and on human health, welfare, and prosperity. Links to more complete info in our Fish as Food section.
In the ocean, little fish play a big role. Small fish like sardines and anchovies are some of the most important fish in the sea. Fish such as herring, anchovies, menhaden, and sardines feed mostly on plankton all their lives. They supply calories and nourishment (food!) for many top predators including cod, tuna, salmon, and
Invasive Marine Species
Invasive species are animals and plants that hitchhike or ride along to places where they are not normally found. In their new homes, invasive species can sometimes create big problems for native species and ecosystems. The main source of marine invasive species is the global shipping industry, specifically through ballast water. Species can also be
Marine debris comes from everyone and every source that makes every kind of garbage. Tons of trash from both land – up to 80 percent — and ships constantly finds its way to the sea. Much of this marine debris does not go away; it cannot dissolve and it lasts in the oceans for many
Coastal Habitat Loss
Homes, jetties, seawalls, canals, and other structures built on beaches or wetlands often destroy habitat for sea turtles, birds, fish, and other sea life. Salt and tidal marshes, wetlands, mangroves, and coral reefs also suffer when development is unsustainable. Wetlands, mangroves and sea grasses are valuable natural resources as they hold sediment and nutrients, filter
In addition to carbon dioxide, mercury, and marine debris, which are types of pollution, other man-made pollutants constantly enter the oceans from a range of sources. These include oil, fertilizers, toxic chemicals, and sewage.
OIL & CHEMICALS
Oil spills may be the most infamous pollutant because popular media often vividly shows dramatic damage. The
Aquaculture – Farmed Seafood
Aquaculture can impact many aspects of ocean life. Visit Aquaculture in our Fish as Food section which also includes sustainable seafood choices plus discussions about genetically modified fish, seafood fraud, bycatch, and more.
|
<urn:uuid:37813f25-88c4-46b8-a404-67d7ad70dc62>
| 3.0625 | 1,062 |
Knowledge Article
|
Science & Tech.
| 39.662902 |
Chandra "Hears" a Supermassive Black Hole in Perseus
A 53-hour Chandra observation of the central region of the Perseus galaxy cluster (left) has revealed wavelike features (right) that appear to be sound waves. The features were discovered by using a special image-processing technique to bring out subtle changes in brightness.
These sound waves are thought to have been produced by explosive events occurring around a supermassive black hole (bright white spot) in Perseus A, the huge galaxy at the center of the cluster. The pitch of the sound waves translates into the note of B flat, 57 octaves below middle-C. This frequency is over a million billion times deeper than the limits of human hearing, so the sound is much too deep to be heard.
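As a rough sanity check of the quoted pitch (assuming the B flat just below middle C at about 233.08 Hz in equal temperament, a value not stated in the article):

```python
# B-flat 57 octaves below the B-flat under middle C; each octave halves
# the frequency. Assumed reference pitch: B-flat3 ~ 233.08 Hz.
bflat_hz = 233.08
f_perseus = bflat_hz / 2**57      # ~1.6e-15 Hz

assert f_perseus < 2e-15
# "over a million billion times deeper" than the ~20 Hz hearing limit:
assert 20.0 / f_perseus > 1e15
```

This is consistent with the article's claim that the sound is far too deep to hear.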
The image also shows two vast, bubble-shaped cavities, each about 50 thousand light years wide, extending away from the central supermassive black hole. These cavities, which are bright sources of radio waves, are not really empty, but filled with high-energy particles and magnetic fields. They push the hot X-ray emitting gas aside, creating sound waves that sweep across hundreds of thousands of light years.
The detection of intergalactic sound waves may solve the long-standing mystery of why the hot gas in the central regions of the Perseus cluster has not cooled over the past ten billion years to form trillions of stars. As sound waves move through gas, they are eventually absorbed and their energy is converted to heat. In this way, the sound waves from the supermassive black hole in Perseus A could keep the cluster gas hot.
The explosive activity occurring around the supermassive black hole is probably caused by large amounts of gas falling into it, perhaps from smaller galaxies that are being cannibalized by Perseus A. The dark blobs in the central region of the Chandra image may be fragments of such a doomed galaxy.
|
<urn:uuid:7c5032f8-872f-474b-bda7-8c70bc31adaa>
| 4.34375 | 389 |
Knowledge Article
|
Science & Tech.
| 43.14427 |
Declares a cursor definition.
A cursor is declared in accordance with the select-statement or the result set procedure call specified in procedure-call-statement.
The select-statement may be specified explicitly in ordinary embedded SQL applications or by the name of a prepared select-statement, identified by statement-name, in dynamic SQL statements, see the Mimer SQL Programmer's Manual, chapter 11, Dynamic SQL.
The cursor is identified by cursor-name, and may be used in FETCH, DELETE CURRENT and UPDATE CURRENT statements. The cursor must be activated with an OPEN statement before it can be used.
A cursor declared as REOPENABLE may be opened several times in succession, and previous cursor states are saved on a stack, see OPEN. Saved cursor states are restored when the current state is closed, see CLOSE.
A cursor declared as SCROLL will be a scrollable cursor. For a scrollable cursor, records can be fetched using an orientation specification. See the description of FETCH for a description of how the orientation can be specified.
A cursor will be non-scrollable if NO SCROLL is explicitly specified. The cursor will be non-scrollable and not reopenable by default.
select-statement, see SELECT Statements.
procedure-call-statement, see CALL.
If an execute-statement-command is used, the precompiled statement must be a select or a result-set procedure.
If a procedure-call-statement is specified, it must specify a result set procedure.
The following restrictions apply to procedural usage:
- The cursor cannot be declared as REOPENABLE
- The select-statement cannot be in the form of a prepared dynamic SQL statement, i.e. specifying statement-name is not allowed
- If the cursor declaration contains a select statement, the access-clause for the procedure must be READS SQL DATA or MODIFIES SQL DATA, see CREATE PROCEDURE.
The DECLARE CURSOR statement is declarative, not executable. In an embedded usage context, access rights for the current ident are checked when the cursor is opened, not when it is declared.
In a procedural usage context, access rights for the current ident are checked when the cursor is declared, i.e. when the procedure containing the declaration is created.
The value of cursor-name may not be the same as the name of any other cursor declared within the same compound statement (Procedural usage) or in the same compilation unit (Embedded usage).
The select-statement is evaluated when the cursor is opened, not when it is declared. This applies both to select-statements identified by statement-name and to host variable references used anywhere in the select statement.
The execution of the result set procedure specified in a CALL statement is controlled by the opening of the cursor and subsequent fetches, see the Mimer SQL Programmer's Manual, chapter 12, Result Set Procedures.
REOPENABLE cannot be used if evaluation of select-statement uses a work table, or if the cursor declaration occurs within a procedure.
If the declared cursor is a dynamic cursor, the DECLARE statement must be placed before the PREPARE statement.
A reopenable cursor can be used to solve the 'Parts explosion' problem. Refer to the Mimer SQL Programmer's Manual, chapter 8, The 'Parts explosion' Problem for a description.
Example:
DECLARE cur1 CURSOR FOR EXECUTE STATEMENT seltaba
EXTENDED: The EXECUTE STATEMENT command is a Mimer SQL extension. Support for the keyword REOPENABLE is a Mimer SQL extension.
Note: See also standard compliance for SELECT.
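As an illustrative sketch (hypothetical table, column, and host-variable names; standard SQL OPEN/FETCH/CLOSE syntax is shown, so consult the manual for the exact Mimer form), a scrollable cursor might be declared and used like this:

```sql
-- Hypothetical EMPLOYEES table; :dept, :no, :name are host variables
DECLARE cur_emp SCROLL CURSOR FOR
    SELECT emp_no, emp_name
    FROM   employees
    WHERE  dept_no = :dept;

OPEN cur_emp;                               -- select evaluated here, not at declaration
FETCH NEXT FROM cur_emp INTO :no, :name;    -- orientation allowed because of SCROLL
CLOSE cur_emp;
```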
Upright Database Technology AB
Voice: +46 18 780 92 00
Fax: +46 18 780 92 40
|
<urn:uuid:b1993193-d6e1-462e-bd4a-4801beb1522b>
| 2.84375 | 803 |
Documentation
|
Software Dev.
| 36.165247 |
Scientists have long projected that areas north and south of the tropics will grow drier in a warming world – from the Middle East through the European Riviera to the American Southwest, from sub-Saharan Africa to parts of Australia.
These regions are too far from the equator to benefit from the moist columns of heated air that result in steamy afternoon downpours. And the additional precipitation foreseen as more water evaporates from the seas is mostly expected to fall at higher latitudes. Essentially, a lot of climate scientists say, these regions may start to feel more like deserts under the influence of global warming.
Now scientists have measured a rapid recent expansion of desert-like barrenness in the subtropical oceans – in places where surface waters have also been steadily warming. There could be a link to human-driven climate change, but it’s too soon to tell, the scientists said.
[UPDATED below, 3/6, 1 p.m.]
|
<urn:uuid:71855304-2f8a-4425-8945-02a9b90be1ae>
| 3.078125 | 203 |
Truncated
|
Science & Tech.
| 46.19395 |
Many people are confused about the concepts in DBus. This page gives an analogy to the web which should help to explain things.
- unique bus name
- well-known bus name
- object path
- method name
- in parameters
- out parameters
Web Server Analogy
- unique bus name is like an IP address. In particular it is dynamic.
- well-known bus name is like a hostname. It can be held by different programs at different times, but they should all implement the same API
- object path is like the path on the server
- interface/method name is like GET or POST
- in parameters are like GET/POST variables
- out parameters are like the page which is returned.
Object-Oriented Language Analogy
- an object path refers to an object, such as a java.lang.Object
- an interface is exactly like a Java interface
- in parameters are method arguments
- out parameters are method return values
- unique bus name identifies the running process or application uniquely (these bus names are never re-used by a different process)
- well-known bus name is a "symlink" that points to the process providing a particular API
- an API is made up of objects that are expected to exist, which are expected to implement certain interfaces
- see also http://log.ometer.com/2007-05.html#17
|
<urn:uuid:cb0bcce9-2024-41cc-84bb-9ebb601e44b8>
| 3.03125 | 295 |
Knowledge Article
|
Software Dev.
| 39.245758 |
The effect of UVR on biological systems is wavelength dependent. Action spectrum for DNA damage is an essential component of understanding the effects of increased UVB on a range of Antarctic invertebrate larvae. The wavelength dependency is quantified using spectral weighting functions which provide information such as the target organelles/molecules of the UVR, the degree that organisms are ... influenced by wavelengths that are enhanced by the process of ozone depletion and the activity of sunscreening and anti-oxidant compounds. Biological weighting functions (BWFs) were made for 3 embryonic stages of Sterechinus (eggs, blastula, 4 armed larvae) and embryos of Acodantaster, Perknaster and Parbolarsis. The embryos and larvae were exposed to artificial lights for 3 days. Three filter treatments with 50% nominal cut-off at 280, 305, 320, 375 and 400nm wavelengths were used. DNA was analysed for cyclobutane pyrimidine dimers (CPDs). Using the species specific BWF and spectral irradiance data, biological effective irradiances were calculated for a given ambient light environment. Modelling of the species specific and stage specific effects of ozone depletion on larval stage were made using the BWFs and the change in ambient light field during ozone depletion.
|
<urn:uuid:8aa7e8f1-43dd-4954-ad12-56a69e48c91b>
| 3.078125 | 263 |
Academic Writing
|
Science & Tech.
| 29.766875 |
Sketch the graph of the following.
Any help would be appreciated!
You know that there are two vertical asymptotes, at x = 5 and x = -3, and that the graph tends to the line y = x as x becomes large.
Since there are two vertical asymptotes, the graph is in three parts; as with similar graphs, the central part is fairly similar to the numerator, which is a cubic.
Now you can check the value of y slightly to the left and slightly to the right of each asymptote to see the shape of the graph, whether it is rising or falling.
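The original function is not shown; as a hypothetical example with exactly these features, take f(x) = x + 1/((x - 5)(x + 3)), which has vertical asymptotes at x = 5 and x = -3 and the oblique asymptote y = x. Probing near the asymptotes, as the answer suggests:

```python
# Hypothetical function matching the stated features (an assumption,
# since the problem's actual expression is not given).
def f(x):
    return x + 1.0 / ((x - 5.0) * (x + 3.0))

# just left/right of each vertical asymptote: branch directions
assert f(5.0001) > 1000      # right of x = 5: shoots up
assert f(4.9999) < -1000     # left of x = 5: plunges down
assert f(-3.0001) > 1000     # left of x = -3: shoots up
assert f(-2.9999) < -1000    # right of x = -3: plunges down
# far from the origin the graph hugs the line y = x
assert abs(f(1000.0) - 1000.0) < 1e-5
```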
|
<urn:uuid:b6dc6fda-e814-45bf-8df1-b4f612a1c966>
| 2.796875 | 136 |
Q&A Forum
|
Science & Tech.
| 69.590585 |
You have an empty container, and an infinite number of marbles, each numbered with an integer from 1 to infinity.
At the start of the minute, you put marbles 1 - 10 into the container, then remove one of the marbles and throw it away. You do this again after 30 seconds, then again in 15 seconds, and again in 7.5 seconds. You continuously repeat this process, each time after half as long an interval as the time before, until the minute is over.
Since this means that you repeated the process an infinite number of times, you have "processed" all your marbles.
How many marbles are in the container at the end of the minute if for every repetition (numbered N)
A. You remove the marble
numbered (10 * N)
B. You remove the marble numbered (N)
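As an aside (not part of the original puzzle), a finite-step simulation shows why the two rules diverge in the limit even though they look identical at every finite step:

```python
# Finite-step simulation of the marble puzzle. Illustration only:
# the infinite limit behaves differently from any finite step.
def simulate(n_steps, rule):
    container = set()
    for n in range(1, n_steps + 1):
        container.update(range(10 * n - 9, 10 * n + 1))  # add marbles 10n-9..10n
        container.discard(rule(n))                       # remove one marble
    return container

a = simulate(1000, lambda n: 10 * n)   # rule A: remove marble 10*N
b = simulate(1000, lambda n: n)        # rule B: remove marble N

# After any finite number of steps both contain 9*N marbles...
assert len(a) == len(b) == 9000
# ...but under rule B every marble numbered <= N is gone (min is N+1),
# so in the limit B leaves the container empty, while A keeps every
# marble not divisible by 10.
assert min(b) == 1001
assert 1 in a and 10 not in a
```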
(In reply to "My ideas?")
Well... to be specific, I think you said that you can't multiply or divide infinity (not dividing BY infinity).
I'm not sure what dividing by infinity means, unless you're implying dividing by a variable as the variable grows towards infinity. In which case, you are talking about the "normal" limit described by calculus.
You said in your first post "so it's infinity times 9 divided by 10... Wait, we can't divide or multiply infinity".
I don't see why not. But an infinity multiplied by, divided by, added to, or lessened by a constant is the same infinity.
Again, I would refer you to studies about what infinities mean and that they are normally dealt with as sets of elements and operations (often mappings) ON those sets.
|
<urn:uuid:d185d41e-eca2-40cb-80e0-525a863d830c>
| 2.921875 | 350 |
Comment Section
|
Science & Tech.
| 60.809695 |
What is an API?
API is an interface that allows software programs to interact with each other. It defines a set of rules that should be followed by the programs to communicate with each other. APIs generally specify how the routines, data structures, etc. should be defined in order for two applications to communicate. APIs differ in the functionality they provide. There are general APIs that provide the library functionality of a programming language, such as the Java API. There are also APIs that provide specific functionality, such as the Google Maps API. There are also language-dependent APIs, which can only be used from a specific programming language. Furthermore, there are language-independent APIs that can be used with several programming languages. APIs need to be implemented very carefully, exposing only the required functionality or data to the outside while keeping the other parts of the application inaccessible. Usage of APIs has become very popular on the internet. It has become very common to expose some functionality and data to the outside world through an API on the Web. Such functionality can be combined to offer improved services to users.
What is an SDK?
SDK is a set of tools that can be used to develop software applications targeting a specific platform. SDKs include tools, libraries, documentation and sample code that help a programmer develop an application. Most SDKs can be downloaded from the internet, and many are provided free of charge to encourage programmers to use the SDK's programming language. A widely used example is the Java SDK (JDK), which includes all the libraries, debugging utilities, etc. that make writing programs in Java much easier. SDKs make the life of a software developer easy, since there is no need to look for components/tools that are compatible with each other, and all of them are integrated into a single package that is easy to install.
What is the difference between API and SDK?
API is an interface that allows software programs to interact with each other, whereas an SDK is a set of tools that can be used to develop software applications targeting a specific platform. The simplest version of an SDK could be an API that contains some files required to interact with a specific programming language. So an API can be seen as a simple SDK without all the debugging support, etc.
|
<urn:uuid:7cf35450-4a45-4a04-92c2-84c70317cbd0>
| 3.40625 | 462 |
Q&A Forum
|
Software Dev.
| 39.488346 |
In Python, for a binary file, I can write this:
buf_size = 1024*64  # this is an important size...
with open(file, "rb") as f:
    while True:
        data = f.read(buf_size)
        if not data:
            break
        # deal with the data....
With a text file that I want to read line-by-line, I can write this:
with open(file, "r") as file: for line in file: # deal with each line....
Which is shorthand for:
with open(file, "r") as file: for line in iter(file.readline, ""): # deal with each line....
This idiom is documented in PEP 234 but I have failed to locate a similar idiom for binary files.
I have tried this:
>>> with open('dups.txt', 'rb') as f:
...     for chunk in iter(f.read, ''):
...         i += 1
>>> i
1  # 30 MB file, i == 1 means it was read in one go...
I tried putting iter(f.read(buf_size), '') but that fails: because of the parentheses, f.read(buf_size) is called immediately, while iter() expects a callable as its first argument.
I know I could write a function, but is there a way, with the default for chunk in file: idiom, to use a buffer size rather than reading line by line?
Thanks for putting up with the Python newbie trying to write his first non-trivial and idiomatic Python script.
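For reference, a sketch of one common answer to this question (not part of the original post): freeze the buffer size with functools.partial so that iter() receives a callable, with b'' as the sentinel for binary files.

```python
import functools
import io

buf_size = 1024 * 64

# iter(callable, sentinel) calls the callable repeatedly until it
# returns the sentinel; partial() freezes the buffer-size argument.
def chunks(f, size=buf_size):
    return iter(functools.partial(f.read, size), b'')

# demo with an in-memory binary stream standing in for a file
f = io.BytesIO(b'x' * 150000)
sizes = [len(chunk) for chunk in chunks(f)]
assert sizes == [65536, 65536, 18928]
```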
|
<urn:uuid:49c7c0b6-a2ce-4617-b05c-a6e4bae21e61>
| 2.78125 | 319 |
Q&A Forum
|
Software Dev.
| 85.561623 |
Explanation: Please wait while one of the largest mobile machines in the world crosses the road. The machine pictured above is a bucket-wheel excavator used in modern surface mining. Machines like this have given humanity the ability to mine minerals and change the face of planet Earth in new and dramatic ways. Some open pit mines, for example, are visible from orbit. The largest excavators are over 200 meters long and 100 meters high, now dwarfing the huge NASA Crawler that transports space shuttles to the launch pads. Bucket-wheel excavators can dig a hole the length of a football field to over 25 meters deep in a single day. They may take a while to cross a road, though, with a top speed under one kilometer per hour.
|
<urn:uuid:7e88181f-71be-4790-9182-7f6015ab60d7>
| 3.265625 | 155 |
Personal Blog
|
Science & Tech.
| 49.371154 |
July Rendezvous with Vesta
"We often refer to Vesta as the smallest terrestrial planet," said Christopher T. Russell, a UCLA professor of geophysics and space physics and the mission's principal investigator. "It has planetary features and basically the same structure as Mercury, Venus, Earth and Mars. But because it is so small, it does not have enough gravity to retain an atmosphere, or at least not to retain an atmosphere for very long.
"There are many mysteries about Vesta," Russell said. "One of them is why Vesta is so bright. The Earth reflects a lot of sunlight — about 40 percent — because it has clouds and snow on the surface, while the moon reflects only about 10 percent of the light from the Sun back. Vesta is more like the Earth. Why? What on its surface is causing all that sunlight to be reflected? We'll find out."
Dawn will map Vesta's surface, which Russell says may be similar to the moon's. He says he expects that the body's interior is layered, with a crust, a mantle and an iron core. He is eager to learn about this interior and how large the iron core is.
Named for the ancient Roman goddess of the hearth, Vesta has been bombarded by meteorites for 4.5 billion years.
"We expect to see a lot of craters," Russell said. "We know there is an enormous crater at the south pole that we can see with the Hubble Space Telescope. That crater, some 280 miles across, has released material into the asteroid belt. Small bits of Vesta are floating around and make their way all the way to the orbit of the Earth and fall in our atmosphere. About one in every 20 meteorites that falls on the surface of the Earth comes from Vesta. That has enabled us to learn a lot about Vesta before we even get there."
Dawn will arrive at Vesta in July. Beginning in September, the spacecraft will orbit Vesta some 400 miles from its surface. It will then move closer, to about 125 miles from the surface, starting in November. By January of 2012, Russell expects high-resolution images and other data about surface composition. Dawn is arriving ahead of schedule and is expected to orbit Vesta for a year.
Vesta, which orbits the Sun every 3.6 terrestrial years, has an oval, pumpkin-like shape and an average diameter of approximately 330 miles. Studies of meteorites found on Earth that are believed to have come from Vesta suggest that Vesta formed from galactic dust during the Solar System's first 3 million to 10 million years.
Dawn's cameras should be able to see individual lava flows and craters tens of feet across on Vesta's surface.
"We will scurry around when the data come in, trying to make maps of the surface and learning its exact shape and size," Russell said.
Dawn has a high-quality camera, along with a back-up; a visible and near-infrared spectrometer that will identify minerals on the surface; and a gamma ray and neutron spectrometer that will reveal the abundance of elements such as iron and hydrogen, possibly from water, in the soil. Dawn will also probe Vesta's gravity with radio signals.
The study of Vesta, however, is only half of Dawn's mission. The spacecraft will also conduct a detailed study of the structure and composition of the "dwarf planet" Ceres. Vesta and Ceres are the most massive objects in the main asteroid belt between Mars and Jupiter. Dawn's goals include determining the shape, size, composition, internal structure, and the tectonic and thermal evolution of both objects, and the mission is expected to reveal the conditions under which each of them formed.
Dawn, only the second scientific mission to be powered by an advanced NASA technology known as ion propulsion, is also the first NASA mission to orbit two major objects.
"Twice the bang for the buck on this mission," said Russell, who added that without ion propulsion, Dawn would have cost three times as much.
UCLA graduate and postdoctoral students work with Russell on the mission. Now is an excellent opportunity for graduate students to join the project and help analyze the data, said Russell, who teaches planetary science to UCLA undergraduates and solar and space physics to undergraduates and graduate students.
After orbiting Vesta, Dawn will leave for its three-year journey to Ceres, which could harbor substantial water or ice beneath its rock crust — and possibly life. On the way to Ceres, Dawn may visit another object. The spacecraft will rendezvous with Ceres and begin orbiting in 2015, conducting studies and observations for at least five months.
Russell believes that Ceres and Vesta, formed almost 4.6 billion years ago, have preserved their early record, which was frozen into their ancient surfaces.
"We're going back in time to the early solar system," he said.
|
<urn:uuid:62f9a0fe-badd-43cc-ae00-acdb6adf5f1a>
| 3.640625 | 1,015 |
Knowledge Article
|
Science & Tech.
| 53.927142 |
Algae plus salt water equals … fuel? Bilal Bomani wants to create a biofuel that is "extreme green"— sustainable, alternative and renewable. At NASA's GreenLab Research Facility, he uses algae and halophytes to create a self sustaining, renewable energy ecosystem that doesn't consume arable land or fresh water.
Bilal Bomani currently serves as the lead scientist for NASA's biofuels research program focusing on the next generation of aviation fuel. The intent is to use algae and halophytes with the goal of providing a renewable energy source that does not use freshwater, arable land or compete with food crops.
|
<urn:uuid:eed9527c-7d8a-4dfb-812f-9c0597ec971d>
| 3.296875 | 129 |
Nonfiction Writing
|
Science & Tech.
| 31.717 |
Exploring Nonlinear Mechanical Behaviour of Rocks at LANSCE
SMARTS - Spectrometer for Materials Research at Temperature and Stress
Atomic-scale stress-strain information obtained from the neutron Rietveld data indicates that the strain experienced by the crystalline quartz is ~1/5 of the macroscopic strain (the rest taken up by the grain contacts and bonds in the rock). No hints of nonlinearity whatsoever are evident in the neutron data.
Conclusion? The grain bond system (a small fraction of the total rock) is responsible for all the peculiar quasi-static nonlinearity we see.
Beamlines at the LANSCE (Los Alamos Neutron Science Center)/Lujan Center - LANSCE produces intense sources of pulsed protons and spallation neutrons from a tungsten target. Proton beam currents during all the experiments varied from 100 to 110 µA.
The Neutron Powder Diffractometer has the unique capability of simultaneous high-Q Rietveld and pair-density function analyses, enabling determination of the average and local structures of complex materials with high accuracy. The questions these experiments are designed to answer are (1) can neutrons "see" the grain bond system and if so, (2) can neutrons help to ascertain the role(s) of intergranular bonds vs. the bulk crystalline volume in the nonlinear behaviour of rocks? Results below show evidence of non-crystalline silica in a pure quartz sandstone.
Above - Rietveld analysis shows an excellent match with crystalline quartz; there are no other crystal phases in Fontainebleau sandstone.
Above - A revised model adding ~7% amorphous silica to the crystal model makes a greatly improved fit.
Above - When the PDF data (red crosses) is compared to a perfect quartz model, there is a large discrepancy in the nearest neighbor peaks.
Above - PDF data (red) of amorphous silica shows that only the nearest neighbor peaks are sharp and correspond with those of the crystal (blue).
HIPPO's proximity to the neutron spallation source and its numerous detectors mean it can watch atomic plane structures change in real time. Counting for 1 minute or less is sufficient for a Rietveld analysis of the scattering data. Scattering experiments were performed to observe the crystalline structure of sandstone samples undergoing periodic temperature changes. Modulus (resonance frequency) and temperature were tracked as a function of time. Neutron results (unit cell volume) show none of the peculiar macroscopic nonlinear behavior.
History - Modulus drop observed after a temperature change in either direction for a sample of Berea sandstone
Sandstone sample in holder, thermocouples, and a piezoelectric source and receiver all mounted in an isothermal temperature chamber and mylar thermal radiation shielding.
Corresponding shift of frequency as temperature changed.
Plot of temperature and unit cell volume during the experiment.
Work supported by Office of Basic Energy Sciences, DOE, with Los Alamos National Laboratory Institutional Support.
HTML conversion by Jeff Simpson.
|
<urn:uuid:cc2dd5ff-dead-495c-b3ab-c30e87bf94c3>
| 2.90625 | 652 |
Academic Writing
|
Science & Tech.
| 26.517243 |
tree-equal tree-1 tree-2 &key test test-not => generalized-boolean
Arguments and Values:
test---a designator for a function of two arguments that returns a generalized boolean.
test-not---a designator for a function of two arguments that returns a generalized boolean.
generalized-boolean---a generalized boolean.
tree-equal tests whether two trees are of the same shape and have the same leaves. tree-equal returns true if tree-1 and tree-2 are both atoms and satisfy the test, or if they are both conses and the car of tree-1 is tree-equal to the car of tree-2 and the cdr of tree-1 is tree-equal to the cdr of tree-2. Otherwise, tree-equal returns false.
tree-equal recursively compares conses but not any other objects that have components.
The first argument to the :test or :test-not function is tree-1 or a car or cdr of tree-1; the second argument is tree-2 or a car or cdr of tree-2.
(setq tree1 '(1 (1 2)) tree2 '(1 (1 2))) => (1 (1 2))
(tree-equal tree1 tree2) => true
(eql tree1 tree2) => false
(setq tree1 '('a ('b 'c)) tree2 '('a ('b 'c)))
=> ('a ('b 'c)) ; printed as ((QUOTE A) ((QUOTE B) (QUOTE C)))
(tree-equal tree1 tree2 :test 'eq) => true
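For illustration only (not part of the specification), the recursion can be sketched in Python, with tuples standing in for conses and everything else treated as an atom:

```python
# Sketch of tree-equal's recursion: two conses are compared
# element-wise; two atoms are compared with `test`; a cons never
# equals an atom.
def tree_equal(t1, t2, test=lambda a, b: a == b):
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        return (len(t1) == len(t2)
                and all(tree_equal(a, b, test) for a, b in zip(t1, t2)))
    if isinstance(t1, tuple) or isinstance(t2, tuple):
        return False
    return test(t1, t2)

assert tree_equal((1, (1, 2)), (1, (1, 2)))       # same shape, same leaves
assert not tree_equal((1, (1, 2)), (1, (1, 3)))   # leaf differs
assert not tree_equal((1, 2), (1, (2,)))          # shape differs
```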
Side Effects: None.
Affected By: None.
The consequences are undefined if both tree-1 and tree-2 are circular.
equal, Section 3.6 (Traversal Rules and Side Effects)
The :test-not parameter is deprecated.
|
<urn:uuid:0dff129e-b6b0-4cf2-aef9-9316f348a147>
| 3.234375 | 402 |
Documentation
|
Software Dev.
| 73.98829 |
Texas Dust Storms
The same weather system that brought snow and ice to the American Midwest just after Thanksgiving 2005 also kicked up significant dust in western Texas and eastern Mexico. The winds associated with this cold front also fanned the flames of grass fires in the region, adding smoke to the mixture of aerosols. The most obvious dust cloud is a pale beige dust plume swirling through Texas and Mexico. However, a second, more orange-colored cloud of dust blows across northern Texas. Parts of northern Texas saw wind speeds around 60 miles per hour. Resulting dust storms reduced visibility to just 2.5 miles in some areas, and swamped local fire departments with calls regarding both fires and downed power lines.
Image Credit: NASA/GSFC/MODIS Land Rapid Response Team/Jeff Schmaltz
|
<urn:uuid:2091f67c-e700-47ea-81fd-57c4320514bc>
| 3.421875 | 164 |
Knowledge Article
|
Science & Tech.
| 49.772164 |
Did shrinking guts and high-energy food help us evolve enormous, powerful brains? The latest round in the row over what's known as the "expensive tissue hypothesis" says no. But don't expect that to settle the debate.
The hypothesis has it that in order to grow large brains relative to body size, our ancestors had to free up energy from elsewhere - perhaps by switching to rich foods like nuts and meat, which provide more calories and require less energy to break down, or possibly by learning to cook: cooked food also requires less energy to digest.
Kari Allen and Richard Kay of Duke University in Durham, North Carolina, turned to New World monkeys to explore the hypothesis. Previous studies offer a wealth of data on the monkeys' diets and show that their brain size varies greatly from species to species. But when the pair controlled for similarities between related species, they found no correlation between large brains and small guts (Proceedings of the Royal Society B, DOI: 10.1098/rspb.2011.1311).
As Robin Dunbar at the University of Oxford points out: "It is one thing to say that the hypothesis doesn't apply to New World monkeys, and another to extrapolate that to humans."
|
<urn:uuid:5da2971f-7ad1-46da-8894-812addcdbae4>
| 3.0625 | 338 |
Truncated
|
Science & Tech.
| 51.253 |
One widely used property of waves is the shift in frequency when the source approaches or recedes. If the engine of a train blows its whistle as it passes by, a listener standing near the track cannot help but notice that the tone of the whistle drops as it passes.
Actually, the tone is already raised above its normal note as the engine approaches, and then drops below it as it recedes. This shift in frequency, also noted in electromagnetic waves such as light or radio, is named the Doppler Effect after its discoverer, the Austrian Christian Doppler, born in 1803.
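The size of the shift can be quantified with the standard classical formula for a moving source (not given in this passage): f_obs = f_src · v / (v − v_source), with v_source positive when the source approaches the listener.

```python
# Classical Doppler shift for a moving sound source (standard textbook
# formula; the speed of sound ~343 m/s in air at 20 C is assumed).
V_SOUND = 343.0

def observed(f_src, v_source_toward_listener):
    return f_src * V_SOUND / (V_SOUND - v_source_toward_listener)

whistle = 440.0                              # Hz, arbitrary example pitch
assert observed(whistle, 30.0) > whistle     # approaching: tone raised
assert observed(whistle, -30.0) < whistle    # receding: tone drops
```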
Earlier, a somewhat similar phenomenon was discovered by the Dane Ole Roemer in 1676. The story deserves to be told because it also led to the first determination of the velocity of light.
Those were the times when the sailing ships of seafaring nations – especially, France, Spain, Britain and the Netherlands (Holland) – fought to dominate the oceans and to establish (and protect) trade routes and distant bases. In such a struggle, one technology was crucial: commanders of ships had to somehow know at all times their position in mid-ocean, that is, their latitude and longitude.
Latitude was relatively easy: the elevation of the celestial pole above the horizon (deduced, for instance, from the position of the pole star) gave that. Or else, the elevation of the Sun when it was most distant from the horizon ("solar noon"), i.e. made the greatest angle between it and the horizon, gave the latitude (after being adjusted for the day of the year). The cross staff, or a later more accurate instrument, the marine sextant (or the octant) allowed "shooting the Sun," i.e. finding its elevation above the horizon, and by combining several timed observations, its greatest elevation for that day could be derived.
Longitude was much harder. It required knowledge of the time at Greenwich (longitude zero) at the moment when a cross staff or sextant determined that the Sun was passing local noon. For example, if the Sun passed local noon when it was 1 p.m. at Greenwich, the ship was 15° west of Greenwich, because the Earth turns 360° in 24 hours, or 15° every hour.
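The arithmetic behind this example — 15° of longitude per hour of clock difference — can be sketched in a few lines (a toy calculation, not a navigational tool):

```python
# Longitude from the Greenwich time at which local solar noon occurs.
# The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour.
def longitude_from_noon(greenwich_time_of_local_noon_hours):
    """Degrees west of Greenwich (negative means east)."""
    return (greenwich_time_of_local_noon_hours - 12.0) * 15.0

# The example from the text: local noon occurs at 1 p.m. Greenwich time.
print(longitude_from_noon(13.0))  # 15.0 (degrees west)
```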
To get this information, the captain needed a clock which kept accurate time for many months: it could be set in Greenwich (or set to Greenwich time at a location of known longitude), and used later to give "Greenwich time" of local noon. Such clocks ("chronometers") were in fact developed in the 1700s, but clocks of the 1600s were not accurate enough, especially on a ship that rolled and pitched, and their errors accumulated rapidly.
A less precise clock may be used, if somehow it can be constantly corrected, reset to the correct "Greenwich time" at frequent intervals. In a later era this was done using time signals obtained by radio, but in the 1600s accurately timed celestial phenomena held the greatest promise. One class of such phenomena were the eclipses of the four large moons of Jupiter, discovered by Galileo and easily seen through even a small telescope.
In particular, Io, the innermost moon of Jupiter, seemed suitable: being closest to Jupiter, Kepler's 3rd law assured that it had the fastest motion, making its entry into eclipses and out of them particularly rapid. With an orbital period of 1.77 days, Io also offered the largest number of eclipses, and every one of its orbits crossed Jupiter's shadow. (In the satellite age Io was found to have other unique features, such as sulfur volcanoes.)
Giovanni Domenico Cassini, an Italian astronomer who headed the Paris Observatory, therefore assigned Roemer to make a table of the predicted times of Io eclipses, allowing sailors at sea to set their clocks (within a minute or so, deemed accurate enough). Roemer did so, but soon discovered that the period was not constant. When Earth (which moves faster than Jupiter) was approaching Jupiter, the observed period was shorter, and when it was receding, longer.
He guessed the reason: light did not spread instantly, but (like sound) did so at a certain speed. If Earth and Jupiter maintained a constant distance, the eclipses would have been spaced at regular intervals, equal to the orbital period of Io. When Earth is approaching, however, each eclipse signal has a shorter trip than the one before, so the time between eclipses is shortened compared to what it would be if the distance stayed constant. When Earth is receding, each trip is longer, and the time between eclipses is longer too.
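A quick check of the size of the effect, using modern values (Roemer himself had only a rough figure for the Earth–Sun distance): the maximum accumulated shift in the eclipse timings is the time light needs to cross the full diameter of Earth's orbit.

```python
# Rough size of the Roemer effect: light crossing the diameter of
# Earth's orbit. Modern constants; Roemer's own estimate was cruder.
AU = 1.495978707e11      # metres, mean Earth-Sun distance
c = 2.99792458e8         # metres per second, speed of light

delay_seconds = 2 * AU / c   # light time across the orbit's diameter
print(round(delay_seconds / 60, 1))  # about 16.6 minutes
```

That quarter-hour-scale drift, spread over months of observations, is what made the effect detectable with 17th-century clocks.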
That gave Roemer convincing evidence that light spread in space with a certain velocity--later denoted by the letter c (lower case, not capital). However, he and his contemporaries had only a vague idea how big c was, because the dimensions of the solar system were uncertain. About that same time, the French astronomer Jean Richer used a telescope to estimate the distance of Mars, and gradually, the value of c was obtained with increasing accuracy. Today it is known to an accuracy of 9 decimals, and has therefore been used to define the metre, the unit of length, replacing optical wavelengths or scratches on a metal bar kept in a vault (supposedly derived from the size of our globe).
And the problem of longitude?
It turned out that observing the eclipses of Io from a constantly moving ship, even in a calm sea, was a difficult task. Even a small telescope magnifies all motions tremendously, and early telescopes in particular showed only a small patch of the sky. Also, the method required a sky free of clouds. On the other hand, the method proved very useful for determining the longitude of ports, capes, islands and other features on land.
Consistent determinations of longitude from a moving ship had to wait for sophisticated clocks, using a balance wheel compensated for changes due to variation of temperature. One early model of such a "chronometer" accompanied Captain James Cook on his journey around the world.
Author and Curator: Dr. David P. Stern
Mail to Dr.Stern: stargaze("at" symbol)phy6.org .
Last updated: 9 December 2006
|
<urn:uuid:9268091d-e08f-49a1-9c2a-bfdc6d447834>
| 3.859375 | 1,410 |
Knowledge Article
|
Science & Tech.
| 46.500109 |
Mechanics: Circular Motion and Gravitation
Circular Motion and Gravitation: Audio Guided Solution
A loop de loop track is built for a 938-kg car. It is a completely circular loop - 14.2 m tall at its highest point. The driver successfully completes the loop with an entry speed (at the bottom) of 22.1 m/s.
a. Using energy conservation, determine the speed of the car at the top of the loop.
b. Determine the acceleration of the car at the top of the loop.
c. Determine the normal force acting upon the car at the top of the loop.
Audio Guided Solution
Answers:
a. 14.5 m/s
b. 30. m/s/s
c. 1.9 x 10^4 N
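The three answers can be checked in a few lines (assuming g = 9.8 m/s² and a loop radius of half the 14.2 m height):

```python
import math

# Loop-the-loop check via energy conservation.
g = 9.8          # m/s^2
m = 938.0        # kg, mass of the car
h = 14.2         # m, height of the loop (its diameter)
r = h / 2.0      # m, loop radius
v_bottom = 22.1  # m/s, entry speed at the bottom

# a. Energy conservation: 0.5*v_bottom^2 = 0.5*v_top^2 + g*h
v_top = math.sqrt(v_bottom**2 - 2 * g * h)

# b. Centripetal acceleration at the top of the loop.
a_top = v_top**2 / r

# c. At the top, gravity and the normal force both point toward the
#    centre: F_norm + m*g = m*a_top, so F_norm = m*(a_top - g).
F_norm = m * (a_top - g)

print(round(v_top, 1), round(a_top), round(F_norm))
```

The results (about 14.5 m/s, 30 m/s/s, and 1.9 x 10^4 N) match the listed answers.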
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., m = 61.7 kg, v= 18.5 m/s, R = 30.9 m, Fnorm = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It!
Get more information on the topic of Circular Motion and Gravitation at The Physics Classroom Tutorial.
- Mathematics of Circular Motion
- Newton's Second Law - Revisited
- Situations Involving Energy Conservation
|
<urn:uuid:1d779db4-f950-4006-9a69-5785000fcf08>
| 3.671875 | 380 |
Tutorial
|
Science & Tech.
| 64.747194 |
Should I write: [itex](1-t)[(1-t)(2-t)-2] = -(t-3)(t-1)t[/itex]? This is the characteristic polynomial. Thus, the roots are 3, 1, 0. These are the eigenvalues. If I have equations,
(1-t)x + 2y = 0
1x + (2-t)y = 0
(1-t)z = 0,
and I plug in for t=0,1,3, I find for t=3 that eigenvectors are multiples of (1,1,0). For t=1, eigenvectors are multiples of (0,0,1). For t=0, eigenvectors are multiples of (-2,1,0). The matrix is diagonalizable because T has three linearly indep. eigenvectors.
Because these vectors are linearly independent, and because the number of vectors = dim(R3), these vectors span R3. Thus, R3 is the eigenspace of T (???)
How does that look?
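For what it's worth, the hand computation can be cross-checked numerically. This sketch assumes the matrix implied by the three equations, [[1,2,0],[1,2,0],[0,0,1]] (an assumption on my part), and expects eigenvalues 0, 1, 3:

```python
import numpy as np

# Matrix read off from the coefficients of the three equations above.
A = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(sorted(eigenvalues.real))

# Each eigenvector v should satisfy A v = t v for its eigenvalue t.
for t, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, t * v)
```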
|
<urn:uuid:42c86bf5-494c-4912-bdda-c646595625e6>
| 2.875 | 247 |
Q&A Forum
|
Science & Tech.
| 100.506786 |
Descriptions of the Fields of Science
Chemistry is the science of matter at or near the atomic scale. (Matter is the substance of which all physical objects are made.)
Chemistry deals with the properties of matter, and the transformation and interactions of matter and energy. Central to chemistry is the interaction of one substance with another, such as in a chemical reaction, where a substance or substances are transformed into another. Chemistry primarily studies atoms and collections of atoms such as molecules, crystals or metals that make up ordinary matter. According to modern chemistry it is the structure of matter at the atomic scale that determines the nature of a material.
Chemistry has many specialized areas that overlap with other sciences, such as physics, biology or geology. Scientists who study chemistry are called chemists. Historically, the science of chemistry is a recent development but has its roots in alchemy which has been practiced for millennia throughout the world. The word chemistry is directly derived from the word alchemy.
|
<urn:uuid:65386665-f202-4b45-8afc-b2c62fb38072>
| 3.484375 | 234 |
Knowledge Article
|
Science & Tech.
| 38.210655 |
A group of researchers at DTU Space is developing an observatory to be mounted on the International Space Station. Called ASIM, the observatory will among other things photograph giant lightning discharges above the clouds. The objective is to determine whether giant lightning discharges affect the Earth’s climate.
The question is whether giant lightning discharges, which shoot up from the clouds towards space, are simply a spectacular natural phenomenon, or whether they alter the chemical composition of the atmosphere, affecting the Earth’s climate and the ozone layer.
In recent years, scientists at DTU Space have studied giant lightning using high-altitude mountain cameras. From time to time, the cameras have succeeded in capturing low-altitude lightning flashes which have shot up from a thundercloud. The International Space Station provides a clear view of these giant lightning discharges, and the opportunity to study them will be significantly improved with the introduction of the observatory.
The researchers will also use ASIM to study how natural and man-made events on the ground – such as hurricanes, dust storms, forest fires and volcanic eruptions – influence the atmosphere and climate.
|
<urn:uuid:64609457-8d80-4c2f-9854-ad43579b4866>
| 3.90625 | 231 |
Knowledge Article
|
Science & Tech.
| 24.952379 |
Comparison of water in two adjacent watersheds before and after implementing a brush management strategy in one of the watersheds helps us see what water resource characteristics are sensitive to brush management and how.
Changes in the way communities address potential problems with stormwater runoff may affect surface waters. This study combines geographic with hydrologic analyses to better understand the effects of the management strategies.
Study of the effects of the practice of cycling municipal nutrient-enriched wastewater from holding ponds through forested wetlands. Studies were in the Cypiere Perdue Swamp, Louisiana, and the Drummond Bog, Wisconsin.
Reviews how coal fires occur, how they can be detected by airborne and remote surveys, and, most importantly, the impact coal-fire emissions may have on the environment and human health, especially mercury, carbon dioxide, carbon monoxide, and methane.
The USGS reviews and prepares technical comments on environmental impact statements and establishes policies to implement the National Environmental Policy Act (NEPA). Site has links to environmental laws and regulations including NEPA.
Wetlands and oil wells shouldn't mix, but in some areas they do. This explains what problems may arise and how we study the effects of highly salty water produced by oil wells when it leaks into nearby wetlands and streams.
|
<urn:uuid:bfd6d5a9-0ff7-493f-be13-62a7869b0cf1>
| 2.890625 | 257 |
Content Listing
|
Science & Tech.
| 23.707624 |
During the week of May 13th, the CO2 level at the Mauna Loa Observatory in Hawaii topped 400 ppm repeatedly. Daily levels of CO2 can vary due to weather, and there are seasonal trends as well. The level of atmospheric greenhouse gases continues to increase, now over 120 ppm since the Industrial Revolution began. For more on the Keeling Curve, see http://keelingcurve.ucsd.edu/. Find out more about greenhouse gases and warming.
The week of May 19 brings dozens of tornadoes to Tornado Alley in the states of Oklahoma, Kansas, Iowa, Illinois and Missouri. On May 20th, a massive tornado struck Moore, Oklahoma, devastating communities - destroying over 100 homes and hitting two elementary schools and a hospital - with many casualties and deaths. Our thoughts are with our friends and colleagues suffering from these storms. For more on the May 20th storms, see the NOAA Storm Prediction Center Storm Report.
Did you know that individuals don't evolve, but populations do?
Did you know that the Japanese god Susanowo was the god of the sea and storms, and that he had a terrible temper?
Earth and Space Science Concept of the Day
Do you know what this word or phrase means?
Dip-slip fault: Dip-slip faults are inclined fractures where the blocks have mostly shifted vertically. If the rock mass above an inclined fault moves down, the fault is termed normal, whereas if the rock above the fault moves up, the fault is termed reverse.
Tiny variations in the isotopic composition of silver in meteorites and Earth rocks are helping scientists put together a timetable of how our planet was assembled, beginning 4.568 billion years ago. Results...
|
<urn:uuid:9316d379-1f75-4e3f-992c-57d24c0b89af>
| 3 | 353 |
Content Listing
|
Science & Tech.
| 57.035238 |
Douglass Jacobs, an associate professor of forestry and natural resources, found that American chestnuts grow much faster and larger than other hardwood species, allowing them to sequester more carbon than other trees over the same period. And since American chestnut trees are more often used for high-quality hardwood products such as furniture, they hold the carbon longer than wood used for paper or other low-grade materials.
"Maintaining or increasing forest cover has been identified as an important way to slow climate change," said Jacobs, whose paper was published in the June issue of the journal Forest Ecology and Management. "The American chestnut is an incredibly fast-growing tree. Generally the faster a tree grows, the more carbon it is able to sequester. And when these trees are harvested and processed, the carbon can be stored in the hardwood products for decades, maybe longer."
At the beginning of the last century, the chestnut blight, caused by a fungus, rapidly spread throughout the American chestnut's natural range, which extended from southern New England and New York southwest to Alabama. About 50 years ago, the species was nearly gone.
New efforts to hybridize remaining American chestnuts with blight-resistant Chinese chestnuts have resulted in a species that is about 94 percent American chestnut with the protection found in the Chinese species. Jacobs said those new trees could be ready to plant in the next decade, either in existing forests or former agricultural fields that are being returned to forested land.
"We're really quite close to having a blight-resistant hybrid that can be reintroduced into eastern forests," Jacobs said. "But because American chestnut has been absent from our forests for so long now, we really don't know much about the species at all."…
Douglass Jacobs examines a young hybrid of the American chestnut. He expects the trees could be reintroduced in the next decade. (Purdue University file photo/Nicole Jacobs)
|
<urn:uuid:ade2c38d-a45b-4987-9c84-8d2fc184da53>
| 3.84375 | 396 |
Personal Blog
|
Science & Tech.
| 35.308634 |
#include <iostream>

// Function declarations (prototypes).
int WidthInFeet();
int WidthInInches(int feet);

int main() {
    // Initialize variables by calling functions.
    int feet = WidthInFeet();
    int wd = WidthInInches(feet);
    // Display results.
    std::cout << "Width in inches = " << wd;
}

int WidthInFeet() {
    int feet;
    std::cout << "Enter width in feet: ";
    std::cin >> feet;
    return feet;
}

int WidthInInches(int feet) {
    return feet * 12;
}
I'm a new to C++ and I understand that it reads up to down. However, I don't understand how the last part could return a number and then that number is returned to the out line in the main function. Can someone please explain this?
|
<urn:uuid:a1a9e390-3568-45d2-be7e-695120f07aa9>
| 3.15625 | 150 |
Documentation
|
Software Dev.
| 73.456759 |
Special & General Relativity Questions and Answers
If a photon travels at the speed of light, why isn't its mass infinite?
Because the photon is one of those handful of particles ( photon, graviton, gluon) which has 'zero rest mass'. The special relativistic formula that shows mass increasing with speed only applies to particles with non-zero rest mass such as neutrinos, electrons, quarks and so on.
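The formula alluded to is the standard relativistic mass relation (textbook special relativity, not specific to this answer):

```latex
m(v) = \frac{m_0}{\sqrt{1 - v^2/c^2}}
```

For a particle with nonzero rest mass m_0, the denominator goes to zero as v approaches c, so the mass (and energy) would diverge. A photon, with m_0 = 0, sidesteps the divergence and carries a finite energy E = hf instead.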
All answers are provided by Dr. Sten Odenwald (Raytheon STX) for the NASA Astronomy Cafe, part of the NASA Education and Public Outreach program.
|
<urn:uuid:7b69a7b7-162d-4061-84c4-c0b93d50fd95>
| 3.140625 | 142 |
Q&A Forum
|
Science & Tech.
| 45.252333 |
The theory behind fossil fuels is actually quite simple. Burning coal, natural gas, and petroleum releases energy stored in the fuel as heat. The energy contained by the fuels is derived from the energy of the sun. For more detailed explanations of the origins of the different fossil fuels, visit the coal, natural gas, and petroleum pages.
The heat that is recovered upon combustion of the fuel can be used by us in several ways. Industrial processes that require extremely high temperatures may burn a great deal of very pure coal known as "coke" and use the energy released to directly heat a system. Some people make use of clean burning natural gas to heat their homes. Combustion of fossil fuels can also be used to generate electricity; the fuel is burned to heat water, and the steam from the boiling water spins turbines that power a generator, thereby manufacturing electricity.
Next Page: "Pollution"
|
<urn:uuid:9af8df0e-8b5a-4355-b5f1-b4a57327a9fb>
| 3.984375 | 181 |
Knowledge Article
|
Science & Tech.
| 35.081623 |
Hummingbirds (family Trochilidae) are small birds capable of hovering in mid-air due to the rapid flapping of their wings (15 to 80 beats per second, depending on the size of the bird). They are named for the characteristic hum of this rapid wing motion. They are the only birds that can fly backwards.
Hummingbirds bear the most glittering plumage and some of the most elegant adornments. Male hummingbirds are usually brightly coloured, females duller. The males take no part in nesting. The nest is usually a neat cup in a tree. Two white eggs are laid, which are quite small, but large relative to the bird's size. Incubation is typically 14-19 days.
The names that admiring naturalists have given to hummingbirds suggest exquisite, fairylike grace and gemlike refulgence. Fiery-tailed Awlbill, Ruby-topaz Hummingbird, Glittering-bellied Emerald, Brazilian Ruby, Green-crowned Brilliant--these are some of the names applied to the 233 species of the hummingbirds briefly described in Meyer de Schauensee's scientific Guide to Birds of South America.
Iridescent colors are common among hummingbirds. By changing position, the direction of the reflected light might give the effect of two completely different colors of the same plumage parts.
On the hummingbird's glittering throat or crown, the exposed surfaces of the barbules resemble tiny flat mirrors, which send forth their resplendence in the favored direction. This mechanism plays an important role in social interaction and species recognition.
All the metallic colours of hummingbirds are caused by interference.
Source: Skutch, 1973.
Hummingbirds have the highest metabolism of all animals except insects in flight, a necessity in order to support the rapid beating of their wings. Their heartbeat can reach 500 beats per minute. They also typically consume more than their own weight in food each day, and to do that, they have to visit hundreds of flowers every day. But at any given moment, they're hours away from starving. Fortunately, they are capable of slowing down their metabolism at night, or any other time food is not readily available. They enter a hibernation-like state known as torpor. During torpor, the heart rate and rate of breathing are both slowed dramatically, reducing their need for food.
Studies of hummingbirds' metabolism are highly relevant to the question of whether a migrating ruby-throated hummingbird can cross 500 miles of Gulf of Mexico on a nonstop flight, as field observations suggest it does. The ruby-throated hummingbird, like other birds preparing to migrate, stores up fat to serve as fuel, thereby augmenting its weight by as much as 40 to 50 per cent--this would increase the bird's flying time. (Skutch, 1973)
Hummingbirds of the U.S. and Canada generally migrate to warmer climates, though some remain in the warmest coastal regions. In addition, there is an increasing trend for Rufous Hummingbirds to migrate east to winter in the eastern United States, rather than south to Central America, this trend being the result of increased survival with the provision of artificial feeders in gardens. In the past, individuals that migrated east would usually die, but now they survive, and their tendency to migrate east is inherited by their offspring. Provided sufficient food and shelter is available, they are surprisingly hardy, able to tolerate temperatures down to at least -20°C.
Hummingbirds owe their wide distribution to their great power of flight and wandering habits no less than to their hardiness.
Hummingbirds and People
Hummingbirds will use feeders, particularly red ones. A suitable artificial nectar consists of one part sugar to four parts water. It is easiest to dissolve the sugar in boiling water, then cool it completely before putting it out for the birds. Sweet foods other than white sugar, such as honey, ferment too quickly and can injure the birds. Some commercial hummingbird foods are available, but they contain red dyes which are unnecessary and have been anecdotally reported to poison the birds. They also contain small amounts of nutrients, but hummingbirds apparently get their nutrients from the insects they eat, not from nectar, so the nutrients are also unnecessary. Thus plain white sugar and water make the best nectar.
The feeder should be rinsed and the water changed weekly, or more often in warm weather. At least once a month, or whenever black mold appears, it should be soaked in a solution of chlorine bleach. Hummingbirds tend to avoid feeders that have been cleaned with soap, possibly because they dislike the smell.
Hummingbirds sometimes fly into garages and become trapped. It is widely believed that this is because they mistake the hanging (usually red-colored) door-release handle for a flower, although hummingbirds can also get trapped in enclosures that do not contain anything red. Once inside, they may be unable to escape because their natural instinct when threatened or trapped is to fly upward. This is a life-threatening situation for hummingbirds, as they can become exhausted and die in a relatively short period of time, possibly as little as an hour. If a trapped hummingbird is within reach, it can often be caught gently and released outdoors. It will lie quietly in the space between cupped hands until released.
The Ohlone tells the story of how a Hummingbird brought fire to the world.
Traditionally hummingbirds were placed in the order Apodiformes, which also contains the swifts. In the modern Sibley-Ahlquist taxonomy, hummingbirds are separated as a new hummingbird order Trochiliformes.
There are between 325 and 340 species of hummingbird, depending on taxonomic viewpoint, divided into two subfamilies, the hermits (subfamily Phaethornithinae, 34 species in six genera), and the typical hummingbirds (subfamily Trochilinae, all the others).
Hummingbirds have been thought by evolutionists to have evolved in South America, and the great majority of the species are found there. All the most familiar North American species are thought to be of relatively recent origin, and are therefore (following the usual procedure of lists starting with more 'ancestral' species and ending with the most recent) listed close to the end of the list.
Genetic analysis has indicated that hummingbirds diverged from other birds 30 to 40 million years ago, but fossil evidence has proved elusive. Fossil hummingbirds have been found as old as a million years, but older fossils had not been securely identifiable as hummingbirds. Then, in 2004, Dr. Gerald Mayr of the Senckenberg Natural History Museum in Frankfurt am Main identified two 30-million-year-old German hummingbird fossils and published his results in Nature. The fossils of the extinct hummingbird species, Eurotrochilus inexpectatus ("unexpected European hummingbird"), had been sitting in a museum drawer in Stuttgart. They had been unearthed in a claypit in Frauenweiler, south of Heidelberg.
|
<urn:uuid:10898cf8-af56-42f3-863a-7a402ec5c489>
| 3.96875 | 1,502 |
Knowledge Article
|
Science & Tech.
| 40.922178 |