text
large_string
id
large_string
score
float64
tokens
int64
format
large_string
topic
large_string
fr_ease
float64
18 Feb 2009 Corn starch/water on an audio speaker. There is a very interesting video at this link of oobleck dancing on an audio speaker:

Subject: University of Iowa Hydraulics Center Films on YouTube (six films)

Introduction to the Study of Fluid Motion (1961, 25 minutes) http://youtu.be/EIuU9Q8CGDk
The first in a widely used series of films on fluid mechanics, produced at IIHR under the direction of Hunter Rouse. This introductory program, designed to orient engineering students, shows examples of flow phenomena from a host of everyday experiences. Empirical solutions by means of scale models are illustrated. The significance of the Euler, Froude, Reynolds, and Mach numbers as similitude parameters is illustrated. Dr. Hunter Rouse served as Director of IIHR from 1944 to 1966. During this time, he was instrumental in strengthening IIHR's fundamental research emphasis and in developing teaching programs for hydraulic engineers. Through his writings, research, and global travels, he established IIHR as an internationally acclaimed innovative research and teaching laboratory.

Fundamental Principles of Flow (23 min)
Second in the series, this video departs from the essential generality of the first by explicitly illustrating, through experiment and animation, the basic concepts and physical relationships that are involved in the analysis of fluid motion. The concepts of velocity, acceleration, circulation, and vorticity are introduced, and the use of integral equations of motion is demonstrated by a simple example.

Fluid Motion in a Gravitational Field (24 min) http://youtu.be/-xoyLhiEOus
In this third video of the series, which proceeds from the introductory and the basic material presented in the first two videos, emphasis is laid upon the action of gravity. Principles of wave propagation are illustrated, including aspects of generation, celerity, reflection, stability, and reduction to steadiness by relative motion. Simulation of comparable phenomena in the atmosphere and the ocean is considered.

Characteristics of Laminar and Turbulent Flow (26 min) http://youtu.be/eIHVh3cIujU
The fourth video deals with the effect of viscosity. Dye, smoke, suspended particles, and hydrogen bubbles are used to reveal the flow. Various combinations of Couette and plane Poiseuille flow introduce the principles of lubrication. Axisymmetric Poiseuille flow and development of the flow around an elliptic cylinder are related to variation in the Reynolds number, and the growth of the boundary layer along a flat plate is shown. Instability in boundary layers and pipe flow is shown to lead to turbulence. The eddy viscosity and apparent stress are introduced by hot-wire anemometer indications. The processes of turbulence production, turbulent mixing, and turbulence decay are considered.

Form Drag, Lift, and Propulsion (24 min)
In the fifth video of the series, emphasis is laid upon the role of boundary-layer separation in modifying the flow pattern and producing longitudinal and lateral components of force on a moving body. Various conditions of separation and methods of separation control are first illustrated. Attention is then given to the distribution of pressure around typical body profiles and its relation to the resulting drag. The concept of circulation introduced in the second film is developed to explain the forces on rotating bodies and the forced vibration of cylindrical bodies.
Structural failure of unstable sections is also shown.

Effects of Fluid Compressibility (17 min)
The last in the six-video series makes extensive use of the analogy between gravity waves and sound waves and illustrates, through laboratory demonstrations and animation, the concepts of wave celerity, shock waves and surges, wave reflection, and water hammer. Two-dimensional waves are produced by flow past a point source at various speeds relative to the wave celerity to illustrate the effect of changing Mach number, and the principle is applied to flow at curved and abrupt wall deflections. Axisymmetric and three-dimensional wave patterns are then portrayed using color Schlieren pictures.
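For reference, the four similitude parameters named in the introductory film are conventionally defined (standard textbook definitions, not taken from the film notes) as
$$\mathrm{Re} = \frac{\rho V L}{\mu}, \qquad \mathrm{Fr} = \frac{V}{\sqrt{gL}}, \qquad \mathrm{Ma} = \frac{V}{c}, \qquad \mathrm{Eu} = \frac{\Delta p}{\rho V^{2}},$$
where $\rho$ is the fluid density, $V$ a characteristic velocity, $L$ a characteristic length, $\mu$ the dynamic viscosity, $g$ the gravitational acceleration, $c$ the speed of sound, and $\Delta p$ a characteristic pressure difference. Matching these dimensionless numbers between a scale model and the full-scale flow is what makes the model studies shown in the films transferable to real installations.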
<urn:uuid:58bbc3a2-2cca-4aac-975b-6a2a183c1e45>
2.890625
923
Content Listing
Science & Tech.
25.221004
Famous Women in Astronomy
Part of the Astronomy For Dummies Cheat Sheet
When you're studying astronomy, don't forget the women who made an impact in the field. Check out this list of amazing achievements by women astronomers and astrophysicists:
Caroline Herschel (1750–1848): Discovered eight comets.
Annie Jump Cannon (1863–1941): Devised the basic method for classifying the stars.
Henrietta Swan Leavitt (1868–1921): Discovered the first accurate method for measuring great distances in space.
Sally Ride (1951–2012): A trained astrophysicist, she was the first American woman in space.
Jocelyn Bell Burnell: Discovered pulsars in her work as a graduate student.
E. Margaret Burbidge: Pioneered modern studies of galaxies and quasars.
Wendy Freedman: Leader in measuring the expansion rate of the universe.
Carolyn C. Porco: Leads the Cassini imaging science team in the study of Saturn and its moons and rings.
Nancy G. Roman: As NASA's first chief astronomer, she led the development of telescopes in space.
Vera C. Rubin: Investigated the rotation of galaxies and detected the existence of dark matter.
Carolyn Shoemaker: Discovered many comets, including one that smashed into Jupiter.
Jill Tarter: Leader in the search for extraterrestrial intelligence.
<urn:uuid:c1a9e1d0-9350-465a-a08c-c188f4ed1b70>
3.859375
306
Listicle
Science & Tech.
42.766321
Fire and Invasive Plants -- Combustibility of Native and Invasive Exotic Plants
Alison C. Dibble, U.S. Department of Agriculture, Forest Service, Northern Research Station, 686 Government Rd., Bradley, ME 04411
William A. Patterson III, Department of Forestry and Wildlife Management, University of Massachusetts, Amherst, MA 01003
Robert H. White, U.S. Department of Agriculture, Forest Service, Forest Products Laboratory, Madison, WI 53705-2398
The ease with which a plant fuel catches fire – its combustibility, or flammability – might differ between native plants and the invasive exotic plants that overtake their habitat. By comparing combustibility in these two groups of plants, we are seeking to improve the effectiveness of prescribed fire and the assessment of fire hazards in the Northeastern U.S. Risk of wildfire could be greater in the wildland-urban interface if invasive plants are dense and have higher combustibility than the native species. Conversely, a fire-prone ecosystem invaded by exotics might have less frequent fire return and lower severity, with consequences for fire-dependent species, e.g., the federally endangered Karner Blue butterfly and its host plant, a native lupine of pitch pine forests. With support from the Joint Fire Science Program (JFSP) we, with Mark J. Ducey of the University of New Hampshire, are modifying the Rothermel fuel models to better represent conditions in the Northeast. Heat content is a missing link, especially regarding common shrubs and herbs, and some invasive exotic plants. These combustibility data can be used in BEHAVE Plus, FARSITE, and the Emissions Production Model (EPM) so that the models better represent the local vegetation, and will be added to the Fuel Characteristic Classification System (Cushon et al. 2002), which is a clearinghouse of fuels information. We sampled flammability of plants in a cone calorimeter (ASTM 2002, see Fig. 1) to quantify effective heat of combustion (HOC) as a measure of heat content in dried (60°C), unground leaves and twigs. We compared 14 invasives, 12 of which are exotic, to 13 native species which might be displaced in disturbed habitats. Based on five replicates per species, we found a range from 6-17 MJ/kg, which is overall lower than for green and dry plant fuels from California and Colorado.
|Fig. 1 Cone calorimeter apparatus used to measure heat content in oven-dried, unground leaves and twigs of 27 native and exotic plants that grow in the Northeastern U.S.|
Highest average heat content was in speckled alder. Among shrubs and vines, it was relatively high in highbush blueberry, purple nightshade, common barberry, and Japanese honeysuckle, and lowest in smooth buckthorn and Oriental bittersweet. Among six herbs, rough-stemmed goldenrod had the highest heat content while Japanese stiltgrass and Japanese knotweed were lowest. Quaking aspen had higher heat content than the invasive trees, while Norway maple and apple were lower than the others. Overall, invasive plants tended to have lower heat content than native species (Fig. 2).
|Fig. 2. Notched box plot summarizing effective heat of combustion in six tree species, half of which are invasive in northeastern North America and half native. Because the notched portions of the two boxes do not overlap on the horizontal plane, the groups are significantly different.|
When broken out as a subset, three invasive trees (black locust -- Robinia pseudoacacia, which is native only as far north as Pennsylvania; apple -- Malus sp.; and Norway maple -- Acer platanoides) are significantly LESS flammable than three native trees (Fig. 2). Our sample is small. In January 2003 Dibble, Ducey and White applied to the JFSP to conduct a nation-wide combustibility survey of native and invasive exotic plants.
We conclude that (1) use of fire to control undesirable vegetation can be more effective if a species-by-species approach is taken to meet management objectives in a particular stand; (2) flammability also involves leaf surface-to-volume ratio and moisture content (which is being measured in another study), and these should be quantified to improve modeling of fire behavior; and (3) comparison of combustibility data from other regions will increase our understanding of fuels in the Northeast.
ASTM International. 2002. Standard test method for heat and visible smoke release rates for materials and products using an oxygen consumption calorimeter. Designation E 1354-02. West Conshohocken, PA: ASTM International.
Cushon, G. H., R. D. Ottmar, D. V. Sandberg, J. A. Greenough and J. L. Key. In press. Fuel characteristic classification: characterizing wildland fuelbeds in the United States. In A. Brennan, et al. (eds.) National Congress on Fire Ecology, Prevention and Management Proceedings, No. 1. Tall Timbers Research Station, Tallahassee, FL. http://www.fs.fed.us/pnw/fera/jfsp/fcc/FCCpaper.pdf
Richburg, J. A., A. C. Dibble, and W. A. Patterson III. 2001. Woody invasive species and their role in altering fire regimes of the Northeast and Mid-Atlantic states. Pp. 104-111 in K.E.M. Galley and T. P. Wilson (eds.). Proceedings of the Invasive Species Workshop: the Role of Fire in the Control and Spread of Invasive Species. Fire Conference 2000: the First National Congress on Fire Ecology, Prevention and Management. Misc. Publ. No. 11, Tall Timbers Research Station, Tallahassee, FL.
<urn:uuid:edc43201-bbe0-4879-8f31-cc11f5f812ab>
2.828125
1,256
Academic Writing
Science & Tech.
47.401774
Joined: 16 Mar 2004 |Posted: Thu Aug 06, 2009 11:24 am Post subject: Nanotubes Could Aid Understanding of Retrovirus Transmission |Recent findings by medical researchers indicate that naturally occurring nanotubes may serve as tunnels that protect retroviruses and bacteria in transit from diseased to healthy cells — a fact that may explain why vaccines fare poorly against some invaders. To better study the missions of these intercellular nanotubes, scientists have sought the means to form them quickly and easily in test tubes. Sandia National Laboratories researchers have now learned serendipitously to form nanotubes with surprising ease. “Our work is the first to show that the formation of nanotubes is not complicated, but can be a general effect of protein-membrane interactions alone,” says Darryl Sasaki of Sandia's Bioscience and Energy Center . The tunnel-like structures have been recognized only recently as tiny but important bodily channels for the good, the bad, and the informational. In addition to providing protected transport to certain diseases, the nanotubes also seem to help trundle bacteria to their doom in the tentacles of microphages. Lastly, the nanotubes may provide avenues to send and receive information (in the form of chemical molecules) from cell to cell far faster than their random dispersal into the bloodstream would permit. Given the discovery of this radically different transportation system operating within human tissues, it was natural for researchers to attempt to duplicate the formation of the nanotubes. In their labs, they experimented with giant lipid vesicles that appeared to mimic key aspects of the cellular membrane . Giant lipid vesicles resemble micron-sized spherical soap bubbles that exist in water. They are composed of a lipid bilayer membrane only five nanometers thick. The object for experimenters was to create conditions in which the spheres would morph into cylinders of nanometer radii. But researchers had difficulties, says Sasaki, perhaps because they used a composite lipid called egg PC that requires unnecessarily high energies to bend into a tubular shape. Egg PC is inexpensive, readily available, and offers good, stable membrane properties. It is the usual lipid of choice in forming nanocylinders via mechanical stretching techniques. But Sandia postdoctoral researcher Haiqing Lui instead used POPC — a single pure lipid requiring half the bending energy of egg PC. She was trying to generate nanotubes by a completely different approach that involved the use of motor proteins to stretch naturally occurring membranes into tubes. Working with Sandia researcher George Bachand, she serendipitously found that interaction of the POPC membrane with a high affinity protein called streptavidin alone was enough to form the nanotubes. “Perhaps this information — linking membrane bending energy with nanotube formation — may provide some clue about the membrane structure and the cell's ability to form such intercellular connections,” Sasaki says. The formation was confirmed by Sandia researcher Carl Hayden, who characterized the nanotube formation through a confocal imaging microscope. The custom instrument allows pixel-by-pixel examination of the protein interaction with the membranes comprising the nanotubes by detecting the spectrum and lifetimes of fluorescent labels on the proteins. 
Nanotube formation had been noticed previously by cell biologists, but they had dismissed the tiny outgrowths as “junk — an aberration of cells growing in culture,” says Sasaki. “The reason they were only noticed recently as trafficking routes is because of labeling studies that marked organelles and proteins. This allowed a focused look at what these nanostructures might be used for.” It became clear, says Sasaki, that the organelles were being transported with “specific directionality” on the backs of motor proteins within the tubes, rather than randomly. Three-dimensional networks of nanotubes are also found to be created by macrophages — part of the police force of the body — grown in culture, says George. The tubes in appearance and function resemble a kind of spider web, capturing bacteria and transporting them to the macrophages, which eat them. Other paper authors include postdoc Hahkjoon Kim and summer intern Elsa Abate. The lipid work is supported by Sandia's Laboratory Directed Research and Development office. Motor protein work is supported by DOE's Office of Basic Energy Sciences. Results were published in the American Chemical Society's journal Langmuir in mid-March. Source: Sandia.gov /...
<urn:uuid:8be1c3e2-ea54-459f-83ce-a0fd4be35a9f>
3.03125
950
Comment Section
Science & Tech.
23.619571
Fri Mar 21 18:02:28 GMT 2008 by Gordon
Why is it that gamma rays cannot penetrate the earth's atmosphere when they will happily travel through thick lead?
Fri Mar 21 18:59:29 GMT 2008 by Radek
Air in the atmosphere corresponds to ... one meter of lead!
Fri Mar 21 19:24:17 GMT 2008 by Tony Byron
"The atmosphere shields us from cosmic rays about as effectively as a 13-foot layer of concrete,..."
"The Earth's atmosphere would soak up most of the gamma rays, Melott says, but their energy would rip apart nitrogen and oxygen molecules, creating a witch's brew of nitrogen oxides, especially the toxic brown gas nitrogen dioxide that colours photochemical smog (see graphic)."
(long URL - click here)
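A rough check of the figures quoted in these comments, using standard reference values for the densities (these numbers are not from the article): the mass of atmosphere above each square centimetre of ground at sea level is about
$$\frac{P}{g} = \frac{101{,}325\ \mathrm{Pa}}{9.8\ \mathrm{m\,s^{-2}}} \approx 1.03 \times 10^{4}\ \mathrm{kg\,m^{-2}} \approx 1030\ \mathrm{g\,cm^{-2}}.$$
Dividing by the density of lead (about 11.3 g/cm³) gives 1030/11.3 ≈ 91 cm, roughly the "one meter of lead" Radek mentions; dividing by the density of concrete (about 2.4 g/cm³) gives about 4.3 m, close to the 13-foot figure in the quote.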
<urn:uuid:ff6d54ec-e3b9-412d-9909-96c7ca2e037a>
3.234375
218
Comment Section
Science & Tech.
71.288992
Falling Objects and Bouncing
Location: Outside U.S.
Date: April 2008
I am looking to compare different masses, objects, and shapes, and compare these to the dents made in a specific plate (e.g., polystyrene). For this experiment I would need not only to be able to work out the velocity of the object but also how much air resistance is affecting the object, plus the amount of air jammed between the object and the plate. If possible, I would also be looking for a way to measure how much, for instance, an object would bounce back, or how much weight and height will get me the best results, but I also need to find a formula to see whether any of my results make any sense. I was thinking of dropping an object with dimensions of around 5 x 5 cm (base) by 5-10 cm (height), depending on the object, or for a sphere a 3 cm radius, from around 3 meters, with an object mass of around 250 g to 1 kg. I do not know if pressure, humidity or temperature matters.

It sounds like you're asking for someone to 1. validate your methodology, and 2. suggest any other factors you need to consider. Is that right? (If not, reply and let me know what else.) First, the methodology. It sounds like you have an ambitious approach, but I think some organization up front will really help you get good value from your efforts. There is a method known as 'design of experiments' that might help. I am going to walk you through some basic steps, and hopefully it will help.
Step one is to have a hypothesis. What are you trying to prove? It sounds like you are testing something about elastic collisions (such as the relationship between objects and the mark they leave on a plate), but I am not clear what.
Step two is to organize and categorize your variables. You have two kinds of variables: independent and dependent. An independent variable is something you can set yourself (such as how high to drop the object, which object with which properties, etc.). A dependent variable, often called a response variable, is one that is determined by independent variables. The mark left on the plate or the height the object bounces might be response variables. A third type of "variable" is a factor that you do not intentionally change (I put "variable" in quotes because sometimes they change and sometimes they do not). There are lots of these factors, some of which you can control and some you cannot. You might always choose to use the same target plate -- that is a factor that you hold constant. You might work outside, and have to deal with wind or temperature changes -- these affect your results, but you cannot control them. It is a good idea to record variables and factors that affect your results -- they may be helpful later in interpreting your results.
Step three is to revisit your hypothesis -- restate your idea in terms of the variables that you can measure. Saying "I want to see what happens..." is not as powerful as saying something like "A change of independent variable A will lead to a change in dependent variable B in this way C."
Step four is to set up your equipment to actually test your hypothesis. Keep it simple -- pick materials and equipment that fit what you are trying to test. Remove things that will introduce uncontrollable variables. The more variables you try to change, the harder the experiment will be to run and the harder the results will be to analyze. Sometimes you have to have a lot of variables, but it is often a good idea to start simple first, and then work your way up to more complicated experiments.
I strongly recommend you read about 'design of experiments' to help you understand the approach I am suggesting here. The Internet has a ton of information, as would a library.
Now for your specific situation. It sounds like you are trying to do experiments involving colliding objects. Have you studied 'kinetic energy' in physics yet? I would start there. You can get all the equations you need. I would specifically study elastic and inelastic collisions. Usually collisions are not purely one or the other. With a rubber ball, the ball deforms as it strikes a hard object. Some energy is dissipated in the deformation, and some is returned elastically. If you are hitting an expanded polystyrene ('Styrofoam') target, the energy of the falling object will be partially or mostly absorbed by the Styrofoam. For your objects and distances (~1 kg, ~10 m), I think you can safely neglect air/wind effects. If you consider objects of different shapes, you now have a very difficult-to-control factor, as the orientation of the object affects how it bounces (I would avoid this variable, to be honest -- stick with spheres). As for weights and masses, it probably does not matter that much unless you use very light, low-density objects (they will be affected by air). Ball bearings, rocks, and other similar 'heavy' objects should all work well.
Hope this helps.

That is a massively difficult and computationally intensive endeavor you want to undertake! I am afraid that the best answer I can give is to say that without a supercomputer running extremely complex Finite Element Analysis software, and a lot of very expensive computer time, there is no way to do what you are suggesting. It is amazing how complex it is to accurately describe something as seemingly simple as dropping a block through air! Further, before you can even think of attempting to see how far an object would bounce off your polystyrene plate, you would need to mathematically characterize the detailed physical characteristics of both the plate and the falling object. So, I am sorry, but to do what you want to do is simply impossible with the resources available to someone like you or even me.

You have a pretty complicated project. For a "dropping" distance of ~3 meters, air pressure, humidity and temperature will probably be negligible. For a sphere, Stokes' Law says that the viscous drag force is F = 6 x pi x a x nu x v (valid for Reynolds numbers << 1). For heavy objects of radius 'a', the drag from falling through air with viscosity 'nu' at velocity 'v' will not be significant, I don't think. How bodies of different shape fall is a complicated problem because they tend to tumble, so you should probably stick to spheres. Relating the indentation of the base to the mechanical parameters may be very tricky too. Not all polystyrene, for example, has the same elasticity, which determines how much of the energy of the falling object is absorbed compared to how much is retained by the falling object.
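As a concrete illustration of the kind of estimate the responders recommend (a minimal worked example under the stated assumption that air resistance is negligible, which the respondents argue holds here):
$$v = \sqrt{2gh} = \sqrt{2 \times 9.81\ \mathrm{m\,s^{-2}} \times 3\ \mathrm{m}} \approx 7.7\ \mathrm{m\,s^{-1}}, \qquad E_k = mgh \approx 1\ \mathrm{kg} \times 9.81\ \mathrm{m\,s^{-2}} \times 3\ \mathrm{m} \approx 29\ \mathrm{J}$$
for a 1 kg sphere dropped from 3 m. Comparing dent depth or rebound height across objects of known $E_k$ is one simple way to quantify how much energy the plate absorbs: a rebound to height $h'$ means roughly a fraction $h'/h$ of the drop energy was returned elastically.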
<urn:uuid:e4b16c9b-077a-4540-930d-1c571c35557a>
2.984375
1,485
Q&A Forum
Science & Tech.
52.5096
Time and Frequency from A to Z: D to Do
Date
A number or series of numbers used to identify a given day with the least possible ambiguity. The date is usually expressed as the month, day of month, and year. However, integer numbers such as the Julian Date are also used to express the date.
Daylight Saving Time
The part of the year when clocks are advanced by one hour, effectively moving an hour of daylight from the morning to the evening. In 2007, the rules for Daylight Saving Time (DST) changed for the first time since 1986. The new rules were enacted by the Energy Policy Act of 2005, which extended the length of DST by about one month in the interest of reducing energy consumption. DST will now be in effect for 238 days, or about 65% of the year, although Congress retained the right to revert to the prior law should the change prove unpopular or if energy savings are not significant. Under the current rules, DST in the U.S. begins at 2:00 a.m. on the second Sunday of March and ends at 2:00 a.m. on the first Sunday of November. Daylight Saving Time is not observed in Hawaii, American Samoa, Guam, Puerto Rico, the Virgin Islands, and the state of Arizona (not including the Navajo Indian Reservation, which does observe it).
Dead Time
The time that elapses between the end of one measurement and the start of the next measurement. This time interval is generally called dead time only if information is lost. For example, when making measurements with a time interval counter, the minimum amount of dead time is the elapsed time from when a stop pulse is received to the arrival of the next start pulse. If a counter is fast enough to measure every pulse (if it can sample at a rate of 1 kHz, for instance, and the input signals are at 100 Hz), we can say there is no dead time between measurements.
Disciplined Oscillator (DO)
An oscillator whose output frequency is continuously steered (often through the use of a phase-locked loop) to agree with an external reference. For example, a GPS disciplined oscillator (GPSDO) usually consists of a quartz or rubidium oscillator whose output frequency is continuously steered to agree with signals broadcast by the GPS satellites.
Doppler Shift
The apparent change of frequency caused by the motion of the frequency source (transmitter) relative to the destination (receiver). If the distance between the transmitter and receiver is increasing, the frequency apparently decreases. If the distance between the transmitter and receiver is decreasing, the frequency apparently increases. To illustrate this, listen to the sound of a train whistle as a train comes closer to you (the pitch gets higher), or as it moves further away (the pitch gets lower). As you do so, keep in mind that the frequency of the sound produced at the source has not changed.
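As an illustration of the Daylight Saving Time rule quoted above ("second Sunday of March ... first Sunday of November"), here is a small self-contained sketch that computes those dates for a given year. It is not part of the glossary, and the helper names are my own.

```cpp
#include <iostream>

// Day of the week for a Gregorian date, 0 = Sunday (Sakamoto's method).
int dayOfWeek(int y, int m, int d) {
    static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
    if (m < 3) y -= 1;
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}

// Day of month of the nth Sunday (n = 1, 2, ...) in the given month.
int nthSunday(int year, int month, int n) {
    int firstDow = dayOfWeek(year, month, 1);      // weekday of the 1st
    int firstSunday = 1 + (7 - firstDow) % 7;      // date of the first Sunday
    return firstSunday + 7 * (n - 1);
}

int main() {
    int year = 2013;  // example year (my choice, not from the glossary)
    std::cout << "DST begins (US rule): March "
              << nthSunday(year, 3, 2) << ", " << year << " at 2:00 a.m.\n";
    std::cout << "DST ends   (US rule): November "
              << nthSunday(year, 11, 1) << ", " << year << " at 2:00 a.m.\n";
}
```

For 2013 this prints March 10 and November 3, which match the rule as stated.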
<urn:uuid:8885f164-65e1-4fdd-beef-17d693860475>
3.484375
589
Knowledge Article
Science & Tech.
51.032085
Pascal is an influential imperative and procedural programming language, intended to encourage good programming practices using so called structured programming and data structuring. : What is the reason for this problem? If I leave pascal doing anything : in loop, After ~30 secs loop is runiing slower than before, but loop returns at normal speed when i move mouse or press a... : Hi There : I'm using Turbo Pascal for Windows 1.5, and using WinCrt in my program. : 1. How can I use color text or color background? I try to use : Textcolor(1); Textbackground(4);, but it... : when using this code: : procedure TForm1.ApplicationEvents1Message(var Msg: tagMSG; : var Handled: Boolean); : if (Msg.message = wm_KeyUp) or (Msg.message=wm_KeyDown) then There are a lot of applications where speed is a critical factor -- such as real-time programs. MS Windows and Unix are not real-time operating systems, so speed is not all that... Is there any way in Pascal (or asm) to shut down the computer (you know, as in Windows - you press shut down and its turning itself off power). I'd be grateful.
<urn:uuid:12c439b2-de89-49db-af74-dbe35321f621>
2.8125
280
Comment Section
Software Dev.
66.903623
Antimatter came about as a solution to the fact that the equation describing a free particle in motion (the relativistic relation between energy, momentum and mass) has not only positive energy solutions, but negative ones as well! If this were true, nothing would stop a particle from falling down to infinite negative energy states, emitting an infinite amount of energy in the process--something which does not happen. In 1928, Paul Dirac postulated the existence of positively charged electrons. The result was an equation describing both matter and antimatter in terms of quantum fields. This work was a truly historic triumph, because it was experimentally confirmed and it inaugurated a new way of thinking about particles and fields. In 1932, Carl Anderson discovered the positron while measuring cosmic rays in a Wilson chamber experiment. In 1955 at the Berkeley Bevatron, Emilio Segre, Owen Chamberlain, Clyde Wiegand and Thomas Ypsilantis discovered the antiproton. And in 1995 at CERN, scientists synthesized anti-hydrogen atoms for the first time. When a particle and its anti-particle collide, they annihilate into energy, which is carried by "force messenger" particles that can subsequently decay into other particles. For example, when a proton and anti-proton annihilate at high energies, a top-anti-top quark pair can be created! An intriguing puzzle arises when we consider that the laws of physics treat matter and antimatter almost symmetrically. Why then don't we have encounters with anti-people made of anti-atoms? Why is it that the stars, dust and everything else we observe is made of matter? If the cosmos began with equal amounts of matter and antimatter, where is the antimatter? Experimentally, the absence of annihilation radiation from the Virgo cluster shows that little antimatter can be found within ~20 Megaparsecs (Mpc), the typical size of galactic clusters. Even so, a rich program of searches for antimatter in cosmic radiation exists. Among others, results form the High-Energy Antimatter Telescope, a balloon cosmic ray experiment, as well as those from 100 hours worth of data from the Alpha Magnetic Spectrometer aboard NASA's Space Shuttle, support the matter dominance in our Universe. Results from NASA's orbiting Compton Gamma Ray Observatory , however, are uncovering what might be clouds and fountains of antimatter in the Galactic Center. We stated that there is an approximate symmetry between matter and antimatter. The small asymmetry is thought to be at least partly responsible for the fact that matter outlives antimatter in our universe. Recently both the NA48 experiment at CERN and the KTeV experiment at Fermilab have directly measured this asymmetry with enough precision to establish it. And a number of experiments, including the BaBar experiment at the Stanford Linear Accelerator Center and Belle at KEK in Japan, will confront the same question in different particle systems. Antimatter at lower energies is used in Positron Emission Tomography (see this PET image of the brain). But antimatter has captured public interest mainly as fuel for the fictional starship Enterprise on Star Trek. In fact, NASA is paying attention to antimatter as a possible fuel for interstellar propulsion. At Penn State University, the Antimatter Space Propulsion group is addressing the challenge of using antimatter annihilation as source of energy for propulsion. See you on Mars? Answer originally posted October 18, 1999
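The "relativistic relation between energy, momentum and mass" referred to in the opening paragraph is the standard energy-momentum relation
$$E^{2} = p^{2}c^{2} + m^{2}c^{4} \quad\Longrightarrow\quad E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}},$$
so for every positive-energy solution there is formally a negative-energy one. Dirac's reinterpretation of the negative-energy solutions is what led to the prediction of the positively charged electron mentioned above.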
<urn:uuid:0ee63bec-49ee-4e62-a8f7-6b79d5db4f5b>
3.6875
717
Knowledge Article
Science & Tech.
26.957256
If you read this blog regularly, you know I have a fondness for the so-called “missing eruptions” — that is, volcanic events found in ice core or sediment records but not yet identified in the geologic/volcanic record. The most glaring right now is the eruption of 1258 A.D., supposedly 1.8 times as large as the 1815 eruption of Tambora, but no candidate volcano has been conclusively identified as the source. Another enigmatic climate event that has a little more potential to be matched with a volcano happened during the mid-1450s, a period that saw cold winters in China, dry fogs in Constantinople and stunted tree ring growth around the world. It also saw one of the biggest cases of sulfur loading in the atmosphere in the last few thousand years, rivaling that of the famous 1783 Laki eruption in Iceland. All these climatic effects have been attributed to an eruption in the New Hebrides arc, specifically the Kuwae caldera in Vanuatu. However, the relationship between this eruption and the climate signatures — and the existence of the eruption itself — is still hotly debated.
<urn:uuid:eecc926a-6ecd-4a92-bf87-2115959015a7>
3.140625
239
Personal Blog
Science & Tech.
41.674462
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
October 24, 1998
Explanation: Sunrise seen from low Earth orbit by the shuttle astronauts can be very dramatic indeed (and the authors apologize to Hemingway for using his title!). In this breathtaking view, the Sun is just visible peeking over towering anvil-shaped storm clouds whose silhouetted tops mark the upper boundary of the troposphere, the lowest layer of planet Earth's atmosphere. Sunlight filtering through suspended dust causes this dense layer of air to appear red. In contrast, the blue stripe marks the stratosphere, the tenuous upper atmosphere, which preferentially scatters blue light.
Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC &: Michigan Tech. U.
<urn:uuid:c9075ef6-33ed-4b24-8c3e-133f541a5196>
3.171875
192
Knowledge Article
Science & Tech.
43.921905
I believe that, from the perspective of the C++ Standard, there is no difference between #include "xyz.h" and #include "xyz.cpp" if they both contain the same thing. In practice, an IDE might create the makefile (or other build script) such that "xyz.cpp" is compiled even when it should not be, possibly leading to redefinition errors. SQLite's amalgamation is an example of an optimisation where the final source code is generated from the various header files and source files such that it becomes one big source file. This might be a better way than trying to develop by including (non-header) source files all over the place. But it can.Quote: It would be truly amazing of the Microsoft compiler if it can do that. I cannot say for sure, but usually when compiling a Release, it is a pretty long process and many files are typically re-compiled, and during a Debug build, you do not use optimizations.Quote: But is that to say every time a cpp file is changed, the whole project needs to be recompiled? Since that's the only way cross-file inlining can be done? But I think the optimization is done at the linker stage, so perhaps only the linking stage needs to be redone. For one thing, it is considered bad practice to include source files. Not that it is really such a bad thing if used like this, but anyway.Quote: If that is the case... then what's the difference between that and including cpp files? (and keeping dummy header files for human reference, or include all headers before all cpp's?) Secondly, the entire code base is completely re-compiled everytime, even if nothing has changed in those source files. Thirdly, I guess there will be complications, such as global variables with internal linkage, and such. Probably much more. Not sure what you are hinting at?Quote: I thought one of the main advantages of using headers is that the project can be incrementally compiled. I think we should not call it "linking" when we're talking of "inlining from all of the source code", because what really happens is that the compiler is doing the work in two or three steps. The first step involves reading and "understanding" the source code. The second step involves generating the actual binary code. In the case of "whole program optimization", you'd only spend a little bit of time parsing the code and making some intermediate form that can be used for producing the final binary. But certainly, some of the steps in the actual code generation step will involve quite a bit of "hard work" for the processor, compared to just linking together already compiled object files. But for a total build from scratch, I'd expect that it's not much difference. And as Elysia says, most development is done in debug builds, where very little time is spent on optimization.
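A minimal sketch of the redefinition problem discussed above. The file names follow the hypothetical xyz.h / xyz.cpp already used in the thread; the example is mine, not taken from any poster, and the three "files" are shown together as one compilable unit with comments marking the boundaries.

```cpp
// xyz.h -- declaration only; safe to include from many translation units.
#ifndef XYZ_H
#define XYZ_H
int answer();          // declaration
#endif

// xyz.cpp -- definition; normally compiled exactly once by the build system.
int answer() { return 42; }

// main.cpp
// #include "xyz.h"    // fine: only a declaration is pulled in
// #include "xyz.cpp"  // also preprocesses identically, but if the makefile
//                     // additionally compiles xyz.cpp as its own translation
//                     // unit, the linker sees two definitions of answer()
//                     // and fails with a multiple-definition error.
int main() { return answer(); }
```

As the thread notes, the preprocessor treats the two #include directives identically; the trouble comes from the build system compiling the .cpp file a second time.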
<urn:uuid:2f3a7e6a-1591-4c19-b893-45b791782dd2>
2.75
619
Comment Section
Software Dev.
53.381491
Name, Symbol, Number: krypton, Kr, 36
Chemical series: noble gases
Group, Period, Block: 18, 4, p
Appearance: colorless
Atomic mass: 83.798(2) g/mol
Electron configuration: [Ar] 3d10 4s2 4p6
Electrons per shell: 2, 8, 18, 8
Density: (0 °C, 101.325 kPa)
Melting point: 115.79 K (-157.36 °C, -251.25 °F)
Boiling point: 119.93 K (-153.22 °C, -243.8 °F)
Critical point: 209.41 K, 5.50 MPa
Heat of fusion: 1.64 kJ·mol−1
Heat of vaporization: 9.08 kJ·mol−1
Heat capacity: (25 °C) 20.786 J·mol−1·K−1
Crystal structure: cubic face centered
Electronegativity: 3.00 (Pauling scale)
Ionization energies: 1st: 1350.8 kJ·mol−1; 2nd: 2350.4 kJ·mol−1; 3rd: 3565 kJ·mol−1
Atomic radius (calc.): 88 pm
Covalent radius: 110 pm
Van der Waals radius: 202 pm
Thermal conductivity: (300 K) 9.43 mW·m−1·K−1
Speed of sound: (gas, 23 °C) 220 m/s; (liquid) 1120 m/s
CAS registry number: 7439-90-9

Krypton (IPA: /ˈkrɪptən/ or /ˈkrɪptan/) is a chemical element with the symbol Kr and atomic number 36. A colorless, odorless, tasteless noble gas, krypton occurs in trace amounts in the atmosphere, is isolated by fractionating liquefied air, and is often used with other rare gases in fluorescent lamps. Krypton is inert for most practical purposes, but it is known to form compounds with fluorine. Krypton can also form clathrates with water when atoms of it are trapped in a lattice of the water molecules.

Notable characteristics
Krypton, a noble gas due to its very low chemical reactivity, is characterized by a brilliant green and orange spectral signature. It is one of the products of uranium fission. Solidified krypton is white and crystalline with a face-centered cubic crystal structure, which is a common property of all "rare gases". In 1960 an international agreement defined the metre in terms of light emitted from a krypton isotope. This agreement replaced the longstanding standard metre located in Paris, which was a metal bar made of a platinum-iridium alloy (the bar was originally estimated to be one ten-millionth of a quadrant of the earth's polar circumference). But only 23 years later, the krypton-based standard was itself replaced by the speed of light—the most reliable constant in the universe. In October 1983 the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) defined the metre as the distance that light travels in a vacuum during 1/299,792,458 s.
Like the other noble gases, krypton is widely considered to be chemically inert. Following the first successful synthesis of xenon compounds in 1962, synthesis of krypton difluoride was reported in 1963. Other fluorides and a salt of a krypton oxoacid have also been found. ArKr+ and KrH+ molecule-ions have been investigated, and there is evidence for KrXe or KrXe+.
There are 32 known isotopes of krypton. Naturally occurring krypton is made of five stable and one slightly radioactive isotope. Krypton's spectral signature is easily produced with some very sharp lines. 81Kr is the product of atmospheric reactions with the other naturally occurring isotopes of krypton. It is radioactive with a half-life of 250,000 years. Like xenon, krypton is highly volatile when it is near surface waters, and 81Kr has therefore been used for dating old (50,000 - 800,000 year) groundwater. 85Kr is an inert radioactive noble gas with a half-life of 10.76 years that is produced by fission of uranium and plutonium.
Sources have included nuclear bomb testing, nuclear reactors, and the release of 85Kr during the reprocessing of fuel rods from nuclear reactors. A strong gradient exists between the northern and southern hemispheres, where concentrations at the North Pole are approximately 30% higher than at the South Pole, because most 85Kr is produced in the northern hemisphere and north-south atmospheric mixing is relatively slow.
Krypton fluoride laser
- For more details on this topic, see Krypton fluoride laser.
The compound will decompose once the energy supply stops. During the decomposition process, the excess energy stored in the excited-state complex will be emitted in the form of strong ultraviolet laser radiation.
- Los Alamos National Laboratory - Krypton
- USGS Periodic Table - Krypton
- "Chemical Elements: From Carbon to Krypton" By: David Newton & Lawrence W. Baker
- "Krypton 85: a Review of the Literature and an Analysis of Radiation Hazards" By: William P. Kirk
This page uses content from Wikipedia. The original article was at Krypton. The list of authors can be seen in the page history. As with Chemistry, the text of Wikipedia is available under the GNU Free Documentation License.
<urn:uuid:dd2b7efe-0dbd-42a5-b50d-781bb9434163>
3.28125
1,260
Knowledge Article
Science & Tech.
60.270636
Economic growth in China has led to significant increases in fossil fuel consumption © stock.xchng (frédéric dupont, patator) Per capita CO2 emissions in China reach EU levels Global emissions of carbon dioxide (CO2) – the main cause of global warming – increased by 3% last year. In China, the world’s most populous country, average emissions of CO2 increased by 9% to 7.2 tonnes per capita, bringing China within the range of 6 to 19 tonnes per capita emissions of the major industrialised countries. In the European Union, CO2 emissions dropped by 3% to 7.5 tonnes per capita. The United States remain one of the largest emitters of CO2, with 17.3 tonnes per capita, despite a decline due to the recession in 2008-2009, high oil prices and an increased share of natural gas. According to the annual report ‘Trends in global CO2 emissions’, released today by the JRC and the Netherlands Environmental Assessment Agency (PBL), the top emitters contributing to the global 34 billion tonnes of CO2 in 2011 are: China (29%), the United States (16%), the European Union (11%), India (6%), the Russian Federation (5%) and Japan (4%). With 3%, the 2011 increase in global CO2 emissions is above the past decade's average annual increase of 2.7%. An estimated cumulative global total of 420 billion tonnes of CO2 has been emitted between 2000 and 2011 due to human activities, including deforestation. Scientific literature suggests that limiting the rise in average global temperature to 2°C above pre-industrial levels – the target internationally adopted in UN climate negotiations – is possible only if cumulative CO2 emissions in the period 2000–2050 do not exceed 1 000 to 1 500 billion tonnes. If the current global trend of increasing CO2 emissions continues, cumulative emissions will surpass this limit within the next two decades
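A quick consistency check of the per-capita figure for China (the population value of roughly 1.35 billion in 2011 is an assumption on my part, not taken from the report):
$$0.29 \times 34\ \mathrm{Gt\,CO_2} \approx 9.9\ \mathrm{Gt\,CO_2}, \qquad \frac{9.9 \times 10^{9}\ \mathrm{t}}{1.35 \times 10^{9}\ \text{people}} \approx 7.3\ \mathrm{t\ per\ person},$$
in line with the 7.2 tonnes per capita quoted above.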
<urn:uuid:5dbd7929-f5e4-4e00-8ee1-ac82d4729d56>
3.03125
398
Knowledge Article
Science & Tech.
42.668752
Plants have evolved a number of cold-response genes encoding proteins that induce tolerance to freezing, alter water absorption and initiate many other low temperature induced processes. In the 1 April Genes and Development, Jian-Kang Zhu and colleagues of the Department of Plant Sciences, University of Arizona, shed light on how these genes are regulated. Lee et al. report that the protein HOS1 negatively regulates cold-response genes in Arabidopsis. At low temperatures, HOS1 relocalizes from the cytoplasm to the nucleus where it regulates gene expression; hos1 mutants show an excessive induction of cold-response genes. The HOS1 gene was mapped to chromosome II of Arabidopsis and cloned. It encodes a protein of 915 amino acids with a nuclear localization signal and a RING finger. Proteins with this motif have been implicated in the breakdown of other proteins by a process that involves ubiquitination. Lee et al. speculate that HOS1 might regulate the function of cold-response genes by targeting the gene products for degradation. Lee H, Xiong L, Gong Z, Ishitani M, Stevenson B, Zhu JK: The Arabidopsis HOS1 gene negatively regulates cold signal transduction and encodes a RING finger protein that displays cold-regulated nucleo-cytoplasmic partitioning. Genes Dev 2001, 15. Department of Plant Sciences, University of Arizona
<urn:uuid:3649223e-8026-4378-8234-d8647036ba6d>
2.984375
297
Academic Writing
Science & Tech.
27.405926
- If the Earth rotated in the opposite sense (clockwise rather than counterclockwise), how long would the solar day be? - Suppose that the Earth’s pole was perpendicular to its orbit. How would the azimuth of sunrise vary throughout the year? How would the length of day and night vary throughout the year at the equator? at the North and South Poles? where you live? - You are an astronaut on the moon. You look up, and see the Earth in its full phase and on the meridian. What lunar phase do people on Earth observe? What if you saw a first quarter Earth? new Earth? third quarter Earth? Draw a picture showing the geometry. - If a planet always keeps the same side towards the Sun, how many sidereal days are in a year on that planet? - If on a given day, the night is 24 hours long at the North Pole, how long is the night at the South Pole? - On what day of the year are the nights longest at the equator? - From the fact that the Moon takes 29.5 days to complete a full cycle of phases, show that it rises an average of 48 minutes later each night. - What is the ratio of the flux hitting the Moon during the first quarter phase to the flux hitting the Moon near the full phase? - Titan and the Moon have similar escape velocities. Why does Titan have an atmosphere, but the Moon does not? Friday, October 30, 2009 Astronomers have confirmed that an exploding star spotted by Nasa's Swift satellite is the most distant cosmic object to be detected by telescopes. In the journal Nature, two teams of astronomers report their observations of a gamma-ray burst from a star that died 13.1 billion light-years away. The massive star died about 630 million years after the Big Bang. UK astronomer Nial Tanvir described the observation as "a step back in cosmic time". Professor Tanvir led an international team studying the afterglow of the explosion, using the United Kingdom Infrared Telescope (UKIRT) in Hawaii. Swift detects around 100 gamma ray bursts every year He told BBC News that his team was able to observe the afterglow for 10 days, while the gamma ray burst itself lasted around 12 seconds. The event, dubbed GRB 090423, is an example of one of the most violent explosions in the Universe. It is thought to have been associated with the cataclysmic death of a massive star - triggered by the centre of the star collapsing to form a "stellar-sized" black hole. "Swift detects something like 100 gamma ray bursts per year," said Professor Tanvir. "And we follow up on lots of them in the hope that eventually we will get one like this one - something really very distant." Another team, led by Italian astronomer Ruben Salvaterra studied the afterglow independently with the National Galileo Telescope in La Palma. Little red dot He told BBC News: "This kind of observation is quite difficult, so having two groups have the same result with two different instruments makes this much more robust." "It is not surprising - we expected to see an event this distant eventually," said Professor Salvaterra. "But to be there when it happens is quite amazing - definitely something to tell the grandchildren." A GAMMA-RAY BURST RECIPE Models assume GRBs arise when giant stars burn out and collapse During collapse, super-fast jets of matter burst out from the stars Collisions occur with gas already shed by the dying behemoths The interaction generates the energetic signals detected by Swift Remnants of the huge stars end their days as black holes The astronomers were able to calculate the vast distance using a phenomenon known as "red shift". 
Most of the light from the explosion was absorbed by intergalactic hydrogen gas. As that light travelled towards Earth, the expansion of the Universe "stretches" its wavelength, causing it to become redder. "The greater that amount of movement [or stretching], the greater the distance." he said. The image of this gamma ray burst was produced by combining several infrared images. "So in this case, it's the redness of the dot that indicates that it is very distant," Professor Tanvir explained. Before this record-breaking event, the furthest object observed from Earth was a gamma ray burst 12.9 billion light-years away. "This is quite a big step back to the era when the first stars formed in the Universe," said Professor Tanvir. "Not too long ago we had no idea where the first galaxies came from, so astronomers think this is a profound moment. "This is... the last blank bit of the map of the Universe - the time between the Big Bang and the formation of these early galaxies." Data from two powerful telescopes confirmed the result And this is not the end of the story. Bing Zhang, an astronomer from the University of Nevada, who was not involved in this study, wrote an article in Nature, explaining its significance. The discovery, he said, opened up the exciting possibility of studying the "dark ages" of the Universe with gamma ray bursts. And Professor Tanvir is already planning follow-up studies "looking for the galaxy this exploding star occurred in." Next year, he and his team will be using the Hubble Space Telescope to try to locate that distant, very early galaxy. Source: BBC News
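As a supplement to the "red shift" explanation above: the stretching is usually quantified by the redshift $z$, defined by
$$1 + z = \frac{\lambda_{\text{observed}}}{\lambda_{\text{emitted}}},$$
so light whose wavelength arrives stretched by a factor of about nine corresponds to $z \approx 8$. GRB 090423 was measured at roughly $z \approx 8.2$ (a value widely reported for the burst, though not stated in the article), which in standard cosmology corresponds to the light-travel distance of about 13.1 billion light-years quoted above.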
<urn:uuid:dbe88d6f-99d3-40e4-ae1c-e659a8cace09>
3.625
1,143
Content Listing
Science & Tech.
56.8545
Search Loci: Convergence: The Reader may here observe the Force of Numbers, which can be successfully applied, even to those things, which one would imagine are subject to no Rules. There are very few things which we know, which are not capable of being reduc'd to a Mathematical Reasoning; and when they cannot it's a sign our knowledge of them is very small and confus'd; and when a Mathematical Reasoning can be had it's as great a folly to make use of any other, as to grope for a thing in the dark, when you have a Candle standing by you. Of the Laws of Chance (1692) Georg Cantor at the Dawn of Point-Set Topology A first course in point-set topology can be challenging for the student because of the abstract level of the material. In an attempt to mitigate this problem, we use the history of point-set topology to obtain natural motivation for the study of some key concepts. In this article, we study an 1872 paper by Georg Cantor. We will look at the problem Cantor was attempting to solve and see how the now familiar concepts of a point-set and derived set are natural answers to his question. We emphasize ways to utilize Cantor's methods in order to introduce point-set topology to students. In his introduction to his book Introduction to Phenomenology , Msgr. Robert Sokolowski writes As a philosopher, Msgr. Sokolowski is accustomed to the traditional methods of teaching philosophy to undergraduates – start with Plato, Aristotle and the other ancients, continue with developments through the Scholastic and Enlightenment eras, and then show how modern philosophy builds upon all that has gone before. He must be puzzled, then, by the lack of attention to the historical development of ideas that generally attends to the teaching of mathematics. He perceives that something important is missing, and he is correct. In recent years, interest has grown considerably in developing an historical approach to the teaching of mathematics. Victor Katz has edited an anthology of articles giving different perspectives on the development of mathematics in general from an historical point of view . Some authors, such as Klyve, Stemkoski, and Tou, focus on one particular historical figure – in their case, Euler – important to the development of mathematics . There is also interest in the historical development of certain areas of mathematics commonly included in the undergraduate curriculum. Brian Hopkins has written a textbook introducing discrete mathematics from an historical point of view ; David Bressoud has written two textbooks that present analysis from an historical perspective (, ); and Adam Parker has compiled an original sources bibliography for ordinary differential equations instructors that contains many of the original papers in ODEs. This is the first paper in a planned series that will outline ways to introduce point-set topology concepts motivated by their place in history. To borrow a phrase from David Bressoud, it is an "attempt to let history inform pedagogy" [2, p. vii]. A growing collection of the historic papers that are important to the development of point-set topology may be found on the author's web site. This paper focuses on the seminal work of Georg Cantor (1845-1918), a German mathematician well-known for his contributions to the foundations of set theory, but whose contributions to point-set topology are not very well known. Cantor’s works are collected in . For complete biographical information, see Dauben’s definitive work . 
Table Of Contents Scoville, Nicholas, "Georg Cantor at the Dawn of Point-Set Topology," Loci (March 2012), DOI: 10.4169/loci003861
<urn:uuid:1133c1bd-455a-4f42-be03-ecffa85e1482>
2.75
763
Academic Writing
Science & Tech.
34.359673
Will The Earth Stop Rotating?
Date: 1999 - 2000
Will the earth stop rotating?

Yes, but not for a long long long time. (If I remember correctly, it is currently slowing down by about half a second per century.) As the earth rotates it gets stretched and squeezed by tidal forces. The energy required to do this work comes from the earth's rotation.

The simple answer to this is no. It is believed that the Earth's day will be twice as long as it is now in about 5 thousand million years' time, but there is too much momentum in the Earth to stop it from rotating. By the way, at the moment the Earth is rotating its fastest since the late 1920s, having lost approximately 0.63 milliseconds per day in the last 12 months (to June 28, 2001) against atomic time, based on preliminary International Earth Rotation Service data; compared with 3.13 milliseconds per day in 1972, and 3.89 milliseconds per day in 1912. The Earth GAINED on atomic time in 1929 by 0.35 ms/day.

Because of tidal friction... yes it will. In fact, it is slowing as we ride on it now. Actually, it will not stop, but rather the period of rotation will equal its period of revolution. I do not have the number at hand, but I seem to recall that each (solar) year is 0.00024 seconds slower than the year one century earlier. The number may not be correct, but the concept is. In the same way that the moon rotates around the earth, the earth will eventually rotate around the sun... if the sun does not supernova first!

There is a small tidal drag on the earth caused by the gravitational forces of the moon and sun which have a small effect on the earth's rotation, but the effect, while measurable, is exceedingly small. On the other hand, the reason the moon always presents the same face to the earth, it is believed, was caused by tidal drag of the earth on the moon, which is much greater because the mass of the moon is so much smaller than that of the earth.
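To put the millisecond-per-day figures in the second answer in perspective: an excess length of day of 2 ms/day means the Earth falls behind a uniform atomic timescale by about
$$2\ \mathrm{ms/day} \times 365\ \mathrm{days} \approx 0.7\ \mathrm{s\ per\ year};$$
it is this accumulated difference, not the change in the day itself, that leap seconds compensate for, which is why they were inserted almost every year in the early 1970s (excess near 3 ms/day) but only rarely around 2001 (excess below 1 ms/day).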
<urn:uuid:04155edf-d0d0-4ea6-b914-a10ba3c95a22>
3.15625
476
Knowledge Article
Science & Tech.
74.65
Using OpenMP - The Book and Examples
Use this forum to discuss the book: Using OpenMP - Portable Shared Memory Parallel Programming by Barbara Chapman, Gabriele Jost and Ruud van der Pas, http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11387
The sources are available as a free download under the BSD license. Each source comes with a copy of the license. Please do not remove this. You are encouraged to try out these examples and perhaps use them as a starting point to better understand and perhaps further explore OpenMP. Each source file constitutes a full working program. Other than a compiler and run-time environment to support OpenMP, nothing else is needed. With the exception of one example, there are no source code comments. Not only are these examples very straightforward, they are also discussed in the above-mentioned book. As a courtesy, each source directory has a make file called "Makefile". This file can be used to build and run the examples in the specific directory. Before you do so, you need to activate the appropriate include line in the file Makefile. There are include files for several compilers and Unix-based operating systems (Linux, Solaris and Mac OS, to be precise). These files have been put together on a best-effort basis. The User's Guide that is bundled with the examples explains this in more detail. Please post your feedback about the book and/or these examples to this forum.
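For readers who just want to confirm that their compiler and runtime support OpenMP before building the book's examples, a generic test program (not one of the downloadable examples) looks like this:

```cpp
#include <cstdio>
#ifdef _OPENMP
#include <omp.h>
#endif

int main() {
    // Each thread in the parallel region prints its own ID.
    #pragma omp parallel
    {
#ifdef _OPENMP
        std::printf("Hello from thread %d of %d\n",
                    omp_get_thread_num(), omp_get_num_threads());
#else
        std::printf("OpenMP not enabled; running serially\n");
#endif
    }
    return 0;
}
```

Compile with the OpenMP flag for your compiler (for example, g++ -fopenmp hello.cpp); the bundled Makefile include files exist precisely because that flag differs from compiler to compiler.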
<urn:uuid:cdc62d61-188b-4a3a-9317-ec4f7188eca6>
2.890625
312
Comment Section
Software Dev.
45.194425
An Analysis of the Classic Arctic Outbreak Event of Late December 2008-Early January 2009 By Christian M. Cassell The 2008-2009 winter was characterized by colder than normal temperatures and above normal snowfall for each month from October through March. While there was no one significant snow event that overshadowed any other this past winter, a bitterly cold Arctic outbreak that persisted for more than two weeks brought the coldest temperatures in a decade to the Anchorage area, and grabbed headlines around the world for extreme cold in interior parts of the state. This analysis will show how the outbreak developed and how it was able to persist for a prolonged period of time. 1. Summary of temperatures and records from the outbreak The following chart is a breakdown of temperatures and extremes at Anchorage during the two-week Arctic outbreak. *-Indicates a record low value for that particular date. **-Indicates tying or setting of the lowest temperature of this decade (2000-2009). Though it is arbitrary as to when the outbreak began and ended based on the numbers, the temperature at Anchorage dropped below zero degrees during the evening hours of December 29th, and remained below zero until January 8th except for a one-hour period during the afternoon of January 5th when the temperature managed to make it to 0.4 degrees briefly during the mid-afternoon hours. This represented the longest streak of sub-zero days since 30 January – 5 February 1999. Additionally, the eleven-day streak (29 Dec – 8 Jan) with the minimum temperature falling to -10 degrees or lower at the official reporting station at the National Weather Service office on Sand Lake Road was the longest such streak since 17-29 December 1961. Therefore, while there were no record low minimum temperature values set at the official temperature station in Anchorage, the duration of the cold in terms of minimum temperatures at or below -10 degrees was the longest such stretch in 47 years.
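The streak statistics quoted above (longest run of sub-zero days, longest run of daily minima at or below -10 degrees) are simple to compute from a daily record; a minimal Python sketch with made-up daily minimum temperatures, not the actual Anchorage observations:

# Longest consecutive run of days meeting a threshold condition,
# e.g. daily minimum temperature at or below -10 degrees F.
daily_min_f = [5, -2, -11, -15, -12, -18, -10, -3, -14, 1, -9]   # illustrative values only

def longest_streak(temps, threshold=-10):
    best = current = 0
    for t in temps:
        current = current + 1 if t <= threshold else 0
        best = max(best, current)
    return best

print(longest_streak(daily_min_f))              # 5 for the illustrative series above
print(longest_streak(daily_min_f, threshold=0)) # sub-zero streak, same idea with a 0-degree threshold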
<urn:uuid:9aae393d-86f4-4331-b269-7344c5e77b24>
2.78125
406
Truncated
Science & Tech.
38.90308
Science subject and location tags Articles, documents and multimedia from ABC Science Monday, 18 February 2013 Heavy metal music fans in a mosh pit act like atoms in a gas - a finding that could advance emergency evacuation design and planning. Wednesday, 21 September 2011 An Australian seismologist says this week's trial of Italian scientists for failing to warn of a devastating earthquake could muzzle experts from sharing their knowledge in the future. Friday, 27 August 2010 Australia's leading body responsible for monitoring space weather has dismissed claims that a massive solar storm could "wipe out the Earth's entire power grid". Monday, 12 July 2010 Australian researchers develop software to let mobile phones communicate with each other where there is no reception. Thursday, 18 February 2010 Society needs to learn from resilient ecosystems if it is to better cope with unanticipated shocks in the future, say experts. Tuesday, 10 February 2009 An Australian fire-behaviour specialist who helped authorities track the infernos, says the golden rule of surviving a bushfire - evacuate early or fight to the bitter end - still stands, despite the weekend's high death toll. Monday, 9 February 2009 Australians remain unprepared to deal with bushfires despite a long history of loss and devastation from natural disasters, according to some of the country's leading bushfire researchers. Monday, 10 March 2008 We can expect an average three catastrophic, magnitude 9 or greater earthquakes around the world each century, according to a new study.
<urn:uuid:3ab4852f-3f65-46af-a429-f89b21ced198>
2.703125
305
Content Listing
Science & Tech.
29.245053
stress, in physical sciences and engineering, force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation and that permits an accurate description and prediction of elastic, plastic, and fluid behaviour. A stress is expressed as a quotient of a force divided by an area. There are many kinds of stress. Normal stress arises from forces that are perpendicular to a cross-sectional area of the material, whereas shear stress arises from forces that are parallel to, and lie in, the plane of the cross-sectional area. If a bar having a cross-sectional area of 4 square inches (26 square cm) is pulled lengthwise by a force of 40,000 pounds (180,000 newtons) at each end, the normal stress within the bar is equal to 40,000 pounds divided by 4 square inches, or 10,000 pounds per square inch (psi; 7,000 newtons per square cm). This specific normal stress that results from tension is called tensile stress. If the two forces are reversed, so as to compress the bar along its length, the normal stress is called compressive stress. If the forces are everywhere perpendicular to all surfaces of a material, as in the case of an object immersed in a fluid that may be compressed itself, the normal stress is called hydrostatic pressure, or simply pressure. The stress beneath the Earth’s surface that compresses rock bodies to great densities is called lithostatic pressure. Shear stress in solids results from actions such as twisting a metal bar about a longitudinal axis as in tightening a screw. Shear stress in fluids results from actions such as the flow of liquids and gases through pipes, the sliding of a metal surface over a liquid lubricant, and the passage of an airplane through air. Shear stresses, however small, applied to true fluids produce continuous deformation or flow as layers of the fluid move over each other at different velocities like individual cards in a deck of cards that is spread. For shear stress, see also shear modulus. Reaction to stresses within elastic solids causes them to return to their original shape when the applied forces are removed. Yield stress, marking the transition from elastic to plastic behaviour, is the minimum stress at which a solid will undergo permanent deformation or plastic flow without a significant increase in the load or external force. The Earth shows an elastic response to the stresses caused by earthquakes in the way it propagates seismic waves, whereas it undergoes plastic deformation beneath the surface under great lithostatic pressure.
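The article's worked tensile-stress example can be checked directly; a minimal Python sketch (the unit-conversion factors are standard values, not taken from the article):

# Normal (tensile) stress = force / cross-sectional area, as in the bar example above.
force_lb = 40_000          # pull at each end, pounds-force
area_in2 = 4               # cross-sectional area, square inches

stress_psi = force_lb / area_in2
print(stress_psi)          # 10000.0 psi

# The same calculation in SI units (1 lbf ~ 4.448 N, 1 in = 0.0254 m)
force_N = force_lb * 4.448
area_m2 = area_in2 * 0.0254**2
stress_Pa = force_N / area_m2
print(round(stress_Pa / 1e6, 1))   # ~68.9 MPa, i.e. roughly 7,000 newtons per square cm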
<urn:uuid:79e0dfc6-44f0-433d-bdf7-1f27c991027e>
4.21875
547
Knowledge Article
Science & Tech.
46.057482
evidence suggests that life originated in extreme environments, for example, at high temperatures. The National Science Foundation (NSF) has initiated a program called Life in the Extreme Environment (LExEn) that is dedicated to finding new and exciting organisms that live in harsh environments. The Extreme 2000 research expedition, at hydrothermal vent sites in the Sea of Cortés, is led by marine scientists George Luther and Craig Cary from the University of Delaware and Anna-Louise Reysenbach from Portland State University. Their chief objective is to make real-time chemical measurements at the vents using microsensors developed by Dr. Luther's group, which will guide the microbiologists and molecular biologists in Dr. Cary's and Dr. Reysenbach's groups in finding organisms that are descendants of early life. Chemical Detective Work at the Bottom of the Sea: Are hydrothermal vents home to the closest relatives of the oldest life on Earth? Using special tools housed in a wand on the sub Alvin, researchers will be testing the chemistry of vent water in search of microscopic organisms. The wand houses a thermometer, an apparatus called the Sipper to collect small water samples, and a super-sensitive analyzer. The analyzer is like a sophisticated underwater snooper. It can be used near the vents and, from its chemical readings, tell scientists what kind of microbes might live there. While our food chain is based on energy from the sun, the sun's rays never reach the deep sea. There, organisms must rely on a different energy source: the chemicals that rocket out of the vents. During a previous expedition, the Extreme 2000 scientific team found that the presence of two compounds, hydrogen sulfide (H2S) and iron monosulfide (FeS), may be an important indicator of the oldest microscopic vent life. These compounds react to form the mineral pyrite (fool's gold) and hydrogen gas. The hydrogen provides the energy that these microbes need to grow. With the analyzer's help, marine scientists may be able to track down the nearest descendants of the first life on Earth, and perhaps on other planets. Europa, one of the moons of Jupiter, is covered in ice. However, recent findings suggest that portions of the ice move, which is strong evidence that liquid water lies beneath the ice. The water may be maintained in its liquid state by hydrothermal vents. If hydrothermal vents exist on Europa, there's a possibility that ancient microbes could live
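The pyrite-forming reaction that the passage describes in words can be written as a balanced equation (a standard reconstruction in LaTeX, not quoted from the article):

\mathrm{FeS} + \mathrm{H_2S} \longrightarrow \mathrm{FeS_2} + \mathrm{H_2}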
<urn:uuid:34eb878c-35eb-443f-b0b8-8f0942552023>
3.984375
546
Knowledge Article
Science & Tech.
35.707536
We consider a simple pure substance under hydrostatic conditions described by a fundamental equation in the entropy representation, where the extensive variables U, V and N are the internal energy, the volume, and the number of particles respectively, and the intensive variables T, p and μ are the temperature, the pressure and the chemical potential respectively. This corresponds to the choice of the variables U, V and N as independent variables of the entropy S(U,V,N). These variables are precisely those which are fixed and determine the macrostate of the members of the Microcanonical Ensemble, and consequently S is the relevant potential in this statistical ensemble. It is useful to define dimensionless entropic intensive variables so that the fundamental equation can be written in dimensionless form. In general, for other thermodynamic systems with more degrees of freedom, one will have an analogous expression in which each extensive variable is paired with its entropic conjugate variable. Massieu-Planck functions are entropic thermodynamic potentials defined as Legendre transformations of the entropy. In the case of a pure substance, the following (dimensionless) potentials can be formally defined: the first was introduced by Massieu and is called Massieu's potential; the second was introduced by Planck and is called Planck's potential. Given the extensivity of S, and using Euler's theorem for homogeneous functions, it is easy to see that the entropy satisfies an Euler relation in the extensive variables and their entropic conjugates. Substituting this relation into the differentials of the potentials defined above, one obtains expressions that allow a re-derivation of all the standard thermodynamic equations in terms of the dimensionless entropic variables. For instance, Maxwell relations can be deduced by imposing that these differentials are exact (equality of crossed derivatives). Moreover, one of the resulting relations is the Gibbs-Duhem equation, which states that the complete set of intensive variables of the system are not all independent. On the other hand, the extremal condition of the entropy leads us to deduce that the intensive variables are homogeneous (uniform) throughout the system at equilibrium.
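The explicit formulas referred to in this passage were lost in extraction; what follows is a standard reconstruction for a simple pure substance, written in LaTeX. The dimensionless variables β, γ, α and the symbols Φ (Massieu) and Ξ (Planck) are introduced here for convenience, and the factor of k_B is an assumption made purely to render everything dimensionless.

% Fundamental equation in the entropy representation (differential form):
dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV - \frac{\mu}{T}\,dN

% Dimensionless entropic intensive variables:
\beta \equiv \frac{1}{k_B T}, \qquad \gamma \equiv \frac{p}{k_B T}, \qquad \alpha \equiv -\frac{\mu}{k_B T}

% Massieu and Planck potentials as Legendre transforms of S/k_B:
\Phi(\beta, V, N) = \frac{S}{k_B} - \beta U, \qquad
\Xi(\beta, \gamma, N) = \frac{S}{k_B} - \beta U - \gamma V

% Euler relation and the resulting Gibbs-Duhem equation:
\frac{S}{k_B} = \beta U + \gamma V + \alpha N, \qquad
U\,d\beta + V\,d\gamma + N\,d\alpha = 0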
<urn:uuid:f64a9fe0-fb27-407b-891a-d7ab31581e3a>
2.78125
451
Academic Writing
Science & Tech.
23.064551
Electricity and magnetism The dot product Introduction to the vector dot product. The dot product ⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles. - Let's learn a little bit about the dot product. - The dot product, frankly, out of the two ways of multiplying - vectors, I think is the easier one. - So what does the dot product do? - Why don't I give you the definition, and then I'll give - you an intuition. - So if I have two vectors; vector a dot vector b-- that's - how I draw my arrows. - I can draw my arrows like that. - That is equal to the magnitude of vector a times the - magnitude of vector b times cosine of the - angle between them. - Now where does this come from? - This might seem a little arbitrary, but I think with a - visual explanation, it will make a little bit more sense. - So let me draw, arbitrarily, these two vectors. - So that is my vector a-- nice big and fat vector. - It's good for showing the point. - And let me draw vector b like that. - Vector b. - And then let me draw the cosine, or let me, at least, - draw the angle between them. - This is theta. - So there's two ways of view this. - Let me label them. - This is vector a. - I'm trying to be color consistent. - This is vector b. - So there's two ways of viewing this product. - You could view it as vector a-- because multiplication is - associative, you could switch the order. - So this could also be written as, the magnitude of vector a - times cosine of theta, times-- and I'll do it in color - appropriate-- vector b. - And this times, this is the dot product. - I almost don't have to write it. - This is just regular multiplication, because these - are all scalar quantities. - When you see the dot between vectors, you're talking about - the vector dot product. - So if we were to just rearrange this expression this - way, what does it mean? - What is a cosine of theta? - Let me ask you a question. - If I were to drop a right angle, right here, - perpendicular to b-- so let's just drop a right angle - there-- cosine of theta soh-coh-toa so, cah cosine-- - is equal to adjacent of a hypotenuse, right? - Well, what's the adjacent? - It's equal to this. - And the hypotenuse is equal to the magnitude of a, right? - Let me re-write that. - So cosine of theta-- and this applies to the a vector. - Cosine of theta of this angle is equal to ajacent, which - is-- I don't know what you could call this-- let's call - this the projection of a onto b. - It's like if you were to shine a light perpendicular to b-- - if there was a light source here and the light was - straight down, it would be the shadow of a onto b. - Or you could almost think of it as the part of a that goes - in the same direction of b. - So this projection, they call it-- at least the way I get - the intuition of what a projection is, I kind of view - it as a shadow. - If you had a light source that came up perpendicular, what - would be the shadow of that vector on to this one? - So if you think about it, this shadow right here-- you could - call that, the projection of a onto b. - Or, I don't know. - Let's just call it, a sub b. - And it's the magnitude of it, right? - It's how much of vector a goes on vector b over-- that's the - adjacent side-- over the hypotenuse. - The hypotenuse is just the magnitude of vector a. - It's just our basic calculus. - Or another way you could view it, just multiply both sides - by the magnitude of vector a. 
- You get the projection of a onto b, which is just a fancy - way of saying, this side; the part of a that goes in the - same direction as b-- is another way to say it-- is - equal to just multiplying both sides times the magnitude of a - is equal to the magnitude of a, cosine of theta. - Which is exactly what we have up here. - And the definition of the dot product. - So another way of visualizing the dot product is, you could - replace this term with the magnitude of the projection of - a onto b-- which is just this-- times the - magnitude of b. - That's interesting. - All the dot product of two vectors is-- let's just take - one vector. - Let's figure out how much of that vector-- what component - of it's magnitude-- goes in the same direction as the - other vector, and let's just multiply them. - And where is that useful? - Well, think about it. - What about work? - When we learned work in physics? - Work is force times distance. - But it's not just the total force - times the total distance. - It's the force going in the same - direction as the distance. - You should review the physics playlist if you're watching - this within the calculus playlist. Let's say I have a - 10 newton object. - It's sitting on ice, so there's no friction. - We don't want to worry about fiction right now. - And let's say I pull on it. - Let's say my force vector-- This is my force vector. - Let's say my force vector is 100 newtons. - I'm making the numbers up. - 100 newtons. - And Let's say I slide it to the right, so my distance - vector is 10 meters parallel to the ground. - And the angle between them is equal to 60 degrees, which is - the same thing is pi over 3. - We'll stick to degrees. - It's a little bit more intuitive. - It's 60 degrees. - This distance right here is 10 meters. - So my question is, by pulling on this rope, or whatever, at - the 60 degree angle, with a force of 100 newtons, and - pulling this block to the right for 10 meters, how much - work am I doing? - Well, work is force times the distance, but not just the - total force. - The magnitude of the force in the direction of the distance. - So what's the magnitude of the force in the - direction of the distance? - It would be the horizontal component of this force - vector, right? - So it would be 100 newtons times the - cosine of 60 degrees. - It will tell you how much of that 100 - newtons goes to the right. - Or another way you could view it if this - is the force vector. - And this down here is the distance vector. - You could say that the total work you performed is equal to - the force vector dot the distance vector, using the dot - product-- taking the dot product, to the force and the - distance factor. - And we know that the definition is the magnitude of - the force vector, which is 100 newtons, times the magnitude - of the distance vector, which is 10 meters, times the cosine - of the angle between them. - Cosine of the angle is 60 degrees. - So that's equal to 1,000 newton meters - times cosine of 60. - Cosine of 60 is what? - It's square root of 3 over 2. - Square root of 3 over 2, if I remember correctly. - So times the square root of 3 over 2. - So the 2 becomes 500. - So it becomes 500 square roots of 3 joules, whatever that is. - I don't know 700 something, I'm guessing. - Maybe it's 800 something. - I'm not quite sure. - But the important thing to realize is that the dot - product is useful. - It applies to work. - It actually calculates what component of what vector goes - in the other direction. 
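The worked example in this part of the transcript can be checked numerically; note that cos 60° is 1/2 (the value √3/2 quoted in passing is cos 30°), so the work comes out to exactly 500 J. A minimal Python sketch:

import math

# Work = |F| * |d| * cos(theta): the component of the force along the
# displacement, times the displacement.
force_N = 100.0        # magnitude of the pulling force
distance_m = 10.0      # magnitude of the displacement
theta_deg = 60.0       # angle between force and displacement

work_J = force_N * distance_m * math.cos(math.radians(theta_deg))
print(round(work_J, 6))            # 500.0 J, since cos(60 degrees) = 0.5

# The same number from the dot product of explicit components:
fx = force_N * math.cos(math.radians(theta_deg))
fy = force_N * math.sin(math.radians(theta_deg))
dx, dy = distance_m, 0.0
print(round(fx * dx + fy * dy, 6)) # 500.0 J again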
- Now you could interpret it the other way. - You could say this is the magnitude of a - times b cosine of theta. - And that's completely valid. - And what's b cosine of theta? - Well, if you took b cosine of theta, and you could work this - out as an exercise for yourself, that's the amount of - the magnitude of the b vector that's - going in the a direction. - So it doesn't matter what order you go. - So when you take the cross product, it matters whether - you do a cross b, or b cross a. - But when you're doing the dot product, it doesn't matter - what order. - So b cosine theta would be the magnitude of vector b that - goes in the direction of a. - So if you were to draw a perpendicular line here, b - cosine theta would be this vector. - That would be b cosine theta. - The magnitude of b cosine theta. - So you could say how much of vector b goes in the same - direction as a? - Then multiply the two magnitudes. - Or you could say how much of vector a goes in the same - direction is vector b? - And then multiply the two magnitudes. - And now, this is, I think, a good time to just make sure - you understand the difference between the dot product and - the cross product. - The dot product ends up with just a number. - You multiply two vectors and all you have is a number. - You end up with just a scalar quantity. - And why is that interesting? - Well, it tells you how much do these-- you could almost say-- - these vectors reinforce each other. - Because you're taking the parts of their magnitudes that - go in the same direction and multiplying them. - The cross product is actually almost the opposite. - You're taking their orthogonal components, right? - The difference was, this was a a sine of theta. - I don't want to mess you up this picture too much. - But you should review the cross product videos. - And I'll do another video where I actually compare and - contrast them. - But the cross product is, you're saying, let's multiply - the magnitudes of the vectors that are perpendicular to each - other, that aren't going in the same direction, that are - actually orthogonal to each other. - And then, you have to pick a direction since you're not - saying, well, the same direction that - they're both going in. - So you're picking the direction that's orthogonal to - both vectors. - And then, that's why the orientation matters and you - have to take the right hand rule, because there's actually - two vectors that are perpendicular to any other two - vectors in three dimensions. - Anyway, I'm all out of time. - I'll continue this, hopefully not too confusing, discussion - in the next video. - I'll compare and contrast the cross - product and the dot product. - See you in the next video.
<urn:uuid:897a62d5-1a0a-42ef-a9ef-517a73b9b936>
4.53125
2,885
Truncated
Science & Tech.
74.68687
Zoologger is our weekly column highlighting extraordinary animals – and occasionally other organisms – from around the world Step from a sunlit hillside into the darkness of a cave, and you immediately have a problem: you can't see. It's best to stand still for a few minutes until your eyes adjust to the dimness, otherwise you might blunder into a hibernating bear that doesn't appreciate your presence. The same thing will happen when you leave again: the brightness of the sun will dazzle you at first. That's because your eyes have two types of receptor: one set works in bright light and the other in dim light. Barring a few minutes around sunset, only one set of receptors is ever working at any given time. Peters' elephantnose fish has no such limitations. Its peculiar eyes allow it to use the two types of receptor at the same time. That could help it to spot predators as they approach through the murky water it calls home. Peters' elephantnose fish belongs to a large family called the elephantfish, all of which live in Africa. They get their name from the trunk-like protrusions on the front of their heads. But whereas the trunks of elephants are extensions of their noses, the trunks of elephantfish are extensions of their mouths. To find a Peters' elephantnose fish, you must lurk in muddy, slow-moving water. Look closely, because the fish is brown and so is the background. It finds its way through the murk using its trunk, which generates a weak electrical field that helps it sense its surroundings and even discriminate between different objects. The fish's electric sense allows it to hunt insect larvae in pitch darkness. The fish has paid a price for its electrical sensitivity. Processing the signals takes brainpower, so it has an exceptionally large brain. As a result, 60 per cent of the oxygen taken in by the fish goes to its brain. Even humans, with our whopping brains, only devote 20 per cent of our oxygen to them. Now for its eyes. Most vertebrates, including humans, have two types of light receptors on their retinas: rods and cones. Rods can sense dim light, but become bleached in bright light and stop working. Cones can't see in dim light, but given enough light they can see fine details and colours. Most animals' eyes are specialised for one or the other. Animals that are active during the day tend to have more cones than nocturnal animals such as foxes. In the human eye, the cones are clustered in a central region called the fovea, where the light is sharply focused, and the rods are outside it. As a result, we have excellent daytime vision and rather poor night vision. The retina of the Peters' elephantnose fish looks completely different. It is covered with cup-shaped depressions. Around 30 cones sit inside each cup, and a few hundred rods are buried underneath. Because of the peculiar design of the fish's retina, it was thought to be blind until about 10 years ago, says Andreas Reichenbach of the Paul Flechsig Institute for Brain Research in Leipzig, Germany. Reichenbach has now worked out what the cups are for. Each cup has a layer of massive cells that are full of guanine crystals. These form a mirrored surface that amplifies the light intensity within the cups, ensuring that the cones have enough light to work with. At the same time, because the cups are eating up so much of the light, only a small amount reaches the cones. As a result, both sets of receptors are supplied with the right amount of light. 
Yet when Reichenbach tested the fishes' vision, they didn't seem to do very well. For instance, they could only see objects that covered a big swathe of their visual field. If humans had vision that bad, we would miss any object whose width was less than one sixth of a full moon. However, the Peters' elephantnose fish were very good at spotting large moving objects against a cluttered background – essential for fish that live in dirty water. Presented with a monitor displaying a black stimulus on a white background, they took as long to spot it as goldfish. But when a grey noise pattern – like an untuned TV – was superimposed, the elephantnose fish spotted the stimulus faster than the goldfish. The fish's ability to see the wood for the trees probably helps it spot incoming predators like catfish. So Reichenbach thinks its oddball visual system isn't a mistake. "It's the right type for this fish," he says. Journal reference: Science, DOI: 10.1126/science.1218072 Thu Jun 28 22:15:54 BST 2012 by Freederick If I understand correctly, each cup-shaped depression serves as a single aggregate receptor, combining the output of all the individual light-sensitive cells comprising it. In effect, the fish is trading resolution for sensitivity. This is the same sort of effect as used to be employed in high-ISO photographic film, where the larger, flattened grains of photosensitive chemicals resulted in high sensitivity, at the cost of a coarse-grained image. The fish employs an even more effective method, effectively combining many smaller "grains" into one huge hypersensitive receptor.
<urn:uuid:af338612-8df7-49f3-953a-3be7407dfe41>
3.390625
1,252
Truncated
Science & Tech.
53.685916
- 1 hammer - something hard and resonant (and inanimate) to bang it on - 1 clock with a second hand - 1 measuring tape - 1 helper - 1 pair of binoculars Sound travels at 344 metres per second in air at 20 °C. This is slow enough for noises to be noticeably delayed when heard from even quite a short distance. You can use this effect to measure the speed of sound. Ask your helper to hit a wall or a piece of metal repeatedly with the hammer, about twice every second. The exact frequency of the beat doesn't matter; it can be measured later. But the beat should be regular. Now start walking away, looking back from time to time to watch your helper pounding away as you listen to the sound of their hammering. As the distance increases, the delay after each beat before the sound arrives will become longer and longer. Eventually the delay ...
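The article is cut off before the measurement step, but the usual way to finish this experiment is to keep walking until each bang is heard exactly one beat late, so the delay equals the beat period; a minimal Python sketch of that calculation, with made-up illustrative values for the distance and beat rate:

# Speed of sound from the hammer-and-beat method described above.
# Keep walking until the sound of each blow arrives exactly one beat late;
# then delay = beat period, and speed = distance / delay.
distance_m = 172.0        # taped or paced distance to the helper (illustrative)
beats_per_second = 2.0    # hammer rate, measured afterwards with the clock (illustrative)

delay_s = 1.0 / beats_per_second
speed_m_per_s = distance_m / delay_s
print(speed_m_per_s)      # 344.0 m/s, close to the 344 m/s quoted for air at 20 °C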
<urn:uuid:b9c9157d-c12e-4d9b-b2f0-ad73f0197497>
3.703125
218
Truncated
Science & Tech.
64.579538
|National Weather Service| Contents: About, Graph, Status Maps, History Button, Credits To use this website, click on the appropriate REGION. This will update the list of STATIONS and show a "Status Map" for that region. Click on your desired station, either on the map or in the list of STATIONS. This will bring up a graph of the total water level, as well as a text file that contains the numbers used in the graph. The graph combines several sources of data to produce a total water level prediction. To do so, it graphs the observed water levels in comparison to the predicted tide and predicted surge before the current time. This allows it to compute the "Anomaly". The "Anomaly" is the amount of water that was not predicted by either the tide or the storm surge model. This "Anomaly" is averaged over 5 days, and is then added to the future predictions of the tide and storm surge to predict the Total Water Level.Example: The first thing one notices is that there are two magenta vertical lines. The earlier one is when the storm surge model was run. It is run at 0Z and 12Z every day and the text form is available at: http://www.nws.noaa.gov/mdl/marine/etsurge.htm. The later magenta line is when the graph was generated. It is currently being generated 15 minutes after the top of every hour. (This is also the date that follows the label.) The next thing one notices are the horizontal lines labeled MLLW, MSL, MHHW, and MAT. These stand for the Mean Lower Low Water, Mean Sea Level, Mean Higher High Water, and Maximum Astronomical Tide. MAT was computed using our tide model, by computing the maximum of the predicted value for every hour (on the hour) for 19 years. The thought is that there is probably flooding if the total water level crosses MAT. The other datums came from http://www.co-ops.nos.noaa.gov/data_res.html. One might next notice the red observation line. This is based on data attained from Tides Online . Please see their Disclaimer for information as to the quality of these observations. If there is no red line, then either Tides Online does not have data for that station, or there has been a communications break down. In this case, the graph computes an anomaly based on what data it has, or sets it to 0. Then it predicts the total water level for all hours, or after the last of any observations it does have. The next thing of interest is the blue Tide line. This is the astronomical tide at every hour. The Harmonic Constants used were obtained from http://www.co-ops.nos.noaa.gov/data_res.html. We then note the gold storm surge curve, which is created by "pasting" one 48 hour prediction to the next 48 hour prediction. That is, using 12 hours from each prediction until the last prediction where we use 48 hours. The result is that we may generate kinks in the curve every 12 hours, where the model adjusted its prediction based on new data from the GFS wind model. Next we note the green curve, which is the "Anomaly" referred to above. This is simply the observation - (tide + storm surge). Preferably it is constant. The amount of deviation from a constant is an approximation of our error. Since we add the 5 day average of this value to our prediction, the perfect forecast does not have to have a zero Anomaly. Finally we see the black forecast curve. This is what we are really interested in, which is the total water level created by adding the 5 day average anomaly to the predicted tide, and the predicted storm surge. 
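The combination rule described above (total water level = predicted tide + predicted surge + average anomaly, where the anomaly is observation minus tide-plus-surge) is easy to express directly; a minimal Python sketch with made-up hourly values, not real station data:

# Anomaly = observation - (tide + surge); the averaged anomaly is added to the
# future tide and surge predictions to get the total water level forecast.
past_obs   = [1.10, 1.35, 1.42, 1.20, 0.95]   # observed water levels (illustrative)
past_tide  = [1.00, 1.25, 1.30, 1.10, 0.90]   # predicted astronomical tide
past_surge = [0.05, 0.05, 0.06, 0.05, 0.04]   # predicted storm surge

anomalies = [o - (t + s) for o, t, s in zip(past_obs, past_tide, past_surge)]
avg_anomaly = sum(anomalies) / len(anomalies)

future_tide, future_surge = 1.40, 0.07         # model predictions for a future hour
total_water_level = future_tide + future_surge + avg_anomaly
print(round(avg_anomaly, 3), round(total_water_level, 3))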
The history button allows one to see how the model has done over the last day or so. It displays 3 graphs. The first one is the current graph based on the current model run, and the current observations. The second graph is the last graph generated using the last model run. The third graph is the last graph generated using the next to last model run. This gives a view of the model over the last 24 to 36 hours depending on when the current time is. To print this page out (Netscape instructions) it is recommended that you right click on the history frame and choose "Open Frame in New Window". Then choose page setup, and set the top and bottom margins to 0. Then choose print, and preferably send it to a color printer, (although a black and white does work). The result should be 3 graphs on the same page. We would like to thank the following people/organizations:
<urn:uuid:2ef9c003-396e-4752-b9a6-dcc3dae7c434>
3.171875
985
Documentation
Science & Tech.
64.47905
The associative array -- an indispensable data type used to describe a collection of unique keys and associated values -- is a mainstay of all programming languages, PHP included. In fact, associative arrays are so central to the task of Web development that PHP supports dozens of functions and other features capable of manipulating array data in every conceivable manner. Such extensive support can be a bit overwhelming to developers seeking the most effective way to manipulate arrays within their applications. In this article, I'll offer 10 tips that can help you shred, slice and dice your data in countless ways. 1. Adding Array Elements PHP is a weakly typed language, meaning you're not required to explicitly declare an array nor its size. Instead you can both declare and populate the array simultaneously: Additional array elements can be appended like this: $capitals['Arkansas'] = 'Little Rock'; If you're dealing with numerically indexed arrays and would rather prepend and append elements using an explicitly-named function, check out the array_push() and array_unshift() functions (these functions don't work with associative arrays). 2. Removing Array Elements To remove an element from an array, use the unset() function: When using numerically indexed arrays you have a bit more flexibility in terms of removing array elements in that you can use the array_shift() and array_pop() functions to remove an element from the beginning and end of the array, respectively. 3. Swapping Keys and Values Suppose you wanted to create a new array called $states, which would use state capitals as the index and state names as the associated value. This task is easily accomplished using the array_flip() function: Suppose the previous arrays were used in conjunction with a Web-based "flash card" service, and you wanted to provide students with a way to test their knowledge of worldwide capitals, U.S. states included. You can merge arrays containing both state and country capitals using the array_merge() function: Suppose the data found in an array potentially contains capitalization errors, and you want to correct these errors before inserting the data into the database. You can use the array_map() function to apply a callback to every array element: The Standard PHP Library (SPL) offers developers quite a few data structures, iterators, interfaces, exceptions and other features not previously available within the PHP language. Among these features is the ability to iterate over an array using a convenient object-oriented syntax:
$capitals = array(
    'Arizona' => 'Phoenix',
    'Alaska' => 'Juneau',
    'Alabama' => 'Montgomery'
);
$arrayObject = new ArrayObject($capitals);
foreach ($arrayObject as $state => $capital)
    printf("The capital of %s is %s<br />", $state, $capital);
// The capital of Arizona is Phoenix
// The capital of Alaska is Juneau
// The capital of Alabama is Montgomery
This is just one of countless great features bundled into the SPL; be sure to consult the PHP documentation for more information. About the Author Jason Gilmore is the founder of the publishing and consulting firm WJGilmore.com. He also is the author of several popular books, including "Easy PHP Websites with the Zend Framework", "Easy PayPal with PHP", and "Beginning PHP and MySQL, Fourth Edition". Follow him on Twitter at @wjgilmore.
<urn:uuid:8d9d41d5-5f23-454d-95a8-8abc57ab3eae>
3.046875
722
Tutorial
Software Dev.
24.560172
Barrie Hunt of CSIRO Marine and Atmospheric Research says 'Despite 2010 being a very warm year globally, the severity of the 2009-2010 northern winter and a wetter and cooler Australia in 2010 relative to the past few years have been misinterpreted by some to imply that climate change is not occurring.' 'Recent wet conditions in eastern Australia mainly reflect short-term climate variability and weather events, not longer-term climate change trends. Conclusions that climate is not changing are based on a misunderstanding of the roles of climatic change caused by increasing greenhouse gases and climatic variability due to natural processes in the climatic system. 'These two components of the climate system interact continuously, sometimes enhancing and sometimes counteracting one another to either exacerbate or moderate climate extremes.' Mr Hunt says his climatic model simulations support what is clear from recent observations – that in addition to the role of climate change linked to human activity, natural variability produces periods where the global climate can be either cooler or warmer than usual. Mr Hunt's results were published in the latest edition of the international journal Climate Dynamics. He says some such natural temperature variations can last for 10 to 15 years, with persistent variations of about 0.2°C. 'Such natural variability could explain the above average temperatures observed globally in the 1940s, and the warm but relatively constant global temperatures of the last decade.' Mr Hunt also found that seasonal cold spells will still be expected under enhanced greenhouse conditions. For example, monthly mean temperatures up to 10°C below present values were found to occur over North America as late as 2060 in model simulations, with similar cold spells over Asia. Variations of up to 15°C below current temperatures were found to occur on individual days, even in 2060, despite a long-term trend of warming on average. 'These results suggest that a few severe winters in the Northern hemisphere are not sufficient to indicate that climatic change has ceased. The long-term trends that characterise climate change can be interpreted only by analysing many years of observations.' 'Future changes in global temperature as the concentration of greenhouse gases increases will not show a simple year-on-year increase but will vary around a background of long-term warming. Winters as cold as that recently experienced in the Northern Hemisphere, however, will become progressively less frequent as the greenhouse effect eventually dominates,' Mr Hunt said. This underlying warming trend, reflected in the projections of future climate and the observation that the past decade has been the warmest in the instrumental record, underlines the need to both adapt to what is now inevitable change and mitigate even greater changes.
<urn:uuid:df0d3c64-9bd5-4d02-98f8-7a7c217e5a58>
3.40625
601
Knowledge Article
Science & Tech.
26.796172
The magnitude system works quite well for quantifying the brightness of stars. We know that a 6th magnitude star will be barely visible to the unaided eye from rural areas, yet easily seen in even the smallest of telescopes. The magnitude system doesn’t work as well for deep-sky objects. Consider the spiral galaxy M33 in Triangulum. Listed as a 6th magnitude object, it’s notoriously difficult to view in telescopes. M33 is elusive because its light is spread over an area four times that of the full moon. Defocus a 6th magnitude star until it’s that large and you’ll get the idea. Another reason why M33 is such a demanding target is its location in a star-poor region of the late autumn sky. I usually find it by training my telescope on an area roughly 4 ½ degrees west and slightly north of alpha (a) Trianguli. You can also trace an imaginary line from the Andromeda Galaxy (M31) to the star beta (b) Andromedae, then extend an equal distance beyond (refer to the accompanying finder chart). In either case, begin a low power sweep of the area until you encounter a large, faint glow. The key to observing M33 is to use an eyepiece that affords a field of view of at least 1½ to 2 degrees. One of the best views I’ve had of M33 was with a 4-inch f/4 RFT (the Edmund Astroscan) and a magnifying power of 16X. I’ve spotted it with 7X50 binoculars, and some observers even report seeing it with the unaided eye. The key, of course, is to conduct a search for M33 from a dark-sky site on a clear, moonless evening. Numerous sources credit the discovery of M33 to Messier himself (in 1764); however evidence exists that the true discoverer may have been the Italian astronomer Giovanni Battista Hodiema over a century earlier. M33 is part of the Local Group of galaxies that includes our Milky Way and the Andromeda Galaxy. It’s approximately half the size of the Milky Way and lies about 2.9 million light-years away.
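The point about M33's light being spread out can be made quantitative: spreading a 6th-magnitude total brightness over roughly four times the area of the full moon gives a very faint average surface brightness. A minimal Python sketch, using rough round numbers assumed for illustration:

import math

# Average surface brightness of an extended object:
#   S (mag per arcmin^2) = integrated magnitude + 2.5 * log10(area in arcmin^2)
m_total = 6.0                              # integrated magnitude of M33 (approximate)
moon_area_arcmin2 = math.pi * 15.0**2      # full moon: about 30 arcmin across
area_arcmin2 = 4 * moon_area_arcmin2       # "four times that of the full moon"

surface_brightness = m_total + 2.5 * math.log10(area_arcmin2)
print(round(surface_brightness, 1))        # ~14.6 mag per arcmin^2, which is why M33 looks so dim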
<urn:uuid:a964755c-840c-4fb5-bf10-6a64094c1257>
3.71875
470
Knowledge Article
Science & Tech.
60.508222
Note the cold spots are not along the geographic line from the N. Pole to the S. Pole but where the globe, tilted and tipped, is getting the least amount of sunshine! Depending on the tipping and what part of the globe is getting sunlight, the coldest spot is not the geographic N. Pole, but that part of the globe that gets less sunlight. On the weather maps below, the Italy Face makes Russia, not Sweden, receive less sunlight as Sweden is tilted toward the Sun. The Americas Face makes the areas NW of Hudson Bay, not the N. Pole, receive less sunlight as this part receives a shorter day as it is pushed into an early sunset during the tilt swing. The New Zealand face shows an uneven distribution of cold along latitude 60° as the globe is pushed up and away from the Sun over New Zeland. This is then pulled forward over the India Face for warmer temperatures North of Mongolia. The weather maps, and the verbal descriptions, match. Dec 3: No precise sunset data because of the clouds, but but somewhere between 280° and 320°. [Assume 300°. Compass reading, subtract 30° for deviation. Skymap expects Azi 238°. Sunset NORTH by 32°] Dec 19: I think we have passed the actual Solstice already a week ago at least, as the sun is already now again higher in my view in Europe than it was 2 weeks ago. I feel we have rolled by at least 12-30 degrees. Dec 21: Sun set SSW rather than SW at Azi 225°. Dec 2: Sunset early by 30 minutes, SOUTH. Dec 3: Sunset SOUTH by 21° Dec 5: Sunrise NORTH by 11° Dec 11: Sunset SOUTH by 12° Dec 11: Sunset SOUTH by 14° Dec 13: Sunset SOUTH by 22° Dec 14: Sunrise NORTH by 7° Dec 26: Sunrise NORTH by 12° Dec 6: Sunrise high by 19° NORTH, early. Dec 7: Sunrise 50 minutes, NORTH. Dec 8: dark in the 2nd week of Nov at 5 PM, normal for in the first week of Dec. Dec 21: Sunset SOUTH by 14° Dec 23: Sunset SOUTH by 16° Dec 24: Sunset SOUTH by 18° Dec 8: SOUTH by 38°! Dec 10: Sunset SOUTH by 6° Dec 18: Sunset SOUTH by 3° Dec 22: Sunset SOUTH by 8° Dec 18: Sunset 5° SOUTH. Sunset Dec 3: SOUTH by 9°. Sunrise Dec 11: SOUTH by 8° Sunset Dec 11: SOUTH by 11° Midday Dec 12: SOUTH by 25° and too HIGH by 15-20° deg! Sunset Dec 12: SOUTH by 11° Sunrise Dec 7: SOUTH, late by 47 minutes. Sunset Dec 6: SOUTH, late by 28 minutes.
<urn:uuid:c22f6858-5b30-4acd-b9da-15d49797c5ba>
2.8125
627
Comment Section
Science & Tech.
86.902402
Tornadoes and Climate Change: Huge Stakes, Huge Unknowns Posted: 12:05 PM EDT on May 23, 2013 We currently do not know how tornadoes and severe thunderstorms may be changing due to climate change, nor is there hope that we will be able to do so in the foreseeable future. It does not appear that there has been an increase in U.S. tornadoes stronger than EF-0 in recent decades, but climate change appears to be causing more extreme years--both high and low--of late. We may see an increase in the number of severe thunderstorms over the U.S. by late this century. Read This Blog Entry Other Featured Blogs: Did you know that... Large golf ball-sized hail was produced from thunderstorms over the eastern United States on this date in 1988. Also, in 1990, a cloudburst washed topsoil and large rocks into the town of Culdesac, Idaho. More Weather Education Resources
<urn:uuid:e7908a91-7b74-42c2-ab95-59e1ffdccb6a>
3.015625
204
Content Listing
Science & Tech.
64.028462
Exceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is raised at the point where the error is detected; it may be handled by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred. The Python interpreter raises an exception when it detects a run-time error (such as division by zero). A Python program can also explicitly raise an exception with the raise statement. Exception handlers are specified with the try ... except statement. The try ... finally statement specifies cleanup code which does not handle the exception, but is executed whether an exception occurred or not in the preceding code. Python uses the ``termination'' model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation. When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop. In either case, it prints a stack backtrace, except when the exception is SystemExit. Exceptions are identified by string objects or class instances. Selection of a matching except clause is based on object identity (i.e., two different string objects with the same value represent different exceptions!) For string exceptions, the except clause must reference the same string object. For class exceptions, the except clause must reference the same class or a base class of it. When an exception is raised, an object (maybe None) is passed as the exception's ``parameter'' or ``value''; this object does not affect the selection of an exception handler, but is passed to the selected exception handler as additional information. For class exceptions, this object must be an instance of the exception class being raised. See also the description of the try statement in section 7.4 and raise statement in section 6.8.
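A minimal sketch of the constructs described above, using class-based exceptions (the form modern Python requires); the class and function names are made up for illustration:

class ConfigError(Exception):
    """Illustrative exception class; any class derived from Exception works."""

def parse_ratio(a, b):
    if b == 0:
        # Explicitly raise an exception with the raise statement.
        raise ConfigError("denominator must be non-zero")
    return a / b

try:
    print(parse_ratio(1, 0))
except ConfigError as exc:        # matches ConfigError or any subclass of it
    print("handled:", exc)        # the exception instance is passed to the handler
finally:
    print("cleanup runs whether or not an exception occurred")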
<urn:uuid:369dd57f-25d9-44e6-832c-29ed8d0645d2>
4.09375
331
Documentation
Software Dev.
47.998813
July 24, 2011 The photo above shows a lovely group of mushrooms nestled against the trunk of a eucalyptus tree. The association between the fungi and the tree however is no accident. This is a mutualistic relationship, where the two species assist each other, and in fact probably would be poorer without each other. Mutualism is any relationship between two species of organisms that benefits both species. Up to a quarter of the mushrooms you see while walking through the woods actually make their living through a mutualistic relationship with the trees in the forest. Remember of course that the mushroom is just the reproductive structure of a far more extensive organism consisting of a highly intertwined mass of fine white threads called a mycelium. The word mycorrhiza is derived from the Classical Greek words for "mushroom" and "root." In a mycorrhizal association, the fungal hyphae of an underground mycelium are in contact with plant roots but without the fungus parasitizing the plant. While it's clear that the majority of plants form mycorrhizas, the exact percentage is uncertain, but it's likely to lie somewhere between 80 and 90 percent. When the fungus’ mycelium envelopes the roots of the tree the effect is to greatly increase the soil area covered by the tree’s root system. This essentially extends the plant’s reach to water and nutrients, allowing it to utilize more of the soil’s resources. This mutualistic association provides the fungus with a relatively constant and direct access to carbohydrates, such as glucose and sucrose, supplied by the plant. In return the plant gains the benefits of the mycelium's higher absorptive capacity for water and mineral nutrients (due to comparatively large surface area of mycelium-to-root ratio), thus improving the plant's mineral absorption capabilities. Photo taken on May 7, 2011. Photo details: Camera Maker: Canon; Camera Model: Canon EOS 50D; Focal Length: 70.0mm; Aperture: f/10.0; Exposure Time: 0.013 s (1/80); ISO equiv: 1250; Exposure Bias: -1.00 EV; Metering Mode: Matrix; Exposure: aperture priority (semi-auto); White Balance: Auto; Flash Fired: No (enforced); Orientation: Normal; Color Space: sRGB.
<urn:uuid:8821cc54-1a17-46f7-9c23-b857acf0dd8d>
3.40625
492
Personal Blog
Science & Tech.
43.200575
A. There is an instant at which the string is completely straight. B. When the two pulses interfere, the energy of the pulses is momentarily zero. C. There is a point on the string that does not move up or down. D. There are several points on the string that do not move up or down. E. A and C are both true. F. B and D are both true.
<urn:uuid:abc13551-d525-435b-9b24-7edce26b05a0>
2.890625
88
Q&A Forum
Science & Tech.
93.35131
Our main goal here is to give a quick visual summary that is at once convincing and data rich. These graphs employ some of the most basic tools of visual data analysis and should probably form part of the basic vocabulary of an experimental mathematician. Note that traditionally one would run a test such as the Anderson-Darling test (which we have done) for the continuous uniform distribution and associate a particular probability with each of our sets of probabilities, but unless the probability values are extremely high or low it is difficult to interpret these statistics. Experimentally, we want to test graphically the hypothesis of normality and randomness (or non-periodicity) for our numbers. Because the statistics themselves do not fall into the nicest of distributions, we have chosen to plot only the associated probabilities. We include two different types of graphs here. A quantile-quantile plot is used to examine the distribution of our data and scatter plots are used to check for correlations between statistics. The first is a quantile-quantile plot of the chi square base 10 probability values versus a discrete uniform distribution. For this graph we have placed the probabilities obtained from our square roots and plotted them against a perfectly uniform distribution. Finding nothing here is equivalent to seeing that the graph is a straight line with slope 1. This is a crude but effective way of seeing the data. The disadvantage is that the data are really plotted along a one-dimensional curve and as such it may be impossible to see more subtle patterns. The other graphs are examples of scatter plots. The first scatter plot shows that nothing interesting is occurring. We are again looking at probability values, this time derived from the discrete Cramer-von Mises (CVM) test base 10,000. For each cube root we have plotted a point whose first coordinate is the CVM base 10,000 probability associated with the first 2500 digits of the cube root of i and whose second coordinate is the probability associated with the next 2500 digits. A look at the graph reveals that we have now plotted our data on a two-dimensional surface and there is a lot more `structure' to be seen. Still, it is not hard to convince oneself that there is little or no relationship between the probabilities of the first 2500 digits and the second 2500 digits. The last graph is similar to the second. Here we have plotted the probabilities associated with the Anderson-Stephens statistic of the first 10,000 digits versus the first 20,000 digits. We expect to find a correlation between these tests since there is a 10,000 digit overlap. In fact, although the effect is slight, one can definitely see the thinning out of points from the upper left hand corner and lower right hand corner. Figure 1: Graphs 1-3
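A minimal sketch of the first kind of graph described above, a quantile-quantile plot of a set of test probabilities against the uniform distribution; the probability values here are randomly generated placeholders, not the square-root data:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder probabilities standing in for the chi-square test p-values;
# under the null hypothesis they should look uniform on [0, 1].
rng = np.random.default_rng(0)
p_values = np.sort(rng.uniform(0.0, 1.0, size=200))

# If the p-values really are uniform, the points fall on a line of slope 1.
uniform_quantiles = (np.arange(1, len(p_values) + 1) - 0.5) / len(p_values)

plt.plot(uniform_quantiles, p_values, ".", markersize=4)
plt.plot([0, 1], [0, 1], "k--", linewidth=1)   # reference line, slope 1
plt.xlabel("uniform quantiles")
plt.ylabel("observed probability values")
plt.title("Q-Q plot against the uniform distribution")
plt.show()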
<urn:uuid:6697aede-f5b6-4d7b-b653-9cc6d6586fb4>
3.5625
554
Academic Writing
Science & Tech.
42.204993
No for "dry", yes for "wet". For "dry friction", such as a box on a floor, it is relatively constant. Why is this? Most objects are microscopically rough with "peaks" that move against each-other. As more pressing force is applied, the peaks deform more and the true contact area is increases proportionally. The surfaces adhere forming a bond that will take a certain amount of shear force to break. Since the molecules are moving much faster ~300m/s than the box they have plenty of time to adhere (so velocity is not an issue). However, static friction is sometimes be higher, in one explanation because the peaks have time to settle and interlock with each-other. Neglecting static friction, force is constant. The simplest case in wet friction is two objects separated by a film of water. In this case there is zero static friction, as the thermal energy is sufficient to disrupt any static, shear-bearing water molecule structure. However, water molecules still push and pull on each-other, transferring momentum from the top to the bottom. The rate of momentum transfer i.e. "friction" grows in proportion to how much momentum is available, which in turn grows with velocity. Thus, force is linear with velocity. However, interesting things happen when the bulk mass of the water gets important. In this case, bumps, etc on the surface push on the water creating currents that can ram into bumps on the other surface. If you double the velocity, your bumps will push twice as much water twice as fast for 4 times the force; force is quadratic to velocity. You can plug in formulas for the linear case (which depends on viscosity) and quadratic case (which depends on density) to see which one "wins" (this is roughly the Reynolds number), if there is no clear winner the answer is complex (see the Moody diagram). Nevertheless these are approximations and the real answer could fail to follow these "rules".
<urn:uuid:55986c1f-03be-4b53-892f-8b50cf3b888c>
3.203125
417
Q&A Forum
Science & Tech.
52.590476
Major Section: HISTORY
Example Forms:
ACL2 !>:puff* :max
ACL2 !>:puff* :x
ACL2 !>:puff* 15
ACL2 !>:puff* "book"
General Form: :puff* cd
where cd is a command descriptor (see command-descriptor) for a ``puffable'' command. See puff for the definition of ``puffable'' and for a description of the basic act of ``puffing'' a command. Puff* is just the recursive application of puff. Puff* prints the region puffed. To puff a command is to replace it by its immediate subevents, each of which is executed as a command. To puff* a command is to replace the command by each of its immediate subevents and then to puff* each of the puffable commands among the newly introduced ones. For example, suppose "ab" is a book containing the following:
(in-package "ACL2")
(include-book "a")
(include-book "b")
Suppose that book "a" contains defuns for the functions a1 and a2, and that book "b" contains defuns for the functions b1 and b2. Now consider an ACL2 state in which only two commands have been executed, the first being (include-book "ab") and the second being (include-book "c"). Thus, the relevant part of the display produced by :pbt 1 would be:
1 (INCLUDE-BOOK "ab")
2 (INCLUDE-BOOK "c")
Call this state the ``starting state'' in this example, because we will refer to it several times. Suppose :puff 1 is executed in the starting state. Then the first command is replaced by its immediate subevents and :pbt 1 would show:
1 (INCLUDE-BOOK "a")
2 (INCLUDE-BOOK "b")
3 (INCLUDE-BOOK "c")
Contrast this with the execution of :puff* 1 in the starting state. Puff* would first puff (include-book "ab") to get the state shown above. But then it would recursively puff* the puffable commands introduced by the first puff. This continues recursively as long as any puff introduced a puffable command. The end result of :puff* 1 in the starting state is:
1 (DEFUN A1 ...)
2 (DEFUN A2 ...)
3 (DEFUN B1 ...)
4 (DEFUN B2 ...)
5 (INCLUDE-BOOK "c")
Observe that when puff* is done, the originally indicated command, (include-book "ab"), has been replaced by the corresponding sequence of primitive events. Observe also that puffable commands elsewhere in the history, for example, command 2 in the starting state, are not affected (except that their command numbers grow as a result of the splicing in of earlier commands).
<urn:uuid:7bd5fd8e-4c80-4b18-a90d-19caa3325195>
3.203125
605
Documentation
Software Dev.
63.6203
C++ is the canonical example of a language that combines low-level and high-level features [1]. It doesn't simulate anything: it provides native support for almost every high-level construct you'll usually find in a common high-level language and almost every low-level construct you'll find in C. But of course the terms are highly relative; there was a point in time (not that long ago [2]) when C was considered a very high-level language. And there are quite a few other languages that offer considerable low-level functionality while still being commonly regarded as high-level, and vice versa; the lines are kind of fuzzy.

As for the syntax, that's something that is naturally affected by the language's level of abstraction. Low-level generally means:

In computer science, a low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture. Generally this refers to either machine code or assembly language. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware."

So naturally a low-level language adopts a syntax that's closer to machine code, which is inherently non human-friendly. Quite a few languages, like C++, have adopted a wide variety of syntactic sugar as a mechanism to make things easier to read or to express. But syntactic sugar is something that almost every high-level language has opted for; C++'s sugar alone doesn't make it a low-level language.

As for the complexity of a combined low- and high-level language, that's also natural: it's a tool with multiple goals, and every single goal adds to its complexity. That's unavoidable regardless of the goal. High-level languages are not "better" than low-level ones, they are just more concentrated on one goal. Languages that are designed with ease of use as a primary goal tend to be high-level, but that's only important if the necessary trade-offs to achieve the goal don't affect your applications.

Low or high level doesn't really matter; languages are primarily tools. You should choose the one that best fits whatever you're building, in combination with what skills you have. Most popular languages are multi-purpose and Turing complete, so in theory they are valid choices for building almost anything. There are no absolutes, of course: you may win in some areas if you opt for a high-level language and in others if you opt for a lower-level one, even within the same application. Most large-scale applications mix and match, following the "right tool for the job" mentality, and that's a more efficient approach, imho, than trying to have your cake and eat it too.

[1] But please note that there isn't a definitive answer on what's considered a strictly high-level feature and what a low-level one.
[2] In human years; in software years it was long ago...
<urn:uuid:1057cee7-b38e-492a-9886-804e9b564515>
3.515625
621
Q&A Forum
Software Dev.
43.192733
Question by Alexis: chemistry reaction problem about mass? Please help! Thanks!

An experiment that led to the formation of the new field of organic chemistry involved the synthesis of urea, CN2H4O, by the controlled reaction of ammonia and carbon dioxide:
2 NH3(g) + CO2(g) → CN2H4O(s) + H2O(l)
What is the mass of urea produced when ammonia is reacted with 100. g of carbon dioxide?

Answer by jreut: Use dimensional analysis and stoichiometry:
100 g CO2 x 1 mol CO2 / 44 g CO2 x 1 mol urea / 1 mol CO2 x 60 g urea / 1 mol urea = 100/44 * 60 ≈ 136 grams of urea produced.
The first term, 100 g CO2, is your starting amount. The second fraction, 1 mol CO2 / 44 g CO2, is a conversion factor that equals 1, since there are 44 g of CO2 in a mole of CO2. The third fraction is the stoichiometric ratio from the chemical equation: for every mole of CO2 consumed, 1 mol of urea is formed. The fourth fraction is the conversion factor back to grams.
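The same dimensional analysis can be written as a short calculation. This is just a sketch of the arithmetic above, using the rounded molar masses from the answer (44 g/mol for CO2, 60 g/mol for urea):

# Stoichiometry sketch mirroring the dimensional analysis above.
M_CO2 = 44.0    # g/mol, rounded molar mass of CO2
M_UREA = 60.0   # g/mol, rounded molar mass of urea (CN2H4O)

def urea_mass_from_co2(mass_co2_g: float) -> float:
    """grams CO2 -> mol CO2 -> mol urea (1:1 ratio) -> grams urea."""
    mol_co2 = mass_co2_g / M_CO2
    mol_urea = mol_co2 * 1.0          # 1 mol urea per mol CO2 in the balanced equation
    return mol_urea * M_UREA

print(urea_mass_from_co2(100.0))      # ~136.4 g of urea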
<urn:uuid:1f0694c7-c307-49e9-9302-c31b7f0251dc>
2.96875
335
Q&A Forum
Science & Tech.
78.739294
Science Fair Project Encyclopedia

Chalcedony is one of the cryptocrystalline varieties of the mineral quartz, having a waxy luster. It may be semitransparent or translucent and is usually white to gray, grayish-blue, or some shade of brown, sometimes nearly black. Other shades have been given different names. A clear red chalcedony is known as carnelian or sard; a green variety colored by nickel oxide is called chrysoprase. Prase is a dull green. Plasma is a bright to emerald-green chalcedony that is sometimes found with small spots of jasper resembling blood drops; it has been referred to as blood stone or heliotrope. Chalcedony is one of the few minerals other than quartz that is found in geodes.

The content of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:e588db7b-ec20-4a2f-a0ec-63d57c6f1119>
3.78125
197
Knowledge Article
Science & Tech.
44.505086
Ever hear of the bacterial flagellum? Upon electron microscope examination, it looks very much like a machine! In fact, it bears a strong resemblance to an outboard motor, you know, the kind you see on the back of those small aluminum fishing boats. Anyway, the bacterial flagellum is what scientists call an "irreducibly complex system". An irreducibly complex system is made in such a way that if you take away any one part of the system, the system ceases to function. A good example of this is the common mousetrap: take away any one part, and it ceases to function. In the case of irreducibly complex organisms, taking away any one part causes it to die.

Question #6: The theory of evolution says that less complex organisms evolved into more complex life forms. How could the bacterial flagellum have evolved from a lower life form? What is its transitional fossil?
<urn:uuid:4bd9bc64-1499-49d0-b513-8236057bf840>
3.796875
212
Q&A Forum
Science & Tech.
50.977998
We have seen that the grand disparity that was believed to exist between the way Nature works here on earth and in the heavens is not valid. The question remains, however: can we learn everything we need to know by investigating phenomena here on earth and extending the results to the Universe at large? The answer must be no, for the following reasons:

1) Who would have thought to look for a law of Universal gravitation without the precise measurements and detailed analysis of Brahe and Kepler? Cavendish's laboratory measurement of G was done in response to the need to interpret results obtained for the solar system.

2) Even if someone had used the Cavendish apparatus to map out the gravitational force between two bodies, independently of knowing Kepler's results, would we be able to infer a complete understanding of celestial motion? No. We know Newton's Law of Universal Gravitation, F = G m1 m2 / r^2, is not the whole story. For example, there are certain aspects of Mercury's motion that cannot be explained using the Newtonian form. The correct explanation of Mercury's orbital motion requires General Relativity. In fact, General Relativity predicts that the path of a beam of light will be bent in a gravitational field. This effect is too feeble to see in a lab on earth. It was first observed by starlight being bent near the disk of the sun in a solar eclipse.

3) If we consider the solar system to be our laboratory, is that a big enough laboratory to establish all that could be known? The answer to this must be no too. In the 20th century, since Zwicky in the 1930's, it has been known that either the gravitational force deviates from Newtonian gravity at large distances, or that there is substantial dark matter in and between galaxies. The density of dark matter is so low that it has an imperceptible effect on small-scale motions, like those in the solar system. The data seem to favor the existence of some very large amount of unknown, maybe even exotic (is this the new celestial matter?) type of matter.

4) Is the galaxy large enough as a laboratory to pin down all the Laws of Nature? This seems to require a negative answer as well. There are structures that encompass groups of galaxies, and the anisotropy of the cosmic background radiation is a pattern on an extremely large scale. We have also seen that the luminosity vs. distance plot for supernovae (SNe Ia) suggests that the universe is accelerating in its expansion. This was the discussion about "dark energy" or the cosmological constant. This effect is not seen until we look out to red shifts > 1, or about 6 billion light years. Sometimes features of the world are not visible unless we look on the large scale. In fact, the most recent analysis from WMAP, using the angular spot size of the CMBR temperature fluctuations, fits a flat-space scenario. Hence, ignoring local gravitational distortions of space-time, the sum of angles in a triangle that covers most of the universe is 180 degrees!

5) If we could include the entire universe in our laboratory, would we have enough data to explain it all?
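As a side note on the scale involved in point 2, here is a minimal sketch that evaluates Newton's law, F = G m1 m2 / r^2, for the Sun-Earth pair; the constants are standard textbook values and are not taken from the passage above.

# Newton's law of universal gravitation evaluated for the Sun-Earth pair.
G = 6.674e-11          # m^3 kg^-1 s^-2 (the constant measured in Cavendish-type experiments)
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
R = 1.496e11           # m (1 astronomical unit)

def gravitational_force(m1: float, m2: float, r: float) -> float:
    return G * m1 * m2 / r**2

print(f"{gravitational_force(M_SUN, M_EARTH, R):.2e} N")   # roughly 3.5e22 N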
<urn:uuid:f9420df7-da98-4a38-b73a-49b3f5de5570>
3.265625
638
Q&A Forum
Science & Tech.
45.422311
This lecture is about ways of looking at DNA sequences in complete genomes and chromosomes, in terms of symmetry elements. There are two parts to this talk.

In Part 1, I will discuss the fact that we simply have "Too Much Information" becoming available, and the problem will only get worse in the near future. There are ways of cataloging and organising the data, of course. I have found that the true diversity of genome sizes in Nature is often neglected, so we'll talk for a few minutes about the "C-value paradox", along with some possible ideas for WHY certain organisms have so much DNA. I would like to think that one way of dealing with the explosion of sequence information, in terms of DNA sequences, is to think about it in biological terms, in particular in physical-chemical terms of the structure and function of symmetry elements. For example, there are specific DNA sequences which "code" for a telomere, and different DNA sequences which are specific for centromeres. Specific DNA sequences, their structures, and their biological functions will be discussed.

In Part 2, I will introduce "DNA Atlases", first having a look at base composition throughout sequenced chromosomes, and then looking at gene expression throughout the whole genome.

I have also made a separate file containing specific LEARNING OBJECTIVES for this lecture, as well as a "self-test quiz", which I recommend having a look at BEFORE the lecture, if possible. I've incorporated the answers to questions 1 and 2 into PART 1 of the lecture notes.

Brevis esse laboro, obscurus fio. ("I labour to be brief, and I become obscure.") - Horace

The information in GenBank is doubling every 10 months. What are the implications of this? A look at genome sequencing since 1994:

| Year | # Genomes sequenced |

Although the number of genomes being sequenced is increasing rapidly, one has to put this into perspective - the organisms can be placed into four different classes:

| Organism group | Size (bp) | No. sequenced |
| viruses | ~300 bp to ~350,000 bp | 545 |
| prokaryotes | ~250,000 to ~15,000,000 bp | >100 |
| single-celled eukaryotes | ~12,000,000 to ~600,000,000,000 bp | 4 |
| multi-celled eukaryotes | ~20,000,000 to ~500,000,000,000 bp | 3 |

| Drosophila species | Genome size (in base pairs) |
| D. americana | ~300,000,000 bp |
| D. arizonensis | ~225,000,000 bp |
| D. eohydei (male) | ~234,000,000 bp |
| D. eohydei (female) | ~246,000,000 bp |
| D. funebris | ~255,000,000 bp |
| D. hydei | ~202,000,000 bp |
| D. melanogaster | ~180,000,000 bp (~138,000,000 bp sequenced) |
| D. miranda | ~300,000,000 bp |
| D. nasutoides | ~800,000,000 bp |
| D. neohydei | ~192,000,000 bp |
| D. simulans | ~127,500,000 bp |
| D. virilis | ~345,000,000 bp |

In summary, the genome sizes of the Drosophila species that have been examined so far range from about 127 million bp to about 800 million bp. But of course, at present we SUSPECT that they contain roughly the same number of genes, although it is possible (likely) that they contain duplicated regions (or perhaps even entire chromosomes; there is ample space to have an entire extra copy, or two or more, of the entire genome). In addition, they also contain various types of repeats, known as "selfish DNA".

Why does amoeba have more than 200x as much DNA as humans? Think about it for a discussion in class. I have a possible explanation, although I'm not sure anyone really knows the answer to this, to be honest. This brings us to the first question on the quiz.

Answers to the self-test quiz which you are supposed to do BEFORE the lecture:
1. The short answer - a very long time. About 2.4 × 10^12 years.
That's about 160 times longer than the estimated age of the universe!
2. The piece of paper would be quite thick - it would reach outside the earth's atmosphere and beyond the orbit of the planet Mars.

Today's lecture will cover:
Next Tuesday's lecture will cover:

One way of dealing with the problem of how to display so much sequence information is to have a look at the whole chromosome at once, smoothing over a large window. The entire bacterial chromosome is displayed as a circle, with different colours representing various parameters. First, as an introduction to atlases, we will look at base composition. Then we will have a look at levels of expression of mRNA and proteins throughout the chromosome. As an example, I will use my very favourite organism, Escherichia coli K-12.

There are several things to notice in this plot. First, the bases are not uniformly distributed throughout the genome; there are "clumps" or clusters where specific bases are a bit more concentrated. Also, the G's (turquoise) clearly are seen to be favoured on one half of the chromosome, whilst the C's (magenta) are on the other strand. This shows up in the "GC-skew" lane as well (2nd circle from the middle). I have labelled the entire terminus region, which ranges from TerE (around 1.08 million bp, or Mbp) to TerG (~2.38 Mbp) in Escherichia coli K-12. Finally, several genes corresponding to the darker bands (e.g., more biased nucleotide composition) are labelled.

The same pattern can be seen for the other three Escherichia coli chromosomes which have been sequenced (so far!), listed below:
- Strain K-12, isolate W3110 (DDBJ; NCBI taxonomy)
- Strain K-12, isolate MG1655 (U. Wisconsin; TIGR CMR; NCBI taxonomy; NCBI Entrez)
- Strain O157:H7, substrain EDL933 (U. Wisconsin; NCBI taxonomy; NCBI Entrez)
- Strain O157:H7, substrain RIMD 0509952 (Miyazaki, Japan; NCBI taxonomy; NCBI Entrez; DNA Res. 8:11-22)

In addition to showing overall global properties of the chromosome (such as the replication origin and terminus), the base composition can also highlight regions different from the rest of the genome. For example, in the plasmid pO157, there are some regions which are much more AT rich (probably these came about as a result of horizontal gene transfer - we will discuss this again in the next lecture...). Note that the "toxB" gene is much more AT rich than the average for the rest of the plasmid. This COULD be due to the fact that this gene came from an organism with a more AT rich genome, or (more likely in my opinion) it is more AT rich because it is important for this gene to vary in sequence (e.g., have a higher mutational frequency).

Escherichia coli is probably the best characterised organism. There are 4085 predicted genes in Escherichia coli strain K-12 isolate W3110. There are 4289 predicted genes in Escherichia coli strain K-12 isolate MG1655. There are 5283 predicted genes in Escherichia coli strain O157:H7 isolate EDL933 (enterohemorrhagic pathogen). There are about 5361 predicted genes in Escherichia coli strain O157:H7 substrain RIMD 0509952 (enterohemorrhagic pathogen).

Roughly 2600 genes have been found to be expressed in Escherichia coli strain K-12 cells under standard laboratory growth conditions. About 2100 spots can be seen on 2-D protein gels. Very roughly 1000 different genes (only about 600 mRNA transcripts) are expressed at "detectable levels" in E. coli cells grown in LB media. Only about 350 proteins exist at concentrations of > 100 copies per cell.
(These make up 90% of the total protein in E. coli!) Most (>90%) of the proteins are present in very low amounts (less than 100 copies per cell).

It has been known since the 1960's that genes closer to the replication origin are more highly expressed. However, it has only been in the past few years that technology has allowed the simultaneous monitoring of ALL the genes in Escherichia coli. There are 4397 annotated genes in the E. coli K-12 genome. Shown below is an "Atlas plot" of the E. coli K-12 genome, with the outer circle representing the concentration of proteins (roughly in number of molecules/cell) and mRNA (again, roughly number of molecules/cell). Under these conditions (e.g., cells grown to late log phase in minimal media), there were 2005 genes expressed at detectable levels, and only 233 proteins were found to exist at "abundant" levels (e.g., very roughly more than 100 molecules per cell).

For E. coli K-12 cells grown in minimal media to late log phase: 4397 annotated genes -> 2005 mRNAs expressed -> 233 abundant proteins (note that these numbers will vary for different experimental conditions...).

In this picture, the outer lane represents the concentration of proteins (blue), the next lane the concentration of mRNA (green), and then the annotated genes. The inner three circles represent different aspects of the DNA base composition throughout the genome. The innermost circle (turquoise/violet) is the bias of G's towards one strand or the other (that is, a look at the mono-nucleotide distribution of the 4 DNA bases). The next lane is the density of purine (or pyrimidine) stretches of 10 bp or longer. Note that in both cases purines tend to favour the leading strand of the replichore, whilst pyrimidine tracts are more likely to occur on the lagging strand. Finally, the next circle (turquoise/red) is simply the AT content of the genome, averaged over a 50,000 bp window. Note that the terminus is slightly more AT rich, whilst the rest of the genome is slightly GC rich. (The AT content scale ranges from 45% to 55%.)

Link to more atlases for Escherichia coli genomes.
Link to the main "Genome Atlas" web page.

Friday (6 April, 2001)

Link to a list of recent papers and talks on DNA structures.

Watson, James D., "A PASSION FOR DNA: Genes, Genomes, and Society" (Oxford University Press, Oxford, 2000).
Sinden, Richard R., "DNA: STRUCTURE and FUNCTION" (Academic Press, New York, 1994).
Calladine, C.R., Drew, H.R., "Understanding DNA: The Molecule and How It Works" (2nd edition, Academic Press, San Diego, 1997).
A list of more than a thousand books about DNA
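To make the base-composition lanes described above concrete, here is a minimal sketch of a sliding-window GC-skew and AT-content calculation of the kind plotted in these atlases. The window size, step, and toy sequence are arbitrary illustrative choices (the published AT-content lane, for instance, used a 50,000 bp window), and a real analysis would read a whole chromosome from a FASTA file rather than the toy string below.

def gc_skew(window: str) -> float:
    """GC skew = (G - C) / (G + C) for one window; 0.0 if the window has no G or C."""
    g, c = window.count("G"), window.count("C")
    return (g - c) / (g + c) if (g + c) else 0.0

def at_content(window: str) -> float:
    """Fraction of A and T bases in one window."""
    return (window.count("A") + window.count("T")) / len(window)

def sliding_profiles(seq: str, window: int = 1000, step: int = 500):
    """Yield (position, GC skew, AT content) along the sequence."""
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window]
        yield start, gc_skew(w), at_content(w)

# Toy example sequence: a GC-balanced stretch followed by an AT-rich stretch.
toy = ("ATGGCGTACG" * 300) + ("ATATATGCAT" * 300)
for pos, skew, at in sliding_profiles(toy, window=1000, step=1000):
    print(f"pos={pos:5d}  GC skew={skew:+.3f}  AT content={at:.3f}")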
<urn:uuid:4efde337-022b-4a51-bb2b-99949f551b28>
3.046875
2,521
Audio Transcript
Science & Tech.
61.187538
Delaware Bay — One Link in a 10,000-Mile-Long Chain

During May and early June, the shores of Delaware Bay resonate with the cheerful chattering of more than 20 species of migratory shorebirds. Delaware Bay provides an ecologically important stepping-stone for the birds' spring pilgrimage to Arctic nesting grounds. The Delaware Bay is the largest spring staging area for shorebirds in eastern North America. A staging site is an area with plentiful food where migrating birds gather to replenish themselves before continuing on their journey. Staging sites serve as a link in a chain connecting wintering areas with breeding grounds, sites for which there are no alternatives.

Shorebirds begin to arrive in early May. The numbers of birds soar upward during mid-month and usually peak between May 18 and 24 (in some years as late as May 28). They have traveled from the coasts of Brazil, Patagonia, and Tierra del Fuego, from desert beaches of Chile and Peru, and from mud flats in Suriname, Venezuela, and the Guyanas. After several days of non-stop flight, and having come as far as 10,000 miles, they reach the bay beaches depleted of their energy reserves. Luckily, nature provides an abundant food supply in this area at just this time of year: the eggs of hundreds of thousands of horseshoe crabs that have migrated to Delaware Bay beaches to spawn.

A Feast for Feathered
The shorebirds spend between two and three weeks gorging primarily on fresh horseshoe crab eggs, although worms and small bivalves are also plentiful. High in protein and fat, the eggs are an energy-rich source of food. This high-calorie diet enables the birds to nearly double or triple their body weight before continuing on to Arctic nesting areas.

More Than a Million Mouths
Each spring, scientists from the Delaware and New Jersey Divisions of Fish and Wildlife conduct weekly aerial surveys of migratory shorebirds on Delaware Bay beaches. In May 2001, scientists observed more than 775,000 shorebirds along beach habitat. Ninety-five percent of these birds were represented by four species: red knots, ruddy turnstones, semipalmated sandpipers, and dunlins. Migratory shorebirds are also known to utilize marshes and back-bay habitats. Thus, throughout their spring migration, the actual number of shorebirds using Delaware Bay as a staging ground may surpass one million.

A recent decline in the horseshoe crab population appears to correlate with a decline in migrating shorebird populations.
<urn:uuid:04ebbc42-3082-425f-9757-68642066de98>
3.375
630
Knowledge Article
Science & Tech.
41.248588
Two researchers from the State Key Laboratory of Millimeter Waves at Southeast University in Nanjing, China, have designed and prototyped a device that acts like a black hole for electromagnetic waves in the microwave spectrum. It consists of 60 concentric rings of metamaterials, a class of ordered composites that can distort light and other waves. Qiang Cheng and Tie Jun Cui called their device an “omnidirectional electromagnetic absorber”. The 60 rings of circuit board are arranged in concentric layers and coated in copper. Each of the layers is printed with alternating patterns, which either resonate or don’t resonate in electromagnetic waves.

What is indeed very amazing is that their device can spiral 99% of the radiation coming from all directions into itself and convert it into heat, acting like an “electromagnetic black body” (or “hole”). The omnidirectional electromagnetic absorber could be used to harvest the energy that exists in the form of electromagnetic waves and turn it into usable heat. Of course, turning the heat back into electricity isn’t a 100% efficient process (far from it), but directly harvesting electromagnetic waves in the classic antenna fashion is far less efficient than this black hole.

“Since the lossy core can transfer electromagnetic energies into heat energies, we expect that the proposed device could find important applications in thermal emitting and electromagnetic-wave harvesting.”

Possible uses vary from powering your phone with the existing electromagnetic energy that surrounds it, to wireless power transmission, and even powering spaceships – it all depends on the wavelength that the device is tuned to. The question that arises is: would this kind of device have other uses beyond the constructive ones mentioned above?

One reader comment: "Electromagnetic wave harvesting? Extremely fascinating. When one thinks about it, it makes sense. Electromagnetism is one of the more powerful forces of the universe (next to gravity and the strong/weak nuclear forces). The inner sci-fi geek in me loves the idea and can only imagine what an EM device could do for humanity in the future. But of course the part of me stuck in reality is still skeptical of such technologies and what their applicable use would be. Very very cool science though!" - Consumer Energy Alliance, "A balanced approach towards America's energy future"
<urn:uuid:9223455a-55b0-44bb-b0c4-c351a964fb39>
3.328125
510
Personal Blog
Science & Tech.
30.252155
ATP hydrolysis in F1-ATPase

Why is F1Fo-ATP synthase so important? F1Fo-ATP synthase, or ATP synthase for short, is one of the most abundant proteins in every organism. It is responsible for synthesizing the molecule adenosine tri-phosphate (ATP), the cell's energy currency. ATP is depicted in Fig. 1 and is used to power and sustain virtually all cellular processes needed to survive and reproduce. Even when at rest, the human body metabolizes more than half its body weight in ATP per day, with this figure rising to many times the body weight under conditions of physical activity.

What do we know about F1Fo-ATP synthase? Researchers have been trying to uncover the "secret" behind ATP synthase's very efficient mode of operation for quite some time. Unfortunately, even after more than 30 years of study, we still don't fully understand how F1Fo-ATPase really works. The protein consists of two coupled rotary molecular motors, called Fo and F1, respectively, the first one being membrane embedded and the latter being solvent exposed. One of the most important breakthroughs in the field was the determination of an atomic-resolution X-ray crystal structure for the F1 part of ATP synthase. This allowed researchers, for the first time, to connect biochemical data to the three-dimensional structure of the protein (Abrahams et al., Nature 370:621-628, 1994). The X-ray structure beautifully supported Paul Boyer's "binding change mechanism" (Boyer, Bioch. Bioph. Acta 215-250, 1993) as the modus operandi for ATP synthase's rotational catalytic cycle and led to the 1997 Nobel Prize in chemistry for Boyer and Walker. F1-ATPase in its simplest prokaryotic form (shown schematically in Fig. 2) consists of a hexameric assembly of alternating α and β subunits arranged in the shape of an orange. The central cavity of the hexamer is occupied by the central stalk formed by subunits γ, δ and ε. Due to a lack of high-resolution structures for the Fo part of ATP synthase, much less is known about this subunit. It is currently thought that a transmembrane proton gradient drives rotation of the c-subunit ring of Fo, which is then coupled to movement of the central stalk. The rotation of the latter eventually causes conformational changes in the catalytic sites located in F1, leading to the synthesis of ATP.

What are some of the missing pieces in our understanding of F1? ATP synthase can be separated into its two constituent subunits, F1 and Fo, which can then be studied individually. Solvated F1 is able to hydrolyze ATP, and experiments pioneered by Noji et al. (Nature 386:299-302, 1997) have shown that ATP hydrolysis in F1 drives rotation of the central stalk. However, we don't know if ATP hydrolysis itself or rather the binding of ATP to the catalytic sites induces rotation. We would also like to know how the binding pockets cooperate during steady-state ATP hydrolysis to achieve their physiological catalysis rates. It has been suggested that ATP binding and product unbinding provide the main "power stroke" and that the actual catalytic step inside the binding pockets is equi-energetic, but, unfortunately, there is currently no consensus regarding this issue. In any case, since ATP in solution is a very stable molecule, the catalytic sites have to be able to lower the reaction barrier toward product formation considerably in order to cause efficient hydrolysis.
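The point about lowering the reaction barrier can be made quantitative in a generic way. The sketch below uses the Arrhenius relation, k = A exp(-Ea/RT), with arbitrary example barriers and temperature; it is only an illustration of how sensitive a rate is to barrier height, not a result from the papers cited here.

import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K (arbitrary example: room temperature)

def arrhenius_rate(prefactor: float, barrier_kj_mol: float) -> float:
    """k = A * exp(-Ea / (R*T)); the prefactor A sets a common scale for both cases."""
    return prefactor * math.exp(-barrier_kj_mol * 1000.0 / (R * T))

A = 1.0e12                              # s^-1, illustrative prefactor
uncatalyzed = arrhenius_rate(A, 100.0)  # hypothetical 100 kJ/mol barrier
catalyzed = arrhenius_rate(A, 60.0)     # hypothetical 60 kJ/mol barrier

print(f"rate enhancement ~ {catalyzed / uncatalyzed:.1e}")  # ~1e7 for a 40 kJ/mol drop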
Computational Study of ATP hydrolysis in F1-ATPase

Our research focuses on investigating the ATP hydrolysis reaction and its interaction with the protein environment in the catalytic sites of F1-ATPase using computer simulations. To be able to study a chemical reaction inside the extended protein environment provided by the catalytic sites, we employ combined quantum mechanical/molecular mechanical (QM/MM) simulations to investigate both the βTP and βDP catalytic sites. Fig. 3 depicts the quantum mechanically treated region of the former. Quite surprisingly, our simulations show that there is a dramatic change in the reaction energetics in going from βTP (strongly endothermic) to βDP (approximately equi-energetic), despite the fact that the overall protein conformation is quite similar. In both βTP and βDP, the actual chemical reaction proceeds via a multi-center proton relay mechanism involving two water molecules. A careful study of the electrostatic interactions between the protein environment and the catalytic core region, as well as several computational mutation studies, identified the "arginine finger" residue αR373 as the most significant element involved in this change in energetics.

Several important conclusions can be drawn from our simulations: Efficient catalysis proceeds via a multi-center proton pathway, and a major factor in ATPase's efficiency is, therefore, the ability to provide the proper solvent environment by means of its catalytic binding pocket. Furthermore, the sidechain of the arginine finger residue αR373 is found to be a major element in signaling between catalytic sites to enforce cooperation, since it controls the reaction barrier height as well as the reaction equilibrium of the ATP hydrolysis/synthesis reaction.

Zooming in on ATP hydrolysis in F1. Markus Dittrich and Klaus Schulten. Journal of Bioenergetics and Biomembranes, 37:441-444, 2005.
ATP hydrolysis in the βTP and βDP catalytic sites of F1-ATPase. Markus Dittrich, Shigehiko Hayashi, and Klaus Schulten. Biophysical Journal, 87:2954-2967, 2004.
On the mechanism of ATP hydrolysis in F1-ATPase. Markus Dittrich, Shigehiko Hayashi, and Klaus Schulten. Biophysical Journal, 85:2253-2266, 2003.

Other QM/MM projects

This material is based upon work supported by the National Science Foundation under Grant No. 0234938. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
<urn:uuid:ffc4d1d4-49ed-4dfd-b327-6c2f202a05b7>
3.53125
1,351
Academic Writing
Science & Tech.
36.90577
a. Synoptic history

An extratropical low pressure system formed just east of the Turks and Caicos Islands near 0000 UTC 25 October in response to an upper-level cyclone interacting with a frontal system. The low initially moved northwestward, and in combination with a strong surface high to the north developed into a gale center six hours later. By 1800 UTC that day it had developed sufficient organized convection to be classified using the Hebert-Poteat subtropical cyclone classification system, and the best track of the subtropical storm begins at this time (Table 1 and Figure 1). Upon becoming a subtropical storm, the cyclone turned northward. This motion continued for 24 h while the system slowly intensified. The storm jogged north-northwestward late on 26 October, followed by a north-northeastward turn and acceleration on the 27th. During this time, satellite imagery indicated intermittent bursts of central convection, while Air Force Reserve Hurricane Hunter aircraft indicated a large (75-100 n mi) radius of maximum winds. This evolution was in contrast to that of Hurricane Michael a week and a half before. Although of similar origin to the subtropical storm, Michael developed persistent central convection and completed a transition to a warm-core hurricane.

After reaching a 50-kt intensity early on 27 October, little change in strength occurred during the next 24 h. The storm turned northeastward and accelerated further on the 28th in response to a large and cold upper-level cyclone moving southward over southeastern Canada. A last burst of organized convection late on the 28th allowed the storm to reach a peak intensity of 55 kt. A strong cold front moving southward off the New England coast then intruded into the system, and the storm became extratropical near Sable Island, Nova Scotia, around 0600 UTC 29 October. The extratropical center weakened rapidly and lost its identity near eastern Nova Scotia later that day. It should be noted that the large cyclonic circulation that absorbed the subtropical storm was responsible for heavy early-season snowfalls over portions of the New England states.

b. Meteorological statistics

Table 1 shows the best track positions and intensities for the subtropical storm, with the track plotted in Figure 1. Figure 2 and Figure 3 depict the curves of minimum central sea-level pressure and maximum sustained one-minute average "surface" (10 m above ground level) winds, respectively, as a function of time. These figures also contain the data on which the curves are based: satellite-based Hebert-Poteat and experimental extratropical transition intensity (Miller and Lander, 1997) estimates from the Tropical Analysis and Forecast Branch (TAFB), the Satellite Analysis Branch (SAB) of the National Environmental Satellite Data and Information Service (NESDIS), and the Air Force Weather Agency (AFWA), as well as data from aircraft, ships, buoys and land stations.

The Air Force Reserve Hurricane Hunters flew two missions into the storm with a total of four center fixes. Central pressures on both flights were in the 997-1000 mb range, and the maximum flight-level (1500 ft) winds were 60 kt on the first flight and 61 kt on the second. A weak temperature gradient was observed in the system on the first flight, suggesting that the cyclone still had some baroclinic characteristics. The second flight showed a uniform airmass within 100 n mi of the center with temperatures of about

The storm had a large envelope, and many ships reported 34 kt or higher winds.
Table 2 summarizes these observations. There were few observations near the central core. Canadian buoy 44137 reported winds of 160/39 kt with a pressure of 979.1 mb at 0200 UTC 29 October, which is the basis for the lowest pressure. Other reports from this buoy indicate that the winds increased in the last hour before the center passed, suggesting that some kind of inner wind maximum was present even as the storm was becoming extratropical. Earlier, a drifting buoy about 35 n mi southeast of the center reported a pressure of 996.6 mb at 2051 UTC 27 October, which showed that the storm had begun to deepen. Sable Island, Nova Scotia, reported a pressure of 980.6 mb as the center passed over at 0600 UTC on the 29th. Maximum sustained winds were 35 kt after the center passage at 0700 and 0800 UTC. Several other stations in eastern Nova Scotia and southwestern Newfoundland reported sustained 35-50 kt winds around 1200 UTC on the 29th.

The maximum intensity of this system is uncertain. Satellite intensity estimates late on the 28th and early on the 29th, along with a 35-40 kt forward motion, indicate the possibility of 65-75 kt sustained winds. However, this is not supported by surface observations near the center early on the 29th. The maximum intensity is estimated to have been 55 kt.

c. Casualty and damage statistics

No reports of casualties or damage have been received at the National Hurricane Center (NHC).

d. Forecast and warning critique

No advisories were written on this storm, as a decision was made operationally to handle it in marine forecasts as an extratropical storm. Post-analysis of satellite imagery and of 27 October aircraft data are the basis for classifying the system now as subtropical. Due to the operational handling, there are no formal NHC forecasts to verify. Large-scale numerical models generally performed well in forecasting the genesis and motion of this cyclone. The models did mostly underestimate the intensification that occurred north of the Gulf Stream. However, this strengthening was fairly well forecast by the GFDL model. No tropical cyclone watches or warnings were issued for this storm. Marine gale and storm warnings were issued in high seas and offshore forecasts from the Marine Prediction Center and the TAFB of the TPC. Gale warnings were also issued for portions of the North Carolina coastal waters by local National Weather Service offices.

Miller, D. W. and M. A. Lander, 1997: Intensity estimation of tropical cyclones during extratropical transition. JTWC/SATOPS TN-97/002, Joint Typhoon Warning Center/Satellite Operations, Nimitz Hill, Guam, 9 pp.

Table 1. Best track, Subtropical Storm, 25-29 October 2000.
| Date/Time (UTC) | Lat. (°N) | Lon. (°W) | Pressure (mb) | Wind Speed (kt) | Stage |
| 25 / 0000 | 21.5 | 69.5 | 1009 | 30 | extratropical low |
| 25 / 0600 | 22.5 | 70.0 | 1007 | 35 | extratropical gale |
| 25 / 1200 | 23.5 | 70.9 | 1006 | 35 | " |
| 25 / 1800 | 24.5 | 71.7 | 1005 | 35 | subtropical storm |
| 26 / 0000 | 25.7 | 71.7 | 1004 | 35 | " |
| 26 / 0600 | 26.6 | 71.7 | 1003 | 35 | " |
| 26 / 1200 | 27.4 | 71.8 | 1002 | 40 | " |
| 26 / 1800 | 28.3 | 72.1 | 1000 | 45 | " |
| 27 / 0000 | 29.2 | 72.5 | 997 | 50 | " |
| 27 / 0600 | 30.0 | 72.6 | 997 | 50 | " |
| 27 / 1200 | 30.9 | 72.5 | 997 | 50 | " |
| 27 / 1800 | 32.6 | 71.6 | 996 | 50 | " |
| 28 / 0000 | 34.2 | 70.7 | 994 | 50 | " |
| 28 / 0600 | 35.7 | 69.9 | 992 | 50 | " |
| 28 / 1200 | 36.5 | 68.1 | 990 | 50 | " |
| 28 / 1800 | 38.0 | 65.5 | 984 | 55 | " |
| 29 / 0000 | 40.5 | 62.6 | 978 | 55 | " |
| 29 / 0600 | 44.0 | 60.0 | 980 | 50 | extratropical |
| 29 / 1200 | 46.0 | 59.5 | 992 | 45 | " |
| 29 / 1800 | | | | | absorbed into larger extratropical low |
| 29 / 0200 | 41.7 | 61.6 | 976 | 55 | minimum pressure |

Table 2. Selected ship and buoy observations of subtropical storm force or greater winds associated with the subtropical storm, 25-29 October 2000.

| Ship/Buoy | Date/Time (UTC) | Lat. (°N) | Lon. (°W) | Wind dir/speed (kt) | Pressure (mb) |
| Dock Express 20 | 25/1200 | 27.0 | 68.9 | 050/45 | 1009.0 |
| Splendour of the Seas | 25/1800 | 28.6 | 65.2 | 070/40 | 1015.0 |

a: 8-minute average wind
b: 10-minute average wind

Figure 1. Best track for the subtropical storm, 25-29 October 2000.
Figure 2. Best track minimum central pressure curve for the subtropical storm, 25-29 October 2000.
Figure 3. Best track maximum sustained 1-minute 10-meter wind speed curve for the subtropical storm, 25-29 October 2000. Vertical black bars denote wind ranges in subtropical and extratropical satellite intensity estimates.
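As a rough cross-check on the 35-40 kt forward motion cited above (a sketch, not part of the original report), the translation speed between two consecutive best-track fixes can be estimated from their latitude and longitude with the haversine formula; the fixes below are the 29/0000 and 29/0600 UTC entries from Table 1.

import math

EARTH_RADIUS_KM = 6371.0
KM_PER_NMI = 1.852

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Best-track fixes from Table 1 (west longitude entered as negative degrees).
fix_a = (40.5, -62.6)   # 29 / 0000 UTC
fix_b = (44.0, -60.0)   # 29 / 0600 UTC
hours = 6.0

distance_km = haversine_km(*fix_a, *fix_b)
speed_kt = distance_km / hours / KM_PER_NMI
print(f"forward speed ~ {speed_kt:.0f} kt")   # ~40 kt, consistent with the 35-40 kt motion cited above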
<urn:uuid:d64fb991-6106-4d55-b5c7-2cf381540ae9>
3.09375
2,083
Knowledge Article
Science & Tech.
70.098467
In programming, classification of a particular type of information. It is easy for humans to distinguish between different types of data. We can usually tell at a glance whether a number is a percentage, a time, or an amount of money. We do this through special symbols -- %, :, and $ -- that indicate the data's type. Similarly, a computer uses special internal codes to keep track of the different types of data it processes. Most programming languages require the programmer to declare the data type of every data object, and most database systems require the user to specify the type of each data field. The available data types vary from one programming language to another, and from one database application to another, but the following usually exist in one form or another:
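Typical examples include integers, floating-point numbers, booleans, characters or strings, and dates. A minimal sketch of how such declared types look in practice, written here with Python type annotations (the variable names and values are arbitrary illustrations, not part of the original entry):

from datetime import date

# A few of the data types that most languages and database systems provide.
count: int = 42                 # whole numbers
percentage: float = 12.5        # numbers with a fractional part
is_paid: bool = True            # true/false values
currency_symbol: str = "$"      # characters and text
due: date = date(2024, 1, 31)   # calendar dates

# Mixing incompatible types, e.g. count + currency_symbol, is rejected
# (at runtime in Python, at compile time in statically typed languages).
print(type(count).__name__, type(percentage).__name__, type(due).__name__)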
<urn:uuid:47437832-f7d4-4366-94ba-2caef5456fb8>
3.609375
152
Knowledge Article
Software Dev.
27.357179
xmlsh's syntax is derived from that of the unix shells (see Philosophy). If you are familiar with any of these shell languages (sh, bash, ksh, zsh) you should be right at home. An attempt was made to stay very close to the sh syntax where reasonable, but not all subtleties or features of the unix shells are implemented. In order to accommodate native XML types and pipelines, some deviations and extensions were necessary. Lastly, as an implementation issue, xmlsh is implemented in Java using the javacc compiler for parsing. This made support for some of the syntax and features of the C-based shells difficult or impossible. Future work may try to tighten up these issues.

xmlsh can run in 2 modes, interactive and batch. In interactive mode, a prompt ("$ ") is displayed; in batch mode there is no prompt. Otherwise they are identical. Running xmlsh with no arguments starts an interactive shell. Running with an argument runs in batch mode and invokes the given script. You can run an xmlsh script by passing it as the first argument, followed by any script arguments:

xmlsh myscript.xsh arg1 arg2

For details on xmlsh invocation and parameters see the xmlsh command.

The following environment is shared with external commands:
- Current directory
- Environment variables
- Standard ports (input/output/error)

The shell itself maintains additional environment which is passed to all subshells, but not to external (sub-process) commands:
- Namespaces, including the default namespace (see Namespaces)
- Declared functions (see SyntaxFunction)
- Imported modules and packages (see Modules)
- Shell variables (environment variables and internal shell variables) (see BuiltinVariables)
- Positional parameters ($1 ... $n)
- Shell options (-v, -x ...)

On startup, xmlsh reads the standard input (interactive mode) or the script file (batch mode), parses one command at a time and executes it. The following steps are performed:
- Parse statement. Statements are parsed using the Core Syntax.
- Expand variables. Variable expansion is performed. See Variables and CoreSyntax.
- Variable assignment. Prefix variable assignment is performed. See Variables and CoreSyntax.
- IO redirection. IO redirection (input, output, here documents) is performed. See CommandRedirect and CoreSyntax.
- Command execution. Commands are executed. See CommandExecution.
- Exceptions raised can be handled with a try/catch block.

After the command is executed, the process repeats.
<urn:uuid:693b762b-6579-4096-8873-426b157e93c6>
2.78125
532
Documentation
Software Dev.
41.737462
Simple observational proof of the greenhouse effect of carbon dioxide

Posted by Ari Jokimäki on April 19, 2010

Recently, I briefly showed a simple observational proof that the greenhouse effect exists, using a paper by Ellingson & Wiscombe (1996). Now I will present a similar paper that deepens the proof and shows more clearly that the different greenhouse gases really do act as greenhouse gases. I'll highlight the carbon dioxide related issues in their paper.

Walden et al. (1998) studied the downward longwave radiation spectrum in Antarctica. Their study covers only a single year, so it is not about how an increase in greenhouse gases affects the radiation over time. They measured the downward longwave radiation spectrum coming from the atmosphere to the surface during the year (usually every 12 hours) and then selected three measurements from clear-sky days for comparison with the results of a line-by-line radiative transfer model.

First they described why Antarctica is a good place for this kind of study:

Since the atmosphere is so cold and dry (<1 mm of precipitable water), the overlap of the emission spectrum of water vapor with that of other gases is greatly reduced. Therefore the spectral signatures of other important infrared emitters, namely, CO2, O3, CH4, and N2O, are quite distinct. In addition, the low atmospheric temperatures provide an extreme test case for testing models

Spectral overlapping is a consideration here because they are using a moderate resolution (about 1 cm-1) in their spectral analysis. They went on to describe their measurements, the equipment used, and its calibration. They also discussed the uncertainties in the measurements thoroughly. They then presented the measured spectra in a similar style to that shown in Ellingson & Wiscombe (1996). They proceeded to produce their model results. The models were controlled with actual measurements of atmospheric constituents (water vapour, carbon dioxide, etc.). The model is used here because it represents our theories, which are based on numerous experiments in laboratories and in the atmosphere. They then performed the comparison between the model results and the measurements.

Figure 1 shows their Figure 11, where the total spectral radiance from their model is compared to the measured spectral radiance. The upper panel of Figure 1 shows the spectral radiance and the lower panel shows the difference between the measured and modelled spectra. The overall match is excellent, and there's no way you could get this match by chance, so this already shows that the different greenhouse gases really are producing a greenhouse effect just as our theories predict.

Walden et al. didn't stop there. Next they showed in detail how the measured spectral bands of different greenhouse gases compare with the model results. The comparison for carbon dioxide is shown here in Figure 2 (which is the upper panel of their Figure 13). The match between the modelled and measured carbon dioxide spectral band is also excellent; even the minor details track each other well, except for a couple of places with slight differences. If there were no greenhouse effect from carbon dioxide, or if water vapour were masking its effect, this match would have to be accidental. I see no chance of that, so this seems to be a simple observational proof that carbon dioxide produces a greenhouse effect just as our theories predict.

Walden, V. P., S. G. Warren, and F. J.
Murcray (1998), Measurements of the downward longwave radiation spectrum over the Antarctic Plateau and comparisons with a line-by-line radiative transfer model for clear skies, J. Geophys. Res., 103(D4), 3825–3846, doi:10.1029/97JD02433. [abstract]
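For a feel for the quantity plotted in those figures, here is a minimal sketch of blackbody spectral radiance as a function of wavenumber and temperature (the Planck function). The 250 K temperature and the 667 cm-1 wavenumber near the centre of the CO2 band are illustrative choices, not values taken from Walden et al.

import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavenumber_cm: float, temperature_k: float) -> float:
    """Blackbody spectral radiance B(nu, T) in mW / (m^2 sr cm^-1).

    wavenumber_cm is in cm^-1; it is converted to m^-1 internally for SI units.
    """
    nu = wavenumber_cm * 100.0                                          # cm^-1 -> m^-1
    b_si = 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (KB * temperature_k))
    return b_si * 100.0 * 1000.0                                        # per m^-1 -> per cm^-1, W -> mW

# Illustrative values: a 250 K atmosphere near the 15-micron CO2 band.
print(f"{planck_radiance(667.0, 250.0):.1f} mW/(m^2 sr cm^-1)")         # roughly 80 for these values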
<urn:uuid:7ca379c3-faf0-4aab-83e1-0999a130f017>
2.78125
746
Personal Blog
Science & Tech.
45.861755
The Active Galactic Nucleus (AGN) of the Seyfert galaxy M77 (NGC 1068), about 60 million light years from Earth, in X-ray light, as photographed by the Chandra X-ray Observatory.

A composite Chandra X-ray (blue/green) and Hubble optical (red) image of M77 (NGC 1068) shows hot gas blowing away from a central supermassive object at speeds averaging about 1 million miles per hour. The elongated shape of the gas cloud is thought to be due to the funneling effect of a torus, or doughnut-shaped cloud, of cool gas and dust that surrounds the central object, which many astronomers think is a black hole. The X-rays are scattered and reflected X-rays that are probably coming from a hidden disk of hot gas formed as matter swirls very near the black hole. Regions of intense star formation in the inner spiral arms of the galaxy are highlighted by the optical emission. This image extends over a field 36 arcsec on a side.

This three-color high-energy X-ray image (red = 1.3-3 keV, green = 3-6 keV, blue = 6-8 keV) of NGC 1068 shows gas rushing away from the nucleus. The brightest point-like source may be the inner wall of the torus that is reflecting X-rays from the hidden nucleus. Scale: Image is 30 arcsec per side.

This three-color low-energy X-ray image of M77 (NGC 1068) (red = 0.4-0.6 keV, green = 0.6-0.8 keV, blue = 0.8-1.3 keV) shows gas rushing away from the nucleus (bright white spot). The range of colors from blue to red corresponds to high through low ionization of the atoms in the wind. Scale: Image is 30 arcsec per side.

This optical image of the active galaxy NGC 1068, taken by Hubble's WFPC2, gives a detailed view of the spiral arms in the inner parts of the galaxy. Scale: Image is 30 arcsec per side.

Credit: X-ray: NASA/CXC/MIT/P. Ogle et al.; Optical: NASA/STScI/A. Capetti et al.

Last Modification: July 12, 2003
<urn:uuid:ca6a66be-8a64-433f-8fae-bbae7e01bdef>
3.59375
499
Knowledge Article
Science & Tech.
79.921157
Mission Type: Flyby
Launch Vehicle: Titan IIIE-Centaur (TC-7 / Titan no. 23E-7 / Centaur D-1T)
Launch Site: Cape Canaveral, USA, Launch Complex 41
NASA Center: Jet Propulsion Laboratory
Spacecraft Mass: 2,080 kg (822 kg mission module)
Spacecraft Instruments: 1) imaging system; 2) ultraviolet spectrometer; 3) infrared spectrometer; 4) planetary radio astronomy experiment; 5) photopolarimeter; 6) magnetometers; 7) plasma particles experiment; 8) low-energy charged-particles experiment; 9) plasma waves experiment; and 10) cosmic-ray telescope
Spacecraft Dimensions: Decahedral bus, 47 cm in height and 1.78 m across from flat to flat
Spacecraft Power: 3 plutonium oxide radioisotope thermoelectric generators (RTGs)
Maximum Power: 470 W of 30-volt DC power at launch, dropping to about 287 W at the beginning of 2008, and continuing to decrease
Antenna Diameter: 3.66 m
X-Band Data Rate: 115.2 kbits/sec at Jupiter, less at more distant locations (first spacecraft to use X-band as the primary telemetry link frequency)
Total Cost: Through the end of the Neptune phase of the Voyager project, a total of $875 million had been expended for the construction, launch, and operations of both Voyager spacecraft. An additional $30 million was allocated for the first two years of VIM.

Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
Solar System Log by Andrew Wilson, published 1987 by Jane's Publishing Co. Ltd.
Voyager Project Homepage, http://voyager.jpl.nasa.gov

An alignment of the outer planets that occurs only once in 176 years prompted NASA to plan a grand tour of the outer planets, consisting of dual launches to Jupiter, Saturn, and Pluto in 1976-77 and dual launches to Jupiter, Uranus, and Neptune in 1979. The original scheme was canceled for budgetary reasons, but was replaced by Voyager 1 and 2, which accomplished similar goals at significantly lower cost.

The two Voyager spacecraft were designed to explore Jupiter and Saturn in greater detail than the two Pioneers (Pioneers 10 and 11) that preceded them had been able to do. Each Voyager was equipped with slow-scan color TV to take live television images from the planets, and each carried an extensive suite of instruments to record magnetic, atmospheric, lunar, and other data about the planets. The original design of the spacecraft was based on that of the older Mariners. Power was provided by three plutonium oxide radioisotope thermoelectric generators (RTGs) mounted at the end of a boom.

Although launched about two weeks before Voyager 1, Voyager 2 exited the asteroid belt after its twin and followed it to Jupiter and Saturn. The primary radio receiver failed on 5 April 1978, placing the mission's fate on the backup unit, which has been used ever since. A fault in this backup receiver severely limits its bandwidth, but the mission has been a major success despite this obstacle. All of the experiments on Voyager 2 have produced useful data.

Voyager 2 began transmitting images of Jupiter on 24 April 1979 for time-lapse movies of atmospheric circulation. They showed that the planet's appearance had changed in the four months since Voyager 1's visit. The Great Red Spot had become more uniform, for example.
The spacecraft relayed spectacular photos of the entire Jovian system, including its moons Amalthea, Io, Callisto, Ganymede, and Europa, all of which had also been imaged by Voyager 1, making comparisons possible. Voyager 2's closest encounter with Jupiter was at 22:29 UT on 9 July 1979 at a range of 645,000 km. Voyager 1's discovery of active volcanoes on Io prompted a 10-hour volcano watch for Voyager 2. Though the second spacecraft approached no closer than a million kilometers to Io, it was clear that the moon's surface had changed and that six of the volcanic plumes observed earlier were still active. Voyager 2 imaged Europa at a distance of 206,000 km, resolving the streaks seen by Voyager 1 into a collection of cracks in a thick covering of ice. No variety in elevation was observed, prompting one scientist to say that Europa was "as smooth as a billiard ball." An image of Callisto, studied in detail months later, revealed a 14th satellite, now called Adrastea. It is only 30 to 40 km in diameter and orbits close to Jupiter's rings. As Voyager 2 left Jupiter, it took an image that revealed a faint third component to the planet's rings. It is thought that the moons Amalthea and Thebe may contribute some of the material that constitutes the ring.

Following a midcourse correction two hours after its closest approach to Jupiter, Voyager 2 sped to Saturn. Its encounter with the sixth planet began on 22 August 1981, two years after leaving the Jovian system, with imaging of the moon Iapetus. Once again, Voyager 2 repeated the photographic mission of its predecessor, although it flew 23,000 km closer to Saturn. The closest encounter was at 01:21 UT on 26 August 1981 at a range of 101,000 km. The spacecraft provided more detailed images of the ring spokes and kinks, as well as the F-ring and its shepherding moons. Voyager 2's data suggested that Saturn's A-ring was perhaps only 300 m thick. It also photographed the moons Hyperion, Enceladus, Tethys, and Phoebe. Using the spacecraft's photopolarimeter (the instrument that had failed on Voyager 1), scientists observed a star called Delta Scorpii through Saturn's rings and measured the flickering level of light over the course of 2 hours, 20 minutes. This provided 100-m resolution, which was 10 times better than was possible with the cameras, and many more ringlets were discovered.

After Voyager 2 fulfilled its primary mission goals with its flybys of Jupiter and Saturn, mission planners set the spacecraft on a 4.5-year journey to Uranus, during which it covered 33 AU (about 5 billion km). The geometry of the Uranus encounter was designed to enable the spacecraft to use a gravity assist to help it reach Neptune. Voyager 2 had only 5.5 hours of close study during its flyby, the first (and so far, only) human-made spacecraft to visit the planet Uranus. Long-range observations of Uranus began on 4 November 1985. At that distance, the spacecraft's radio signals took approximately 2.5 hours to reach Earth. Light levels were about 400 times lower than at Earth. The closest approach took place at 17:59 UT on 24 January 1986 at a range of 71,000 km. The spacecraft discovered 10 new moons, two new rings, and a magnetic field (stronger than that of Saturn) tilted at 55 degrees off-axis and off-center, with a magnetic tail twisted into a helix that stretches 10 million km in the direction opposite that of the sun. Uranus itself displayed little detail, but evidence was found of a boiling ocean of water some 800 km below the top cloud surface.
The atmosphere was found to be 85 percent hydrogen and 15 percent helium (26 percent helium by mass). Strangely, the average temperature of 60 K (-351.4 degrees Fahrenheit, -213 degrees Celsius) was found to be the same at the sun-facing south pole and at the equator. Wind speeds were as high as 724 km per hour. Voyager 2 returned spectacular photos of Miranda, Oberon, Ariel, Umbriel, and Titania, the five larger moons of Uranus. In a departure from Greek mythology, four of Uranus' moons are named for Shakespearean characters and one, Umbriel, is named for a sprite in a poem by Alexander Pope. Miranda may be the strangest of these worlds. It is believed to have fragmented at least a dozen times and reassembled in its current confused state.

Following the Uranus encounter, the spacecraft performed a single midcourse correction on 14 February 1986 to set it on a precise course to Neptune. Voyager 2's encounter with Neptune capped a 7-billion-km journey when on 25 August 1989, at 03:56 UT, it flew about 4,950 km over the cloud tops of the giant planet, closer than its flybys of the three previous planets. As with Uranus, it was the first (and so far, only) human-made object to fly by the planet. Its 10 instruments were still in working order at the time. During the encounter, the spacecraft discovered five new moons and four new rings. The planet itself was found to be more active than previously believed, with winds of 1100 km per hour. Hydrogen was found to be the most common atmospheric element, although the abundant methane gives the planet its blue appearance. Voyager data on Triton, Neptune's largest moon, revealed the coldest known planetary body in the solar system and a nitrogen ice volcano on its surface. The spacecraft's flyby of Neptune set it on a course below the ecliptic plane that will ultimately take it out of the solar system.

After Neptune, NASA formally renamed the entire project (including both Voyager spacecraft) the Voyager Interstellar Mission (VIM). Approximately 56 million km past the Neptune encounter, Voyager 2's instruments were put into low-power mode to conserve energy. In November 1998, twenty-one years after launch, nonessential instruments were permanently turned off. Six instruments are still operating. Data from at least some of the instruments should be received until at least 2025. Sometime after that date, power levels onboard the spacecraft will be too low to operate even one of its instruments. As of March 2010, Voyager 2 was about 92 AU (13.7 billion km) from the sun, increasing its distance at a speed of about 3.3 AU (about 494 million km) per year.
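As a quick unit check on that closing figure (a sketch using the standard definitions of the astronomical unit and the Julian year, not values from the mission documents), 3.3 AU per year works out to roughly 15-16 km/s:

AU_KM = 1.496e8          # kilometres per astronomical unit
YEAR_S = 365.25 * 86400  # seconds per Julian year

au_per_year = 3.3
km_per_year = au_per_year * AU_KM
km_per_s = km_per_year / YEAR_S

print(f"{km_per_year:.3e} km/year  ~ {km_per_s:.1f} km/s")
# ~4.94e8 km/year (the "494 million km" quoted above) and ~15.6 km/s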
<urn:uuid:8dc77e83-9fed-44f4-9ea1-785ba6eb8250>
2.8125
2,129
Knowledge Article
Science & Tech.
54.372877
This page describes how to specify the direction of a vector. It contains a text description and an animation of an arrow turning counterclockwise that displays the degree that it is at. There are links at the bottom of the page for similar animations. This tutorial is part of The Physics Classroom, available at http://www.physicsclassroom.com/mmedia/vectors/vd.cfm. This web site also includes interactive tools to help students with concepts and problem solving, worksheets for student assignments, and recommendations for simple introductory laboratories.
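One common convention for vector direction, and the one the rotating-arrow animation suggests, is to report the counterclockwise angle of rotation from due east (the +x axis), expressed between 0 and 360 degrees. A small sketch of that convention (Python; not part of the tutorial itself):

import math

def direction_deg(x, y):
    # Counterclockwise angle from due east (+x axis), mapped into 0-360 degrees.
    return math.degrees(math.atan2(y, x)) % 360

print(direction_deg(1, 0))    #   0.0 degrees (east)
print(direction_deg(0, 1))    #  90.0 degrees (north)
print(direction_deg(-1, -1))  # 225.0 degrees (southwest)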
<urn:uuid:6a11f536-346c-4d0c-958b-70bb1362951b>
3.46875
192
Tutorial
Science & Tech.
45.497727
Category: Sponges. Description: 8" tall (20 cm) and 1" (2.5 cm) wide. The sponge produces brilliantly colored, tree-like, intertwined branches in red and orange. The surface is covered with tiny, scattered pores. Habitat: Ocean or bay shallows, tidepools. Range: Eastern Canada, Florida, New England, Mid-Atlantic, California, Texas, Northwest. Discussion: Branches of the Red Beard Sponge provide important habitats for crustaceans and juvenile fish species. It reproduces both asexually and sexually. Broken branches can regenerate into new sponges. When cells from the Red Beard Sponge are separated, they have the ability to reorganize themselves.
<urn:uuid:5b37c27a-2d82-4de1-8942-df11c18b4567>
3.09375
147
Knowledge Article
Science & Tech.
33.730857
- Formation of H2 and CH4 by weathering of olivine at temperatures between 30 and 70°CAnna Neubeck Department of Geological Sciences, Stockholm University, Sweden Geochem Trans 12:6. 2011..This may expand the range of environments plausible for abiotic CH4 formation both on Earth and on other terrestrial bodies... - Methane emissions from Pantanal, South America, during the low water season: toward more comprehensive samplingDavid Bastviken Department of Thematic Studies Water and Environmental Studies, Linkoping University, Linkoping, Sweden Environ Sci Technol 44:5450-5. 2010..Future measurements with static floating chambers should be based on many individual chambers distributed in the various subenvironments of a lake that may differ in emissions in order to account for the within lake variability... - Freshwater methane emissions offset the continental carbon sinkDavid Bastviken Department of Thematic Studies Water and Environmental Studies, Linkoping University, SE 58183 Linkoping, Sweden Science 331:50. 2011..Thus, the continental GHG sink may be considerably overestimated, and freshwaters need to be recognized as important in the global carbon cycle... - Measurement of methane oxidation in lakes: a comparison of methodsDavid Bastviken Department of Water and Environmental Studies, Linkoping University, Sweden Environ Sci Technol 36:3354-61. 2002..We conclude that methods using the stable isotope or mass balance modeling approach represent promising alternatives, particularly for studies focusing on ecosystem-scale carbon metabolism... - Organic matter chlorination rates in different boreal soils: the role of soil organic matter contentMalin Gustavsson Department of Thematic Studies, Water and Environmental Studies, Linkoping University, 58183 Linkoping, Sweden Environ Sci Technol 46:1504-10. 2012....
<urn:uuid:42de6163-2efa-4490-ba7a-c4d0a2a4ed7f>
2.796875
378
Content Listing
Science & Tech.
25.039844
Friction conditions on contact point of disc

I have some doubts regarding the friction force in a certain situation. Imagine a disc resting on a fixed flat surface. The disc has two motions, rotational and translational, but these are independent of each other. I mean that the translational motion does not come from the rotation of the disc; imagine an external force that moves the disc. Now, with this in mind, let's say that the disc is moving from left to right while always keeping contact with the surface. At the same time the disc is rotating clockwise with some angular velocity. Because of the two different movements there is slip at the contact point. Now to the question itself: what kind of friction do I have at the contact point? Do I have rolling or sliding friction there? And which direction does the friction force point, given that the disc is moving to the right but at the contact point the direction of rotation is to the left?
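One way to settle both questions is to compute the velocity of the disc's contact point relative to the ground: if it is nonzero there is slip, the friction is sliding (kinetic) friction, and it acts on the disc opposite to that slip velocity. A minimal sketch of the bookkeeping (Python; the numbers are made-up illustrations, not from the thread):

R = 0.1        # disc radius (m), assumed
v_cm = 2.0     # translational velocity of the centre, +x is "to the right" (m/s)
omega = -30.0  # angular velocity (rad/s); negative = clockwise in this convention
mu_k = 0.3     # assumed coefficient of kinetic friction
m, g = 1.0, 9.81

# Velocity of the material point touching the ground: v_contact = v_cm + omega x r,
# which for this geometry reduces to a single x-component.
v_contact = v_cm + omega * R

if abs(v_contact) > 1e-9:
    # Slip: sliding (kinetic) friction, directed opposite to the slip velocity.
    friction = -mu_k * m * g * (1 if v_contact > 0 else -1)
    print("sliding friction on the disc: %+.2f N along x" % friction)
else:
    # No slip: the static-friction (rolling) regime applies instead.
    print("no slip at the contact point: static friction applies")

With these numbers the spin dominates, the contact point moves to the left relative to the ground, and the kinetic friction on the disc therefore points to the right, even though the disc as a whole translates to the right; if the translation dominated instead (v_cm larger than omega*R in magnitude), the friction would point to the left.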
<urn:uuid:71afd96d-d742-443c-bdac-974756c057f4>
2.75
316
Comment Section
Science & Tech.
52.911731
A regular expression (regexp) is a text string that describes some set of strings. Functions that handle regular expressions, based on GNU regexp-0.12, have been implemented (for more details, see the GNU documentation about regexp rules). The functions available from the Search menu provide forward and backward search as well as replace. Each of them prompts a dialog box to get the target regexp. Regular expressions are composed of characters and operators that match one or more characters. Here is a summary of common operators:
|  matches one of a choice of regular expressions.
[...]  matches one item of a list.
[^...]  matches a single character not represented by one of the list items.
(...)  treats any number of other operators (i.e. subexpressions) as a unit.
\digit  matches a specified preceding group.
^  matches the beginning of a line.
$  matches the end of a line.
Smac provides the following functions: returns the position of the next regular expression regexp, or -1 if regexp has not been found, or -2 if regexp is not valid. returns the position of the previous regular expression regexp, or -1 if regexp has not been found, or -2 if regexp is not valid. returns the beginning position of the substring n of the regexp found by the previous search call to a regexp. returns the end position of the substring n of the regexp found by the previous search call to a regexp. replaces the regular expression regexp with the string newstring. If the argument regexp is omitted, the previous search call to a regexp is used. It returns 1 on success, else 0.
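These operators behave the same way in most regexp engines, so they are easy to experiment with outside Smac; a quick illustration using Python's re module (purely for demonstration, not Smac syntax):

import re

print(re.search(r"cat|dog", "hotdog").group())              # 'dog'      | : alternation
print(re.search(r"[aeiou]+", "rhythm and blues").group())   # 'a'        [...] : list
print(re.search(r"[^0-9]+", "42 apples").group())           # ' apples'  [^...] : negated list
print(re.search(r"(ab)+", "ababab!").group())               # 'ababab'   (...) : grouping
print(re.search(r"(\w+) \1", "bye bye now").group())        # 'bye bye'  \1 : back-reference
print(re.findall(r"^\w+", "one\ntwo", re.M))                # ['one', 'two']  ^ : line start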
<urn:uuid:3fca356f-7757-4c16-b824-ba85de9811d8>
2.953125
370
Documentation
Software Dev.
62.315392
Fascinating creatures indeed bentley! Cuttlefish belong to the class Cephalopoda, which also includes squid, octopuses, and nautiluses. Although they are called cuttlefish, they are mollusks and not fish! Recent studies indicate that cuttlefish are among the most intelligent invertebrates. Internationally, 120 species of cuttlefish are recognized. I am not sure how many species are in South African waters though! Cuttlefish have eight arms and two tentacles furnished with suckers, with which they secure their prey. They have a life expectancy of approximately 2 years and feed on small mollusks, crabs, shrimp, fish, octopuses, worms, and other cuttlefish. They are preyed upon by dolphins, sharks, fish, seals and other cuttlefish. Their cuttle-bones are porous and are used for buoyancy by changing the gas-to-liquid ratio in the chambered cuttle-bone. Cuttlefish eyes are among the most developed in the animal kingdom. The blood of a cuttlefish is an unusual shade of green-blue. The reason for this is the fact that they use the copper-containing protein hemocyanin to carry oxygen instead of the red iron-containing protein hemoglobin that is found in mammals. They have 3 separate hearts to pump their blood. Two hearts pump blood to the pair of gills and the third pumps blood to the rest of the body.
Photo of two cuttlefish.
Herewith some interesting facts about their ability to change colours: Cuttlefish are sometimes referred to as the chameleons of the sea because of their remarkable ability to rapidly alter their skin color at will. Their skin flashes a fast-changing pattern as communication to other cuttlefish and to camouflage them from predators. This color-changing function is produced by groups of red, yellow, brown, and black pigmented chromatophores above a layer of reflective iridophores and leucophores, with up to 200 of these specialized pigment cells per square millimeter. The pigmented chromatophores have a sac of pigment and a large membrane that is folded when retracted. There are 6-20 small muscle cells on the sides which can contract to squash the elastic sac into a disc against the skin. Yellow chromatophores (xanthophores) are closest to the surface of the skin, red and orange are below (erythrophores), and brown or black are just above the iridophore layer (melanophores). The iridophores reflect blue and green light. Iridophores are plates of chitin or protein, which can reflect the environment around a cuttlefish. They are responsible for the metallic blues, greens, golds, and silvers often seen on cuttlefish. All of these cells can be used in combinations. For example, orange is produced by red and yellow chromatophores, while purple can be created by a red chromatophore and an iridophore. The cuttlefish can also use an iridophore and a yellow chromatophore to produce a brighter green. As well as being able to influence the color of the light that reflects off their skin, cuttlefish can also affect the light's polarization, which can be used to signal to other marine animals, many of which can also sense polarization.
<urn:uuid:b54600a1-7fc1-4680-a155-982badee16b7>
3.421875
703
Comment Section
Science & Tech.
39.829133
This is another image I found on Google+. All lines are absolutely straight, parallel and perpendicular, but why does it appear to have a curvature? Related: How does this illusion work? Like these questions :) Many of these illusions come from Prof. Akiyoshi Kitaoka, a Japanese psychologist and expert in Gestalt psychology. On his website you'll find some more fascinating illusions and questions to ask here ;) The illusion above is known as the Cafe Wall illusion, and the newest model to explain such illusions is the contrast-polarity model. Short explanation from his webpage: The paper explained it better to me: This explains why you perceive a tilt. If you position the smaller squares at distinct edges of the big squares, you can achieve 2- and 3-dimensional illusions. Here you see an increase of the tilt due to more small squares: Here you can see that the positioning of the smaller squares is critical to achieving the 3D effect of the original bulge illusion in your question: Notice that Gestalt psychology is a non-reductionistic approach and investigates mainly the phenomenology and underlying Gestalt laws of visual perception. How these Gestalt laws developed on a deeper level is a question of neurobiological evolution, similar to asking why some species of apes have color vision and some do not. The ellipses in the explanatory picture above show that our cognitive visual machinery somehow tries to group divided objects (a square and a line of the same contrast/brightness) into one line, and so we see a tilt. I'm guessing here, but this is probably due to a cognitive brain algorithm that stores the things and objects we see and perceive mainly by contour and shape, rather than pixel by pixel like a computer or digital camera does, which of course perceives no tilt or 3D illusion in any of those trick images :) Read the papers for more explanations and examples; they are not behind a paywall.
<urn:uuid:41b7145c-e537-491e-9369-ebe5ab5d7f66>
2.71875
399
Q&A Forum
Science & Tech.
25.346958
Scientific name: Coenonympha tullia
Rests with wings closed. Some have a row of 'eyespots' on the underwings, like the Ringlet, but some don't. The Large Heath is restricted to wet boggy habitats in northern Britain, Ireland, and a few isolated sites in Wales and central England. The adults always sit with their wings closed and can fly even in quite dull weather provided the air temperature is higher than 14°C. The size of the underwing spots varies across its range; a heavily spotted form (davus) is found in lowland England, a virtually spotless race (scotica) in northern Scotland, and a range of intermediate races elsewhere (referred to as polydama). The butterfly has declined seriously in England and Wales, but is still widespread in parts of Ireland and Scotland.
Size and Family
- Family – Browns
- Small/Medium Sized
- Wing Span Range (male to female) - 41mm
- Listed as a Section 41 species of principal importance under the NERC Act in England
- Listed as a Section 42 species of principal importance under the NERC Act in Wales
- Classified as a Northern Ireland Priority Species by the NIEA
- UK BAP status: Priority Species
- Butterfly Conservation priority: High
- European Status: Vulnerable
- Protected in Great Britain for sale only
The main foodplant is Hare's-tail Cottongrass (Eriophorum vaginatum), but larvae have been found occasionally on Common Cottongrass (E. angustifolium) and Jointed Rush (Juncus articulatus). Early literature references to White Beak-sedge (Rhynchospora alba) are probably erroneous.
- Countries – England, Scotland and Wales
- Northern Britain and throughout Ireland
- Distribution Trend Since 1970's = -43%
The butterflies breed in open, wet areas where the foodplant grows; this includes habitats such as lowland raised bogs, upland blanket bogs and damp acidic moorland. Sites are usually below 500m (600m in the far north) and have a base of Sphagnum moss interspersed with the foodplant and abundant Cross-leaved Heath (the main adult nectar source). In Ireland, the butterfly can be found where manual peat extraction has lowered the surface of the bog, creating damp areas with local concentrations of foodplant.
<urn:uuid:3a335f27-c035-4215-b4c2-b0179298929c>
3.5
520
Knowledge Article
Science & Tech.
28.962464
We said on the Getting Started Page that HTML is nothing more than a box of highlighters that we use to carefully describe our text. This is mostly the entire story. Normally our content is just text we want to define in some way. But what if our content is not just text? What if, let's say, we have a bunch of images that we want to include on the page? We certainly can't type in 4-thousand pixels on the keyboard to make up a 200x200-pixel image…

Motivation and Syntax

When the content we want is not text, then we have to have some other way of including that content on the page. The most common example is an image. The problem, however, is that HTML tags are like highlighters — they have an opening tag and a closing tag. Between the opening and closing tags fits the data that is "highlighted" by the tag. If we were to have an <image> tag in HTML (we don't have that tag—one close to it though), what would go "inside" of it? What might you replace the stuff with inside of it?

It simply doesn't make sense for an <image> tag to exist like all the other HTML tags, because the other HTML tags define something else while the image is, itself, something that can be defined. The image tag and all such manner of tags are called "element" tags because, just like the name implies, the tags are themselves elements all their own. For all intents and purposes you can treat element tags just like text. If your content is like the words in a textbook and regular HTML is like a pack of highlighters, then these special element tags are indeed like the text and not like the highlighters at all.

The XML standard says that every tag must be closed. But we have this new breed of tags that really don't make sense to be closed. What we have is a compromise between the two extremes. We have a self-closing tag. The tag is just like the tags we learned about on the General Syntax Page with two exceptions.
- There is no closing tag
- There is a / before the > to indicate that the tag is self-closing.
So a self-closing tag looks like <tagname />. (There is commonly a space before the /, but again spacing after the name of the tag is arbitrary.) You might imagine that there could be a tag that produces the copyright symbol (©). There isn't (we'll get to special characters later). But if there were, you might imagine there being an element tag called copyrightsymbol that we could use right in line with our text to produce a ©.

Images

Images on Web sites take the form of image files stored on a server. Much like line breaks, images are element tags that are treated like text. The difference is that the image element tag is replaced by the actual image file. We mentioned the (non-existent) <image> tag earlier in our discussion on the necessity of the element-style tags. The real tag to include an image on the page is <img>. This tag makes little sense if used without its src attribute, which gives the location of the image file.

Let's say we have the image image1.jpg uploaded to the same folder as our HTML file. To include the image on the page, all we have to insert is:
<p><img src="./image1.jpg" /></p>
Which would be rendered (displayed) as the image itself appearing on the page. And, again, images are like text — they go right in with your content:
<p>This is image1: <img src="./image1.jpg" />. Cool, right?</p>
This would be rendered like:
This is image1: [the image appears here]. Cool, right?
(More information on how to reference your images using different paths depending on where your images are stored can be found on the Internet File Management Page of the Web Publishing at the UW online curriculum.)
If your images contribute to the content of your site, then you should provide an alt attribute for your images. The alt attribute is a text version of your image. Usually it is just a concise sentence describing the image. The alt attribute will be used if your image is unavailable for any reason (e.g. if you delete the image file, if your viewers can't see images, etc.). If we had a picture, called spot.jpg, of a dog jumping into a lake, we might use the following HTML to place it on the page:
<p>A picture I took: <img src="./spot.jpg" alt="Spot jumping into a lake." /></p>
If your image is purely a visual element (e.g. an icon next to a download link or an image used in your page's layout), then you don't need to provide an alt attribute. If your web design work is sponsored by the University, be sure to check out the UW's page on Web Site Accessibility by clicking here.

The spacing rules of HTML say that when we break the line in the source code (e.g. using the enter key on the keyboard), we don't also break the line on the rendered (displayed) version of the page. This is why the following two blocks of code:
<p>This is text. This is more text</p>
<p>This is text.
This is more text</p>
…are considered equivalent. They will both be displayed by the web browser in exactly the same way:
This is text. This is more text
To force a line break on the rendered page, we use the <br /> element tag. The following block of code:
<p> In what particular thought to work I know not; <br /> But in the gross and scope of my opinion, <br /> This bodes some strange eruption to our state.<br /> </p>
…is rendered like:
In what particular thought to work I know not;
But in the gross and scope of my opinion,
This bodes some strange eruption to our state.

Above we imagined that there was an HTML element tag called copyrightsymbol that would be used to produce a Copyright symbol (©). If there were such a tag, we might have the following HTML:
<p>This page is Copyright (<copyrightsymbol />) 1989 By George Orwell</p>
There turns out to be so many such symbols that the creators of HTML decided to create a whole group of "special symbols" (or "special characters"). These characters are used in the place of any character you cannot type using a standard US-English QWERTY keyboard. They are also used in the place of some "reserved" characters (like the less-than and greater-than signs, < and >). There are many such characters. They all start with an ampersand (&) and end with a semicolon (;). The web browser sees these and replaces them with the special character. Some common ones are &lt; for <, &gt; for >, &amp; for &, and &copy; for ©. You can find a complete listing of all such special characters by doing a search in your favorite search engine for HTML Special Characters.
<urn:uuid:802e6015-705c-4518-969a-a08bf3e5ad88>
3.234375
1,534
Documentation
Software Dev.
64.109588
I was given this code in a thread that I created (I cannot remember who by) and I would like somebody (preferably the same person who gave it to me) to explain what each line does ('cause I've got no idea). And also, when I compile it (in MSVC++ 6.0), it does nothing but sit there!! It is meant to display all of the lines of a text file. I really wanted it to store each line in a variable if it wasn't empty, but anyway..... ...here is the code:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    // (missing declarations and the file-reading loop filled in so the snippet compiles)
    ifstream file("myfile.txt");          // or whatever
    vector<string> Lines;                 // holds every non-empty line
    string CurrentLine;
    while (getline(file, CurrentLine))    // read the file one line at a time
        if (!CurrentLine.empty())         // store it if not empty
            Lines.push_back(CurrentLine);
    // show the stored lines
    for (size_t i = 0; i < Lines.size(); i++)
        cout << Lines[i] << endl;
    return 0;
}
<urn:uuid:364ad431-a0c8-44e7-8c15-f614a15b6f5a>
2.875
180
Personal Blog
Software Dev.
78.901365
Cyanobacterial Emergence at 2.8 Gya and Greenhouse Feedbacks D. Schwartzman, K. Caldeira & A. Pavlov Approximately 2.8 billion years ago, cyanobacteria and a methane-influenced greenhouse emerged nearly simultaneously. Here we hypothesize that the evolution of cyanobacteria could have caused a methane greenhouse. Apparent cyanobacterial emergence at about 2.8 Gya coincides with the negative excursion in the organic carbon isotope record, which is the first strong evidence for the presence of atmospheric methane. The existence of weathering feedbacks in the carbonate-silicate cycle suggests that atmospheric and oceanic CO2 concentrations would have been high prior to the presence of a methane greenhouse (and thus the ocean would have had high bicarbonate concentrations). With the onset of a methane greenhouse, carbon dioxide concentrations would decrease. Bicarbonate has been proposed as the preferred reductant that preceded water for oxygenic photosynthesis in a bacterial photosynthetic precursor to cyanobacteria; with the drop of carbon dioxide level, Archean cyanobacteria emerged using water as a reductant instead of bicarbonate (Dismukes et al., 2001). Our thermodynamic calculations, with regard to this scenario, give at least a tenfold drop in aqueous CO2 levels with the onset of a methane-dominated greenhouse, assuming surface temperatures of about 60°C and a drop in the level of atmospheric carbon dioxide from about 1 to 0.1 bars. The buildup of atmospheric methane could have been triggered by the boost in oceanic organic productivity that arose from the emergence of pre-cyanobacterial oxygenic phototrophy at about 2.8–3.0 Gya; high temperatures may have precluded an earlier emergence. A greenhouse transition timescale on the order of 50–100 million years is consistent with results from modeling the carbonate-silicate cycle. This is an alternative hypothesis to proposals of a tectonic driver for this apparent greenhouse transition.
<urn:uuid:f4947171-69c0-4989-a627-b8ca8e544ab3>
2.71875
417
Academic Writing
Science & Tech.
22.795393
Note: Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. Note: I/O operations may fail even when access() indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model. Although Windows supports chmod(), you can only set the file's read-only flag with it (via the S_IREAD constants or a corresponding integer value). All other bits are ignored. |path, uid, gid)| |path, uid, gid)| '..'even if they are present in the directory. Availability: Macintosh, Unix, Windows. Changed in version 2.3: On Windows NT/2k/XP and Unix, if path is a Unicode object, the result will be a list of Unicode objects. 0666(octal). The current umask value is first masked out from the mode. Availability: Macintosh, Unix. FIFOs are pipes that can be accessed like regular files. FIFOs exist until they are deleted (for example with os.unlink()). Generally, FIFOs are used as rendezvous between ``client'' and ``server'' type processes: the server opens the FIFO for reading, and the client opens it for writing. Note that mkfifo() doesn't open the FIFO -- it just creates the rendezvous point. |filename[, mode=0600, device])| 0777(octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Availability: Macintosh, Unix, Windows. 0777(octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Note: makedirs() will become confused if the path elements to create include os.pardir. New in version 1.5.2. Changed in version 2.3: This function now handles UNC paths correctly. pathconf_namesdictionary. For configuration variables not included in that mapping, passing an integer for name is also accepted. Availability: Macintosh, Unix. If name is a string and is not known, ValueError is raised. If a specific value for name is not supported by the host system, even if it is included in OSError is raised with errno.EINVAL for the os.path.join(os.path.dirname(path), result). Availability: Macintosh, Unix. >>> import os >>> statinfo = os.stat('somefile.txt') >>> statinfo (33188, 422511L, 769L, 1, 1032, 100, 926L, 1105022698,1105022732, 1105022732) >>> statinfo.st_size 926L >>> Changed in version 2.3: If stat_float_times returns true, the time values are floats, measuring seconds. Fractions of a second may be reported if the system supports that. On Mac OS, the times are always floats. See stat_float_times for further discussion. On some Unix systems (such as Linux), the following attributes may also be available: st_blocks (number of blocks allocated for file), st_blksize (filesystem blocksize), st_rdev (type of device if an inode device). st_flags (user defined flags for file). On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them): st_gen (file generation number), st_birthtime (time of file creation). On Mac OS systems, the following attributes may also be available: st_rsize, st_creator, st_type. On RISCOS systems, the following attributes are also available: st_ftype (file type), st_attrs (attributes), st_obtype (object type). 
For backward compatibility, the return value of stat() is also accessible as a tuple of at least 10 integers giving the most important (and portable) members of the stat structure, in the order st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime. More items may be added at the end by some implementations. The standard module stat defines functions and constants that are useful for extracting information from a stat structure. (On Windows, some items are filled with dummy values.) Note: The exact meaning and resolution of the st_atime, st_mtime, and st_ctime members depends on the operating system and the file system. For example, on Windows systems using the FAT or FAT32 file systems, st_mtime has 2-second resolution, and st_atime has only 1-day resolution. See your operating system documentation for details. Availability: Macintosh, Unix, Windows. Changed in version 2.2: Added access to values as attributes of the returned object. Changed in version 2.5: Added st_gen, st_birthtime. True, future calls to stat() return floats, if it is False, future calls return ints. If newvalue is omitted, return the current setting. For compatibility with older Python versions, accessing stat_result as a tuple always returns integers. Changed in version 2.5: Python now returns float values by default. Applications which do not work correctly with floating point time stamps can use this function to restore the old behaviour. The resolution of the timestamps (that is the smallest possible fraction) depends on the system. Some systems only support second resolution; on these systems, the fraction will always be zero. It is recommended that this setting is only changed at program startup time in the __main__ module; libraries should never change this setting. If an application uses a library that works incorrectly if floating point time stamps are processed, this application should turn the feature off until the library has been corrected. For backward compatibility, the return value is also accessible as a tuple whose values correspond to the attributes, in the order given above. The standard module statvfs defines constants that are useful for extracting information from a statvfs structure when accessing it as a sequence; this remains useful when writing code that needs to work with versions of Python that don't support accessing the fields as attributes. Changed in version 2.2: Added access to values as attributes of the returned object. None. If given and not None, prefix is used to provide a short prefix to the filename. Applications are responsible for properly creating and managing files created using paths returned by tempnam(); no automatic cleanup is provided. On Unix, the environment variable TMPDIR overrides dir, while on Windows the TMP is used. The specific behavior of this function depends on the C library implementation; some aspects are underspecified in system documentation. Warning: Use of tempnam() is vulnerable to symlink attacks; consider using tmpfile() (section 14.1.2) instead. Availability: Macintosh, Unix, Windows. None, then the file's access and modified times are set to the current time. Otherwise, times must be a 2-tuple of numbers, of the form (atime, mtime)which is used to set the access and modified times, respectively. Whether a directory can be given for path depends on whether the operating system implements directories as files (for example, Windows does not). 
Note that the exact times you set here may not be returned by a subsequent stat() call, depending on the resolution with which your operating system records access and modification times; see stat(). Changed in version 2.0: Added support for Nonefor times. Availability: Macintosh, Unix, Windows. (dirpath, dirnames, filenames). dirpath is a string, the path to the directory. dirnames is a list of the names of the subdirectories in dirpath '..'). filenames is a list of the names of the non-directory files in dirpath. Note that the names in the lists contain no path components. To get a full path (which begins with top) to a file or directory in If optional argument topdown is true or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top down). If topdown is false, the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom up). When topdown is true, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is false is ineffective, because in bottom-up mode the directories in dirnames are generated before dirpath itself is generated. By default errors from the os.listdir() call are ignored. If optional argument onerror is specified, it should be a function; it will be called with one argument, an OSError instance. It can report the error to continue with the walk, or raise the exception to abort the walk. Note that the filename is available as the filename attribute of the exception object. os.path.islink(path), and invoke walk(path)on each directly. This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn't look under any CVS subdirectory: import os from os.path import join, getsize for root, dirs, files in os.walk('python/Lib/email'): print root, "consumes", print sum(getsize(join(root, name)) for name in files), print "bytes in", len(files), "non-directory files" if 'CVS' in dirs: dirs.remove('CVS') # don't visit CVS directories In the next example, walking the tree bottom up is essential: rmdir() doesn't allow deleting a directory before the directory is empty: # Delete everything reachable from the directory named in 'top', # assuming there are no symbolic links. # CAUTION: This is dangerous! For example, if top == '/', it # could delete all your disk files. import os for root, dirs, files in os.walk(top, topdown=False): for name in files: os.remove(os.path.join(root, name)) for name in dirs: os.rmdir(os.path.join(root, name)) New in version 2.3. See About this document... for information on suggesting changes.
<urn:uuid:cc9bee38-ec23-4459-b5cc-5601b63c418b>
2.921875
2,355
Documentation
Software Dev.
47.354915
|Foundation of Quantum Theory| The following well-known experiments serve as a motivation for studying quantum theory. The experimental results cannot be explained using ideas from classical physics. |1. Blackbody Radiation||2. Photoelectric Effect||3. Compton Effect| It is well-known that when a body is heated it emits electromagnetic radiation. For example, if a piece of iron is heated to a few hundred degrees, it gives off e.m. radiation which is predominantly in the infra-red region. When the temperature is raised to 1000C it will begin to glow with reddish color which means that the radiation emitted by it is in the visible red region having wavelengths shorter than in the previous case. If heated further it will become white-hot and the radiation emitted is shifted towards the still shorter wave-length blue color in the visible spectrum. Thus the nature of the radiation depends on the temperature of the emitter. A heated body not only emits radiation but it also absorbs a part of radiation falling on it. If a body absorbs all the radiant energy falling on it, then its absorptive power is unity. Such a body is called a black body. An ideal blackbody is realized in practice by heating to any desired temperature a hollow enclosure (cavity) and with a very small orifice. The inner surface is coated with lamp-black. Thus radiation entering the cavity through the orifice is incident on its blackened inner surface and is partly absorbed and partly reflected. The reflected component is again incident at another point on the inner surface and gets partly absorbed and partly reflected. This process of absorption and reflection continues until the incident beam is totally absorbed by the body. The inner walls of the heated cavity also emit radiation, a part of which can come out through the orifice. This radiation has the characteristics of blackbody radiation - the spectrum of which can be analyzed by an infra-red spectrometer. Experimental results show that the blackbody radiation has a continuous spectrum (shown in the graph). The intensity of the emitted radiation El is plotted as a function of the wavelength l for different temperatures. The wavelength of the emitted radiation ranges continuously from zero to infinity. El increases with increasing temperature for all wavelengths. It has very low values for both very short and very long wavelengths and has a maximum in between at some wavelength lmax. lmax depends on the temperature of the blackbody and decreases with increasing temperature. The shift in the peak of the intensity distribution curve obeys an empirical relationship known as Wien's displacement law: lmax T = constant. The total power radiated per unit area of a blackbody can be derived from thermodynamics. This is known as Stefan-Boltzmann law which can be expressed mathematically as: E = s T4, where s = 5.67 x 10-8 W m-2 K-4 is known as Stefan's constant. Note that the total power E radiated is obtained by integrating El over all wavelengths. W. Wien proposed an empirical relationship between El with l for a given temperature T: El (T) = A exp(-B/lT)/l5, where the constants A and B are chosen arbitrarily so as to fit the experimental energy distribution curves. But it was later found that the experimental data don't follow Wien's empirical relation at larger wavelengths [See Fig. below ]. Wien's theory of intensity of radiation was based only on arguments from thermodynamics not on any plausible model. 
Considering the radiation system as composed of a bunch of harmonic oscillators Rayleigh and Jeans derived (using thermodynamics) an expression for the emitted radiation El: El = (c/4) (8pkBT/l4). 'kB' is the Boltzman constant (kB=1.345 x 10-23 J/K). The above expression agrees well with the experimental results at long wavelengths but drastically fails at shorter wavelengths. In the limit l -> 0, El -> infinity from the expression above, but in the experiments El -> 0, as l -> 0. This serious disagreement between theory and experiment indicates the limitations of classical mechanics. Max Planck later derived an expression for the emitted radiation using quantum mechanics. He made a bold new postulate that an oscillator can have only energies which are discrete, i.e., an integral multiple of a finite quantum of energy hf where h is Planck's constant (h= 6.55 x 10-34 J.s) and f is the frequency of the oscillator. Thus the energy of the oscillator is, E = nhf, where n is an integer or zero. Planck further assumed that the change in energy of the oscillator due to emission or absorption of radiant energy can also take place by a discrete amount hf. Since radiation is emitted from the oscillators, and since according to Planck, the change in energy of the oscillators can only take discrete values, the energy carried by the emitted radiation, which is called a photon, will be hf, and that is also equal to the loss of energy of the oscillator. Again, this is also the energy gain of the oscillator when it absorbs a photon. Based on these ideas Planck derived the expression for the energy distribution of blackbody radiation: El = (c/4) (8phc/l5)(1/[exp(hc/lkBT) - 1]). Rayleigh-Jean's expression and Wien's displacement law are special cases of Planck's law of radiation. Planck's formula for the energy distribution of blackbody radiation agrees well with the experimental results, both for the long wavelengths and the short wavelengths of the energy spectrum. Please on the simulation below to see nice interactive demonstration of the physics of Blackbody radiation. Simulation on Blackbody Radiation Back to Top Planck's postulate regarding the discrete nature of the possible energy states of an oscillator marked a radical departure from the ideas of classical physics. According to the laws of classical mechanics, the energy of an oscillator can vary continuously, depending only on the amplitude of the vibrations - this is in total contrast to Planck's hypothesis of discrete energy states of an oscillator. Photoelectric effect is another classic example which can not be explained with classical physics. Einstein was awarded Nobel prize for his explanation of the physics of photoelectric effect. The basic experiment of photoelectric effect is simple. It was observed that a metal plate when exposed to ultraviolet radiation became positively charged which showed that it has lost negative charges from its surface. These negatively charged particles were later identified to be electrons (later named photoelectrons). This phenomenon is known as photoelectric effect. Please out the physics applet below which shows the effect of light on various metals. Simulation on Photoelectric effect The main results of the experiment can be summarized as follows: On exposure to the incident light, photoelectrons with all possible velocities ranging from 0 upto a maximum vm are emitted from the metal plate. 
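To see numerically how Planck's formula resolves the short-wavelength failure of the classical result, one can evaluate both expressions quoted above; a small sketch (Python, rounded SI constants, El taken per unit wavelength as above):

import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants (rounded)

def planck(lam, T):
    # El = (c/4)(8*pi*h*c/lam^5) / (exp(hc/(lam*kB*T)) - 1) = 2*pi*h*c^2/lam^5 / (...)
    return 2 * math.pi * h * c**2 / lam**5 / (math.exp(h * c / (lam * kB * T)) - 1)

def rayleigh_jeans(lam, T):
    # El = (c/4)(8*pi*kB*T/lam^4) = 2*pi*c*kB*T/lam^4, which diverges as lam -> 0
    return 2 * math.pi * c * kB * T / lam**4

T = 5000.0                                  # temperature in kelvin
for lam_nm in (100, 500, 2000, 10000):
    lam = lam_nm * 1e-9
    print(lam_nm, "nm:", planck(lam, T), "vs", rayleigh_jeans(lam, T))

At 10000 nm the two values nearly coincide, while at 100 nm the Rayleigh-Jeans value is enormous and the Planck value is vanishingly small: this is the "ultraviolet catastrophe" that the quantum hypothesis removes.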
When a positive potential is applied to the collector (which collects the emitted photoelectrons), a fraction of the total number emitted is collected by the collector. This fraction increases as the voltage is increased. For potentials above about +10 volts, all the electrons emitted by the light are collected by the collector which accounts for the saturation of the photoelectric current [Figs. (a) and (b) below]. On the other hand, when a negative retarding potential is applied on the collector, the lower energy electrons are unable to reach the collector so that the current gradually decreases with increasing negative potential. Finally for a potential -V0 (known as the stopping potential), the photoelectrons of all velocities upto the maximum vm are prevented from reaching the collector. At this point, the maximum kinetic energy of the emitted electrons equals the energy required to overcome the effect of the retarding potential - so we can write mvm2/2 = eV0. Conclusion from the experimental results: (1) The photoelectric current depends upon the intensity of the light used. It is independent of the wavelength of the light [See Fig. (a) above]. (2) The photoelectrons are emitted with all possible velocities from 0 upto a maximum vm which is independent of the intensity of the incident light, but depends only on its wavelength (or frequency). It is found that if f is the frequency of the light used, then the maximum kinetic energy of the photoelectrons increases linearly with f [See Figs. (b) and (c) above ]. (3) Photoelectron emission is an instantaneous effect. There is no time gap between the incidence of the light and the emission of the photoelectrons. (4) The straight line graph showing the variation of the maximum kinetic energy of the emitted electrons with the frequency f of the light intersects the abscissa at some point f0. No photoelectron emission takes place in the frequency range f<f0. This minimum frequency f0 is known as the threshold frequency. Its value depends on the nature of the emitting material [See Fig. (c) above]. Breakdown of Classical Physics: According to classical physics - (a) Light is an electromagnetic wave - the intensity of light is determined by the amplitudes of these electromagnetic oscillations. When light falls on an electron bound in an atom, it gains energy from the oscillating electric field. Larger the amplitude of oscillations, larger is the energy gained by the emitted electron - thus energy of the emitted electrons should depend on the intensity of the incident light. This is in contrast to what has been observed in experiment (point 2 above). (b) According to the electromagnetic theory, the velocity of the emitted electrons should not depend on the frequency of the incident light. Whatever may be the frequency of the incident light, the electron would be emitted if it gets sufficient time to collect the necessary energy for emission. So the photoelectric emission is not an instantaneous effect. These are in contrary to points 3 and 4 above. (c) Finally, the incident electromagnetic wave acts equally on all the electrons of the metal surface. There is no reason why only some electrons will be able to collect the necessary energy for emission from the incident waves. Given sufficient time, all electrons should be able to collect the energy necessary for emission. So there is no reason why the photoelectric current should depend upon the intensity of the incident light. 
However, this is again in contrary to the observed facts (point 1 above). Einstein's light quantum hypothesis and photoelectric equation: We have seen from above that the maximum kinetic energy of the emitted photoelectrons increases linearly with the frequency of the incident light. In terms of equation we have mvm2/2 = eV0 = af - W where a and W are constants. W is known as the work function of the emitting material. The constant a was determined experimentally and is found to be equal to the Planck's constant h. We can then rewrite the above equation as - mvm2/2 = eV0 = hf - W. For the special value of f = f0 = W/h, the K. E. of the emitted photoelectrons becomes zero. So there will be no photoelectron emission if f < f0. f0 is the threshold frequency. The equation, mvm2/2 = eV0 = hf - hf0 is known as the famous Einstein's photoelectric equation. Einstein used the quantum hypothesis of Planck to explain the photoelectric effect. He postulated that light is emitted from a source in the form of energy packets of the amount hf known as the light quantum or photon. This is known as Einstein's light quantum hypothesis. When a photon of energy hf falls on an electron bound inside an atom, the electron absorbs the energy hf and is emitted from the atom provided that hf is greater than the energy of binding of the electron in the atom which is equal to the work function W of the metal. The surplus of energy (hf - W) is taken away by the electron as its kinetic energy. Obviously if hf < W, i.e. f<f0, no photoelectric emission can take place. This explains the existence of the threshold frequency. Furthermore, according to Einstein's theory, larger the number of photons falling on the metal, greater is the probability of their encounter with the atomic electrons and hence greater is the photoelectric current. So the increase of photoelectric current with the increasing light intensity can be easily explained. Finally, as soon as the photon of energy hf > W falls on an electron, the latter absorbs it and is emitted instantaneously. Note that Einstein's light quantum hypothesis postulates the corpuscular nature of light in contrast to the wave nature. We will talk about this wave-particle duality later on in this course. Back to Top The discovery of Compton scattering of x-rays provides direct support that light consists of pointlike quanta of energy called photons. A schematic diagram of the apparatus used by Compton is shown in the Figure below. A graphite target was bombarded with monochromatic x-rays and the wavelength of the scattered radiation was measured with a rotating crystal spectrometer. The intensity was determined by a movable ionization chamber that generated a current proportional to the x-ray intensity. Compton measured the dependence of scattered x-ray intensity on wavelength at three different scattering angles of 45o, 90o, and 135o. The experimental intensity vs. wavelength plots observed by Compton for the above three scattering angles (See Fig. below) show two peaks, one at the wavelength l of the incident x-rays and the other at a longer wavelength l'. The functional dependence of l' on the scattering angle and l was predicted by Compton to be: l' - l = (h/mec)[ 1- cosq ] = l0 [ 1- cosq ]. The factor l0=h/mec, also known as Compton wavelength can be calculated to be equal to 0.00243 nm. The physics of Compton effect: To explain his observations Compton assumed that light consists of photons each of which carries an energy hf and a momentum hf/c (as p = E/c = hf/c). 
When such a photon strikes a free electron the electron gets some momentum (pe) and kinetic energy (Te) due to the collision, as a result of which the momentum and energy of the photon are reduced. Considering energy and momentum conservation (For the detail derivation please here) one can derive the change in wavelength due to Compton scattering: l' - l = (h/mec)[ 1- cosq ]. Note that the result is independent of the scattering material and depends only on the angle of scattering. The appearance of the peak at the longer wavelength in the intensity vs. wavelength curve is due to Compton scattering from the electron which may be considered free, since its energy of binding in the atom is small compared to the energy hf of the photon. The appearance of the other peak at the wavelength of the incident radiation is due to scattering from a bound electron. In this case the recoil momentum is taken up by the entire atom, which being much heavier compared to the electron, produces negligible wavelength shift. Compton effect gives conclusive evidence in support of the corpuscular character of electromagnetic radiation. Please out the simulation below which shows Compton scattering. Simulation on Compton Scattering Back to Top © Kingshuk Majumdar (2000)
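As a check on the numbers, the Compton wavelength and shift formula quoted above are easy to evaluate; a short sketch (Python, rounded SI constants):

import math

h, m_e, c = 6.626e-34, 9.109e-31, 2.998e8    # SI constants (rounded)
lambda_C = h / (m_e * c)                      # Compton wavelength, about 2.43e-12 m

def compton_shift(theta_deg):
    # Wavelength shift of a photon scattered through theta (degrees) off a free electron.
    return lambda_C * (1 - math.cos(math.radians(theta_deg)))

for theta in (45, 90, 135):                   # the three angles Compton measured
    print(theta, "deg ->", compton_shift(theta) * 1e9, "nm shift")

The computed Compton wavelength is about 0.00243 nm, matching the value quoted above, and the shift grows from roughly 0.0007 nm at 45 degrees to roughly 0.0041 nm at 135 degrees.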
<urn:uuid:ce82c755-4001-49b5-866b-576e95373ef8>
3.671875
3,244
Academic Writing
Science & Tech.
44.877323
In the applet below you see a 1D realisation of white and correlated noise with an equidistant step in x. Independent random points fi, uniformly distributed on the interval (-1, 1), make up the white noise (the blue curve). Correlated random points Vi (the red curve) are obtained by averaging the white noise within a sphere of radius Rc, i.e. the kernel Ko is used:
Vi = ∑j=-Rc…Rc fi+j    (*)
To get a 2D fractal noise (mountain) you take an elastic string (see Fig.1), then a random vertical displacement is applied to its middle point. The process is repeated recursively at the middle point of every new segment. The random displacement decreases m times each iteration (usually m = 2 is used). Using the Fourier transformation for V(r), K(r), f(r),
g(k) = ∫ g(r) e^(ikr) dr ,
we get for V(k)
V(k) = K(k) f(k) .
That is, the averaging (*) amounts to filtering the white noise with a filter of bandwidth K(k). The bandwidths of the two kernels used are shown in Fig.3 (for Rc = 1):
KG(k) ~ exp[-(Rc k)^2],    Ko(k) ~ sin(Rc k)/k.
Finally, a 2D correlated random landscape: to get a smooth potential, a 2D Gauss kernel is used.
Percolation in random potential landscape
Drag the mouse to rotate the 3D image (with "shift" to zoom it). The white line (in the blue bar to the right) corresponds to the average <V> value. The yellow line corresponds to the Fermi energy εF. Drag the line with the mouse to change εF. See also 3D Mountains and Hidden Surface Removal Algorithms.
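The 1D construction is easy to reproduce without the applet; a minimal sketch (Python, assuming a flat box kernel of radius Rc and normalising the sum by the window size):

import random

N, Rc = 200, 5
f = [random.uniform(-1.0, 1.0) for _ in range(N)]        # white noise, uniform on (-1, 1)

V = []                                                    # correlated noise
for i in range(N):
    window = f[max(0, i - Rc): i + Rc + 1]                # the points within radius Rc of i
    V.append(sum(window) / len(window))                   # averaging acts as a low-pass filter

print(max(abs(x) for x in f), max(abs(x) for x in V))     # the smoothed curve has a smaller spread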
<urn:uuid:ad8ea79d-d059-4e6b-9162-059c5c2dc206>
2.6875
403
Academic Writing
Science & Tech.
76.022895
Lengths of metal strips produced by a machine are normally distributed with a mean length of 150 cm and a standard deviation of 10 cm. Find the probability that the length of a randomly selected strip is
i/ Shorter than 165 cm?
ii/ Longer than 170 cm?
iii/ Between 145 cm and 155 cm?
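One way to check the answers numerically (a sketch assuming SciPy is available; standardising to z-scores and reading a normal table gives the same values):

from scipy.stats import norm

mu, sigma = 150.0, 10.0

p_i   = norm.cdf(165, mu, sigma)                              # P(X < 165), z = 1.5, about 0.933
p_ii  = 1 - norm.cdf(170, mu, sigma)                          # P(X > 170), z = 2.0, about 0.023
p_iii = norm.cdf(155, mu, sigma) - norm.cdf(145, mu, sigma)   # P(145 < X < 155), about 0.383

print(p_i, p_ii, p_iii)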
<urn:uuid:b1c81644-32dd-49d4-ba08-7b8c59cdda58>
3.1875
65
Q&A Forum
Science & Tech.
88.035307
Newton's first law states that an object will keep doing what it is doing if left alone; in other words, the natural state of an object is static - unchanging - motion. Newton's second law clarifies the first. Acceleration, or any change in motion, is an unnatural state for an arbitrary object left to its own devices, yet it is a state that clearly exists all around us. Newton defines the "thing" that forces an object to change its state of being - a force. In this most rigorous sense, a force is defined to be that which causes a change in motion. The observation of a change in momentum necessitates that there is some force driving that change, so in this sense the two are equivalent (there is an equals sign there, after all): wherever you see a (net) force you will see an acceleration, and wherever you see an acceleration you will find a force responsible for it. However, going back to the first law, acceleration is a change in the (kinetic) state of an object, and an object's natural tendency is to statically maintain its state. The observation of an unnatural state of being logically implies that there is a cause. Intuitively, it seems unnatural that accelerations would happen spontaneously, or that the universe would invent a force just to balance the books, if you will.
<urn:uuid:9653235f-f275-4ae2-8e05-52f71a5a082d>
3.390625
273
Q&A Forum
Science & Tech.
39.848107
Contrary to popular belief, astronauts still have weight while they are orbiting the earth. In fact, Shuttle astronauts weigh almost as much in space as they do on the earth's surface. But these astronauts are in free fall, together with their ship, and their downward acceleration prevents them from measuring their weights directly. Instead, astronauts make a different type of measurement—one that accurately determines how much of them there is: they measure their masses. Your weight is the force that the earth's gravity exerts on you; your mass is the measure of your inertia, how hard it is to make you accelerate. For deep and interesting reasons, weight and mass are proportional to one another at a given location, so measuring one quantity allows you to determine the other. Instead of weighing themselves, astronauts measure their masses. They make these mass measurements with the help of a shaking device. They strap themselves onto a machine that gently jiggles them back and forth to see how much inertia they have. By measuring how much force it takes to cause a particular acceleration, the machine is able to compute the mass of its occupant. Answered by Lou A. Bloomfield of the University of Virginia
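The last step is just Newton's second law run in reverse, m = F/a; a minimal sketch of the idea (Python, with made-up numbers rather than data from the actual device):

applied_force = 150.0          # newtons exerted by the shaking mechanism (assumed)
measured_acceleration = 2.0    # resulting acceleration in m/s^2 (assumed)

mass = applied_force / measured_acceleration
print("inferred mass:", mass, "kg")            # 75.0 kg

# The weight this person would register on Earth's surface is then m*g:
print("weight on Earth:", mass * 9.8, "N")     # about 735 N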
<urn:uuid:5696a8c8-21e4-4bdc-a7a2-5523499690d2>
4.5
240
Q&A Forum
Science & Tech.
42.654701
Defects combine to make perfect devices Jun 26, 2002 Faulty components are usually rejected in the manufacture of computers and other high-tech devices. However, Damien Challet and Neil Johnson of Oxford University say that this need not be the case. They have used statistical physics to show that the errors from defective electronic components or other imperfect objects can be combined to create near perfect devices (D Challet and N Johnson 2002 Phys. Rev. Lett. 89 028701). Most computers are built to withstand the faults that develop in some of their components over the course of the computer’s lifetime, although these components initially contain no defects. However, many emerging nano- and microscale technologies will be inherently susceptible to defects. For example, no two quantum dots manufactured by self-assembly will be identical. Each will contain a time-independent systematic defect compared to the original design. Historically, sailors have had to cope with a similar problem – the inaccuracy in their clocks. To get round this they often took the average time of several clocks so that the errors in their clocks would more or less cancel out. Similarly, Challet and Johnson consider a set of N components, each with a certain systematic error – for example the difference between the actual and registered current in a nanoscale transistor at a given applied voltage. They calculated the effect of combining the components and found that the best way to minimize the error is to select a well-chosen subset of the N components. They worked out that the optimum size of this subset for large numbers of devices should equal N/2. On this basis, the researchers say that it should be possible to generate a continuous output of useful devices using only defective components. To find the optimum subset from each batch of defective devices, all of the defects can be measured individually and the minimum calculated with a computer. Alternatively, components can be combined through trial and error until the aggregate error is minimized. Once the optimum subset has been selected, fresh components can be added to replenish the original batch and the cycle started over again. Challet and Johnson point out that this process and the wiring together of the components will add to the overall cost of making the device. But they believe that these extra costs are likely to be outweighed by the fact that defective components can be produced cheaply en masse. Hewlett Packard, for example, has already built a supercomputer – known as Teramac – from partially defective conventional components using adaptive wiring. “Our scheme implies that the ‘quality’ of a component is not determined solely by its own intrinsic error,” write the researchers. “Instead, error becomes a collective property, which is determined by the ‘environment’ corresponding to the other defective components.” About the author Edwin Cartlidge is News Editor of Physics World
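The selection step described above (picking the size-N/2 subset whose systematic errors best cancel) is easy to illustrate; a toy sketch (Python, with invented error values and the simplifying assumption that the aggregate error is the sum of the individual errors):

from itertools import combinations
import random

random.seed(1)
N = 12
errors = [random.uniform(-1.0, 1.0) for _ in range(N)]    # systematic defect of each component

# Brute-force search over all size-N/2 subsets; fine for small N. The point of the paper
# is that the optimum subset size is N/2, not that brute force is the practical algorithm.
best = min(combinations(range(N), N // 2),
           key=lambda subset: abs(sum(errors[i] for i in subset)))

print("worst single-component error:", max(abs(e) for e in errors))
print("aggregate error of best subset:", abs(sum(errors[i] for i in best)))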
<urn:uuid:7ed1dcb4-7e35-4561-bffe-329002b7a93d>
3.671875
584
Truncated
Science & Tech.
32.359988
Launch Date: December 02, 1997 Mission Project Home Page - http://www.mpe-garching.mpg.de/EQS/eqs_home.html EQUATOR-S was a low-cost mission designed to study the Earth's equatorial magnetosphere out to distances of 67000 km. It formed an element of the closely-coordinated fleet of satellites that compose the IASTP program. Based on a simple spacecraft design, it carries a science payload comprising advanced instruments that were developed for other IASTP missions. Unique features of EQUATOR-S were its nearly equatorial orbit and its high spin rate. It was launched as an auxiliary payload on an Ariane-4, December 2nd, 1997. The mission was intended for a two-year lifetime but stopped transmitting data on May 1, 1998. The idea of an equatorial satellite dates back to NASA's GGS (Global Geospace Science) program, originally conceived in 1980. The equatorial element of the program was abandoned in 1986 and several subsequent attempts to rescue the mission failed, leaving a significant gap in both NASA's GSS and the international IASTP programs. The Max-Planck-Institut für Extraterrestrische Physik (MPE) decided to fill this gap because of its interest in GSS and the opportunity for a test of an advanced instrument to measure electric fields with dual electron beams. In addition to MPE-internal funds and personnel, the realization of EQUATOR-S was possible through a 1994 grant from the German Space Agency DARA (meanwhile part of DLR).
<urn:uuid:08b61c80-8adc-4d5c-92e2-9ac587918416>
3.09375
335
Knowledge Article
Science & Tech.
48.947562
PHP is considered an insecure language to develop in not because of secret backdoors put in by the PHP language developers, but because it was initially developed without security as a major concern, and compared to other languages/web frameworks it's difficult to develop securely in it. E.g., if you develop a LAMP/LAPP (linux+apache+mysql/postgresql+PHP) web app, you have to manually code in input/output sanitization to prevent SQL injection/XSS/CSRF, make sure there are no subtle calls to eval user-supplied code (like in preg_replace with a '/e' ending the regexp argument), safely deal with file uploads, make sure user passwords are securely hashed (not plaintext), and ensure authentication cookies are unguessable, secure (https) and http-only, etc. Most modern web frameworks simplify many of these issues by doing most of these things in a secure fashion (or initially doing them insecurely and then getting secure updates). The risk of there being a secret backdoor in open-source PHP is small, and that risk is present in every piece of software (windows/linux/apache/nginx/IIS/postgresql/oracle) you use -- both open-source and closed-source. The open-source ones at least have the benefit that many independent eyes look at them all the time and you could examine the code if you wanted. Also note that, in principle, even after fully examining the source code and finding no backdoors and fully examining the source code of your compiler (finding no backdoors), if you then recompile your compiler (bootstrapping with some untrusted existing compiler) and then compile the safe source code with your newly compiled "safe" compiler, your executable code could still have backdoors brought in from using the untrusted existing compiler to compile the new compiler. See Ken Thompson's Reflections on Trusting Trust. (The way this is defended against in practice is by using many independent and obscure compilers from multiple sources to compile any new compiler and then comparing the outputs.)
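The answer's two main prescriptions, parameterized queries and salted password hashing, look like the following framework-agnostic sketch. It is written in Python rather than PHP purely for illustration, and the table and column names are hypothetical; a production system would also use a constant-time comparison and a memory-hard hash such as argon2.

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salt BLOB, pw_hash BLOB)")

def hash_password(password: str, salt: bytes) -> bytes:
    # Salted, iterated hash (PBKDF2) instead of storing the password in plaintext.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def add_user(name: str, password: str) -> None:
    salt = os.urandom(16)
    # Placeholders ("?") keep user input out of the SQL text, preventing injection.
    conn.execute("INSERT INTO users VALUES (?, ?, ?)",
                 (name, salt, hash_password(password, salt)))

def check_login(name: str, password: str) -> bool:
    row = conn.execute("SELECT salt, pw_hash FROM users WHERE name = ?",
                       (name,)).fetchone()
    return row is not None and hash_password(password, row[0]) == row[1]

add_user("alice", "correct horse battery staple")
print(check_login("alice", "correct horse battery staple"))  # True
print(check_login("alice", "' OR '1'='1"))                   # False: treated as data
```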
<urn:uuid:9f5695bc-5609-4c4f-ad0d-a28ed7a4e1d1>
2.78125
437
Q&A Forum
Software Dev.
23.270071
Learning about Differential Equations from Their Symmetries Application of MathSym to Analyzing an Ordinary Differential Equation In the previous section we used a scaling symmetry to help understand the solutions of a pair of differential equations. In each case, the scaling symmetry was found by inspection. Here I present the computation of the complete set of point symmetries for two additional differential equations. Our third example is a nonlinear ordinary differential equation that we analyze using its two symmetries. The final example is the partial differential equation known as the cubic nonlinear Schrödinger equation . Example three is the ordinary differential equation that arises in the study of nonlinear water wave equations. I also show that we can use its two symmetries to begin to learn something about the structure of its solutions. MathSym returns a system of equations, the determining equations, whose solutions generate the symmetries of equation (8). Internally, the MathSym package denotes all independent variables in an equation as and dependent variables as . This way it can be run on systems of equations with arbitrary numbers of independent and dependent variables without needing to know how to treat different variable names. Furthermore, constants are represented as internally and printed as . With this notation, constants are treated correctly by Mathematica's differentiation routine Dt. MathSym's output is the following list of determining equations. With the output from MathSym we can continue our analysis of equation (8). First, we solve the determining equations: The functions and determine two symmetries that can be used to convert equation (8) into two integrals. The reader is directed to similar computations for the Blasius boundary layer equation which appear on pages 118-120 of . We begin by considering the symmetry that occurs because of the term. Setting and produces a transformation and . We next look for two quantities that do not change under this transformation. Obvious choices are and . If we assume that is a function of and write the differential equation for that arises by insisting that satisfy equation (8), we find This is a standard reduction of order for autonomous equations that may be found in a sophomore differential equations text such as . This equation in and has a symmetry that is generated by the constant appearing in equations (9) and (10). From this symmetry we can derive new variables and and consider as a function of . In terms of and , equation (11) becomes We have now converted the problem of solving the original equation into two integrations. First we find as a function of giving us a solution of equation (12) and hence of equation (11). Then we return to the original variables and have implicitly as a function of . Integrating again gives a relationship between and . We can make Mathematica carry out some of these computations. First we will ask that it determine a solution to equation (12) by integrating both sides of the equation. In the equation for we can return to the original variables and . What results is an implicit relationship between and and while MathSym has been successful in generating the symmetries of equation (8), it still is a challenge to solve this equation.
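The reduction-of-order step described for autonomous equations can be sketched independently of MathSym. Since equation (8) itself is not reproduced above, the snippet below applies the same substitution, treating v = y' as a function of y, to a made-up autonomous ODE, y'' = y·y', purely to show how one integration drops the order. This is a hedged sketch in Python with SymPy, not the article's Mathematica computation.

```python
import sympy as sp

x, Y, C1 = sp.symbols("x Y C1")
y = sp.Function("y")
v = sp.Function("v")

# Illustrative autonomous equation (NOT the article's equation (8)): y'' = y * y'
ode = sp.Eq(y(x).diff(x, 2), y(x) * y(x).diff(x))

# For autonomous equations, set v = dy/dx and regard v as a function of y,
# so d2y/dx2 = v * dv/dy.  The example above then becomes v*dv/dY = Y*v,
# i.e. dv/dY = Y after dividing by v (ignoring the trivial branch v = 0).
reduced = sp.Eq(v(Y).diff(Y), Y)

first_integral = sp.dsolve(reduced, v(Y))      # v(Y) = Y**2/2 + C1
print(first_integral)

# Back in the original variables this says dy/dx = y**2/2 + C1, a separable
# first-order equation, so the original problem has been reduced to two
# integrations -- the same structure the article describes.
remaining = sp.Eq(y(x).diff(x), y(x)**2 / 2 + C1)
print(remaining)
```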
<urn:uuid:621b4b37-387c-4787-9931-0da8ab590a19>
3.359375
692
Truncated
Science & Tech.
36.476891
EVEN a material 10 billion times as strong as steel has a breaking point. It seems neutron stars may shatter under extreme forces, explaining puzzling X-ray flares. Neutron stars are dense remnants of stars gone supernova, packing the mass of the sun into a sphere the size of a city. Their cores may be fluid, but their outer surfaces are solid and extremely tough - making graphene, the strongest material on Earth, look like tissue paper by comparison. These shells may shatter, though, in the final few seconds before a pair of neutron stars merges to form a black hole - a union thought to generate explosions known as short gamma-ray bursts. David Tsang of the California Institute of Technology in Pasadena and colleagues have calculated how the mutual gravitational pull of such stars will distort their shape, creating moving tidal bulges. As the stars spiral towards each other, orbiting ever faster, they squeeze and stretch each other ever faster too. A few seconds before the stars merge, the frequency of this squeezing and stretching matches the frequency at which one of the stars vibrates most easily. This creates a resonance that boosts the vibrations dramatically, causing the star's crust to crack in many places - just as a wine glass may shatter when a certain note is sung, the team says (Physical Review Letters, DOI: 10.1103/physrevlett.108.011102). The star's gravity is too powerful to let the pieces fly away, but the sudden movement can disturb its magnetic field, accelerating electrons and leading to a powerful X-ray flare. That could explain observations by NASA's Swift satellite in which a blast of X-rays preceded some short gamma-ray bursts by a few seconds. Combining observations of X-ray flares with those of gravitational waves emitted by the stars as they spiral together could fix the exact frequency at which the shattering occurs, which would reveal more about the stars' mysterious interiors, says Tsang.
<urn:uuid:dde9296d-febe-4cf0-8ed1-a0f99cdc6029>
3.921875
484
Truncated
Science & Tech.
47.882598
A tsunami is a series of waves most commonly caused by violent movement of the sea floor. In some ways, it resembles the ripples radiating outward from the spot where a stone has been thrown into the water, but a tsunami can occur on an enormous scale. Tsunamis are generated by any large, impulsive displacement of the sea bed level. The movement at the sea floor leading to a tsunami can be produced by earthquakes, landslides and volcanic eruptions. Most tsunamis, including almost all of those traveling across entire ocean basins with destructive force, are caused by submarine faulting associated with large earthquakes. These are produced when a block of the ocean floor is thrust upward, or suddenly drops, or when an inclined area of the seafloor is thrust upward or suddenly thrust sideways. In any event, a huge mass of water is displaced, producing a tsunami. Such fault movements are accompanied by earthquakes, which are sometimes referred to as “tsunamigenic earthquakes”. Most tsunamigenic earthquakes take place at the great ocean trenches, where the tectonic plates that make up the earth’s surface collide and are forced under each other. When the plates move gradually or in small thrusts, only small earthquakes are produced; however, periodically in certain areas, the plates catch. The overall motion of the plates does not stop; only the motion beneath the trench becomes hung up. Such areas where the plates are hung up are known as “seismic gaps” for their lack of earthquakes. The forces in these gaps continue to build until finally they overcome the strength of the rocks holding back the plate motion. The built-up tension (or compression) is released in one large earthquake, instead of many smaller quakes, and these often generate large, deadly tsunamis. If the sea floor movement is horizontal, a tsunami is not generated. Earthquakes of magnitude larger than M 6.5 are critical for tsunami generation. Tsunamis produced by landslides: Probably the second most common cause of tsunamis is landslides. A tsunami may be generated by a landslide starting out above the sea level and then plunging into the sea, or by a landslide occurring entirely underwater. Landslides occur when slopes or deposits of sediment become too steep and the material falls under the pull of gravity. Once unstable conditions are present, slope failure can be caused by storms, earthquakes, rain, or merely continued deposit of material on the slope. Certain environments are particularly susceptible to the production of landslide-generated tsunamis. River deltas and steep underwater slopes above submarine canyons, for instance, are likely sites for landslide-generated tsunamis. Tsunamis produced by volcanoes: The violent geologic activity associated with volcanic eruptions can also generate devastating tsunamis. Although volcanic tsunamis are much less frequent, they are often highly destructive. These may be due to submarine explosions, pyroclastic flows and the collapse of a volcanic caldera. (1) Submarine volcanic explosions occur when cool seawater encounters hot volcanic magma. This often results in violent steam explosions. Underwater eruptions at depths of less than 1500 feet are capable of disturbing the water all the way to the surface and producing tsunamis. (2) Pyroclastic flows are incandescent, ground-hugging clouds, driven by gravity and fluidized by hot gases. These flows can move rapidly off an island and into the ocean, their impact displacing sea water and producing a tsunami.
(3) The collapse of a volcanic caldera can generate a tsunami. This may happen when the magma beneath a volcano is withdrawn back deeper into the earth, and the sudden subsidence of the volcanic edifice displaces water and produces tsunami waves. The large masses of rock that accumulate on the sides of volcanoes may also suddenly slide down slope into the sea, causing tsunamis. Such landslides may be triggered by earthquakes or simple gravitational collapse. A catastrophic volcanic eruption and its ensuing tsunami waves may actually be behind the legend of the lost island civilization of Atlantis. The largest volcanic tsunami in historical times and the most famous historically documented volcanic eruption took place in the East Indies: the eruption of Krakatau in 1883. Tsunami waves: A tsunami has a much smaller amplitude (wave height) offshore, and a very long wavelength (often hundreds of kilometers long), which is why tsunamis generally pass unnoticed at sea, forming only a passing "hump" in the ocean. Tsunamis have historically been referred to as tidal waves because, as they approach land, they take on the characteristics of a violent onrushing tide rather than the sort of cresting waves that are formed by wind action upon the ocean (with which people are more familiar). Since they are not actually related to tides, the term is considered misleading and its usage is discouraged by oceanographers. These waves are different from other wind-generated ocean waves, which rarely extend below a depth of 500 feet even in large storms. Tsunami waves, on the contrary, involve movement of the water all the way down to the sea floor, and as a result their speed is controlled by the depth of the sea. Tsunami waves may travel as fast as 500 miles per hour or more in the deep waters of an ocean basin. Yet these fast waves may be only a foot or two high in deep water. Tsunami waves also have very long wavelengths, often 100 miles between crests. With a height of 2 to 3 feet spread over 100 miles, the slope of even the most powerful tsunamis would be impossible to see from a ship or airplane. A tsunami may consist of 10 or more waves forming a ‘tsunami wave train’. The individual waves follow one behind the other anywhere from 5 to 90 minutes apart. As the waves near shore, they travel progressively more slowly, but the energy lost from decreasing velocity is transformed into increased wave height. A tsunami wave that was 2 feet high at sea may become a 30-foot giant at the shoreline. Tsunami velocity depends on the depth of water through which it travels: velocity equals the square root of the gravitational acceleration g times the water depth h, that is, V = √(gh). A tsunami will travel at approximately 700 km/h in 4000 m of sea water. In 10 m of water depth the velocity drops to about 35 km/h. Even on shore, tsunami speed is 35 to 40 km/h, hence much faster than a person can run. It is commonly believed that the water recedes before the first wave of a tsunami crashes ashore. In fact, the first sign of a tsunami is just as likely to be a rise in the water level. Whether the water rises or falls depends on what part of the tsunami wave train first reaches the coast. A wave crest will cause a rise in the water level and a wave trough causes a water recession. Seiche (pronounced as ‘saysh’) is another wave phenomenon that may be produced when a tsunami strikes. The water in any basin will tend to slosh back and forth in a certain period of time determined by the physical size and shape of the basin.
This sloshing is known as the seiche. The greater the length of the body of water, the longer the period of oscillation. The depth of the body also controls the period of oscillation, with greater water depths producing shorter periods. A tsunami wave may set off a seiche, and if the following tsunami wave arrives with the next natural oscillation of the seiche, the water may reach even greater heights than it would have from the tsunami waves alone. Much of the great height of tsunami waves in bays may be explained by this constructive combination of a seiche wave and a tsunami wave arriving simultaneously. Once the water in the bay is set in motion, the resonance may further increase the size of the waves. The dying out of the oscillations, or damping, occurs slowly as gravity gradually flattens the surface of the water and as friction turns the back-and-forth sloshing motion into turbulence. Bodies of water with steep, rocky sides are often the most seiche-prone, but any bay or harbour that is connected to offshore waters can be perturbed into a seiche, as can shelf waters that are directly exposed to the open sea. The presence of a well-developed fringing or barrier coral reef off a shoreline also appears to have a strong effect on tsunami waves. A reef may serve to absorb a significant amount of the wave energy, reducing the height and intensity of the wave impact on the shoreline itself. The popular image of a tsunami wave approaching shore is that of a nearly vertical wall of water, similar to the front of a breaking wave in the surf. Actually, most tsunamis probably don’t form such wave fronts; the water surface instead is very close to horizontal, and the surface itself moves up and down. However, under certain circumstances an arriving tsunami wave can develop an abrupt steep front that will move inland at high speed. This phenomenon is known as a bore. In general, the way a bore is created is related to the velocity of shallow water waves. As waves move into progressively shallower water, the wave in front will be traveling more slowly than the wave behind it. This causes the waves to begin “catching up” with each other, decreasing their distance apart, i.e. shrinking the wavelength. If the wavelength decreases but the height does not, then the waves must become steeper. Furthermore, because the crest of each wave is in deeper water than the adjacent trough, the crest begins to overtake the trough in front and the wave gets steeper yet. Ultimately the crest may begin to break into the trough and a bore is formed. A tsunami can cause a bore to move up a river that does not normally have one. Bores are particularly common late in the tsunami sequence, when return flow from one wave slows the next incoming wave. Though some tsunami waves do indeed form bores, and the impact of a moving wall of water is certainly impressive, more often the waves arrive like a very rapidly rising tide that just keeps coming and coming. The normal wind waves and swells may actually ride on top of the tsunami, causing yet more turbulence and bringing the water level to even greater heights.
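The shallow-water speed formula quoted above, V = √(gh), reproduces the figures given in the text. A quick check in plain Python, assuming g ≈ 9.81 m/s²:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed V = sqrt(g*h), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

for depth in (4000, 100, 10):
    print(f"depth {depth:>5} m  ->  {tsunami_speed_kmh(depth):6.0f} km/h")

# depth  4000 m  ->     713 km/h   (text: ~700 km/h)
# depth   100 m  ->     113 km/h
# depth    10 m  ->      36 km/h   (text: ~35 km/h)
```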
<urn:uuid:87a817df-e201-474d-b964-dcde3f8d1a17>
4.90625
2,112
Knowledge Article
Science & Tech.
39.834447
Right now, the accelerator is stopped for the annual maintenance shutdown. This is the opportunity to fix all problems that occurred during the past year both on the accelerator and the experiments. The detectors are opened and all accessible malfunctioning equipment is being repaired or replaced. In the 27-km long LHC tunnel, surveyors are busy getting everything realigned to a high precision, while various repairs and maintenance operations are under way. By early March, all magnets will have been cooled down again and prepared for operation. The experimentalists are not only working on their detectors but also improving all aspects of their software: the detector simulations, event reconstruction algorithms, particle identification schemes and analysis techniques are all being revised. By late March, the LHC will resume colliding protons with the goal of delivering about 16 inverse femtobarns of data, compared to 5 inverse femtobarns in 2011. This will enable the experiments to improve the precision of all measurements achieved so far, push all searches for new phenomena slightly further and explore areas not yet tackled. The hope is to discover particles associated with new physics, revealing the existence of new phenomena. The CMS and ATLAS physicists are looking for dozens of hypothetical particles, the Higgs boson being the most publicized but only one of many. When protons collide in the LHC accelerator, the energy released materializes in the form of massive but unstable particles. This is a consequence of the well-known equation E=mc², which simply states that energy (represented by E) and mass (m) are equivalent: each one can change into the other. The symbol c² represents the speed of light squared and acts like a conversion factor. This is why in particle physics we measure particle masses in units of energy like GeV (giga electronvolt) or TeV (tera electronvolt). One electronvolt is the energy acquired by an electron through a potential difference of one volt. It is therefore easier to create lighter particles since less energy is required. Over the past few decades, we have already observed the lighter particles countless times in various experiments. So we know fairly well how many events containing them we should observe. We can tell that new particles are being created when we see more events of a certain topology than we expect from those well-known phenomena, which we refer to as the background. We can claim that something additional and new is occurring when we see an excess of events. Of course, the bigger the excess, the easier it is to claim something new is happening. This is the reason why we accumulate so many events, each one being a snapshot of the debris coming out of a proton-proton collision. We want to be sure the excess cannot be due to some random fluctuation. Some of the particles we are looking for are expected to have a mass on the order of a few hundred GeV. This is the case for the Higgs boson, and we already saw possible signs of its presence last year. If the observed excess continues to grow as we collect more data, it will be enough to claim the Higgs boson discovery beyond any doubt in 2012, or to rule it out forever. Other hypothetical particles may have masses as large as a few thousand GeV or, equivalently, a few TeV. In 2011, the accelerator provided 7 TeV of energy at the collision point. The more energy the accelerator has, the higher the reach in masses, just like one cannot buy a 7000 CHF car with 5000 CHF.
So to create a pair of particles with a mass of 3.5 TeV (or 3500 GeV) each, one needs to provide at least 7 TeV to produce them. But since some of the energy is shared among many particles, the effective limit is lower than the accelerator energy. There are ongoing discussions right now to decide if the LHC will be operating at 8 TeV this year instead of 7 TeV as in 2011. The decision will be made in early February. If CERN decides to operate at 8 TeV, the chances of finding very heavy particles will slightly increase, thanks to the extra energy available. This will be the case for searches for particles like the W' or Z', heavier versions of the well-known W and Z bosons. For these, collecting more data in 2012 will probably not be enough to push the current limits much farther. We will need to wait until the LHC reaches full energy at 13 or 14 TeV in 2015 to push these searches higher than in 2011, where limits have already been placed around 1 TeV. For LHCb and ALICE, the main goal is not to find new particles. LHCb aims at making extremely precise measurements to see if there are any weak points in the current theoretical model, the Standard Model of particle physics. For this, more data will make all the difference. Already in 2011, they saw the first signs of CP-violation involving charm quarks and hope to confirm this observation. This measurement could shed light on why matter overtook antimatter as the universe expanded after the Big Bang, when matter and antimatter must have been created in equal amounts. They will also investigate new techniques and new channels. Meanwhile, ALICE has just started analyzing the 2011 data taken in November with lead ion collisions. The hope is to better understand how the quark-gluon plasma formed right after the Big Bang. This year, a special run involving collisions of protons and lead ions should bring a new twist to this investigation. Exploring new corners, testing new ideas, improving the errors on all measurements and, most likely, getting the final answer on the Higgs: that is what we are in for with the LHC in 2012. Let's hope that in 2012 the oriental dragon, symbol of perseverance and success, will see our efforts bear fruit. To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.
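The unit bookkeeping in the paragraphs above, electronvolts doing double duty as energy and mass and the collision energy setting the mass reach, can be made concrete with a few lines of arithmetic. The 125 GeV Higgs value below is only an illustrative figure; the conversion factors are the standard ones (1 eV ≈ 1.602×10⁻¹⁹ J).

```python
C = 2.998e8          # speed of light, m/s
EV_TO_J = 1.602e-19  # one electronvolt in joules

def gev_to_kg(mass_gev: float) -> float:
    """Convert a particle mass quoted in GeV (via E = m c^2) to kilograms."""
    energy_j = mass_gev * 1e9 * EV_TO_J
    return energy_j / C**2

print(f"125 GeV  ~ {gev_to_kg(125):.2e} kg")    # ~2.2e-25 kg, illustrative Higgs mass
print(f"3500 GeV ~ {gev_to_kg(3500):.2e} kg")

# Mass reach: producing a pair of 3.5 TeV particles needs at least 2 * 3.5 = 7 TeV
# at the collision point, and in practice less is usable because the energy is
# shared among many particles -- the point made in the text.
pair_mass_tev = 2 * 3.5
print(f"minimum collision energy for the pair: {pair_mass_tev} TeV")
```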
<urn:uuid:f37ea100-b3b9-472e-bffa-c0ee6d515f58>
3.328125
1,236
Personal Blog
Science & Tech.
47.898318
There has been a flurry of recent commentary concerning Amazon drought – some of it good, some of it not so good. The good stuff has revolved around an interesting, recently completed field experiment that was run out of the Woods Hole Research Center (not to be confused with the Woods Hole Oceanographic Institution), where they have been examining rainforest responses to drought – basically by using a very large rainproof tent to divert precipitation at ground level (the trees don't get covered up). As one might expect, a rainforest without rain does not do well! But exactly what happens, when it happens, and how the biosphere responds are poorly understood. This six-year-long field experiment may provide a lot of good new data on plant strategies for dealing with drought, which will be used to improve the models and our understanding of the system. The not-so-good part comes when this experiment is linked too directly to the ongoing drought in the southern Amazon. In the experiment, older tree mortality increased markedly after the third year of no rain at all (with around 1 in 10 trees dying). Since parts of the Amazon are now entering a second year of drought (possibly related to a persistent northward excursion of the ITCZ), the assumption in the Independent story (with the headline ‘One year to save the Amazon’) was that trees will start dying forest-wide next year should the drought continue. This is incorrect for a number of reasons. Firstly, drought conditions are not the same as no rain at all – the rainfall deficit in the middle of the Amazon is significant, but not close to 100%! Secondly, the rainfall deficits are quite regionally variable, so a forest-wide response is highly unlikely. Also, the trees won't all die in just one more year and could recover, depending on yearly variation in climate. While this particular article is exaggerated, there are, however, some issues that should provoke genuine concern. Worries about the effects of the prolonged drought (and other natural and human-related disturbances) in the Amazon are indeed widespread and are partly related to the idea that there may be a ‘tipping point’ for the rainforest (see this recent article for some background). This idea is exemplified in a study last year (Hutyra et al, 2005) which looked at the sharp transition between forest and savannah and related that to the coupling of drought incidence and wild fires with the forest ecosystem. Modelling work has suggested that the Amazon may have two vegetation/regional climate equilibria due to vegetation and climate tending to reinforce each other if one is pushed in a particular direction (Oyama and Nobre, 2003). The two alternative states could be one rainforested and wet like today, the other mainly savannah and dry in the Eastern Amazon. Thus there is a fear that too much drought or disturbance could flip parts of the forest into a more savannah-like state. However, there is a great deal of uncertainty in where these thresholds may lie, how likely they are to be crossed, and the rate at which change would occur. Models range from predicting severe and rapid change (Cox et al, 2004) to relatively mild changes (Friedlingstein et al, 2003). Locally these responses can be dramatic, but of course, these changes also have big implications for the total carbon cycle feedback and so have global consequences as well.
Part of that uncertainty is related to the very responses that are being monitored in the WHRC experiment and so while I would hesitate to make a direct link, indirectly these results may have big consequences for what we think may happen to the Amazon in the future. Special thanks to Nancy Kiang for taking the time to discuss this with me. Update: WHRC comments on the articles below.
<urn:uuid:34247e22-9bec-4ed3-b5e5-15b6d905fbaf>
2.8125
773
Personal Blog
Science & Tech.
38.063333
LISP, the Programming Language: A Language for Symbolic Computation through the Processing of Lists. There are primarily two computer languages used in artificial intelligence work, LISP and PROLOG. LISP, which is short for List Processing, was created by John McCarthy of Stanford University. It looks klutzy, but it is based upon the lambda calculus and works quite well for computation associated with artificial intelligence. PROLOG has an elegant formulation but does not have the range of application that LISP has. When they formulated the Fifth Generation project, the Japanese chose PROLOG over LISP as its programming language. This was perhaps one of the factors that contributed to the failure of the Fifth Generation project.
<urn:uuid:bfb69420-bb97-4807-ab29-878736b74ff1>
3.21875
154
Knowledge Article
Software Dev.
27.653437
A Field Guide to Supernova Spectra. Both types exhibit a wide variety of subclasses. Type Ia is of no interest because these stars don't emit neutrinos. Types Ib and Ic are thought to undergo core collapse like Type II supernovae and, therefore, should emit neutrinos. As Maurice Gavin explains in "The Revival of Amateur Spectroscopy", low-resolution spectra of objects as faint as magnitude 13 or thereabouts are accessible to modest amateur equipment. (A few superposed 20-minute exposures with a 12-inch telescope or so should produce an adequate image.) But what will supernova spectra look like, especially shortly after the outburst begins, as captured by small telescopes and low-resolution spectrographs? Here's your field guide. To prepare it, we started with high-resolution, calibrated spectra supplied by Alexei Filippenko (University of California, Berkeley). Then, to simulate Gavin's CCD results, we degraded the spectra to a resolution of 50 angstroms per pixel. Finally, and with dramatic results, we changed the intensity along each spectrum to reflect variations in the unfiltered sensitivity of popular CCD chips: the KAF-0400 from Kodak and the ICX055BL from Sony. Thus, what you see here is what you will get! (Astrophotographers using panchromatic emulsions will record spectra that look much like the originals.)
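The two processing steps described, rebinning a spectrum to roughly 50 angstroms per pixel and weighting it by a detector response, can be sketched with NumPy. The spectrum and sensitivity curve below are synthetic stand-ins, since the Filippenko spectra and the exact KAF-0400/ICX055BL response curves are not reproduced here.

```python
import numpy as np

# Synthetic high-resolution spectrum: a smooth continuum with one broad feature.
wave = np.arange(4000.0, 9000.0, 2.0)                      # wavelength grid, angstroms
flux = 1.0 + 0.5 * np.exp(-((wave - 6150.0) / 120.0) ** 2) # made-up broad emission bump

def rebin(wave, flux, bin_width=50.0):
    """Average the flux into coarse bins to mimic a low-resolution spectrograph."""
    edges = np.arange(wave[0], wave[-1] + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(wave, edges) - 1
    binned = np.array([flux[idx == i].mean() for i in range(len(centers))])
    return centers, binned

def apply_sensitivity(wave, flux):
    """Weight by a toy unfiltered CCD response peaking in the red (illustrative only)."""
    response = np.exp(-((wave - 7000.0) / 2000.0) ** 2)
    return flux * response

centers, low_res = rebin(wave, flux)
observed = apply_sensitivity(centers, low_res)
print(centers[:5])
print(observed[:5])
```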
<urn:uuid:0144adaf-17ca-4b41-b201-32a6d62c4484>
3.171875
294
Tutorial
Science & Tech.
31.275975
I teach computer classes for a living to corporate clients of all levels. After 2 years of teaching, I have learned a lot about communication between people of various levels of computer experience. This tutorial assumes that you have no prior programming experience, but that you have created your own HTML pages. If you find this tutorial helpful, please let me know (it's my only reward). Also, links are graciously accepted. Actually, Java and JavaScript have almost nothing in common except for the name. Although Java is technically an interpreted programming language, it is coded in a similar fashion to C++, with separate header and class files, compiled together prior to execution. It is powerful enough to write major applications and insert them in a web page as a special object called an "applet." Java has been generating a lot of excitement because of its unique ability to run the same program on IBM, Mac, and Unix computers. Java is not considered an easy-to-use language for non-programmers. What is Object Oriented Programming? OOP is a programming technique (note: not a language structure - you don't even need an object-oriented language to program in an object-oriented fashion) designed to simplify complicated programming concepts. In essence, object-oriented programming revolves around the idea of user- and system-defined chunks of data, and controlled means of accessing and modifying those chunks. Object-oriented programming consists of Objects, Methods and Properties. An object is basically a black box which stores some information. It may have a way for you to read that information and a way for you to write to, or change, that information. It may also have other less obvious ways of interacting with the information. Some of the information in the object may actually be directly accessible; other information may require you to use a method to access it - perhaps because the way the information is stored internally is of no use to you, or because only certain things can be written into that information space and the object needs to check that you're not going outside those limits. The directly accessible bits of information in the object are its properties. The difference between data accessed via properties and data accessed via methods is that with properties, you see exactly what you're doing to the object; with methods, unless you created the object yourself, you just see the effects of what you're doing. Objects and Properties Your web page document is an object. Any table, form, button, image, or link on your page is also an object. Each object has certain properties (information about the object). For example, the background color of your document is written document.bgcolor. You would change the color of your page to red by writing the line: document.bgcolor="red" The contents (or value) of a textbox named "password" in a form named "entryform" is document.entryform.password.value. Most objects have a certain collection of things that they can do. Different objects can do different things, just as a door can open and close, while a light can turn on and off. A new document is opened with the method document.open(). You can write "Hello World" into a document by typing document.write("Hello World"). open() and write() are both methods of the object: document.
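The distinction the tutorial draws between properties (data you can read or set directly) and methods (actions the object performs, possibly guarding its internal data) is language-independent, as the tutorial itself notes. Here is the same idea as a small sketch in Python rather than JavaScript; the class and its names are invented purely for illustration and echo the door analogy above.

```python
class Door:
    """A toy object: one directly accessible property and a few methods."""

    def __init__(self):
        self.color = "red"        # a property: read it or change it directly
        self._is_open = False     # internal state, reached only through methods

    def open(self):               # a method: an action the object knows how to do
        self._is_open = True

    def close(self):
        self._is_open = False

    def is_open(self):            # a method that exposes state without letting you
        return self._is_open      # write arbitrary values into it

door = Door()
door.color = "blue"   # property access, like document.bgcolor = "red"
door.open()           # method call, like document.open()
print(door.color, door.is_open())
```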
<urn:uuid:7772f169-1fe0-4821-9f17-fc1a29f7ccbe>
3.390625
717
Personal Blog
Software Dev.
45.295065
Chronometric Techniques–Part II Most of the chronometric dating methods in use today are radiometric. That is to say, they are based on knowledge of the rate at which certain radioactive isotopes within dating samples decay or the rate of other cumulative changes in atoms resulting from radioactivity. Isotopes are specific forms of elements. The various isotopes of the same element differ in terms of atomic mass but have the same atomic number. In other words, they differ in the number of neutrons in their nuclei but have the same number of protons. The spontaneous decay of radioactive elements occurs at different rates, depending on the specific isotope. These rates are stated in terms of half-lives. One half-life is the amount of time required for ½ of the original atoms in a sample to decay. Over the second half-life, ½ of the atoms remaining decay, which leaves ¼ of the original quantity, and so on. In other words, the change in numbers of atoms follows a geometric scale, as illustrated by the graph below, in which the red curve shows this geometric pattern of decay. The decay of atomic nuclei provides us with a reliable clock that is unaffected by normal forces in nature. The rate will not be changed by intense heat, cold, pressure, or moisture. The most commonly used radiometric dating method is radiocarbon dating. It is also called carbon-14 and C-14 dating. This technique is used to date the remains of organic materials. Dating samples are usually charcoal, wood, bone, or shell, but any tissue that was ever alive can be dated. Radiocarbon dating is based on the fact that cosmic radiation from space constantly bombards our planet. As cosmic rays pass through the atmosphere, they occasionally collide with gas atoms, resulting in the release of neutrons. When the nucleus of a nitrogen (14N) atom in the atmosphere captures one of these neutrons, the atom subsequently changes into carbon-14 (14C) after the release of a proton. The carbon-14 quickly bonds chemically with atmospheric oxygen to form carbon dioxide gas. Carbon-14 is a rare, unstable form of carbon. Only one in a trillion carbon atoms in the atmosphere is carbon-14. The majority are carbon-12 (98.99%) and carbon-13 (1.1%). From a chemical standpoint, all of these isotopes of carbon behave exactly the same. Carbon dioxide in the atmosphere drifts down to the earth's surface where much of it is taken in by green growing plants, and the carbon is used to build new cells by photosynthesis. Animals eat plants or other animals that have eaten them. Through this process, a small amount of carbon-14 spreads through all living things and is incorporated into their proteins and other organic molecules. Formation of carbon-14 in the atmosphere and its entrance into living things. As long as an organism is alive, it takes in carbon-14 and the other carbon isotopes in the same ratio as exists in the atmosphere. Following death, however, no new carbon is consumed. Progressively through time, the carbon-14 atoms decay and once again become nitrogen-14. As a result, there is a changing ratio of carbon-14 to the more atomically stable carbon-12 and carbon-13 in the dead tissue. That rate of change is determined by the half-life of carbon-14, which is 5730 ± 40 years. Because of this relatively rapid half-life, there is only about 3% of the original carbon-14 in a sample remaining after 30,000 years. Beyond 40-50,000 years, there usually is not enough left to measure with conventional laboratory methods.
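The half-life arithmetic above translates directly into an age formula: if a fraction f of the original carbon-14 remains, the elapsed time is t = 5,730 × log2(1/f) years. A quick Python check, using the 5,730-year half-life quoted above; the values match the decay table that follows.

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def remaining_fraction(years: float) -> float:
    """Fraction of the original carbon-14 left after a given time."""
    return 0.5 ** (years / HALF_LIFE_C14)

def radiocarbon_age(fraction_left: float) -> float:
    """Age implied by the measured fraction of carbon-14 remaining."""
    return HALF_LIFE_C14 * math.log2(1.0 / fraction_left)

print(f"{remaining_fraction(30000):.3f}")   # ~0.027, the 'about 3%' quoted above
print(f"{radiocarbon_age(1/8):.0f} years")  # 17190, i.e. three half-lives
```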
Radioactive decay rate for Carbon-14 (N = the number of atoms):
Half-lives   Years past   C-14 atoms   C-12 atoms
0            0            1 N          1 N
1            5,730        1/2 N        1 N
2            11,460       1/4 N        1 N
3            17,190       1/8 N        1 N
4            22,920       1/16 N       1 N
5            28,650       1/32 N       1 N
6            34,380       1/64 N       1 N
7            40,110       1/128 N      1 N
The conventional radiocarbon dating method involves burning a sample in a closed tube containing oxygen. The carbon-containing gas that is produced is then cooled to a liquid state and placed in a lead-shielded box with a sensitive Geiger counter. This instrument registers the radioactivity of the carbon-14 atoms. Specifically, it detects the relatively weak beta particles released when carbon-14 nuclei decay. The age of a sample is determined by the number of decays recorded over a set period of time. Older samples have less carbon-14 remaining and, consequently, less frequent decays. Knowing the half-life of carbon-14 allows the calculation of a sample's age. A radiocarbon sample being prepared for dating with the AMS technique. A relatively new variation of the radiocarbon dating method utilizes an accelerator mass spectrometer, which is a device usually used by physicists to measure the abundance of very rare radioactive isotopes. When used for dating, this AMS method involves actually counting individual carbon-14 atoms. This allows the dating of much older and smaller samples, but at a far higher cost. Although organic materials as old as 100,000 years potentially can be dated with AMS, dates older than 60,000 years are still rare. Radiocarbon and tree-ring date comparisons made by Hans Suess provide needed data to make radiocarbon dates more reliable. Paleoanthropologists and archaeologists must always be aware of possible radiocarbon sample contamination that could result in inaccurate dates. Such contamination can occur if a sample is exposed to carbon compounds in exhaust gases produced by factories and motor vehicles burning fossil fuels such as coal or gasoline. The result is radiocarbon dates that are too old. This has been called the Autobahn effect, named after the German high-speed roadway system. Archaeologists in that country first noted this source of contamination when samples found near the Autobahn were dated. The effect of global burning of fossil fuels on radiocarbon dates was verified and calibrated by Hans Suess of the University of California, San Diego when he radiocarbon dated bristlecone pine tree growth rings that were of known chronometric ages. Subsequently, it is also called the Suess effect. Other kinds of sample contamination can cause carbon-14 dates to be too young. This can occur if the sample is impregnated with tobacco smoke or oils from a careless researcher's hands. This is now well known and is easily avoided during excavation. Still another potential source of error in radiocarbon dating that is adjusted for stems from the assumption that cosmic radiation enters our planet's atmosphere at a constant rate. In fact, the rate changes slightly through time, resulting in varying amounts of carbon-14 being created. This has become known as the de Vries effect because of its discovery by the Dutch physicist Hessel de Vries. All of these potential sources of error in radiocarbon dating are now well understood, and compensating corrections are made so that the dates are reliable. There are a number of other radiometric dating systems in use today that can provide dates for much older sites than those datable by radiocarbon dating. Potassium-argon (K-Ar) dating is one of them.
It is based on the fact that potassium-40 (40K) decays into the gas argon-40 (40Ar) and calcium-40 (40Ca) at a known rate. The half-life of potassium-40 is approximately 1.25 billion years. Measurement of the amount of argon-40 in a sample is the basis for age determination. Dating samples for this technique are geological strata of volcanic origin. While potassium is a very common element in the earth's crust, potassium-40 is a relatively rare isotope of it. However, potassium-40 is usually found in significant amounts in volcanic rock and ash. In addition, any argon that existed prior to the last time the rock was molten will have been driven off by the intense heat. As a result, all of the argon-40 in a volcanic rock sample is assumed to date from that time. When a fossil is sandwiched between two such volcanic deposits, their potassium-argon dates provide a minimum and maximum age. In the example below, the bone must date to sometime between 1.75 and 1.5 million years ago. Using the potassium-argon method to date volcanic ash strata above and below a bone sample in order to determine a minimum and a maximum age. Potassium-argon dates usually have comparatively large statistical plus or minus factors. They can be on the order of plus or minus 1/4 million years for a 2 million year old date. This is still acceptable because these dates help us narrow down the time range for a fossil. The use of additional dating methods at the same site allows us to refine it even more. NOTE: the plus or minus number following radiometric dates is not an error factor. Rather, it is a probability statement. For instance, a date of 100,000 ± 5,000 years ago means that there is a high probability the date is in the range of 95,000 and 105,000 years ago and most likely is around 100,000. Radiometric dates, like all measurements in science, are close statistical approximations rather than absolutes. This will always be true due to the finite limits of measuring equipment. This does not mean that radiometric dates or any other scientific measurements are unreliable. Potassium-argon dating has become a valuable tool for human fossil hunters, especially those working in East Africa. Theoretically it can be used for samples that date from the beginning of the earth (4.54 billion years) down to 100,000 years ago or even more recently. Paleoanthropologists use it mostly to date sites in the 1 to 5 million year old range. This is the critical time period during which humans evolved from their ape ancestors. A relatively new technique related to potassium-argon dating compares the ratios of argon-40 to argon-39 in volcanic rock. This provides more accurate dates for volcanic deposits and allows the use of smaller samples. Fission Track Dating Another radiometric method that is used for samples from early human sites is fission track dating. This is based on the fact that a number of crystalline or glass-like minerals, such as obsidian, mica, and zircon crystals, contain trace amounts of uranium-238 (238U), which is an unstable isotope. When atoms of uranium-238 decay, there is a release of energy-charged alpha particles which burn narrow fission tracks, or damage trails, through the glassy material. These can be seen and counted with an optical microscope. Fission tracks in obsidian as they would appear with an optical microscope. The number of fission tracks is directly proportional to the amount of time since the glassy material cooled from a molten state.
Since the half-life of uranium-238 is known to be approximately 4.5 billion years, the chronometric age of a sample can be calculated. This dating method can be used with samples that are as young as a few decades to as old as the earth and beyond. However, paleoanthropologists rarely use it to date sites more than several million years old. With the exception of early historic human-made glass artifacts, the fission track method is usually only employed to date geological strata. Artifacts made out of obsidian and mica are not fission track dated because it would only tell us when the rocks cooled from a molten state, not when they were made into artifacts by our early human ancestors. Thermoluminescence (TL) dating is a radiometric method based on the fact that trace amounts of radioactive atoms, such as uranium and thorium, in some kinds of rock, soil, and clay produce constant low amounts of background ionizing radiation. The atoms of crystalline solids, such as pottery and rock, can be altered by this radiation. Specifically, the electrons of quartz, feldspar, diamond, or calcite crystals can become displaced from their normal positions in atoms and trapped in imperfections in the crystal lattice of the rock or clay molecules. These energy charged electrons progressively accumulate over time. When a sample is heated to high temperatures in a laboratory, the trapped electrons are released and return to their normal positions in their atoms. This causes them to give off their stored energy in the form of light impulses (photons). This light is referred to as thermoluminescence (literally "heat light"). A similar effect can be brought about by stimulating the sample with infrared light. The intensity of thermoluminescence is directly related to the amount of accumulated changes produced by background radiation, which, in turn, varies with the age of the sample and the amount of trace radioactive elements it contains. Thermoluminescence release resulting from rapidly heating a crushed clay sample. What is actually determined is the amount of elapsed time since the sample had previously been exposed to high temperatures. In the case of a pottery vessel, usually it is the time since it was fired in a kiln. For the clay or rock lining of a hearth or oven, it is the time since the last intense fire burned there. For burned flint, it is the time since it had been heated in a fire to improve its flaking qualities for stone tool making. The effective time range for TL dating is from a few decades back to about 300,000 years, but it is most often used to date things from the last 100,000 years. Theoretically, this technique could date samples as old as the solar system if we could find them. However, the accuracy of TL dating is generally lower than most other radiometric techniques. Electron Spin Resonance Dating Another relatively new radiometric dating method related to thermoluminescence is electron spin resonance (ESR). It is also based on the fact that background radiation causes electrons to dislodge from their normal positions in atoms and become trapped in the crystalline lattice of the material. When odd numbers of electrons are separated, there is a measurable change in the magnetic field (or spin) of the atoms. Since the magnetic field progressively changes with time in a predictable way as a result of this process, it provides another atomic clock, or calendar, that can be used for dating purposes.
Unlike thermoluminescence dating, however, the sample is not destroyed with the ESR method. This allows samples to be dated more than once. ESR is used mostly to date calcium carbonate in limestone, coral, fossil teeth, mollusks, and egg shells. It also can date quartz and flint. Paleoanthropologists have used ESR mostly to date samples from the last 300,000 years. However, it potentially could be used for much older samples. Comparison of the Time Ranges for Dating Methods: Whenever possible, paleoanthropologists collect as many dating samples from an ancient human occupation site as possible and employ a variety of chronometric dating methods. In this way, the confidence level of the dating is significantly increased. The methods that are used depend on the presumed age of the site from which the samples were excavated. For instance, if a site is believed to be over 100,000 years old, dendrochronology and radiocarbon dating could not be used. However, potassium-argon, fission track, amino acid racemization, thermoluminescence, electron spin resonance, and paleomagnetic dating methods would be considered. Effective time range of the major chronometric dating methods. In addition to the likely time range, paleoanthropologists must select dating techniques based on the kinds of datable materials available. Dendrochronology can only date tree-rings. Any organic substances can be used for radiocarbon and amino acid racemization dating. Calcium-rich parts of animals such as coral, bones, teeth, mollusks, and egg shells can be dated with the electron spin resonance technique. In addition, ESR can date some non-organic minerals including limestone, quartz, and flint. Burned clay and volcanic deposits are materials used for paleomagnetic dating. Glassy minerals, such as mica, obsidian, and zircon crystals, are datable with the fission track method. Pottery and other similar materials containing crystalline solids are usually dated with the thermoluminescence technique. The potassium-argon and argon-argon methods are used to date volcanic rock and ash deposits. Other chronometric dating methods not described here include uranium/thorium dating, oxidizable carbon ratio (OCR) dating, optically stimulated luminescence (OSL) dating, varve analysis, and obsidian hydration dating. Copyright © 1998-2012 by Dennis O'Neil. All rights reserved.
<urn:uuid:0e63bd67-645e-4c01-8ec7-e353f79e75fb>
3.71875
3,552
Knowledge Article
Science & Tech.
42.607484
Forest Ecosystems: Current Research Regional Fire/Climate Relationships in the Pacific Northwest and Beyond Fire exerts a strong influence on the structure and function of many terrestrial ecosystems. In forested ecosystems, the factors controlling the frequency, intensity, and size of fires are complex and operate at different spatial and temporal scales. Since climate strongly influences most of these factors (such as vegetation structure and fuel moisture), understanding the past and present relationships between climate and fire is essential to developing strategies for managing fire-prone ecosystems in an era of rapid climate change. The influence of climate change and climate variability on fire regimes and large fire events in the Pacific Northwest (PNW) and beyond is the focus of this project. There is mounting evidence that a detectable relationship exists between extreme fire years in the West and Pacific Ocean circulation anomalies. The El Niño/Southern Oscillation (ENSO) influences fire in the Southwest (SW) and the Pacific Decadal Oscillation (PDO) appears to be related to fire in the PNW and Northern Rockies (NR). However, there are reasons to expect that processes driving fire in PNW, SW, and NR are not constant in their relative influence on fire through time or across space and that their differentiation is not stationary through time or across space. - How regionally specific is the relationship between large fire events and precipitation/atmospheric anomalies associated with ENSO and PDO during the modern record? - What do tree-ring and other paleo-records tell us about the temporal variability of the patterns of fire/climate relationships? - How is climate change likely to influence climate/fire relationships given the demonstrated influences of climate variability? Figure 1 A simple model of climate–fire-vegetation linkages. This project emphasizes the mechanisms and variability indicated by (1). For publications on climate impacts on PNW forest ecosystems, please see CIG Publications. Gedalof, Z. 2002. Links between Pacific basin climatic variability and natural systems of the Pacific Northwest. PhD dissertation, School of Forestry, University of Washington, Seattle. Littell, J.S. 2002. Determinants of fire regime variability in lower elevation forests of the northern greater Yellowstone ecosystem. M.S. Thesis, Big Sky Institute/Department of Land Resources and Environmental Sciences, Montana State University, Bozeman. Mote, P.W., W.S. Keeton, and J.F. Franklin. 1999. Decadal variations in forest fire activity in the Pacific Northwest. In Proceedings of the 11th Conference on Applied Climatology, pp. 155-156, Boston, Massachusetts: American Meteorological Society.
<urn:uuid:e4092633-013e-4995-97f5-6212c2dac106>
2.8125
549
Academic Writing
Science & Tech.
27.702762
During the 1980s the number of babies born annually was around 12. The total twice fell sharply in the 1990s until just a single calf appeared in 2000. Since then, the average has risen to more than 20 calves a year. Yet this remains 30 percent below the whales' potential rate of reproduction. Why? If scientists are to guide the species' salvation, they need more data and more answers. Fast. One August morning in 2006, when the sea was a sheet of dimpled satin shot through with silver threads, I joined Scott Kraus, the New England Aquarium's vice president of research, and Rosalind Rolland, a veterinarian and senior scientist with the aquarium, on an unlikely quest in the Bay of Fundy. When leviathans rose in the distance through the sea's shimmering skin, Kraus steered the boat downwind of where they had briefly surfaced, handed me a data sheet to log our movements, and zigzagged into the faint breeze. Rolland moved onto the bow. Beside her was Fargo, the world's premier whale-poop-sniffing dog. Fargo began to pace from starboard to port, nostrils flaring. Rolland focused on the rottweiler's tail. If it began to move, it would mean he had picked up a scent—and he could do that a nautical mile away. Twitch … Twitch … Wag, wag. "Starboard," Rolland called to Kraus. "A little more. Nope, too far. Turn to port. OK, he's back on it." A quarter of an hour ran by like the bay's currents. All I saw were clumps of seaweed. Suddenly, the dog sat and turned to fix Rolland with a look. We stopped, and out of the vast ocean horizon came a single chunk of digested whale chow, bobbing along mostly submerged, ready to sink from view or dissolve altogether within minutes. Kraus grabbed the dip net and scooped up the fragrant blob. You'd have thought he was landing a fabulous fish. "At first, people are incredulous. Then come the inevitable jokes. But this," said the man who has led North Atlantic right whale research for three decades, "is actually some of the best science we've done." With today's technology, DNA from sloughed-off intestinal cells in a dung sample can identify the individual that produced it. Residues of hormones tell Rolland about the whale's general condition, its reproductive state—mature? pregnant? lactating?—levels of stress, and presence of parasites.
<urn:uuid:f20dd62f-b6cd-4a43-b899-d8bd8fdb0627>
3.1875
541
Nonfiction Writing
Science & Tech.
65.860114
Please use this identifier to cite or link to this item: http://hdl.handle.net/1959.13/916979 - The 'humped' soil production function: eroding Arnhem Land, Australia Heimsath, Arjun M.; Hancock, Greg R. - The University of Newcastle. Faculty of Science & Information Technology, School of Environmental and Life Sciences - We report erosion rates and processes, determined from in situ-produced beryllium-10 (¹⁰Be) and aluminum-26 (²⁶Al), across a soil-mantled landscape of Arnhem Land, northern Australia. Soil production rates peak under a soil thickness of about 35 cm and we observe no soil thicknesses between exposed bedrock and this thickness. These results thus quantify a well-defined ‘humped’ soil-production function, in contrast to functions reported for other landscapes. We compare this function to a previously reported exponential decline of soil production rates with increasing soil thickness across the passive margin exposed in the Bega Valley, south-eastern Australia, and found remarkable similarities in rates. The critical difference in this work was that the Arnhem Land landscapes were either bedrock or mantled with soils greater than about 35 cm deep, with peak soil production rates of about 20 m/Ma under 35–40 cm of soil, thus supporting previous theory and modeling results for a humped soil production function. We also show how coupling point-specific with catchment-averaged erosion rate measurements leads to a better understanding of landscape denudation. Specifically, we report a nested sampling scheme where we quantify average erosion rates from the first-order, upland catchments to the main, sixth-order channel of Tin Camp Creek. The low (~5 m/Ma) rates from the main channel sediments reflect contributions from the slowly eroding stony highlands, while the channels draining our study area reflect local soil production rates (~10 m/Ma off the rocky ridge; ~20 m/Ma from the soil mantled regions). Quantifying such rates and processes helps determine spatial variations of soil thickness as well as helping to predict the sustainability of the Earth's soil resource under different erosional regimes. - Earth Surface Processes and Landforms Vol. 34, Issue 12, p. 1674-1684 - Publisher Link - John Wiley & Sons - Resource Type - journal article
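To make the phrase "humped soil production function" concrete: unlike the exponential decline reported for the Bega Valley, the Arnhem Land rates rise to a peak (about 20 m/Ma under roughly 35 cm of soil) and then fall off with increasing soil thickness. The snippet below evaluates a generic humped curve with those two numbers plugged in; the functional form and the exponential comparison parameters are illustrative assumptions, not the authors' fitted model.

```python
import math

P_MAX = 20.0    # peak soil production rate, m/Ma (value from the abstract)
H_PEAK = 0.35   # soil thickness at the peak, m (value from the abstract)

def humped_production(h: float) -> float:
    """Generic hump: rises to P_MAX at h = H_PEAK, then decays (illustrative form only)."""
    return P_MAX * (h / H_PEAK) * math.exp(1.0 - h / H_PEAK)

def exponential_production(h: float, p0: float = 20.0, h0: float = 0.35) -> float:
    """Monotonic exponential decline, the Bega Valley-style alternative (assumed parameters)."""
    return p0 * math.exp(-h / h0)

for h in (0.0, 0.2, 0.35, 0.6, 1.0):
    print(f"h = {h:.2f} m : humped {humped_production(h):5.1f}  "
          f"exponential {exponential_production(h):5.1f}  m/Ma")
```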
<urn:uuid:aaec1b94-9ba9-4b55-bc0f-d21d25032da1>
2.96875
496
Academic Writing
Science & Tech.
41.331953
As the popularized side of the debate has led us to expect, the authors found that the coldest year (1863) and the coldest decade (1810s) are early in the record, well before the ballyhooed warming of the 20th century. Problematic from a climate change standpoint is the fact that the two distinct cold periods that made the 1810s the coldest decade followed an 1809 “unidentified” volcanic eruption and the eruption of Tambora in 1815 – unusual geologic events that defined the climate. However, of greater importance is the fact that the researchers found the warmest year on record to be 1941, while the 1930s and 1940s are the warmest decades on record. This represents very bad news for climate change alarmists, since the warmest period was NOT the last quarter of the 20th century. In fact, the last two decades of the 20th century (1981-1990 and 1991-2000) were colder across the study area than any of the previous six decades, dating back to the 1900s and 1910s. When examining the instrumental records of the stations it is apparent that no net warming has occurred since the warm period of the 1930s and 1940s.

Ouch. The note concludes:

In a region of the world where climate models indicate that the greatest impacts of CO2-induced global warming will be most rapid and most evident, this recent extension of instrumental surface air temperature records produces a climate history that seems to suggest otherwise. If global climate models are correct, the increase in CO2 concentration since 1930 should be evidenced rather dramatically in air temperature across a high-latitude region of the Northern Hemisphere such as Greenland. The evidence provided by the instrumental record of air temperature along the western and southern coasts of Greenland produces doubt in the degree to which increased CO2 concentrations impact high latitude climate as represented by the climate models upon which climate change alarmists are hanging their hats.

What's fascinating to this layman is how new observations are still being made which seem to challenge what is evidently not at all a settled body of theory. And that theory - and a dodgy discount rate - are a basis for major action?
<urn:uuid:54298b9a-1cd7-4039-ba9b-013180e3e21a>
3.734375
447
Personal Blog
Science & Tech.
34.807563
Climate Witness: Pak Azhar, Indonesia

I have been living in Balikukup since 1999. Balikukup is a small island of 18 ha consisting mainly of sandbanks. However, the island’s size is not fixed as it depends on the tides. During low tide, a large sandbank is exposed, extending 1 km towards the sea.

The weather is a significant factor in the work of a sea cucumber fisherman

I started collecting sea cucumbers in 2001. There are 2 ways to catch sea cucumbers; some fishermen just search on the beaches around the island during low tide at night, while others dive underwater, down to depths of 10 m. Sea cucumber fishermen are highly dependent on the weather to do their job. Fishermen cannot catch good harvests during rainy or stormy weather, as sea cucumbers hide underneath the sand during that time. Therefore, it is important for a sea cucumber fisherman to predict what the weather will be like before going to work. Usually I observe the weather at dusk or in the early evening to predict whether it is going to rain or be stormy at night. But nowadays, it is getting harder to predict the weather accurately. For example, yesterday in the early evening I predicted that there would be no rain at night, but around midnight and early morning heavy rain came down. In the old days, we fishermen could predict the weather. But not anymore. The elders on our island also mentioned the same thing. Since 2002, Atang, one of the fishermen elders whom we regard as the best expert in predicting the weather in Balikukup, has said that the weather is getting unpredictable. Before, Atang could produce a very good prediction, even for the course of a full year.

‘Bulan janda’ or Widow month

One example of the unpredictable weather is the disappearance of the ‘bulan janda’, or ‘widow month’, phenomenon. It is called widow month because when the fishermen went to sea during the event, they rarely came home safely. Thus, their wives became widows. Widow month is an annual event when the wind blows very strongly for 44 days from the south. This wind stops for a short period of time (half an hour), and then goes back to blowing very hard. During that time it is impossible for fishermen to go to sea. Fishermen who had saved enough money and food supplies did not need to go to sea during ‘widow month’ because the conditions were too dangerous. However, other fishermen had no other option but to go to sea during the event. The phenomenon of ‘widow month’ does not exist anymore. The last time it happened was in 1991 according to fishermen. After 1991, during the supposed ‘widow month’, there could be calm periods for up to 2 weeks. None of the fishermen understands why the ‘widow month’ phenomenon has slowly disappeared.

No clue when money will come

The unpredictable weather is a disadvantage for us fishermen because we no longer know when we can go fishing. It is difficult for us to predict when we will make money. Before, we could estimate when the right time was to make an income and put some money aside, as we could predict when we could go fishing. Now, whenever we have good weather, we just go fishing. We can no longer make financial plans.

Credit: WWF-Indonesia / Primayunta

Scientific review

Reviewed by: Dr Heru Santoso, Project Coordinator of the TroFCCA (Tropical Forests and Climate Change Adaptation) project, Indonesia

The witnesses described three natural phenomena that they consider to be climate related: increased land erosion, higher tides and unpredictable weather.
Non-climatic factors could also contribute to these phenomena; for example, an increase in land erosion could be due to land mismanagement, and a higher tide could be a consequence of regional subsidence. Nevertheless, in all three locations the people observed an increase in wave energy and increasingly unpredictable weather that could affect the sustainability of their villages and their livelihoods. There is very little scientific literature reporting whether the phenomena observed in this specific region are related to climate change. The region is open to the Sulawesi and Sulu seas, which serve as flow paths for oceanic currents from the western Pacific Ocean to the Indian Ocean. Higher tides in the Berau area could be related to the rise in sea surface level in the western Pacific during La Niña events. This phenomenon has recently become more noticeable than in the past, probably because global warming has accentuated the extent of this climate mechanism (Mimura et al. 2007). For the same reason, unpredictable and abrupt changes of weather have become more noticeable. Abrupt changes are usually associated with high wind speeds, which can only occur if there is a significant difference in pressure between two areas. Intense heating, particularly over a heat-sensitive land area, could generate such a pressure difference quickly under warmer conditions. Land sensitivity to heat is higher where the forest cover has gone or is heavily degraded. The ‘widow month’, a once-regular phenomenon of strong southerly wind that has been disappearing, is normally associated with the monsoonal trade wind in which the easterly wind from eastern Indonesia turns northward toward Asia. Global warming, or higher regional temperatures, could alter the distribution of regional or subregional energy concentration and could also alter the scale and extent of this circulation. Therefore, global warming could have contributed to the increasing trend in the recurrence of the natural phenomena reported by the witnesses. However, it is proper to verify whether global warming has accentuated climate mechanisms in this subregion by comparing them with other climate variables. For example, during La Niña events warm water from the east flows to the west, usually bringing more rain. The high tides in the Berau region that could be explained by this mechanism could therefore be checked against rainfall data for the particular time of the events, preferably using a long period of observational data. All articles are subject to scientific review by a member of the Climate Witness Science Advisory Panel.
<urn:uuid:17978afc-38d0-4b2f-ac01-3569ee170f80>
2.796875
1,253
Knowledge Article
Science & Tech.
39.749783
Plan for an unmanned mission to Earth's core First, split the ground open with cataclysmic force, then fill it with the world's entire supply of molten iron carrying a small communication probe - and the resulting 3,000 kilometre journey to Earth's core should take about a week, according to a U.S. planetary physicist. "We would learn a lot more about the nature of Earth and how it works - the generation of the magnetic field, the origin of some kinds of volcanoes, the heat sources inside Earth, the stuff Earth is made of - in short, all the basic questions," he told ABC Science Online. In his paper, Stevenson argues that "planetary missions have enhanced our understanding of the Solar System and how planets work, but no comparable exploratory effort has been directed towards the Earth's interior". "Space probes have so far reached a distance of about 6,000 million kilometres, but subterranean probes (drill holes) have descended only some 10 kilometres into the Earth," he writes in his article. The main barrier to travelling to the core is the dense matter of the Earth's mantle. The energy required to penetrate the mantle by melting is about a thousand million times the energy needed for space travel, per unit distance travelled. Stevenson's scheme relies on principles observed in 'magma fracturing' - where molten rock migrates through the Earth's interior. He proposes pouring 100 million tonnes of molten iron alloy into a crack of about 300 metres deep in the Earth's surface. This massive volume of iron, containing a small communication probe, would work its way down to the Earth's core, along the crack, which would open up by the force of gravity and close up behind itself. The crack would open downwards at 5 metres per second, giving a mission timescale of "around a week". Such 'Earth dives' have not been tried before on any scale, nor is the technology yet available. "No, we can't do it now," said Stevenson. "But the basic scientific principles are understood. The same answer applied to the atomic bomb in 1940." The initial crack would require a force equivalent to several mega tonnes of TNT, an earthquake of magnitude 7 on the Richter scale, or a nuclear device "with a capability within the range of those currently stockpiled". The amount of iron needed could be as much as the amount produced world-wide in a week. Heat would be maintained through the release of gravitational energy and the partial melting of silicate rock walls. "But of course, the mantle is hot anyway," said Stevenson, "so once you get below the first 100 kilometres, there are alloys that would never freeze in equilibrium with the mantle." He said the probe would penetrate the outer core but the solid inner core of the Earth would probably stop it from going any further. The grapefruit-sized probe embedded in the molten iron would contain instruments to measure temperature, conductivity, and chemical composition. It would rely on encoded sound waves to beam data to the surface, as the Earth's interior does not transmit electromagnetic radiation. One of the existing Laser Interferometer Gravitational-wave Observatories (LIGO), used to detect tiny amounts of gravitational radiation from space, could be reconfigured to read the acoustic frequencies from the probe burrowing beneath. "My paper is an idea, not a blueprint!" Stevenson told ABC Science Online. "But the physical process involved - with melt moving through the outermost 100 kilometres of earth - is something the Earth does every day." 
"This proposal is modest compared with the space program, and may seem unrealistic only because little effort has been devoted to it," he concludes in Nature. "The time has come for action." Click here to listen to a follow-up of this story broadcast on The Science Show, ABC Radio National.
<urn:uuid:e3d8cbe1-af62-4fab-911a-d7705b5c0ea2>
3.796875
785
Truncated
Science & Tech.
42.832035
Get ready for Comet PANSTARRS — 2013's first naked-eye comet Comet PANSTARRS promises to be the brightest comet in six years when it peaks in March. February 26, 2013 Luis Argerich from Buenos Aires, Argentina, captured Comet PANSTARRS in the sky above Mercedes, Argentina, on February 11, 2013. The comet shone at magnitude 4.5 to the left of an Iridium flare. I’m here today to talk about what promises to be the brightest comet during the first half of 2013 and likely one of the brightest comets of the 21st century — so far. Comet PANSTARRS (C/2011 L4) will peak in March and remain bright well into April. If predictions hold, it should be an easy naked-eye object and will look great through binoculars for several weeks. Astronomers discovered this comet June 6, 2011. As the fourth new comet detected during the first half of June that year, it received the designation “C/2011 L4.” And because researchers first spotted the object on images taken through the 1.8-meter Panoramic Survey Telescope and Rapid Response System on Haleakala in Hawaii, it received the instrument’s acronym, PANSTARRS, as a secondary name. Astronomers credit this scope with more than two dozen comet discoveries, so the “C/2011 L4” designation is more precise even though it’s much easier to say “PANSTARRS.” The comet is making its first trip through the inner solar system. Its journey began eons ago when a star or interstellar cloud passed within a light-year or two of the Sun. This close encounter jostled the so-called Oort Cloud, a vast reservoir of icy objects that lies up to a light-year from the Sun and probably holds a trillion comets. PANSTARRS has been heading toward the Sun ever since. For complete coverage of Comet PANSTARRS, visit www.astronomy.com/panstarrs. Southern Hemisphere observers had the best comet views during February. But by early March, PANSTARRS veers sharply northward and gradually becomes visible in the evening sky for Northern Hemisphere observers. The earliest views should come around March 6 or 7, when it appears a degree above the western horizon 30 minutes after sunset. Each following day, the comet climbs a degree or two higher, which dramatically improves its visibility. It comes closest to the Sun (a position called “perihelion”) the evening of March 9, when it lies just 28 million miles (45 million kilometers) from our star. It then appears 7° high in the west 30 minutes after sunset. If predictions hold true — never a sure thing when it comes to comets making their first trip through the inner solar system — the comet will be a superb object through binoculars and probably an impressive naked-eye sight. Astronomers expect it to reach magnitude 0 or 1 at perihelion, although no one would be too surprised if it ends up one or two magnitudes brighter or dimmer. From perihelion to the end of March, the comet moves almost due north through Pisces and Andromeda while its brightness drops by about a magnitude every five days. In the admittedly unlikely event that the tail of PANSTARRS stretches 10° or more March 13, it will pass behind a two-day-old crescent Moon. The comet should glow around 4th magnitude in early April, which would make the extended object visible only through binoculars or a telescope. It passes 2° west of the Andromeda Galaxy (M31) on the 3rd, then crosses into Cassiopeia on the 9th. During the third week of April, the comet fades to 6th magnitude and is visible all night for those at mid-northern latitudes, where it appears highest before dawn. 
If Comet PANSTARRS lives up to expectations, it should show two tails emanating from a round glow. Although PANSTARRS likely won’t get as bright as 1997's Comet Hale-Bopp (pictured) did, it lets us see the major components of a comet. // Tony Hallas

The circular head, known as the “coma,” masks the comet’s nucleus. The nucleus is a ball of ice and dust that typically measures a mile or two across. As sunlight hits the nucleus, the ices boil off, and the process liberates dust particles. This cloud of gas and dust forms the coma, which can span a million miles or more. Sunlight removes electrons from the ejected gas molecules, causing them to glow with a bluish color. The solar wind carries this gas away from the comet, creating a straight bluish gas tail. The ejected dust gets pushed away from the Sun more gently, so it forms a curving tail. The dust particles simply reflect sunlight, so the dust tail has a white to pale-yellow color. Although Comet McNaught didn’t show much of a gas tail when it achieved fame in 2007, it more than made up for it with a 30°-long curving dust tail. Will PANSTARRS rival Hale-Bopp or McNaught? The best way to find out is to plan a few observing sessions for this March and April. Even if PANSTARRS falls short of greatness, goodness is a fine attribute when it comes to comets. And remember that 2013 isn’t over yet. November and December should provide exceptional views of Comet ISON (C/2012 S1), which could be 100 times brighter than PANSTARRS. I’ll be back later this year with more details on viewing Comet ISON.

Expand your observing with these online tools from Astronomy magazine
- Special Coverage: Find everything you need to know about Comet PANSTARRS in Astronomy.com's Year of the Comet section.
- StarDome: Locate Comet C/2011 L4 (PANSTARRS) in your night sky with our interactive star chart. To ensure the comet is displayed, click on the "Display..." drop-down menu under Options (lower right) and make sure "Comets" has a check mark next to it. Then click the "Show Names..." drop-down menu and make sure "Comets" is checked there, too.
- Images: Submit images of Comet PANSTARRS to our Online Reader Gallery.
- Discussion: Ask questions and share your observations in our Reader Forums.
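The magnitudes quoted throughout this article follow the standard astronomical scale, on which a difference of 5 magnitudes corresponds to a brightness ratio of exactly 100 (one magnitude is a factor of about 2.512). The article does not spell that convention out, so the small sketch below is only an illustration of how to turn the quoted magnitudes into brightness ratios.

def brightness_ratio(m_faint, m_bright):
    # Standard magnitude scale: 5 magnitudes = a factor of 100 in brightness.
    return 100 ** ((m_faint - m_bright) / 5)

# PANSTARRS near perihelion (about magnitude 1) versus its early-April glow (about magnitude 4):
print(round(brightness_ratio(4, 1), 1))   # ~15.8 times brighter at perihelion

# "100 times brighter than PANSTARRS" corresponds to being 5 magnitudes brighter:
print(round(brightness_ratio(6, 1)))      # 100

Fading "by about a magnitude every five days" after perihelion therefore means the comet loses a factor of roughly 2.5 in brightness every five days.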
<urn:uuid:e69a0af5-424d-41c4-ae2e-f8bcfbc644b5>
3.109375
1,443
Nonfiction Writing
Science & Tech.
61.908393
So last time, Tetra was being enlightened by MC-kun about definitions. This actually arises from MC-kun using prime numbers as a motivating example. Primes are megas important in mathematics and even more important today. The entire branch of mathematics called number theory is all about studying the properties of prime numbers. They’re so useful that we’ve done stuff like extend the notion of prime elements to algebraic structures called rings or apply analytic techniques to learn more about them, but we’ll stick with elementary number theory for now. Now, for hundreds of years, we’d been studying number theory only because it’s cool and mathematicians love prime numbers. Last time, I mentioned some examples of math preceding useful applications. Well, number theory is a really good example of that, because in the 70s, we found a use for it, which is its main use today, in cryptography. There have been some new techniques using some algebra as well, but for the most part, modern cryptography relies on the hardness of factoring primes. Neat! Okay, so we’re back to the original question that MC-kun tries to get Tetra to answer, which is, what is a prime number? Definition. An integer $p$ is prime if and only if $p\geq 2$ and the only positive divisors of $p$ are 1 and itself. MC-kun explains that the motivation for excluding 1 from the definition of a prime number is because we want to be able to say that we can write every number as a unique product of prime numbers. This is very useful, because now we know we can break down every number like this and we can tell them apart because they’re guaranteed to have a unique representation. This is called unique prime factorization. Theorem. Let $a > 0$ be an integer. Then we can write $a = p_1p_2\cdots p_k$ for some primes $p_1,\dots,p_k$. This representation is unique up to changing the order of terms. We can show this by induction on $a$. We’ve got $a=2$ so that’s pretty obvious. So let’s say that every integer $k\lt a$ can be decomposed like this and suppose we can’t decompose $a$ into prime numbers, assuming $a$ itself isn’t already a prime since it would just be its own prime decomposition. Then we can factor $a=cd$ for some integers $c$ and $d$. But both $c$ and $d$ are less than $a$, which means they can be written as a product of primes, so we just split them up into their primes and multiply them all together to get $a$. Tada. As a sort of side note, I mentioned before that primes are so useful that we wanted to be able to extend the idea of prime elements into rings. Well, it turns out for certain rings, it isn’t necessarily true that numbers will always have a unique representation when decomposed into primes. This is something that comes up in algebraic number theory, which is named so because it involves algebraic structures and techniques. This was invented while we were trying to figure out if Fermat’s Last Theorem was actually true (which needed this and other fun mathematical inventions from the last century that implies that Fermat was full of shit when he said he had a proof). So at the end of the chapter, after Tetra gets her chair kicked over by the megane math girl, we’re treated to a note that acts as a sort of coda to the chapter that mentions that there are infinitely many primes. How do we know this? Suppose that there are only finitely many primes. Then we can just list all of the prime numbers, like on Wikipedia or something. So we’ve got our list of primes $p_1,p_2,\dots,p_k$. 
So let’s make a number like $N=1+p_1\cdots p_k$. Well, that number is just a regular old number, so we can break it down into its prime factors. We already know all the primes, so it has to be divisible by one of them, let’s say $p_i$. Now we want to consider the greatest common divisor of the two numbers, which is just the largest number that divides both of them. We’ll denote this by $\gcd(a,b)$. So since $p_i$ is a factor of $N$, we’ve got $\gcd(N,p_i)=p_i$. But then that gives us $p_i=\gcd(N,p_i)=\gcd(p_i,1)=1$ by a lemma that says that for $a=qb+r$, we have $\gcd(a,b)=\gcd(b,r)$. This means that we have $p_i=1$, which is a contradiction, since 1 isn’t a prime number, and so I guess there are actually infinitely many primes. So the nice thing is that we won’t run out of prime numbers anytime soon, which is very useful because as we get more and more computing power, we’ll have to increase the size of the keys we use in our cryptosystems. Luckily, because factoring is so hard, we don’t need to increase that size very much before we’re safe for a while. Or at least until we develop practical quantum computers.
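Since everything above leans on being able to break a number into primes and on the construction $N=1+p_1\cdots p_k$, here is a small illustration. It is plain trial-division Python written only for this post; the function name and the toy list of "known primes" are mine, and real cryptographic-scale factoring looks nothing like this.

from math import prod

def factorize(n):
    # Prime factorization of an integer n >= 2 by trial division.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(360))                            # [2, 2, 2, 3, 3, 5] -- the unique factorization of 360

# Euclid's trick: pretend this short list were *all* the primes.
known_primes = [2, 3, 5, 7, 11, 13]
N = 1 + prod(known_primes)                       # N = 30031
print(all(N % p == 1 for p in known_primes))     # True: N leaves remainder 1 modulo every "known" prime
print(factorize(N))                              # [59, 509] -- its prime factors are primes not on the list

Either N is itself prime or, as here, every prime factor of N is a prime missing from the list; both cases contradict the assumption that the list was complete, which is the same contradiction the gcd argument above reaches.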
<urn:uuid:64fa679e-b305-4951-86ba-269d9887820f>
3.203125
1,216
Personal Blog
Science & Tech.
62.536472
Dino Eggs…And What's Inside by Sara F. Schacter What could be rarer than discovering the egg of a real dinosaur? How about finding the baby dinosaur still inside? In a huge dinosaur nesting ground in Argentina, scientists recently found the fossil remains of six unhatched baby dinosaurs. About a foot long and snuggled up inside eggs the size of grapefruit, these dinosaur embryos have helped solve the mystery of which dinosaurs laid the miles and miles of eggs buried in the dirt and rock. The tiny embryos were titanosaurs—a type of sauropod, the long-necked, plant-eating dinosaurs that were among the largest land animals ever. Scientists were amazed that their delicate skulls and fragile skin had survived long enough to become fossilized. Some embryos still had tiny, sharp teeth in their mouths. By studying the embryos' skulls, scientists are learning just how dramatically the structure of the titanosaurs' faces changed as they grew. The embryos' nostrils are at the tips of their snouts, but by the time titanosaurs were full grown, their skulls changed so that their nostrils were almost between their eyes. In yet another amazing discovery, scientists in England have found fossilized dino vomit! Coughed up 160 million years ago by a large marine reptile called ichthyosaur, the vomit contains the undigested shells of squidlike shellfish—no doubt ichthyosaur's favorite snack. “We believe that this is the first time the existence of fossil vomit on a grand scale has been proven,” said one excited scientist. - embryo: An animal in the earliest stage of development. - fossil: Something that remains of a living thing from long ago. - What kinds of things did scientists learn about the way titanosaurs reproduce? [anno: The scientists learned that titanosaurs laid a lot of eggs over a wide area. They had a nesting ground.] - Where was the dinosaur vomit found? [anno: It was found in England.] - What kind of a dinosaur made the vomit? [anno: an ichthyosaur] - How has the habitat of the ichthyosaur changed, from the time it lived until today? How do you know this change has happened? [anno: When the ichthyosaur lived, its habitat was an ocean. The ichthyosaur was a marine dinosaur, so the area that is now England must have been under water.]
<urn:uuid:5fedcac0-271c-4f51-a936-a65585b0428f>
3.71875
518
Truncated
Science & Tech.
51.383438
GEL is a dynamically scoped language. We will explain what this means below. That is, normal variables and functions are dynamically scoped. The exception is parameter variables, which are always global.

Like most programming languages, GEL has different types of variables. Normally when a variable is defined in a function, it is visible from that function and from all functions that are called (all higher contexts). For example, suppose a function f defines a variable a and then calls a function g. Then the function g can reference a. But once f returns, a goes out of scope. For example, the following code will print out 5. The function g cannot be called on the top level (outside f, as a will not be defined there).

function f() = (a:=5; g());
function g() = print(a);
f();

If you define a variable inside a function it will override any variables defined in calling functions. For example, we modify the above code and write:

function f() = (a:=5; g());
function g() = print(a);
a:=10;
f();

This code will still print out 5. Setting a to 5 inside f does not change the value of a at the top (global) level, so if you now check the value of a it will still be 10.

Function arguments are exactly like variables defined inside the function, except that they are initialized with the value that was passed to the function. Other than this point, they are treated just like all other variables defined inside the function.

Functions are treated exactly like variables. Hence you can locally redefine functions. Normally (on the top level) you cannot redefine protected variables and functions. But locally you can do this. Consider the following session:

genius> function f(x) = sin(x)^2
= (`(x)=(sin(x)^2))
genius> function g(x) = ((function sin(x)=x^10);f(x))
= (`(x)=((sin:=(`(x)=(x^10)));f(x)))
genius> g(10)
= 1e20

Functions and variables defined at the top level are considered global. They are visible from anywhere. As we said, the following function f will not change the value of a to 5.

a=6;
function f() = (a:=5);
f();

If you wish to set a to a value (say 3) from inside a function, you must use the set function, which always sets the toplevel global. There is no way to set a local variable in some function from a subroutine. If this is required, you must use passing by reference.

So to recap in more technical language: Genius operates with different numbered contexts. The top level is context 0 (zero). Whenever a function is entered, the context is raised, and when the function returns the context is lowered. A function or a variable is always visible from all higher numbered contexts. When a variable was defined in a lower numbered context, setting this variable has the effect of creating a new local variable in the current context number, and this variable will now be visible from all higher numbered contexts.

There are also true local variables, which are not seen from anywhere but the current context. Also, when returning a function by value, it may reference variables not visible from a higher context, and this may be a problem. See the sections True Local Variables and Returning Functions.
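For readers coming from lexically scoped languages, a side-by-side contrast may help. The snippet below is ordinary Python, not GEL, and is only a sketch for comparison: Python resolves the name a against the module where g was written, so it prints the top-level value, whereas in the second GEL example above g picks up the a that its caller f just defined and prints 5.

# Python analogue of the second GEL example, shown for contrast only.
# Python is lexically scoped: print(a) inside g refers to the module-level a,
# not to whatever the *caller* happens to have defined.

def g():
    print(a)     # looked up in g's defining (module) scope

def f():
    a = 5        # local to f; invisible to g under lexical scoping
    g()

a = 10
f()              # prints 10 here; the GEL version prints 5 (dynamic scoping)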
<urn:uuid:0a75caa1-d419-405f-a5a1-a844a1b452be>
3.015625
741
Documentation
Software Dev.
60.073473
MANY geologists rather dismiss man-made climate change. On the timescales they work in, they figure nature will absorb anything we throw at it. Not David Archer. The Long Thaw shows how, by digging up and burning our planet's carbon, we are determining climate for millennia hence. It also shows how we may soon unleash changes to the carbon cycle that will cancel the next ice age, and maybe the one after that, not to mention melting enough ice to flood land less than 20 metres above sea level. A beautifully written primer on why climate change matters hugely for our future - on all timescales.
<urn:uuid:afce46c1-c3eb-4974-8bef-ada8cf58d2b5>
2.921875
153
Truncated
Science & Tech.
50.520179
Geoscience experts have developed a system of smart buoys that can predict the formation of self-reinforcing underwater waves, or solitons, 10 hours before they threaten the safety of oil rigs and divers. In 2008, Martin Goff and his colleagues at FUGROS, a geoscience consulting agency, successfully tested the system for three months in the Andaman Sea. Now, Global Ocean Associates have acknowledged the device as "the first deployed system with real-time warning capability."

Scientists discover ancient rocks on the sea-floor that give them a window into the Earth's mantle
By Gregory Mone, posted 04.14.2008 at 8:28 am

No, you can't hike or spelunk or even tunnel down to the center of the Earth, even if movies like The Core or this summer's 3D adventure flick, Journey to the Center of the Earth, suggest otherwise. To find out about our planet's insides, scientists rely on very different tricks. And, apparently, a little luck.

Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
<urn:uuid:67de25b3-2aa5-4eb7-8108-2b29a04d3ecc>
3.21875
266
Content Listing
Science & Tech.
52.998413
Using an ultra-bright electron source, scientists at the University of Toronto have recorded atomic motions in real time, offering a glimpse into the very essence of chemistry and biology at the atomic level. Their recording is a direct observation of a transition state in which atoms undergo chemical transformation into new structures with new properties. Using a new tool called a quantum simulator—based on a small-scale quantum computer—... A massive telescope buried in the Antarctic ice has detected 28 extremely high-energy... A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics, and materials. The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. The massive ball of iron sitting at the center of Earth is not quite as "rock-solid" as has been thought, say two Stanford University mineral physicists. By conducting experiments that simulate the immense pressures deep in the planet's interior, the researchers determined that iron in Earth's inner core is only about 40% as strong as previous studies estimated. Graphene has dazzled scientists ever since its discovery more than a decade ago. But one long-sought goal has proved elusive: how to engineer into graphene a property called a band gap, which would be necessary to use the material to make transistors and other electronic devices. New findings by Massachusetts Institute of Technology researchers are a major step toward making graphene with this coveted property. With the hand of nature trained on a beaker of chemical fluid, the most delicate flower structures have been formed in a Harvard University laboratory—and not at the scale of inches, but microns. These minuscule sculptures, curved and delicate, don't resemble the cubic or jagged forms normally associated with crystals, though that's what they are. Rather, fields of flowers seem to bloom from the surface of a submerged glass slide. A new joint innovation by the National Physical Laboratory and the University of Cambridge could pave the way for redefining the ampere in terms of fundamental constants of physics. The world's first graphene single-electron pump provides the speed of electron flow needed to create a new standard for electrical current based on electron charge. Described as the "most beautiful experiment in physics," Richard Feynman emphasized how the diffraction of individual particles at a grating is an unambiguous demonstration of wave-particle duality and contrary to classical physics. A research team recently used carefully made fluorescent molecules and nanometric detection accuracy to provide clear and tangible evidence of the quantum behavior of large molecules in real time. Bubble baths and soapy dishwater and the refreshing head on a beer: These are foams, beautiful yet ephemeral as the bubbles pop one by one. Now, a team of researchers has described mathematically the successive stages in the complex evolution and disappearance of foamy bubbles, a feat that could help in modeling industrial processes in which liquids mix or in the formation of solid foams such as those used to cushion bicycle helmets. An international team of physicists has found the first direct evidence of pear-shaped nuclei in exotic atoms. 
The findings could advance the search for a new fundamental force in nature that could explain why the Big Bang created more matter than antimatter—a pivotal imbalance in the history of everything. From powerful computers to super-sensitive medical and environmental detectors that are faster, smaller, and use less energy—yes, we want them, but how do we get them? In research that is helping to lay the groundwork for the electronics of the future, University of Delaware scientists have confirmed the presence of a magnetic field generated by electrons, which scientists had theorized existed but which had never been proven until now. Physicists working with optical tweezers have conducted work to provide an all-in-one guide to help calculate the effect the use of these tools has on the energy levels of atoms under study. This effect can change the frequency at which atoms emit or absorb light and microwave radiation and skew results; the new findings should help physicists foresee effects on future experiments. Physicists in Switzerland have demonstrated one of the quintessential effects of quantum optics—known as the Hong-Ou-Mandel effect—with microwaves, which have a frequency that is 100,000 times lower than that of visible light. The experiment takes quantum optics into a new frequency regime and could eventually lead to new technological applications. The allure of personalized medicine has made new, more efficient ways of sequencing genes a top research priority. One promising technique involves reading DNA bases using changes in electrical current as they are threaded through a nanoscopic hole. Now, a team led by University of Pennsylvania physicists has used solid-state nanopores to differentiate single-stranded DNA molecules containing sequences of a single repeating base. An international research team led by astronomers from the Max Planck Institute for Radio Astronomy used a collection of large radio and optical telescopes to investigate in detail a pulsar that weighs twice as much as the sun. This neutron star, the most massive known to date, has provided new insights into the emission of gravitational radiation and serves as an interstellar laboratory for general relativity in extreme conditions. Using uniquely sensitive experimental techniques, scientists have found that laws of quantum physics—believed primarily to have influence only at sub-atomic levels—can actually have an impact at the molecular level. The study shows that movement of the ring-like molecule pyrrole over a metal surface runs counter to the classical physics that governs our everyday world. In a process comparable to squeezing an elephant through a pinhole, researchers at Missouri University of Science and Technology have designed a way to engineer atoms capable of funneling light through ultrasmall channels. Their research is the latest in a series of recent findings related to how light and matter interact at the atomic scale. Cancer cells that can break out of a tumor and invade other organs are more aggressive and nimble than nonmalignant cells, according to a new multi-institutional nationwide study. These cells exert greater force on their environment and can more easily maneuver through small spaces. One simple phenomenon explains why practical, self-sustaining fusion reactions have proved difficult to achieve: Turbulence in the superhot, electrically charged gas, called plasma, that circulates inside a fusion reactor can cause the plasma to lose much of its heat.
This prevents the plasma from reaching the temperatures needed to overcome the electrical repulsion between atomic nuclei. Until now. Lawrence Berkeley National Laboratory’s sound-restoration experts have done it again. They’ve helped to digitally recover a 128-year-old recording of Alexander Graham Bell’s voice, enabling people to hear the famed inventor speak for the first time. The recording ends with Bell saying “in witness whereof, hear my voice, Alexander Graham Bell.” Researchers at the University of California, Santa Barbara, in collaboration with colleagues at the École Polytechnique in France, have conclusively identified Auger recombination as the mechanism that causes light-emitting diodes (LEDs) to be less efficient at high drive currents. A Harvard University-led team of researchers has created a new type of nanoscale device that converts an optical signal into waves that travel along a metal surface. Significantly, the device can recognize specific kinds of polarized light and accordingly send the signal in one direction or another. The planet-hunting Kepler telescope has discovered two planets that seem like ideal places for some sort of life to flourish. According to scientists working with the NASA telescope, they are just the right size and in just the right place near their star. The discoveries, published online Thursday, mark a milestone in the search for planets where life could exist. Throughout decades of research on solar cells, one formula has been considered an absolute limit to the efficiency of such devices in converting sunlight into electricity: called the Shockley-Queisser efficiency limit, it posits that the ultimate conversion efficiency can never exceed 34% for a single optimized semiconductor junction. Now, researchers have shown that there is a way to blow past that limit. Scientists in Australia have recently demonstrated that electron bunches of ultra-short duration generated from laser-cooled atoms can be both very cold and ultra-fast. The low temperature permits sharp images, and the electron pulse duration has a similar effect to shutter speed, potentially allowing researchers to observe critical but quick dynamic processes, such as the picosecond duration of protein folding. A University of Missouri engineer has built a system that is able to launch a ring of plasma as far as two feet. Plasma is commonly created in the laboratory using powerful electromagnets, but previous efforts to hold the super-hot material together through air have been unsuccessful. The new device does this by changing how the magnetic field around the plasma is arranged. Physicists operating an experiment located half a mile underground in Minnesota reported this weekend that they have found possible hints of dark-matter particles. The Cryogenic Dark Matter Search experiment has detected three events with the characteristics expected of dark matter particles.
<urn:uuid:38bd495e-a715-4cfc-97e2-fee204e62652>
3.328125
1,873
Content Listing
Science & Tech.
22.222546
Giant squids, once believed to be mythical creatures, are squid of the Architeuthidae family, represented by as many as eight species of the genus Architeuthis. They are deep-ocean dwelling squid that can grow to a tremendous size: recent estimates put the maximum size at 10 m (34 ft) for males and 13 m (44 ft) for females from caudal fin to the tip of the two long tentacles (second only to the Colossal Squid at an estimated 14 m, one of the largest living organisms). For more information about the topic Giant squid, read the full article at Wikipedia.org.
<urn:uuid:d8040f71-3afa-434b-8a3e-1971af13bb0c>
2.84375
161
Knowledge Article
Science & Tech.
31.655344
Risky Business: Gambling on Climate Sensitivity Posted on 21 September 2010 by gpwayne There are some things about our climate we are pretty certain about. Unfortunately, climate sensitivity isn’t one of them. Climate sensitivity is the estimate of how much the earth's climate will warm if carbon dioxide equivalents are doubled. This is very important because if it is low, as some sceptics argue, then the planet isn’t going to warm up very much. If sensitivity is high, then we could be in for a very bad time indeed. There are two ways of working out what climate sensitivity is (a third way – waiting a century – isn’t an option, but we’ll come to that in a moment). The first method is by modelling: Climate models have predicted the least temperature rise would be on average 1.65°C (2.97°F) , but upper estimates vary a lot, averaging 5.2°C (9.36°F). Current best estimates are for a rise of around 3°C (5.4°F), with a likely maximum of 4.5°C (8.1°F). The second method calculates climate sensitivity directly from physical evidence: These calculations use data from sources like ice cores, paleoclimate records, ocean heat uptake and solar cycles, to work out how much additional heat the doubling of greenhouse gases will produce. The lowest estimate of warming is close to the models - 1.8°C (3.24°F ) on average - but the upper estimate is a little more consistent, at an average of around 3.5°C (6.3°F). It’s all a matter of degree To the lay person, the arguments are obscure and complicated by other factors, like the time the climate takes to respond. But climate sensitivity is not just an abstract exchange of statistics relevant only to scientists. It also tells us about the likely changes to the climate that today's children will inherit. Consider a rise in sea levels, for example. Predictions range from centimetres to many metres, and the actual increase will be governed by climate sensitivity. The 2007 IPCC report proposed a range of sea level rises based on different increases in temperature, but we now know they underestimated sea level rise, perhaps by a factor of three, in part because of a lack of data about the behaviour of Greenland and Antarctic ice-sheets. Current estimates of sea level rise alone, as a result of a two degree rise in temperature, are very worrying. More worrying is that the current projections do not account for recently accelerated melting of polar regions. There are also many other possible effects of a 2°C rise (3.6°F) that would be very disruptive. All the models and evidence confirm a minimum warming close to 2°C for a doubling of atmospheric CO2 with a most likely value of 3°C and the potential to warm 4.5°C or even more. Even such a small rise would signal many damaging and highly disruptive changes to the environment. In this light, the arguments against mitigation because of climate sensitivity are a form of gambling. A minority claim the climate is less sensitive than we think, the implication being we don’t need to do anything much about it. Others suggest that because we can't tell for sure, we should wait and see. In truth, nobody knows for sure quite how much the temperature will rise, but rise it will. Inaction or complacency heightens risk, gambling with the entire ecology of the planet, and the welfare of everyone on it. This post is the Basic version (written by Graham Wayne) of the skeptic argument "Climate sensitivity is low". 
For the stout of heart, be sure to also check out the Advanced Version by Dana which is currently getting rave reviews on Climate Progress.
<urn:uuid:2464ec74-3208-4133-9bec-21d308e5cbbb>
2.859375
793
Personal Blog
Science & Tech.
52.635958
Changing Planet: Black Carbon

Black carbon contributes to global warming in two ways. When in the atmosphere, it absorbs sunlight and generates heat, warming the air. When deposited on snow and ice, it changes the albedo of the surface, absorbing sunlight and generating heat. This further accelerates warming, since the heat melts snow and ice, revealing a lower albedo surface which continues to absorb sunlight - a vicious cycle of warming.

Watch the NBC Learn video - Changing Planet: Black Carbon.
Lesson plan: Changing Planet: Black Carbon - A Dusty Situation

You might also be interested in:
Earth’s climate is warming. During the 20th Century Earth’s average temperature rose 0.6° Celsius (1.1°F). Scientists are finding that the change in temperature has been causing other aspects of our planet...more
This picture shows a part of the Earth surface as seen from the International Space Station high above the Earth. A perspective like this reminds us that there are lots of different things that cover the...more
Arctic sea ice is covered with snow all winter. Bright white, the snow-covered ice has a high albedo so it absorbs very little of the solar energy that gets to it. And during the Arctic winter, very little...more
Altocumulus clouds (weather symbol - Ac), are made primarily of liquid water and have a thickness of 1 km. They are part of the Middle Cloud group (2000-7000m up). They are grayish-white with one part...more
Altostratus clouds (weather symbol - As) consist of water and some ice crystals. They belong to the Middle Cloud group (2000-7000m up). An altostratus cloud usually covers the whole sky and has a gray...more
Cirrocumulus clouds (weather symbol - Cc) are composed primarily of ice crystals and belong to the High Cloud group (5000-13000m). They are small rounded puffs that usually appear in long rows. Cirrocumulus...more
Cirrostratus (weather symbol - Cs) clouds consist almost entirely of ice crystals and belong to the High Cloud (5000-13000m) group. They are sheetlike thin clouds that usually cover the entire sky. The...more
<urn:uuid:7e0f4306-a276-497d-a041-0d920b423022>
3.796875
523
Tutorial
Science & Tech.
62.789721
The study of motion is often called kinematics. We will begin our study with one-dimensional kinematics. We will later expand to 2- and 3-dimensional kinematics after we have studied vectors.

We can give the position of an object in relation to a reference point. There are a number of variables we can use for position, such as x, d, or s. The official metric unit for position is the meter (abbreviated m). The meter was first defined in terms of the circumference of the Earth on a meridian passing through Paris. It is now defined in terms of the speed of light. When working with other scales, it might be convenient to use other metric units such as the nanometer (nm), the centimeter (cm), and the kilometer (km).

We will often use exponential notation. Exponential notation is convenient for expressing very large and small numbers. For instance, 12,300 would be expressed as 1.23 x 10,000 or 1.23 x 10⁴. So 3.14 km = 3140 m = 3.14 x 10³ m. For small numbers, 0.000345 = 3.45 x 10⁻⁴. A micrometer is 1 μm = 10⁻⁶ m. The width of a human hair on average is 10 μm. This would be 10 x 10⁻⁶ m. The wavelength of a helium-neon laser is 633 nm = 633 x 10⁻⁹ m = 6.33 x 10⁻⁷ m.

The common metric prefixes are given in powers of 3. The kilometer is 1000 m. Although 100 centimeters = 1 meter, the centimeter is not actually a common unit in physics.

1 Millimeter = 1 mm = 10⁻³ m
1 Micrometer = 1 μm = 10⁻⁶ m
1 Nanometer = 1 nm = 10⁻⁹ m
1 Picometer = 1 pm = 10⁻¹² m
1 Femtometer = 1 fm = 10⁻¹⁵ m (also known as a Fermi)

Except for the kilometer, we often do not use the larger metric prefixes for distance. But they are used for frequencies and other units in physics.

1 Kilometer = 1 km = 1000 m = 10³ m
1 Megameter = 1 Mm = 10⁶ m
1 Gigameter = 1 Gm = 10⁹ m
1 Terameter = 1 Tm = 10¹² m

Common British Imperial units for measuring distance include the inch, the foot, the yard, and the mile. An easy way to remember the conversion from meters to miles comes from track and field: the loop in a track is ¼ mile long and is also the length of the 400 m race, so 1 mile is approximately 1600 m. Engineers in America commonly use Imperial units. Very small measurements for the purposes of manufacturing are given in 1/1000ths of an inch.

When dealing with astronomical distances there are other units we might use, such as the light-year, the parsec, or the Astronomical Unit. The light-year is the distance light will travel in one year. An object which is one parsec away has one arc-second of parallax from Earth. An astronomical unit is the average distance from the Earth to the Sun.

Distance vs Displacement

In physics we often study the change in position of an object. If we are only examining the change in position from the start of our observation to the end, we are talking about displacement. We ignore how we get from point A to point B. We are only concerned with how the crow flies. If we are concerned with our path, we are working with distance (see figure A). For example, let us suppose I were to walk around the perimeter of a square classroom (see figure B). The classroom is 10 meters on a side. At the end of my trip I return to my original starting position. The distance traveled would be 40 m. The displacement would be zero meters because displacement only depends on the starting and ending positions. The other important distinction between distance and displacement is that distances do not have a direction. If you were wearing a pedometer it would record distance. The odometer on a car records distance. Displacement has a direction and a magnitude.
Magnitude is a fancy physics term for size or amount. For instance, suppose I walked 10 m North, 10 m East, 10 m South, and then 5 m West (see figure C). My distance traveled would be 35 m. There is a magnitude but no net direction. Since we can describe distance with just a magnitude (but no direction) we call it a scalar. But my displacement would be 5 m due East. As displacement has both a magnitude and a direction, we call it a vector.

We measure time in seconds. We will use the variable t for time. The elapsed time for a certain action would be Δt. The Greek letter delta, Δ, is used to represent a change in a quantity. If we are talking about a recurring event (such as the orbit of the Earth around the Sun) we talk about the period T, with a capital T. For longer periods of time we will often use the conventional minutes, hours, days, or years. For shorter periods of time we will often use exponential notation, or we may use milliseconds, microseconds, picoseconds, or femtoseconds. For instance, chemical reactions often take place on the picosecond timescale. Just as a strobe light at a school dance lets you see your movements in stop action, scientists use pulsed lasers with picosecond and femtosecond pulses to examine dynamics at the molecular level.

Speed and Velocity

Building on changes in position and changes in time, we can examine the rate at which these changes in position take place. How fast are we moving? You probably use the terms speed and velocity interchangeably in your everyday vernacular, but in physics they have distinct meanings. Speed is a scalar and has no direction. Speed can be defined as

speed = distance / elapsed time

Velocity is a vector. We could consider velocity to be speed in a given direction. To calculate the average velocity over a period of time, we use displacement and elapsed time:

v̄ = Δx / Δt

where v is velocity, x is position, and t is time. The Greek letter delta, Δ, means a change in a quantity, such as the change in position or the change in time. The bar over the velocity, v̄, means the quantity is averaged. For instance, Δx = x_f − x_o, or the change in position equals the difference between the final position and the original position. Our first set of problems will involve the above kinematic equation.

Problem Solving Method

When solving physics problems, it is useful to follow a simple problem solving strategy. Although at first it may be easy to solve some problems in your head, by following this strategy you will develop good problem solving habits. Just as you must develop good habits by brushing your teeth every day, you should try to follow this methodology for solving physics problems. The first step is Step 0 because it does not always apply.

Step 0: Draw a picture of the problem if appropriate.
Step 1: Write down the given information.
Step 2: Write down the unknown quantity you are trying to find.
Step 3: Write down the physics equations or relationships that will connect your given information to the unknown variables.
Step 4: Perform the algebraic calculations necessary to isolate the unknown variable.
Step 5: Plug the given information into the new equation. Cancel appropriate units and do the arithmetic.

Example 1: A robot travels across a countertop a distance of 88.0 cm in 30 seconds. What is the speed of the robot? In this case, we do not need to do any algebra:

speed = distance / elapsed time = 88.0 cm / 30 s ≈ 2.93 cm/s

Significant figures: At this point we should note how many significant figures our answer has.
Your final answer cannot have more information than your original data. We were presented with a distance and a time with only 3 significant figures, therefore our final answer cannot have more precision than this. Now let us look at a problem which does require some algebra.

Example 2: The SR-71 Blackbird could fly at a speed of Mach 3, or 1,020 m/s. How much time would it take the SR-71 to take off from Los Angeles and fly to New York City via a path which is a distance of 5500 miles? You should note that you need to convert miles to meters, remembering that 1 mile is approximately 1600 m.

First we need to algebraically isolate the variable t. Starting from

v = d / t

we multiply both sides by t, and t cancels on the right-hand side of the equation. Then dividing both sides by v gives us

t = d / v

Now, plugging in for distance and speed gives us

t = (5500 miles x 1600 m/mile) / (1020 m/s) = 8,800,000 m / (1020 m/s) ≈ 8600 s ≈ 140 minutes

Note the units and the number of significant digits. Because one piece of our original data (the distance) only had two significant digits, we have to round off our final answer to 2 significant digits. Also, look at the cancellation of units. The meters in the units cancel. Our units are a reciprocal of a reciprocal, thus the final units are seconds, which you might have guessed since we are working with time. For ease of perspective we also converted these units into minutes.

Average velocity vs instantaneous velocity

Another important distinction is between the average velocity over an interval and the velocity at a given instant in time. To find an average velocity we only measure the change in position and the total elapsed time. However, finding the velocity at a given instant can be tricky. The elapsed time for an instant has no finite length. Similarly, a physical position in space has no finite size. To calculate this using equations we would have to reduce the elapsed time to a nearly infinitesimally small amount of time. Mathematically, this is the basis for calculus, which was developed separately by both Newton and Leibniz. In standard calculus notation we would say the instantaneous velocity can be expressed as

v = dx/dt

that is, the limit of Δx/Δt as Δt approaches zero. In our next lesson we will learn how to determine the instantaneous velocity using graphical techniques.
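The two worked examples above can be checked with a few lines of code. The short Python sketch below mirrors the same steps (convert units, then apply speed = distance/time or t = d/v); the numbers are the ones given in the examples, and the rounding to the stated significant figures is noted in the comments rather than done by the code.

# Example 1: robot crossing a countertop
distance_cm = 88.0
time_s = 30.0
speed = distance_cm / time_s
print(round(speed, 2), "cm/s")        # 2.93 cm/s, quoted to 3 significant figures

# Example 2: SR-71 from Los Angeles to New York
v = 1020.0                            # m/s (Mach 3, as given)
d_miles = 5500.0
d_m = d_miles * 1600.0                # using the lesson's approximation 1 mile ≈ 1600 m
t = d_m / v                           # t = d / v
print(round(t), "s =", round(t / 60), "minutes")
# prints 8627 s = 144 minutes; rounded to 2 significant figures this is 8600 s, about 140 minutes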
<urn:uuid:627b76b2-d80d-4591-b9b8-01ec5ccc1148>
3.96875
2,093
Tutorial
Science & Tech.
59.364874